More Recent Comments

Sunday, November 26, 2023

ChatGPT gets two-thirds of science textbook questions wrong: time to bring it into the classroom!

The November 16th issue of Nature has an article about ChatGPT: ChatGPT has entered the classroom: how LLMs could transform education. It reports that the latest version (GPT-4) can answer only one third of questions correctly in physical chemistry, physics, and calculus. Nevertheless, the article promotes the idea that ChatGPT should be brought into the classroom!

An editorial in the same issue explains Why teachers should explore ChatGPT’s potential — despite the risks.

Many students now use AI chatbots to help with their assignments. Educators need to study how to include these tools in teaching and learning — and minimize pitfalls.

I don't get it. It seems to me that the problems with ChatGPT far outweigh the advantages, and the best approach for now is to warn students that using AI tools may be terribly misleading and could lead to them failing a course if they trust the output. That doesn't mean there's no potential for improvement in the future, but improvement can only happen if the sources of information used by these tools become much more reliable. No improvements in the algorithms are going to help with that.


4 comments:

Georgi Marinov said...

So which problem of education exactly are LLMs supposed to help solve?

I have yet to see a convincing example/argument.

After all, the goal of education is to fill human brains with knowledge of the facts about the world around us, and understanding of the conceptual links between them. Human brains, not LLMs...

Rarian Rakista said...

IMHO, for specialist knowledge LLMs are about as useful as the very early stages of Wikipedia. Ask an LLM for some picnic ideas and you'll get some great answers; ask it a question about abstract algebra and you might not.

However, if students use the follow-up links that are usually provided with the answers as a sanity check instead of just copying the LLM output verbatim, they should do no worse than any other student. The same could be said about Wikipedia 10+ years ago: check your sources.

Georgi Marinov said...

@Rarian Rakista

Let's be honest with ourselves -- has the learning process become more or less effective since Wikipedia?

In theory, it should have helped. And for a minority of people it has been an invaluable resource for expanding their knowledge.

But in practice, and for the majority, the effect was that people used it to get a quick answer without trying to gain any real understanding of the subject matter.

LLMs are doing the same, but without the user even having to expend the effort to scroll through the Wikipedia page (a process in which they might have learned something by accident).

Mikkel Rumraket Rasmussen said...

Perhaps one value you could get out of ChatGPT interactions is practice in convincing it that its answers are actually wrong. So rather than relying on it for answers, determine which of the answers it gives are wrong and then have students try to persuade it that it is wrong. Convince it to adopt your position through argumentation (not just mere assertion).

Having to explain something to someone else is a really good exercise that helps you understand a subject better yourself!