
Tuesday, February 10, 2026

How intelligent is artificial intelligence?

Over the past few years I've been assessing AI algorithms to see if they can answer difficult questions about junk DNA, alternative splicing, evolution, epigenetics and a number of other topics. As a general rule, these AI algorithms are good at searching the internet and returning a consensus view of what's out there. Unfortunately, the popular view on some of these topics is wrong and most AI algorithms are incapable of sorting the wheat from the chaff.

In most cases, they aren't even capable of recognizing that there's a controversy and that their preferred answer might not be correct. They are quite capable of getting their answers from known kooks and unreliable, non-scientific websites [The scary future of AI is revealed by how it deals with junk DNA].

Others have now recognized that there's a problem with AI, so they devised a set of expert questions that have definitive, correct answers that cannot be retrieved by simple internet searches. The idea is to test whether AI algorithms are actually intelligent or just very fast search engines that can summarize the data they retrieve and create an intelligent-sounding output.

Center for AI Safety, Scale AI & HLE Contributors Consortium (2026) A benchmark of expert-level academic questions to assess AI capabilities. Nature 649:1139–1146 [doi: 10.1038/s41586-025-09962-4]

Benchmarks are important tools for tracking the rapid advancements in large language model (LLM) capabilities. However, benchmarks are not keeping pace in difficulty: LLMs now achieve more than 90% accuracy on popular benchmarks such as Measuring Massive Multitask Language Understanding, limiting informed measurement of state-of-the-art LLM capabilities. Here, in response, we introduce Humanity’s Last Exam (HLE), a multi-modal benchmark at the frontier of human knowledge, designed to be an expert-level closed-ended academic benchmark with broad subject coverage. HLE consists of 2,500 questions across dozens of subjects, including mathematics, humanities and the natural sciences. HLE is developed globally by subject-matter experts and consists of multiple-choice and short-answer questions suitable for automated grading. Each question has a known solution that is unambiguous and easily verifiable but cannot be quickly answered by internet retrieval. State-of-the-art LLMs demonstrate low accuracy and calibration on HLE, highlighting a marked gap between current LLM capabilities and the expert human frontier on closed-ended academic questions. To inform research and policymaking upon a clear understanding of model capabilities, we publicly release HLE at https://lastexam.ai.
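For readers who want to see what "automated grading" and "calibration" mean in practice, here is a minimal Python sketch of how an HLE-style question set could be scored: exact-match accuracy on short answers, plus the gap between a model's stated confidence and its actual correctness. The ask_model() stub and the sample question are hypothetical placeholders standing in for a real LLM call and real benchmark items; this is not code from the paper.

# Minimal sketch of HLE-style automated grading: exact-match scoring of
# short answers plus a simple calibration check. ask_model() and the sample
# question are hypothetical placeholders, not real HLE items or any
# particular vendor's API.

from dataclasses import dataclass

@dataclass
class Question:
    prompt: str
    answer: str          # the unambiguous, verifiable reference answer

@dataclass
class ModelReply:
    answer: str
    confidence: float    # model's self-reported confidence in [0, 1]

def ask_model(q: Question) -> ModelReply:
    """Placeholder for a call to an LLM; replace with a real API call."""
    return ModelReply(answer="unknown", confidence=0.9)

def grade(questions: list[Question]) -> tuple[float, float]:
    """Return (accuracy, calibration error) over a question set.

    Accuracy is exact-match on normalized answers; calibration error is the
    mean absolute gap between stated confidence and actual correctness.
    """
    correct_flags, confidences = [], []
    for q in questions:
        reply = ask_model(q)
        is_correct = reply.answer.strip().lower() == q.answer.strip().lower()
        correct_flags.append(1.0 if is_correct else 0.0)
        confidences.append(reply.confidence)

    accuracy = sum(correct_flags) / len(correct_flags)
    calibration_error = sum(abs(c - f) for c, f in zip(confidences, correct_flags)) / len(correct_flags)
    return accuracy, calibration_error

if __name__ == "__main__":
    sample = [Question(prompt="(expert-level question goes here)", answer="42")]
    acc, cal = grade(sample)
    print(f"accuracy = {acc:.2%}, calibration error = {cal:.2f}")

The point of the calibration term is that a model that doesn't know an answer should say so with low confidence; "low calibration" in the abstract means the models' stated confidence doesn't track how often they are actually right.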

How do the best AI programs score on this HLE test compared to other benchmarks that were designed by AI companies to prove that their algorithms were intelligent? Here are the results.

Oops! It looks like these programs aren't as intelligent as most people think.

Note: The cartoon was generated by ChatGPT in response to the request, "draw a cartoon illustrating GIGO - garbage in garbage out."


4 comments:

Paul said...

I don't disagree with your sentiment that AIs aren't intelligent, but I don't think they were trying to make a metric of intelligence, or at least they say they were trying to make a benchmark hard enough because the old ones are being maxed out; it's right in the abstract. And they say in the discussion that it doesn't suggest general intelligence by itself. No normal human could do well on this test, even with the internet, unless "the internet" means "I can call any expert for help." Did they succeed at their goal? I don't know, but viewed from a different bias the performance is promising; certainly I doubt I could do a single problem on the exam. But your interpretation goes well beyond what they are claiming.

dean said...

Here is another paper you might find interesting.

https://arxiv.org/pdf/2602.06176

SPARC said...

Humanity’s Last Exam doesn't help much if they rely on the wrong person, e.g., John Mattick.

Anonymous said...

It is a little misleading: the questions chosen for inclusion in the HLE were specifically chosen because they could not be answered by state-of-the-art LLMs: "each question is tested against state-of-the-art LLMs to verify its difficulty—questions are rejected if LLMs can answer them correctly"