Over the past few years I've been assessing AI algorithms to see if they can answer difficult questions about junk DNA, alternative splicing, evolution, epigenetics and a number of other topics. As a general rule, these AI algorithms are good at searching the internet and returning a consensus view of what's out there. Unfortunately, the popular view on some of these topics is wrong and most AI algorithms are incapable of sorting the wheat from the chaff.
In most cases, they aren't even capable of recognizing that there's a controversy and that their preferred answer might not be correct. They are quite capable of getting their answer from known kooks and unreliable, non-scientific websites [The scary future of AI is revealed by how it deals with junk DNA].
Others have now recognized that there's a problem with AI, so they devised a set of expert questions that have definitive, correct answers but whose answers cannot be retrieved by simple internet searches. The idea is to test whether AI algorithms are actually intelligent or just very fast search engines that can summarize the data they retrieve and create intelligent-sounding output.
Center for AI Safety, Scale AI & HLE Contributors Consortium (2026) A benchmark of expert-level academic questions to assess AI capabilities. Nature 649:1139–1146 [doi: 10.1038/s41586-025-09962-4]
Benchmarks are important tools for tracking the rapid advancements in large language model (LLM) capabilities. However, benchmarks are not keeping pace in difficulty: LLMs now achieve more than 90% accuracy on popular benchmarks such as Measuring Massive Multitask Language Understanding, limiting informed measurement of state-of-the-art LLM capabilities. Here, in response, we introduce Humanity’s Last Exam (HLE), a multi-modal benchmark at the frontier of human knowledge, designed to be an expert-level closed-ended academic benchmark with broad subject coverage. HLE consists of 2,500 questions across dozens of subjects, including mathematics, humanities and the natural sciences. HLE is developed globally by subject-matter experts and consists of multiple-choice and short-answer questions suitable for automated grading. Each question has a known solution that is unambiguous and easily verifiable but cannot be quickly answered by internet retrieval. State-of-the-art LLMs demonstrate low accuracy and calibration on HLE, highlighting a marked gap between current LLM capabilities and the expert human frontier on closed-ended academic questions. To inform research and policymaking upon a clear understanding of model capabilities, we publicly release HLE at https://lastexam.ai.
How do the best AI programs score on this HLE test compared to other benchmarks that were designed by AI companies to prove that their algorithms were intelligent? Here are the results.
Oops! It looks like these programs aren't as intelligent as most people think.
Note: The cartoon was generated by ChatGPT in response to the request, "draw a cartoon illustrating GIGO - garbage in garbage out."


5 comments:
I don't disagree with your sentiment that AIs aren't intelligent, but I don't think they were trying to make a metric of intelligence, or at least they say they were trying to make a benchmark hard enough because the old ones are being maxed out - it's right in the abstract. And they say in the discussion that it doesn't suggest general intelligence by itself. No normal human could do well on this test, even with the internet, unless "the internet" means "I can call any expert for help." Did they succeed at their goal? I don't know, but viewed from a different bias the performance is promising; certainly I doubt I could do a single problem on the exam. But your interpretation goes well beyond what they are claiming.
Here is another paper you might find interesting.
https://arxiv.org/pdf/2602.06176
Humanity’s Last Exam doesn't help much if they rely on the wrong person e.g., John Mattick.
It is a little misleading: the questions included in the HLE were specifically chosen because they could not be answered by state-of-the-art LLMs: "each question is tested against state-of-the-art LLMs to verify its difficulty—questions are rejected if LLMs can answer them correctly"
I'm late to the party here, but I don't even think AI is a good fast search engine. I asked Claude Sonnet 4.6 how to access the settings menu for a piece of stereo equipment I own. This information is set out in the manual for the equipment, which is publicly available at several online locations.
Sonnet told me I should press the dial on the front of the equipment, then turn it to access the various menu settings.
There is no dial on the front, or anywhere else on the equipment.
Yesterday I ran across an epidemiologist on social media who did a similar thing with AI (asked an epidemiology question to which she knew the answer, which was publicly available information), who also got a completely non-factual answer. When she followed up and asked why the AI had given the answer it did, the response was that the information was not in the AI's database/storage, and that it had simply made up an answer, for which it apologized.
So we can rely on AI (in whatever field) neither to go looking for information it doesn't already have, nor to simply say "I don't know" when that occurs.