Tuesday, February 10, 2026

How intelligent is artificial intelligence?

Over the past few years I've been assessing AI algorithms to see if they can answer difficult questions about junk DNA, alternative splicing, evolution, epigenetics and a number of other topics. As a general rule, these AI algorithms are good at searching the internet and returning a consensus view of what's out there. Unfortunately, the popular view on some of these topics is wrong and most AI algorithms are incapable of sorting the wheat from the chaff.

In most cases, they aren't even capable of recognizing that there's a controversy and that their preferred answer might not be correct. They are quite capable of getting their answers from known kooks and unreliable, non-scientific websites [The scary future of AI is revealed by how it deals with junk DNA].

Others have now recognized that there's a problem with AI, so they devised a set of expert questions that have definitive, correct answers that cannot be retrieved by simple internet searches. The idea is to test whether AI algorithms are actually intelligent or just very fast search engines that can summarize the data they retrieve and produce intelligent-sounding output.

Center for AI Safety, Scale AI & HLE Contributors Consortium (2026) A benchmark of expert-level academic questions to assess AI capabilities. Nature 649:1139–1146 [doi: 10.1038/s41586-025-09962-4]

Benchmarks are important tools for tracking the rapid advancements in large language model (LLM) capabilities. However, benchmarks are not keeping pace in difficulty: LLMs now achieve more than 90% accuracy on popular benchmarks such as Measuring Massive Multitask Language Understanding, limiting informed measurement of state-of-the-art LLM capabilities. Here, in response, we introduce Humanity’s Last Exam (HLE), a multi-modal benchmark at the frontier of human knowledge, designed to be an expert-level closed-ended academic benchmark with broad subject coverage. HLE consists of 2,500 questions across dozens of subjects, including mathematics, humanities and the natural sciences. HLE is developed globally by subject-matter experts and consists of multiple-choice and short-answer questions suitable for automated grading. Each question has a known solution that is unambiguous and easily verifiable but cannot be quickly answered by internet retrieval. State-of-the-art LLMs demonstrate low accuracy and calibration on HLE, highlighting a marked gap between current LLM capabilities and the expert human frontier on closed-ended academic questions. To inform research and policymaking upon a clear understanding of model capabilities, we publicly release HLE at https://lastexam.ai.
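For readers wondering what "suitable for automated grading" means in practice, here is a minimal sketch of exact-match scoring against closed-ended questions. The Question record and the ask_model() call below are hypothetical placeholders for illustration, not the actual HLE grading code; the real benchmark and its evaluation tools are released at https://lastexam.ai.

```python
# Minimal sketch of exact-match scoring for closed-ended benchmark questions.
# ask_model() is a stand-in for whatever LLM API you use; the Question records
# are invented placeholders, not items from the real HLE dataset.

from dataclasses import dataclass


@dataclass
class Question:
    prompt: str   # the question text shown to the model
    answer: str   # the single, unambiguous reference answer


def ask_model(prompt: str) -> str:
    """Placeholder for an LLM call; wire this to your model of choice."""
    raise NotImplementedError


def normalize(text: str) -> str:
    """Light normalization so trivial formatting differences don't count as errors."""
    return " ".join(text.strip().lower().split())


def score(questions: list[Question]) -> float:
    """Exact-match accuracy: fraction of questions answered correctly."""
    correct = sum(
        normalize(ask_model(q.prompt)) == normalize(q.answer)
        for q in questions
    )
    return correct / len(questions)


# Example usage with hypothetical closed-ended questions:
# benchmark = [Question("What is 17 * 19?", "323")]
# print(f"accuracy: {score(benchmark):.1%}")
```

The point of this kind of design is that each answer is short and verifiable, so a script, not a human judge, decides whether the model got it right.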

How do the best AI programs score on this HLE test compared to other benchmarks that were designed by AI companies to prove that their algorithms were intelligent? Here are the results.

Oops! It looks like these programs aren't as intelligent as most people think.

Note: The cartoon was generated by ChatGPT in response to the request, "draw a cartoon illustrating GIGO - garbage in garbage out."
