Opinion: How to Solve AI’s ‘Jagged Intelligence’ Problem

AI chatbots still make too many basic errors. It’s time to train AI models on a different source of knowledge.

Modern AI chatbots can do amazing things, from writing research papers to composing Shakespearean sonnets about your cat. But amid the sparks of genius, there are flashes of idiocy. Time and again, the large language models, or LLMs, behind today’s generative AI tools make basic errors — from botching simple high school math problems to stumbling over the rules of Connect Four.

This instability has been called “jagged intelligence” in tech circles, and it isn’t just a quirk — it’s a critical failing and part of the reason many experts believe we’re in an AI bubble. You wouldn’t hire a doctor or lawyer who, despite giving sound medical or legal advice, sometimes acts like they are clueless about how the world works. Enterprises seem to feel the same way about putting “jagged” AI in charge of supply chains, HR processes, or financial operations.

To solve the jagged intelligence problem, we must give our AI models access to a more powerful, more structured, and ultimately far more human stock of knowledge. Having engineered a range of AI systems over 30 years, I have found such knowledge to be an indispensable component of any reliable system.

This is because the technological innovations that launched the AI era aren’t capable of smoothing out these jagged edges. Current AI models don’t possess clear rules about how the world works; instead, they infer things from vast pools of data. In other words, they don’t know things, so they’re forced to guess — and when they guess wrong, the results range from the comical to the catastrophic.

Think about how humans learn. Born into “blooming, buzzing confusion,” babies spot patterns in the world around them: Faces are fun to look at, mom smells great, the cat scratches if you yank its tail. But pattern recognition is soon supplemented by clearly articulated knowledge: rules we’re taught, rather than things we absorb. From ABCs to arithmetic to how to load a dishwasher or drive a car, we use codified knowledge to learn efficiently — and avoid idiotic or dangerous mistakes along the way.

Frontier AI labs are already dabbling in this approach. Early LLMs struggled with grade-school math, so researchers bolted on actual mathematical knowledge — not hazy inferences, but explicit rules about how math works. The result: Google’s latest models can now reliably solve math Olympiad problems.
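
To make the idea concrete, here is a minimal sketch, in Python, of what “bolting on” explicit mathematical knowledge can look like. Instead of letting the model guess an answer from patterns, the system hands the symbolic step to an exact solver; the solve_exactly helper, the sympy dependency, and the sample equation are illustrative assumptions, not a description of any lab’s actual pipeline.

```python
# Illustrative sketch: delegate math to explicit symbolic rules instead of
# statistical guessing. Requires the third-party sympy library.
import sympy as sp

def solve_exactly(equation_text: str, variable: str = "x"):
    """Solve an equation such as 'x**2 - 5*x + 6 = 0' using exact symbolic rules."""
    left, right = equation_text.split("=")
    var = sp.Symbol(variable)
    expr = sp.sympify(left) - sp.sympify(right)  # move everything to one side
    return sp.solve(expr, var)                   # exact roots, not an estimate

print(solve_exactly("x**2 - 5*x + 6 = 0"))  # [2, 3], the same answer every time
```

The point of the design is that the answer comes from a rule that always holds, not from a pattern that merely tends to hold.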

Adding more data of different types — for example, the video data advocated by AI luminaries such as Yann LeCun — won’t overcome the fundamental challenge of jagged intelligence. Even with extra data, it’s mathematically certain that the models will keep making mistakes — because that’s how probabilistic, data-driven AI works. Instead, we need to give models knowledge — rigidly described concepts and constraints, rules and relationships — that anchor their behavior to the realities of our world.

To give AI models a human stock of knowledge, we need to rapidly build a public database of formal knowledge spanning a range of disciplines. Of course, the rules of math are clear; the workings of other fields — health care, law, economics, or education, say — are, in some ways, vastly more complex. But meeting this challenge is now within reach: the growth of companies such as Scale AI, which provides high-quality data for training AI models, points to the emergence of a new profession — one that translates human expertise into machine-readable form and, in doing so, shapes not just what AI can do, but what it comes to treat as true.

This knowledge base could be accessed on demand by developers (or even AI agents) to provide verifiable insights covering everything from loading a dishwasher to the intricacies of the tax code. AI models would make fewer absurd mistakes, because they wouldn’t need to deduce everything from first principles. (Some research also suggests that such models would require far less data and energy, though these claims have yet to be proven.)
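
As a rough illustration of what such on-demand knowledge could look like, here is a minimal sketch in Python. The domain (loading a dishwasher), the rule identifiers, and the check_action function are all hypothetical; the point is that every answer traces back to an explicit, human-readable rule that anyone can inspect.

```python
# Illustrative sketch of a tiny, inspectable knowledge base that an AI system
# could consult before acting. All names and rules here are hypothetical.
RULES = {
    "dishwasher.wooden_utensils": "Wooden utensils must not go in the dishwasher.",
    "dishwasher.cast_iron": "Cast-iron pans must not go in the dishwasher.",
}

PROHIBITED_ITEMS = {
    "wooden spoon": "dishwasher.wooden_utensils",
    "cast-iron pan": "dishwasher.cast_iron",
}

def check_action(item: str) -> tuple[bool, str]:
    """Return (allowed, explanation) for placing `item` in the dishwasher."""
    rule_id = PROHIBITED_ITEMS.get(item)
    if rule_id:
        return False, RULES[rule_id]  # grounded in an explicit, auditable rule
    return True, "No rule in the knowledge base prohibits this item."

print(check_action("wooden spoon"))    # (False, 'Wooden utensils must not go ...')
print(check_action("drinking glass"))  # (True, 'No rule in the knowledge base ...')
```

A model that consults rules like these does not have to deduce everything from scratch, and a regulator or user can read exactly what it treats as true.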

Unlike today’s opaque AI models, whose knowledge emerges from pattern recognition and is spread across billions of parameters, a formally distilled body of human knowledge could be directly examined, understood, and controlled. Regulators could verify a model’s knowledge, and users could ensure that tools were mathematically guaranteed not to make idiotic mistakes.

The ambition to create such a knowledge resource is nothing new in AI. Even though previous efforts produced inconclusive results, it’s time to make a fresh start. Much as biologists use algorithms to speedrun the once-laborious process of modeling proteins, AI researchers could leverage generative AI to aid knowledge modeling.

It’s clear that current AI models are getting smarter and will keep improving as they are trained on new kinds of data. And yet, to overcome the challenge of jagged intelligence — and turn AI models into trusted partners and true drivers of value — we need to redefine the way models relate to and learn about the world. Data-driven algorithms allowed us to start talking to machines. But knowledge, not data, is the key to sustaining AI’s future beyond the potential bubble.


Vinay Chaudhri is principal scientist at Knowledge Systems Research and former program director of SRI International’s Artificial Intelligence Center. He has also taught knowledge graphs and logic programming at Stanford University and worked on applying AI to financial services at JPMorganChase.