Richard Sutton Doesn't Believe in Hacking AGI
Richard Sutton thinks LLMs are a dead end for AGI. His arguments are sound, but hackers and engineers have pragmatic workarounds. An exploration of the future of AI development.
Sutton's Argument
Richard Sutton, widely recognized as a founding father of modern computational reinforcement learning, believes LLMs are a dead end in the pursuit of true artificial intelligence. His case rests on two points:
- Absence of goals: Intelligence is about achieving goals, and LLMs have no goals in any meaningful sense—they are "simply" predicting the next token.
- Imitation vs. experience: True learning comes from experience, trying things and seeing what happens, not from imitating human-generated output.
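Sutton's distinction can be made concrete with a toy example. A trial-and-error learner improves by acting and observing outcomes; a minimal sketch is an epsilon-greedy agent on a three-armed bandit (the reward probabilities below are made up for illustration). Nothing like this feedback loop happens when a model merely imitates text.

```python
import random

# Toy "learning from experience": epsilon-greedy on a 3-armed bandit.
# Reward probabilities are hypothetical; arm 2 is the best choice.
REWARD_PROBS = [0.2, 0.5, 0.8]

def pull(arm, rng):
    """Return reward 1 with the arm's probability, else 0."""
    return 1 if rng.random() < REWARD_PROBS[arm] else 0

def run_bandit(steps=5000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    counts = [0, 0, 0]
    values = [0.0, 0.0, 0.0]  # running average reward per arm
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(3)           # explore: try something
        else:
            arm = values.index(max(values))  # exploit: current best guess
        reward = pull(arm, rng)              # act, then observe the outcome
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
    return values

print(run_bandit())  # estimates converge toward [0.2, 0.5, 0.8]
```

The agent discovers the best arm purely by trying and seeing what happens; an imitator could only copy whatever arm choices appeared in its training data.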
LLM Limitations
- Don't have persistent memory across sessions
- Can't actively experiment with the world
- Don't have intrinsic goals or drives
- Learn from human-generated text, not direct experience
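The first two limitations are visible in how chat APIs actually behave: each call is stateless, and any "memory" across turns exists only because the client re-sends the conversation. A minimal sketch, with a stub standing in for a real model call:

```python
# Statelessness sketch: the stub "model" can only use the messages it
# receives in this one call. Cross-turn memory lives entirely client-side.
def stub_model(messages):
    """Stand-in for an LLM API call; sees nothing beyond `messages`."""
    seen = " | ".join(m["content"] for m in messages)
    return f"I can see {len(messages)} message(s): {seen}"

history = []  # client-side memory; the "model" itself keeps none

def chat(user_text):
    history.append({"role": "user", "content": user_text})
    reply = stub_model(history)  # the full history is re-sent every turn
    history.append({"role": "assistant", "content": reply})
    return reply

print(chat("my name is Ada"))
print(chat("what is my name?"))  # answerable only because history was re-sent
```

Drop the `history` list and the second question becomes unanswerable, which is exactly the gap that RAG and similar techniques are patching.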
The Pragmatic Response
But hackers and engineers have pragmatic workarounds: give LLMs goals through prompts, add external memory through RAG and tool use, let them act on the world through tools and APIs, and fine-tune them on task-specific feedback.
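These patches fit together as an agent loop. The sketch below is a hypothetical minimum, not any particular framework: a goal injected via the prompt, a tool registry, and a transcript that feeds tool results back to the model. A stub replaces the real LLM call.

```python
# Minimal agent-loop sketch: goal via prompt, tools via a registry,
# "memory" via an appended transcript. All names here are made up;
# a real system would call an LLM API where stub_llm is called.
TOOLS = {
    "add": lambda a, b: a + b,  # hypothetical tool
}

def stub_llm(transcript):
    """Stand-in for an LLM: request the add tool once, then finish."""
    if not any(line.startswith("TOOL_RESULT") for line in transcript):
        return "CALL add 2 3"
    return "DONE: the sum is " + transcript[-1].split()[-1]

def run_agent(goal, max_steps=5):
    transcript = [f"GOAL: {goal}"]  # goal injected via the prompt
    for _ in range(max_steps):
        action = stub_llm(transcript)
        if action.startswith("DONE"):
            return action
        _, name, *args = action.split()
        result = TOOLS[name](*map(int, args))        # tool dispatch
        transcript.append(f"TOOL_RESULT {result}")   # fed back as memory
    return "gave up"

print(run_agent("add 2 and 3"))  # → "DONE: the sum is 5"
```

None of this gives the model intrinsic goals in Sutton's sense; the goal, the memory, and the feedback all live in scaffolding outside the model. That is the pragmatic trade-off.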
Where We Stand
LLMs with good prompting, tools, and orchestration can accomplish remarkable things today. They're not AGI, but they're useful. The best approach: use what works today while keeping an eye on the fundamental research that will shape tomorrow.