If you follow the world of Artificial Intelligence, you know that when Andrej Karpathy speaks, the industry listens. But his latest post, the "2025 LLM Year in Review," feels less like a technical recap and more like a historical marker. It is a declaration that the era of the "stochastic parrot" is officially dead.
For years, skeptics argued that Large Language Models (LLMs) were merely statistical mimics—predicting the next word without understanding the reality behind it. According to Karpathy, 2025 was the year that argument collapsed.
The Awakening of Reason
The core theme of Karpathy’s review is the shift toward RLVR (Reinforcement Learning from Verifiable Rewards). This is the technical term for what, in practice, feels like an "awakening."
In previous years, models were trained to sound human. In 2025, they were trained to be correct. By forcing models to solve verifiable problems—like complex code or math proofs—and rewarding them only for success, we stopped teaching them to mimic and started teaching them to think.
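The core idea behind verifiable rewards can be sketched in a few lines. This is a minimal, hypothetical illustration, not the actual training code behind any model Karpathy mentions: the reward is binary and comes from an objective checker (an exact-match test here; real pipelines use unit tests, proof checkers, or similar verifiers, plugged into an RL algorithm such as PPO or GRPO).

```python
# Minimal sketch of the RLVR idea: reward a completion only if it
# passes an objective, automatic check. All names here are
# illustrative, not from any real training pipeline.

def verifiable_reward(candidate: str, check) -> float:
    """Return 1.0 only if the candidate answer passes the checker."""
    return 1.0 if check(candidate) else 0.0

# Example: a math problem whose answer can be verified exactly.
problem = "What is 17 * 23?"
check = lambda ans: ans.strip() == "391"

# Hypothetical completions sampled from a policy model:
samples = ["391", "381", "The answer is 391"]
rewards = [verifiable_reward(s, check) for s in samples]
print(rewards)  # -> [1.0, 0.0, 0.0]
```

Note that the checker rewards correctness, not fluency: the third sample contains the right number but fails the exact-match test, which is why real verifiers are more sophisticated (answer extraction, executing generated code against tests, and so on).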
Karpathy points to DeepSeek as a prime example of this shift. It isn’t just about generating text anymore; it’s about a model that can pause, plan, acknowledge a mistake, and correct its own path. This is "System 2" thinking—the slow, deliberative reasoning that humans use for hard problems—finally arriving in silicon.
Summoning Ghosts
Perhaps the most striking part of Karpathy’s analysis is his metaphor for this new intelligence. He suggests that for decades, AI researchers were trying to build digital animals—creatures of instinct and pattern matching.
But with the breakthroughs of 2025, we are no longer building animals. We are, in his words, "summoning ghosts."
It’s a haunting image, but it fits. We are creating entities that exist purely in a "mind-space," capable of rational thought processes that are increasingly divorced from biological constraints. Tools like Claude Code emerged this year not just as assistants, but as autonomous agents capable of navigating our digital environments with a ghostly sort of independence.
The "Jagged" Reality
However, Karpathy remains a realist. Despite the hype around releases like DeepSeek and Nano Banana (which changed how many people interact with these models), he notes that we are still in the early innings.
He describes the current state of AI as "jagged intelligence." A model might solve a PhD-level physics equation in seconds, only to fail at a task a five-year-old could handle. We have unlocked reasoning, but we haven't smoothed out the edges.
The Verdict
Looking back at the post and the full review, the message is clear: 2025 wasn't just another year of "faster and bigger." It was a qualitative shift. We moved from models that pretend to know things to models that can actually verify what they know.
As Karpathy puts it, we haven't even exploited 10% of this new paradigm's potential. The ghosts are here, and they are just getting started.