With OpenAI discussing trillion-dollar infrastructure plans and Nvidia’s market cap soaring, the question on everyone’s mind is inevitable: Are we in an AI bubble?
With so much capital flooding the market, it is natural to ask if speculation has driven valuations beyond reality. However, viewing "AI" as a single, monolithic entity is a mistake. To understand the real financial landscape, we have to peel back the layers of the tech stack.
When you analyze the industry by sector—Applications, Inference, and Training—it becomes clear that different areas look "bubbly" to very different degrees.
(Disclaimer: This analysis is for educational purposes only and does not constitute investment advice.)

1. The Application Layer: The Sleeping Giant
Contrary to the hype around hardware, there is currently significant underinvestment in the application layer.
The logic is simple: applications built on top of AI infrastructure (like LLM APIs) must eventually generate more value than the infrastructure itself. Why? Because applications are what generate the revenue that ultimately pays the infrastructure providers.
We are currently seeing "green shoots" across various industries, particularly those deploying agentic workflows. Despite this, many venture capital investors remain hesitant. They feel comfortable deploying $1 billion into infrastructure (where the recipe for success is clear) but struggle to pick winners in the application space.
There is a fear that "wrapper" applications will be wiped out by updates to frontier models. However, the potential for bespoke, agentic applications is massive, and this area is likely to see the most sustainable growth over the coming decade.
2. Infrastructure for Inference: Supply Constrained
While training gets the headlines, inference (the compute required to actually run trained models and serve their outputs) is facing a critical bottleneck.
Infrastructure providers are struggling to fulfill the exploding demand for token generation. This is a classic "good problem" to have: the market is supply-constrained, not demand-constrained.
What is driving this hunger for tokens? Coding agents.
Tools like Claude Code, OpenAI’s Codex, and Google’s CLI tools are advancing rapidly. As these agents become more capable, developers rely on them more, driving up the aggregate demand for inference.
We are not just chatting with bots anymore; we have software writing software. This requires massive compute capacity. While there is a risk of overbuilding in the long run, right now, society needs more inference capacity to support the next generation of productivity tools.
3. Infrastructure for Training: The High-Risk Zone
This is the sector that warrants the most caution.
Billions are being poured into training massive frontier models, but this area carries the highest risk of a bubble. Why?
• The Open Source Threat: If open-weight models continue to capture market share, companies spending billions on proprietary training may not see an attractive ROI.
• Weakening Moats: Algorithmic and hardware improvements are making it cheaper to train models every year. The "technology moat" is shrinking.
While brands like ChatGPT and distribution networks like Google’s Gemini have strong defensive positions, the pure business of model training is becoming increasingly difficult to monetize relative to the capital expenditure required.
The Real Danger: A Sentiment Collapse
The fundamental long-term outlook for AI is incredibly strong. However, there is a distinct downside scenario we must consider.
If the training infrastructure sector—the riskiest bucket—suffers a collapse due to overinvestment, it could trigger negative market sentiment for the entire AI industry. We could see an irrational outflow of capital even from the healthy, high-potential sectors (like applications and inference).
Conclusion: The Weighing Machine
Warren Buffett, quoting Benjamin Graham, famously said: "In the short run, the market is a voting machine, but in the long run, it is a weighing machine."
In the short term, prices are driven by votes (sentiment and hype). It is difficult to predict when or if sentiment will shift. But in the long run, the market weighs intrinsic value. The fundamentals of AI—its ability to write code, solve problems, and power agentic workflows—are heavy with intrinsic value.
Regardless of short-term fluctuations or bubbles in specific sectors, the plan remains the same: Keep building.