
AI’s $650 Billion Question: Hype, Hallucinations, and Real Business Value

February 15, 2026 · 5 min read

The Stakes: $650 Billion and the Future of the Economy

AI is no longer a side story in tech.

It is now a macroeconomic wager.

The top four hyperscalers are expected to spend roughly $650 billion on AI-related infrastructure and technology.

That level of capital deployment raises the core question:

Will AI generate returns large enough to justify the spend?

To explore that, Eisman brought on Columbia Business School professor Daniel Guetta for a deep dive into the mechanics—not just the market narrative.


What Exactly Is a Large Language Model?

Guetta begins with a crucial distinction:

Large language models are not the entirety of AI.

There are two broad categories:

  • Predictive AI / Machine Learning (older, numerical, structured)

  • Generative AI / Large Language Models (newer, unstructured, text-based)

Traditional machine learning works well when inputs are clean numbers:

  • square footage

  • credit scores

  • transaction history

But it struggles with unstructured data like:

  • text

  • images

  • documents

LLMs changed the game by being able to process language directly.


How LLMs Actually Work: Autocomplete on Steroids

At the core, Guetta emphasizes a point that often gets lost:

LLMs are sophisticated next-word prediction engines.

They generate responses one token at a time:

  • “What is the capital of Argentina?”

  • → “Buenos”

  • → “Aires”

They are not “thinking” in the human sense.

They are pattern-matching across enormous training data.

This is also why they are computationally expensive:

To generate each new word, the model must reprocess the entire context of the conversation.
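The token-by-token loop can be sketched with a toy model. The probability table below is invented for illustration; a real LLM learns distributions over tens of thousands of tokens, but the generation loop has the same shape: the whole context so far is the input for each new token.

```python
import random

# Toy next-token model: maps a context (tuple of words) to possible
# continuations with probabilities. The table is purely illustrative.
TOY_MODEL = {
    ("What", "is", "the", "capital", "of", "Argentina", "?"):
        [("Buenos", 1.0)],
    ("What", "is", "the", "capital", "of", "Argentina", "?", "Buenos"):
        [("Aires", 1.0)],
}

def generate(prompt, max_tokens=5):
    context = list(prompt)
    for _ in range(max_tokens):
        # The entire context is the lookup key: producing each new token
        # means reprocessing everything generated so far.
        choices = TOY_MODEL.get(tuple(context))
        if not choices:
            break
        words, probs = zip(*choices)
        context.append(random.choices(words, weights=probs)[0])
    return context[len(prompt):]

print(generate(["What", "is", "the", "capital", "of", "Argentina", "?"]))
# ['Buenos', 'Aires']
```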


Embeddings: Turning Words Into Numbers

The technical breakthrough behind LLMs is something called an embedding.

Embeddings convert words into numerical representations so models can compute relationships.

Words that appear in similar contexts get placed “near” each other in mathematical space:

  • king and queen cluster together

  • France and Italy share structural similarity

  • unrelated concepts sit far apart

The model doesn’t assign meaning directly.

It learns meaning statistically from patterns in text.
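"Near" has a precise meaning here: similarity between vectors, typically cosine similarity. A minimal sketch, with tiny made-up vectors standing in for real embeddings (which have hundreds or thousands of learned dimensions):

```python
import math

# Toy 4-dimensional embeddings. The numbers are invented to illustrate
# that related words sit "near" each other in vector space.
EMBEDDINGS = {
    "king":   [0.9, 0.8, 0.1, 0.0],
    "queen":  [0.9, 0.7, 0.2, 0.0],
    "france": [0.1, 0.0, 0.9, 0.8],
    "italy":  [0.1, 0.1, 0.8, 0.9],
}

def cosine_similarity(a, b):
    # 1.0 means the vectors point the same way; near 0 means unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

print(cosine_similarity(EMBEDDINGS["king"], EMBEDDINGS["queen"]))   # high
print(cosine_similarity(EMBEDDINGS["king"], EMBEDDINGS["france"]))  # low
```

king/queen and france/italy cluster; king/france do not.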


Why Do LLMs Hallucinate?

Eisman asks the question investors and executives keep coming back to:

Why do these systems make things up?

Guetta’s answer is blunt:

The surprising thing isn’t that they hallucinate.
The surprising thing is that they ever don’t.

Since LLMs generate language probabilistically, hallucination is an inherent feature:

  • they don’t verify truth

  • they don’t reason like humans

  • they predict what “sounds” plausible

Novel events are especially difficult because novelty is not in the training data.

That’s why models often fail in real-time breaking situations.


Gary Marcus vs Guetta: Diminishing Returns and AGI

Gary Marcus argues:

  • scaling is producing diminishing improvements

  • hallucinations will always remain

  • LLMs may never deliver returns that justify current spending

Guetta agrees on one major point:

LLMs are unlikely to achieve true artificial general intelligence.

But he disagrees on the implication.

Even if they never become “human-level thinkers,” they can still deliver enormous business value.


The Real Value of AI Today: Three Buckets

Guetta frames AI’s practical utility in three categories:

1. Supercharging Traditional Machine Learning

LLMs can improve legacy predictive models by extracting meaning from text.

Example: content moderation.

Old systems looked for keywords.

LLMs can interpret context, intent, and nuance.

Even if imperfect, they dramatically reduce human workload.
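The contrast is easy to see in code. The keyword filter below is runnable; the LLM side can't be reproduced in a few lines, which is exactly the point. The banned-word list and examples are invented:

```python
import re

BANNED_WORDS = {"scam"}

def keyword_moderate(text):
    # Old-style moderation: flag on exact word matches, blind to context.
    return any(w in BANNED_WORDS for w in re.findall(r"[a-z]+", text.lower()))

print(keyword_moderate("This deal is a scam"))      # True
print(keyword_moderate("How do I report a scam?"))  # True -- a false positive:
# the innocent question is flagged too. Distinguishing the two requires
# understanding intent, which is what an LLM-based moderator adds.
```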


2. Agentic AI: A Chatbot With Hands

Agentic AI is the buzziest term in the industry.

Guetta defines it simply:

Agentic AI = an LLM connected to tools that can take actions.

Instead of only answering questions, it can:

  • send emails

  • process refunds

  • book flights

  • update spreadsheets

  • execute workflows

It’s not magic.

It’s an LLM paired with real-world functions.

The key bottleneck is not intelligence.

It’s infrastructure.

Companies must have clean systems and usable data pipelines before agents can do anything meaningful.
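Stripped of the buzz, an agent is a loop: the model proposes a tool call, the harness executes it, and the result goes back into the context. A minimal sketch, with a hard-coded stub standing in for the real model and invented tool names:

```python
# Minimal agent loop. `fake_llm` stands in for a real model API; the
# tools and order ID are hypothetical examples.
def process_refund(order_id):
    return f"refund issued for {order_id}"

def send_email(to, body):
    return f"email sent to {to}"

TOOLS = {"process_refund": process_refund, "send_email": send_email}

def fake_llm(conversation):
    # A real model would decide from the conversation; this stub
    # hard-codes one refund followed by a confirmation email.
    if not any("refund issued" in m for m in conversation):
        return {"tool": "process_refund", "args": {"order_id": "A-123"}}
    if not any("email sent" in m for m in conversation):
        return {"tool": "send_email",
                "args": {"to": "customer@example.com",
                         "body": "Your refund is on its way."}}
    return None  # nothing left to do

def run_agent(user_request):
    conversation = [user_request]
    while (call := fake_llm(conversation)) is not None:
        result = TOOLS[call["tool"]](**call["args"])  # the "hands"
        conversation.append(result)
    return conversation

print(run_agent("Please refund order A-123 and confirm by email."))
```

Everything interesting lives outside the loop: the quality of the tools, the data they touch, and the systems they plug into.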


3. Chatbots as Enterprise Search Engines

LLMs are already transformative inside organizations through document retrieval.

By embedding thousands of internal documents, companies can build systems that allow employees to ask:

  • “Find the riskiest contract”

  • “Summarize the policy change”

  • “Locate the relevant clause”

This is already producing measurable productivity gains.
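The retrieval step can be sketched as follows. As a stand-in for a real embedding model, documents are scored by word overlap with the query; in practice each document and query would be embedded and compared by cosine similarity. The documents are invented examples:

```python
import re

# Invented internal documents standing in for a company's contract store.
DOCUMENTS = [
    "Contract with Acme Corp: unlimited liability clause, no termination right.",
    "Policy change memo: travel expenses now require manager approval.",
    "Standard vendor agreement with a 30-day termination clause.",
]

def tokens(text):
    return set(re.findall(r"[a-z]+", text.lower()))

def score(query, doc):
    # Crude relevance proxy: shared words. A real system would compare
    # embedding vectors instead.
    return len(tokens(query) & tokens(doc))

def search(query, k=1):
    return sorted(DOCUMENTS, key=lambda d: score(query, d), reverse=True)[:k]

print(search("liability clause"))
```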


Corporate America’s Real Constraint: Data Readiness

Eisman presses the critical implementation question:

How many companies are actually ready?

Guetta’s view:

Most firms still have fragmented, messy, acquisition-driven data environments.

However, GenAI can help clean data too.

Example: unifying customer records where “Coca-Cola” appears under multiple names across databases.

AI becomes both the tool and the catalyst for modernization.
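The record-unification step can be sketched without any AI at all, which shows both why it matters and where an LLM helps. Here a simple string-similarity heuristic stands in for the model; the records are invented:

```python
from difflib import SequenceMatcher

# Invented customer records: three spellings of one company, plus another.
RECORDS = ["Coca-Cola", "The Coca Cola Company", "Coca Cola Co.", "PepsiCo"]

def normalize(name):
    return "".join(c for c in name.lower() if c.isalnum())

def same_entity(a, b, threshold=0.6):
    # Heuristic stand-in for an LLM's judgment call.
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio() >= threshold

def cluster(records):
    groups = []
    for record in records:
        for group in groups:
            if same_entity(record, group[0]):
                group.append(record)
                break
        else:
            groups.append([record])
    return groups

print(cluster(RECORDS))
```

The heuristic misses edge cases ("KO" as a ticker, subsidiaries, typos in other alphabets); those ambiguous calls are where a language model earns its keep.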


Software and Consulting: Are Moats Collapsing?

Two groups have struggled under the AI narrative:

  • enterprise software firms

  • management consulting

Guetta argues software isn’t just code.

Platforms like Salesforce provide structure, coordination, and standardization.

If every employee builds their own CRM, the result is chaos.

Consulting also has value beyond answers:

Often the value is convening stakeholders and forcing alignment—something chatbots don’t solve.


The Hidden Risk: Bias and Guardrails

LLMs inevitably absorb bias from two sources:

  • internet training data

  • human feedback tuning (RLHF)

As systems become agentic, the risk increases:

Hallucinating is one thing.

Hallucinating while executing actions is another.

The future depends heavily on guardrails:

  • limits

  • approvals

  • constrained tool access

  • oversight layers
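In code, a guardrail is simply a checkpoint between the model's proposed action and its execution. A minimal sketch, with invented limits and tool names:

```python
# Guardrail layer: tool calls are checked against an allow-list and
# limits before execution; large actions require human approval.
ALLOWED_TOOLS = {"process_refund"}
REFUND_LIMIT = 100.0  # invented threshold

def guarded_call(tool, args, approved=False):
    if tool not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {tool!r} is not allowed")
    if tool == "process_refund" and args["amount"] > REFUND_LIMIT and not approved:
        raise PermissionError("refund exceeds limit; human approval required")
    return f"{tool} executed with {args}"

print(guarded_call("process_refund", {"amount": 50.0}))   # allowed
# guarded_call("process_refund", {"amount": 500.0})       # raises: needs approval
# guarded_call("send_email", {})                          # raises: not allowed
```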


Where AI Goes Next: World Models and New Training Paradigms

Guetta highlights emerging research directions:

  • reinforcement learning with verifiable rewards

  • training based on final answer quality, not token prediction

  • experimental “world models” that simulate reality internally

These approaches aim to reduce hallucination and improve reasoning.

Still early.

Still uncertain.


Eisman’s Takeaway: Value Is Real, Returns Are Unknown

Eisman concludes that Guetta’s optimism is more convincing than Marcus’s pessimism.

Not because AI will reach AGI.

But because:

Agentic workflows and enterprise applications are already reshaping industries.

The open question remains financial:

Will the $650B spend generate sufficient ROI?

We won’t know this year.

We may not know until 2027 or 2028.

Until then, the AI story stays the same:

  • hyperscalers keep spending

  • markets keep questioning

  • narratives keep swinging


Bottom Line

AI does not need to become Terminator-level intelligence to matter.

Even flawed, hallucination-prone models can:

  • automate workflows

  • transform enterprise search

  • supercharge legacy analytics

  • reshape service industries

The technology is real.

The spending is real.

The returns are still the unknown.

That is the $650 billion question.


Until next time, this is Steve Eisman, and this has been The Real Eisman Playbook.
If you’d like to catch my interviews and market breakdowns, visit The Real Eisman Playbook or subscribe to the Weekly Wrap channel on YouTube.


This post is for informational purposes only and does not constitute investment advice. Please consult a licensed financial adviser before making investment decisions.

I’m Steve Eisman, an investor and fund manager best known for predicting the 2008 housing market collapse. I’ve spent my career studying markets, risk, and the psychology that drives financial decisions. Today, I continue to invest and share lessons from decades of watching cycles repeat.

Steve Eisman

