
The Financial Crisis Explained: Leverage, Subprime Mortgages, and the System We Live With Today (Part One)
Why the Financial Crisis Still Matters
Today I want to discuss a very complicated but crucial topic: the financial crisis and how it created today’s financial system.
The crisis was not just a US event. It was global. It brought the entire financial system to the brink and very nearly triggered a worldwide depression. The recession that followed was horrific, but it was far from the worst-case outcome. That outcome was narrowly avoided.
The regulatory changes that followed permanently altered banking and finance. They made the system safer, but also created unintended consequences. Because of those consequences, I believe the next administration will revisit parts of the regulatory framework.
Because of the scope of this topic, I am breaking this lecture into two parts. In part one, I will focus on leverage, banking fundamentals, and the subprime mortgage machine that ultimately collapsed the system.
The Four Interlocking Causes of the Crisis
The financial crisis had four major, interconnected causes.
First, there was far too much leverage throughout the financial system, especially at globally systemic institutions, meaning very large banks and investment banks.
Second, a massive asset class blew up: subprime mortgages.
Third, systemically important financial institutions owned enormous amounts of this asset class.
Fourth, derivatives tied the balance sheets of these institutions together in a web so complex that no one truly understood the exposure.
To understand how this unfolded, we must begin with leverage and with how banks actually work.
How a Bank Really Works
Banks are fundamentally different from most businesses. They require leverage to generate acceptable returns. More importantly, a bank does not know its cost of goods sold at the point of sale.
For a bank, cost of goods sold is future loan losses. When a bank makes a loan, it can only estimate those losses. It will not know the truth for years.
Conceptually, a bank sells access to its balance sheet. The product is the loan. The price is the interest rate.
The key measure of profitability is return on assets, or ROA, which is net income divided by total assets. Bank executives, however, are paid based on return on equity, or ROE.
The crucial formula is simple:
ROE equals ROA multiplied by leverage.
This one equation explains almost everything that went wrong.
Why Banks Use Leverage
Consider a simple thought experiment.
A bank raises 1 billion dollars in equity and makes 1 billion dollars in loans with no leverage. If it earns a 1 percent ROA, it generates 10 million dollars in net income. That is a 1 percent ROE. This is terrible.
Now add leverage.
If the same bank raises 9 billion dollars in deposits and makes 10 billion dollars in loans, with the same 1 percent ROA, it earns 100 million dollars. The ROE is now 10 percent.
Increase leverage further. With 99 billion in deposits and 100 billion in loans, a 1 percent ROA produces 1 billion dollars in net income. The ROE becomes 100 percent.
The lesson is clear. As long as a bank is profitable, more leverage means higher ROE.
This creates a powerful incentive. Executives are rewarded for higher ROE. Higher leverage delivers it.
But leverage cuts both ways.
If that same highly leveraged bank suffers a negative 1 percent ROA, it loses 1 billion dollars and wipes out all its equity. The bank is insolvent.
This does not mean leverage is bad. Banks must be levered to serve their economic purpose. Without leverage, borrowing costs would be prohibitively high. The real question is how much leverage is too much.
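To make the arithmetic concrete, here is a minimal sketch in Python that works through the scenarios above. The round numbers are the illustrative ones from the thought experiment, not data from any actual bank.

```python
# Illustrative only: round numbers from the thought experiment above.
def roe(equity, assets, roa):
    """Return on equity = net income / equity, which equals ROA * leverage."""
    net_income = roa * assets
    leverage = assets / equity
    return net_income / equity, leverage

scenarios = [
    ("No leverage",   1e9,   1e9,  0.01),
    ("10x leverage",  1e9,  10e9,  0.01),
    ("100x leverage", 1e9, 100e9,  0.01),
    ("100x, -1% ROA", 1e9, 100e9, -0.01),
]

for name, equity, assets, roa in scenarios:
    r, lev = roe(equity, assets, roa)
    print(f"{name:15s} leverage {lev:5.0f}x  ROE {r:+7.1%}")
```

The last line is the insolvency case: at 100 times leverage, a 1 percent loss on assets consumes 100 percent of equity.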
By 2007, Leverage Was Extreme
By the time the crisis began, leverage had exploded.
Between 1997 and 2007, leverage at large financial institutions roughly tripled. In Europe, average bank leverage rose from about 11 times to more than 30 times. Citigroup’s leverage reached roughly 33 times. Including off-balance-sheet exposure, it was likely over 40 times.
Investment banks like Goldman Sachs and Lehman Brothers were in similar territory.
To understand why regulators allowed this, we need to discuss one concept that is central to the crisis: risk-weighted assets.
Risk-Weighted Assets and a Fatal Blind Spot
Risk-weighted assets, or RWA, were created with good intentions.
Two banks can have the same leverage, but very different risk profiles. One might make safe loans. The other might make risky ones. Regulators wanted capital requirements to reflect that difference.
So each asset was assigned a risk weight, largely determined by credit ratings and historical loss data. Banks were regulated based on equity relative to risk-weighted assets, not total assets.
Here is the problem.
Banks have a built-in incentive to load their balance sheets with assets that appear low risk under historical data. By doing so, they can increase total leverage while keeping risk-weighted leverage seemingly stable.
Worse, risk weights are backward-looking. They rely on historical loss patterns. But lending standards are not fixed laws of nature. They are set by people. They can loosen dramatically without immediate evidence in loss data.
That is exactly what happened.
From 1997 to 2007, banks increased leverage massively in simple assets-to-equity terms, while risk-weighted leverage appeared relatively flat. Return on equity soared, not because banks were better run, but because they borrowed more.
An entire generation of bank CEOs mistook leverage for genius.
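To see how the same balance sheet can look very different under the two measures, here is a minimal sketch in Python. The risk weights, asset mixes, and dates are hypothetical, chosen only to illustrate the mechanics, not actual Basel rules or any specific bank.

```python
# Hypothetical balance sheets; risk weights are illustrative, not actual Basel weights.
def leverage_ratios(equity, assets_by_weight):
    """assets_by_weight: list of (asset_amount, risk_weight) pairs."""
    total_assets = sum(amount for amount, _ in assets_by_weight)
    rwa = sum(amount * weight for amount, weight in assets_by_weight)
    return total_assets / equity, rwa / equity

equity = 10.0  # billions, held constant

# Earlier bank: ordinary loans carrying a full 100% risk weight.
bank_1997 = [(100.0, 1.00)]

# Later bank: triple the assets, the growth loaded into "low-risk" AAA-rated
# mortgage securities carrying a 20% risk weight.
bank_2007 = [(100.0, 1.00), (200.0, 0.20)]

for year, sheet in (("1997", bank_1997), ("2007", bank_2007)):
    simple, risk_weighted = leverage_ratios(equity, sheet)
    print(f"{year}: simple leverage {simple:.0f}x, risk-weighted leverage {risk_weighted:.0f}x")
```

In this toy example, simple leverage triples from 10 times to 30 times, while risk-weighted leverage drifts from 10 times to only 14 times. On the regulatory dashboard, almost nothing appears to have changed.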
Why Glass-Steagall Was Not the Cause
After the crisis, many argued that repealing Glass-Steagall caused the collapse. I strongly disagree.
Even if Glass-Steagall had remained fully intact, the crisis would still have happened. The leverage explosion driven by risk-weighted assets would have occurred anyway. The same subprime loans would have been made.
The problem was leverage combined with a disastrous asset class.
That asset class was subprime mortgages.
The Rise and Rebirth of Subprime Mortgages
Subprime borrowers are generally defined as having credit scores below 650. In reality, they represent much of the lower middle class and parts of the middle class.
Starting in the early 1990s, real household income growth in the United States stalled. To maintain consumption, households borrowed.
Subprime lending grew rapidly, aided by securitization. Instead of holding loans on their balance sheets, lenders packaged them into securities rated by agencies and sold globally. Funding was no longer a constraint.
By the late 1990s, the first subprime boom had collapsed under bad accounting and excessive risk. Many firms went bankrupt.
After the 2001 recession, interest rates fell to 1 percent. Investors needed yield. Wall Street needed supply. Subprime mortgages returned in force.
This time, the scale was far larger.
Underwriting Collapsed Completely
From 2002 to 2006, underwriting standards deteriorated year after year.
Rising home prices masked risk. Delinquencies stayed low, encouraging even looser standards.
By 2006, roughly 600 billion dollars in subprime loans were being originated annually, about 20 percent of the entire US mortgage market.
At least half of these loans were low-doc or no-doc loans. Income was barely verified, if at all. Subprime borrowers, who required more scrutiny, received less.
I am not exaggerating when I say that by 2006, if you could breathe, you could get a mortgage.
The Subprime Treadmill
Subprime mortgages were typically teaser-rate loans.
Borrowers paid a low rate, around 3 percent, for two or three years. Then the rate reset to LIBOR plus 600 basis points, roughly 9 percent.
Crucially, lenders underwrote the loan to the teaser rate, fully aware the borrower could not afford the reset rate.
The business model relied on refinancing. Every few years, borrowers refinanced into a new teaser loan, paying three to four points each time. Those fees were rolled into the principal.
Borrowers never built equity. They were trapped on a treadmill.
This was socially disastrous, but enormously profitable. Everyone was paid on volume, not loan quality.
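Here is a rough sketch of the treadmill arithmetic in Python, using the teaser and reset rates mentioned above. The starting balance, the two-year refinancing cycle, and the 3.5 points per refinancing are assumptions for illustration, and the payments shown are simple interest-only approximations.

```python
# Hypothetical treadmill: 3% teaser, ~9% reset, 3.5 points capitalized at each refinancing.
principal = 200_000.0        # assumed starting loan balance
teaser_rate, reset_rate = 0.03, 0.09
points_per_refi = 0.035      # fees rolled into the new loan at each refinancing

for cycle in range(1, 4):    # three two-year teaser cycles
    teaser_payment = principal * teaser_rate / 12   # interest-only approximation
    reset_payment = principal * reset_rate / 12
    print(f"Cycle {cycle}: balance ${principal:,.0f}, "
          f"teaser ${teaser_payment:,.0f}/mo vs reset ${reset_payment:,.0f}/mo")
    principal *= (1 + points_per_refi)               # fees capitalized into principal

print(f"After three refinancings the balance has grown to ${principal:,.0f}, "
      "with no principal ever repaid.")
```

The payment roughly triples at each reset, so the borrower must refinance, and each refinancing pushes the balance higher. Equity never builds.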
Where I Enter the Story
Because the entire subprime market was securitized, there was an extraordinary amount of data available. Every month, each securitization reported delinquencies and losses.
By comparing early-stage delinquencies across vintages, my partners and I saw something alarming in 2006. Delinquencies in new deals were far worse than in any prior year.
The data confirmed what anecdotes suggested. Underwriting had collapsed.
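The analysis itself was simple: line up each vintage at the same loan age and compare early-stage delinquencies. The sketch below shows the shape of that comparison in Python; the delinquency numbers are invented purely for illustration, while the real figures came from the monthly securitization reports.

```python
# Invented numbers for illustration: 60+ day delinquency rates (%) by months since origination.
# The point is comparing vintages at the same loan age, not the specific values.
early_delinquencies = {
    2003: {3: 0.8, 6: 1.9, 9: 3.0},
    2004: {3: 0.9, 6: 2.1, 9: 3.3},
    2005: {3: 1.2, 6: 2.8, 9: 4.5},
    2006: {3: 2.4, 6: 5.6, 9: 9.0},  # new deals going bad far faster than any prior year
}

for vintage, curve in sorted(early_delinquencies.items()):
    print(f"{vintage} vintage: {curve[6]:.1f}% delinquent at month 6")
```

When the newest vintage is already two to three times worse than any earlier one at the same age, rising home prices can no longer explain away the deterioration.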
Yet no one stopped. There was no incentive to stop.
We began shorting subprime exposure in late 2006. By the summer of 2007, investors worldwide refused to buy subprime securities. A buyer strike emerged.
Without buyers, securitization stopped. Without securitization, lenders stopped lending. Without lending, borrowers could not refinance. Loans reset. Defaults exploded.
By late summer 2007, the crisis was inevitable.
And that is where part one ends.
In part two, I will explain why this turned into a full-blown global financial crisis and how derivatives tied the system together in a way that made collapse unavoidable.
Until next time, this is Steve Eisman, and this has been The Real Eisman Playbook.
If you’d like to catch my interviews and market breakdowns, visit The Real Eisman Playbook or subscribe to the Weekly Wrap channel on YouTube.
This post is for informational purposes only and does not constitute investment advice. Please consult a licensed financial adviser before making investment decisions.
