
The Substack Post That Sank The Market

by Portfolio Armor
Tuesday, Feb 24, 2026 - 11:33

AI imagery about AI

On Monday, a single Substack post managed to vaporize hundreds of billions of dollars of market cap.

Citrini Research and Alep Shah published “The 2028 Global Intelligence Crisis” over the weekend—a fictional dispatch from June 2028 describing an AI-driven recession: a “human intelligence displacement spiral” where white-collar workers are replaced by GPU clusters, their incomes vanish, and with them the consumption that props up everything from DoorDash orders to prime mortgages.

It was explicitly framed as a scenario, not a prediction. But markets treated it as a profit warning from the future.

On Monday, software and “white-collar leverage” names (payments, staffing, consulting, etc.) dropped hard. The iShares Expanded Tech-Software ETF (IGV) shed nearly 5%, wiping out more than $200 billion in software market cap in a day. The Dow fell over 800 points. Names like American Express (AXP), DoorDash (DASH), CrowdStrike (CRWD), and Datadog (DDOG) all took sharp hits as the “AI scare trade” flared again. (Investors.com)

So: did the post really “sink the market”? Not by itself. But it crystallized anxieties a lot of investors already had—and it raised real questions about where AI is taking the economy.

What follows is our response.


Why This Time Is Different

A standard critique of Citrini/Shah’s scenario is: we’ve seen this movie before. Automobiles put buggy drivers and stable hands out of work, but they also created auto mechanics, line workers, and truckers. Spreadsheets killed some clerical jobs but created armies of financial analysts.

So why treat this AI wave any differently?

Because this time, the technology isn’t just automating a specific task. It’s eating the general capability we used to call “white-collar work”:

  • Generating and editing text and code

  • Summarizing and analyzing documents

  • Drafting contracts, slide decks, marketing copy

  • Writing and reviewing software, even with complex legacy systems

In previous cycles, you could dodge disruption by “moving up the stack” into more abstract work—project management, coding, strategy. Now the stack itself is being automated.

“Learn to code” isn’t much comfort when the thing replacing you also codes, and does so faster, without getting tired, and for a flat monthly API fee.

Critics sometimes point to occupations like radiology as proof AI won’t really displace professionals: we were told a decade ago that image-reading AI would wipe out radiologists; yet they’re still here.

That’s true, but mostly because radiologists are protected by licensing and regulatory moats. A hospital can’t just say, “we fired all our radiologists and let the model handle it,” even if the model is pretty good. Many other white-collar workers don’t have those moats.

Strip away the regulatory insulation, and the Citrini/Shah spiral—AI improves → firms lay off → incomes fall → consumption drops → pressure to cut costs further → more AI—doesn’t look impossible. It looks like one of several plausible paths.


If AI Succeeds, Everything Else Can Be Solved

There’s a weird tension at the heart of the “AI doom loop” story.

On the one hand, it assumes AI is fantastically powerful: capable enough to automate a huge share of white-collar output, compress software margins, and replace layers of intermediation. On the other hand, it assumes that makes everyone poorer.

If AI really does all the things the scenario fears—makes operations dramatically more efficient, slashes software and service costs, accelerates drug discovery, speeds up R&D—then in aggregate, it’s creating value.

The scary part isn’t “AI makes the pie smaller.” It’s “the pie gets bigger while a lot of people lose their slice.”

That’s not primarily a technological problem; it’s a political one:

  • How do we tax and redistribute when value creation is concentrated in a relatively small set of AI-intensive firms and infrastructure providers?

If you solve the distribution problem, the production problem largely takes care of itself. AI making the economy more productive is good. The challenge is who participates in that.


We Already Have Obsolescent Workers

One uncomfortable point the Citrini scenario surfaces (without saying it outright) is this: we already know what it looks like when a big group of people can’t compete effectively in the labor market.

In the U.S., there are demographic groups that have been largely detached from productive work for decades—not because of AI, but because of the superior natural intelligence of other groups competing for the jobs left over after automation and outsourcing.

We already know the policy playbook for them:

  • Transfer payments (disability programs, Supplemental Security, refundable tax credits, etc.)

  • Make-work or semi-make-work public jobs and contracting

  • Housing and healthcare subsidies

  • And sometimes, unfortunately, outright fraud and arbitrage of poorly monitored programs

Think of the huge Covid relief frauds, including the Minnesota food-aid scandal that steered hundreds of millions of dollars away from intended recipients and into shell organizations before investigators caught on. (Reuters)

Immigration When We Don’t Need More Workers

One of the more explosive implications of the Citrini/Shah framework has nothing to do with GPUs or ARR—it’s about people.

For the last few decades, the standard economic argument for broad, relatively non-selective immigration has been:

“We have labor shortages; we’re aging; we need more workers to support retirees and fund the welfare state.”

That argument was always a lie, because unskilled immigrants and their dependents are a net fiscal drain, but if AI and automation are going to replace even highly skilled workers, that argument becomes transparently ludicrous.

If most of the economic pie is being baked by capital-intensive AI systems and a relatively small number of highly productive firms, a smaller population can still support a lot of retirees—especially if those firms are taxed or partly nationalized.

In that world, countries that keep adding net dependents without a clear productivity story don’t look farsighted. They look fiscally reckless.


Using AI To Fix The Fiscal Math

Citrini and Shah worry that AI will erode tax receipts and blow up government balance sheets. That’s possible, but it’s not inevitable.

You can also flip their logic: if AI is powerful enough to upend the labor market, it’s powerful enough to help close fiscal gaps—if we let it.

Three quick examples:

1. The State That Let A Chatbot Renew Prescriptions

In January, Utah became the first state to approve a pilot program where an AI system (Doctronic) can renew many chronic-care prescriptions autonomously, without a live physician on every case. Patients verify their identity, answer structured questions about symptoms and side effects, and the AI either renews or escalates to a human doctor. (commerce.utah.gov)

This isn’t sci-fi; it’s live policy. If it works safely, it’s a template for using AI to reduce healthcare costs the government is on the hook for via Medicare and Medicaid—without banning doctors or rationing care.
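The renew-or-escalate flow described above can be sketched as a simple triage function. This is a hypothetical illustration of the decision logic, not Doctronic's actual system; the function name, inputs, and red-flag list are all assumptions for the sake of the sketch.

```python
def triage_renewal(identity_verified, answers, red_flags):
    """Hypothetical sketch of the pilot's described flow: verify identity,
    check structured symptom/side-effect answers, then either renew
    autonomously or escalate to a human physician."""
    if not identity_verified:
        return "escalate: identity not verified"
    # Any affirmative answer to a red-flag question routes to a doctor.
    if any(answers.get(question) for question in red_flags):
        return "escalate: flagged symptom reported"
    return "renew"


# A routine renewal with no flagged symptoms sails through:
print(triage_renewal(True, {"chest_pain": False}, ["chest_pain"]))  # renew
```

The point of the structure is that the AI only acts autonomously on the easy, well-bounded cases; anything ambiguous falls back to a human, which is what makes a pilot like this politically and medically defensible.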

2. Using AI To Hunt Fraud Instead Of Just Funding It

The Minnesota food-aid fraud case showed how easy it was for relatively small organizations to siphon tens or hundreds of millions from a rushed program. (Reuters)

AI systems that can read contracts, bank records, invoices, and text messages at scale should be able to flag suspicious clusters of behavior faster and cheaper than any manual audit team:

  • Repeated patterns of invoices from shell entities

  • Unusual money flows to certain jurisdictions

  • Recycled documentation across multiple “providers”

You don’t need perfect detection—just enough to raise the cost of fraud and recover a meaningful fraction of the money.
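To make the idea concrete, here is a minimal sketch of one such heuristic: flagging clusters of supposedly independent providers who share the same payee account and recycled documentation. The records and fingerprinting scheme are invented for illustration; a real system would work over messy bank and invoice data, not clean tuples.

```python
from collections import defaultdict

# Hypothetical invoice records: (provider, payee_account, amount, doc_hash).
# doc_hash stands in for a fingerprint of the supporting documentation.
invoices = [
    ("Provider A", "ACCT-1", 48_000, "doc-x"),
    ("Provider B", "ACCT-1", 49_500, "doc-x"),  # same account, recycled docs
    ("Provider C", "ACCT-2", 12_000, "doc-y"),
    ("Provider D", "ACCT-1", 47_900, "doc-x"),  # same cluster again
]

def flag_clusters(invoices, min_size=2):
    """Group invoices by (payee account, documentation fingerprint) and
    flag any cluster where multiple 'independent' providers share both."""
    clusters = defaultdict(set)
    for provider, account, amount, doc in invoices:
        clusters[(account, doc)].add(provider)
    return {key: sorted(providers)
            for key, providers in clusters.items()
            if len(providers) >= min_size}

print(flag_clusters(invoices))
# → {('ACCT-1', 'doc-x'): ['Provider A', 'Provider B', 'Provider D']}
```

Even a crude rule like this turns an unreviewable pile of paperwork into a short list of clusters worth a human auditor's time, which is exactly the "raise the cost of fraud" goal.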

3. Owning The Capital Instead Of Just Taxing It

If AI turns a handful of firms into productivity monsters, governments don’t have to stand outside with only a tax bucket. They can stand inside the cap table.

We’re already drifting in that direction. Under the Trump administration, the U.S. government has taken more explicit equity-like positions or structured support in strategically important companies—semiconductors, critical minerals, energy transition. The best-known example is Intel (INTC).

What This Means For Workers And Investors

Citrini and Shah’s scenario is not destiny. It’s one possible path. But it does highlight some practical takeaways.

1. Debt Becomes More Dangerous In An AI/Deflation Scenario

If AI really delivers a flood of cheap goods and services while hammering wage growth for large swaths of workers, the bias is toward disinflation or outright deflation in many sectors.

In that world:

  • Fixed nominal debts get heavier in real terms

  • Wage cuts or job loss + fixed mortgage/student loan payments is a bad combo

If you think an “intelligence crisis” world is even a tail-risk, getting your balance sheet tighter—less leverage, more flexibility—is a rational move.
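The "fixed debts get heavier" point is just compounding arithmetic. The sketch below uses invented numbers to show how a fixed nominal payment eats a growing share of a shrinking paycheck; the figures are illustrative, not a forecast.

```python
def real_debt_burden(nominal_payment, income, wage_growth, years):
    """Share of income consumed by a fixed nominal payment after
    `years` of compounded wage growth (negative = wage deflation)."""
    future_income = income * (1 + wage_growth) ** years
    return nominal_payment / future_income

# A $2,000/month mortgage on $6,000/month income is a third of pay today...
today = real_debt_burden(2000, 6000, 0.0, 0)        # ≈ 0.33

# ...but after five years of 3% annual wage deflation, the same fixed
# payment claims nearly 39% of income.
squeezed = real_debt_burden(2000, 6000, -0.03, 5)   # ≈ 0.39
print(round(today, 2), round(squeezed, 2))
```

The mechanism runs in reverse of the inflation era, when rising nominal wages quietly shrank old mortgages; deflation does the opposite, which is why leverage is the first thing to cut if you assign this scenario any real probability.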

2. Owning Claims On AI-Resilient Cash Flows

The other side of that coin: if the future is capital-heavy and labor-light, you want to own the stuff that captures the cash flows.

Some of that will be obvious AI infrastructure: chips, power, datacenters.

But some of it will be boring, heavy, and old-economy:

  • Energy

  • Power infrastructure

  • Certain kinds of mining and materials

  • Healthcare and logistics firms whose demand isn’t primarily driven by white-collar expense accounts

On the day the Substack post “sank the market,” a lot of software and white-collar leverage names were in free fall. Meanwhile, one of our own trades—in a tungsten miner whose product is used in everything from chips to industrial tooling—finished the day up about 5%.

Why The Post Hit A Nerve

The Citrini/Shah note hit a nerve because it forced investors to confront something they’ve half-suspected for a year now:

What if we’re right about AI’s capabilities—and that’s actually bearish for a lot of human capital?

Our view at Portfolio Armor:

  • The disruption to white-collar work is real and likely understated.

  • The net economic impact of AI can still be positive if we get the distribution and policy side roughly right.

  • For investors, this argues for less leverage, more exposure to AI-resilient or AI-enabled cash flows, and a willingness to own “heavy assets, low obsolescence” alongside the shiny stuff.

The market just had its first “Substack panic.” It won’t be the last. But AI doesn’t get to write the whole script from here. Policy makers, voters, and investors have a say in what the 2028 timeline actually looks like.

And if we really are headed into an era where GPU clusters do the bulk of white-collar work, one practical implication is simple:

You’ll want fewer liabilities, more assets, and at least a few positions in things AI can’t replace.

We have trades on two more of those positions (sourced from our market-beating Top Names) teed up for later today. If you'd like a heads-up when we place them, you can subscribe to the Portfolio Armor Substack below.

Late Morning Update

Today's trade alert has gone out. 

 

Contributor posts published on Zero Hedge do not necessarily represent the views and opinions of Zero Hedge, and are not selected, edited or screened by Zero Hedge editors.