The guy whose hedge fund returned 47% in H1 2025 just dropped one of the most important AI papers I've read.
Leopold Aschenbrenner's Situational Awareness LP returned 47% net of fees in H1 2025. The S&P 500 returned 6% over the same period. He bet his entire net worth on AI infrastructure and outperformed the index roughly eightfold.
When someone with that track record publishes a formal economics paper on existential risk, I read it.
The paper mathematically inverts the core assumption driving AI regulation: that slowing down reduces existential risk.
He and coauthor Philip Trammell of Oxford show the opposite can be true.
The setup is elegant.
If any dangerous technology already exists, stagnation doesn't eliminate risk. It guarantees eventual catastrophe. You're stuck running the same gauntlet forever. Nuclear weapons don't disappear. Bioweapons don't disappear. Current AI systems don't disappear. Every year you remain in a dangerous state, you roll the dice again.
Enough rolls and you lose.
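To make the dice-roll arithmetic concrete, here's a minimal sketch with illustrative numbers of my own (the 0.2% annual hazard is an assumption, not a figure from the paper):

```python
# Illustration of the repeated-dice-roll point with made-up numbers:
# under a constant annual probability of catastrophe p, the chance of
# surviving n years is (1 - p)**n, which decays toward zero as n grows.

def survival_probability(annual_hazard: float, years: int) -> float:
    """Probability of avoiding catastrophe for `years` years at a constant hazard."""
    return (1.0 - annual_hazard) ** years

for years in (10, 100, 500, 1000):
    p_survive = survival_probability(0.002, years)  # 0.2%/yr, purely illustrative
    print(f"{years:>4} years: {p_survive:.2f} chance of no catastrophe")
```

Stay in that state long enough and the product goes to zero, no matter how small the per-year hazard.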
They split existential risk into two components the policy debate has been conflating. "State risk" is the ongoing hazard from technologies that already exist. "Transition risk" is the danger from developing new technologies. The experiments. The scaling runs. The novel deployments.
Unless transition risk scales super-linearly with speed, faster growth is always weakly safer. You endure less cumulative state risk by escaping dangerous states more quickly.
The integral under the hazard curve shrinks.
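In stylized notation of my own (a sketch of the argument, not the paper's exact model):

```latex
% Stylized sketch, my notation rather than the paper's.
% \delta(t): hazard rate while stuck in the dangerous state.
% T(g): time needed to escape that state at growth rate g.
% R_{trans}(g): risk incurred by the transition itself.
\[
  \Pr[\text{catastrophe}]
  \;\approx\;
  \underbrace{\int_0^{T(g)} \delta(t)\,dt}_{\text{state risk}}
  \;+\;
  \underbrace{R_{\mathrm{trans}}(g)}_{\text{transition risk}}
\]
% With a roughly constant hazard and T(g) \propto 1/g, the state-risk term
% scales like \delta / g: it shrinks as growth accelerates. Faster growth is
% therefore weakly safer unless R_{trans}(g) rises super-linearly in g.
```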
Kuznets-curve dynamics strengthen the case. As societies get richer, safety becomes a luxury good. The marginal utility of consumption falls while the value of civilization rises. Optimal policy shifts toward more safety spending. Faster growth accelerates this dynamic.
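One way to see the luxury-good mechanism, sketched under a standard constant-relative-risk-aversion assumption (an illustration, not the paper's exact specification):

```latex
% Illustrative sketch under CRRA utility (\gamma > 1); not the paper's exact setup.
\[
  u(c) = \frac{c^{1-\gamma}}{1-\gamma},
  \qquad
  u'(c) = c^{-\gamma} \;\longrightarrow\; 0 \ \text{as } c \to \infty .
\]
% Extra consumption becomes nearly worthless as society gets richer, so the
% opportunity cost of spending on safety collapses while the value of keeping
% civilization going does not. The optimal share of output devoted to reducing
% the hazard rate therefore rises with wealth.
```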
There's a second-order effect most people miss. When the future is more valuable because growth will be faster, it becomes worth sacrificing more today to protect it. Anticipated acceleration motivates stricter current policy.
The paper acknowledges limits. If policy frictions are severe enough, speed becomes genuinely risky. If transition risk compounds super-linearly with deployment velocity, slower wins on some margins.
These are empirical questions.
But the burden of proof shifts. Anyone advocating slowdown needs to demonstrate that transition risk dominates state risk. That we're not already in a "time of perils" where the safest path is pushing through as quickly as possible.
The real insight is structural. Permanent deceleration locks you into whatever hazard rate you currently face. If that rate is positive, survival probability goes to zero. Only acceleration or surgical regulation can minimize cumulative risk.
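The same point in continuous time, in notation of my own (illustrative, not lifted from the paper):

```latex
% Illustrative continuous-time version, my notation.
% Survival probability under a hazard path \delta(s):
\[
  S(t) = \exp\!\Big(-\!\int_0^t \delta(s)\,ds\Big).
\]
% A permanently constant hazard \bar\delta > 0 (permanent deceleration) gives
%   S(t) = e^{-\bar\delta t} \to 0,
% whereas a hazard driven down fast enough (by escaping the dangerous state or
% by targeted regulation) that \int_0^\infty \delta(s)\,ds converges leaves
% long-run survival probability bounded above zero.
```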
The pause advocates have it backwards. Slowing down extends your exposure to current dangers.
Speed is the escape route.
Everyone in AI policy should read this paper.