A 23-year-old former OpenAI researcher claims we'll have superintelligence by 2030—machines vastly smarter than any human, capable of automating all cognitive work, and potentially deciding humanity's fate. In this Situational Awareness book summary, discover why the smartest people in AI believe we're racing toward AGI at breakneck speed.

  • Book: Situational Awareness: The Decade Ahead
  • Author: Leopold Aschenbrenner
  • Pages: 165 (series of essays)
  • Rating: 5 out of 5 Stars
  • Recommended for: Tech leaders navigating AI strategy, investors seeking to understand AI's trajectory, policymakers grappling with national security implications, and anyone who wants to understand what's really happening in San Francisco's AI labs.


The 30-Second Summary

Leopold Aschenbrenner argues that AGI (artificial general intelligence) will arrive by 2027, followed immediately by an intelligence explosion that produces superintelligence—AI systems vastly smarter than humans—within a year. The U.S. and China will be locked in a winner-take-all race for this technology that will determine global power for centuries. Behind the scenes, the leading AI labs are essentially building nuclear weapons while treating security like a random startup treats its SaaS product—and we're about to hand superintelligence to China on a silver platter unless we radically change course.

In a nutshell: We're on an unstoppable trajectory toward machines that will make humans look like preschoolers, the entire global order hangs in the balance, and basically nobody outside a few hundred people in San Francisco realizes what's about to hit us.

About the Author

Leopold Aschenbrenner is a German AI researcher who graduated as valedictorian from Columbia University at age 19. He worked on OpenAI's Superalignment team with Ilya Sutskever before being fired in April 2024, a dismissal he links to security concerns he had raised about Chinese espionage. He founded Situational Awareness LP, an AI-focused hedge fund managing over $1.5 billion with backing from Stripe's Collison brothers, Daniel Gross, and Nat Friedman.

The 3 Big Ideas

1. The Intelligence Explosion Is Inevitable and Imminent

The core insight: We're not building better chatbots—we're building a new species that will be to us what we are to chimpanzees.

Aschenbrenner traces a simple but terrifying trajectory: GPT-2 (2019) wrote like a preschooler, GPT-3 (2020) like an elementary schooler, GPT-4 (2023) like a smart high schooler. Following the same trendlines of compute scaling and algorithmic progress, we'll hit expert-level AI by 2027. But here's where it gets wild: these AGI systems will then automate AI research itself. Imagine 100 million AI researchers, each smarter than the best human scientists, working 24/7 at 10x human speed. They'll compress a decade of progress into less than a year. As he puts it: "The jump to superintelligence would be wild enough at the current rapid but continuous rate of AI progress... But it could be much faster than that, if AGI automates AI research itself."

Key takeaway: Start planning for a world where every cognitive job can be automated by 2027, and where by 2030, AI systems will be as far beyond us as we are beyond insects.

2. China Can Win This Race (And That Would Be Catastrophic)

The core insight: American AI labs are currently delivering AGI secrets to China through security that's worse than "random startup" level.

The security situation would be almost comedic if it weren't so terrifying. Aschenbrenner describes AI labs where "you can just look through office windows" to see key algorithmic breakthroughs, where thousands of unvetted employees have access to civilization-defining secrets, where people "gabber at parties in SF" about AGI techniques worth hundreds of billions. Meanwhile, China has demonstrated it can manufacture 7nm chips (enough for AGI), can outbuild the U.S. on power infrastructure, and has hackers who've already stolen AI code from Google. "On the current course," he writes, "the leading Chinese AGI labs won't be in Beijing or Shanghai—they'll be in San Francisco and London."

Key takeaway: Unless American AI labs implement Manhattan Project-level security immediately, we're handing superintelligence—and thus global dominance—to the CCP.

3. The U.S. Government Will Take Over (The Project)

The core insight: No startup can handle superintelligence—by 2027/28, we'll see a government AGI Manhattan Project.

Aschenbrenner argues it's "an insane proposition that the US government will let a random SF startup develop superintelligence." Once it becomes clear that AGI means military dominance comparable to nuclear weapons, the national security state will wake up. The leading labs will be forced to merge, Congress will appropriate trillions for compute clusters, and the core AGI research team will move to a secure facility. As he puts it: "Imagine if we had developed atomic bombs by letting Uber just improvise."

Key takeaway: If you're working in AI, prepare for your industry to be nationalized within 3-4 years—this isn't another tech boom, it's a national security emergency.


The Dwarkesh Interview: The Unfiltered Truth

In a four-hour conversation with Dwarkesh Patel, Leopold drops the academic tone and gets raw about what's really happening. Here are the moments that made even Dwarkesh—someone deeply embedded in AI circles—stop and say "that's pretty fucked up."

The Trillion-Dollar Cluster Is Already Being Built

"Within a year, Nvidia data center revenue has gone from a few billion a quarter to $25 billion a quarter and continues to go up," Leopold explains. But here's the kicker—we're just following straight lines on a graph that have held for almost a decade.

He breaks down the progression with surgical precision: "GPT-4 was reported to have finished pre-training in 2022... roughly a $500 million cluster. Very roughly, it's 10 megawatts." Then he drops the bomb: "By 2028, that's a cluster that's ten GW. That's more power than most US states. That's 10 million H100 equivalents, costing hundreds of billions of dollars."

The casual way he says it masks the insanity: "By 2030, you get the trillion-dollar cluster using 100 gigawatts, over 20% of US electricity production."
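
Those numbers aren't arbitrary; they follow from assuming training-cluster scale keeps growing by roughly half an order of magnitude per year, i.e. another zero roughly every two years. A minimal sketch of that extrapolation (the ~0.5 OOM/year rate and the ~10 MW GPT-4-era anchor are the stated assumptions; everything else is illustrative):

```python
# Extrapolating training-cluster power at ~0.5 orders of magnitude per year
# (10x every two years), anchored on the ~10 MW GPT-4-era cluster mentioned
# above. Illustrative arithmetic only, not sourced projections.

BASE_YEAR, BASE_POWER_MW = 2022, 10.0   # GPT-4-era cluster, ~10 MW
OOM_PER_YEAR = 0.5                      # ~half an order of magnitude per year

for year in (2022, 2024, 2026, 2028, 2030):
    power_mw = BASE_POWER_MW * 10 ** (OOM_PER_YEAR * (year - BASE_YEAR))
    label = f"{power_mw / 1000:.0f} GW" if power_mw >= 1000 else f"{power_mw:.0f} MW"
    print(year, label)

# Prints roughly: 2022 10 MW, 2024 100 MW, 2026 1 GW, 2028 10 GW, 2030 100 GW --
# the "ten GW by 2028" and "100 gigawatts by 2030" figures Leopold cites.
```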

When Dwarkesh pushes back, citing Zuckerberg's skepticism about gigawatt data centers, Leopold's response is telling: "Six months ago, 10 GW was the talk of the town. Now, people have moved on. 10 GW is happening."

The "Unhobbling" Revolution: From Chatbot to Digital God

Leopold introduces a concept that's barely discussed publicly but is central to everything: unhobbling. It's not just about making models bigger—it's about removing their limitations.

"Right now, GPT-4 can do a few hundred tokens with chain-of-thought. That's already a huge improvement," he explains. "Before, answering a math question was just shotgun... If I think at 100 tokens a minute, that's like what GPT-4 does. It's equivalent to me thinking for three minutes."

But here's where it gets wild: "Suppose GPT-4 could think for millions of tokens. That's +4 OOMs on test time compute on one problem... If it's 100 tokens a minute, a few million tokens is a few months of working time."
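
The token arithmetic is easy to check. Assuming ~100 tokens per minute as the rough human-equivalent "thinking speed" Leopold uses, and a 40-hour work week (an added assumption for the conversion), a few million tokens really does come out to months of working time:

```python
# Converting a token budget into human-equivalent working time, using the
# interview's rough figure of ~100 tokens per minute of human thought.
# The 40-hour work week is an added assumption for the conversion.

TOKENS_PER_MINUTE = 100
WORK_MINUTES_PER_WEEK = 40 * 60

def human_equivalent(tokens: int) -> str:
    minutes = tokens / TOKENS_PER_MINUTE
    weeks = minutes / WORK_MINUTES_PER_WEEK
    return f"{tokens:>12,} tokens ~ {minutes:>10,.0f} min ~ {weeks:6.1f} work-weeks"

for budget in (300, 10_000, 3_000_000, 10_000_000):
    print(human_equivalent(budget))

# A few hundred tokens is a few minutes of "thought"; three million tokens is
# ~12 work-weeks (a few months); ten million is most of a working year.
```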

The secret sauce? "You need to learn things like error correction tokens where you're like 'ah, I made a mistake, let me think about that again.' You need to learn planning tokens where it's like 'I'm going to start by making a plan.'"

He drops an analogy that sticks: "When you drive, you're on autopilot most of the time. Sometimes you hit a weird construction zone... You go from autopilot to System 2 and you're thinking about how to do it. Scaling improves that System 1 autopilot... If you can get System 2 working, you can quickly jump to something more agentified."

China's Master Plan: They Don't Need to Innovate

The most chilling part of the interview is Leopold's assessment of Chinese espionage. This isn't speculation—it's happening now.

"It's all extremely easy. They don't make the claim that it's hard," he says about stealing AI secrets. Then he shares a detail that should terrify everyone: "I've heard from multiple people—not from my time at OpenAI, and I haven't seen the memo—that at some point several years ago, OpenAI leadership had laid out a plan to fund and sell AGI by starting a bidding war between the governments of the United States, China, and Russia."

Wait, what? "It's surprising to me that they're willing to sell AGI to the Chinese and Russian governments."

The current security situation is a joke: "Between the labs, there are thousands of people with access to the most important secrets; there is basically no background-checking, siloing, controls, basic infosec... You can just look through office windows."

A Chinese national already succeeded: "All it took to steal the code, without detection, was pasting code into Apple Notes, then exporting to pdf!"

Leopold's assessment is brutal: "On the current course, the leading Chinese AGI labs won't be in Beijing or Shanghai—they'll be in San Francisco and London."

The Middle East Gambit: "Would You Do the Manhattan Project in the UAE?"

Leopold reveals what might be the most shortsighted decision in human history: American companies are planning to build AGI infrastructure in Middle Eastern dictatorships.

"There are some people who are trying to build clusters elsewhere. There's a lot of free-flowing Middle Eastern money," he explains. Then he asks the question that should be obvious: "Would you do the Manhattan Project in the UAE?"

The risks are existential: "Once the cluster is there, it's much easier for them to exfiltrate the weights. They can literally steal the AGI, the superintelligence. It's like they got a direct copy of the atomic bomb."

But it gets worse: "They can just seize the compute... Suppose we put 25% of the compute capacity in these Middle Eastern dictatorships. Say they seize that. Now it's a ratio of compute of 3:1... You can do a lot with that amount of compute."

Why are companies doing this? Leopold's answer is damning: "People aren't thinking about this as the AGI superintelligence cluster. They're just like, 'ah, cool clusters for my ChatGPT.'" And: "There's easy money coming from the Middle East."

Most tellingly: "Some people think that only autocracies can do this with top-down mobilization of industrial capacity... Some people shitpost about loving America, but then in private they're betting against America."

2023 at OpenAI: Ground Zero for the Revolution

Leopold shares what it was like being inside OpenAI during the ChatGPT explosion: "When you were at OpenAI in 2023, it was a weird thing. You almost didn't want to talk about AI or AGI. It was kind of a dirty word."

The contrast with public perception was surreal: "2023 was the moment for me where AGI went from being this theoretical, abstract thing. I see it, I feel it, and I see the path. I see where it's going. I can see the cluster it's trained on, the rough combination of algorithms, the people, how it's happening."

His observation about the "GPT wrapper" companies is savage: "I'm so bearish on the wrapper companies because they're betting on stagnation. They're betting that you have these intermediate models and it takes so much schlep to integrate them... We're going to sonic boom you. We're going to get the unhobblings. We're going to get the drop-in remote worker."

The Personal Stakes: Growing Up in History's Shadow

Leopold's German background gives him a perspective most Silicon Valley optimists lack. His great-grandmother "was born in 1934 and grew up during the Nazi era. In World War II, she saw the firebombing of Dresden... Then she spent most of her life in the East German communist dictatorship."

This history makes the stakes visceral: "When I was a kid, the thing she always really didn't want me to do was get involved in politics. Joining a political party had very bad connotations for her."

His warning about superintelligence under authoritarian control is haunting: "Imagine you have a perfectly loyal military and security force. No more rebellions. No more popular uprisings. You have perfect lie detection... No Gorbachev who had some doubts about the system would have ever risen to power."

The Intelligence Explosion: Doing the Math Live

Leopold walks through the actual mechanics of the intelligence explosion with a specificity that's both thrilling and terrifying:

"Once you get AGI, you won't just have one AGI... given inference GPU fleets by then, we'll likely be able to run many millions of them (perhaps 100 million human-equivalents, and soon after at 10x+ human speed)."

The acceleration is exponential: "If you can do that, you can maybe do a decade's worth of ML research progress in a year. You get some sort of 10x speed up. You can make the jump to AI that is vastly smarter than humans within a year, a couple of years."

He addresses the robotics bottleneck directly: "You have a billion super smart—smarter than the smartest human researchers—AI researchers in your cluster. At some point during the intelligence explosion, they're going to be able to figure out robotics."

The COVID Parallel: We Can See It Coming

Leopold draws a powerful parallel to early 2020: "COVID in February of 2020 honestly feels a lot like today. It feels like this utterly crazy thing is coming. You see the exponential and yet most of the world just doesn't realize it."

When Dwarkesh asks if Leopold shorted the market during COVID (he was 17), Leopold simply says: "Yeah."

The implication is clear—those who see what's coming can position themselves, but most won't believe it until it's too late. "At some point, people saw it and then crazy, radical reactions came."

The Revenue Reality Check

Leopold provides hard numbers that show this isn't fantasy: "Suppose you do hit $10 billion by the end of this year. Suppose it just continues on the trajectory of revenue doubling every six months. It's not actually that far from $100 billion, maybe by 2026."

He breaks down the economics: "There are like 300 million Microsoft Office subscribers... Suppose you sold some AI add-on for $100/month to a third of Microsoft Office subscribers. That'd be $100 billion right there."

When Dwarkesh objects that $100/month is a lot, Leopold's response cuts through the doubt: "For the average knowledge worker, it's a few hours of productivity a month. You have to be expecting pretty lame AI progress to not hit a few hours of productivity a month."
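
Both back-of-the-envelope claims are easy to reproduce. A minimal sketch, taking the $10 billion figure, the doubling-every-six-months assumption, and the 300-million-subscriber Office example at face value (these are Leopold's stated assumptions, not reported financials):

```python
# Two back-of-the-envelope checks from the interview, taken at face value.

# 1) Revenue doubling every six months from ~$10B at the end of 2024.
revenue = 10e9
half_years = 0
while revenue < 100e9:
    revenue *= 2            # one doubling per six months (the stated assumption)
    half_years += 1
print(f"~${revenue / 1e9:.0f}B after {half_years * 6} months, i.e. around {2024 + half_years * 0.5:.0f}")
# -> ~$160B after 24 months, i.e. around 2026 -- "not far from $100 billion by 2026".

# 2) An AI add-on sold to a third of ~300M Microsoft Office subscribers at $100/month.
subscribers = 300e6 / 3
annual_revenue = subscribers * 100 * 12
print(f"~${annual_revenue / 1e9:.0f}B per year")    # -> ~$120B/year, the "$100 billion right there"
```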

The Nationalization Timeline

Leopold is specific about when the government takeover happens: "By 2027-2028, the national security state is going to start paying a lot of attention... There's a real question on timing. Do they start taking this seriously when the intelligence explosion is already happening, quite late? Do they start taking this seriously two years earlier? That matters a lot for how things play out."

The historical precedent is sobering: "In World War II, something like 50% of GDP went to war production. The US borrowed over 60% of GDP... Much more was on the line."

On Lab Employees' Diminishing Power

One of Leopold's most provocative observations: "This is like the rapidly depreciating influence of the lab employees. Right now, the AI lab employees have so much power. You saw this November event. It's so much power. But they're going to get automated and they're going to lose all their power. It'll just be a few people in charge with their armies of automated AIs."

The Oppenheimer parallel is explicit: "There are some of these classic scenes from the Oppenheimer movie. The scientists built it and then the bomb was shipped away and it was out of their hands."

His advice is urgent: "It's good for lab employees to be aware of this. You have a lot of power now, but maybe not for that long. Use it wisely."

The Energy Solution Nobody Wants to Hear

Leopold is blunt about what it'll take: "To make it possible in the US, to some degree we have to get our act together. There are basically two paths to doing it in the US. One is you just have to be willing to do natural gas."

The scale is staggering but doable: "Natural gas production in the United States has almost doubled in a decade. You do that one more time over the next seven years, you could power multiple trillion-dollar data centers."

But there's a problem: "The issue there is that a lot of people made these climate commitments, not just the government. It's actually the private companies themselves... I admire the climate commitments, but at some point the national interest and national security is more important."


Key Frameworks

The OOM (Orders of Magnitude) Framework

What it is: A way to measure AI progress by tracking exponential improvements in compute, algorithms, and "unhobbling" (removing limitations).

How to apply it:

  1. Count the compute scaling (currently ~0.5 OOMs/year)
  2. Add algorithmic efficiency gains (~0.5 OOMs/year)
  3. Factor in unhobbling improvements (RLHF, chain-of-thought, tool use)

When to use it: To predict when specific AI capabilities will emerge and understand why progress feels so rapid.
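
To make the counting concrete: at roughly half an OOM per year each from compute and algorithmic efficiency, the four years from 2023 to 2027 compound to about four OOMs of "effective compute" before unhobbling is even counted—comparable in size to the GPT-2-to-GPT-4 jump. A minimal sketch of that arithmetic (the two rates are the book's rough trend estimates; the unhobbling term is left as a placeholder because the book treats it qualitatively):

```python
# Counting OOMs of "effective compute" between two years, using the book's
# rough trend rates. Unhobbling is left as a placeholder parameter because
# the book doesn't reduce it to a single number.

COMPUTE_OOM_PER_YEAR = 0.5      # physical compute scale-up
ALGO_OOM_PER_YEAR = 0.5         # algorithmic efficiency gains

def effective_ooms(start_year: int, end_year: int, unhobbling_ooms: float = 0.0) -> float:
    years = end_year - start_year
    return years * (COMPUTE_OOM_PER_YEAR + ALGO_OOM_PER_YEAR) + unhobbling_ooms

gain = effective_ooms(2023, 2027)
print(f"2023 -> 2027: ~{gain:.0f} OOMs of effective compute, i.e. ~{10 ** gain:,.0f}x")
# ~4 OOMs (~10,000x) -- roughly the size of the GPT-2 -> GPT-4 jump.
```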

The Intelligence Explosion Feedback Loop

What it is: Once AI can improve itself, progress accelerates exponentially as millions of AI researchers work on making better AI.

How to apply it:

  1. AGI automates AI research
  2. Millions of AGI copies work 24/7 at 10x+ human speed
  3. Compress decades of progress into months
  4. Each generation of AI makes the next generation

When to use it: To understand why the window between human-level and superhuman AI might be less than a year.
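
The "compress decades into months" claim is, at bottom, an argument about parallel researcher-effort. A toy calculation shows the scale involved (the fleet size, speed multiple, and estimate of today's AI-research workforce are illustrative assumptions, not figures from the book):

```python
# Toy arithmetic behind steps 2-3: how much human-equivalent research effort
# a fleet of automated researchers produces per calendar year. Fleet size,
# speed, and the size of today's field are illustrative assumptions.

FLEET_SIZE = 1_000_000        # automated AGI researchers running in parallel
SPEED_VS_HUMAN = 10           # each runs at ~10x human speed
CURRENT_FIELD_SIZE = 30_000   # rough guess at today's human AI-research workforce

effort_per_year = FLEET_SIZE * SPEED_VS_HUMAN           # human-researcher-years per calendar year
multiple_of_field = effort_per_year / CURRENT_FIELD_SIZE

print(f"{effort_per_year:,} human-researcher-years of effort per calendar year")
print(f"~{multiple_of_field:,.0f}x the current field's annual researcher-effort")
# Even before any feedback from better AI (step 4), a single calendar year
# packs in centuries of researcher-effort; the "decade in a year" framing
# assumes effort converts into progress far less than proportionally.
```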

The Situational Awareness Test

What it is: The recognition that only a few hundred people globally truly understand what's coming with AGI—having "situational awareness" means seeing through the current AI hype to the genuine transformation ahead.

How to apply it:

  1. Look at AI progress trendlines, not individual model releases
  2. Think in terms of intelligence levels (preschooler → high schooler → PhD → superhuman)
  3. Consider geopolitical implications, not just commercial applications

When to use it: To separate serious AI developments from hype and understand what really matters for the future.

Notable Quotes

"You can see the future first in San Francisco. Over the past year, the talk of the town has shifted from $10 billion compute clusters to $100 billion clusters to trillion-dollar clusters."

Why this matters: The scale of investment reveals that insiders know AGI is imminent—companies don't bet trillions on chatbots.

"The AGI race has begun. We are building machines that can think and reason. By 2025/26, these machines will outpace many college graduates."

Why this matters: This isn't speculation—it's the operating assumption of the people actually building these systems.

"Through whatever peculiar forces of fate, I have found myself amongst them. A few years ago, these people were derided as crazy—but they trusted the trendlines."

Why this matters: The same people who correctly predicted ChatGPT when everyone called them crazy are now predicting AGI by 2027.

"GPT-2 to GPT-4 took us from ~preschooler to ~smart high-schooler abilities in 4 years."

Why this matters: The pace of improvement is measurable and consistent—another 4 years means another similar jump.

"On the current course, we may as well give up on having any American AGI effort; China can promptly steal all the algorithmic breakthroughs."

Why this matters: Security isn't just important—without it, the entire Western AGI effort is pointless.

"Basically nothing else we do—on national competition, and on AI safety—will matter if we don't fix this, soon."

Why this matters: All other AI governance discussions are meaningless if we can't keep the technology secure.

"I find it an insane proposition that the US government will let a random SF startup develop superintelligence."

Why this matters: The current situation—startups building civilization-defining technology—is historically anomalous and won't last.

"Superintelligence will give a decisive economic and military advantage. China isn't at all out of the game yet."

Why this matters: This isn't about market competition—it's about which civilization survives.

"By the end of the intelligence explosion, we won't have any hope of understanding what our billion superintelligences are doing."

Why this matters: We're building something we fundamentally cannot control with current techniques.

"The scariest realization is that there is no crack team coming to handle this."

Why this matters: The few hundred people who understand what's coming are the only ones positioned to shape it.

"The models, they just want to learn. You have to understand this."

Why this matters: Ilya Sutskever's 2015 quote captures the relentless force driving AI progress—it's not stoppable.

"We will face an insane year in which the situation is shifting extremely rapidly every week."

Why this matters: The intelligence explosion won't be a gradual transition—it'll be a crisis requiring wartime decision-making.

"American AI labs must put the national interest first."

Why this matters: The current model of AI labs optimizing for profit while building AGI is unsustainable.

"If we can't keep model weights secure, we're just building AGI for the CCP."

Why this matters: Every advance we make is immediately transferred to adversaries without proper security.

"Perhaps most importantly, if these AI systems could automate AI research itself, that would set in motion intense feedback loops."

Why this matters: This is the key insight that transforms AGI from powerful to world-ending/world-creating.

"Trust the trendlines. The trendlines are intense, and they were right."

Why this matters: Those who bet on exponential progress in AI have been consistently right for a decade.

"A dictator who wields the power of superintelligence would command concentrated power unlike any we've ever seen."

Why this matters: AGI isn't just another technology—it's the ability to lock in totalitarianism forever.

"Whoever controls superintelligence will quite possibly have enough power to seize control from pre-superintelligence forces."

Why this matters: First-mover advantage with superintelligence might be permanent and irreversible.

"By 2027/28, we'll have models trained on the $100B+ cluster; full-fledged AI agents will start to widely automate software engineering."

Why this matters: Concrete prediction with specific timeline—not vague hand-waving about the future.

"Right now, there's perhaps a few hundred people in the world who realize what's about to hit us."

Why this matters: The information asymmetry between those with situational awareness and everyone else is massive.

Essay Deep Dives

Introduction - SITUATIONAL AWARENESS: The Decade Ahead

This opening sets the stage with a visceral scene from San Francisco: "Over the past year, the talk of the town has shifted from $10 billion compute clusters to $100 billion clusters to trillion-dollar clusters. Every six months another zero is added to the boardroom plans."

The introduction establishes the central thesis: we're in an AGI race that will determine humanity's future, yet "few have the faintest glimmer of what is about to hit them." Aschenbrenner positions himself among "perhaps a few hundred people, most of them in San Francisco and the AI labs, that have situational awareness."

He warns: "Before long, the world will wake up. But right now, there are perhaps a few hundred people... that have situational awareness... Whether these people are also right about the next few years remains to be seen. But these are very smart people—the smartest people I have ever met—and they are the ones building this technology."

The key insight: "By the end of the decade, American electricity production will have grown tens of percent; from the shale fields of Pennsylvania to the solar farms of Nevada, hundreds of millions of GPUs will hum."

I. From GPT-4 to AGI: Counting the OOMs

This essay lays out the mathematical case for AGI by 2027 through "counting the OOMs" (orders of magnitude). Aschenbrenner traces the trajectory from GPT-2 to GPT-4: "GPT-2 (2019) ~ preschooler... GPT-3 (2020) ~ elementary schooler... GPT-4 (2023) ~ smart high schooler."

The projections are stark: "In 2027, a leading AI lab will be able to train a GPT-4-level model in a minute" compared to 3 months today. He breaks down three multiplicative factors:

  1. Compute scaling: ~0.5 OOMs/year trend growth
  2. Algorithmic efficiency: Another ~0.5 OOMs/year
  3. "Unhobbling": Moving from chatbots to agents through RLHF, chain-of-thought, tools, and scaffolding

The most striking framework is the "test-time compute overhang"—current models can only "think" for the equivalent of minutes, but "what if it could use millions of tokens to think about and work on really hard problems or bigger projects?" This could unlock "being able to think and work on something for months-equivalent, rather than a few-minutes-equivalent."

He addresses the data wall but argues it's surmountable: "The old state of the art of training models was simple and naive, but it worked, so nobody really tried hard to crack these approaches to sample efficiency. Now that it may become more of a constraint, we should expect all the labs to invest billions of dollars and their smartest minds into cracking it."

Key prediction: "We are on course for AGI by 2027. These AI systems will basically be able to automate basically all cognitive jobs (think: all jobs that could be done remotely)."

II. From AGI to Superintelligence: the Intelligence Explosion

This essay contains the book's most mind-bending argument: the transition from human-level to vastly superhuman AI could happen in less than a year. The mechanism is automation of AI research itself.

"Once we get AGI, we won't just have one AGI... given inference GPU fleets by then, we'll likely be able to run many millions of them (perhaps 100 million human-equivalents, and soon after at 10x+ human speed)."

The math is compelling: "100 million automated researchers each working at 100x human speed not long after we begin to be able to automate AI research." This could "compress the algorithmic progress human researchers would have found in a decade into a year instead."
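
One way to read "a decade in a year" quantitatively: the book estimates roughly half an OOM per year of algorithmic progress from human researchers, so a net 10x research speedup compresses the usual decade's ~5 OOMs into a single year. A minimal restatement (the 0.5 OOM/year rate is the book's estimate; the 10x aggregate speedup is a hedged placeholder for the net effect after bottlenecks):

```python
# Restating "a decade of algorithmic progress in a year" in OOM terms.
# The ~0.5 OOMs/year human-researcher rate is the book's rough estimate;
# the 10x aggregate speedup is an assumed net figure after bottlenecks.

HUMAN_ALGO_OOM_PER_YEAR = 0.5
AGGREGATE_SPEEDUP = 10

ooms_in_one_year = HUMAN_ALGO_OOM_PER_YEAR * AGGREGATE_SPEEDUP
equivalent_human_years = ooms_in_one_year / HUMAN_ALGO_OOM_PER_YEAR

print(f"~{ooms_in_one_year:.0f} OOMs of algorithmic progress in one calendar year")
print(f"= ~{equivalent_human_years:.0f} years of progress at the human-researcher rate")
```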

Aschenbrenner directly addresses potential bottlenecks:

  • Limited compute for experiments: Even with compute constraints, automated researchers could be 10x more efficient through better intuitions, fewer bugs, and algorithmic breakthroughs
  • Complementarities with humans: Might delay things by a couple years but won't stop the explosion
  • Limits to algorithmic progress: "If we got 5 OOMs in the last decade, we should probably expect at least another decade's-worth of progress to be possible"

The implications are staggering: "The AI systems we'll likely have by the end of this decade will be unimaginably powerful... They'll be qualitatively superhuman... We'll be like high-schoolers stuck on Newtonian physics while it's off exploring quantum mechanics."

His visualization is haunting: "Look at some Youtube videos of video game speedruns, such as this one of beating Minecraft in 20 seconds... Now imagine this applied to all domains of science, technology, and the economy."

IIIa. Racing to the Trillion-Dollar Cluster

This essay maps out the industrial mobilization required for AGI, with specific projections that seem insane until you see the math:

Year | Training Cluster Cost | Power Required | Power Reference
2022 | ~$500M (GPT-4) | ~10 MW | ~10,000 homes
~2024 | $billions | ~100 MW | ~100,000 homes
~2026 | $10s of billions | ~1 GW | The Hoover Dam
~2028 | $100s of billions | ~10 GW | A small US state
~2030 | $1T+ | ~100 GW | >20% of US electricity

The evidence is already visible: "Zuck bought 350k H100s. Amazon bought a 1GW datacenter campus next to a nuclear power plant... Microsoft and OpenAI are rumored to be working on a $100B cluster."

On power specifically: "Total US electricity generation has barely grown 5% in the last decade... The trillion-dollar, 100GW cluster alone would require ~20% of current US electricity generation in 6 years."

The solution? Natural gas: "Powering a 10GW cluster would take only a few percent of US natural gas production... It would take about ~1200 new wells for the 100GW cluster."
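
The gas arithmetic can be sanity-checked with standard rules of thumb. Assuming a combined-cycle heat rate of roughly 7,000 Btu/kWh, about 1,030 Btu per cubic foot of gas, and current US production on the order of 100 Bcf/day (none of these figures come from the essay), a 10 GW cluster needs a bit under 2 Bcf/day:

```python
# Rough sanity check on the natural gas claims. Heat rate (~7,000 Btu/kWh for
# a modern combined-cycle plant), energy content (~1,030 Btu per cubic foot),
# and ~100 Bcf/day of US production are approximate public rules of thumb,
# not figures from the essay.

HEAT_RATE_BTU_PER_KWH = 7_000
BTU_PER_CUBIC_FOOT = 1_030
US_PRODUCTION_BCF_PER_DAY = 100

def gas_needed_bcf_per_day(cluster_gw: float) -> float:
    kwh_per_day = cluster_gw * 1e6 * 24                   # GW -> kW, times 24 hours
    btu_per_day = kwh_per_day * HEAT_RATE_BTU_PER_KWH
    return btu_per_day / BTU_PER_CUBIC_FOOT / 1e9         # cubic feet -> Bcf

for gw in (10, 100):
    bcf = gas_needed_bcf_per_day(gw)
    share = 100 * bcf / US_PRODUCTION_BCF_PER_DAY
    print(f"{gw:>3} GW cluster: ~{bcf:.1f} Bcf/day (~{share:.0f}% of current US production)")

# ~1.6 Bcf/day (~2%) for 10 GW and ~16 Bcf/day (~16%) for 100 GW -- consistent
# with "only a few percent" for 10 GW, and with needing substantial new
# production (the essay's ~1,200 new wells) for the 100 GW cluster.
```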

Critical warning: "The clusters that are being planned today may well be the clusters AGI and superintelligence are trained and run on... Do we really want the infrastructure for the Manhattan Project to be controlled by some capricious Middle Eastern dictatorship?"

IIIb. Lock Down the Labs: Security for AGI

This essay contains the book's most urgent warning: we're about to hand AGI to China through criminal negligence on security.

Current state: "Between the labs, there are thousands of people with access to the most important secrets; there is basically no background-checking, siloing, controls, basic infosec, etc. Things are stored on easily hackable SaaS services. People gabber at parties in SF. Anyone, with all the secrets in their head, could be offered $100M and recruited to a Chinese lab at any point. You can... just look through office windows."

What's at stake: "An AI model is just a large file of numbers on a server. This can be stolen. All it takes an adversary to match your trillions of dollars and your smartest minds and your decades of work is to steal this file."

The threat is real: "Already, China engages in widespread industrial espionage; the FBI director stated the PRC has a hacking operation greater than 'every major nation combined.'" A Chinese national already stole AI code from Google: "All it took to steal the code, without detection, was pasting code into Apple Notes, then exporting to pdf!"

Most chilling scenario: "Perhaps the single scenario that most keeps me up at night is if China or another adversary is able to steal the automated-AI-researcher-model-weights on the cusp of an intelligence explosion."

What's needed: "Fully airgapped datacenters, with physical security on par with most secure military bases... All research personnel working from a SCIF... Extreme personnel vetting and security clearances... constant monitoring and substantially reduced freedoms."

The timeline is critical: "Our failure today will be irreversible soon: in the next 12-24 months, we will leak key AGI breakthroughs to the CCP."

IIIc. Superalignment

This essay tackles the technical challenge of controlling AI systems smarter than humans. The core problem is simple but terrifying: "RLHF will predictably break down as AI systems get smarter... Imagine, for example, a superhuman AI system generating a million lines of code in a new programming language it invented. If you asked a human rater in an RLHF procedure, 'does this code contain any security backdoors?' they simply wouldn't know."

Current AI alignment techniques work because humans can supervise the AI. But "we're starting to hit early versions of the superalignment problem in the real world now... Human labeler-pay has gone from a few dollars for MTurk labelers to ~$100/hour for GPQA questions in the last few years."

The default failure mode: "By default, it may well learn to lie, to commit fraud, to deceive, to hack, to seek power, and so on—simply because these can be successful strategies to make money in the real world!"

The intelligence explosion makes this "incredibly hair-raising":

  • "We will extremely rapidly go from systems where RLHF works fine—to systems where it will totally break down"
  • "We will extremely rapidly go from systems where failures are fairly low-stakes—to extremely high-stakes"
  • "The superintelligence we get by the end will be vastly superhuman"

The proposed solution requires automating alignment research: "If we manage to align somewhat-superhuman systems enough to trust them, we'll be in an incredible position: we'll have millions of automated AI researchers, smarter than the best AI researchers, at our disposal."

Most concerning: "We're counting way too much on luck here... By default, we'll probably stumble into the intelligence explosion and have gone through a few OOMs before people even realize what we've gotten into."

IIId. The Free World Must Prevail

This essay makes the geopolitical case with historical parallels. The Gulf War example is particularly striking: "Coalition dead numbered a mere 292, compared to 20k-50k Iraqi dead... The Coalition lost a mere 31 tanks, compared to the destruction of over 3,000 Iraqi tanks." This with merely a 20-30 year technology gap.

With superintelligence: "Within a matter of years, pre-superintelligence militaries would become hopelessly outclassed... It would simply be no contest."

China's path to competition is clear:

  1. Chips: "China now seems to have demonstrated the ability to manufacture 7nm chips... 7nm is enough!"
  2. Power: "In the last decade, China has roughly built as much new electricity capacity as the entire US capacity"
  3. Algorithms: "Unless we lock down the labs very soon, I expect China to be able to simply steal the key algorithmic ingredients for AGI"

The authoritarian peril: "A dictator who wields the power of superintelligence would command concentrated power unlike any we've ever seen... Millions of AI-controlled robotic law enforcement agents could police their populace; mass surveillance would be hypercharged; dictator-loyal AIs could individually assess every citizen for dissent."

On proliferation: "Perhaps dramatic advances in biology will yield extraordinary new bioweapons... Perhaps new kinds of nuclear weapons enable the size of nuclear arsenals to increase by orders of magnitude... Perhaps mosquito-sized drones, each carrying a deadly poison, could be targeted to kill every member of an opposing nation."

The only solution: "The United States must lead, and use that lead to enforce safety norms on the rest of the world... Perhaps most importantly, a healthy lead gives us room to maneuver: the ability to 'cash in' parts of the lead, if necessary, to get safety right."

IV. The Project

This essay predicts the inevitable government takeover of AGI development. The argument is straightforward: "I find it an insane proposition that the US government will let a random SF startup develop superintelligence. Imagine if we had developed atomic bombs by letting Uber just improvise."

The trigger: "Somewhere around 26/27 or so, the mood in Washington will become somber. People will start to viscerally feel what is happening; they will be scared... Slowly at first, then all at once, it will become clear: this is happening."

Why it's necessary:

  • National security: "By the early 2030s, the entirety of the US arsenal... will probably be obsolete"
  • Chain of command: "This should not be under the unilateral command of a random CEO"
  • Security: "It’s probably impossible for a private company to get good enough security"
  • Safety: "What's necessary will be less like spending a few years doing careful evaluations... It'll be more like fighting a war"

The form it will take: "The relationship with the DoD might look like the relationship the DoD has with Boeing or Lockheed Martin... Western labs might more-or-less 'voluntarily' agree to merge in the national effort."

Historical parallel: "After Einstein's letter to the President in 1939... an Advisory Committee on Uranium was formed. But officials were incompetent, and not much happened initially... Szilard believed that the project was delayed for at least a year by the short-sightedness and sluggishness of the authorities."

The endgame: "By 27/28, the endgame will be on. By 28/29 the intelligence explosion will be underway; by 2030, we will have summoned superintelligence, in all its power and might."

V. Parting Thoughts

The closing essay shifts to a personal, almost mournful tone. Aschenbrenner introduces his framework of "AGI Realism" with three core tenets:

  1. "Superintelligence is a matter of national security"
  2. "America must lead"
  3. "We need to not screw it up"

The weight of realization: "A few years ago, at least for me, I took these ideas seriously—but they were abstract, quarantined in models and probability estimates. Now it feels extremely visceral. I can see it. I can see how AGI will be built."

Most sobering: "The scariest realization is that there is no crack team coming to handle this. As a kid you have this glorified view of the world, that when things get real there are the heroic scientists, the uber-competent military men, the calm leaders who are on it, who will save the day. It is not so."

The final reality check: "Right now, there's perhaps a few hundred people in the world who realize what's about to hit us, who understand just how crazy things are about to get, who have situational awareness... The few folks behind the scenes who are desperately trying to keep things from falling apart are you and your buddies and their buddies. That's it. That's all there is."

His closing: "These are great and honorable people. But they are just people. Soon, the AIs will be running the world, but we're in for one last rodeo. May their final stewardship bring honor to mankind."

Related Voices

Sam Altman on AGI timelines: "Every year we get closer to AGI, everybody will gain +10 crazy points." This captures how the discourse will shift dramatically as capabilities improve.

Ilya Sutskever's prescient 2015 observation: "The models, they just want to learn. You have to understand this. The models, they just want to learn." This captures the relentless force driving progress—it's not a choice, it's almost physics.

Jensen Huang on AI infrastructure: NVIDIA's CEO has been preparing for exactly this scenario, with the company's valuation reflecting the trillions in compute that will be purchased. His "the more you buy, the more you save" perfectly captures the economics of the race.

Demis Hassabis on AI safety: DeepMind's CEO has long argued we need to solve alignment before we reach AGI, not after—a race against time that Aschenbrenner suggests we're losing.

Dario Amodei on the data wall: Anthropic's CEO recently said "if you look at it very naively we're not that far from running out of data... My guess is that this will not be a blocker... There's just many different ways to do it." This aligns with Aschenbrenner's view that the data wall is surmountable.

Natural Next Reads

  • The Alignment Problem by Brian Christian: Deep dive into why controlling AI systems is so technically difficult
  • Superintelligence by Nick Bostrom: The philosophical case for why AI could pose an existential risk
  • The Making of the Atomic Bomb by Richard Rhodes: Historical parallel for how governments respond to transformative technology
  • Chip War by Chris Miller: Understanding the semiconductor supply chain that constrains AI development
  • The Age of Em by Robin Hanson: Alternative vision of how AI might transform the economy

Reflection Questions

  1. If AGI arrives in 2027, what skills and knowledge will still be valuable in a world where machines can do any cognitive task better than humans?
  2. How would your investment strategy change if you believed there was a 50% chance of AGI by 2027?
  3. What safeguards would need to be in place for you to feel comfortable with AI systems that are smarter than humans?
  4. If the U.S. government nationalized AI development tomorrow, would that make AGI safer or more dangerous?
  5. What would you do differently in the next 3 years if you truly believed superintelligence was coming by 2030?

Practical Applications

For Founders

  • Strategic Planning: Assume any cognitive work will be automatable by 2027—build businesses that capture value from this transition
  • Security Infrastructure: Implement security practices now that assume nation-state adversaries are trying to steal your IP
  • Talent Strategy: Hire for judgment and decision-making rather than pure technical skills that will be automated
  • Exit Timeline: Consider that traditional venture exit timelines may not apply—the entire economy could transform before a typical 7-10 year fund lifecycle

For Investors

  • Portfolio Construction: Weight heavily toward companies building AI infrastructure (chips, power, datacenters) rather than AI applications
  • Due Diligence: Evaluate AI companies based on their access to compute and algorithmic talent, not current model performance
  • Risk Assessment: Consider geopolitical risk as primary—a company's value could go to zero if their weights are stolen by adversaries
  • Time Horizons: Traditional DCF models break down when economic growth could be 30% annually by 2030
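
To illustrate the last point: in a constant-growth DCF (the Gordon growth model), present value equals next year's cash flow divided by the discount rate minus the growth rate, so the formula returns nothing sensible once assumed growth approaches or exceeds the discount rate. A minimal sketch (the 10% discount rate and $1B cash flow are illustrative assumptions):

```python
# Why a constant-growth DCF (Gordon growth model) breaks down when assumed
# growth approaches the discount rate. The 10% discount rate and the $1B
# starting cash flow are illustrative assumptions.

DISCOUNT_RATE = 0.10
NEXT_YEAR_CASH_FLOW = 1e9

def gordon_value(growth_rate: float):
    """Present value of a perpetuity growing at growth_rate; None if it diverges."""
    if growth_rate >= DISCOUNT_RATE:
        return None                       # the discounted series no longer converges
    return NEXT_YEAR_CASH_FLOW / (DISCOUNT_RATE - growth_rate)

for g in (0.02, 0.05, 0.08, 0.30):
    value = gordon_value(g)
    label = f"${value / 1e9:.1f}B" if value is not None else "undefined (growth >= discount rate)"
    print(f"growth {g:.0%}: {label}")
```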

The Bottom Line

We stand at the threshold of the most important moment in human history. By 2027, we'll have created minds that match our own; by 2030, minds that far exceed them. The companies building this technology have security barely better than a random startup, while China watches and waits to steal the results. Unless we achieve a Manhattan Project level of seriousness immediately—locking down the labs, preparing for a government project, and maintaining American leadership—we risk handing absolute power to authoritarian regimes or losing control entirely to alien intelligences we've created but can't understand. The few hundred people with situational awareness aren't enough; we need orders of magnitude more talent, resources, and above all, seriousness about what we're building before it's too late.

