7 Comments
Nick Maini

Superb writing as always Jack. The AI parallel with nuclear and related thinkers like Russell seems very fertile ground, I would be fascinated to read further reflections comparing those two existential threats.

The term "millenarianism" might underplay the broadly rational analysis that lay behind the concerns of thinkers like Bertrand Russell. That said, when it comes to existential risk and seemingly exponential trends, logical fallacies are more likely to crop up.

The Scott Alexander passage on epistemic learned helplessness is marvellous and one I reflect on often. In fact, I quoted it in my first essay here: https://nickmaini.substack.com/p/on-collective-knowledge-and-complex

Paul Triolo

Excellent piece, Jack, and I agree with most points on AI 2027, in particular the narrative that leaders will not be that cavalier about the strategic advantages of something like DSA. However, I worry that the potential for misunderstanding on both sides is the real risk, even if the gains in the AI 2027 narrative are much too aggressive for the reasons you point out. As I note here (https://pstaidecrypted.substack.com/p/the-core-case-for-export-controls), the idea that China would go right to an AGI-fueled cyber operation against US critical infrastructure, as Ben Buchanan suggests on the Ezra Klein podcast, seems really far-fetched and indeed cavalier, but maybe that would not be the case if the US gets to AGI first, based on conversations around the issue. I'm addressing these issues in an upcoming essay and will reference this really good post....

Daniel Kokotajlo

Author here. Thanks for this critique! I'm going to read it now and respond point by point as I read.

...OK, never mind, I blew past the length limit for comments. Hmm. Here's a Google Doc link containing my comment: https://docs.google.com/document/d/1RoK-RpXTk9UX1U99yQaN3OknJ3p9glv8DTNkpM2Jayk/edit?usp=sharing

Hopefully you can see it, if not let me know and I'll find some other way to display it.

Dave Friedman

The main problem I have with the AI 2027 narrative, and pieces like it, including Dario Amodei's and Leopold Aschenbrenner's, is that they never really explain how all the underlying infrastructure (power generation, substations, transmission lines, cooling systems, data centers, assorted environmental permits, tribal/state/federal turf battles, etc.) will be built, and the disputes resolved, in the time they forecast. Their forecasts are metaphysical incantations devoid of material reality.

Gilad Drori

Did you read the forecast? They do address that, mainly by assuming generous efficiency coefficients (for infrastructure design) thanks to AI.

Eric Brown

I don’t see how AI makes NEPA permitting go faster.

Prismatico Magnifico

Solid.

"Does this thing 'take off' because it goes recursively self-improving, or nah?" is absolutely the whole ballgame, and you correctly identified my own linchpin of skepticism: that we meaningfully get AI Super Researchers (like you, I find that a pretttty big leap...)
