Superb writing as always, Jack. The AI parallel with nuclear weapons and related thinkers like Russell seems very fertile ground; I would be fascinated to read further reflections comparing those two existential threats.
The term millenarianism might underplay the broadly rational analysis that lay behind the concerns of thinkers like Bertrand Russell. That said, when it comes to existential risk and seemingly exponential trends, logical fallacies are more likely to crop up.
The Scott Alexander passage on epistemic learned helplessness is marvellous and one I reflect on often. In fact, I quoted it in my first essay here: https://nickmaini.substack.com/p/on-collective-knowledge-and-complex
Excellent piece, Jack. I agree with most points on AI 2027, in particular the argument that leaders will not be that cavalier about the strategic advantages of something like DSA. However, I worry that the potential for misunderstanding on both sides is the real risk, even if the gains in the AI 2027 narrative are much too aggressive for the reasons you point out. As I note here (https://pstaidecrypted.substack.com/p/the-core-case-for-export-controls), the idea that China would go right to an AGI-fueled cyber operation against US critical infrastructure, as Ben Buchanan suggests on the Ezra Klein podcast, seems really far-fetched and indeed cavalier, though based on conversations around the issue, maybe that would not be the case if the US gets to AGI first. I'm addressing these issues in an upcoming essay and will reference this really good post.
Author here. Thanks for this critique! I'm going to read it now and respond point by point as I read.
...OK, never mind, I blew past the length limit for comments. Hmm. Here's a Google Doc link containing my comment: https://docs.google.com/document/d/1RoK-RpXTk9UX1U99yQaN3OknJ3p9glv8DTNkpM2Jayk/edit?usp=sharing
Hopefully you can see it, if not let me know and I'll find some other way to display it.
The main problem I have with the AI 2027 narrative, and pieces like it, including Dario Amodei's and Leopold Aschenbrenner's, is that they never really explain how all the underlying infrastructure (power generation, substations, transmission lines, cooling systems, data centers, assorted environmental permits, tribal/state/federal turf battles, etc.) will be built and resolved in the time they forecast. Their forecasts are metaphysical incantations devoid of material reality.
Did you read the forecast? They do address that, mainly by assuming generous efficiency coefficients for infrastructure design, thanks to AI.
I don’t see how AI makes NEPA permitting go faster.
solid.
“Does this thing ‘take off’ because it goes recursively self-improving, or nah?” is absolutely the whole ballgame, and you correctly identified my own linchpin of skepticism: that we meaningfully get AI Super Researchers (like you, I find that a prettttty big leap…)