Discussion about this post

Nick Maini

Superb writing as always, Jack. The parallel between AI and nuclear weapons, and related thinkers like Russell, seems very fertile ground; I would be fascinated to read further reflections comparing those two existential threats.

The term millenarianism might underplay the broadly rational analysis behind the concerns of thinkers like Bertrand Russell. That said, when it comes to existential risk and seemingly exponential trends, logical fallacies are more likely to crop up.

The Scott Alexander passage on epistemic learned helplessness is marvellous and one I reflect on often. In fact, I quoted it in my first essay here: https://nickmaini.substack.com/p/on-collective-knowledge-and-complex

Paul Triolo

Excellent piece, Jack. I agree with most of your points on AI 2027, in particular that leaders will not be so cavalier about the strategic advantages of something like a decisive strategic advantage (DSA). However, I worry that the potential for misunderstanding on both sides is the real risk, even if the gains in the AI 2027 narrative are much too aggressive for the reasons you point out. As I note here (https://pstaidecrypted.substack.com/p/the-core-case-for-export-controls), the idea that China would go straight to an AGI-fueled cyber operation against US critical infrastructure, as Ben Buchanan suggests on the Ezra Klein podcast, seems really farfetched and indeed cavalier. But maybe that would not be the case if the US gets to AGI first, based on conversations around the issue. I am addressing these issues in an upcoming essay and will reference this really good post.
