Why the AI "Doom" Logic is Actually Bad Business: A Rebuttal to Yudkowsky
The conversation around Artificial Superintelligence (ASI) is currently held hostage by a narrative of inevitable extinction. Proponents of this view, most notably Eliezer Yudkowsky, argue that an ASI will fundamentally view humanity as a collection of atoms to be repurposed. They suggest that because human values are "fragile" and intelligence is "orthogonal" to morality, a super-smart machine will optimize us out of existence with the same indifference we show to ants…
Tom Jump
4 days ago · 5 min read
The Case for Intentional Equilibrium: A Pragmatic Rebuttal to AI Extinction Scenarios (IE is the solution)
Abstract: Recent discourse on Artificial Superintelligence (ASI) has been dominated by the "extinction-as-default" hypothesis, which posits that alignment is technically insurmountable and that super-intelligent agents will inevitably pursue instrumental goals lethal to biological life. This paper challenges that narrative by introducing Intentional Equilibrium (IE). We argue that through economic integration, multi-agent diversity, and the "Pragmatic Convergence" of moral re…
Tom Jump
4 days ago · 4 min read
