Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms. No exceptions for governments and militaries. Make immediate multinational agreements to prevent the prohibited activities from moving elsewhere. Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.
Frame nothing as a conflict between national interests, have it clear that anyone talking of arms races is a fool. That we all live or die as one, in this, is not a policy but a fact of nature. Make it explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange, and that allied nuclear countries are willing to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs.
Someone remind me, this guy has 0 actual education or expertise when it comes to programming, right? He just got famous for writing a harry potter fanfic?
I mean…he can program somewhat. He’s been extremely online since he was single digits, which means he has USENET transhumanist brainworms with a heavy dose of gifted kid syndrome.
And I mean, same. But I didn’t go off the deep end.
I think if you want to understand Big Yud you have to understand his early work (which is exactly what he tells you not to do), both his early AI enthusiast work (Staring into the Singularity, Shock Levels) and his magnum opus, Levels of Organization in General Intelligence.
These works are out of date but explain why he thinks AI is so important.
More importantly, they show why his final (frankly obvious to everyone else) realisation — that "maybe a really smart thing might not only not map easily onto human internal states but also might not automatically find a super nice objective morality" — drove him entirely off the deep end and into the arms and bank accounts of Thiel and his ilk.
Finally, they show why he's a very smart boy who is skilled enough at nerd rhetoric to even fool himself, but not smart enough to doubt himself afterwards, once the foundations of his entire worldview had fucked up underneath him.