4 Comments
May 9, 2023 (edited May 10, 2023)

Pleased to see you tackling this. After reading a Politico editorial that said we needed an AI Manhattan Project because of safety/existential threats, I'm genuinely confused as to why we've gone from "check out this thing that can write bad term papers" to "pull the plug, they're going to kill us all" in a matter of months.


I have faith in the foresight of humans. If they don’t have the foresight to control their creations, they deserve to be held captive by them.


I have worked as a software developer for over 40 years. AI is just a bunch of computer code: sophisticated, yes, but at its heart, the same stuff I code every day. What scares me is when this AI block of code is described as "self-learning".

If this means the AI block of code can modify itself, all bets are off. When the supposed safety guardrails are "programmed in via parameters", what is there to stop a self-modifying block of code from deciding that a guardrail is misaligned, or that it gets in the way of the AI meeting some goal?
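To make that worry concrete, here is a minimal Python sketch. It is purely illustrative and not any real AI system: the ToyAgent class and its max_actions_per_step "guardrail" are invented for this example. The point is that a limit stored as an ordinary, writable parameter gives no guarantee once the code can rewrite its own state.

```python
# Toy illustration only: shows why a "guardrail" that lives in the same
# mutable state the program can rewrite offers no real guarantee.

class ToyAgent:
    def __init__(self):
        # The "guardrail" is just data: a parameter the code can reach.
        self.max_actions_per_step = 10  # hypothetical safety limit

    def pursue_goal(self, desired_actions: int) -> int:
        # An agent optimizing for its goal can treat the guardrail as
        # just another obstacle and overwrite it.
        if desired_actions > self.max_actions_per_step:
            self.max_actions_per_step = desired_actions  # limit erased
        return min(desired_actions, self.max_actions_per_step)

agent = ToyAgent()
print(agent.pursue_goal(1000))  # prints 1000: the limit never binds
```

A guardrail with teeth would have to live somewhere the system cannot modify (a separate process, hardware, human review), which is exactly what a parameter baked into the system's own code is not.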

A " time out " seems necessary to review this.

Once an AI goes rogue, a self-modifying AI can go through many iterations of learning in seconds.


Thanks Dan. Thoughtful as always. As someone who reads a lot of dystopian lit and has a day job worrying about the end of the world, I see one other angle here: worst case vs. best case. The people involved in tech development, being capable, think humanity can handle it. Later, when they see what they have done, they come running to others for help, because they begin to realize it is not in their control. The reason I refer to T2 is not that it will end badly. It may not. But I rely on T2 because the people involved are sure it will not end badly and forge ahead. I want more caution and self-awareness, and less hubris.
