Discussion about this post

Phoenix:

Does it really affect P(Doom) whether AI is conscious? After all, Hitler could have been a philosophical zombie and nothing would have changed. Since an AI philosophical zombie can decide to kill us just as readily as a conscious AI, P(Doom) reduces to a coding problem: we must code AI to preclude that possibility, whether it is conscious or not.

Even the philosophers who find the notion of philosophical zombies incoherent would not, in my view, take issue with an AI being one, since their arguments against humans being philosophical zombies rest on the fact that, as far as we know, no such humans exist. With AI the situation is exactly inverted: the only form of AI we know to exist isn't conscious.

As for AI suffering, I think that's also just a coding problem. Humans suffer because millions of years of evolution determined that the capacity for suffering is a good survival strategy. There is no reason to implant a suffering chip into AI. This circles back to my first point: any benefit humans get from suffering (such as the instinct for self-preservation) can be coded into a philosophical zombie AI without actually including the suffering itself.

If my reasoning is correct, and we design conscious AI to have the capacity for well-being but not for suffering, and code it as best we can to be unable to go on a killing spree, then there is no downside. P(Doom) is the same as if it were unconscious, suffering is impossible, and well-being is possible. If only humans were designed this way!

Nour:

This is exactly why I force myself to be extra nice to any chatbot; you never know, and I'd rather be extra, extra safe 😂

