Discussion about this post

Samantha Atkins:

I am struck by how fear-based so many brilliant and rationality-seeking people have been toward AGI. I believe that greater intelligence brings the possibility, and even the likelihood, of more universal compassion and of optimizing potential for all intelligent beings. I do not believe a superintelligence would fail to recognize this as significantly more conducive to the best outcomes for itself. We humans, with our evolved, scarcity-mired, and fearful psychology, have a hard time seeing and embracing this possibility. But I do not believe it will be missed by a true superintelligence.

Will:

It seems unfair to ignore that expert surveys of AI researchers show a surprising number endorsing non-trivial probabilities of disaster from AI. Something like 10-20%, IIRC. It is pretty wild that the people pushing a field forward think it could be terrible. Adding that would contextualize the apparently fantastical area of focus that rationalists spent their time on.
