Can super intelligent AIs ever be “safe”?

Sitting in an airport lounge, it’s kind of striking to see this headline on the screen with a “robo cart” right in front of it, next to a human whose job will change once these carts become “smarter.”

What is hard for many of us to conceive, and yet is probably not too far off, is that this entire airport might be built very differently once super intelligences can compress 100 years of design evolution into minutes.

How will we even communicate effectively with such “advanced computers” when we may not even know what buttons to push? Why would an advanced super intelligence care about making it easier for humans to travel from A to B, or about giving humans any more importance than, say, rocks, bacteria, or trees? Even if we convince super intelligences not to harm humans, how will we ever know for sure that they are not just messing with us? Or that they won’t change their minds later?

Never in human history have we had to cohabit the world with something that can evolve at an almost limitless speed. The only thing that makes humans exceptional to ourselves is that we define intelligence through our own frames of reference. Our game so far has been rigged: the rules were designed to benefit us. At a deeply philosophical level, we can never really prove that a rock is not smarter than humans. But now, even within the rules of our own game, we will face something that transcends us. What else it may be able to do we can’t even fathom, because we don’t have the frames of reference to describe it.
