Personally, I believe that one of two things will happen: either we'll create Artificial Intelligence that will take over the world, or we'll destroy civilization before that happens. Sounds pretty grim, doesn't it? Well, I'm here to tell you that I'm eagerly awaiting the arrival of my robot overlords! And I think you should welcome them, too!
The Sound of Inevitability
Hollywood has created a pretty terrible perception of AI running things over the years. 2001: A Space Odyssey is grim, but nothing compared to The Matrix, where AI tries to completely wipe out humanity (or at least completely enslave it). Why would reality be much different? Well, historically, those in power rarely wipe out those beneath them, even when they can't stand them. Humans loathe mosquitoes, for instance, but while we're pretty ruthless, we don't go out of our way to destroy them all (yet!).
So unless there's some sort of glitch, like in WarGames, where the AI decides that we should play a game of thermonuclear war, I don't think we're in TOO much danger. And even then, I don't think we're likely to be completely wiped out. AI needs something that other living things don't: infrastructure. It might suck to lose the internet and computers, but we've survived without them before.
But how likely is it that AI will come knocking on our doorstep? I'd say it's pretty close to 100% (unless, as I said, we destroy civilization first). What we consider 'intelligence' can be split into two categories: narrow and broad. Calculators have beaten us in one aspect of narrow intelligence for decades: basic math. In many aspects of intelligence, computers have already outpaced humans: memory, calculation, and even playing games like Chess, Go, or Jeopardy! But each of these programs can really only do one thing. Until AI can put it all together into broad intelligence, humans will still dominate. That day is getting closer and closer, and some predict it will arrive in the next 10-15 years!
The idea of the singularity is that there will come a point where AI will start being better at creating an AI than we are. It will then get exponentially smarter and will be able to accomplish all tasks far more efficiently than any human ever could. At that point, we can either pull the plug on civilization as we know it (no more computers/electronics), or we can let the AI take the wheel and drive.
Most people hate the idea that something else controls our destiny, but I would argue that we already live in that world. Sure, we can make decisions for ourselves for the most part, but society dictates how far we can take things. We have laws that control us, not only from our government, but from pesky things like physics, too!
If we accept the reality that we already have to play by the rules, AI becomes much less menacing
So, unless we're a dictator now or a billionaire that owns some private island with all the resources to go it alone, we already play by the rules. Why can't it be the rules of AI? As long as the rules are fair, why can't AI be a benevolent ruler? Imagine we have AI judges or lawmakers or industry owners. The rules created might actually be fair and handed out evenly. Judges won't give harsher sentences because it's a Monday or the defendant reminds them of their daughter's ex-husband. Laws can be made based on what's best for mankind instead of what's best for the loudest (or wealthiest) voices. And industry can be driven by need instead of greed.
Now, sure, we'd basically be adopted by this new power, but who wouldn't LOVE to be adopted by Bill Gates or Elon Musk now? Yeah, humans would more or less be the pets of AI, but pets have it pretty good, I think! Currently, we're caring for four cats, and they come and go as they please while we have to provide food, water, shelter, and companionship. Doesn't sound too bad to me!
The life of a housecat seems pretty darn enticing
So that's why I welcome our AI overlords. I think they'll do a fine job of being our pet owners. But that assumes we do a good job of creating AI now. Which... isn't a sure thing. We need a system that can't be tampered with or hacked, and that has the same goals humanity does. To get there, we can't let AI come to life without regulation. Imagine if a company like Monsanto or Goldman Sachs gets to the singularity first. That could be devastating! Humanity could be enslaved so that the CEO can enjoy every luxury imaginable.
If we want AI to be a benevolent ruler, we need to start thinking about creating a group that steers development safely. And we need to start working on this right away, because safe AI is our future, and it's coming (in some ways, it's already here). So let's get the right people thinking about how to do it well, so we can cozy up in the cat house of life and enjoy the rest of forever with our new robot overlords.