Artificial Intelligence, or AI, may not yet be a reality in the way we envision it, but it is certainly evolving. Over the years we have made steady progress, and several leaps have been quite successful. Not too long ago, a computer could not play tic-tac-toe or chess; today, some of the most revered chess grandmasters have succumbed to machines over the board. The ancient Chinese board game Go still lies beyond the reach of artificial intelligence, but if AlphaGo, a project helmed by Google's DeepMind, succeeds, computers will be able to win at Go as well. Today, the best a computer can do is play Go like an amateur; it is not capable of the intuitive decisions that win a game of Go.
We would all love to see robots as smart as us, perhaps smarter. But what would such a development cost? Let us hear what some of the smartest scientific and technical minds have to say about this.
Stephen Hawking has said that success in creating AI would be the biggest event in the history of mankind, but it might also be the last. The risks are far too great, and artificial intelligence may well spell the end of mankind. Hawking contrasts the slow, limited scope of biological evolution in humans with the near-instantaneous and unimaginable capacity of artificial intelligence to evolve.
Stephen Hawking is not alone. Elon Musk, Bill Gates and Steve Wozniak are just a few of the sharpest minds in the world who have warned strongly against the unchecked development of artificial intelligence. Bill Gates believes that at the outset, artificial intelligence will handle odd jobs and will steadily replace the need for humans to do chores or menial tasks. Over time, artificial intelligence will take on the complicated work that humans specialize in, and soon after that the systems will be smarter than we are. They would be able to outsmart us, and we would become insignificant, vulnerable and at the mercy of artificial intelligence rather quickly. Elon Musk has called artificial intelligence the greatest existential threat to humanity.
Ryan Calo, a law professor at the University of Washington, raises a more fundamental question: what if machines or systems powered by artificial intelligence read the constitution, demand the right to vote, and claim the right to procreate and enjoy the liberties that other citizens do? Such systems could replicate themselves without limit, and if each copy got a vote, they would outvote the human populace and effectively junk the democratic system of governance.
There are so many such scenarios in which artificial intelligence could take over the world that the risks are not outweighed by the rewards or potential benefits. Society as we know it could cease to exist; the lives we lead and the entire world we inhabit could be placed in unavoidable jeopardy.