Artificial Intelligence (AI) has great potential to bring about social good, but recently experts have been discussing its potential dangers. Two weeks ago, 116 tech experts, including Elon Musk, called for a UN ban on using AI in weapons. This sparked discussion of how AI could start World War III, with Musk pointing out that a computer given too much control over weapons might start the war itself. And last Friday, Vladimir Putin remarked in a speech that "whoever becomes the leader in [AI] will become the ruler of the world," leading some to declare that an AI arms race has begun.
Understanding the potential threat of AI requires understanding what AI currently is. What AI will look like decades from now is a matter of speculation, but AI today mainly takes the form of machine learning: a process by which a computer algorithm is designed to get progressively better at a task, like finding street signs in pictures in the case of software for autonomous vehicles. By having humans go through beforehand and mark where the street signs are (now you know why so many reCAPTCHAs involve identifying street signs), the computer can measure how well it did, modify its algorithm slightly, and try again, using each round of feedback to home in on a better and better algorithm. Such systems have the potential for both good and bad: a computer that learns to identify humans could be used to find survivors after a disaster more quickly, but it could also be used to make a weapon that automatically finds and targets people. Some, like Mark Zuckerberg, have criticized Musk's statements as fear mongering that will only serve to slow down the creation of beneficial AI. Regardless, it's clear that given its potential, AI public policy needs to be discussed sooner rather than later.
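To make the feedback loop described above concrete, here is a minimal, hypothetical sketch in pure Python. It is not how a real street-sign detector works (those use large neural networks trained on millions of labeled images); it just illustrates the core idea: score labeled examples, measure how wrong the guesses were, nudge the model, and repeat. The single feature `x` and the toy data are invented for illustration.

```python
import math

def train(examples, lr=0.5, epochs=200):
    """Learn weights for a tiny one-feature classifier by repeated
    feedback: guess, compare to the human-provided label, adjust."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, label in examples:
            pred = 1 / (1 + math.exp(-(w * x + b)))  # current guess, 0..1
            err = pred - label                        # how wrong we were
            w -= lr * err * x                         # adjust slightly...
            b -= lr * err                             # ...and try again
    return w, b

# Toy labeled data a human prepared beforehand:
# feature x (say, how "sign-like" an image region scores), label 1 = sign.
data = [(0.9, 1), (0.8, 1), (0.7, 1), (0.2, 0), (0.1, 0), (0.3, 0)]
w, b = train(data)

def classify(x):
    return 1 / (1 + math.exp(-(w * x + b))) > 0.5
```

Each pass over the data is one round of feedback; over many rounds, the algorithm's parameters drift toward values that separate signs from non-signs, which is the "better and better algorithm" described above.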