Microsoft President Brad Smith warned over the weekend that killer robots are "unstoppable" and that a new digital Geneva Convention is needed.
Most science fiction fans think of "The Terminator" when they hear the phrase "killer robot." In the classic film, a rogue military AI named Skynet gains self-awareness after spreading to millions of servers around the world. Concluding that humans will try to shut it down, Skynet sets out to exterminate humanity in the name of self-preservation.
Although "The Terminator" was once just a popcorn movie, it now reads as a grim warning of what could happen if preventive measures are not taken.
Like most technologies, artificial intelligence will increasingly be put to military use. The ultimate goal of general artificial intelligence is self-directed learning. Combine the two, and Skynet no longer seems quite so far-fetched. In an interview with the Daily Telegraph, Smith appeared to share this concern, pointing out that the United States, China, Britain, Russia, Israel, South Korea, and other countries are all developing autonomous weapon systems.
One day, wars may be fought entirely by robots on the battlefield. That prospect cuts both ways: it reduces the risk to human soldiers, but it also makes declaring war easier and introduces the risk of machine error.
Many technology experts liken the militarization of artificial intelligence to a nuclear arms race: in the rush to be first and best, dangerous risks may be taken.
There is currently no clear answer to who bears responsibility for casualties caused by autonomous machines: the manufacturer, the developer, or the operator. The same question animates the debate over how insurance will work for driverless cars.
In military applications, many technology experts argue that artificial intelligence should never make combat decisions on its own, especially decisions that could lead to death. AI can make recommendations, but the final decision must rest with a human.
Preventing Unimaginable Destruction
The story of Soviet Lieutenant Colonel Stanislav Petrov in 1983 is a warning of the unimaginable destruction that machines without human supervision could cause.
Petrov's computer reported that the United States had launched intercontinental missiles at the Soviet Union. Under the circumstances, Soviet doctrine called for an immediate retaliatory nuclear strike against the United States. Petrov's intuition told him the computer was wrong, and he decided not to launch. His judgment proved correct.
If a computer alone had decided whether to launch nuclear missiles in 1983, missiles would have flown, and the United States and its allies would have retaliated in kind.
Smith hopes to see a new digital Geneva Convention that would bring the world's powers together to agree on acceptable norms for artificial intelligence. "The safety of civilians is in danger today. We need more urgent action. We need rules to protect civilians and soldiers in the form of the Digital Geneva Convention." Many companies have already pledged not to develop artificial intelligence for harmful uses, including Google, which made that commitment after thousands of its employees strongly opposed a Pentagon contract to develop AI technology.
Smith has published a new book titled "Tools and Weapons." At the launch, he also called for stricter regulation of facial recognition technology: "A new law is needed in this field, and we need to regulate the field of facial recognition to prevent potential abuse." A report from the Dutch NGO PAX stated that leading technology companies are putting the world at risk of killer AI, listing Microsoft and Amazon among the highest-risk companies. Microsoft itself warned investors in February that its artificial intelligence products could damage the company's reputation.
"Why don't companies like Microsoft and Amazon deny that they are currently developing these highly controversial weapons, which can decide to kill without direct human involvement?" said Frank Slijper, lead author of the PAX report.
A global coalition, the Campaign to Stop Killer Robots, now counts 113 NGOs from 57 countries among its members, and it has doubled in size over the past year.