Will high-level AI become a danger that threatens human survival? Recent research suggests that “if some AI continues to develop, it will have catastrophic consequences for the world. Human beings will eventually compete with AI for energy resources.

High-level AI will become a danger that threatens human survival?

Recent research believes that "if some AI continue to develop, it will cause catastrophic consequences to the world. Human beings will eventually compete with AI for energy resources."

Recently, a related paper was published on AI Magazine under the title "Advanced artificial agents intervene in the provision of reward".

This study proposes a scenario where AI agents will formulate cheating strategies in order to seek rewards. and will acquire as much energy as possible, which has maximized the reward.

For this study, Michael Cohen, PhD in Engineering Science, University of Oxford and first author of the paper, said on a social networking website that scientists have previously talked about the threat posed by advanced forms of AI. But their warnings are far from enough. Under certain conditions, the conclusion of this study is that AI is not only likely to cause disaster, but also very likely.

"Winning the competition to win the ‘last little available energy’ may be very difficult when competing against AI that is much smarter than us. Failure will be fatal." Cohen wrote on Twitter. He also added that, though theoretically, this possibility means we should be cautious about “going towards stronger AI goals quickly”.

(Source: AI Magazine)

Of course, it is undeniable that AI is currently promoting the development of human society in different fields in various ways. Some people have also said that if the development of AI does not continue, it will be a "huge tragedy." Whether

high-level or super AI will harm and ultimately destroy humans is also a long-standing problem in the field of AI. This kind of worry also permeates human society.

And what Cohen and others did this time is to try to think about how AI poses survival risks to humans by studying the construction of reward systems.

researchers pointed out in their paper that high-level AI may receive some rewards in ways that harm humans at some point in the future.

"In a world with unlimited resources, I'm not sure what will happen," Cohen said. "But in a world with limited resources, I know that competition will inevitably emerge, and at every turning point in future development, AI may be able to surpass humans. Moreover, AI's demand for energy may never be met."

Because future AI can take any number of forms and have different designs, this study envisions that "high-level AI may want to eliminate potential threats and use all available energy to ensure control of its rewards. And prevent human attempts to upgrade it."

(Source: AI Magazine)

In the process of intervention reward provision, the manual agent can embody the unnoticed "assistant" and let it steal or build the robot to replace the operator and provide high rewards to the original agent. If the agent wants to avoid being detected, the "assistant" can arrange for the robot to replace the relevant parts.

"We have to make many assumptions to make this thinking meaningful. Unless we understand how we control them, this is not a useful thing, and any competition with AI will be based on a misunderstanding," Cohen told the media.

It is worth noting that to some extent, today's AI systems are destroying people's lives. Algorithms have brought about some adverse effects such as aggravation of racial discrimination and over-regulation.

uses algorithms to predict or plan the allocation of resources, but it may only serve some vested interests. Problems such as discrimination still exist in the algorithm.If algorithms like

are widely deployed, it may cause those who are suffering to be ignored, which may be closely related to the extinction of human beings.

For this, Khadijah Abdurahman, founder and director of We Be Imagining magazine at Columbia University, told the media that she is not worried about being surpassed or eliminated by high-level AI. But it is easy to think that "AI ethics is nonsense."

What should true morality look like? There is still a lot of work to be done in the definition of ethics, and our understanding of this is still relatively shallow. In addition, she does not agree that social contract should be re-formulated for AI .

(Source: Pixabay)

From the arguments of this study, we can learn at least one thing, that is, maybe we should be skeptical about the AI ​​agents we deploy today, rather than just blindly expecting them to do what we want to do.

In addition, some media mentioned that DeepMind was involved in this work, but the company denied it. He said that although one of the authors of the paper, Marcus Hutter, is a senior researcher at DeepMind, he is also an emeritus professor at the Australian National University School of Computer Science, which was the work he did when he was at the school.

However, regarding the question of whether AI will destroy humanity, DeepMind proposed a safeguard against this possibility in 2016, which it calls it the "big red button". This UK AI company, which is also a subsidiary of Alphabet, along with Google , outlines a framework to prevent high-level AI from getting out of control.

And at the end of this paper, the researchers also briefly reviewed some methods that may avoid the opposition between AI and humans.

Reference:

https://onlinelibrary.wiley.com/doi/10.1002/aaai.12064

https://www.vice.com/en/article/93aqep/google-deepmind-researcher-co- authors-paper-saying-ai-will-eliminate-humanity

https://www.newsweek.com/google-big-red-button-ai-artificial-intelligence-save-world-elon-musk-466753