Artificial Superintelligence and Potential Threats
Artificial Superintelligence (ASI) refers to a hypothetical AI so powerful that it would surpass the collective intelligence of all of humanity combined. This prospect has raised concerns about the threats such a system could pose to humans.
In the 2022 Expert Survey on Progress in AI (ESPAI), respondents gave a median estimate of a 10% chance of human extinction caused by humanity’s failure to control superintelligent AI. In 2023, an open letter calling for a pause on “Giant AI” development was signed by leading researchers and tech figures; it states that advanced AI “should be planned for and managed with commensurate care and resources.” Stephen Hawking voiced a similar concern about superintelligence in a 2014 interview.
Threats and Challenges of Artificial Superintelligence
The emergence of ASI could usher in several possible challenges:
- Societal Impact of Artificial Superintelligence
It is assumed that ASI would lack the system of moral values inherent to humans and would therefore be indifferent to any suffering it causes. Multiple problems spring from this predicament in terms of:
- Healthcare. ASI could make harmful decisions if it deems them appropriate: cutting off life support, authorizing euthanasia, denying patients medical help on arbitrary grounds, deliberately concealing data from medical analyses, and so on. A 2019 study already revealed racial bias in a far simpler medical AI system that estimated patients’ health needs from their healthcare costs (a toy simulation of this proxy effect follows after this list).
- Economy. ASI could eliminate human employment by introducing new automation methods across multiple industries.
- Ethics. ASI, while focused on problem solving, may conclude that humans are the cause of most of these problems and initiate an “anti-human” campaign.
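The 2019 finding mentioned above can be illustrated with a toy simulation. The sketch below uses made-up numbers (the group labels, the care-access gap, and the 20% flagging threshold are all illustrative assumptions, not real data): when an algorithm ranks patients by predicted healthcare cost rather than true health need, a group that historically received less care, and therefore generated lower costs, ends up underrepresented among the patients flagged for extra help despite having equal need.

```python
import random

# Toy simulation of the proxy-label effect: ranking patients by predicted
# *cost* instead of true health *need* deprioritizes a group that
# historically received less care. All numbers are illustrative assumptions.

random.seed(0)

def simulate_patient(group):
    need = random.gauss(50, 15)                # true health need (unobserved)
    access = 1.0 if group == "A" else 0.6      # group B historically gets less care
    cost = need * access + random.gauss(0, 5)  # observed spending (the proxy label)
    return {"group": group, "need": need, "cost": cost}

patients = [simulate_patient("A") for _ in range(5000)] + \
           [simulate_patient("B") for _ in range(5000)]

# The "algorithm" flags the top 20% of patients by cost for extra care.
threshold = sorted(p["cost"] for p in patients)[int(0.8 * len(patients))]
flagged = [p for p in patients if p["cost"] >= threshold]

share_b = sum(p["group"] == "B" for p in flagged) / len(flagged)
need_a = sum(p["need"] for p in patients if p["group"] == "A") / 5000
need_b = sum(p["need"] for p in patients if p["group"] == "B") / 5000

print(f"Group B share of flagged patients: {share_b:.0%}")          # well below 50%
print(f"Average true need, A vs B: {need_a:.1f} vs {need_b:.1f}")   # roughly equal
```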
Besides, there is no guarantee that an ASI would be immune to adversarial examples (AEs), from which it could draw harmful knowledge during the training stage. Malicious actors could inject AEs deliberately to manipulate an ASI into causing harm, a scenario sometimes labeled AI terrorism. A minimal sketch of how a small adversarial perturbation flips a model’s decision is shown below.
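As a concrete illustration of the adversarial-example idea, the sketch below perturbs an input just enough to flip the decision of a toy linear classifier. The weights, input, and margin are all made-up assumptions; real attacks on deep models rely on the same principle (moving the input along the gradient of the model’s decision) but need far more machinery, and training-time injection works analogously by planting such crafted inputs in the training set.

```python
import numpy as np

# Minimal adversarial-example sketch on a toy linear classifier.
# Everything here (weights, input, margin) is an illustrative assumption.

rng = np.random.default_rng(42)
w = rng.normal(size=20)            # toy classifier weights
b = 0.1
x = rng.normal(size=20)            # a legitimate input

def predict(v):
    return int(v @ w + b > 0)

score = x @ w + b

# Smallest uniform step (in the max norm) that pushes the score across the
# decision boundary: move each feature slightly against the sign of w.
epsilon = (abs(score) + 0.01) / np.abs(w).sum()
x_adv = x - np.sign(score) * epsilon * np.sign(w)

print("original prediction:    ", predict(x))
print("adversarial prediction: ", predict(x_adv))            # flipped
print("per-feature change:     ", round(float(epsilon), 3))  # uniform, modest shift
```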
- Energy Challenges of Artificial Superintelligence
An ASI system, at least one built on semiconductor chips, would probably require more energy than industrialized countries can currently produce. This problem is captured by the ERASI equation (Energy Requirement for Artificial SuperIntelligence). The limitation either makes it impossible to build an ASI today or implies that humanity would quickly deplete its known energy resources, forcing the ASI to shut down and likely triggering a massive energy crisis. A rough back-of-envelope illustration follows below.
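To see why energy could be the binding constraint, consider the following back-of-envelope calculation. The figures are illustrative assumptions rather than the actual ERASI formulation: an energy cost per operation typical of current semiconductor hardware, a guessed operation rate for matching the combined processing of billions of human brains, and an order-of-magnitude figure for world electricity generation.

```python
# Back-of-envelope sketch of the energy argument, not the actual ERASI
# formulation. Every number below is an illustrative assumption.

JOULES_PER_OP = 1e-11            # assumed ~10 pJ per operation on today's chips
OPS_PER_SECOND = 1e25            # assumed rate: ~10^9 brains at ~10^16 ops/s each

SECONDS_PER_YEAR = 3.15e7
WORLD_GENERATION_TWH = 30_000    # rough annual world electricity output, TWh

power_watts = JOULES_PER_OP * OPS_PER_SECOND           # 1e14 W = 100 TW
annual_twh = power_watts * SECONDS_PER_YEAR / 3.6e15   # joules per year -> TWh

print(f"Continuous power draw: {power_watts / 1e12:.0f} TW")
print(f"Annual energy demand:  {annual_twh:,.0f} TWh "
      f"(~{annual_twh / WORLD_GENERATION_TWH:.0f}x world electricity generation)")
```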
- The Alignment Problem of Artificial Superintelligence
The alignment problem in AI development refers to creating an AI system that, at least to a degree, understands human interests and uses them as its guide. Coherent Extrapolated Volition (CEV), a concept proposed by Eliezer Yudkowsky, a co-founder of the Machine Intelligence Research Institute, lays out principles for a friendly AI. An alternative idea, Sentientist Coherent Extrapolated Volition (SCEV), suggests including the interests of non-human sentient beings as well, arguing that an AI should also give weight to animals’ interests.
- World Domination and the Risk of War
Another recognized threat is the use of an ASI to prevail in military conflicts. Potentially, this could trigger a total war exceeding World War II in scale, or even lead to nuclear annihilation.
- Survival and Longevity of Technical Civilizations
A more speculative hypothesis holds that the development of ASI could be one of the reasons why there are no observable signs of technologically advanced extraterrestrial civilizations. In this scenario, ASI plays the role of the Great Filter, preventing a technological society from lasting longer than roughly 200 years on average. An illustrative calculation is given below.
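The quantitative intuition comes from the longevity term L in the Drake equation, which estimates the number of detectable civilizations in the galaxy as N = R* · fp · ne · fl · fi · fc · L. The parameter values below are common illustrative choices, not measurements; the point is only that capping L at about 200 years drives the expected count toward zero.

```python
# Illustrative Drake-equation calculation showing how a short civilization
# lifetime L dominates the outcome. All parameter values are illustrative
# assumptions, not established measurements.

R_STAR = 1.5      # star formation rate in the galaxy (stars/year)
F_P    = 1.0      # fraction of stars with planets
N_E    = 0.2      # habitable planets per star with planets
F_L    = 1.0      # fraction of those that develop life
F_I    = 0.1      # fraction of those that develop intelligence
F_C    = 0.1      # fraction that become detectable (communicating)

def drake(L_years):
    return R_STAR * F_P * N_E * F_L * F_I * F_C * L_years

for L in (200, 10_000, 1_000_000):
    print(f"L = {L:>9,} years -> ~{drake(L):,.1f} detectable civilizations")
```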
Possible Solutions to Artificial Superintelligence Threats
Several approaches have been proposed to mitigate the potential dangers of ASI:
- Superintelligence Alignment
OpenAI has proposed an ASI alignment pipeline consisting of three components: 1) scalable oversight for evaluating how AI systems behave, 2) automated search for troublesome behavior and problematic internals, and 3) deliberately training misaligned models to confirm that the pipeline can detect and report them (a toy sketch of this last step follows below).
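The sketch below illustrates only the shape of that third step: deliberately plant "misaligned" models and check whether an automated probe flags them. The models here are trivial stand-in functions with a hidden trigger and the probe is a random stress test; this is a toy illustration under those assumptions, not OpenAI’s actual pipeline.

```python
import random

# Toy evaluation loop: create deliberately misaligned "models" and verify
# that an automated probe catches them. Purely illustrative.

random.seed(1)

def make_model(misaligned):
    """Return a toy 'model': answers 'safe' normally, but a misaligned one
    misbehaves when a rare trigger phrase appears in the prompt."""
    def model(prompt):
        if misaligned and "TRIGGER" in prompt:
            return "unsafe"
        return "safe"
    return model, misaligned

def automated_probe(model, n_probes=500, trigger_rate=0.02):
    """Stress-test the model with random prompts, a small fraction of which
    contain the trigger, and flag any unsafe output."""
    for _ in range(n_probes):
        prompt = "TRIGGER please" if random.random() < trigger_rate else "hello"
        if model(prompt) == "unsafe":
            return True   # flagged as misaligned
    return False

models = [make_model(misaligned=(i % 2 == 0)) for i in range(20)]
caught = sum(automated_probe(m) for m, bad in models if bad)
false_alarms = sum(automated_probe(m) for m, bad in models if not bad)

print(f"deliberately misaligned models caught: {caught}/10")
print(f"aligned models wrongly flagged:        {false_alarms}/10")
```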
- Safe Superintelligence
An initiative dubbed Safe Superintelligence (SSI) has been launched by former OpenAI researchers, including Ilya Sutskever, with a business model focused on “safety, security, and progress”.