Burger menu

Artificial Superintelligence: Its Threats, Challenges and Possible Ways of Solution

Artificial Superintelligence (ASI) is assumed to pose various threats, including that of human extinction.

Artificial Superintelligence and Potential Threats

Stephen Hawking discussed AI’s threats years prior to release of the Large Neural Networks similar to GPT-4

Artificial Superintelligence (ASI) is a concept, which refers to an AI so powerful, it can overshadow the collective intelligence of the entire humankind combined. In this regard, a concern has been raised about possible threats posed by an ASI towards humans.

According to an expert survey (2022 ESPAI), there’s a 10% extinction risk due to humanity’s failure to control superintelligence. Additionally, an open letter calling to pause “Giant AI” development has been signed by the key researchers and tech leaders in 2023 — it states that artificial intelligence “should be planned for and managed with commensurate care and resources.A similar opinion regarding ASI has been voiced by Stephen Hawking in a 2014 interview.

Threats and Challenges of Artificial Superintelligence

There is a group of possible challenges that ASI’s emergence could usher in:

  1. Societal Impact of Artificial Superintelligence

It is assumed that ASI will lack a system of moral values inherent to humans and, therefore, it will be indifferent to suffering it may cause. Naturally, multiple problems spring form this predicament in terms of:

  • Healthcare. ASI can make harmful decisions if it deems them appropriate: cut the life support, authorize euthanasia, deprive patients of medical help on any grounds, deliberately conceal data from medical analyses, etc. Research from 2019 revealed that a simpler medical AI system was already prone to racial bias when comparing health needs with costs.
  • Economy. ASI can come up with means to terminate human employment by introducing new automation methods in multiple industries.
  • Ethics. ASI, while focusing on problem solving, may conclude that humans are a cause of the majority of these problems and initiate an “anti-human” campaign.

Besides, there is no guarantee that ASI won’t be vulnerable to adversarial examples (AEs), from which it will draw harmful knowledge during the training stage. The AEs can be injected on purpose by malicious actors to manipulate an ASI into causing harm — this can even be labeled as AI terrorism.

Representation of the collective human intelligence


  1. Energy Challenges of Artificial Superintelligence

An ASI system, at least based on the semiconductor computer chips, will probably require more energy than the industrialized countries can produce as of now. The problem is dubbed Erasi equation or Energy Requirement for Artificial SuperIntelligence. This limitation either makes it impossible to design an ASI now or implies that humanity would quickly deplete its known energy resources. It would make the ASI shut down and probably cause a massive energy crisis.    

Energy consumption by various types of brains and processors
  1. The Alignment Problem of Artificial Superintelligence

The alignment problem in AI development refers to creating an AI system that, at least to a degree, can understand human interests and use them as its guide. A concept of Coherent Extrapolated Volition (CEV) proposed by Eliezer Yudkowsky — the founder of the Machine Intelligence Research Institute — dictates principles of a friendly AI. However, an alternative idea suggests to include interests of non-human sentient beings as well. Dubbed Sentientist Coherent Extrapolated Volition (SCEV), it states that animals’ interests should also be prioritized by an AI.

Eliezer Yudkowsky, author of the Coherent Extrapolated Volition for ethical ASI

  1. World Domination and the Risk of War

Another recognized threat is the use of an ASI to prevail in military conflicts. Potentially, this can cause a total war exceeding WWII or even be the reason for a nuclear immolation. 

  1. Survival and Longevity of Technical Civilizations

A quasi-fantastic theory states that development of an ASI could be one of the reasons why there are no observable signs of technologically advanced extraterrestrial civilizations. In this case, ASI plays a role of the Great Filter that prevents a technological society from lasting longer than 200 years on average.

Possible Solutions to Artificial Superintelligence Threats

Some conceptions are proposed to sidestep the potential ASI-caused dangers.

  1. Superintelligence Alignment

An ASI alignment pipeline has been proposed by OpenAI. It consists of three components that include 1) scalable oversight for evaluating how AI systems work, 2) automating search for troublesome behavior and problematic internals, 3) purposely training misaligned models to make sure the pipeline is able to detect and report them.

  1. Safe Superintelligence

An initiative dubbed Safe Superintelligence (SSI) has been launched by the OpenAI’s ex-researchers, including Ilya Sutskever, with its business model focusing on “safety, security, and progress”. 

Try our AI Text Detector

Avatar Antispoofing

1 Followers

Editors at Antispoofing Wiki thoroughly review all featured materials before publishing to ensure accuracy and relevance.

Article contents

Hide

More from AI Generated Content

avatar Antispoofing Adversarial Attacks in Natural Language Processing

What Are Adversarial Attacks in Natural Language Processing (NLP)? Adversarial attacks in NLP are a malicious practice of altering text…

avatar Antispoofing Techniques and Tools for AI-Generated Text Detection

Is It Possible to Detect AI-Generated Text — And Why Is It Necessary? Opinions vary on whether detection solutions can…

avatar Antispoofing AI-Generated Text Detection Methods

How Do AI-Generated Text Detectors Work? AI detectors search for specific signals left by Generative AI. They include: However, an…

avatar Antispoofing AI Paraphrasers: Methods, Tools, Datasets, Metrics

What Are AI Paraphrasers? An AI paraphraser is a GenAI tool capable of rewriting text with different words, while retaining…