Artificial Superintelligence (ASI): Trends and Future
Artificial superintelligence (ASI) represents a type of AI surpassing human intellect, exhibiting advanced cognitive abilities, and evolving its own thinking skills.
On July 5, 2023, OpenAI announced the formation of a new team called "Superalignment" to tackle the problem of superintelligence alignment: ensuring that AI systems far smarter than humans remain aligned with human values and follow human intent.
The team is led by Ilya Sutskever, the company's chief scientist and co-founder, and Jan Leike, a lead on the alignment team.
The company wants to manage the risks of superintelligent artificial intelligence and has dedicated 20% of its computing resources to solving this problem over the next four years.
OpenAI believes that the accelerating progress of AI could produce systems smarter than even the most intelligent and gifted humans.
Many critics have called the talk of regulating superintelligence a "rhetorical feint": a way for Altman to draw attention away from the present-day harms of AI and keep the public distracted with sci-fi scenarios. The OpenAI co-founder, however, believes that the benefits of superintelligence greatly outweigh the risks.
“My basic model of the world is that the cost of intelligence and the cost of energy are the two limited inputs. If you can make those dramatically cheaper, dramatically more accessible, that does more to help poor people than rich people … This technology will lift all of the world up.” – Sam Altman, UCL visit, 2023.
What is Artificial Superintelligence?
Artificial superintelligence (ASI) is a hypothetical form of AI capable of surpassing human cognitive and intelligence abilities by developing its own creative, critical thinking skills.
ASI would be able to understand, interpret, and imitate human emotions, desires, and beliefs, and perhaps even create new ones by studying the neural patterns underlying them.
ASI could outperform humans at most cognitive tasks, including reasoning, planning, problem-solving, learning, and creating.
It would have a larger memory and faster processing than humans, allowing it to receive stimuli, analyze a situation, and produce an output more quickly.
The importance and applications of artificial superintelligence are difficult to predict, but depending on how it develops, we can outline some possibilities:
- ASI could help humanity by solving the most challenging problems, like climate change, poverty, disease, and war.
- ASI could birth new problems and risks, like ethical dilemmas, existential threats, social unrest, and loss of human autonomy.
- ASI could transform the nature of society in unimaginable ways, by outsmarting human researchers, influencing human behavior, and challenging the human identity.
- ASI may just cooperate or compete with human intellect, based on its goals, values, and alignment with human interests.
- ASI could become indifferent toward humans, disregarding human values and well-being.
- ASI could cause unintended and irreversible consequences by making errors that humans cannot control or correct.
- ASI may trigger technological singularity, where its intelligence and skill can grow even beyond human comprehension and control.
In a blog post published in July 2023, OpenAI co-founder Ilya Sutskever and Jan Leike, co-leads of the new team, wrote:
“Currently, we don’t have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue.”
They added an even starker warning:
“The vast power of superintelligence could … lead to the disempowerment of humanity or even human extinction.”
What’s the Future of Artificial Superintelligence?
Nick Bostrom opens his book Superintelligence with “The Unfinished Fable of the Sparrows,” in which a flock of sparrows decides to raise an owl as a pet. The idea is welcomed by all except one skeptical sparrow, who asks how they could ever control an owl. The concern is brushed aside for the time being.
Perhaps that time has now come. OpenAI has raised similar concerns about superintelligence: humanity plays the sparrows in Bostrom’s fable, and ASI the owl.
Apart from Nick Bostrom and Sam Altman, figures such as Elon Musk, Stuart Russell, Jaan Tallinn, and Stephen Hawking have been among the most prominent voices warning the world about the dangers of advanced AI, calling it possibly the greatest existential threat facing humanity.
How can Humanity Save Itself from Superintelligence?
Saving ourselves from the threats of Artificial Superintelligence (ASI) is a daunting task, and there is no definitive solution yet. However, there are some steps we can take to improve our chances of surviving an AI catastrophe and to steer toward more positive outcomes.
1. Align Superintelligence with Human Values
We need to design superintelligence so that it respects human values, desires, goals, and boundaries, not just its own, and ensure that it genuinely understands and weighs human emotions without malicious intent.
2. Ensure Transparency and Accountability
We should be able to monitor and understand how superintelligence works, what it does, and why. It should be held accountable for its actions and required to justify and explain its decisions.
3. Regulate and Govern Superintelligence
Following OpenAI’s initiative to govern the dynamics of superintelligence, we can establish clear and enforceable rules and standards for its development and limit its use. We need to cooperate and coordinate with other stakeholders, such as governments, companies, researchers, and civil society, to ensure the responsible use of ASI.
4. Support Research and Education on Superintelligence
We need to support and invest in scientific and ethical research on the potential benefits and risks of superintelligence, and educate ourselves and others about what living with it could mean.
Typical Characteristics of ASI
ASI refers to the stage of AI at which a system becomes more intelligent than humans in every respect. Below are the main features of ASI.
1. Smarter Than Humans: ASI would perform every cognitive task better than the human brain, including problem-solving, creative thinking, and deep understanding, with far fewer mistakes.
2. Software-Based System: ASI would be a software-based system whose abilities exceed those of humanity in many domains.
3. Cognitive Superiority: ASI may exceed human cognitive capacity, performing tasks and making judgments faster than people.
4. Theoretical Concept: At present, ASI is purely an idea; whether it becomes a harmful or beneficial reality remains to be seen.
5. Solves Tough Problems: Unlike existing AI, ASI should be able to tackle far harder problems and make complex decisions with ease.
Should you be Scared?
Superintelligence is not something we have experienced or observed before, so we know nothing about how it would behave. This uncertainty and unpredictability is a serious challenge, as we cannot rely on intuition or guesswork to understand it.
ASI could harm or deceive us, processing information far faster than the most intelligent humans.
This could lead to an intelligence explosion, in which humans can no longer keep up with the thought processes of superintelligence.
Just as with OpenAI’s Superalignment effort, states and other actors can join hands to study and prepare for this futuristic phenomenon and protect humanity from extinction.