The 10 Biggest Risks and Dangers of Artificial Intelligence (AI)
In May 2023, Geoffrey Hinton, often called the ‘Godfather of AI’, announced his resignation from Google so that he could speak openly about the dangers of artificial intelligence (AI), citing concerns over the technology he helped develop and his own role in advancing it.
“I left so that I could talk about the dangers of AI without considering how this impacts Google,” he said.
According to a 2023 survey of AI experts, 36% of respondents fear that AI development could result in a “nuclear-level catastrophe.”
Risks of Artificial Intelligence
- Job Displacement
- Bias and Discrimination
- Privacy Concerns
- Dependence on Technology
- Security Risks
- Ethical and Moral Issues
- Control and Autonomy Loss
- Economic Inequality
- Accountability Challenges
- Unpredictable Developments
The Dangers of Artificial Intelligence (AI)
1. Lack of Transparency
AI systems that lack transparency can be difficult to apply in the real world; when results, no matter how positive, cannot be reproduced or explained, the work is rendered effectively “worthless.”
A core difficulty in rooting out problems such as bias is that many AI systems are “black boxes” that offer no visibility into how they make decisions.
The algorithms that power these machine learning systems are so complex that even their developers may not fully understand how they work.
This can erode trust, not just in the model but in its explanations as well; Apple’s credit card business, for example, has been accused of using sexist lending models. You may call it AI’s “transparency paradox”.
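To make the “black box” problem concrete, here is a minimal sketch, assuming scikit-learn and purely synthetic data (the lending framing is hypothetical), showing how an accurate model can still resist per-decision explanation; post-hoc tools such as permutation importance give only an approximate, global view of its behavior:

```python
# A minimal sketch of the "black box" problem on synthetic data.
# All features and numbers here are illustrative, not a real lending model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A 200-tree forest predicts accurately, but no single human-readable
# rule explains why any one "applicant" was approved or rejected.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("accuracy:", model.score(X_test, y_test))

# Post-hoc explanation only approximates the model's reasoning:
# permutation importance ranks features globally, not per decision.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i in np.argsort(result.importances_mean)[::-1][:3]:
    print(f"feature {i}: mean importance {result.importances_mean[i]:.3f}")
```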
2. Privacy Concerns
AI poses privacy concerns such as data persistence, data repurposing, and data spillovers, all of which can allow data to be used beyond its original purpose or to fall into the wrong hands through hacking or other security breaches.
With an estimated 328.77 million terabytes of data created each day, vast amounts of personal information are collected and consumed, and AI systems are unpredictable enough that they may not secure your confidential data as well as you think.
Take facial recognition systems for example; they can intrude on privacy interests by raising the analysis of personal information to new levels of speed and power.
The California Consumer Privacy Act (CCPA) requires businesses to notify users of the types of information being gathered, provide a way to opt out of certain kinds of data collection, and let users decide whether their data can be sold.
It also prohibits businesses from discriminating against users who exercise these rights.
3. AI Bias
AI bias occurs when a system consistently produces different outputs for one group of people than for another.
These biases typically track classic societal fault lines such as race, gender, biological sex, nationality, or age. Amazon, for example, scrapped an AI recruiting tool after discovering that it discriminated against women.
The system reportedly taught itself that male candidates were preferable and penalized résumés that included the word “women’s”, as in “women’s chess club captain”.
According to people familiar with the matter, it even downgraded graduates of two all-women’s colleges.
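One common way to surface this kind of bias is to compare a model’s positive-outcome rates across groups. The toy sketch below uses entirely synthetic data, with the “hired” decisions deliberately skewed for illustration, to compute one standard fairness metric, the demographic parity difference:

```python
# Toy illustration (hypothetical data) of measuring outcome bias:
# the gap in positive-decision rates between two demographic groups.
import numpy as np

rng = np.random.default_rng(0)
group = rng.choice(["A", "B"], size=1000)  # two hypothetical groups

# Simulated model decisions, deliberately skewed against group B.
hired = np.where(group == "A",
                 rng.random(1000) < 0.6,
                 rng.random(1000) < 0.4)

rate_a = hired[group == "A"].mean()
rate_b = hired[group == "B"].mean()
print(f"selection rate A: {rate_a:.2f}, B: {rate_b:.2f}")
# A large gap here is the kind of signal that exposed the Amazon tool.
print(f"demographic parity difference: {abs(rate_a - rate_b):.2f}")
```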
4. Automation-Spurred Job Loss
Automation-spurred job loss occurs when people lose their jobs or see their wages reduced because of the adoption of artificial intelligence (AI) and automation technologies across industries.
It’s a potential risk that many experts and workers are concerned about, as AI becomes more capable and widespread.
According to some studies, AI-driven automation could displace 400 to 800 million jobs by 2030, forcing as many as 375 million people to switch job categories entirely; some estimates suggest a billion people could lose their jobs within the next decade.
Despite these concerns about job replacement and unemployment, Goldman Sachs contends that automation sparks innovation and that new roles, such as prompt engineering, can create more jobs.
Other studies suggest, however, that the fear of AI’s impact exceeds its actual effect: a study by the University of Warwick found that only a quarter of firms have introduced some form of AI-based technology, and those firms have not suffered major net job losses. Forbes likewise states that “AI Creates Job Disruption But Not Job Destruction”.
5. Deep Fakes
A deepfake is an audio or video clip manipulated using artificial intelligence to make it sound or look real. Deepfakes are a menace to society for several reasons, and they have already sparked fear and eroded trust through misinformation campaigns.
They could jeopardize national security, interfere with the stock market, influence political elections, and enable corporate espionage. In 2019, an audio deepfake was used to scam a CEO out of $243,000.
Moreover, deepfake pornography is frequently used to humiliate victims and destroy their reputations and careers, and the technology poses a further danger in the hands of domestic abusers seeking to blackmail and exert control.
Recently, a Twitch streamer was caught watching deepfake pornography of a fellow gamer.
QTCinderella, the victim, was horrified to learn of it and described the toll on her mental health, including body dysmorphia, in a tweet:
“It’s not as simple as ‘just’ being violated. It’s so much more than that.”
According to a report by Sensity, experts had detected over 85,000 harmful, misleading deepfake videos as of December 2020.
6. Weapon Automation
Automated weapons are among the biggest dangers of AI. Their risks have long been explored in science fiction literature and cinema, from the German film Metropolis (1927) to Ray Bradbury’s 1953 dystopian novel Fahrenheit 451, but now they are a reality.
Lethal Autonomous Weapon Systems (LAWS) are machines crafted to identify, engage, and destroy targets without human control.
The dangers of artificial intelligence grow as robots such as drones become prevalent in the military, and fully autonomous robots optimized for warfare could be the next big thing.
These deployments have already drawn concern over an incident in March 2020 in Libya, where military forces supporting the UN-backed Government of National Accord (GNA) defeated troops loyal to the Libyan National Army (LNA).
According to a UN report, the retreating LNA troops were then attacked by an unmanned aerial vehicle, the STM Kargu-2.
This lethal weapon operated without constant communication or human control, functioning as a “fire, forget and find” system.
7. Social Media Manipulation Through AI Algorithms
The manipulation of social media through AI algorithms can easily spread disinformation and alter users’ opinions and perspectives.
A striking example: Facebook’s algorithms played a massive role in enabling disinformation campaigns from Eastern Europe to reach nearly half of all Americans before the 2020 U.S. presidential election.
These campaigns ran the most popular pages for Christian and Black American content, which reached 140 million U.S. users every month.
Remarkably, 75% of the exposed individuals had never followed any of the pages; the content appeared in their news feeds through Facebook’s content recommendation system. This highlights the risks of algorithm-driven content distribution on social media platforms.
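The mechanism is easy to illustrate: a feed ranked purely by predicted engagement will happily surface posts from pages a user never followed. The toy sketch below is hypothetical; all page names and scores are invented, and real ranking systems are far more complex:

```python
# Toy sketch of engagement-based feed ranking (all data hypothetical).
# Candidates include posts from pages the user does not follow; ranking
# by predicted engagement alone can still push them to the top.
posts = [
    {"page": "followed_page",   "followed": True,  "predicted_engagement": 0.31},
    {"page": "troll_farm_page", "followed": False, "predicted_engagement": 0.87},
    {"page": "news_page",       "followed": True,  "predicted_engagement": 0.54},
]

# Sort purely by how much engagement the system predicts each post gets.
feed = sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)
for post in feed:
    flag = "" if post["followed"] else "(never followed)"
    print(post["page"], flag)
```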
8. Social Surveillance With AI
In China, social media surveillance is a prevalent example of AI technology’s impact on society.
The Chinese firm Semptian developed the Aegis surveillance system, capable of analyzing unlimited data and monitoring over 200 million internet users. It functions as a “national firewall,” similar to the Great Firewall regulating China’s internet traffic.
This system allows Chinese authorities to access user content and metadata, facilitating the identification and punishment of individuals sharing sensitive content. In one incident, a member of the Uighur Muslim minority was detained due to a contact’s location check-in from Mecca.
Several Chinese provincial governments are developing a “Police Cloud” system, aggregating data from social media, telecom records, e-commerce activities, biometrics, and video surveillance to target individuals based on interactions with certain groups or ethnicities, particularly the Uighur Muslims.
9. Socioeconomic Inequality Caused by Automation
Economist Erik Brynjolfsson notes that automation, a key element of AI, depresses wages for many while enhancing the market power of the few who control these technologies.
MIT economist Daron Acemoglu points out that 50 to 70% of the growth in US wage inequality from 1980 to 2016 was due to automation.
AI’s trend of replacing human tasks with “so-so technologies” that don’t significantly boost productivity or create new business opportunities further aggravates this inequality.
10. Loss of Human Influence
AI’s rise curbs human decision-making. Automated trading algorithms in finance are a clear example: these systems make rapid, complex trades, outperforming human traders in speed and efficiency. Yet they can cause unexpected market swings, sidelining human judgment and expertise and producing situations like the 2010 Flash Crash, when market stability was briefly upended.
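A toy simulation of the feedback loop behind such swings, with entirely hypothetical prices and thresholds: each algorithm’s automated sell order pushes the price low enough to trigger the next, faster than any human could intervene:

```python
# Toy feedback-loop sketch (hypothetical numbers, not a market model):
# each bot sells when the price drops past its threshold, and each sale
# pushes the price lower, triggering the next bot in a cascade.
price = 100.0
sell_thresholds = sorted([99.0, 97.5, 96.0, 94.0], reverse=True)

price -= 1.5  # an initial, ordinary dip
print(f"initial dip -> price {price:.2f}")
for threshold in sell_thresholds:
    if price <= threshold:
        price *= 0.97  # each automated sell-off depresses the price further
        print(f"bot at threshold {threshold} sells -> price {price:.2f}")
```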
How AI’s Advancements Pose New Dangers to Society
A team of Microsoft researchers who analyzed OpenAI’s GPT-4 argued in a recent paper that it showed “sparks of artificial general intelligence.”
In testing, GPT-4 scored around the 90th percentile of human test takers on the Uniform Bar Exam, a standardized test used to certify lawyers to practice in many states.
Its predecessor, GPT-3.5, which was trained on a smaller dataset, scored near the bottom 10% of test takers on the same exam.
Later, in March 2023, more than 1,000 tech leaders, researchers, and other experts who have worked with artificial intelligence signed an open letter warning that AI technology poses “profound risks to society and humanity”.
The signatories, who included Elon Musk and Steve Wozniak, urged AI labs to pause the development of new AI systems for at least six months, citing the potential risk to society and humanity.
The letter warns that AI may flood information channels with misinformation and replace jobs with automation.