Perspectives | 28 September, 2023

How do AI attacks impact people, organizations, and society?

By Manpreet Dash


AI has advanced rapidly in recent years, and while it has brought many benefits, it has also created new risks. One of these risks is adversarial attacks. In this article, we discuss what adversarial attacks are, look at real-world examples, examine their impact on society, explain how to defend against them, and close with the ethical considerations they raise.

Introduction to Adversarial Attacks

Adversarial attacks are a type of attack on artificial intelligence systems that aim to deceive or manipulate them. These attacks work by adding or modifying data in a way that is imperceptible to humans but can cause an AI system to misbehave. Adversarial attacks can be applied to a wide range of AI systems, including image, speech, and text recognition, and can be used to cause harm or extract sensitive information.

There are several ways to carry out adversarial attacks, including adding noise to images or modifying pixels, altering voices in audio files, or inserting fake text. These attacks are often carried out to fool AI systems into making incorrect decisions, such as misclassifying an image or recognizing a voice as someone else's. To understand AI attacks and the weaknesses of AI systems that attackers exploit, please read our articles What are AI attacks? and Why do AI attacks exist?
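To make the mechanism concrete, here is a minimal sketch of the fast gradient sign method (FGSM) from Goodfellow et al. (2014), the technique behind the panda example discussed later in this article. The PyTorch model, tensor shapes, and epsilon value are illustrative assumptions, not a recipe for any specific system:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.007):
    """FGSM (Goodfellow et al., 2014): nudge each pixel a tiny amount in
    the direction that most increases the model's loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # One small step in the sign of the gradient; clamping keeps pixel
    # values in the valid [0, 1] range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

With a small epsilon, the perturbed image is visually indistinguishable from the original to a human, yet it can flip the classifier's prediction.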

Real-World Examples of Adversarial Attacks

Adversarial attacks have already been demonstrated in the real world. In 2018, a team of researchers from the University of Michigan, University of Washington, University of California, Berkeley, and Samsung Research America showed how adversarial attacks could fool an AI system into misinterpreting road signs. By adding stickers to the signs, they tricked the system into classifying a stop sign as a speed limit sign. On the road, this could have grave consequences, leading to accidents.

Another example is deepfakes: manipulated videos that use AI algorithms to create realistic-looking footage of people saying or doing things they never did. These videos can be used to spread misinformation or defame individuals, harming reputations or causing panic in society. Adversarial examples can be just as striking in other modalities. Researchers at Google created an adversarial example in which a slightly modified picture of a panda was recognized by an image classifier as a gibbon with 99.3% confidence. Researchers at Georgetown University created an audio file that sounded like gibberish to human ears but was recognized by a speech recognition system as a specific phrase with a high degree of confidence. To learn more about the types of adversarial attacks on AI, read our article Understanding Types of AI attacks.

• A demonstration of fast adversarial example generation applied to GoogLeNet (Szegedy et al., 2014a) on ImageNet. | Source: Goodfellow, Ian; Shlens, Jonathon; Szegedy, Christian (2014). Explaining and Harnessing Adversarial Examples. arXiv:1412.6572.
• The input is the target Stop sign; the adversary prints out the resulting perturbations and sticks them to the sign, and the output is Speed Limit 45. | Source: Eykholt, Kevin, et al. (2018). Robust Physical-World Attacks on Deep Learning Visual Classification. CVPR 2018, pp. 1625-1634. doi:10.1109/CVPR.2018.00175.
• Adversarial traffic sign on the test track, as seen from a dashboard camera installed on the vehicle. | Source: Morgulis, Nir; Kreines, Alexander; Mendelowitz, Shachar; Weisglass, Yuval (2019). Fooling a Real Car with Adversarial Traffic Signs.

The Impact of Adversarial Attacks on Society

The impact of adversarial attacks on society can be far-reaching. Adversarial attacks can harm individuals or organizations, for example by stealing sensitive data or causing accidents. They can also be used to spread misinformation, damaging reputations and eroding society's trust in AI systems.

Adversarial attacks can also have significant consequences for national security. For example, an AI system that is vulnerable to adversarial attacks could be manipulated to disrupt military operations, potentially leading to loss of life or damage to critical infrastructure.

These attacks can have significant impacts on society, including:

• Security breaches: Adversarial attacks can result in security breaches that compromise sensitive data and systems. This can have serious consequences for individuals, organizations, and even governments.

• Safety risks: Adversarial attacks can also create safety risks by compromising the functioning of autonomous systems, such as self-driving cars or drones. If these systems are compromised, they could cause accidents or other dangerous situations.

• Economic impact: Adversarial attacks can also have economic consequences by causing disruption or damage to businesses that rely on machine learning models. This could include everything from financial fraud to industrial sabotage.

• Misinformation: Adversarial attacks can be used to spread misinformation or propaganda by manipulating the output of machine learning models. This can lead to confusion and distrust, and ultimately harm society's ability to make informed decisions. For example, researchers have shown that small, carefully chosen changes to input text can steer AI language models into generating misleading or fabricated content.

Adversarial patterns can be used in all sorts of ways to bypass AI systems, with substantial implications for future security systems, factory robots, and self-driving cars: all places where AI's ability to identify objects is crucial. "Imagine you're in the military and you're using a system that autonomously decides what to target," Jeff Clune, co-author of a 2015 paper on fooling images, told The Verge. "What you don't want is your enemy putting an adversarial image on top of a hospital so that you strike that hospital. Or if you are using the same system to track your enemies; you don't want to be easily fooled [and] start following the wrong car with your drone."

Video: Cybersecurity Awareness: Adversarial AI Attacks | Artificial Intelligence & Technology Office, Department of Energy

Overall, adversarial AI attacks pose a serious threat to society, and it is important to develop effective countermeasures to protect against them. This includes improving the security of machine learning systems, developing more robust models that are less susceptible to attacks, and creating better detection and mitigation strategies.

As AI technology continues to advance, so too will adversarial attacks. The use of AI in critical infrastructure, such as power grids or transportation systems, could make these systems vulnerable to attack. Similarly, the use of AI in healthcare or financial systems could have significant implications for individuals and society.

It is important to consider the potential consequences of adversarial AI and to ensure that appropriate measures are in place to prevent harm. This includes the development of ethical guidelines for the use of AI and the establishment of regulatory frameworks to ensure that AI is developed and used in a responsible and ethical manner.

How to Defend Against Adversarial AI Attacks

Defending against adversarial attacks requires a multi-faceted approach. One approach is to improve the robustness of AI algorithms through adversarial training: training models on adversarial examples so that they learn to resist them. This helps to surface weaknesses in the model and improves its resilience to attack.
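As an illustration, here is a minimal sketch of one adversarial training step in PyTorch, assuming a standard classifier and optimizer and reusing the hypothetical fgsm_attack helper sketched earlier in this article. Production schemes (for example, PGD-based training) are considerably more involved:

```python
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, images, labels, epsilon=0.03):
    """One training step on a mix of clean and FGSM-perturbed inputs,
    a common recipe for hardening a classifier."""
    # Craft adversarial versions of the current batch using the
    # fgsm_attack helper sketched earlier (an illustrative assumption).
    images_adv = fgsm_attack(model, images, labels, epsilon)

    model.train()
    optimizer.zero_grad()
    # Average the loss over clean and adversarial inputs so the model
    # learns to classify both correctly.
    loss = 0.5 * (F.cross_entropy(model(images), labels)
                  + F.cross_entropy(model(images_adv), labels))
    loss.backward()
    optimizer.step()
    return loss.item()
```

The even weighting between clean and adversarial loss is one simple choice; in practice the mix and the strength of the perturbation are tuned to balance robustness against accuracy on clean data.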

Another approach is to use anomaly detection techniques to identify potential adversarial attacks. This involves analyzing the input data to detect unusual patterns or discrepancies that could indicate an attack. AI-powered anomaly detection systems can flag suspicious inputs so that they can be handled quickly, and AI-powered security tooling can help identify and mitigate vulnerabilities in the algorithms themselves. The faster an attack is detected and contained, the smaller its impact.
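For illustration, the sketch below shows one very simple detection signal in PyTorch: flagging inputs on which the classifier's prediction confidence is unusually low. The function name and threshold are assumptions for this example, and, as the panda example above shows, strong attacks can produce high-confidence errors, so real deployments combine several such signals:

```python
import torch
import torch.nn.functional as F

def flag_low_confidence_inputs(model, images, threshold=0.7):
    """Cheap anomaly heuristic: flag inputs on which the classifier is
    unusually uncertain. This catches some out-of-distribution or
    crudely perturbed inputs, but strong attacks that produce
    high-confidence misclassifications will evade it."""
    model.eval()
    with torch.no_grad():
        probabilities = F.softmax(model(images), dim=1)
        confidence = probabilities.max(dim=1).values
    # True where the prediction confidence falls below the threshold.
    return confidence < threshold
```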
It is also important to have a comprehensive security strategy that includes regular vulnerability assessments and penetration testing. This helps to identify potential weaknesses in AI systems and to ensure that appropriate measures are in place to defend against attacks.

Key techniques and intervention points for AI security in model development and operations

Conclusion

Adversarial attacks are a hidden threat of AI that have the potential to cause significant harm to individuals and society. It is important to raise awareness of these attacks and to develop appropriate measures to defend against them. This includes the use of adversarial training, anomaly detection, and comprehensive security strategies.

As AI technology continues to advance, it is important to consider the potential ethical implications of adversarial AI and to ensure that appropriate measures are in place to prevent harm. By working together, we can harness the power of AI while protecting ourselves against the hidden threat of adversarial attacks.

Download our whitepaper for a detailed understanding of adversarial attacks, their impact on society, and how to defend against them. Please share it with your colleagues and friends to raise awareness of this important issue.