
What are AI attacks?

By Manpreet Dash

Introduction

AI is the game changer of this decade, rapidly transforming our world and everyday life. The underlying technology, machine learning (ML), is all around us: ML decides whether your loan is sanctioned, how much you pay for health insurance, and which medical procedures are used to treat your illness. ML is being rapidly adopted across industries such as healthcare, BFSI, automotive, telecommunications, and manufacturing, which makes it a very enticing target for cyber adversaries.

In the 21st century, a malicious actor does not necessarily need bombs, viruses, or weapons to cause havoc. All they might need is some electrical tape and a good pair of walking shoes. By placing a few small pieces of tape inconspicuously on a stop sign, they can magically convert it into a green light in the eyes of an autonomous car. Done at large intersections in major metropolitan areas, this could kill people and bring an entire urban transportation system to its knees. It is hard to imagine that kind of damage from a one-dollar investment in tape. How is that even possible? Because AI algorithms come with an inherent limitation.

AI systems can be attacked

Learning is what gives AI systems an edge: they continue to train, evolve, and mature as data accumulates. However, the way they learn exposes them to novel risks. They can be attacked and controlled by a bad actor with malicious intent. What humans see as a slightly damaged stop sign, a compromised artificial intelligence system sees as a green light. This is what we call an "artificial intelligence attack" (AI attack).

This vulnerability stems from inherent limitations in state-of-the-art AI methods that leave them open to a devastating set of attacks. Under one type of attack, adversaries can gain control over a state-of-the-art AI system with a small but carefully chosen manipulation, ranging from a piece of tape on a stop sign to a sprinkling of digital dust invisible to the human eye on a digital image. Under another, adversaries can poison AI systems, installing backdoors that can be triggered at a time and place of their choosing to destroy the system. In broad terms, an ML model can be attacked in three different ways (the first and third are sketched in code below):

  • It can be fooled into making a wrong prediction (e.g., to bypass spam detection)
  • It can be altered (e.g., made biased, inaccurate, or even malicious, say in medical reimbursement decisions)
  • It can be replicated, in other words stolen: IP theft carried out by repeatedly querying the model
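
To make the first of these concrete, below is a minimal sketch of an evasion attack using the fast gradient sign method (FGSM), one of the simplest ways to craft the kind of "digital dust" described above. The tiny classifier, random input, and epsilon value are illustrative assumptions for this sketch, not a description of any particular deployed system:

    # Minimal sketch of an evasion attack via the fast gradient sign
    # method (FGSM). The toy classifier, random input, and epsilon are
    # illustrative assumptions, not any real production system.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Stand-in image classifier over flattened 8x8 grayscale patches.
    model = nn.Sequential(nn.Linear(64, 16), nn.ReLU(), nn.Linear(16, 2))
    loss_fn = nn.CrossEntropyLoss()

    x = torch.rand(1, 64, requires_grad=True)  # clean input image
    y = torch.tensor([0])                      # true label, e.g. "stop sign"

    # Take the gradient of the loss with respect to the *input*,
    # not the model weights.
    loss_fn(model(x), y).backward()

    # FGSM: nudge every pixel one small step in the direction that
    # increases the loss, then keep pixel values in a valid range.
    epsilon = 0.1
    x_adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

    print("clean prediction:      ", model(x).argmax(dim=1).item())
    print("adversarial prediction:", model(x_adv).argmax(dim=1).item())

Against a trained classifier, even a perturbation this small can flip the predicted label: the digital analogue of a few pieces of tape on a stop sign.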

Whether it is causing a car to drive uncontrollably through a red light, gaming a medical imaging analysis system into recommending unnecessary procedures, manipulating reimbursement-approval systems, deceiving a reconnaissance drone searching for enemy activity, or subverting content filters to post propaganda on social networks, the danger is serious, widespread, and already here.
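
The third kind of attack, replication, needs nothing more than query access. Here is a minimal sketch, where victim_predict is a hypothetical stand-in for a real black-box prediction API:

    # Minimal sketch of model extraction: fit a surrogate model to the
    # responses of a black-box model. victim_predict is a hypothetical
    # stand-in for a remote prediction API.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def victim_predict(x):
        # Pretend remote endpoint; the attacker sees only its outputs.
        return (x @ np.array([2.0, -1.0]) > 0).astype(int)

    queries = rng.uniform(-1, 1, size=(500, 2))  # attacker-chosen inputs
    labels = victim_predict(queries)             # observed responses

    surrogate = LogisticRegression().fit(queries, labels)  # stolen copy
    agreement = (surrogate.predict(queries) == labels).mean()
    print(f"surrogate agrees with victim on {agreement:.0%} of queries")

After enough queries, the surrogate closely approximates the victim model's decision boundary; the attacker has effectively stolen the model without ever touching its code or weights.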

In our webinar, "Cybersecurity for AI in Digital Health", almost 80% of poll respondents said they believed AI attacks are common across organizations.

“AI attacks” are very different from traditional cyberattacks

AI attacks are different from the cybersecurity problems that have dominated recent headlines. They do not exploit bugs in code that can simply be patched; they exploit limitations inherent in the algorithms themselves. As a result, exploiting these AI vulnerabilities does not necessarily require "hacking" the targeted system. In fact, attacking these critical systems does not always require a computer at all.

Further, AI attacks fundamentally expand the set of entities that can be used to execute cyberattacks. For the first time, physical objects can now be used in cyberattacks (e.g., a few pieces of tape placed on a stop sign can transform it into a green light in the eyes of a self-driving car). This is a new class of cybersecurity problem, one that cannot be solved with the existing technologies and toolkits businesses and governments have assembled. Addressing it will require new approaches and solutions.

Conclusion

Machine learning has reached another tipping point: it is now more accessible than ever and no longer requires a strong foundation in data science or statistics. As ML models become simpler to deploy, use, and integrate into our programming toolboxes, there is more room for security flaws and vulnerabilities to emerge.

An obscure topic within artificial intelligence, until recently the preoccupation of a minor subdiscipline of computer science, is on a perilous collision course with economic, health, military, and social security. Adversarial machine learning (AML) sits at the leading edge of cybersecurity and may not be as well known as your neighborhood ransomware gang, but we must ask: are AI-powered organizations aware of these risks and actively managing them?

AIShield helps enterprises safeguard the AI assets powering their most important products with an extensive security platform. Through its SaaS-based API, AIShield provides enterprise-class AI model security vulnerability assessment and threat-informed defense mechanisms for a wide variety of AI use cases across all industries. For more information, visit www.boschaishield.com and follow us on LinkedIn.

To understand the novel risks affecting AI systems and to review perspectives on AI security from the research community, businesses, and regulators, please read our Whitepaper on AI Security.

Key takeaways from this article

  • Your AI systems can be fooled, altered, and replicated – all by exploiting vulnerabilities and without “hacking” the targeted system.
  • The learning mechanism that makes AI systems highly effective is also their fundamental limitation: adversaries can attack and control them, causing the system to fail.
  • These AI attacks are fundamentally different from traditional cybersecurity attacks.