
"Cybersecurity for AI Systems: A Survey"

Raghu Sangwan—associate chief academic officer, director of engineering programs, and professor of software engineering—reflects on his recently published article "Cybersecurity for AI Systems: A Survey," architecting safe systems, and more.
By: Raghu Sangwan

Professor of Data Analytics Youakim Badr, Associate Professor of Information Science Satish Srinivasan, and I have been examining how artificial intelligence (AI) has enhanced the cybersecurity capabilities of traditional systems, with applications that include the detection of intrusions, malware, code vulnerabilities, and anomalies.

When it comes to AI systems, however, the story is quite different. Unlike traditional systems, AI systems embed machine learning models that are vulnerable to a new set of threats, commonly known as AI attacks. By exploiting these vulnerabilities, attackers can manipulate a system's behavior or obtain its internal details. In this research study, we wanted to explore these threats systematically, along with mechanisms to defend against them.

To understand this better, let me give you one example. In machine learning, we often deal with classification tasks, where the goal is to categorize input data into specific classes or categories. For example, given an image, we might want an AI system to classify it as either a cat or a dog. In this case, the AI system learns to make decisions that discriminate between a cat and a dog based on the features present in the input data.
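To make the setup concrete, here is a minimal sketch of such a classifier in PyTorch. The architecture and the "cat"/"dog" label mapping are purely illustrative assumptions, not a model from our survey; any image classifier would serve the same role.

```python
import torch
import torch.nn as nn

# A toy binary image classifier: maps a 3x64x64 image to "cat" vs. "dog" logits.
class ToyClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 2)  # two classes: 0 = cat, 1 = dog

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = ToyClassifier()
image = torch.rand(1, 3, 64, 64)        # stand-in for a real cat photo
predicted = model(image).argmax(dim=1)  # 0 -> "cat", 1 -> "dog"
print(["cat", "dog"][predicted.item()])
```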

Now imagine an attacker has an image of a cat and wants the AI system to misclassify it as a dog. The attacker can cleverly apply an imperceptible perturbation to the cat image. This slight change can trick the AI model into misclassifying the image as a dog. One of the goals in this study was to understand how the decision boundary—the line or surface within the AI system that separates instances of one class from the other—is exploited to deceive the system. This understanding can allow us to devise defense mechanisms against such attacks.
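One well-known way to craft such a perturbation—used here only as an illustration, not as the specific method analyzed in our survey—is the fast gradient sign method (FGSM), which nudges each pixel slightly in the direction that increases the model's loss for the true label. A minimal sketch, reusing the toy model and image from above:

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.01):
    """Fast gradient sign method: shift each pixel by a tiny amount in the
    direction that increases the loss for the true label, pushing the input
    toward the model's decision boundary while keeping the change small."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Usage with the toy classifier above (labels: 0 = cat, 1 = dog).
cat_label = torch.tensor([0])
adversarial_image = fgsm_perturb(model, image, cat_label)
print(["cat", "dog"][model(adversarial_image).argmax(dim=1).item()])
```

Because the change to each pixel is bounded by a small epsilon, the perturbed image looks unchanged to a human, yet it may now sit on the other side of the model's decision boundary.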

It's worth noting that adversarial attacks can take various forms and can be much more complex than this simple example. Our study examined attacks on AI systems that manifest in different types and with different degrees of severity, depending on whether they target the inputs given to the system, the training dataset used for learning, or the model's hyperparameters.

At Penn State, we teach our students to architect systems that are safe, secure, and trustworthy by design, rather than retrofitting these qualities as an afterthought. For example, one technique to defend against adversarial attacks is adversarial training, which involves augmenting the training data with adversarial examples. Exposing the model to manipulated examples during training teaches it to recognize and handle them appropriately, making it more robust against adversarial attacks.
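A minimal sketch of one adversarial-training step, building on the FGSM helper above; the 50/50 mix of clean and adversarial loss is an illustrative choice, not a prescription from the paper:

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, images, labels, epsilon=0.01):
    """One training step that mixes clean examples with FGSM-perturbed
    versions of the same batch, so the model learns to classify both."""
    adversarial = fgsm_perturb(model, images, labels, epsilon)
    optimizer.zero_grad()
    loss = (F.cross_entropy(model(images), labels)
            + F.cross_entropy(model(adversarial), labels)) / 2
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage with the toy classifier above and a random stand-in batch.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
batch_images = torch.rand(8, 3, 64, 64)
batch_labels = torch.randint(0, 2, (8,))
print(adversarial_training_step(model, optimizer, batch_images, batch_labels))
```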

No defense mechanism is entirely foolproof, however, and the arms race between attackers and defenders continues. AI attacks and defense mechanisms are active areas of research, and new techniques are being developed to address emerging challenges.

Raghu Sangwan is the associate chief academic officer, director of engineering programs, and professor of software engineering at Penn State Great Valley. He is also the director of the campus' Big Data Lab, which advances the state of research by collaborating with academic and industry partners around the globe, and by training the next generation of data scientists and engineers.

Read "Cybersecurity for AI Systems: A Survey" in the Journal of Cybersecurity and Privacy