Adversarial Machine Learning

Impact of adversarial and backdoor attacks on deep learning techniques

In the last couple of decades, machine learning and neural network applications have quickly become the state of the art in many automated tasks. Moreover, the spread of high-performance GPUs at affordable prices, along with the creation of increasingly simple-to-use frameworks, has made the implementation of neural network architectures accessible to everyone. Nevertheless, these techniques have been found to be highly exposed to malicious approaches. Malicious approaches usually refer to an adversarial scenario, in which an attacker tries to exploit the vulnerabilities of a system in order to gain an advantage from it.

This research project studies the impact of adversarial examples and backdoor attacks on CNN models for MRI medical image classification.
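To make the two attack families concrete, the sketch below illustrates (a) an FGSM-style adversarial example, computed analytically on a toy logistic classifier rather than on a real CNN, and (b) a backdoor-style data poisoning step that stamps a trigger patch onto an image. All weights, sizes, and function names here are illustrative assumptions, not part of the project's actual models.

```python
import numpy as np

# Toy logistic "classifier": p = sigmoid(w . x + b).
# Weights are random placeholders, for illustration only.
rng = np.random.default_rng(0)
w = rng.normal(size=16)
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_loss_wrt_x(x, y):
    # Gradient of the binary cross-entropy loss w.r.t. the INPUT x
    # (for this linear-logit model it is simply (p - y) * w).
    return (sigmoid(w @ x + b) - y) * w

def fgsm(x, y, eps=0.1):
    # Fast Gradient Sign Method: take one eps-sized step in the
    # sign of the input gradient to increase the loss.
    return x + eps * np.sign(grad_loss_wrt_x(x, y))

def stamp_trigger(img, value=1.0, size=3):
    # Backdoor-style poisoning: overwrite a small corner patch
    # (the "trigger") that the attacker associates with a target label.
    poisoned = img.copy()
    poisoned[-size:, -size:] = value
    return poisoned

# Adversarial example: the perturbation is bounded by eps per component.
x = rng.normal(size=16)
x_adv = fgsm(x, y=1.0, eps=0.1)

# Poisoned training image: clean background plus trigger patch.
img_poisoned = stamp_trigger(np.zeros((8, 8)))
```

In a real attack the gradient would be obtained by backpropagation through the CNN, and the trigger would be blended into a fraction of the training set together with the attacker's chosen label.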

M. Imran, H. K. Qureshi and I. Amerini, “BHAC-MRI: Backdoor and Hybrid Attacks on MRI Brain Tumor Classification Using CNN”, ICIAP 2023

G. Abbate, I. Amerini and R. Caldelli, “Image Watermarking Backdoor Attacks in CNN-based Classification Tasks”, AI4MFDD Workshop, ICPR 2022