JSAN, Vol. 14, Pages 83: Can Differential Privacy Hinder Poisoning Attack Detection in Federated Learning?

Journal of Sensor and Actuator Networks, doi: 10.3390/jsan14040083

Authors:
Chaitanya Aggarwal
Divya G. Nair
Jafar Aco Mohammadi
Jyothisha J. Nair
Jörg Ott

We consider the problem of detecting data poisoning attacks in a federated learning (FL) setup with differential privacy (DP). Local DP in FL controls the privacy leakage caused by shared gradients by adding randomness to the training process. We study the effect of the Gaussian mechanism on the detection of different data poisoning attacks: since the additive DP noise can mask poisoned data, the effectiveness of detection algorithms must be analyzed. We present two poisoned-data detection algorithms and one malicious-client identification algorithm. For the latter, we show that the effect of DP noise decreases as the size of the neural network increases. We demonstrate this effect, alongside the performance of these algorithms, on three publicly available datasets.
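To make the setup concrete, the sketch below shows how a client-side Gaussian mechanism is commonly applied to a model update before it is shared with the server: the gradient is clipped to a fixed L2 norm and then perturbed with Gaussian noise scaled to that norm. This is a minimal illustration of the standard DP-SGD-style recipe, not the paper's exact algorithm; the function name and the parameters clip_norm and noise_multiplier are assumptions chosen for illustration.

import numpy as np

def gaussian_mechanism(grad: np.ndarray, clip_norm: float,
                       noise_multiplier: float,
                       rng: np.random.Generator) -> np.ndarray:
    # Clip the client gradient to L2 norm <= clip_norm, then add
    # Gaussian noise with std = noise_multiplier * clip_norm
    # (the usual DP-SGD-style recipe; parameters are illustrative).
    norm = np.linalg.norm(grad)
    clipped = grad / max(1.0, norm / clip_norm)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=grad.shape)
    return clipped + noise

# Hypothetical usage: each client perturbs its update before sharing it.
rng = np.random.default_rng(0)
client_grad = rng.standard_normal(10_000)  # stand-in for a model update
private_grad = gaussian_mechanism(client_grad, clip_norm=1.0,
                                  noise_multiplier=1.1, rng=rng)

The noise_multiplier controls the privacy/utility trade-off: larger values yield stronger DP guarantees but, as the paper investigates, can also mask the statistical signature of poisoned updates that detection algorithms rely on.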



Source: www.mdpi.com