This repository contains the PyTorch implementation of the paper: Toward Efficient Defense Against Model Poisoning Attacks in Privacy-Preserving Federated Learning.
The repository contains one Jupyter notebook per benchmark, which can be used to reproduce the experiments reported in the paper for that benchmark. Each notebook contains clear instructions on how to run the experiments.
The MNIST and CIFAR10 datasets will be downloaded automatically.
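
For reference, the automatic download is the standard `torchvision.datasets` behavior. Below is a minimal sketch of how these datasets are typically fetched; it assumes the usual torchvision API and a `./data` directory, and is not necessarily the repository's exact loading code:

```python
# Minimal sketch (assumed, not the repository's actual code): torchvision
# downloads each dataset on first use when download=True.
import torchvision
import torchvision.transforms as transforms

transform = transforms.ToTensor()

# Downloaded to ./data if not already present.
mnist_train = torchvision.datasets.MNIST(
    root="./data", train=True, download=True, transform=transform
)
cifar_train = torchvision.datasets.CIFAR10(
    root="./data", train=True, download=True, transform=transform
)
```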
The results of these experiments are reported in the paper.