Implementation of the work by Xiang et al. (ICLR Poster, Paper PDF). The full report will be available soon on OpenReview.
We examine the reproducibility of the quantitative results reported by Xiang et al. Since no publicly available implementation currently exists, we provide our own in PyTorch.
As the authors do not provide training details in their work, we do not aim to reproduce the exact reported metrics. Instead, we focus on the claims that the proposed complex-valued networks are secure against inversion and property inference attacks while maintaining performance similar to their real-valued counterparts.
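To illustrate the core privacy mechanism being tested, here is a minimal sketch of hiding a feature representation by rotating it with a secret random phase in the complex plane. This is our own simplified illustration, not the authors' exact formulation; the function names and shapes are our choices.

```python
import numpy as np

def encode(features: np.ndarray, rng: np.random.Generator):
    """Rotate a complex-valued feature map by a random phase theta.

    An attacker who intercepts the rotated representation cannot
    trivially invert it without knowing theta. (Illustrative sketch
    only; the paper's construction is more involved.)
    """
    theta = rng.uniform(0.0, 2.0 * np.pi)
    rotated = features * np.exp(1j * theta)
    return rotated, theta

def decode(rotated: np.ndarray, theta: float) -> np.ndarray:
    """Undo the rotation using the secret angle theta."""
    return rotated * np.exp(-1j * theta)

# Round trip: decoding with the correct theta recovers the features.
rng = np.random.default_rng(0)
x = rng.standard_normal(8) + 1j * rng.standard_normal(8)
hidden, theta = encode(x, rng)
recovered = decode(hidden, theta)
```

Decoding with the wrong angle yields a rotated (and thus different) feature vector, which is the intuition behind the claimed resistance to inversion attacks.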
- Python 3.6 or greater.
- Dependencies can be installed with `pip install -r requirements.txt`.
We include several example shell scripts that show how to train the classification models and the various attacker models.
Our results can be reproduced by running the provided Jupyter notebook. The notebook requires the checkpoints of our trained models, which can be downloaded here; the downloaded zip archive must be extracted into the root directory for the notebook to work properly. We use both the CIFAR-10 and CIFAR-100 datasets in the notebook; both are downloaded automatically by PyTorch and require no further preparation. Running the whole notebook takes about 1-2 hours.