# attack_pandas

This code is based on the baseline code provided by the organizers.
Solution authors: @atmyre, @mortido, @snakers4, @stalkermustang

More info:

- Our solution presentation at MCS2018
- Top-3 winners' presentation
- Presentations at MCS2018 summit: video
- Yandex ML Training 07.07.2018: video

Table of contents:

- Solution overview
- How to reproduce
- Train mortido's CNNs (optional)
- Useful links

## Solution overview

This code trains several white-box models on data obtained from the black-box model, then runs iterations of FGSM attacks with heuristics and genetic one-pixel attacks against the white boxes:

*(pipeline diagram)*
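To make the pipeline concrete, below is a minimal sketch of the core FGSM-style step: iteratively nudging an image so that the descriptors produced by an ensemble of white-box models move towards the target descriptors. The function name, loss, and hyperparameters are illustrative assumptions, not the exact logic in `attacker.py`.

```
import torch

def ifgsm_attack(img, target_desc, white_boxes, eps=0.01, n_iter=10, max_l_inf=0.04):
    """Illustrative iterative FGSM: push the ensemble descriptors of `img`
    towards `target_desc` while keeping the perturbation bounded."""
    orig = img.clone()
    adv = img.clone().requires_grad_(True)
    for _ in range(n_iter):
        # Mean L2 distance between current and target descriptors over the ensemble.
        loss = torch.stack(
            [torch.norm(m(adv) - target_desc, dim=1).mean() for m in white_boxes]
        ).mean()
        grad, = torch.autograd.grad(loss, adv)
        with torch.no_grad():
            adv -= eps * grad.sign()                                     # FGSM step towards the target
            adv.copy_(orig + (adv - orig).clamp(-max_l_inf, max_l_inf))  # bound total perturbation
            adv.clamp_(0.0, 1.0)                                         # keep a valid image
    return adv.detach()
```

In the actual pipeline, additional heuristics sit on top of a loop like this, and the genetic one-pixel attacks are applied afterwards.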

## How to reproduce

### 1. Set up the environment and get data

You can use our Dockerfile to build a Docker container:

**How to build the docker container**

To build the Docker image from the Dockerfile located in `dockerfile`, please run:

```
cd dockerfile
docker build -t face_attack_docker .
```

Also, please make sure that nvidia-docker2 and proper NVIDIA drivers are installed.

To test the installation, run:

```
docker run --runtime=nvidia --rm nvidia/cuda nvidia-smi
```

Then launch the container as follows:

```
docker run --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=all -it -v /your/folder/:/home/keras/notebook/your_folder -p 8888:8888 -p 6006:6006 --name face_attack --shm-size 16G face_attack_docker
```

Please note that without `--shm-size 16G` the PyTorch DataLoader classes will not work. The above command starts a container with a Jupyter notebook server available on port 8888; port 6006 is exposed for TensorBoard, if needed.
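The shared-memory requirement comes from how PyTorch DataLoader workers hand batches back to the main process. A minimal sketch with a dummy dataset, purely for illustration:

```
import torch
from torch.utils.data import DataLoader, TensorDataset

# Dummy dataset just for illustration.
dataset = TensorDataset(torch.randn(1024, 3, 112, 112))

# With num_workers > 0, worker processes pass tensors to the main process
# through shared memory (/dev/shm), which is why the container needs --shm-size 16G.
loader = DataLoader(dataset, batch_size=64, shuffle=True, num_workers=4)

for (batch,) in loader:
    pass  # training / attack code would consume the batches here
```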

You can then exec into the container as shown below. All the scripts were run as root, but they should also work under the user `keras`:

```
docker exec -it --user root 46b9bd3fa3f8 /bin/bash
```

or

```
docker exec -it --user keras 46b9bd3fa3f8 /bin/bash
```

To find out the container ID, run:

```
docker container ls
```

Alternatively, download the black box manually for your CUDA / PyTorch environment and put it into the root of your folder with this repo. The available builds are:

| OS | python 2.7 | python 3.5 | python 3.6 |
| --- | --- | --- | --- |
| Ubuntu | CPU, GPU (cuda8.0), GPU (cuda9.0), GPU (cuda9.1), GPU (cuda9.2) | CPU, GPU (cuda8.0), GPU (cuda9.0), GPU (cuda9.1), GPU (cuda9.2) | CPU, GPU (cuda8.0), GPU (cuda9.0), GPU (cuda9.1), GPU (cuda9.2) |
| CentOS | CPU, GPU (cuda8.0) | CPU, GPU (cuda8.0) | CPU, GPU (cuda8.0) |
| Windows | CPU | CPU | CPU |
| MacOS | CPU | CPU | CPU |

Download the pairs data, student_model_imgs, submit list and pairs list, and move them to `data`:

```
python downloader.py --root ./data --main_imgs --student_model_imgs --submit_list --pairs_list
```

Prepare data for the student model:

```
python prepare_data.py --root data/student_model_imgs/ --datalist_path data/datalist/ --datalist_type train --gpu_id 1;
python prepare_data.py --root data/imgs/ --datalist_path data/datalist/ --datalist_type val --gpu_id 1
```

### 2. Download pre-trained weights

Download the weights for the 5 models and place them in `student_net_learning/checkpoint/`:

### 3. Run attack inference

Run the attack scripts using the above pre-trained weights, or proceed to the training section below.

First, attack using the Fast Gradient Value Method:

```
python attacker.py --root ./data/imgs/ --save_root ./dual_net_new/ --datalist ./data/pairs_list.csv --start_from 0 --attack_type IFGM \
--model_name resnet34 ResNet50 Xception resnet18 densenet161 \
--checkpoint \
student_net_learning/checkpoint/resnet34_scale_fold0_best.pth.tar \
student_net_learning/checkpoint/ResNet50/best_model_chkpt-resnet50.t7 \
student_net_learning/checkpoint/Xception/best_model_chkpt-xception.t7 \
student_net_learning/checkpoint/resnet18_scale_fold0_best.pth.tar \
student_net_learning/checkpoint/densenet161_1e4_scale_fold0_best.pth.tar --cuda
```

Please note that full inference may take 30+ hours. The easiest way to speed it up is to run the script in several processes in parallel, each starting from a different offset via the `--start_from` parameter.

Then run one instance of the one-pixel attack:

```
python attacker.py --root ./dual_net_new --save_root ./dual_net_new_op/ \
--datalist ./data/pairs_list.csv --cuda --start_from 0 --attack_mode continue --attack_type OnePixel --iter 16
```

Then run one more instance of the one-pixel attack:

```
python attacker.py --root ./dual_net_new_op_15 --save_root ./FINAL_FINAL/ \
--datalist ./data/pairs_list.csv --cuda --start_from 0 --attack_mode continue --attack_type OnePixel-last-hope --iter 5
```
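For intuition, here is a minimal sketch of a genetic (differential-evolution style) one-pixel attack: a population of single-pixel edits is evolved to minimise a fitness score such as the descriptor distance on the white boxes. The candidate encoding and hyperparameters are assumptions for illustration, not the exact `OnePixel` code in `attacker.py`.

```
import numpy as np

def one_pixel_attack(img, fitness, pop_size=30, n_iter=16, f=0.5):
    """Differential-evolution sketch over candidates encoded as (x, y, r, g, b).

    `img` is an HxWx3 float array in [0, 1]; `fitness(candidate)` returns a
    score to minimise, e.g. the descriptor distance averaged over white boxes.
    """
    h, w, _ = img.shape

    def apply(cand):
        out = img.copy()
        x, y = int(cand[0]) % w, int(cand[1]) % h
        out[y, x] = np.clip(cand[2:5], 0.0, 1.0)
        return out

    # Random initial population of single-pixel edits.
    pop = np.random.rand(pop_size, 5) * np.array([w, h, 1.0, 1.0, 1.0])
    scores = np.array([fitness(apply(c)) for c in pop])

    for _ in range(n_iter):
        for i in range(pop_size):
            a, b, c = pop[np.random.choice(pop_size, 3, replace=False)]
            trial = a + f * (b - c)            # DE/rand/1 mutation
            s = fitness(apply(trial))
            if s < scores[i]:                  # greedy selection
                pop[i], scores[i] = trial, s

    return apply(pop[scores.argmin()])
```

The `--iter` flags in the commands above presumably control a similar iteration count.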

### 4. Make a submission

Check SSIM for the submission, archive all the files and build the submission:

```
python evaluate.py --attack_root ./FINAL_FINAL/ --target_dscr ./data/val_descriptors.npy --submit_name final_final --gpu_id 0
```
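Individual pairs can also be spot-checked for SSIM with something like the following. This is a minimal sketch using scikit-image; the 0.95 threshold and file paths are assumptions for illustration, not taken from `evaluate.py`.

```
import numpy as np
from PIL import Image
from skimage.metrics import structural_similarity as ssim

def check_pair(src_path, adv_path, threshold=0.95):
    """Compare an attacked image against its source and flag SSIM violations."""
    src = np.asarray(Image.open(src_path).convert('RGB'))
    adv = np.asarray(Image.open(adv_path).convert('RGB'))
    # Older scikit-image versions use multichannel=True instead of channel_axis.
    score = ssim(src, adv, channel_axis=2)
    return score, score >= threshold

# Example with illustrative paths:
# score, ok = check_pair('data/imgs/000001.jpg', 'FINAL_FINAL/000001.png')
```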

## Train mortido's CNNs (optional)

The original training script logs are provided below without alterations. They require code from the original repository.


```
======================================
xception redesign
=====================================
python main.py --name Xception --model_name Xception --epochs 6 --down_epoch 2 --cuda --batch_size 64 --datalist ../data/data_list/ --root C:/ --lr 0.001 --finetune --criterion HUBER --resume --max_train_imgs 100000
python main.py --name Xception --model_name Xception --epochs 3 --down_epoch 1 --cuda --batch_size 64 --datalist ../data/data_list/ --root C:/ --lr 0.0001 --finetune --ignore_prev_run --resume --max_train_imgs 100000
python main.py --name Xception --model_name Xception --epochs 2 --down_epoch 1 --cuda --batch_size 64 --datalist ../data/data_list/ --root C:/ --lr 0.0001 --ignore_prev_run --resume --max_train_imgs 100000
(accidentely 3 epochs with frozen layers...)
python main.py --name Xception --model_name Xception --epochs 3 --down_epoch 2 --cuda --batch_size 64 --datalist ../data/data_list/ --root C:/ --lr 0.0001 --finetune --ignore_prev_run --resume --max_train_imgs 100000 
python main.py --name Xception --model_name Xception --epochs 1 --down_epoch 1 --cuda --batch_size 64 --datalist ../data/data_list/ --root C:/ --lr 0.0001 --ignore_prev_run --resume --max_train_imgs 100000
python main.py --name Xception --model_name Xception --epochs 1 --down_epoch 1 --cuda --batch_size 64 --datalist ../data/data_list/ --root C:/ --lr 0.0001 --ignore_prev_run --resume --max_train_imgs 500000
python main.py --name Xception --model_name Xception --epochs 2 --down_epoch 1 --cuda --batch_size 32 --datalist ../data/data_list/ --root C:/ --lr 0.0001 --ignore_prev_run --resume
python main.py --name Xception --model_name Xception --epochs 3 --down_epoch 1 --cuda --batch_size 32 --datalist ../data/data_list/ --root C:/ --lr 0.0005 --ignore_prev_run --resume
python main.py --name Xception --model_name Xception --epochs 2 --cuda --batch_size 32 --datalist ../data/data_list/ --root C:/ --lr 0.000005 --ignore_prev_run --resume
=========================
resnet50
=========================
python main.py --name ResNet50 --model_name ResNet50 --epochs 3 --down_epoch 1 --cuda --batch_size 16 --datalist ../data/data_list/ --root C:/ --lr 0.005 --max_train_imgs 10000
python main.py --name ResNet50 --model_name ResNet50 --epochs 3 --down_epoch 4 --cuda --batch_size 32 --datalist ../data/data_list/ --root C:/ --lr 0.0001 --ignore_prev_run --resume
python main.py --name ResNet50 --model_name ResNet50 --epochs 1 --down_epoch 4 --cuda --batch_size 32 --datalist ../data/data_list/ --root C:/ --lr 0.0001 --ignore_prev_run --resume
python main.py --name ResNet50 --model_name ResNet50 --epochs 1 --down_epoch 4 --cuda --batch_size 32 --datalist ../data/data_list/ --root C:/ --lr 0.00003 --ignore_prev_run --resume
python main.py --name ResNet50 --model_name ResNet50 --epochs 1 --down_epoch 4 --cuda --batch_size 32 --datalist ../data/data_list/ --root C:/ --lr 0.00001 --ignore_prev_run --resume
python main.py --name ResNet50 --model_name ResNet50 --epochs 1 --down_epoch 4 --cuda --batch_size 32 --datalist ../data/data_list/ --root C:/ --lr 0.000003 --ignore_prev_run --resume
python main.py --name ResNet50 --model_name ResNet50 --epochs 1 --down_epoch 4 --cuda --batch_size 32 --datalist ../data/data_list/ --root C:/ --lr 0.000001 --ignore_prev_run --resume
=========================
```
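For context, these runs train the student (white-box) CNNs to regress the black-box descriptors with a Huber-type loss (`--criterion HUBER`). Below is a minimal sketch of one such training step, with an illustrative architecture and an assumed 512-d descriptor size, not the actual `main.py` code.

```
import torch
import torch.nn as nn
from torchvision import models

# Illustrative student: any CNN with a descriptor-sized output head works.
student = models.resnet50(weights=None)
student.fc = nn.Linear(student.fc.in_features, 512)  # 512-d descriptor size is an assumption

criterion = nn.SmoothL1Loss()                          # Huber-type loss on descriptors
optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)

def train_step(images, target_descriptors):
    """One step: push student descriptors towards the black-box ones."""
    optimizer.zero_grad()
    loss = criterion(student(images), target_descriptors)
    loss.backward()
    optimizer.step()
    return loss.item()

# Example with dummy shapes (batch of 112x112 crops, 512-d targets):
# loss = train_step(torch.randn(32, 3, 112, 112), torch.randn(32, 512))
```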

## Useful links