This document describes the step-by-step instructions for reproducing the PyTorch BlendCNN distillation (with the MRPC dataset) results with Intel® Neural Compressor.
cd examples/pytorch/nlp/blendcnn/distillation/eager
pip install "torch>=1.6.0" tqdm
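As an optional sanity check of the environment (just a sketch, not part of the original instructions), you can verify the installed packages from Python:

# optional: verify the installed packages before running the example
import torch
import tqdm
print("torch:", torch.__version__)            # should be >= 1.6.0
print("tqdm:", tqdm.__version__)
print("CUDA available:", torch.cuda.is_available())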
Download the BERT-Base, Uncased pretrained model.
mkdir models/ && mv uncased_L-12_H-768_A-12.zip models/
cd models/ && unzip uncased_L-12_H-768_A-12.zip && cd ..
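The archive should extract into an uncased_L-12_H-768_A-12/ folder containing the standard BERT-Base release files (TensorFlow checkpoint, bert_config.json, vocab.txt). A minimal check, assuming that default folder name:

# optional: confirm the extracted BERT-Base files are present
import os
bert_dir = "models/uncased_L-12_H-768_A-12"   # default folder name inside the zip
print(sorted(os.listdir(bert_dir)))           # expect bert_config.json, vocab.txt, bert_model.ckpt.*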
Download the GLUE MRPC Benchmark Datasets. After downloading, place the dataset at ./MRPC/, so that it lists like this:
ls MRPC/
dev_ids.tsv dev.tsv test.tsv train.tsv
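As an optional sanity check (a sketch assuming the standard tab-separated GLUE MRPC files), count the rows in each split:

# optional: count rows in each MRPC split to confirm the download is complete
for split in ("train.tsv", "dev.tsv", "test.tsv"):
    with open(f"MRPC/{split}", encoding="utf-8") as f:
        rows = sum(1 for _ in f) - 1          # subtract the header line
    print(split, rows)                        # MRPC has roughly 3668 train / 408 dev / 1725 test pairs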
mkdir -p models/bert/mrpc
# fine-tune the pretrained BERT-Base model
python finetune.py config/finetune/mrpc/train.json
Now the fine-tuned BERT-Base model weights model_final.pt are at ./models/bert/mrpc/.
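To confirm that fine-tuning produced a usable checkpoint, a minimal sketch (assuming model_final.pt is a state_dict saved with torch.save):

# optional: inspect the fine-tuned checkpoint (assumes a torch-saved state_dict)
import torch
state = torch.load("models/bert/mrpc/model_final.pt", map_location="cpu")
if isinstance(state, dict):
    print(len(state), "entries; first keys:", list(state)[:5])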
mkdir -p models/blendcnn/
# distill the BlendCNN student from the fine-tuned BERT-Base model
python distill.py --loss_weights 0.1 0.9
After following the above steps, you will find the distilled BlendCNN model weights best_model_weights.pt in ./models/blendcnn/.
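The two values passed to --loss_weights presumably balance the hard-label term and the soft-label (teacher) term of the distillation objective. Below is an illustrative sketch of such a weighted knowledge-distillation loss; the function name and the temperature parameter are hypothetical, and this is not the exact code in distill.py:

# illustrative weighted knowledge-distillation loss (hypothetical, not from distill.py)
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      weights=(0.1, 0.9), temperature=1.0):
    # hard-label cross-entropy against the ground-truth MRPC labels
    hard = F.cross_entropy(student_logits, labels)
    # soft-label KL divergence against the teacher's softened predictions
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    return weights[0] * hard + weights[1] * soft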