Paper: Efficient Feature Compression for Edge-Cloud Systems, published at Picture Coding Symposium (PCS) 2022
Arxiv: https://arxiv.org/abs/2211.09897
Requirements: `pytorch>=1.12`, `tqdm`, `compressai==1.2.2`

- Code has been tested on Windows and Linux with Intel CPUs and Nvidia GPUs (Python 3.9, CUDA 11.3).
- Download the repository;
- Download the pre-trained model checkpoints and put them in the `checkpoints` folder.
Model | Latency* | Link
---|---|---
`ours_n0` | 3.95 ms | Google Drive
`ours_n4` | 6.70 ms | Google Drive
`ours_n8` | 10.2 ms | Google Drive
`ours_n0_enc` | 3.95 ms | Google Drive
`ours_n4_enc` | 6.70 ms | Google Drive
`ours_n8_enc` | 10.2 ms | Google Drive
*Latency is the time to encode one 224x224 RGB image, including entropy coding. Device: Intel 10700K CPU, using 8 cores (the PyTorch default).
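For reference, encoding latency can be measured along these lines. This is a rough sketch that uses a CompressAI model as a stand-in encoder; load one of the checkpoints above instead for a meaningful number:

```python
import time
import torch
from compressai.zoo import bmshj2018_factorized

torch.set_num_threads(8)  # match the 8-core CPU setting reported in the table

# A CompressAI model serves as a stand-in encoder here; substitute one of the
# pre-trained checkpoints above for a meaningful measurement.
net = bmshj2018_factorized(quality=1, pretrained=False).eval()
net.update(force=True)  # build entropy-coder CDF tables before compressing

x = torch.randn(1, 3, 224, 224)  # one 224x224 RGB image
with torch.no_grad():
    t0 = time.perf_counter()
    net.compress(x)  # includes entropy coding
    print(f'encoding latency: {(time.perf_counter() - t0) * 1e3:.2f} ms')
```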
- Compress an image feature: see `example-sender.ipynb`.
- Predict from a compressed feature: see `example-receiver.ipynb`. (A conceptual sketch of this sender/receiver split is shown below.)
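The two notebooks implement a split pipeline: the sender (edge device) encodes an image into a compressed feature bitstream, and the receiver (cloud) decodes it and runs the prediction head. A minimal sketch, where `encode_to_bits` and `predict_from_bits` are hypothetical names standing in for the calls the notebooks actually make:

```python
# Hypothetical sketch of the edge-cloud split; `encode_to_bits` and
# `predict_from_bits` are placeholder names, not the repository's real API.

def sender(model, image):
    bits = model.encode_to_bits(image)  # entropy-coded feature bitstream
    return bits                         # this is what travels over the network

def receiver(model, bits):
    logits = model.predict_from_bits(bits)  # decode feature, run prediction head
    return logits.argmax(dim=-1)            # predicted ImageNet class
```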
Evaluate all models on ImageNet:
python evaluate.py -d /path/to/imagenet/val -b batch_size -w cpu_workers
- In `train.py`, update `IMAGENET_DIR = Path('../../datasets/imagenet')` to point to the ImageNet root directory;
- Install `wandb`: https://docs.wandb.ai/quickstart
python train.py --model ours_n4
Supported models:
- `ours_n8`
- `ours_n8_enc`
- `ours_n4`
- `ours_n4_enc`
- `ours_n0`
- `ours_n0_enc`
- `matsubara2022wacv`
CUDA_VISIBLE_DEVICES=4 python train.py --model ours_n4
CUDA_VISIBLE_DEVICES=4 python train.py --model ours_n4 --model_args bpp_lmb=0.64
The training loss function is `loss = other_terms + bpp_lmb * bppix`:
- A larger `bpp_lmb` results in a lower bpp but lower accuracy;
- A smaller `bpp_lmb` results in a higher bpp but higher accuracy. (A code sketch of this trade-off follows below.)
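In code, this trade-off amounts to something like the sketch below; the names are illustrative, and the actual `other_terms` in `train.py` may include more than the cross-entropy shown here:

```python
import torch.nn.functional as F

def training_loss(logits, labels, bppix, bpp_lmb=0.64):
    # Rate-accuracy Lagrangian (illustrative): task term + bpp_lmb * rate term.
    ce = F.cross_entropy(logits, labels)  # accuracy (task) term
    return ce + bpp_lmb * bppix           # larger bpp_lmb penalizes rate harder

# usage: loss = training_loss(model_logits, targets, estimated_bpp, bpp_lmb=0.64)
```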
CUDA_VISIBLE_DEVICES=4 python train.py --model ours_n4 --model_args bpp_lmb=0.64 --batch_size 128
The default batch size is 384, which we used to train our models (4 GPUs, 96 per GPU). A smaller batch size results in faster training but probably worse final performance.
CUDA_VISIBLE_DEVICES=4 python train.py --model ours_n4 --model_args bpp_lmb=0.64 --batch_size 128 --wbmode online
By default, the run is logged at https://wandb.ai/home > "edge-cloud-rac" project > "default-group" group.
The project and group names can be specified with `--wbproject` and `--wbgroup`; the sketch below shows what these flags roughly map to.
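These flags roughly correspond to the standard `wandb.init()` arguments (a sketch; the exact wiring lives in `train.py`):

```python
import wandb

# Roughly what the CLI flags correspond to (sketch; see train.py for the
# actual wiring):
wandb.init(
    project='edge-cloud-rac',  # --wbproject (default)
    group='default-group',     # --wbgroup (default)
    mode='online',             # --wbmode online
)
```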
CUDA_VISIBLE_DEVICES=4,5,6,7 torchrun --nproc_per_node 4 train.py --model ours_n4 --model_args bpp_lmb=0.64 --batch_size 96 --wbmode online --ddp_find
Note: `--ddp_find` is necessary for multi-GPU training (PyTorch DDP); see the sketch below for what it presumably enables.
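Judging by the flag's name, `--ddp_find` presumably enables DDP's `find_unused_parameters` option; a minimal sketch of the multi-GPU setup under that assumption:

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# Minimal DDP setup, as launched by torchrun. The Linear layer is a dummy
# stand-in for the actual model; find_unused_parameters=True is an assumption
# about what --ddp_find toggles, based on the flag's name.
dist.init_process_group(backend='nccl')
local_rank = int(os.environ['LOCAL_RANK'])
torch.cuda.set_device(local_rank)
model = torch.nn.Linear(8, 8).cuda(local_rank)
model = DDP(model, device_ids=[local_rank], find_unused_parameters=True)
```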
TBD