This repository contains the official PyTorch implementation of the following paper:
StyleInV: A Temporal Style Modulated Inversion Network for Unconditional Video Generation
Yuhan Wang, Liming Jiang, Chen Change Loy
In ICCV 2023.
From MMLab@NTU affiliated with S-Lab, Nanyang Technological University
[Paper] | [Project Page] | [Video]
From left to right: DeeperForensics, FaceForensics, SkyTimelapse, TaiChi
From left to right: In-the-wild image, pSp inversion, raw animation, style transfer
- [09/11/2023] The source code is available. A tutorial on the environment, usage, and data/model preparation is on the way.
- [08/2023] Accepted by ICCV 2023. The code is coming soon!
The training pipeline has three stages, each with its own script:
- `train_stylegan2.py`: trains the StyleGAN2 image generator.
- `train_psp.py`: trains the pSp inversion encoder (its reconstruction objective is sketched after this list).
- `train_styleinv.py`: trains the StyleInV temporal style-modulated inversion network.
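For orientation, here is a minimal PyTorch sketch of the reconstruction objective a pSp-style encoder stage optimizes: an L2 term plus a perceptual (LPIPS) term, with weights roughly matching the pSp paper's defaults. `encoder` and `generator` are hypothetical stand-ins, not this repo's classes.

```python
import torch.nn.functional as F

def psp_reconstruction_loss(encoder, generator, frames, lpips_fn,
                            l2_weight=1.0, lpips_weight=0.8):
    """Per-frame reconstruction objective for a pSp-style inversion encoder.

    encoder:   maps a frame to W+ latent codes (hypothetical stand-in).
    generator: frozen, pretrained StyleGAN2 synthesis network (hypothetical stand-in).
    frames:    (N, 3, 256, 256) tensor in [-1, 1].
    lpips_fn:  perceptual distance, e.g. lpips.LPIPS(net='alex').
    """
    w_plus = encoder(frames)                     # (N, num_ws, 512) latent codes
    recon = generator(w_plus)                    # reconstructed frames
    l2 = F.mse_loss(recon, frames)               # pixel-level term
    perceptual = lpips_fn(recon, frames).mean()  # perceptual term
    return l2_weight * l2 + lpips_weight * perceptual
```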
- Download `stylegan2-celebvhq256-fid5.00.pkl` and save it to `pretrained_models/psp/celebv_hq_256/stylegan2-celebvhq256-fid5.00.pkl`.
- For the auxiliary models, download `model_ir_se50.pth` and save it to `pretrained_models/psp/model_ir_se50.pth`. The weights for the perceptual loss are downloaded automatically.
- Prepare a fine-tuning dataset, where each image is cropped according to this or this.
- Start fine-tuning (the loss terms behind `--perceptual-weight` and `--identity-weight` are sketched after the command):
```bash
python -u train_stylegan2.py \
    --outdir=experiments/stylegan2/transfer/celebvhq-arcane \
    --gpus=4 \
    --data=[your fine-tune dataset directory] \
    --mirror=1 \
    --cfg=paper256 \
    --aug=ada \
    --snap=20 \
    --resume=pretrained_models/psp/celebv_hq_256/stylegan2-celebvhq256-fid5.00.pkl \
    --transfer=True \
    --no-metric=True \
    --finetune-g-res=64 \
    --perceptual-weight=30 \
    --identity-weight=1
```
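For reference, here is a minimal sketch of how the two loss weights above are typically combined in identity-preserving fine-tuning. This is an illustration under assumptions, not this repo's actual implementation: `arcface` stands in for the IR-SE50 model loaded from `model_ir_se50.pth`, `perceptual_loss` for the automatically downloaded perceptual metric, and face alignment is simplified to a resize.

```python
import torch.nn.functional as F

def identity_loss(arcface, generated, reference):
    """Cosine distance between face-recognition embeddings.

    arcface: frozen IR-SE50 backbone (the model_ir_se50.pth weights above),
             a hypothetical stand-in for however this repo wraps it.
    generated, reference: (N, 3, H, W) images in [-1, 1].
    """
    # ArcFace expects 112x112 inputs; the real pipeline crops/aligns faces first.
    g = F.interpolate(generated, size=(112, 112), mode='bilinear', align_corners=False)
    r = F.interpolate(reference, size=(112, 112), mode='bilinear', align_corners=False)
    e_g = F.normalize(arcface(g), dim=1)  # unit-norm identity embeddings
    e_r = F.normalize(arcface(r), dim=1)
    return (1.0 - (e_g * e_r).sum(dim=1)).mean()

# Matching the command line above (--perceptual-weight=30, --identity-weight=1):
# total_loss = adversarial_loss + 30 * perceptual_loss(generated, reference) \
#              + 1 * identity_loss(arcface, generated, reference)
```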
Inference and evaluation entry points:
- `generate_styleinv_video.py`: samples videos from a trained model (a rough sketch of the typical generation loop follows this list).
- `scripts/calc_metrics_video.py`: computes quantitative video metrics on generated samples.
- `generate_animation.py`: animates an in-the-wild image, as in the teaser above.
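These scripts consume the `.pkl` checkpoints produced by training. As a rough, hypothetical sketch of the kind of sample-and-write loop `generate_styleinv_video.py` implements (not its actual code): it assumes the `dnnlib`/`legacy` checkpoint loaders from the stylegan2-ada-pytorch codebase this repo builds on, and replaces StyleInV's motion generator with a placeholder random walk.

```python
import imageio
import torch

import dnnlib  # from the stylegan2-ada-pytorch codebase this repo builds on
import legacy

device = torch.device('cuda')
pkl = 'pretrained_models/psp/celebv_hq_256/stylegan2-celebvhq256-fid5.00.pkl'
with dnnlib.util.open_url(pkl) as f:
    G = legacy.load_network_pkl(f)['G_ema'].to(device).eval()  # frozen image generator

z = torch.randn(1, G.z_dim, device=device)  # content code for one identity
writer = imageio.get_writer('sample.mp4', fps=25)
with torch.no_grad():
    for t in range(128):
        # StyleInV's motion generator would produce a per-frame latent from the
        # first frame and a timestamp; a random walk stands in for it here.
        z_t = z + 0.05 * (t / 128) * torch.randn_like(z)
        img = G(z_t, None)  # (1, 3, 256, 256) in [-1, 1]
        frame = ((img.clamp(-1, 1) + 1) * 127.5).to(torch.uint8)
        writer.append_data(frame[0].permute(1, 2, 0).cpu().numpy())
writer.close()
```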
If you find our repo useful for your research, please consider citing our paper:
```bibtex
@InProceedings{wang2023styleinv,
  title     = {{StyleInV}: A Temporal Style Modulated Inversion Network for Unconditional Video Generation},
  author    = {Wang, Yuhan and Jiang, Liming and Loy, Chen Change},
  booktitle = {ICCV},
  year      = {2023}
}
```
This codebase is maintained by Yuhan Wang.
This repo is built on top of the following works: