We use FlowNet2 as a submodule for the Flow layers, so please add `--recurse-submodules` to the clone command:

```bash
git clone --recurse-submodules https://github.com/andrewjong/ShineOn-Virtual-Tryon.git
```
This code is tested on PyTorch 1.6.0 and cudatoolkit 9.2.
Install and activate the conda environment with:

```bash
conda env create -f sams-pt1.6.yml
conda activate sams-pt1.6
```
Next, install the custom FlowNet2 CUDA layers. We use our custom fork that adds support for RTX GPU architectures.

```bash
cd models/flownet2_pytorch
bash install.sh
```
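To sanity-check that the custom layers built correctly, you can try importing the compiled extensions. The module names below are the ones flownet2-pytorch typically builds; adjust if your build names them differently:

```bash
# correlation_cuda, resample2d_cuda, and channelnorm_cuda are the extensions
# install.sh is expected to build; the names may differ in your fork
python -c "import correlation_cuda, resample2d_cuda, channelnorm_cuda; print('FlowNet2 CUDA layers OK')"
```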
Last, download the FlowNet2 pre-trained checkpoint provided by NVIDIA and place it under the folder `models/flownet2_pytorch`.
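For example, assuming the checkpoint keeps NVIDIA's default filename (`FlowNet2_checkpoint.pth.tar`; use whatever name your download has), moving it into place is just:

```bash
# assumes the checkpoint was downloaded manually from NVIDIA's link and kept its default name
mv ~/Downloads/FlowNet2_checkpoint.pth.tar models/flownet2_pytorch/
```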
That's it!
Docker Image (if problems above)
TODO: double check this works.

Having trouble with the conda install? You can try our provided [Docker Image](https://hub.docker.com/r/andrewjong/2021-wacv).
If you don't have Docker installed, follow NVIDIA's Docker install guide.
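Before pulling our image, a quick way to confirm Docker can see your GPU is to run `nvidia-smi` in a CUDA base container (the image tag here is just an example):

```bash
# should print the same nvidia-smi table you see on the host
docker run --rm --gpus all nvidia/cuda:11.0-base nvidia-smi
```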
Pull and run the image via:

```bash
docker run -it \
    --name 2021-wacv \
    -v /PATH/TO/PROJECT_DIR:/2021-wacv-video-vton \
    -v /data_hdd/fw_gan_vvt/:/data_hdd/fw_gan_vvt/ \
    -v /PATH_TO_WARP-CLOTH/:/data_hdd/fw_gan_vvt/train/warp-cloth \
    --gpus all --shm-size 8G \
    andrewjong/2021-wacv:latest /bin/bash
```
Once within the Docker container, run:

```bash
conda activate sams-pt1.6
```
We add DensePose, SCHP, and flow annotations to FW-GAN's original VVT dataset (original dataset courtesy of Haoye Dong).

First, follow the installation instructions in SCHP. We use the SCHP pre-trained model and `evaluate.py` script to generate frame-by-frame human parsing annotations. A generic algorithm for this would be:
```python
import os
import os.path as osp

# paths to the VVT test frames, the SCHP repo, and the parsing output directory
home = "/path/to/fw_gan_vvt/test/test_frames"
schp = "/path/to/Self-Correction-Human-Parsing"
output = "/path/to/fw_gan_vvt/test/test_frames_parsing"

# run SCHP's evaluate.py on each video's frame folder
for vid in sorted(os.listdir(home)):
    input_dir = osp.join(home, vid)
    output_dir = osp.join(output, vid)
    generate_seg = (
        "python evaluate.py --dataset lip "
        "--restore-weight checkpoints/exp-schp-201908261155-lip.pth "
        "--input " + input_dir + " --output " + output_dir
    )
    os.chdir(schp)  # evaluate.py expects to be run from the SCHP repo root
    os.system(generate_seg)
```
For the flow annotations, follow the installation instructions on our custom fork. Then use the training command that is provided, with the ImagesFromFolder dataset in place of the MPISintel dataset. Similar to the SCHP annotation algorithm, we generate frame-by-frame flow annotations using the methods in `models/flownet2_pytorch`.
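As a rough sketch (not necessarily the exact command we ran), running flownet2-pytorch's `main.py` in inference mode on one video's frame folder could look like the following; the flags come from flownet2-pytorch, and all paths and the output folder name are placeholders:

```bash
# sketch only: dumps .flo files for one video's frames; loop over videos as in the SCHP step
python main.py --inference --model FlowNet2 --save_flow \
    --inference_dataset ImagesFromFolder \
    --inference_dataset_root /path/to/fw_gan_vvt/test/test_frames/VIDEO_ID \
    --resume /path/to/FlowNet2_checkpoint.pth.tar \
    --save /path/to/fw_gan_vvt/test/test_frames_flow/VIDEO_ID
```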
Follow the installation instructions on DensePose. Then use the following command to generate DensePose annotations:

```bash
python2 tools/infer_simple.py \
    --cfg configs/DensePose_ResNet101_FPN_s1x-e2e.yaml \
    --output-dir DensePoseData/infer_out/ \
    --image-ext [jpg or png] \
    --wts https://dl.fbaipublicfiles.com/densepose/DensePose_ResNet101_FPN_s1x-e2e.pkl \
    [Input image]
```
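To cover the whole dataset, you can wrap that command in a loop over the video folders, mirroring the SCHP step above (the output folder name `test_frames_densepose` is just an example):

```bash
# sketch: run DensePose on every video's frame folder
FRAMES=/path/to/fw_gan_vvt/test/test_frames
OUT=/path/to/fw_gan_vvt/test/test_frames_densepose
for vid in "$FRAMES"/*/; do
  python2 tools/infer_simple.py \
    --cfg configs/DensePose_ResNet101_FPN_s1x-e2e.yaml \
    --output-dir "$OUT/$(basename "$vid")" \
    --image-ext png \
    --wts https://dl.fbaipublicfiles.com/densepose/DensePose_ResNet101_FPN_s1x-e2e.pkl \
    "$vid"
done
```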