Arguments for executing HandRefiner.py:
- --input_dir
input directory containing images to be rectified
- --input_img
input image to be rectified
- --out_dir
output directory where the rectified images will be saved
- --log_json
file where the MPJPE values will be logged
- --strength
control strength for ControlNet
- --depth_dir
directory where the depth maps will be saved; leaving it empty disables this function
- --mask_dir
directory where the masks will be saved; leaving it empty disables this function
- --eval (True/False)
whether to evaluate the MPJPE error in fixed control strength mode; currently only works with a batch size of 1
- --finetuned (True/False)
whether to use the finetuned ControlNet trained on synthetic images, as introduced in the paper
- --weights
path to the SD + ControlNet weights
- --num_samples
batch size
- --prompt_file
prompt file for multi-image rectification. Each line of the file is a JSON object of the form:
{"img": filename, "txt": prompt}
Example:
{"img": "img1.jpg", "txt": "a woman making a hand gesture"}
{"img": "img2.jpg", "txt": "a man making a hand gesture"}
{"img": "img3.jpg", "txt": "a man making a thumbs up gesture"}
- --prompt
prompt for single image rectification
- --n_iter
number of generation iterations for each image to be rectified. In general, n_iter x num_samples rectified images will be produced for each input image
- --adaptive_control (True/False)
adaptive control strength as introduced in the paper; currently only works with a batch size of 1. Fixed control strength is used by default.
- --padding_bbox
padding that controls the size of the masks around the hand
- --seed
random seed for reproducibility
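Putting the flags together, a single-image run might be assembled as below. This is only a sketch: the strength value, weights path, and prompt are illustrative placeholders, and the command must be run from the repository root where HandRefiner.py lives.

```python
import subprocess

# Sketch of a single-image rectification call; all values are placeholders.
cmd = [
    "python", "HandRefiner.py",
    "--input_img", "img1.jpg",
    "--out_dir", "output/",
    "--strength", "0.55",
    "--weights", "path/to/weights.ckpt",  # placeholder checkpoint path
    "--prompt", "a woman making a hand gesture",
    "--seed", "1",
]

# subprocess.run(cmd, check=True)  # uncomment to execute inside the repo
print(" ".join(cmd))
```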