diff --git a/README.md b/README.md
index e04cb34..c802c64 100644
--- a/README.md
+++ b/README.md
@@ -54,6 +54,8 @@
 $ bash train.sh
 ```
 which conducts Dreambooth LoRA fine-tuning by running `train_dreambooth.py` given a folder of identity images. This is based on [PEFT](https://github.com/huggingface/peft/tree/main/examples/lora_dreambooth). Download the folders of identity images from this [link](https://huggingface.co/datasets/wangkua1/w2w-celeba-generated/tree/main). All you need to do is change ``--instance_data_dir="celeba_generated0/0"`` to the identity folder and ``--output_dir="output0"`` to the desired output directory.
+After Dreambooth fine-tuning, you can see how we flatten the weights and apply PCA in ``other/creating_weights_dataset.ipynb``.
+
 ## Acknowledgments
 
 Our code is based on implementations from the following repos:
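
For context on the added line, here is a minimal, hypothetical sketch of the flatten-and-PCA step: load each identity's fine-tuned LoRA checkpoint, flatten it into a vector, and fit PCA over the stacked vectors. The `output*/adapter_model.bin` layout, the `"lora"` key filter, and the use of scikit-learn's `PCA` are illustrative assumptions; the actual procedure lives in ``other/creating_weights_dataset.ipynb``.

```python
# Hypothetical sketch (not the repo's notebook code): flatten per-identity LoRA
# checkpoints into vectors and fit PCA over the resulting weight dataset.
import glob

import torch
from sklearn.decomposition import PCA


def flatten_lora(state_dict):
    """Concatenate all LoRA tensors in a checkpoint into one 1-D vector."""
    keys = sorted(k for k in state_dict if "lora" in k)  # fixed key order across checkpoints
    return torch.cat([state_dict[k].flatten().float() for k in keys])


# Load one fine-tuned checkpoint per identity (assumed output layout).
vectors = []
for path in sorted(glob.glob("output*/adapter_model.bin")):
    state_dict = torch.load(path, map_location="cpu")
    vectors.append(flatten_lora(state_dict))

weights = torch.stack(vectors).numpy()  # shape: (num_identities, num_params)

# Fit PCA on the flattened weights to obtain a low-dimensional basis.
pca = PCA(n_components=min(100, weights.shape[0]))
coords = pca.fit_transform(weights)  # per-identity coordinates in the PCA basis
components = pca.components_         # principal directions in weight space
print(coords.shape, components.shape)
```

Flattening with a fixed key order keeps every checkpoint's vector aligned dimension by dimension, which is what makes a shared PCA basis across identities meaningful.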