Official implementation of the paper “Inversion-Based Style Transfer with Diffusion Models” (CVPR 2023)

Inversion-Based Style Transfer with Diffusion Models

[teaser figure]

The artistic style of a painting is its means of expression: it encompasses not only the painting materials, colors, and brushstrokes, but also high-level attributes such as semantic elements and object shapes. Previous arbitrary example-guided artistic image generation methods often fail to control shape changes or convey semantic elements. Pre-trained text-to-image diffusion probabilistic models achieve remarkable quality, but they typically require extensive textual descriptions to accurately portray the attributes of a particular painting. We believe the uniqueness of an artwork lies precisely in the fact that it cannot be adequately described in ordinary language. Our key idea is to learn artistic style directly from a single painting and then guide synthesis without complex textual descriptions. Specifically, we treat style as a learnable textual description of a painting. We propose an inversion-based style transfer method (InST), which efficiently and accurately learns the key information of an image, thus capturing and transferring the complete artistic style of a painting. We demonstrate the quality and efficiency of our method on numerous paintings by various artists and in various styles.

For details, see the paper.

News

📣📣 See our latest work on attribute-aware image generation with diffusion models, ProSpect: Expanded Conditioning for the Personalization of Attribute-aware Image Generation. Code is available.

(back to top)

Getting Started

Prerequisites

For packages, see environment.yaml.

conda env create -f environment.yaml
conda activate ldm

(back to top)

Installation

Clone the repo

git clone https://github.com/zyxElsa/InST.git

(back to top)

Train

Download the pretrained Stable Diffusion model and save it at ./models/sd/sd-v1-4.ckpt.

Train InST:

python main.py --base configs/stable-diffusion/v1-finetune.yaml \
            -t \
            --actual_resume ./models/sd/sd-v1-4.ckpt \
            -n <run_name> \
            --gpus 0, \
            --data_root /path/to/directory/with/images

See configs/stable-diffusion/v1-finetune.yaml for more options.
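The --data_root flag points at a directory containing the style image(s); InST learns from a single painting. A minimal, illustrative setup might look like the following (the directory and file names here are assumptions, not prescribed by the repo):

```shell
# Create a directory holding the style painting (name is illustrative).
mkdir -p style_images

# Copy one reference painting into it, e.g.:
# cp /path/to/painting.jpg style_images/

# Then point training at it:
# python main.py --base configs/stable-diffusion/v1-finetune.yaml \
#     -t --actual_resume ./models/sd/sd-v1-4.ckpt \
#     -n my_style --gpus 0, --data_root ./style_images

ls -d style_images
```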

(back to top)

Test

To generate new images, run InST.ipynb.
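The notebook drives the latent-diffusion codebase directly. Roughly, inference follows the pattern below; this is only a pseudocode sketch of the paper's pipeline, and every path and name in it is illustrative rather than the repo's actual API:

```text
load the Stable Diffusion checkpoint (e.g. ./models/sd/sd-v1-4.ckpt)
load the embedding learned during training (e.g. an embeddings.pt saved under the run's log directory)
encode the content image into the latent space and partially noise it (inversion)
denoise, conditioned on a prompt containing the learned pseudo-word, e.g. "a painting in the style of *"
decode the resulting latent back into an image
```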

(back to top)

Contact

Please feel free to open an issue or contact us by email if you have questions, need help, or would like further explanation:

[email protected]

(back to top)
