Acquiring the desired font for various design tasks can be challenging and requires professional typographic knowledge. While previous font retrieval or generation works have alleviated some of these difficulties, they often lack support for multiple languages and semantic attributes beyond the training data domains. To solve this problem, we present FontCLIP – a model that connects the semantic understanding of a large vision-language model with typographical knowledge. We integrate typography-specific knowledge into the comprehensive vision-language knowledge of a pretrained CLIP model through a novel finetuning approach. We propose to use a compound descriptive prompt that encapsulates adaptively sampled attributes from a font attribute dataset focusing on Roman alphabet characters. FontCLIP's semantic typographic latent space demonstrates two unprecedented generalization abilities. First, FontCLIP generalizes to different languages including Chinese, Japanese, and Korean (CJK), capturing the typographical features of fonts across different languages, even though it was only finetuned using fonts of Roman characters. Second, FontCLIP can recognize semantic attributes that are not present in the training data. FontCLIP's dual-modality and generalization abilities enable multilingual and cross-lingual font retrieval and letter shape optimization, reducing the burden of obtaining desired fonts.
We propose FontCLIP, a CLIP model finetuned on a font dataset.
We have explored several finetuning approaches and integrated them into a Python class named ExCLIP.
We propose two applications based on FontCLIP: font retrieval and vector optimization.
You can retrieve desired fonts by inputting text, an image, or both. Internally, the cosine distance between the input and each font in the dataset is computed, and the top-ranked fonts are returned; a sketch of this step follows the demo command below.
You can run the demo:
python font_retrieval.py
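For illustration, here is a minimal sketch of that ranking step, assuming a CLIP-style model with an encode_text method and precomputed per-font image embeddings (the function and variable names are hypothetical; see font_retrieval.py for the actual implementation):

```python
# Hypothetical sketch: rank fonts by cosine similarity to a text query
# in the FontCLIP latent space.
import torch
import torch.nn.functional as F

def retrieve_fonts(model, tokenizer, query, font_embeddings, font_names, top_k=5):
    # font_embeddings: (num_fonts, dim), precomputed from rendered glyph images
    with torch.no_grad():
        query_emb = model.encode_text(tokenizer([query]))   # (1, dim)
    query_emb = F.normalize(query_emb, dim=-1)
    font_embs = F.normalize(font_embeddings, dim=-1)
    sims = (query_emb @ font_embs.T).squeeze(0)             # cosine similarity per font
    scores, indices = sims.topk(top_k)
    return [(font_names[i], s.item()) for i, s in zip(indices.tolist(), scores)]
```

An image query would go through encode_image instead, and a combined text-and-image query can average the two normalized embeddings before ranking.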
You can deform an input character in SVG format by minimizing several losses, including the cosine distance in the FontCLIP latent space; a sketch of the optimization loop follows the demo link below.
You can try the demo here.
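As a rough illustration, here is a minimal sketch of such an optimization loop, assuming diffvg (pydiffvg) for differentiable rasterization and vanilla CLIP standing in for a loaded FontCLIP checkpoint; the L2 regularizer is only a stand-in for the structure-preservation losses implemented under the optimizer folder:

```python
# Hypothetical sketch: optimize SVG control points so the rendered glyph
# moves toward a target description in the (Font)CLIP latent space.
import torch
import torch.nn.functional as F
import pydiffvg
import clip  # a FontCLIP checkpoint would be loaded in place of vanilla CLIP

model, _ = clip.load("ViT-B/32", device="cpu")
w, h, shapes, groups = pydiffvg.svg_to_scene("A.svg")
points = [s.points for s in shapes]            # Bezier control points
for p in points:
    p.requires_grad = True
init_points = [p.detach().clone() for p in points]
opt = torch.optim.Adam(points, lr=0.1)

with torch.no_grad():
    target = F.normalize(model.encode_text(clip.tokenize(["a bold font"])), dim=-1)

for step in range(500):
    opt.zero_grad()
    scene = pydiffvg.RenderFunction.serialize_scene(w, h, shapes, groups)
    img = pydiffvg.RenderFunction.apply(w, h, 2, 2, step, None, *scene)  # (H, W, 4)
    alpha = img[..., 3:4]
    img = alpha * img[..., :3] + (1 - alpha)   # composite over a white background
    img = img.permute(2, 0, 1).unsqueeze(0)    # to NCHW for the image encoder
    img = F.interpolate(img, size=(224, 224), mode="bilinear")
    emb = F.normalize(model.encode_image(img), dim=-1)  # CLIP mean/std omitted
    clip_loss = 1.0 - (emb * target).sum()     # cosine distance in latent space
    reg = sum(F.mse_loss(p, q) for p, q in zip(points, init_points))
    (clip_loss + 0.5 * reg).backward()
    opt.step()
```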
- For finetuning CLIP
conda create -y --name fontclip python=3.8.15
conda activate fontclip
conda install -y pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia
pip install tqdm ftfy regex gdown
pip install gradio==3.40.0
pip install httpx==0.24.1
- For vector optimization
conda install -y numpy scikit-image
conda install -y -c anaconda cmake=3.22.1
conda install -y -c conda-forge ffmpeg
pip install svgwrite svgpathtools cssutils numba torch-tools scikit-fmm easydict visdom freetype-py shapely ttf save_svg
pip install opencv-python==4.5.4.60
pip install kornia==0.6.8
pip install wandb
pip install shapely
# install diffvg
git clone https://github.com/BachiLi/diffvg.git
cd diffvg
git submodule update --init --recursive
python setup.py install
Please make sure that the version of each library is compatible with diffvg. - see the issue for details
You can set up the dataset using the following command.
python setup_data.py
You can run the finetuning process using the following command.
python train.py --random_prompt_num_per_font 10000 --sample_num 50 --color_jitter_sample_num 200 --use_lora_text
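The --random_prompt_num_per_font flag controls how many descriptive prompts are sampled for each font. As a rough illustration of the compound descriptive prompt idea (the attribute names and the sampling rule here are purely illustrative, not the paper's exact procedure):

```python
# Illustrative sketch: assemble a compound descriptive prompt from attributes
# sampled in proportion to their scores in the font attribute dataset.
import random

def compound_prompt(attribute_scores, k=3):
    attrs = list(attribute_scores)
    weights = [attribute_scores[a] for a in attrs]
    sampled = random.choices(attrs, weights=weights, k=k)
    unique = list(dict.fromkeys(sampled))      # drop duplicates, keep order
    return "a " + ", ".join(unique) + " font"

# compound_prompt({"bold": 0.9, "elegant": 0.7, "playful": 0.2})
# might return "a bold, elegant font"
```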
We have tried several finetuning methods (direct finetuning, CoOp, VPT, LoRA, and OFT) and integrated them into one Python class named ExCLIP. - see ex_clip.py for details
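As one example of what this wrapper has to cover, here is a minimal, hypothetical sketch of the building block a LoRA branch injects into CLIP's linear layers (the real interface is defined in ex_clip.py):

```python
# Hypothetical sketch: a low-rank adapter around a frozen linear layer.
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 4):
        super().__init__()
        self.base = base
        for p in self.base.parameters():       # freeze the pretrained weights
            p.requires_grad = False
        self.down = nn.Linear(base.in_features, rank, bias=False)
        self.up = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.up.weight)         # the update starts at zero

    def forward(self, x):
        return self.base(x) + self.up(self.down(x))
```

By contrast, CoOp learns the prompt's context tokens, VPT prepends learnable tokens to the visual input, and OFT applies orthogonal transforms to the frozen weights.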
This project, which is based on CLIP, is licensed under the MIT License - see the LICENSE_MIT for details
The source code for CoOp in ExCLIP is licensed under the MIT License - see the LICENSE_MIT for details
The source code for VPT in ExCLIP is licensed under the CC-BY-NC 4.0 License - see the LICENSE.CC_BY_NC_SA_4.0 for details
The source code for OFT in ExCLIP is licensed under the MIT License - see the LICENSE_MIT for details
The source code for vector optimization is based on Word-As-Image by Shiriluz, particularly the files under the optimizer folder. The original work can be found at https://github.com/Shiriluz/Word-As-Image and is licensed under CC BY-NC-SA 4.0 - see the LICENSE.CC_BY_NC_SA_4.0 for details