New Release 01-29-2025 #3729
vladmandic announced in Announcements
Two weeks since the last release, time for an update!
What's New?
- face-restore models: RestoreFormer, CodeFormer, GFPGan, GPEN-BFR
- face-swapper with Photomaker-v2 and video with Fast-Hunyuan
- many IPEX improvements and native torch fp8 support
- support for PAB: Pyramid Attention Broadcast, ParaAttention and PerFlow
- finally replace that pesky VAE in your favorite model with a fixed one!
Details for 2025-01-29
model merge:
- in addition to the existing model weights merge support, you can now also replace model components and merge LoRAs
- you can test merges in-memory without needing to save to disk at all
- you can also use it to convert diffusers to safetensors if you want
- example: replace the VAE in your favorite model with a fixed one, replace the text encoder, etc. (see the sketch below)
- note: limited to SDXL for now; additional models can be added depending on popularity
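For reference, the same component swap can be expressed with the diffusers API outside SD.Next; a minimal sketch, assuming an SDXL checkpoint and the widely used madebyollin/sdxl-vae-fp16-fix VAE (both model ids are illustrative):

```python
# Minimal sketch (not SD.Next's internal merge code): replace the VAE of an
# SDXL pipeline in memory, then optionally save the result as safetensors.
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

# load a known-good ("fixed") replacement VAE
vae = AutoencoderKL.from_pretrained(
    "madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16
)

# load the base model with the replacement VAE swapped in; at this point
# the merged pipeline exists only in memory
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    vae=vae,
    torch_dtype=torch.float16,
)

# saving to disk is optional; safe_serialization writes safetensors
pipe.save_pretrained("./sdxl-fixed-vae", safe_serialization=True)
```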
photomaker-v2:
- compatible with SDXL models; generates pretty good results and is faster than most other methods
- select under scripts -> face -> photomaker
fast-hunyuan:
- note: experimental-only and unfinished, mentioned in the changelog for future reference only
- simply select the model variant and set the appropriate parameters
- recommended: sampler-shift=17, steps=6, resolution=720x1280, frames=125, guidance>6.0 (see the sketch below)
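Those settings map roughly onto the diffusers HunyuanVideoPipeline; a minimal sketch, assuming the FastVideo/FastHunyuan-diffusers weights (the repo id is an assumption) and a GPU with enough memory:

```python
# Minimal sketch (model id and prompt are illustrative): Fast-Hunyuan-style
# video generation with the recommended settings from above.
import torch
from diffusers import HunyuanVideoPipeline, FlowMatchEulerDiscreteScheduler
from diffusers.utils import export_to_video

pipe = HunyuanVideoPipeline.from_pretrained(
    "FastVideo/FastHunyuan-diffusers",  # assumed repo id for the distilled variant
    torch_dtype=torch.bfloat16,
).to("cuda")

# sampler-shift=17: distilled variants want a much higher flow shift than default
pipe.scheduler = FlowMatchEulerDiscreteScheduler.from_config(
    pipe.scheduler.config, shift=17.0
)

video = pipe(
    prompt="a cat walking through a garden",
    height=720, width=1280,   # resolution=720x1280
    num_frames=125,           # frames=125
    num_inference_steps=6,    # steps=6
    guidance_scale=6.5,       # guidance>6.0
).frames[0]
export_to_video(video, "fast-hunyuan.mp4", fps=24)
```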
pab: pyramid-attention-broadcast:
- for transformer-based models: e.g. flux.1, hunyuan-video, ltx-video, mochi, etc.
- higher values lead to more cache hits and speedups, but might also lead to a higher accuracy drop (see the sketch below)
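In diffusers, PAB is enabled through the transformer's cache interface; a minimal sketch using CogVideoX, the pipeline shown in the diffusers PAB documentation (the skip values are illustrative, not SD.Next defaults, and the same enable_cache call applies to other supported transformer-based pipelines):

```python
# Minimal sketch (illustrative values): enable Pyramid Attention Broadcast
# on a transformer-based video pipeline via the diffusers cache interface.
import torch
from diffusers import CogVideoXPipeline, PyramidAttentionBroadcastConfig

pipe = CogVideoXPipeline.from_pretrained(
    "THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16
).to("cuda")

config = PyramidAttentionBroadcastConfig(
    # higher skip range -> more cache hits and speedup, but a larger accuracy drop
    spatial_attention_block_skip_range=2,
    spatial_attention_timestep_skip_range=(100, 800),
    current_timestep_callback=lambda: pipe.current_timestep,
)
pipe.transformer.enable_cache(config)

video = pipe(prompt="a drone shot over a rocky coastline").frames[0]
```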
torch fp8:
- uses float8_e4m3fn or float8_e5m2 as data storage and performs dynamic upcasting to the compute dtype as needed
- for unet- and transformer-based models: e.g. sd15, sdxl, sd35, flux.1, hunyuan-video, ltx-video, etc.
- this is an alternative to bnb/quanto/torchao quantization on models/platforms/GPUs where those libraries are not available (see the sketch below)
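The storage/upcast idea can be shown in a few lines of plain PyTorch (a conceptual sketch, not SD.Next's implementation; FP8Linear is a hypothetical helper name):

```python
# Conceptual sketch: store weights in fp8 and upcast to the compute dtype
# on the fly, since eager-mode matmul does not operate on fp8 tensors.
import torch

class FP8Linear(torch.nn.Module):  # hypothetical helper, not an SD.Next class
    def __init__(self, linear: torch.nn.Linear, compute_dtype=torch.bfloat16):
        super().__init__()
        self.compute_dtype = compute_dtype
        # float8_e4m3fn halves memory vs fp16; float8_e5m2 trades precision for range
        self.weight = linear.weight.detach().to(torch.float8_e4m3fn)
        self.bias = None if linear.bias is None else linear.bias.detach().to(compute_dtype)

    def forward(self, x):
        w = self.weight.to(self.compute_dtype)  # dynamic upcast at compute time
        return torch.nn.functional.linear(x.to(self.compute_dtype), w, self.bias)

fp8_layer = FP8Linear(torch.nn.Linear(4096, 4096))
y = fp8_layer(torch.randn(2, 4096))  # weights stored at half the fp16 footprint
```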
perflow:
- perflow scheduler combined with one of the available pre-trained models

listen mode