
A humble question about a hoped-for inpainting (image retouching) feature #649

Open
longfei796 opened this issue Nov 12, 2024 · 6 comments
@longfei796

Please forgive me if anything I say is off the mark; I am a complete layman.
This author's open-source project https://github.com/Sanster/IOPaint/tree/iopaint-1.5.0
has an astonishingly good inpainting feature.
Could this open-source project be added to BallonTranslator?
I also don't know Python or pip.

@bropines
Contributor


It uses one of our models, trained by DmMaze, so essentially we don't need to add anything. We already have a regular inpainting model that works well with manga.

@longfei796
Author


What? I didn't know that.
Which one is it:
opencv-tela, patchmatch, aot, lama-mpe, or lama-large-512px?

@bropines
Contributor

bropines commented Nov 13, 2024

lama-large-512px

This release describes this:
https://github.com/Sanster/IOPaint/releases/tag/iopaint-1.5.0

@longfei796
Author


Thanks. May I ask what these two settings are?

[attached screenshot mmexport1731470748630: the two inpainter settings in question]

@bropines
Contributor

  1. Roughly speaking, this is the maximum size (in pixels) it uses to generate new data, but I could be wrong.
  2. This is the compute precision. If you have a 20xx-40xx Nvidia video card, feel free to turn on bf16; on older cards, fp32 is better (see the sketch below).
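
To make the two settings more concrete, here is a minimal, hypothetical sketch (not actual BallonTranslator or IOPaint code): `pick_dtype` chooses the compute precision along the lines of point 2, and `cap_inpaint_size` shows what a maximum inpaint size such as 512 px typically means in practice, downscaling the image before the model runs. The function names and the `max_side` default are assumptions for illustration only.

```python
# Hypothetical illustration of the two settings discussed above; the names and
# defaults are assumptions, not code from BallonTranslator or IOPaint.
import cv2
import torch


def pick_dtype() -> torch.dtype:
    """Return bf16 when the GPU supports it natively, otherwise fall back to fp32."""
    if torch.cuda.is_available() and torch.cuda.is_bf16_supported():
        return torch.bfloat16
    return torch.float32


def cap_inpaint_size(img, max_side: int = 512):
    """Downscale so the longer side is at most `max_side` px.

    The inpainting model would run on the resized image; the returned original
    shape lets the result be resized back afterwards.
    """
    h, w = img.shape[:2]
    scale = max_side / max(h, w)
    if scale >= 1.0:
        return img, (h, w)
    resized = cv2.resize(img, (int(w * scale), int(h * scale)),
                         interpolation=cv2.INTER_AREA)
    return resized, (h, w)
```

Checking `torch.cuda.is_bf16_supported()` before enabling bf16 errs on the safe side, since running a model in bf16 on hardware without native support is either unavailable or slower than fp32.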

@longfei796
Author


Thank you so much.
I will ask the author for more details.
