
extremely slow inference #13

Open · Fqlox opened this issue Nov 14, 2024 · 3 comments

Comments


Fqlox commented Nov 14, 2024

I wanted to run I2V inference with the model, following the example in the readme, and got an effectively infinite inference time, whereas in ComfyUI I ran a video generation in around 10 minutes (without the LoRA). As seen below:

[screenshot: renderCogxvideo]

I ran on Windows 10 with an RTX 3090, in a conda env with Python 3.12.7, torch 2.5.1, and diffusers 0.31.

wenqsun (Owner) commented Nov 15, 2024

Hi, it may be caused by a lack of GPU resources. We have now released an online Hugging Face demo: https://huggingface.co/spaces/fffiloni/DimensionX

You can try our model online! It should be much faster!

Fqlox (Author) commented Nov 15, 2024

> HI, it may be caused by the lack of GPU resources. Now we have released the online huggingface demo: https://huggingface.co/spaces/fffiloni/DimensionX
>
> You can try our model online! It would be much faster!

As mentioned, I run inference on an RTX 3090, and the model works well in ComfyUI.

dtaddis commented Nov 24, 2024

I'm also experiencing very slow inference: about 120 s/it, resulting in roughly a 90-minute total run.

Windows 11, RTX 4090, Python 3.12, Torch 2.5.1.
