I'm also using an onboard AMD GPU and an RTX 4080 simultaneously. I have the display connected only to the onboard GPU, which lets me use the RTX 4080 fully for ComfyUI. It's not strictly necessary to dedicate the RTX 4080 exclusively to ComfyUI, but if you share it with the display, that reduces the available VRAM, which becomes a weakness when running large models like FLUX.
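As a sanity check (a sketch, not part of the original post), you can confirm how much VRAM the desktop is holding on each NVIDIA card by querying `nvidia-smi`; the helper names below are my own:

```python
import subprocess


def parse_vram_usage(csv_output: str) -> list[tuple[int, int]]:
    """Parse the CSV output of
    `nvidia-smi --query-gpu=memory.used,memory.total --format=csv,noheader,nounits`
    into one (used_mib, total_mib) pair per GPU."""
    usage = []
    for line in csv_output.strip().splitlines():
        used, total = (int(x) for x in line.split(","))
        usage.append((used, total))
    return usage


def query_vram_usage() -> list[tuple[int, int]]:
    """Ask the NVIDIA driver for per-GPU memory usage (requires nvidia-smi)."""
    out = subprocess.check_output(
        ["nvidia-smi",
         "--query-gpu=memory.used,memory.total",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    return parse_vram_usage(out)
```

With the display on the onboard GPU, the RTX 4080 should report close to zero MiB used when ComfyUI is idle.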
-
(I don't think I saw a thread about this topic.)
I currently use an RTX 4070 with an AMD processor without integrated graphics, which means the desktop environment/screen already uses some of the graphics card's memory. According to Mission Center, about 500 MB of VRAM is in use (Cinnamon), which is a lot.
I was wondering whether it's a basic requirement for AI work to have a CPU with integrated graphics driving the desktop environment, so that the graphics card can be fully dedicated to the AI task?
(I used to work a lot in Blender, and that's what I did there: the integrated graphics for the screen, and the graphics card, with headless drivers, for CUDA rendering.)
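For what it's worth, one related trick (a sketch under my own assumptions, not from this thread): `CUDA_VISIBLE_DEVICES` restricts which GPUs a process can see, which isolates a compute job to one card much like headless Blender rendering. It does not free the VRAM the desktop itself holds, though. Assuming the compute card is device index 0:

```python
import os

# Hide every GPU except the one dedicated to the compute task.
# This must be set before any CUDA framework (e.g. torch) is imported;
# "0" is an assumed index -- check yours with `nvidia-smi -L`.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

# Any CUDA library imported after this point sees only that card
# (renumbered as device 0), leaving the others untouched.
```

If your desktop runs on a different adapter (or the iGPU), the session keeps its own memory on that device regardless of this setting.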