Replies: 17 comments
>>> bearson
[September 17, 2019, 6:27pm]
Hi!
I'm running training experiments on a couple of machines. Both have a single RTX 2080 Ti, but different RAM and CPU setups, and in both cases I'm heavily CPU-limited. One setup has a 4-thread i5 and the other an 8-thread i7; both CPUs struggle to keep the RTX card busy. The GPUs never reach 100% load and sometimes sit idle.
My question is: how much would I need to upgrade the CPU to max out a single RTX 2080 Ti? Does training benefit more from many cores, or do I need high clock speeds?
Best regards
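[Editor's note] The symptom described above (GPU idling while the CPU is saturated) usually means the data pipeline, not the model, is the bottleneck. A minimal sketch of how to check whether more CPU parallelism would help, using a hypothetical `preprocess` stand-in for CPU-heavy feature extraction (the real TTS trainer uses PyTorch's `DataLoader` with `num_workers`, which parallelizes the same way):

```python
import time
from concurrent.futures import ProcessPoolExecutor

def preprocess(sample_id: int) -> float:
    # Stand-in for CPU-heavy per-sample work (e.g. computing spectrograms).
    total = 0.0
    for i in range(200_000):
        total += (sample_id + i) ** 0.5
    return total

def run(num_workers: int, n_samples: int = 16) -> float:
    """Time one pass over n_samples with the given degree of CPU parallelism."""
    start = time.perf_counter()
    if num_workers <= 1:
        results = [preprocess(i) for i in range(n_samples)]
    else:
        with ProcessPoolExecutor(max_workers=num_workers) as pool:
            results = list(pool.map(preprocess, range(n_samples)))
    assert len(results) == n_samples
    return time.perf_counter() - start

if __name__ == "__main__":
    # If wall time keeps dropping as workers increase, the pipeline scales
    # with cores, and more (not faster) cores would help feed the GPU.
    for workers in (1, 2, 4):
        print(f"{workers} worker(s): {run(workers):.2f}s")
```

If throughput stops improving well before core count is exhausted, single-thread clock speed (or I/O) is the limit instead; in the PyTorch case the equivalent knob is raising `num_workers` on the `DataLoader` until GPU utilization plateaus.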
[This is an archived TTS discussion thread from discourse.mozilla.org/t/training-hardware-bottlenecks]