Prerequisites
I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
I reviewed the Discussions, and have a new and useful enhancement to share.
Feature Description
Using --threads artificially slows down disk access during model load by an order of magnitude.
Could a new option (like --model-load-threads) be added so I can specify the full system limit, and not have model loading artificially constrained?
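For illustration only, here is a rough sketch of how such an option could be plumbed through. None of these names exist in llama.cpp today; the struct, fields, and helper are all hypothetical:

```cpp
// Hypothetical sketch -- not existing llama.cpp API.
// Keep --threads for inference, and add a separate knob for how many
// threads touch the model file during load. Defaulting the new knob to
// "same as --threads" leaves current behavior unchanged.
struct server_params {
    int n_threads      = 5;  // --threads: used for token generation
    int n_load_threads = 0;  // --model-load-threads: 0 means "same as --threads"
};

static int effective_load_threads(const server_params &p) {
    return p.n_load_threads > 0 ? p.n_load_threads : p.n_threads;
}
```

With that shape, something like --threads 5 --model-load-threads 64 would let the load saturate the disk while generation still runs with 5 threads, and omitting the new flag would keep today's behavior.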
Motivation
My CPU-based inference server generates tokens fastest with --threads 5, given my particular hardware setup. However, that setting also limits the number of threads used for model loading, which makes loading take about 10x longer than necessary. My system has 32 cores and 64 threads total.
When I run with --threads 5, model loading happens at around 200 MB/s (I can see this with "sudo iotop -o").
When I run with --threads 64, model loading happens at around 2000 MB/s (2 GB/s), which is my system's max SSD speed.
I need to run with --threads 5 because that optimizes inference speed, but it means waiting a very long time for large models to load on initial start.
Possible Implementation
No response
This only happens when mmap is enabled (which it is by default).
What I see in iotop is that each of the 64 threads has a throughput of about 30 MB/s, for a total of roughly 2 GB/s. I'm not sure exactly what happens during model load, but distributing the load across as many threads as possible increases disk throughput by about 10x, at least on my system with mmap enabled.
Maybe there's some periodic processing as parts of the model are loaded that prevents a smaller number of threads from moving data fast enough to saturate the disk link?
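One plausible explanation: with mmap, each touch of a not-yet-resident page is a synchronous page fault, so a single thread is bounded by per-fault latency rather than disk bandwidth, and more threads simply keep more reads in flight. The standalone sketch below (my own code, not llama.cpp; the filename mmap_bench.cpp is made up) maps a file read-only and faults its pages in from N threads, which should reproduce the scaling I'm seeing:

```cpp
// mmap_bench.cpp -- standalone sketch, not llama.cpp code.
// Build: g++ -O2 -pthread mmap_bench.cpp -o mmap_bench
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

#include <atomic>
#include <chrono>
#include <cstdio>
#include <cstdlib>
#include <thread>
#include <vector>

static std::atomic<unsigned long> g_sink{0};  // keeps the reads from being optimized away

int main(int argc, char **argv) {
    if (argc != 3) {
        std::fprintf(stderr, "usage: %s <model-file> <n_threads>\n", argv[0]);
        return 1;
    }
    const int n_threads = std::atoi(argv[2]);
    if (n_threads < 1) return 1;

    const int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }
    struct stat st{};
    if (fstat(fd, &st) != 0) { perror("fstat"); return 1; }
    const size_t size = static_cast<size_t>(st.st_size);

    auto *data = static_cast<unsigned char *>(
        mmap(nullptr, size, PROT_READ, MAP_PRIVATE, fd, 0));
    if (data == MAP_FAILED) { perror("mmap"); return 1; }

    const auto t0 = std::chrono::steady_clock::now();

    // Split the file into one contiguous slice per thread; each thread
    // reads one byte per 4 KiB page, so every page must come off disk.
    std::vector<std::thread> workers;
    const size_t chunk = size / n_threads;
    for (int i = 0; i < n_threads; ++i) {
        const size_t begin = static_cast<size_t>(i) * chunk;
        const size_t end   = (i == n_threads - 1) ? size : begin + chunk;
        workers.emplace_back([data, begin, end] {
            unsigned long local = 0;
            for (size_t off = begin; off < end; off += 4096) {
                local += data[off];  // faults the page in -> disk read
            }
            g_sink += local;
        });
    }
    for (auto &w : workers) w.join();

    const double secs = std::chrono::duration<double>(
        std::chrono::steady_clock::now() - t0).count();
    std::printf("%zu MiB in %.2f s -> %.0f MiB/s with %d threads\n",
                size >> 20, secs,
                static_cast<double>(size >> 20) / secs, n_threads);

    munmap(data, size);
    close(fd);
    return 0;
}
```

To measure cold-load behavior, drop the page cache between runs (echo 3 | sudo tee /proc/sys/vm/drop_caches), then compare "./mmap_bench model.gguf 5" against "./mmap_bench model.gguf 64".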