
Feature Request: Use all (or configurable #) of threads for model loading, not constrained by --threads specified for inference #11873

Open
VanceVagell opened this issue Feb 14, 2025 · 2 comments
Labels: enhancement (New feature or request)

Comments

@VanceVagell

Prerequisites

  • I am running the latest code. Mention the version if possible as well.
  • I carefully followed the README.md.
  • I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
  • I reviewed the Discussions, and have a new and useful enhancement to share.

Feature Description

Setting a low --threads value artificially slows down disk access during model loading by an order of magnitude.

Could a new option (like --model-load-threads) be added so I can specify the full system limit, and not have model loading artificially constrained?
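For illustration, a minimal sketch of how such an option might be modeled, assuming a hypothetical `n_load_threads` parameter alongside the existing inference thread count. The names and structure here are assumptions for the sake of the example, not the actual llama.cpp API:

```cpp
// Hypothetical sketch: a separate thread count for model loading that
// defaults to all hardware threads instead of the inference --threads value.
#include <cstdint>
#include <thread>

struct cli_params {
    int32_t n_threads      = 5;  // --threads, tuned for inference speed
    int32_t n_load_threads = 0;  // --model-load-threads (hypothetical), 0 = auto
};

// Effective load-thread count: use the explicit value when given,
// otherwise fall back to everything the machine offers (64 in my case).
static int32_t effective_load_threads(const cli_params & params) {
    if (params.n_load_threads > 0) {
        return params.n_load_threads;
    }
    const unsigned hw = std::thread::hardware_concurrency();
    return hw > 0 ? (int32_t) hw : params.n_threads;
}
```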

Motivation

My CPU-based inference server generates tokens most quickly with --threads 5, given my particular hardware setup. However, that also limits the number of threads used for model loading, which makes it take about 10x longer than needed. My system has 32 cores and 64 threads total.

  • When I run with --threads 5, model loading happens at around 200MB/sec (I can see this in "sudo iotop -o").
  • When I run with --threads 64, model loading happens at around 2000MB/sec (2GB/sec), which is my system's max SSD speed.

I need to run with --threads 5 because that optimizes inference speed, but it means I have to wait a very long time for large models to load on initial start.

Possible Implementation

No response

VanceVagell added the enhancement (New feature or request) label on Feb 14, 2025
@ggerganov
Member

Hm, I'm surprised that the number of threads affects model loading - how could that be?

@VanceVagell
Author

This only happens when mmap is enabled (which it is by default).

What I see in iotop is that each of the 64 threads has a throughput of about 30MB/sec, for a total of about 2GB/sec. I'm not sure what is happening during model load, but distributing the load across as many threads as possible increases disk throughput by about 10x, at least on my system with mmap enabled.

Maybe there's some periodic processing as parts of the model are loaded that's preventing fewer threads from moving data fast enough to saturate the disk link?
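For context, here is a standalone sketch of why thread count can matter with mmap: each thread that touches pages of the mapping triggers its own page faults, so spreading the first-touch pass over many threads lets the kernel keep more read requests in flight at once. This is illustrative only, not llama.cpp's actual loader, and the file path is a placeholder:

```cpp
// Illustrative only: a parallel first-touch pass over an mmap'd file.
// Each thread faults in its own chunk of pages, so the kernel can keep
// many read requests in flight instead of serializing on fault latency.
#include <algorithm>
#include <cstddef>
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>
#include <thread>
#include <vector>

static void touch_range(const unsigned char * p, size_t len) {
    const size_t page_size = 4096;
    volatile unsigned char sink = 0;
    for (size_t off = 0; off < len; off += page_size) {
        sink ^= p[off]; // one read per page -> one page fault per page
    }
}

int main() {
    // "model.gguf" is a placeholder path; error handling kept minimal.
    int fd = open("model.gguf", O_RDONLY);
    if (fd < 0) return 1;

    struct stat st;
    if (fstat(fd, &st) != 0) return 1;
    const size_t size = (size_t) st.st_size;

    auto * base = (unsigned char *) mmap(nullptr, size, PROT_READ, MAP_SHARED, fd, 0);
    if (base == MAP_FAILED) return 1;

    const size_t n_threads = 64; // vs. 5: more threads -> more faults in flight
    const size_t chunk     = (size + n_threads - 1) / n_threads;

    std::vector<std::thread> workers;
    for (size_t i = 0; i < n_threads && i * chunk < size; ++i) {
        const size_t begin = i * chunk;
        const size_t len   = std::min(chunk, size - begin);
        workers.emplace_back(touch_range, base + begin, len);
    }
    for (auto & t : workers) {
        t.join();
    }

    munmap(base, size);
    close(fd);
    return 0;
}
```

With a single thread the same pass would fault pages roughly one at a time, bounded by per-fault I/O latency rather than SSD bandwidth, which would be consistent with the ~200MB/sec vs. ~2GB/sec numbers above.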
