
build(deps): bump the pip group across 31 directories with 1 update #68

@dependabot dependabot bot commented on behalf of github Jan 27, 2025

Bumps the pip group with 1 update in the /bentoml/bentos/codestral/22b-v0.1-fp16-7231/src directory: vllm.
Bumps the pip group with 1 update in the /bentoml/bentos/deepseek-r1-distill/llama3.1-8b-fp16-f208/src directory: vllm.
Bumps the pip group with 1 update in the /bentoml/bentos/deepseek-r1-distill/llama3.3-70b-instruct-fp16-5b46/src directory: vllm.
Bumps the pip group with 1 update in the /bentoml/bentos/deepseek-r1-distill/qwen2.5-1.5b-math-fp16-5e2f/src directory: vllm.
Bumps the pip group with 1 update in the /bentoml/bentos/deepseek-r1-distill/qwen2.5-14b-fp16-44c7/src directory: vllm.
Bumps the pip group with 1 update in the /bentoml/bentos/deepseek-r1-distill/qwen2.5-32b-fp16-29c6/src directory: vllm.
Bumps the pip group with 1 update in the /bentoml/bentos/deepseek-r1-distill/qwen2.5-7b-math-fp16-761e/src directory: vllm.
Bumps the pip group with 1 update in the /bentoml/bentos/deepseek-v3/671b-instruct-fp8-70d7/src directory: vllm.
Bumps the pip group with 1 update in the /bentoml/bentos/gemma/2b-instruct-fp16-1320/src directory: vllm.
Bumps the pip group with 1 update in the /bentoml/bentos/gemma/7b-instruct-awq-4bit-a9cb/src directory: vllm.
Bumps the pip group with 1 update in the /bentoml/bentos/gemma/7b-instruct-fp16-10bb/src directory: vllm.
Bumps the pip group with 1 update in the /bentoml/bentos/gemma2/27b-instruct-fp16-c1e5/src directory: vllm.
Bumps the pip group with 1 update in the /bentoml/bentos/gemma2/9b-instruct-fp16-fdaa/src directory: vllm.
Bumps the pip group with 1 update in the /bentoml/bentos/jamba1.5/mini-fp16-3615/src directory: vllm.
Bumps the pip group with 1 update in the /bentoml/bentos/llama2/13b-chat-fp16-49e4/src directory: vllm.
Bumps the pip group with 1 update in the /bentoml/bentos/llama2/70b-chat-fp16-cc77/src directory: vllm.
Bumps the pip group with 1 update in the /bentoml/bentos/llama2/7b-chat-awq-4bit-cc6f/src directory: vllm.
Bumps the pip group with 1 update in the /bentoml/bentos/llama2/7b-chat-fp16-81cf/src directory: vllm.
Bumps the pip group with 1 update in the /bentoml/bentos/llama3.1-nemotron/70b-instruct-fp16-8d09/src directory: vllm.
Bumps the pip group with 1 update in the /bentoml/bentos/llama3.1/405b-instruct-awq-4bit-bbd0/src directory: vllm.
Bumps the pip group with 1 update in the /bentoml/bentos/llama3.1/70b-instruct-awq-4bit-e86e/src directory: vllm.
Bumps the pip group with 1 update in the /bentoml/bentos/llama3.1/70b-instruct-fp16-d198/src directory: vllm.
Bumps the pip group with 1 update in the /bentoml/bentos/llama3.1/8b-instruct-awq-4bit-b149/src directory: vllm.
Bumps the pip group with 1 update in the /bentoml/bentos/llama3.1/8b-instruct-fp16-cbdd/src directory: vllm.
Bumps the pip group with 1 update in the /bentoml/bentos/llama3.2/11b-vision-instruct-714f/src directory: vllm.
Bumps the pip group with 1 update in the /bentoml/bentos/llama3.2/1b-instruct-fp16-ce2d/src directory: vllm.
Bumps the pip group with 1 update in the /bentoml/bentos/llama3.2/3b-instruct-fp16-be73/src directory: vllm.
Bumps the pip group with 1 update in the /bentoml/bentos/llama3.3/70b-instruct-fp16-419e/src directory: vllm.
Bumps the pip group with 1 update in the /bentoml/bentos/llama3/70b-instruct-awq-4bit-f693/src directory: vllm.
Bumps the pip group with 1 update in the /bentoml/bentos/llama3/70b-instruct-fp16-7265/src directory: vllm.
Bumps the pip group with 1 update in the /bentoml/bentos/mistral-large/123b-instruct-awq-4bit-13a5/src directory: vllm.

Updates `vllm` from 0.6.6.post1 to 0.7.0

Release notes

Sourced from vllm's releases.

v0.7.0

Highlights

  • vLLM's V1 engine is ready for testing! This is a rewritten engine designed for performance and architectural simplicity. You can turn it on by setting environment variable VLLM_USE_V1=1. See our blog for more details. (44 commits).
  • New methods (LLM.sleep, LLM.wake_up, LLM.collective_rpc, LLM.reset_prefix_cache) in vLLM for the post training frameworks! (#12361, #12084, #12284).
  • torch.compile is now fully integrated in vLLM and enabled by default in V1. You can also enable it explicitly via the -O3 engine parameter. (#11614, #12243, #12043, #12191, #11677, #12182, #12246).
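The V1 opt-in from the first highlight can be sketched in a few lines. The environment variable must be set before vLLM is imported; the model name below is illustrative and assumes vLLM 0.7.0 is installed:

```python
import os

# Opt in to the experimental V1 engine before importing vllm
# (the variable is read at import/engine-construction time).
os.environ["VLLM_USE_V1"] = "1"

# Hypothetical usage once vLLM is installed -- model choice is an assumption:
# from vllm import LLM
# llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")

print(os.environ["VLLM_USE_V1"])
```

Unsetting the variable (or setting it to `0`) falls back to the existing engine.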

This release features

  • 400 commits from 132 contributors, including 57 new contributors.
    • 28 CI and build enhancements, including testing for nightly torch (#12270) and inclusion of genai-perf for benchmark (#10704).
    • 58 documentation enhancements, including reorganized documentation structure (#11645, #11755, #11766, #11843, #11896).
    • More than 161 bug fixes and miscellaneous enhancements.

Features

  • Distributed:
    • Support torchrun and SPMD-style offline inference (#12071)
    • New collective_rpc abstraction (#12151, #11256)
  • API Server: Jina- and Cohere-compatible Rerank API (#12376)
  • Kernels:
    • Flash Attention 3 Support (#12093)
    • Punica prefill kernels fusion (#11234)
    • For Deepseek V3: optimize moe_align_block_size for cuda graph and large num_experts (#12222)

Others

  • Benchmark: new script for CPU offloading (#11533)
  • Security: Set weights_only=True when using torch.load() (#12366)
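The `weights_only=True` change above guards against pickle's ability to execute arbitrary code at load time. A small standalone sketch of that risk, using plain `pickle` rather than torch (the `eval` payload is a stand-in for a tampered checkpoint); with `weights_only=True`, `torch.load` restricts unpickling to tensor-related types and rejects objects like this:

```python
import pickle

# Hypothetical malicious object: unpickling it invokes eval(), a stand-in
# for the arbitrary code a tampered checkpoint file could run on load.
class Payload:
    def __reduce__(self):
        return (eval, ("40 + 2",))

blob = pickle.dumps(Payload())
result = pickle.loads(blob)  # eval("40 + 2") executes during load
print(result)  # 42
```

This is why loading untrusted checkpoints with a full unpickler is unsafe, and why the release defaults to the restricted path.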

What's Changed

... (truncated)

Commits




Updates `vllm` from 0.6.6.post1 to 0.7.0
- [Release notes](https://github.com/vllm-project/vllm/releases)
- [Commits](vllm-project/vllm@v0.6.6.post1...v0.7.0)


---
updated-dependencies:
- dependency-name: vllm
  dependency-type: direct:production
  dependency-group: pip
...

Signed-off-by: dependabot[bot] <[email protected]>
@dependabot dependabot bot added dependencies Pull requests that update a dependency file python Pull requests that update Python code labels Jan 27, 2025
@aarnphm aarnphm closed this Feb 6, 2025
dependabot bot commented on behalf of github Feb 6, 2025

This pull request was built based on a group rule. Closing it will not ignore any of these versions in future pull requests.

To ignore these dependencies, configure ignore rules in dependabot.yml
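A minimal sketch of such an ignore rule in `.github/dependabot.yml` (the directory shown is one of the 31 covered by this PR; the schedule interval is an assumed value to adapt to the existing config):

```yaml
version: 2
updates:
  - package-ecosystem: "pip"
    directory: "/bentoml/bentos/codestral/22b-v0.1-fp16-7231/src"
    schedule:
      interval: "weekly"  # assumed; match your existing configuration
    ignore:
      - dependency-name: "vllm"
        versions: ["0.7.0"]  # or ">=0.7.0" to skip this and later releases
```

Each directory in the group would need its own `updates` entry (or a shared rule via `directories`, if your Dependabot version supports it).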

@dependabot dependabot bot deleted the dependabot/pip/bentoml/bentos/codestral/22b-v0.1-fp16-7231/src/pip-a80ba3bf45 branch February 6, 2025 23:21