
chore(deps): update auto merged updates #703

Merged: 1 commit merged into main from renovate/auto-merged-updates on Aug 5, 2024

Conversation

@platform-engineering-bot (Collaborator) commented Jul 29, 2024

This PR contains the following updates:

| Package | Update | Change |
|---|---|---|
| PyMuPDF (changelog) | patch | ==1.24.7 -> ==1.24.9 |
| chromadb | patch | ==0.5.4 -> ==0.5.5 |
| fastapi-cli | patch | ==0.0.4 -> ==0.0.5 |
| llama-cpp-python (changelog) | patch | ==0.2.82 -> ==0.2.85 |
| timm | patch | ==1.0.7 -> ==1.0.8 |
| tqdm (changelog) | patch | ==4.66.4 -> ==4.66.5 |
| uvicorn (changelog) | patch | ==0.30.3 -> ==0.30.5 |

Release Notes

pymupdf/pymupdf (PyMuPDF)

v1.24.9: PyMuPDF-1.24.9 released

PyMuPDF-1.24.9 has been released.

Wheels for Windows, Linux and MacOS, and the sdist, are available on pypi.org and can be installed in the usual way, for example:

python -m pip install --upgrade pymupdf

[Linux-aarch64 wheels will be built and uploaded later.]

Changes in version 1.24.9 (2024-07-24)

  • Incremented MuPDF version to 1.24.8.

v1.24.8: PyMuPDF-1.24.8 released

PyMuPDF-1.24.8 has been released.

Wheels for Windows, Linux and MacOS, and the sdist, are available on pypi.org and can be installed in the usual way, for example:

python -m pip install --upgrade pymupdf

[Linux-aarch64 wheels will be built and uploaded later.]

Changes in version 1.24.8 (2024-07-22)

Other:

  • Fixed various spelling mistakes spotted by codespell.
  • Improved how we modify MuPDF's default configuration on Windows.
  • Make text search work with ligatures.

chroma-core/chroma (chromadb)

v0.5.5

Version: 0.5.5
Git ref: refs/tags/0.5.5
Build Date: 2024-07-23T01:01
PIP Package: chroma-0.5.5.tar.gz
Github Container Registry Image: ghcr.io/chroma-core/chroma:0.5.5
DockerHub Image: chromadb/chroma:0.5.5

What's Changed

Full Changelog: chroma-core/chroma@0.5.4...0.5.5

fastapi/fastapi-cli (fastapi-cli)

v0.0.5

Breaking Changes
  • ♻️ Add fastapi-cli[standard] including Uvicorn, making fastapi-cli and fastapi-cli-slim have the same packages. PR #55 by @tiangolo.
  • ➕ Keep Uvicorn in the default dependencies. PR #57 by @tiangolo.
Summary

Install with:

pip install "fastapi[standard]"

Or, if for some reason you want to install only the FastAPI CLI:

pip install "fastapi-cli[standard]"

Technical Details

Before this, fastapi-cli would include Uvicorn and fastapi-cli-slim would not include Uvicorn.

In a future version, fastapi-cli will not include Uvicorn unless it is installed with fastapi-cli[standard].

FastAPI version 0.112.0 has a fastapi[standard] extra, and that one includes fastapi-cli[standard].

Before, you would run pip install fastapi or pip install fastapi-cli. Now you should include the standard optional dependencies (unless you want to exclude one of them): pip install "fastapi[standard]".
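For context, a minimal sketch of the kind of app the standard extras are meant to serve: fastapi-cli provides the fastapi dev and fastapi run commands, and Uvicorn is the server they use under the hood. The file name main.py and the route are illustrative, not from these release notes.

```python
# main.py -- minimal FastAPI app (illustrative). With `pip install "fastapi[standard]"`,
# fastapi-cli can serve it via `fastapi dev main.py`, using Uvicorn underneath.
from fastapi import FastAPI

app = FastAPI()


@app.get("/")
def read_root():
    return {"hello": "world"}
```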

abetlen/llama-cpp-python (llama-cpp-python)

v0.2.85

v0.2.84

v0.2.83

huggingface/pytorch-image-models (timm)

v1.0.8

July 28, 2024
  • Add mobilenet_edgetpu_v2_m weights w/ ra4 mnv4-small based recipe. 80.1% top-1 @ 224 and 80.7 @ 256.
  • Release 1.0.8
July 26, 2024
  • More MobileNet-v4 weights, ImageNet-12k pretrain w/ fine-tunes, and anti-aliased ConvLarge models
| model | top1 | top1_err | top5 | top5_err | param_count | img_size |
|---|---|---|---|---|---|---|
| mobilenetv4_conv_aa_large.e230_r448_in12k_ft_in1k | 84.99 | 15.01 | 97.294 | 2.706 | 32.59 | 544 |
| mobilenetv4_conv_aa_large.e230_r384_in12k_ft_in1k | 84.772 | 15.228 | 97.344 | 2.656 | 32.59 | 480 |
| mobilenetv4_conv_aa_large.e230_r448_in12k_ft_in1k | 84.64 | 15.36 | 97.114 | 2.886 | 32.59 | 448 |
| mobilenetv4_conv_aa_large.e230_r384_in12k_ft_in1k | 84.314 | 15.686 | 97.102 | 2.898 | 32.59 | 384 |
| mobilenetv4_conv_aa_large.e600_r384_in1k | 83.824 | 16.176 | 96.734 | 3.266 | 32.59 | 480 |
| mobilenetv4_conv_aa_large.e600_r384_in1k | 83.244 | 16.756 | 96.392 | 3.608 | 32.59 | 384 |
| mobilenetv4_hybrid_medium.e200_r256_in12k_ft_in1k | 82.99 | 17.01 | 96.67 | 3.33 | 11.07 | 320 |
| mobilenetv4_hybrid_medium.e200_r256_in12k_ft_in1k | 82.364 | 17.636 | 96.256 | 3.744 | 11.07 | 256 |

| model | top1 | top1_err | top5 | top5_err | param_count | img_size |
|---|---|---|---|---|---|---|
| efficientnet_b0.ra4_e3600_r224_in1k | 79.364 | 20.636 | 94.754 | 5.246 | 5.29 | 256 |
| efficientnet_b0.ra4_e3600_r224_in1k | 78.584 | 21.416 | 94.338 | 5.662 | 5.29 | 224 |
| mobilenetv1_100h.ra4_e3600_r224_in1k | 76.596 | 23.404 | 93.272 | 6.728 | 5.28 | 256 |
| mobilenetv1_100.ra4_e3600_r224_in1k | 76.094 | 23.906 | 93.004 | 6.996 | 4.23 | 256 |
| mobilenetv1_100h.ra4_e3600_r224_in1k | 75.662 | 24.338 | 92.504 | 7.496 | 5.28 | 224 |
| mobilenetv1_100.ra4_e3600_r224_in1k | 75.382 | 24.618 | 92.312 | 7.688 | 4.23 | 224 |
  • Prototype of set_input_size() added to vit and swin v1/v2 models to allow changing image size, patch size, and window size after model creation (a usage sketch follows this list).
  • Improved support in swin for different size handling; in addition to set_input_size, always_partition and strict_img_size args have been added to __init__ to allow more flexible input size constraints.
  • Fix out-of-order indices info for the intermediate 'Getter' feature wrapper, and check for out-of-range indices there.
  • Add several tiny < .5M param models for testing that are actually trained on ImageNet-1k
| model | top1 | top1_err | top5 | top5_err | param_count | img_size | crop_pct |
|---|---|---|---|---|---|---|---|
| test_efficientnet.r160_in1k | 47.156 | 52.844 | 71.726 | 28.274 | 0.36 | 192 | 1.0 |
| test_byobnet.r160_in1k | 46.698 | 53.302 | 71.674 | 28.326 | 0.46 | 192 | 1.0 |
| test_efficientnet.r160_in1k | 46.426 | 53.574 | 70.928 | 29.072 | 0.36 | 160 | 0.875 |
| test_byobnet.r160_in1k | 45.378 | 54.622 | 70.572 | 29.428 | 0.46 | 160 | 0.875 |
| test_vit.r160_in1k | 42.0 | 58.0 | 68.664 | 31.336 | 0.37 | 192 | 1.0 |
| test_vit.r160_in1k | 40.822 | 59.178 | 67.212 | 32.788 | 0.37 | 160 | 0.875 |
  • Fix vit reg token init, thanks Promisery
  • Other misc fixes
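As referenced above, here is a minimal sketch of the set_input_size() prototype on a ViT. The model name, target resolution, and keyword argument are assumptions for illustration, and since the API is described as a prototype it may change.

```python
# Sketch: reconfigure a timm ViT for a new input resolution after creation
# (timm >= 1.0.8). Model name, sizes, and kwargs are illustrative assumptions.
import timm
import torch

model = timm.create_model("vit_base_patch16_224", pretrained=False)

# Prototype API: change the expected image size after the model is built.
model.set_input_size(img_size=(384, 384))

x = torch.randn(1, 3, 384, 384)
with torch.no_grad():
    out = model(x)
print(out.shape)  # classification logits at the new resolution
```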
June 24, 2024
  • 3 more MobileNetV4 hybrid weights with different MQA weight init scheme
| model | top1 | top1_err | top5 | top5_err | param_count | img_size |
|---|---|---|---|---|---|---|
| mobilenetv4_hybrid_large.ix_e600_r384_in1k | 84.356 | 15.644 | 96.892 | 3.108 | 37.76 | 448 |
| mobilenetv4_hybrid_large.ix_e600_r384_in1k | 83.990 | 16.010 | 96.702 | 3.298 | 37.76 | 384 |
| mobilenetv4_hybrid_medium.ix_e550_r384_in1k | 83.394 | 16.606 | 96.760 | 3.240 | 11.07 | 448 |
| mobilenetv4_hybrid_medium.ix_e550_r384_in1k | 82.968 | 17.032 | 96.474 | 3.526 | 11.07 | 384 |
| mobilenetv4_hybrid_medium.ix_e550_r256_in1k | 82.492 | 17.508 | 96.278 | 3.722 | 11.07 | 320 |
| mobilenetv4_hybrid_medium.ix_e550_r256_in1k | 81.446 | 18.554 | 95.704 | 4.296 | 11.07 | 256 |
  • florence2 weight loading in DaViT model
tqdm/tqdm (tqdm)

v4.66.5: tqdm v4.66.5 stable

encode/uvicorn (uvicorn)

v0.30.5

Fixed
  • Don't close connection before receiving body on H11 (#2408)

v0.30.4

Fixed
  • Close connection when h11 sets client state to MUST_CLOSE (#2375)

Configuration

📅 Schedule: Branch creation - "before 4am on Monday" (UTC), Automerge - At any time (no schedule defined).

🚦 Automerge: Enabled.

Rebasing: Whenever PR is behind base branch, or you tick the rebase/retry checkbox.

👻 Immortal: This PR will be recreated if closed unmerged. Get config help if that's undesired.



This PR has been generated by Renovate Bot.

@axel7083 (Contributor) commented:

llama-cpp-python ==0.2.82 -> ==0.2.84 introduces a regression when using LlamaGrammar. I think it is better to wait before merging this bump.

see abetlen/llama-cpp-python#1623 and abetlen/llama-cpp-python#1636
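For context, a minimal sketch of the kind of grammar-constrained call affected by the regression; the model path and GBNF grammar below are placeholders, not taken from the linked issues.

```python
# Sketch: grammar-constrained generation with llama-cpp-python's LlamaGrammar.
# The model path and grammar are placeholders for illustration only.
from llama_cpp import Llama, LlamaGrammar

# Tiny GBNF grammar that restricts output to "yes" or "no".
grammar = LlamaGrammar.from_string('root ::= "yes" | "no"')

llm = Llama(model_path="./model.gguf")  # placeholder local GGUF model

out = llm(
    "Is water wet? Answer yes or no: ",
    grammar=grammar,  # constrain sampling to the grammar
    max_tokens=8,
)
print(out["choices"][0]["text"])
```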

@platform-engineering-bot force-pushed the renovate/auto-merged-updates branch 3 times, most recently from af96fe4 to 4fc70e7 on August 2, 2024 at 12:46
@rhatdan (Member) commented Aug 2, 2024

Since the tests pass now, @axel7083, is this OK to merge?

@platform-engineering-bot force-pushed the renovate/auto-merged-updates branch 2 times, most recently from c6122d2 to 4a91e66 on August 4, 2024 at 00:23
Signed-off-by: Platform Engineering Bot <[email protected]>
@platform-engineering-bot force-pushed the renovate/auto-merged-updates branch from 4a91e66 to 7e9144d on August 4, 2024 at 11:03
@rhatdan (Member) commented Aug 5, 2024

I am going to assume that merging is now safe.

@rhatdan closed this Aug 5, 2024
@rhatdan reopened this Aug 5, 2024
@rhatdan merged commit 023faf1 into main Aug 5, 2024
17 checks passed
@platform-engineering-bot deleted the renovate/auto-merged-updates branch August 5, 2024 at 10:44