Releases: huggingface/huggingface_hub
v0.22.0: Chat completion, inference types and hub mixins!
Discuss the release in our Community Tab. Feedback is welcome!! 🤗
✨ InferenceClient
Support for inference tools continues to improve in huggingface_hub
. On the menu in this release? A new chat_completion
API and fully typed inputs/outputs!
Chat-completion API!
A long-awaited API has just landed in huggingface_hub
! InferenceClient.chat_completion
follows most of OpenAI's API, making it much easier to integrate with existing tools.
Technically speaking it uses the same backend as the text-generation
task, but requires a preprocessing step to format the list of messages into a single text prompt. The chat template is rendered server-side when models are powered by TGI, which is the case for most LLMs: Llama, Zephyr, Mistral, Gemma, etc. Otherwise, the templating happens client-side, which requires the minijinja package to be installed. We are actively working on bridging this gap, aiming to render all templates server-side in the future.
>>> from huggingface_hub import InferenceClient
>>> messages = [{"role": "user", "content": "What is the capital of France?"}]
>>> client = InferenceClient("HuggingFaceH4/zephyr-7b-beta")
# Batch completion
>>> client.chat_completion(messages, max_tokens=100)
ChatCompletionOutput(
choices=[
ChatCompletionOutputChoice(
finish_reason='eos_token',
index=0,
message=ChatCompletionOutputChoiceMessage(
content='The capital of France is Paris. The official name of the city is "Ville de Paris" (City of Paris) and the name of the country\'s governing body, which is located in Paris, is "La République française" (The French Republic). \nI hope that helps! Let me know if you need any further information.'
)
)
],
created=1710498360
)
# Stream new tokens one by one
>>> for token in client.chat_completion(messages, max_tokens=10, stream=True):
... print(token)
ChatCompletionStreamOutput(choices=[ChatCompletionStreamOutputChoice(delta=ChatCompletionStreamOutputDelta(content='The', role='assistant'), index=0, finish_reason=None)], created=1710498504)
ChatCompletionStreamOutput(choices=[ChatCompletionStreamOutputChoice(delta=ChatCompletionStreamOutputDelta(content=' capital', role='assistant'), index=0, finish_reason=None)], created=1710498504)
(...)
ChatCompletionStreamOutput(choices=[ChatCompletionStreamOutputChoice(delta=ChatCompletionStreamOutputDelta(content=' may', role='assistant'), index=0, finish_reason=None)], created=1710498504)
ChatCompletionStreamOutput(choices=[ChatCompletionStreamOutputChoice(delta=ChatCompletionStreamOutputDelta(content=None, role=None), index=0, finish_reason='length')], created=1710498504)
- Implement InferenceClient.chat_completion + use new types for text-generation by @Wauplin in #2094
- Fix InferenceClient.text_generation for non-tgi models by @Wauplin in #2136
- #2153 by @Wauplin in #2153
Inference types
We are currently working towards more consistency in task definitions across the Hugging Face ecosystem. This is no easy job, but a major milestone has recently been achieved! All inputs and outputs of the main ML tasks are now fully specified as JSON schema objects. This is the first brick needed to have consistent expectations when running inference across our stack: transformers (Python), transformers.js (TypeScript), Inference API (Python), Inference Endpoints (Python), Text Generation Inference (Rust), Text Embeddings Inference (Rust), InferenceClient (Python), Inference.js (TypeScript), etc.
Integrating those definitions will require more work but huggingface_hub
is one of the first tools to integrate them. As a start, all InferenceClient
return values are now typed dataclasses. Furthermore, typed dataclasses have been generated for all tasks' inputs and outputs. This means you can now integrate them in your own library to ensure consistency with the Hugging Face ecosystem. Specifications are open-source (see here) meaning anyone can access and contribute to them. Python's generated classes are documented here.
Here is a short example showcasing the new output types:
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> client.object_detection("people.jpg")
[
ObjectDetectionOutputElement(
score=0.9486683011054993,
label='person',
box=ObjectDetectionBoundingBox(xmin=59, ymin=39, xmax=420, ymax=510)
),
...
]
Note that those dataclasses are backward-compatible with the dict-based interface that was previously in use. In the example above, both ObjectDetectionBoundingBox(...).xmin and ObjectDetectionBoundingBox(...)["xmin"] are correct, though the former is now the preferred approach.
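For instance, continuing the example above, both access styles return the same value:
>>> box = client.object_detection("people.jpg")[0].box
>>> box.xmin        # attribute access (preferred)
59
>>> box["xmin"]     # dict-style access (kept for backward compatibility)
59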
- Generate inference types + start using output types by @Wauplin in #2036
- Add = None at optional parameters by @LysandreJik in #2095
- Fix inference types shared between tasks by @Wauplin in #2125
🧩 ModelHubMixin
ModelHubMixin is a class that can be used as a parent class for the objects in your library in order to provide built-in serialization methods to upload and download pretrained models from the Hub. This mixin is adapted into a PyTorchModelHubMixin that can serialize and deserialize any PyTorch model. The 0.22 release brings its share of improvements to these classes:
- Better support of init values. If you instantiate a model with some custom arguments, the values will be automatically stored in a config.json file and restored when reloading the model from pretrained weights. This should unlock integrations with external libraries in a much smoother way.
- Library authors integrating the hub mixin can now define custom metadata for their library: library name, tags, docs URL and repo URL. These need to be defined only once when integrating the library. Any model pushed to the Hub using the library will then be easily discoverable thanks to those tags.
- A base modelcard is generated for each saved model. This modelcard includes default tags (e.g. model_hub_mixin) and custom tags from the library (see 2.). You can extend/modify this modelcard by overwriting the generate_model_card method (a sketch is shown after the example below).
>>> import torch
>>> import torch.nn as nn
>>> from huggingface_hub import PyTorchModelHubMixin
# Define your Pytorch model exactly the same way you are used to
>>> class MyModel(
... nn.Module,
... PyTorchModelHubMixin, # multiple inheritance
... library_name="keras-nlp",
... tags=["keras"],
... repo_url="https://github.com/keras-team/keras-nlp",
... docs_url="https://keras.io/keras_nlp/",
... # ^ optional metadata to generate model card
... ):
... def __init__(self, hidden_size: int = 512, vocab_size: int = 30000, output_size: int = 4):
... super().__init__()
... self.param = nn.Parameter(torch.rand(hidden_size, vocab_size))
...         self.linear = nn.Linear(vocab_size, output_size)
... def forward(self, x):
... return self.linear(x + self.param)
# 1. Create model
>>> model = MyModel(hidden_size=128)
# Config is automatically created based on input + default values
>>> model._hub_mixin_config
{"hidden_size": 128, "vocab_size": 30000, "output_size": 4}
# 2. (optional) Save model to local directory
>>> model.save_pretrained("path/to/my-awesome-model")
# 3. Push model weights to the Hub
>>> model.push_to_hub("my-awesome-model")
# 4. Initialize model from the Hub => config has been preserved
>>> model = MyModel.from_pretrained("username/my-awesome-model")
>>> model._hub_mixin_config
{"hidden_size": 128, "vocab_size": 30000, "output_size": 4}
# Model card has been correctly populated
>>> from huggingface_hub import ModelCard
>>> card = ModelCard.load("username/my-awesome-model")
>>> card.data.tags
["keras", "pytorch_model_hub_mixin", "model_hub_mixin"]
>>> card.data.library_name
"keras-nlp"
For more details on how to integrate these classes, check out the integration guide.
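For instance, here is a minimal sketch of overriding generate_model_card to append a custom section to the default card. It assumes the hook takes no required arguments beyond *args/**kwargs; adapt it to the exact signature in your huggingface_hub version.
>>> import torch.nn as nn
>>> from huggingface_hub import ModelCard, PyTorchModelHubMixin
>>> class MyModelWithCard(nn.Module, PyTorchModelHubMixin):
...     def generate_model_card(self, *args, **kwargs) -> ModelCard:
...         # Start from the default card (default tags + library metadata)...
...         card = super().generate_model_card(*args, **kwargs)
...         # ...then append a custom section before the card is saved or pushed.
...         card.text += "\n## Intended use\nThis model is a demo of the hub mixin.\n"
...         return card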
- Fix ModelHubMixin: pass config when __init__ accepts **kwargs by @Wauplin in #2058
- [PyTorchModelHubMixin] Fix saving model with shared tensors by @NielsRogge in #2086
- Correctly inject config in PytorchModelHubMixin by @Wauplin in #2079
- Fix passing kwargs in PytorchHubMixin by @Wauplin in #2093
- Generate modelcard in ModelHubMixin by @Wauplin in #2080
- Fix ModelHubMixin: save config only if doesn't exist by @Wauplin in #2105...
[v0.21.4] Hot-fix: Fix saving model with shared tensors
Release v0.21 introduced a breaking change making it impossible to save a PytorchModelHubMixin
-based model that has shared tensors. This has been fixed in #2086.
Full Changelog: v0.21.3...v0.21.4
[v0.21.3] Hot-fix: ModelHubMixin pass config when `__init__` accepts `**kwargs`
More details in #2058.
Full Changelog: v0.21.2...v0.21.3
v0.21.2: hot-fix: [HfFileSystem] Fix glob with pattern without wildcards
See #2056. (+#2050 shipped as v0.21.1).
Full Changelog: v0.21.0...v0.21.2
v0.21.0: dataclasses everywhere, file-system, PyTorchModelHubMixin, serialization and more.
Discuss the release in our Community Tab. Feedback welcome!! 🤗
🖇️ Dataclasses everywhere!
All objects returned by the HfApi
client are now dataclasses!
In the past, returned objects were variously dataclasses, typed dictionaries, untyped dictionaries, or even plain classes. This is now all harmonized with the goal of improving developer experience.
Kudos to the community for implementing and testing the whole harmonization process. Thanks again for the contributions!
- Use dataclasses for all objects returned by HfApi #1911 by @Ahmedniz1 in #1974
- Updating HfApi objects to use dataclass by @Ahmedniz1 in #1988
- Dataclasses for objects returned hf api by @NouamaneELGueddarii in #1993
💾 FileSystem
The HfFileSystem class implements the fsspec interface to allow loading and writing files with a filesystem-like interface. This interface is heavily used by the datasets library, and this release further improves the efficiency and robustness of the integration.
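As a reminder, here is a minimal sketch of that interface (the repo id is hypothetical, and the write step requires write access to it):
>>> from huggingface_hub import HfFileSystem
>>> fs = HfFileSystem()
# Read a file hosted in a dataset repo
>>> with fs.open("datasets/username/my-dataset/data.csv", "r") as f:
...     header = f.readline()
# Write a file back to the same repo
>>> with fs.open("datasets/username/my-dataset/notes.txt", "w") as f:
...     f.write("processed with huggingface_hub")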
- Pass revision in path to AbstractBufferedFile init by @albertvillanova in #1948
- [HfFileSystem] Fix rm on branch by @lhoestq in #1957
- Retry fetching data on 502 error in HfFileSystem by @mariosasko in #1981
- Add HfFileSystemStreamFile by @lhoestq in #1967
- [HfFileSystem] Copy non lfs files by @lhoestq in #1996
- Add HfFileSystem.url method by @mariosasko in #2027
🧩 Pytorch Hub Mixin
The PyTorchModelHubMixin class lets you upload any PyTorch model to the Hub in a few lines of code. More precisely, it is a class that can be inherited in any nn.Module class to add the from_pretrained, save_pretrained and push_to_hub helpers to your class. It handles serialization and deserialization of weights and configs for you and enables download counts on the Hub.
With this release, we've fixed two pain points that were holding users back from adopting this mixin:
- Configs are now better handled. The mixin automatically detects if the base class defines a config, saves it on the Hub and then injects it at load time, either as a dictionary or a dataclass depending on the base class's expectations.
- Weights are now saved as .safetensors files instead of PyTorch pickles, for safety reasons. Loading from previous PyTorch pickles is still supported, but we are moving toward completely deprecating them (in a mid-to-long-term plan).
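Here is a minimal usage sketch (class and repo names are hypothetical):
>>> import torch.nn as nn
>>> from huggingface_hub import PyTorchModelHubMixin
>>> class MyTinyModel(nn.Module, PyTorchModelHubMixin):
...     def __init__(self):
...         super().__init__()
...         self.layer = nn.Linear(4, 4)
...     def forward(self, x):
...         return self.layer(x)
>>> model = MyTinyModel()
# Weights are serialized to a model.safetensors file
>>> model.save_pretrained("path/to/local-dir")
>>> model.push_to_hub("username/my-tiny-model")
>>> reloaded = MyTinyModel.from_pretrained("username/my-tiny-model")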
- Better config support in ModelHubMixin by @Wauplin in #2001
- Use safetensors by default for PyTorchModelHubMixin by @bmuskalla in #2033
✨ InferenceClient improvements
The audio-to-audio task is now supported by both the InferenceClient and the AsyncInferenceClient!
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> audio_output = client.audio_to_audio("audio.flac")
>>> for i, item in enumerate(audio_output):
...     with open(f"output_{i}.flac", "wb") as f:
...         f.write(item["blob"])
- Added audio to audio in inference client by @Ahmedniz1 in #2020
Also fixed a few things:
- Fix intolerance for new field in TGI stream response: 'index' by @danielpcox in #2006
- Fix optional model in tabular tasks by @Wauplin in #2018
- Added best_of to non-TGI ignored parameters by @dopc in #1949
📤 Model serialization
With the aim of harmonizing repo structures and file serialization on the Hub, we added a new serialization module with a first helper, split_state_dict_into_shards, that takes a state dict and splits it into shards. The code implementation is mostly taken from transformers and aims to be reused by other libraries in the ecosystem. It seamlessly supports torch, tensorflow and numpy weights, and can be easily extended to other frameworks.
This is a first step in the harmonization process and more loading/saving helpers will be added soon.
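As an illustration, here is a sketch using the torch-specific variant. The helper name split_torch_state_dict_into_shards and the fields on the returned object are assumptions that may differ slightly depending on your huggingface_hub version.
>>> import torch
>>> from huggingface_hub import split_torch_state_dict_into_shards
>>> state_dict = {
...     "layer_1.weight": torch.rand(1024, 1024),
...     "layer_2.weight": torch.rand(1024, 1024),
... }
# Split into shards of at most ~1MB each (the default is several GB)
>>> split = split_torch_state_dict_into_shards(state_dict, max_shard_size="1MB")
# `split.is_sharded` tells whether more than one shard was needed and
# `split.filename_to_tensors` maps each shard filename to the tensors it contains.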
📚 Documentation
🌐 Translations
The community is actively working to translate the huggingface_hub documentation into other languages. We now have docs available in Simplified Chinese (here) and in French (here) to help democratize good machine learning!
- [i18n-CN] Translated some files to simplified Chinese #1915 by @2404589803 in #1916
- Update .github workflow to build cn docs on PRs by @Wauplin in #1931
- [i18n-FR] Translated files in french and reviewed them by @JibrilEl in #2024
Docs misc
- Document base_model in modelcard metadata by @Wauplin in #1936
- Update the documentation of add_collection_item by @FremyCompany in #1958
- Docs[i18n-en]: added pkgx as an installation method to the docs by @michaelessiet in #1955
- Added hf_transfer extra into setup.py and docs/ by @jamesbraza in #1970
- Documenting CLI default for download --repo-type by @jamesbraza in #1986
- Update repository.md by @xmichaelmason in #2010
Docs fixes
- Fix URL in get_safetensors_metadata docstring by @Wauplin in #1951
- Fix grammar by @Anthonyg5005 in #2003
- Fix doc by @jordane95 in #2013
- typo fix by @Decryptu in #2035
🛠️ Misc improvements
Creating a commit with an invalid README will fail early instead of uploading all LFS files before failing to commit.
Added a revision_exists helper, working similarly to repo_exists and file_exists:
>>> from huggingface_hub import revision_exists
>>> revision_exists("google/gemma-7b", "float16")
True
>>> revision_exists("google/gemma-7b", "not-a-revision")
False
InferenceEndpoint.wait(...) now raises an error if the endpoint is in a failed state.
Improved progress bar when downloading a file
Other stuff:
- added will not echo message to the login token message by @vtrenton in #1925
- Raise if repo is disabled by @Wauplin in #1965
- Fix timezone in datetime parsing by @Wauplin in #1982
- retry on any 5xx on upload by @Wauplin in #2026
💔 Breaking changes
- Classes ModelFilter and DatasetFilter are deprecated when listing models and datasets, in favor of a simpler API that lets you pass the parameters directly to list_models and list_datasets.
>>> from huggingface_hub import list_models, ModelFilter
# use
>>> list_models(language="zh")
# instead of
>>> list_models(filter=ModelFilter(language="zh"))
Cleaner, right? ModelFilter and DatasetFilter will still be supported until the v0.24 release.
- In the inference client, ModelStatus.compute_type is not a string anymore but a dictionary with more detailed info...
0.20.3 hot-fix: Fix HfFolder login when env variable not set
This patch release fixes an issue when retrieving the locally saved token using huggingface_hub.HfFolder.get_token
. For the record, this is a "planned to be deprecated" method, in favor of huggingface_hub.get_token
which is more robust and versatile. The issue came from a breaking change introduced in #1895 meaning only 0.20.x
is affected.
For more details, please refer to #1966.
Full Changelog: v0.20.2...v0.20.3
0.20.2 hot-fix: Fix concurrency issues in google colab login
A concurrency issue when using userdata.get to retrieve the HF_TOKEN secret led to deadlocks when downloading files in parallel. This hot-fix release fixes the issue by using a global lock before trying to get the token from the secrets vault. More details in #1953.
Full Changelog: v0.20.1...v0.20.2
0.20.1: hot-fix Fix circular import
This hot-fix release fixes a circular import error that occurred when importing the login or logout helpers from huggingface_hub.
Related PR: #1930
Full Changelog: v0.20.0...v0.20.1
v0.20.0: Authentication, speed, safetensors metadata, access requests and more.
(Discuss the release in our Community Tab. Feedback welcome!! 🤗)
🔐 Authentication
Authentication has been greatly improved in Google Colab. The best way to authenticate in a Colab notebook is to define a HF_TOKEN
secret in your personal secrets. When a notebook tries to reach the Hub, a pop-up will ask you if you want to share the HF_TOKEN
secret with this notebook, as an opt-in mechanism. This way, there is no need to call huggingface_hub.login and copy-paste your token anymore! 🔥🔥🔥
In addition to the Google Colab integration, the login guide has been revisited to focus on security. It is recommended to authenticate either using huggingface_hub.login
or the HF_TOKEN
environment variable, rather than passing a hardcoded token in your scripts. Check out the new guide here.
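In practice, your scripts and notebooks only need one of the following:
>>> from huggingface_hub import login
# Option 1: interactive login, prompts for the token and stores it locally
>>> login()
# Option 2: no call needed at all if the HF_TOKEN environment variable (or Colab secret) is set,
# e.g. export HF_TOKEN=hf_*** in your shell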
- Login/authentication enhancements by @Wauplin in #1895
- Catch SecretNotFoundError in google colab login by @Wauplin in #1912
🏎️ Faster HfFileSystem
HfFileSystem is a pythonic, fsspec-compatible file interface to the Hugging Face Hub. The implementation has been greatly improved to optimize fs.find performance.
Here is a quick benchmark with the bigcode/the-stack-dedup dataset:
| | v0.19.4 | v0.20.0 |
|---|---|---|
| hffs.find("datasets/bigcode/the-stack-dedup", detail=False) | 46.2s | 1.63s |
| hffs.find("datasets/bigcode/the-stack-dedup", detail=True) | 47.3s | 24.2s |
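For context, the benchmarked calls go through the standard fsspec interface:
>>> from huggingface_hub import HfFileSystem
>>> hffs = HfFileSystem()
# List every file of the dataset repo, without per-file details
>>> files = hffs.find("datasets/bigcode/the-stack-dedup", detail=False)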
- Faster HfFileSystem.find by @mariosasko in #1809
- Faster HfFileSystem.glob by @lhoestq in #1815
- Fix common path in _ls_tree by @lhoestq in #1850
- Remove maxdepth param from HfFileSystem.glob by @mariosasko in #1875
- [HfFileSystem] Support quoted revisions in path by @lhoestq in #1888
- Deprecate HfApi.list_files_info by @mariosasko in #1910
🚪 Access requests API (gated repos)
Models and datasets can be gated to monitor who's accessing the data you are sharing. You can also require a manual approval of each access request. Access requests can now be managed programmatically using HfApi. This can be useful, for example, if you have advanced user screening requirements (compliance requirements, etc.) or if you want to condition access to a model on completing a payment flow.
Check out this guide to learn more about gated repos.
>>> from huggingface_hub import list_pending_access_requests, accept_access_request
# List pending requests
>>> requests = list_pending_access_requests("meta-llama/Llama-2-7b")
>>> requests
[
AccessRequest(
username='clem',
fullname='Clem 🤗',
email='***',
timestamp=datetime.datetime(2023, 11, 23, 18, 4, 53, 828000, tzinfo=datetime.timezone.utc),
status='pending',
fields=None,
),
...
]
# Accept Clem's request
>>> accept_access_request("meta-llama/Llama-2-7b", "clem")
🔍 Parse Safetensors metadata
Safetensors is a simple, fast and secure format to save tensors in a file. Its advantages make it the preferred format for hosting weights on the Hub. Thanks to its specification, it is possible to parse the file metadata on-the-fly. HfApi now provides get_safetensors_metadata, a helper to get safetensors metadata from a repo.
# Parse repo with single weights file
>>> from huggingface_hub import get_safetensors_metadata
>>> metadata = get_safetensors_metadata("bigscience/bloomz-560m")
>>> metadata
SafetensorsRepoMetadata(
metadata=None,
sharded=False,
weight_map={'h.0.input_layernorm.bias': 'model.safetensors', ...},
files_metadata={'model.safetensors': SafetensorsFileMetadata(...)}
)
>>> metadata.files_metadata["model.safetensors"].metadata
{'format': 'pt'}
Other improvements
List and filter collections
You can now list collections on the Hub. You can filter them to return only collections containing a given item, or created by a given author.
>>> from huggingface_hub import list_collections
>>> collections = list_collections(item="models/TheBloke/OpenHermes-2.5-Mistral-7B-GGUF", sort="trending", limit=5)
>>> for collection in collections:
... print(collection.slug)
teknium/quantized-models-6544690bb978e0b0f7328748
AmeerH/function-calling-65560a2565d7a6ef568527af
PostArchitekt/7bz-65479bb8c194936469697d8c
gnomealone/need-to-test-652007226c6ce4cdacf9c233
Crataco/favorite-7b-models-651944072b4fffcb41f8b568
- add list_collections endpoint, solves #1835 by @ceferisbarov in #1856
- fix list collections sort values by @Wauplin in #1867
- Warn about truncation when listing collections by @Wauplin in #1873
Respect .gitignore
upload_folder now respects .gitignore files!
Previously it was possible to filter which files should be uploaded from a folder using the allow_patterns
and ignore_patterns
parameters. This can now be done automatically by simply adding a .gitignore file to your repo.
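For example (hypothetical repo id and folder), files matching the patterns listed in the .gitignore file, such as *.log or checkpoints/, are now skipped automatically:
>>> from huggingface_hub import upload_folder
>>> upload_folder(repo_id="username/my-model", folder_path="./my-model")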
- Respect .gitignore file in commits by @Wauplin in #1868
- Remove respect_gitignore parameter by @Wauplin in #1876
Robust uploads
Uploading LFS files has also become more robust, with a retry mechanism if a transient error happens while uploading to S3.
Target language in InferenceClient.translation
InferenceClient.translation
now supports src_lang
/tgt_lang
for applicable models.
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> client.translation("My name is Sarah Jessica Parker but you can call me Jessica", model="facebook/mbart-large-50-many-to-many-mmt", src_lang="en_XX", tgt_lang="fr_XX")
"Mon nom est Sarah Jessica Parker mais vous pouvez m'appeler Jessica"
>>> client.translation("My name is Sarah Jessica Parker but you can call me Jessica", model="facebook/mbart-large-50-many-to-many-mmt", src_lang="en_XX", tgt_lang="es_XX")
'Mi nombre es Sarah Jessica Parker pero puedes llamarme Jessica'
- add language support to translation client, solves #1763 by @ceferisbarov in #1869
Support source in reported EvalResult
EvalResult
now supports source_name and source_link to provide a custom source for a reported result.
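A minimal sketch of reporting a result with a custom source (the metric/dataset fields follow the EvalResult reference; the exact parameter names for the source are assumptions to double-check against your version):
>>> from huggingface_hub import EvalResult
>>> result = EvalResult(
...     task_type="text-classification",
...     dataset_type="imdb",
...     dataset_name="IMDb",
...     metric_type="accuracy",
...     metric_value=0.91,
...     source_name="My internal benchmark",          # custom source name
...     source_url="https://example.com/benchmark",   # custom source link (assumed parameter name)
... )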
🛠️ Misc
Fetch all pull request refs with list_repo_refs.
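A short sketch, assuming the opt-in parameter is named include_pull_requests:
>>> from huggingface_hub import list_repo_refs
>>> refs = list_repo_refs("openai/whisper-large-v3", include_pull_requests=True)
# `refs.pull_requests` (assumed attribute) then lists the pull request refs alongside branches and tags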
Filter discussions when listing them with get_repo_discussions.
# List opened PR from "sanchit-gandhi" on model repo "openai/whisper-large-v3"
>>> from huggingface_hub import get_repo_discussions
>>> discussions = get_repo_discussions(
... repo_id="openai/whisper-large-v3",
... author="sanchit-gandhi",
... discussion_type="pull_request",
... discussion_status="open",
... )
- ✨ Add filters to HfApi.get_repo_discussions by @SBrandeis in #1845
New field createdAt
for ModelInfo
, DatasetInfo
and SpaceInfo
.
It's now possible to create an inference endpoint running on a custom docker image (typically: a TGI container).
# Start an Inference Endpoint running Zephyr-7b-beta on TGI
>>> from huggingface_hub import create_inference_endpoint
>>> endpoint = create_inference_endpoint(
... "aws-zephyr-7b-beta-0486",
... repository="HuggingFaceH4/zephyr-7b-beta",
... framework="pytorch",
... task="text-generation",
... accelerator="gpu",
... vendor="aws",
... region="us-east-1",
... type="protected",
... instance_size="medium",
... instance_type="g5.2xlarge",
... custom_image={
... "health_route": "/health",
... "env": {
... "MAX_BATCH_PREFILL_TOKENS": "2048",
... "MAX_INPUT_LENGTH": "1024",
... "MAX_TOTAL_TOKENS": "1512",
... "MODEL_ID": "/repository"
... },
... "url": "ghcr.io/huggingface/text-generation-inference:1.1.0",
... },
... )
Upload CLI: create branch when revision does not exist
🖥️ Environment variables
huggingface_hub.constants.HF_HOME
has been made a public constant (see reference).
Offline mode has become more consistent: if HF_HUB_OFFLINE is set, any HTTP call to the Hub will fail. The fallback mechanism in snapshot_download has been refactored to be aligned with the hf_hub_download workflow: if offline mode is activated (or a connection error happens) and the files are already in the cache, snapshot_download returns the corresponding snapshot directory.
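A minimal sketch of the offline behavior (assumes the model has already been downloaded at least once):
>>> import os
>>> os.environ["HF_HUB_OFFLINE"] = "1"   # must be set before importing huggingface_hub
>>> from huggingface_hub import snapshot_download
# No HTTP call is made: the cached snapshot directory is returned directly
>>> local_dir = snapshot_download("gpt2")   # e.g. ~/.cache/huggingface/hub/models--gpt2/snapshots/<commit>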
- Respect HF_HUB_OFFLINE for every http call by @Wauplin in #1899
- Improve snapshot_download offline mode by @Wauplin in #1913
The DO_NOT_TRACK environment variable is now respected to deactivate telemetry calls. This is similar to HF_HUB_DISABLE_TELEMETRY, but not specific to Hugging Face.
📚 Documentation
- Document more list repos behavior by @Wauplin in #1823
- [i18n-KO] 🌐 Translated git_vs_http.md to Korean by @heuristicwave in #1862
Doc fixes
v0.19.4 - Hot-fix: do not fail if pydantic install is corrupted
On Python 3.8, it is fairly easy to get a corrupted install of pydantic (more specifically, pydantic 2.x cannot run if tensorflow is installed because of an incompatible requirement on typing_extensions). Since pydantic is an optional dependency of huggingface_hub, we do not want to crash at huggingface_hub import time if the pydantic install is corrupted. However, this was the case because of how imports are made in huggingface_hub. This hot-fix release fixes the bug: if pydantic is not correctly installed, we only raise a warning and continue as if it were not installed at all.
Related PR: #1829
Full Changelog: v0.19.3...v0.19.4