
Add text classification to inference client #1606

Merged
2 changes: 1 addition & 1 deletion docs/source/en/guides/inference.md
Original file line number Diff line number Diff line change
@@ -139,7 +139,7 @@ has a simple API that supports the most common tasks. Here is a list of the curr
| | [Sentence Similarity](https://huggingface.co/tasks/sentence-similarity) | ✅ | [`~InferenceClient.sentence_similarity`] |
| | [Summarization](https://huggingface.co/tasks/summarization) | ✅ | [`~InferenceClient.summarization`] |
| | [Table Question Answering](https://huggingface.co/tasks/table-question-answering) | | |
| | [Text Classification](https://huggingface.co/tasks/text-classification) | | |
| | [Text Classification](https://huggingface.co/tasks/text-classification) | ✅ | [`~InferenceClient.text_classification`] |
| | [Text Generation](https://huggingface.co/tasks/text-generation) | ✅ | [`~InferenceClient.text_generation`] |
| | [Token Classification](https://huggingface.co/tasks/token-classification) | | |
| | [Translation](https://huggingface.co/tasks/translation) | | |
32 changes: 32 additions & 0 deletions src/huggingface_hub/inference/_client.py
@@ -765,6 +765,38 @@ def summarization(
response = self.post(json=payload, model=model, task="summarization")
return _bytes_to_dict(response)[0]["summary_text"]

def text_classification(self, text: str, *, model: Optional[str] = None) -> List[ClassificationOutput]:
"""
Perform text classification (e.g. sentiment analysis) on the given text.

Args:
text (`str`):
A string to be classified.
model (`str`, *optional*):
The model to use for the text classification task. Can be a model ID hosted on the Hugging Face Hub or a URL to
a deployed Inference Endpoint. If not provided, the default recommended text classification model will be used.
Defaults to None.

Returns:
`List[Dict]`: a list of dictionaries containing the predicted label and associated probability.

Raises:
[`InferenceTimeoutError`]:
If the model is unavailable or the request times out.
`HTTPError`:
If the request fails with an HTTP error status code other than HTTP 503.

Example:
```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> output = client.text_classification("I like you")
[{'label': 'POSITIVE', 'score': 0.9998695850372314}, {'label': 'NEGATIVE', 'score': 0.0001304351753788069}]
```
"""
response = self.post(json={"inputs": text}, model=model, task="text-classification")
return _bytes_to_list(response)[0]
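
The trailing `[0]` is there because the Inference API wraps the per-input results in an outer list (visible in the recorded cassette below, where the response body is a nested `[[...]]`). A minimal local sketch of that unwrapping, using a hypothetical raw response so it runs without any network call:

```python
import json

# Hypothetical raw bytes, shaped like the Inference API response for a
# single text input: the per-input results sit inside an outer list.
raw = b'[[{"label": "POSITIVE", "score": 0.9999}, {"label": "NEGATIVE", "score": 0.0001}]]'

# Unwrap the outer list, mirroring what _bytes_to_list(response)[0] does.
results = json.loads(raw)[0]
print(results[0]["label"])  # POSITIVE
```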

@overload
def text_generation( # type: ignore
self,
33 changes: 33 additions & 0 deletions src/huggingface_hub/inference/_generated/_async_client.py
@@ -772,6 +772,39 @@ async def summarization(
response = await self.post(json=payload, model=model, task="summarization")
return _bytes_to_dict(response)[0]["summary_text"]

async def text_classification(self, text: str, *, model: Optional[str] = None) -> List[ClassificationOutput]:
"""
Perform text classification (e.g. sentiment analysis) on the given text.

Args:
text (`str`):
A string to be classified.
model (`str`, *optional*):
The model to use for the text classification task. Can be a model ID hosted on the Hugging Face Hub or a URL to
a deployed Inference Endpoint. If not provided, the default recommended text classification model will be used.
Defaults to None.

Returns:
`List[Dict]`: a list of dictionaries containing the predicted label and associated probability.

Raises:
[`InferenceTimeoutError`]:
If the model is unavailable or the request times out.
`aiohttp.ClientResponseError`:
If the request fails with an HTTP error status code other than HTTP 503.

Example:
```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()
>>> output = await client.text_classification("I like you")
[{'label': 'POSITIVE', 'score': 0.9998695850372314}, {'label': 'NEGATIVE', 'score': 0.0001304351753788069}]
```
"""
response = await self.post(json={"inputs": text}, model=model, task="text-classification")
return _bytes_to_list(response)[0]
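
As the docstring notes, the async variant must run inside an event loop. A self-contained sketch of that pattern, using a stand-in coroutine that parses a hypothetical raw response locally (no network access or `AsyncInferenceClient` required), so the awaiting and unwrapping can be seen end to end:

```python
import asyncio
import json

async def classify(raw: bytes):
    # Stand-in for the awaited client call: parse the raw response bytes
    # and unwrap the outer list, mirroring _bytes_to_list(response)[0].
    return json.loads(raw)[0]

raw = b'[[{"label": "POSITIVE", "score": 0.9999}, {"label": "NEGATIVE", "score": 0.0001}]]'

# asyncio.run() drives the coroutine to completion from synchronous code.
output = asyncio.run(classify(raw))
print(output[0]["label"])  # POSITIVE
```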

@overload
async def text_generation( # type: ignore
self,
@@ -0,0 +1,48 @@
interactions:
- request:
body: '{"inputs": ["I like you"]}'
headers:
Accept:
- '*/*'
Accept-Encoding:
- gzip, deflate, br
Connection:
- keep-alive
Content-Length:
- '41'
Content-Type:
- application/json
X-Amzn-Trace-Id:
- b658f44b-c82c-4a0c-9fc1-c287ea0b66d3
user-agent:
- unknown/None; hf_hub/0.17.0.dev0; python/3.10.12
method: POST
uri: https://api-inference.huggingface.co/models/distilbert-base-uncased-finetuned-sst-2-english
response:
body:
string: '[[{"label":"POSITIVE","score":0.9998695850372314},{"label":"NEGATIVE","score":0.0001304351753788069}]]'
headers:
Connection:
- keep-alive
Content-Length:
- '204'
Content-Type:
- application/json
Date:
- Sun, 20 Aug 2023 11:48:55 GMT
access-control-allow-credentials:
- 'true'
vary:
- Origin, Access-Control-Request-Method, Access-Control-Request-Headers
x-compute-time:
- '0.033'
x-compute-type:
- cache
x-request-id:
- MiuTWky1u3OlV7JlitniT
x-sha:
- 3d65bad49c7ba6f71920504507a8927f4b9db6c0
status:
code: 200
message: OK
version: 1
8 changes: 8 additions & 0 deletions tests/test_inference_client.py
@@ -200,6 +200,14 @@ def test_summarization(self) -> None:
" surpassed the Washington Monument to become the tallest man-made structure in the world.",
)

def test_text_classification(self) -> None:
output = self.client.text_classification("I like you")
self.assertIsInstance(output, list)
self.assertEqual(len(output), 2)
for item in output:
self.assertIsInstance(item["score"], float)
self.assertIsInstance(item["label"], str)
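
Beyond the type checks above, a caller will typically want the top label. A small sketch over an output shaped like the one this test asserts (local data only, no client call):

```python
# Output shaped like InferenceClient.text_classification's return value.
output = [
    {"label": "POSITIVE", "score": 0.9998695850372314},
    {"label": "NEGATIVE", "score": 0.0001304351753788069},
]

# Pick the label with the highest score.
top = max(output, key=lambda item: item["score"])
print(top["label"])  # POSITIVE

# Scores behave like probabilities and sum to ~1 across the labels.
total = sum(item["score"] for item in output)
```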

def test_text_generation(self) -> None:
"""Tested separately in `test_inference_text_generation.py`."""
