Add text classification to inference client #1606

Merged
2 changes: 1 addition & 1 deletion docs/source/guides/inference.md
@@ -139,7 +139,7 @@ has a simple API that supports the most common tasks. Here is a list of the curr
| | [Sentence Similarity](https://huggingface.co/tasks/sentence-similarity) | ✅ | [`~InferenceClient.sentence_similarity`] |
| | [Summarization](https://huggingface.co/tasks/summarization) | ✅ | [`~InferenceClient.summarization`] |
| | [Table Question Answering](https://huggingface.co/tasks/table-question-answering) | | |
| | [Text Classification](https://huggingface.co/tasks/text-classification) | | |
| | [Text Classification](https://huggingface.co/tasks/text-classification) | ✅ | [`~InferenceClient.text_classification`] |
| | [Text Generation](https://huggingface.co/tasks/text-generation) | ✅ | [`~InferenceClient.text_generation`] |
| | [Token Classification](https://huggingface.co/tasks/token-classification) | | |
| | [Translation](https://huggingface.co/tasks/translation) | | |
48 changes: 48 additions & 0 deletions src/huggingface_hub/inference/_client.py
@@ -763,6 +763,54 @@ def summarization(
response = self.post(json=payload, model=model, task="summarization")
return _bytes_to_dict(response)[0]["summary_text"]

def text_classification(
self, text: List[str], *, parameters: Optional[Dict[str, Any]] = None, model: Optional[str] = None
) -> List[ClassificationOutput]:
"""
Perform text classification (e.g. sentiment analysis) on the given text.

Args:
text (`List[str]`):
A list of strings to be classified.
parameters (`Dict[str, Any]`, *optional*):
Additional parameters for the text classification task. Defaults to None. For more details about the available
parameters, please refer to [this page](https://huggingface.co/docs/api-inference/detailed_parameters#text-classification-task).
model (`str`, *optional*):
The model to use for the text classification task. Can be a model ID hosted on the Hugging Face Hub or a URL to
a deployed Inference Endpoint. If not provided, the default recommended text classification model will be used.
Defaults to None.

Returns:
`List[List[Dict]]`: one list per input string, each containing dictionaries with a predicted label and its associated probability.

Raises:
[`InferenceTimeoutError`]:
If the model is unavailable or the request times out.
`HTTPError`:
If the request fails with an HTTP error status code other than HTTP 503.

Example:
```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> output = client.text_classification(["I like you", "I love you"])
>>> output
[[{'label': 'POSITIVE', 'score': 0.9998695850372314},
{'label': 'NEGATIVE', 'score': 0.0001304351753788069}],
[{'label': 'POSITIVE', 'score': 0.9998656511306763},
{'label': 'NEGATIVE', 'score': 0.00013436275185085833}]]
```
"""
payload: Dict[str, Any] = {"inputs": text}
if parameters is not None:
payload["parameters"] = parameters
response = self.post(
json=payload,
model=model,
task="text-classification",
)
return _bytes_to_dict(response)

@overload
def text_generation( # type: ignore
self,
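The only task-specific part of the new method is the request body it builds, and that can be checked offline. A minimal sketch mirroring the payload-building lines in the diff (`build_payload` is a hypothetical helper for illustration, not part of the library):

```python
import json
from typing import Any, Dict, List, Optional


def build_payload(text: List[str], parameters: Optional[Dict[str, Any]] = None) -> Dict[str, Any]:
    # Mirrors the method body: "inputs" is always set, "parameters" is
    # attached only when explicitly provided.
    payload: Dict[str, Any] = {"inputs": text}
    if parameters is not None:
        payload["parameters"] = parameters
    return payload


print(json.dumps(build_payload(["I like you", "I love you."])))
# → {"inputs": ["I like you", "I love you."]}
```

This is the same body recorded in the test cassette further down, which is why omitting `parameters` keeps recorded fixtures stable.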
49 changes: 49 additions & 0 deletions src/huggingface_hub/inference/_generated/_async_client.py
@@ -770,6 +770,55 @@ async def summarization(
response = await self.post(json=payload, model=model, task="summarization")
return _bytes_to_dict(response)[0]["summary_text"]

async def text_classification(
self, text: List[str], *, parameters: Optional[Dict[str, Any]] = None, model: Optional[str] = None
) -> List[ClassificationOutput]:
"""
Perform text classification (e.g. sentiment analysis) on the given text.

Args:
text (`List[str]`):
A list of strings to be classified.
parameters (`Dict[str, Any]`, *optional*):
Additional parameters for the text classification task. Defaults to None. For more details about the available
parameters, please refer to [this page](https://huggingface.co/docs/api-inference/detailed_parameters#text-classification-task).
model (`str`, *optional*):
The model to use for the text classification task. Can be a model ID hosted on the Hugging Face Hub or a URL to
a deployed Inference Endpoint. If not provided, the default recommended text classification model will be used.
Defaults to None.

Returns:
`List[List[Dict]]`: one list per input string, each containing dictionaries with a predicted label and its associated probability.

Raises:
[`InferenceTimeoutError`]:
If the model is unavailable or the request times out.
`aiohttp.ClientResponseError`:
If the request fails with an HTTP error status code other than HTTP 503.

Example:
```py
# Must be run in an async context
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()
>>> output = await client.text_classification(["I like you", "I love you"])
>>> output
[[{'label': 'POSITIVE', 'score': 0.9998695850372314},
{'label': 'NEGATIVE', 'score': 0.0001304351753788069}],
[{'label': 'POSITIVE', 'score': 0.9998656511306763},
{'label': 'NEGATIVE', 'score': 0.00013436275185085833}]]
```
"""
payload: Dict[str, Any] = {"inputs": text}
if parameters is not None:
payload["parameters"] = parameters
response = await self.post(
json=payload,
model=model,
task="text-classification",
)
return _bytes_to_dict(response)

@overload
async def text_generation( # type: ignore
self,
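The async variant differs from the sync client only in the `await` on `post`. The calling pattern can be exercised without hitting the API; this sketch uses a hypothetical `FakeAsyncClient` stub that mimics the post/parse flow of the generated code above:

```python
import asyncio
import json
from typing import Any, Dict, List


class FakeAsyncClient:
    """Hypothetical stub mimicking the async post/parse flow (not the real client)."""

    async def post(self, payload: Dict[str, Any], task: str) -> bytes:
        # Pretend the Inference API returned one [{label, score}, ...] list per input.
        fake = [
            [{"label": "POSITIVE", "score": 0.99}, {"label": "NEGATIVE", "score": 0.01}]
            for _ in payload["inputs"]
        ]
        return json.dumps(fake).encode()

    async def text_classification(self, text: List[str]):
        response = await self.post({"inputs": text}, task="text-classification")
        return json.loads(response)


output = asyncio.run(FakeAsyncClient().text_classification(["I like you", "I love you"]))
print(output[0][0]["label"])
# → POSITIVE
```

The real `AsyncInferenceClient` must be awaited inside an async context, as the docstring example notes.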
@@ -0,0 +1,48 @@
interactions:
- request:
body: '{"inputs": ["I like you", "I love you."]}'
Suggested change
body: '{"inputs": ["I like you", "I love you."]}'
body: '{"inputs": ["I like you"]}'

should be only 1 sample now

headers:
Accept:
- '*/*'
Accept-Encoding:
- gzip, deflate, br
Connection:
- keep-alive
Content-Length:
- '41'
Content-Type:
- application/json
X-Amzn-Trace-Id:
- b658f44b-c82c-4a0c-9fc1-c287ea0b66d3
user-agent:
- unknown/None; hf_hub/0.17.0.dev0; python/3.10.12
method: POST
uri: https://api-inference.huggingface.co/models/distilbert-base-uncased-finetuned-sst-2-english
response:
body:
string: '[[{"label":"POSITIVE","score":0.9998695850372314},{"label":"NEGATIVE","score":0.0001304351753788069}],[{"label":"POSITIVE","score":0.9998705387115479},{"label":"NEGATIVE","score":0.00012938841246068478}]]'
Suggested change
string: '[[{"label":"POSITIVE","score":0.9998695850372314},{"label":"NEGATIVE","score":0.0001304351753788069}],[{"label":"POSITIVE","score":0.9998705387115479},{"label":"NEGATIVE","score":0.00012938841246068478}]]'
string: '[[{"label":"POSITIVE","score":0.9998695850372314},{"label":"NEGATIVE","score":0.0001304351753788069}]]'

... and therefore only 1 response

headers:
Connection:
- keep-alive
Content-Length:
- '204'
Content-Type:
- application/json
Date:
- Sun, 20 Aug 2023 11:48:55 GMT
access-control-allow-credentials:
- 'true'
vary:
- Origin, Access-Control-Request-Method, Access-Control-Request-Headers
x-compute-time:
- '0.033'
x-compute-type:
- cache
x-request-id:
- MiuTWky1u3OlV7JlitniT
x-sha:
- 3d65bad49c7ba6f71920504507a8927f4b9db6c0
status:
code: 200
message: OK
version: 1
8 changes: 8 additions & 0 deletions tests/test_inference_client.py
Expand Up @@ -200,6 +200,14 @@ def test_summarization(self) -> None:
" surpassed the Washington Monument to become the tallest man-made structure in the world.",
)

def test_text_classification(self) -> None:
output = self.client.text_classification(["I like you", "I love you."])
self.assertIsInstance(output, list)
self.assertEqual(len(output), 2)
for item in output:
self.assertIsInstance(item[0]["score"], float)
self.assertIsInstance(item[0]["label"], str)
Suggested change
self.assertIsInstance(item[0]["score"], float)
self.assertIsInstance(item[0]["label"], str)
self.assertIsInstance(item["score"], float)
self.assertIsInstance(item["label"], str)

1 level less


def test_text_generation(self) -> None:
"""Tested separately in `test_inference_text_generation.py`."""
