Ask_async docs #113

Merged · 20 commits · Oct 18, 2023
17 changes: 14 additions & 3 deletions .vscode/settings.json
```diff
@@ -3,6 +3,17 @@
     "editor.codeActionsOnSave": {
         "source.organizeImports": true
     },
-    "python.analysis.extraPaths": ["./generated"],
-    "python.formatting.provider": "black"
-}
+    "editor.rulers": [
+        100,
+        120
+    ],
+    "python.analysis.extraPaths": [
+        "./generated"
+    ],
+    "python.formatting.provider": "black",
+    "[python]": {
+        "editor.codeActionsOnSave": {
+            "source.organizeImports": false
+        }
+    }
+}
```
4 changes: 4 additions & 0 deletions docs/docs/building-applications/1-grabbing-images.md
> **sunildkumar (Member, Author):** I explicitly gave each of the docs in /building-applications a place in the sidebar list by prepending each filename with `{position}-` and adding `sidebar_position: {position}` where necessary.

```diff
@@ -1,3 +1,7 @@
+---
+sidebar_position: 1
+---
+
 # Grabbing Images

 Groundlight's SDK accepts images in many popular formats, including PIL, OpenCV, and numpy arrays.
```
docs/docs/building-applications/2-working-with-detectors.md (renamed from working-with-detectors.md; filename inferred from the link updates in building-applications.md below)

```diff
@@ -1,5 +1,5 @@
 ---
-sidebar_position: 3
+sidebar_position: 2
 ---

 # Working with Detectors
```
docs/docs/building-applications/3-managing-confidence.md (renamed from managing-confidence.md; filename inferred from the link updates below)

```diff
@@ -1,3 +1,6 @@
+---
+sidebar_position: 3
+---
 # Confidence Levels

 Groundlight gives you a simple way to control the trade-off of latency against accuracy. The longer you can wait for an answer to your image query, the better accuracy you can get. In particular, if the ML models are unsure of the best response, they will escalate the image query to more intensive analysis with more complex models and real-time human monitors as needed. Your code can easily wait for this delayed response. Either way, these new results are automatically trained into your models so your next queries will get better results faster.
```
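As an editorial illustration of the trade-off described above (a sketch assuming the SDK's documented `confidence_threshold` option on detector creation and the `wait` argument to `submit_image_query`; verify the exact signatures against the API reference):

```python notest
from groundlight import Groundlight
from PIL import Image

gl = Groundlight()

# A higher confidence_threshold trades longer latency for better accuracy.
detector = gl.get_or_create_detector(
    name="your_detector_name",
    query="your_query",
    confidence_threshold=0.9,  # escalate until the answer is at least 90% confident
)

image = Image.open("/path/to/your/image.jpg")
# wait bounds (in seconds) how long the call may block while the query escalates.
image_query = gl.submit_image_query(detector=detector, image=image, wait=30)
print(f"{image_query.result.label} (confidence: {image_query.result.confidence})")
```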
docs/docs/building-applications/4-handling-errors.md (renamed from handling-errors.md; filename inferred from the link updates below)

```diff
@@ -1,3 +1,7 @@
+---
+sidebar_position: 4
+---
+
 # Handling Server Errors

 When building applications with the Groundlight SDK, you may encounter server errors during API calls. This page covers how to handle such errors and build robust code that can gracefully handle exceptions.
```
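As an editorial sketch of the pattern this page describes (the backoff loop below is illustrative and not part of the SDK; catching bare `Exception` is a placeholder for the specific error types your application cares about):

```python notest
from time import sleep

from groundlight import Groundlight
from PIL import Image

gl = Groundlight()
detector = gl.get_or_create_detector(name="your_detector_name", query="your_query")
image = Image.open("/path/to/your/image.jpg")

max_attempts = 5
for attempt in range(max_attempts):
    try:
        image_query = gl.submit_image_query(detector=detector, image=image)
        break
    except Exception:
        if attempt == max_attempts - 1:
            raise  # give up after the final attempt
        sleep(2**attempt)  # exponential backoff: 1s, 2s, 4s, 8s between attempts
```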
70 changes: 70 additions & 0 deletions docs/docs/building-applications/5-async-queries.md
---
sidebar_position: 5
---

# Asynchronous Queries

Groundlight provides a simple interface for submitting asynchronous queries. This is useful when the thread or machine submitting an image query is not the same thread or machine that will retrieve and use the result. For example, you might have a forward-deployed robot or camera that submits image queries to Groundlight, and a separate server that retrieves the results and takes action based on them. We will refer to these two machines as the **submitting machine** and the **retrieving machine**.

## Setting Up the Submitting Machine
On the **submitting machine**, you will need to install the Groundlight Python SDK. You can then submit image queries asynchronously using the `ask_async` interface (read the full documentation [here](pathname:///python-sdk/api-reference-docs/#groundlight.client.Groundlight.ask_async)). `ask_async` returns as soon as the query is submitted; it does not wait for an answer to become available, which minimizes the time your program spends interacting with Groundlight. As a result, the `ImageQuery` object returned by `ask_async` lacks a `result` (its `result` field will be `None`). That is fine for this use case, since the **submitting machine** does not consume the result. Instead, it just needs to communicate the `ImageQuery.id`s to the **retrieving machine** - this might be done via a database, a message queue, or some other mechanism. For this example, we assume you are using a database and save each ID via `db.save(image_query.id)`; a sketch of such a `db` helper appears after the example below.

```python notest
from time import sleep

import cv2
from groundlight import Groundlight

gl = Groundlight()
detector = gl.get_or_create_detector(name="your_detector_name", query="your_query")

cam = cv2.VideoCapture(0)  # Initialize camera (0 is the default index)

try:
    while True:  # Submit a frame every 10 seconds; interrupt the process (e.g. Ctrl+C) to stop
        _, image = cam.read()  # Capture one frame from the camera
        image_query = gl.ask_async(detector=detector, image=image)  # Submit the frame to Groundlight
        db.save(image_query.id)  # Save the image_query.id to a database for the retrieving machine to use
        sleep(10)  # Sleep for 10 seconds before submitting the next query
finally:
    cam.release()  # Release the camera
```
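The `db` handle in the examples on this page is deliberately abstract. As a hedged illustration only (not part of the Groundlight SDK, and a single-machine SQLite stand-in for what would really be a shared database or message queue), it might look like:

```python notest
import sqlite3
from typing import Optional

class DB:
    """Toy queue of image query IDs. In production this would be a shared
    database or message queue reachable from both machines."""

    def __init__(self, path: str = "image_queries.db"):
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS queries (id TEXT PRIMARY KEY, processed INTEGER DEFAULT 0)"
        )

    def save(self, image_query_id: str) -> None:
        # Record a newly submitted image query for later retrieval
        self.conn.execute("INSERT OR IGNORE INTO queries (id) VALUES (?)", (image_query_id,))
        self.conn.commit()

    def get_next_image_query_id(self) -> Optional[str]:
        # Return the next unprocessed ID, or None once everything is processed
        row = self.conn.execute("SELECT id FROM queries WHERE processed = 0 LIMIT 1").fetchone()
        if row is None:
            return None
        self.conn.execute("UPDATE queries SET processed = 1 WHERE id = ?", (row[0],))
        self.conn.commit()
        return row[0]

db = DB()
```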

## Setting Up the Retrieving Machine
On the **retrieving machine**, you will need to install the Groundlight Python SDK. You can then retrieve the results of image queries submitted by another machine using `get_image_query`. The **retrieving machine** can then use the `ImageQuery.result` to take whatever action your application calls for. For this example, we assume your application looks up the next image query to process from a database via `db.get_next_image_query_id()` (see the sketch above), and that this function returns `None` once all `ImageQuery`s are processed.
```python notest
from groundlight import Groundlight

gl = Groundlight()

image_query_id = db.get_next_image_query_id()

while image_query_id is not None:
    image_query = gl.get_image_query(id=image_query_id)  # retrieve the image query from Groundlight
    result = image_query.result

    # take action based on the result of the image query
    if result.label == 'YES':
        pass  # TODO: do something based on your application
    elif result.label == 'NO':
        pass  # TODO: do something based on your application
    elif result.label == 'UNCLEAR':
        pass  # TODO: do something based on your application

    # update image_query_id for the next iteration of the loop
    image_query_id = db.get_next_image_query_id()
```

## Important Considerations
When you submit an image query asynchronously, ML prediction on your query is **not** instant, so attempting to retrieve the result immediately after submitting will likely find the query still processing (an empty or `UNCLEAR` result). If your code needs a `result` synchronously, we recommend using one of our methods with a polling mechanism to retrieve it; a sketch of such a loop follows the example below. You can see all of the available interfaces in the documentation [here](pathname:///python-sdk/api-reference-docs/#groundlight.client.Groundlight).

```python notest
from groundlight import Groundlight
from PIL import Image

gl = Groundlight()
detector = gl.get_or_create_detector(name="your_detector_name", query="your_query")
image = Image.open("/path/to/your/image.jpg")
image_query = gl.ask_async(detector=detector, image=image)  # Submit the image to Groundlight
result = image_query.result  # Likely None (or UNCLEAR) while Groundlight is still processing the query
```
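As an illustration of the polling recommendation above (a sketch only; prefer the SDK's built-in waiting helpers linked in the API reference, and treat the timeout and interval below as arbitrary choices):

```python notest
from time import sleep, time

from groundlight import Groundlight

gl = Groundlight()

def poll_for_result(image_query_id: str, timeout_sec: float = 60.0, poll_interval_sec: float = 2.0):
    """Re-fetch an image query until its result is populated or we give up."""
    deadline = time() + timeout_sec
    while time() < deadline:
        image_query = gl.get_image_query(id=image_query_id)
        if image_query.result is not None:  # result stays None while the query is processing
            return image_query.result
        sleep(poll_interval_sec)
    raise TimeoutError(f"No result for {image_query_id} within {timeout_sec} seconds")
```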


docs/docs/building-applications/6-edge.md (renamed from edge.md; filename inferred from the link updates in building-applications.md below)

```diff
@@ -1,4 +1,8 @@
-# Using Groundlight on the edge
+---
+sidebar_position: 6
+---
+
+# Using Groundlight on the Edge

 If your account has access to edge models, you can download and install them to your edge devices.
 This allows you to run your model evaluations on the edge, reducing latency, cost, network bandwidth, and energy.
```
docs/docs/building-applications/7-industrial.md (filename inferred from the link updates below)

```diff
@@ -1,3 +1,7 @@
+---
+sidebar_position: 7
+---
+
 # Industrial and Manufacturing Applications

 Modern natural language-based computer vision is transforming industrial and manufacturing applications by enabling more intuitive interaction with automation systems. Groundlight offers cutting-edge computer vision technology that can be seamlessly integrated into various industrial processes, enhancing efficiency, productivity, and quality control.
```
13 changes: 8 additions & 5 deletions docs/docs/building-applications/building-applications.md
```diff
@@ -37,11 +37,14 @@ Groundlight can be used to [apply modern natural-language-based computer vision
 ## Further Reading

 For more in-depth guides on various aspects of building applications with Groundlight, check out the following pages:

-- [Working with Detectors](working-with-detectors.md): Learn how to create, configure, and use detectors in your Groundlight-powered applications.
-- [Using Groundlight on the edge](edge.md): Discover how to deploy Groundlight in edge computing environments for improved performance and reduced latency.
-- [Handling HTTP errors](handling-errors.md): Understand how to handle and troubleshoot HTTP errors that may occur while using Groundlight.
+- **[Grabbing images](1-grabbing-images.md)**: Understand the intricacies of how to submit images from various input sources to Groundlight.
+- **[Working with detectors](2-working-with-detectors.md)**: Learn how to create, configure, and use detectors in your Groundlight-powered applications.
+- **[Confidence levels](3-managing-confidence.md)**: Master how to control the trade-off of latency against accuracy by configuring the desired confidence level for your detectors.
+- **[Handling server errors](4-handling-errors.md)**: Understand how to handle and troubleshoot HTTP errors that may occur while using Groundlight.
+- **[Asynchronous queries](5-async-queries.md)**: Groundlight makes it easy to submit asynchronous queries. Learn how to submit queries asynchronously and retrieve the results later.
+- **[Using Groundlight on the edge](6-edge.md)**: Discover how to deploy Groundlight in edge computing environments for improved performance and reduced latency.
+- **[Industrial applications](7-industrial.md)**: Learn how to apply modern natural-language-based computer vision to your industrial and manufacturing applications.

 By exploring these resources and sample applications, you'll be well on your way to building powerful visual applications using Groundlight's computer vision and natural language capabilities.
```
2 changes: 1 addition & 1 deletion docs/docs/getting-started/getting-started.mdx
```diff
@@ -18,7 +18,7 @@ _Note: The SDK is currently in "beta" phase. Interfaces are subject to change in

 ### How does it work?

-Your images are first analyzed by machine learning (ML) models which are automatically trained on your data. If those models have high enough [confidence](docs/building-applications/managing-confidence), that's your answer. But if the models are unsure, then the images are progressively escalated to more resource-intensive analysis methods up to real-time human review. So what you get is a computer vision system that starts working right away without even needing to first gather and label a dataset. At first it will operate with high latency, because people need to review the image queries. But over time, the ML systems will learn and improve so queries come back faster with higher confidence.
+Your images are first analyzed by machine learning (ML) models which are automatically trained on your data. If those models have high enough [confidence](docs/building-applications/3-managing-confidence.md), that's your answer. But if the models are unsure, then the images are progressively escalated to more resource-intensive analysis methods up to real-time human review. So what you get is a computer vision system that starts working right away without even needing to first gather and label a dataset. At first it will operate with high latency, because people need to review the image queries. But over time, the ML systems will learn and improve so queries come back faster with higher confidence.

 ### Escalation Technology
```
4 changes: 3 additions & 1 deletion generated/docs/ImageQueriesApi.md
```diff
@@ -207,6 +207,7 @@ with openapi_client.ApiClient(configuration) as api_client:
     detector_id = "detector_id_example" # str | Choose a detector by its ID.
     human_review = "human_review_example" # str | If set to `DEFAULT`, use the regular escalation logic (i.e., send the image query for human review if the ML model is not confident). If set to `ALWAYS`, always send the image query for human review even if the ML model is confident. If set to `NEVER`, never send the image query for human review even if the ML model is not confident. (optional)
     patience_time = 3.14 # float | How long to wait for a confident response. (optional)
+    want_async = "want_async_example" # str | If \"true\" then submitting an image query returns immediately without a result. The result will be computed asynchronously and can be retrieved later. (optional)
     body = open('@path/to/image.jpeg', 'rb') # file_type | (optional)

     # example passing only required values which don't have defaults set
@@ -219,7 +220,7 @@
     # example passing only required values which don't have defaults set
     # and optional values
     try:
-        api_response = api_instance.submit_image_query(detector_id, human_review=human_review, patience_time=patience_time, body=body)
+        api_response = api_instance.submit_image_query(detector_id, human_review=human_review, patience_time=patience_time, want_async=want_async, body=body)
         pprint(api_response)
     except openapi_client.ApiException as e:
         print("Exception when calling ImageQueriesApi->submit_image_query: %s\n" % e)
@@ -233,6 +234,7 @@ Name | Type | Description | Notes
 **detector_id** | **str**| Choose a detector by its ID. |
 **human_review** | **str**| If set to `DEFAULT`, use the regular escalation logic (i.e., send the image query for human review if the ML model is not confident). If set to `ALWAYS`, always send the image query for human review even if the ML model is confident. If set to `NEVER`, never send the image query for human review even if the ML model is not confident. | [optional]
 **patience_time** | **float**| How long to wait for a confident response. | [optional]
+**want_async** | **str**| If \"true\" then submitting an image query returns immediately without a result. The result will be computed asynchronously and can be retrieved later. | [optional]
 **body** | **file_type**| | [optional]

 ### Return type
```
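Editorial aside: the SDK's `ask_async` presumably sets this flag under the hood; a hypothetical raw-client call, reusing the names from the example above:

```python
# Hypothetical: async submission through the generated client. Note that
# want_async is a string flag ("true"), not a boolean, per the table above.
api_response = api_instance.submit_image_query(detector_id, want_async="true")
pprint(api_response)  # its result will be unset until processing completes
```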
2 changes: 1 addition & 1 deletion generated/docs/ImageQuery.md
```diff
@@ -11,7 +11,7 @@ Name | Type | Description | Notes
 **query** | **str** | A question about the image. | [readonly]
 **detector_id** | **str** | Which detector was used on this image query? | [readonly]
 **result_type** | **bool, date, datetime, dict, float, int, list, str, none_type** | What type of result are we returning? | [readonly]
-**result** | **bool, date, datetime, dict, float, int, list, str, none_type** | | [readonly]
+**result** | **bool, date, datetime, dict, float, int, list, str, none_type** | | [optional] [readonly]
 **any string name** | **bool, date, datetime, dict, float, int, list, str, none_type** | any string name can be used but the value must be the correct type | [optional]

 [[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
```
4 changes: 2 additions & 2 deletions generated/model.py
```diff
@@ -1,6 +1,6 @@
 # generated by datamodel-codegen:
 #   filename:  public-api.yaml
-#   timestamp: 2023-08-09T20:46:11+00:00
+#   timestamp: 2023-10-16T23:29:00+00:00

 from __future__ import annotations

@@ -69,7 +69,7 @@ class ImageQuery(BaseModel):
     query: str = Field(..., description="A question about the image.")
     detector_id: str = Field(..., description="Which detector was used on this image query?")
     result_type: ResultTypeEnum = Field(..., description="What type of result are we returning?")
-    result: ClassificationResult
+    result: Optional[ClassificationResult] = None


 class PaginatedDetectorList(BaseModel):
```
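With `result` now `Optional`, downstream code should guard for `None` before reading a label; a minimal sketch (the ID below is a placeholder):

```python notest
from groundlight import Groundlight

gl = Groundlight()
image_query = gl.get_image_query(id="iq_your_image_query_id")  # placeholder ID

if image_query.result is None:
    print("Still processing; poll again later.")
else:
    print(f"Label: {image_query.result.label}")
```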
5 changes: 5 additions & 0 deletions generated/openapi_client/api/image_queries_api.py
```diff
@@ -133,6 +133,7 @@ def __init__(self, api_client=None):
                 "detector_id",
                 "human_review",
                 "patience_time",
+                "want_async",
                 "body",
             ],
             "required": [
@@ -149,17 +150,20 @@
                 "detector_id": (str,),
                 "human_review": (str,),
                 "patience_time": (float,),
+                "want_async": (str,),
                 "body": (file_type,),
             },
             "attribute_map": {
                 "detector_id": "detector_id",
                 "human_review": "human_review",
                 "patience_time": "patience_time",
+                "want_async": "want_async",
             },
             "location_map": {
                 "detector_id": "query",
                 "human_review": "query",
                 "patience_time": "query",
+                "want_async": "query",
                 "body": "body",
             },
             "collection_format_map": {},
@@ -299,6 +303,7 @@ def submit_image_query(self, detector_id, **kwargs):
         Keyword Args:
             human_review (str): If set to `DEFAULT`, use the regular escalation logic (i.e., send the image query for human review if the ML model is not confident). If set to `ALWAYS`, always send the image query for human review even if the ML model is confident. If set to `NEVER`, never send the image query for human review even if the ML model is not confident. . [optional]
             patience_time (float): How long to wait for a confident response.. [optional]
+            want_async (str): If \"true\" then submitting an image query returns immediately without a result. The result will be computed asynchronously and can be retrieved later.. [optional]
             body (file_type): [optional]
             _return_http_data_only (bool): response data without head status
                 code and headers. Default is True.
```
8 changes: 3 additions & 5 deletions generated/openapi_client/model/image_query.py
```diff
@@ -168,9 +168,7 @@ def discriminator():

     @classmethod
     @convert_js_args_to_python_args
-    def _from_openapi_data(
-        cls, id, type, created_at, query, detector_id, result_type, result, *args, **kwargs
-    ):  # noqa: E501
+    def _from_openapi_data(cls, id, type, created_at, query, detector_id, result_type, *args, **kwargs):  # noqa: E501
         """ImageQuery - a model defined in OpenAPI

         Args:
@@ -180,7 +178,6 @@ def _from_openapi_data(
             query (str): A question about the image.
             detector_id (str): Which detector was used on this image query?
             result_type (bool, date, datetime, dict, float, int, list, str, none_type): What type of result are we returning?
-            result (bool, date, datetime, dict, float, int, list, str, none_type):

         Keyword Args:
             _check_type (bool): if True, values for parameters in openapi_types
@@ -213,6 +210,7 @@
                 Animal class but this time we won't travel
                 through its discriminator because we passed in
                 _visited_composed_classes = (Animal,)
+            result (bool, date, datetime, dict, float, int, list, str, none_type): [optional]  # noqa: E501
         """

         _check_type = kwargs.pop("_check_type", True)
@@ -247,7 +245,6 @@
         self.query = query
         self.detector_id = detector_id
         self.result_type = result_type
-        self.result = result
         for var_name, var_value in kwargs.items():
             if (
                 var_name not in self.attribute_map
@@ -306,6 +303,7 @@ def __init__(self, *args, **kwargs):  # noqa: E501
                 Animal class but this time we won't travel
                 through its discriminator because we passed in
                 _visited_composed_classes = (Animal,)
+            result (bool, date, datetime, dict, float, int, list, str, none_type): [optional]  # noqa: E501
         """

         _check_type = kwargs.pop("_check_type", True)
```