update type hints and examples #15

Merged
merged 6 commits on Nov 4, 2023
3 changes: 2 additions & 1 deletion .gitignore
@@ -2,4 +2,5 @@

/target

.venv/
.venv/
__pycache__/
12 changes: 12 additions & 0 deletions examples/json_request/README.md
@@ -0,0 +1,12 @@
# Send a JSON request

This example shows how to send a JSON request to the prediction endpoint.
The prediction function sleeps synchronously for 10 seconds,
then prints the request JSON body to the console
before returning the same body.

Send a POST request using `curl`:

```
curl -X POST http://localhost:4000/predict -H 'Content-Type: application/json' -d '{"key":"value"}'
```
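
The same request could also be sent from Python, for example with the `requests` package (a minimal sketch; `requests` is an assumed extra dependency, and the server is assumed to be running locally on port 4000):

```python
# Hypothetical client, equivalent to the curl command above.
import requests

resp = requests.post(
    "http://localhost:4000/predict",
    json={"key": "value"},  # sent as the JSON request body
)
print(resp.status_code, resp.text)
```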
@@ -1,16 +1,18 @@
import json
import time
import os
from meteorite import Meteorite

app = Meteorite()
app.set_webhook_url("https://envzvlfwlg78.x.pipedream.net")


@app.predict
def main(data):
    data = json.loads(data)
    print("Sleeping for 10 seconds")
    time.sleep(10)
    print(data["key"])
    print(data)
    return data


app.start(port=5001)
app.start()
6 changes: 0 additions & 6 deletions examples/json_requests/Dockerfile

This file was deleted.

7 changes: 0 additions & 7 deletions examples/json_requests/README.md

This file was deleted.

8 changes: 6 additions & 2 deletions examples/plaintext_request/README.md
@@ -1,6 +1,10 @@
# Send a plain text request to your model
# Send a plain text request

### Sending the POST request via CURL
This example shows how to send a plain text request to the prediction endpoint.
The prediction function will decode the request body as a string with `utf-8`
before returning the same body.

Sending a POST request via `curl`:

```
curl -X POST http://localhost:4000/predict -H 'Content-Type: text/plain' -d 'hello'
```
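
A plain text request could likewise be sent from Python (a sketch assuming the `requests` package is installed and the server is listening locally on port 4000):

```python
# Hypothetical client, mirroring the curl command above.
import requests

resp = requests.post(
    "http://localhost:4000/predict",
    data="hello",  # raw plain text body
    headers={"Content-Type": "text/plain"},
)
print(resp.text)  # the prediction function echoes the decoded body
```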
1 change: 1 addition & 0 deletions examples/plaintext_request/main.py
@@ -6,6 +6,7 @@
@app.predict
def hello(data):
    body = data.decode("utf-8")
    print(body)
    return body


31 changes: 31 additions & 0 deletions examples/webhook/README.md
@@ -0,0 +1,31 @@
# Set up a webhook

This example shows how to set up a webhook to receive the result of a prediction request.
When the meteorite inference server receives a request,
the request is queued and the server immediately returns a 200 response.
The request is then processed by the inference function, which returns an output.
This output is sent to the webhook server as a POST request.

Install dependencies.

```shell
pip install meteorite fastapi uvicorn
```

Run the meteorite inference server, which listens on port 4000.

```shell
python inference.py
```

In another terminal, run the webhook server, which listens on port 8000.

```shell
uvicorn webhook:app --reload
```

In another terminal, send a POST request to the inference server.

```shell
curl -X POST http://localhost:4000/predict -H 'Content-Type: application/json' -d '{"key": "value"}'
```
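
As a rough client-side check of the asynchronous flow (a sketch assuming the `requests` package is also installed), the same request can be sent from Python; the call returns immediately, while the prediction result is delivered to the webhook server separately:

```python
# Hypothetical client-side check of the asynchronous webhook flow.
import requests

resp = requests.post(
    "http://localhost:4000/predict",
    json={"key": "value"},
)
# The inference server queues the request and responds right away.
print(resp.status_code)
# The prediction result is then POSTed to the webhook server,
# which prints it in its own terminal.
```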
16 changes: 16 additions & 0 deletions examples/webhook/inference.py
@@ -0,0 +1,16 @@
import os
from meteorite import Meteorite

app = Meteorite()

webhook_url = "http://localhost:4400"
app.set_webhook_url(webhook_url)


@app.predict
def main(data):
    print(data)
    return "my prediction is 1"


app.start()
15 changes: 15 additions & 0 deletions examples/webhook/webhook.py
@@ -0,0 +1,15 @@
from typing import Union

from fastapi import FastAPI
from starlette.requests import Request
from starlette.responses import Response


app = FastAPI()


@app.post("/", response_class=Response)
async def handle(request: Request):
    body = await request.body()
    print(body.decode("utf-8"))
    return "OK"
15 changes: 12 additions & 3 deletions meteorite.pyi
@@ -1,6 +1,5 @@
import typing


class Meteorite:
    """
    A fast and simple server app for machine learning models.
@@ -10,8 +9,9 @@ class Meteorite:
        """
        Initializes the Meteorite server app
        """

    def predict(self, wraps: typing.Callable[[bytes], typing.Union[str, dict[str, any]]]) -> None:
    def predict(
        self, wraps: typing.Callable[[bytes], typing.Union[str, dict[str, typing.Any]]]
    ) -> None:
        """
        Decorator to wrap a model inference function.
        The inference function should take in a bytes object and return a string or a dictionary.
@@ -24,7 +24,16 @@ class Meteorite:
        >>> def infer(data: bytes) -> str:
        >>>     return "Hello World"
        """
    def set_webhook_url(self, webhook_url: str) -> None:
        """
        Sets the webhook url for the `predict` inference endpoint.
        After the model inference function finishes processing one inference request,
        the object returned by the function will be sent to the webhook url as a POST request.

        :param webhook_url: webhook url

        >>> app.set_webhook_url("https://example.com")
        """
    def start(self, port: int = 4000) -> None:
        """
        Starts the server app
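
Taken together, the updated stub suggests a typed usage sketch along these lines (the webhook URL and return value are placeholders drawn from the docstring examples, not a prescribed setup):

```python
import typing

from meteorite import Meteorite

app = Meteorite()
app.set_webhook_url("https://example.com")  # placeholder URL, as in the stub's docstring


@app.predict
def infer(data: bytes) -> typing.Union[str, dict[str, typing.Any]]:
    # The inference function receives the raw request body as bytes
    # and returns either a string or a dictionary.
    return "Hello World"


app.start(port=4000)  # 4000 is the default port declared in the stub
```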