Server Sent Events #20
Any update on this? It would be really helpful for building realtime applications.
Not yet - though I'd be happy to help guide anyone who's interested in taking on a pull request for it.
I'm trying to put together a simple working example for server-sent events in Starlette. This is my code:

```python
from asyncio.queues import Queue

import uvicorn
from starlette.applications import Starlette
from starlette.requests import Request
from starlette.responses import JSONResponse, StreamingResponse


class SSE:
    def __init__(self, data, event=None, event_id=None, retry=None):
        self.data = data
        self.event = event
        self.id = event_id
        self.retry = retry

    def encode(self):
        message = f"data: {self.data}"
        if self.event is not None:
            message += f"\nevent: {self.event}"
        if self.id is not None:
            message += f"\nid: {self.id}"
        if self.retry is not None:
            message += f"\nretry: {self.retry}"
        message += "\r\n\r\n"
        return message.encode("utf-8")


app = Starlette(debug=True)
app.queues = []


@app.route("/subscribe", methods=["GET"])
async def subscribe(request: Request):
    async def event_publisher():
        while True:
            event = await queue.get()
            yield event.encode()

    queue = Queue()
    app.queues.append(queue)
    headers = {
        "Content-Type": "text/event-stream",
        "Cache-Control": "no-cache",
        "Connection": "keep-alive",
        "X-Accel-Buffering": "no",
    }
    return StreamingResponse(content=event_publisher(), headers=headers)


@app.route("/publish", methods=["POST"])
async def publish(request: Request):
    payload = await request.json()
    data = payload["data"]
    for queue in app.queues:
        event = SSE(data)
        await queue.put(event)
    return JSONResponse({"message": "ok"})


if __name__ == "__main__":
    uvicorn.run("__main__:app", host="0.0.0.0", port=4321, reload=True)
```

Obviously it's a naive implementation at the moment, but the main thing is that whenever I publish a new event it doesn't get broadcast to the subscribers. When debugging I can see that the event is added to the queue, and the generator can also fetch it from the queue, but I never see it streamed to the client.
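As a side note on the wire format: the event-stream spec allows the fields in any order, requires one `data:` line per line of a multi-line payload, and dispatches an event on the blank line. A minimal sketch of a spec-shaped encoder (`encode_sse` is a hypothetical helper, not part of the code above):

```python
def encode_sse(data, event=None, event_id=None, retry=None):
    # Hypothetical standalone encoder following the EventSource wire format:
    # "field: value" lines, then a blank line to terminate the event.
    lines = []
    if event is not None:
        lines.append(f"event: {event}")
    if event_id is not None:
        lines.append(f"id: {event_id}")
    if retry is not None:
        lines.append(f"retry: {retry}")
    # Per the spec, each line of a multi-line payload gets its own "data:" field.
    for part in str(data).splitlines():
        lines.append(f"data: {part}")
    return ("\n".join(lines) + "\n\n").encode("utf-8")
```

For example, `encode_sse("a\nb", event="tick")` produces `event: tick`, then two `data:` lines, then the blank-line terminator.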
Oh, to be honest it seems like it's working. I tried it first in Firefox, which by default tries to download the stream as a file, but in Chrome it works just fine.
Interesting, @Kamforka do you have the frontend code as well? It should definitely be supported on Firefox.
@jacopofar There is no frontend code; usually I just navigate to the URL directly. This is how it looks in Chrome (pretty convenient for debugging):
So at the moment I created these POC classes to enable event-source responses:

```python
class SSE:
    def __init__(self, data, event=None, event_id=None, retry=None):
        self.data = data
        self.event = event
        self.id = event_id
        self.retry = retry

    def encode(self, charset="utf-8"):
        message = f"data: {self.data}"
        if self.event is not None:
            message += f"\nevent: {self.event}"
        if self.id is not None:
            message += f"\nid: {self.id}"
        if self.retry is not None:
            message += f"\nretry: {self.retry}"
        message += "\r\n\r\n"
        return message.encode(charset)


class EventSourceResponse(StreamingResponse):
    def __init__(
        self, content, headers={}, media_type=None, status_code=200, background=None,
    ):
        default_headers = {
            **headers,
            "Content-Type": "text/event-stream",
            "Cache-Control": "no-cache",
            "Connection": "keep-alive",
        }
        super().__init__(
            content=content,
            status_code=status_code,
            headers=default_headers,
            media_type=media_type,
            background=background,
        )

    async def __call__(self, scope, receive, send) -> None:
        await send(
            {
                "type": "http.response.start",
                "status": self.status_code,
                "headers": self.raw_headers,
            }
        )
        async for event in self.body_iterator:
            if not isinstance(event, SSE):
                raise Exception("Event source body must be an SSE instance")
            await send(
                {
                    "type": "http.response.body",
                    "body": event.encode(self.charset),
                    "more_body": True,
                }
            )
        await send({"type": "http.response.body", "body": b"", "more_body": False})
        if self.background is not None:
            await self.background()
```

It works just fine, but I identified two pain points: the publisher blocks on the queue indefinitely, so client disconnects go unnoticed, and the open stream keeps the server from shutting down cleanly. As a workaround I poll the queue with a timeout and check the connection in between:

```python
async def event_publisher():
    while True:
        if not await request.is_disconnected():
            try:
                event = await asyncio.wait_for(queue.get(), 1.0)
            except asyncio.TimeoutError:
                continue
            yield event
        else:
            return
```

Any thoughts on the most idiomatic way to overcome these issues within Starlette?
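The timeout-polling idea in that workaround can be demonstrated in isolation. A minimal sketch, with a hypothetical `drain_with_timeout` helper and an `asyncio.Event` standing in for the disconnect check:

```python
import asyncio

async def drain_with_timeout(queue, stop, timeout=0.01):
    # Poll the queue with a short timeout so the loop can periodically
    # re-check an external stop condition (request.is_disconnected()
    # plays that role in the SSE code above).
    items = []
    while not stop.is_set():
        try:
            items.append(await asyncio.wait_for(queue.get(), timeout))
        except asyncio.TimeoutError:
            continue
    return items

async def main():
    queue, stop = asyncio.Queue(), asyncio.Event()
    await queue.put("tick")
    # Simulate a disconnect shortly after the queued item is consumed.
    asyncio.get_running_loop().call_later(0.05, stop.set)
    return await drain_with_timeout(queue, stop)

received = asyncio.run(main())
```

The `wait_for` timeout bounds how long a disconnected client can go unnoticed, at the cost of waking the loop once per interval.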
Ah, I didn't know that it was possible to see them in Chrome by just visiting the address. MDN reports it's unsupported only in IE and Edge (the new Blink-based Edge probably will support it). I really like it; it seems much easier to manage than websockets.
So, I think we ought to change `StreamingResponse` to handle this. Something along these lines...

```python
async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:
    disconnected = False

    async def listen_for_disconnect():
        nonlocal disconnected  # without this, the assignment below would only bind a new local
        while True:
            message = await receive()
            if message['type'] == 'http.disconnect':
                disconnected = True
                break

    task = asyncio.create_task(listen_for_disconnect())
    try:
        await send(
            {
                "type": "http.response.start",
                "status": self.status_code,
                "headers": self.raw_headers,
            }
        )
        async for chunk in self.body_iterator:
            if not isinstance(chunk, bytes):
                chunk = chunk.encode(self.charset)
            await send({"type": "http.response.body", "body": chunk, "more_body": True})
            if disconnected:
                break
        if not disconnected:
            await send({"type": "http.response.body", "body": b"", "more_body": False})
    finally:
        if task.done():
            task.result()
        else:
            task.cancel()
    if self.background is not None:
        await self.background()
```
@tomchristie I think the above implementation would still block until a new value is yielded from the body iterator, since the flag is only checked after each chunk. Also the nested listener feels awkward; I think we should somehow cancel the streaming itself when the client disconnects. While it should solve the problem with disconnected clients, I'm still not sure it will solve the hang in the server shutdown process - what do you think?
Sure - tweakable by pushing the streaming into a task of its own:

```python
async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:
    async def stream_response():
        nonlocal self, send
        await send(
            {
                "type": "http.response.start",
                "status": self.status_code,
                "headers": self.raw_headers,
            }
        )
        async for chunk in self.body_iterator:
            if not isinstance(chunk, bytes):
                chunk = chunk.encode(self.charset)
            await send({"type": "http.response.body", "body": chunk, "more_body": True})
        await send({"type": "http.response.body", "body": b"", "more_body": False})

    async def listen_for_disconnect(task):
        nonlocal self, receive
        while True:
            message = await receive()
            if message['type'] == 'http.disconnect':
                if not task.done():
                    task.cancel()
                break

    stream_task = asyncio.create_task(stream_response())
    disconnect_task = asyncio.create_task(listen_for_disconnect(stream_task))
    await stream_task
    disconnect_task.result() if disconnect_task.done() else disconnect_task.cancel()
    stream_task.result()
    if self.background is not None:
        await self.background()
```

I've not looked into it - depends if uvicorn (or daphne/hypercorn) sends `http.disconnect` when the server is shutting down.
Wow I like this one! I reworked it a bit though:

```python
async def stream_response(self, send):
    await send(
        {
            "type": "http.response.start",
            "status": self.status_code,
            "headers": self.raw_headers,
        }
    )
    async for chunk in self.body_iterator:
        if not isinstance(chunk, bytes):
            chunk = chunk.encode(self.charset)
        await send({"type": "http.response.body", "body": chunk, "more_body": True})
    await send({"type": "http.response.body", "body": b"", "more_body": False})

async def listen_for_disconnect(self, receive):
    while True:
        message = await receive()
        if message["type"] == "http.disconnect":
            break

async def __call__(self, scope, receive, send):
    done, pending = await asyncio.wait(
        [self.stream_response(send), self.listen_for_disconnect(receive)],
        return_when=asyncio.FIRST_COMPLETED,
    )
    for task in pending:
        task.cancel()
    if self.background is not None:
        await self.background()
```

Tested and works. Thoughts?
That's a nice implementation, yup. My one other concern here would be cases where we might inadvertently end up with multiple readers listening for the disconnect event. For example, the HTTP base middleware returns a `StreamingResponse` wrapping the endpoint's response.
@tomchristie I will try adding a middleware like that to my setup and check what happens. Also I found another caveat specific to event-source subscriptions: with the above implementation there is no way to tell when a response was cancelled. What do you think about implementing a callback, e.g. an `on_disconnect` argument? I mean something like this:

```python
return StreamingResponse(content=event_publisher(), on_disconnect=lambda: app.subscriptions.remove(queue))
```

And then the listener logic could call it when it receives a disconnect:

```python
if message["type"] == "http.disconnect":
    self.on_disconnect()
    break
```
You'll get a cancelled exception raised within the streaming code. The sensible thing to do here would be to use
@tomchristie however that would require adding support for it to the response. And then you could do something like this:

```python
async def hello(request):
    async with StreamingResponse() as resp:
        while True:
            await resp.send(data)
            await asyncio.sleep(1)
    return resp
```

So in this case you could try/catch the exception when the disconnect cancels, right? Or maybe I'm overthinking something? Because my problem here is that I need to do the cleanup logic from the view function and not inside the response object.
The async iterator that gets passed to the response instance will have the exception raised there.

Hmm, you sure about that?

Which Python version are you running?

3.7.x and 3.8.x

I tested the proposal using the
I think I found a legitimate solution for the cleanup as well, using background tasks:

```python
@app.route("/subscribe", methods=["GET"])
async def subscribe(request: Request):
    async def remove_subscriptions():
        app.subscriptions.remove(queue)

    async def event_iterator():
        while True:
            # yielding events here
            ...

    queue = Queue()
    app.subscriptions.add(queue)
    return EventSourceResponse(
        content=event_iterator(), background=BackgroundTask(remove_subscriptions)
    )
```

Since background tasks are executed whenever the response is disconnected or finished, it kinda feels appropriate to do cleanups with them.
So at the moment I think I'd like to see any implementation here tackled as a third-party package, as per the comment on #757. I'm trying to keep Starlette's scope down to a minimum, and having an `EventSourceResponse` in the core package falls outside that.
So basically there was an already worked-out PR on this the whole time?
Btw I think that so far we discussed an alternative implementation for the `StreamingResponse` itself - shouldn't that part land in the core package?
Potentially. Let's wait and see what any pull request here looks like, then we'd be in a better position to take a call on it.

There is a third-party package that implements SSE for starlette: https://github.com/sysid/sse-starlette
It says "Caveat: SSE streaming does not work in combination with GZipMiddleware." - is it because of #919?
@jacopofar No, it's not specific to Starlette - it's a constraint of how SSE works. You could potentially compress the content of the individual messages themselves, if they were large enough for that to matter, but you can't compress the stream itself. (It wouldn't be a valid SSE response if you did, since you'd be scrambling the framing that indicates "here's a new message".)
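A sketch of that per-message approach: compress just the payload and base64 it so the bytes stay text-safe, leaving the SSE framing untouched. The helper names are hypothetical, and it assumes the client knows to decode the body:

```python
import base64
import zlib

def compress_payload(text):
    # Compress only the message body and base64-encode it so the SSE framing
    # (the "data:" prefix and the blank-line terminator) stays intact.
    return base64.b64encode(zlib.compress(text.encode("utf-8"))).decode("ascii")

def decompress_payload(encoded):
    return zlib.decompress(base64.b64decode(encoded)).decode("utf-8")

message = "a large, highly repetitive payload " * 50
encoded = compress_payload(message)
event = f"data: {encoded}\n\n"  # still a well-formed single-line SSE event
```

Because base64 output contains no newlines, the compressed body fits in a single `data:` field and the stream itself remains uncompressed, valid SSE.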
Hi, it would be nice if GZipMiddleware (or maybe middlewares in general) accepted a set of routes, as strings or regex patterns, to ignore.
I guess the summary here is:
As per @tomchristie's reply on #51 (comment) 3 months ago, I'm closing this.
Helpers for sending SSE event streams over HTTP connections.
Related resources: