From a03bd7751db98543adb3a7ed2ccdede3d7833c16 Mon Sep 17 00:00:00 2001
From: Andrew <15331990+ahuang11@users.noreply.github.com>
Date: Fri, 2 Feb 2024 13:32:33 -0800
Subject: [PATCH] Fix typo

---
 docs/concepts/streaming.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/concepts/streaming.md b/docs/concepts/streaming.md
index feb4497..299b7a3 100644
--- a/docs/concepts/streaming.md
+++ b/docs/concepts/streaming.md
@@ -8,7 +8,7 @@ This can enhance the user experience by already showing part of the response but
 
 If you want to stream all the tokens generated quickly to your console output, you can use the `settings.console_stream = True` setting.
 
-## `strem_to()` wrapper
+## `stream_to()` wrapper
 
 For streaming with non runnable funcchains you can wrap the LLM generation call into the `stream_to()` context manager.
 This would look like this:
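
For reviewers, the `stream_to()` wrapper and `settings.console_stream` option that the patched docs page describes might be used roughly as follows. This is a minimal sketch, not part of the patch: the import path (`funcchain.backend.streaming`) and the `generate_story_of` function are assumptions modeled on funcchain's documented usage patterns and may differ between library versions.

```python
from funcchain import chain, settings

# Assumed import path; the module exposing stream_to may
# differ between funcchain versions.
from funcchain.backend.streaming import stream_to

# Mentioned in the patched docs page: echo every generated
# token to the console as it arrives.
settings.console_stream = True


# Hypothetical funcchain function, used only for illustration.
def generate_story_of(topic: str) -> str:
    """
    Write a short story based on the topic.
    """
    return chain()


# Wrap the non-runnable funcchain call so each generated token
# is forwarded to the given callback (here: print) as it streams.
with stream_to(print):
    generate_story_of("a space cat")
```

Note that, as the surrounding docs page explains, streaming like this only works for `str` outputs, since structured (pydantic) outputs must be fully generated before they can be parsed.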