add documentation
yoomlam committed May 27, 2024
1 parent 3f1d07b commit 8b238c5
Showing 10 changed files with 151 additions and 27 deletions.
56 changes: 55 additions & 1 deletion 05-assistive-chatbot/README.md
@@ -17,8 +17,62 @@ After running the chatbot and providing feedback in the UI, review the feedback

## Run

1. Start the chatbot service: `./chatbot-chainlit.py` or `chainlit run ./chatbot-chainlit.py`
All apps use configuration settings from `.env`, which can be overridden by environment variables such as `CHAT_ENGINE` and `LLM_MODEL_NAME`. See `_init_settings()` in `chatbot/__init__.py` for other variables.
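
For instance, a minimal `.env` might look like the following (the values are illustrative, drawn from the command-line examples later in this README):

```shell
# .env (illustrative values)
CHAT_ENGINE=Direct
LLM_MODEL_NAME='langchain.ollama :: openhermes'
CHATBOT_LOG_LEVEL=INFO
```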

### Run web chatbot app

1. Start the Chainlit-based chatbot service: `./chatbot-chainlit.py` or `chainlit run ./chatbot-chainlit.py`
1. Open a browser to `http://localhost:8000/`

For development, run something like `chainlit run -w -h --port 9000 ./chatbot-chainlit.py` (`-w` reloads on code changes; `-h` skips opening a browser window).

Running the chatbot app will also run the API, which is defined in `chatbot_api.py`.

### Run only the API

1. Run `./chatbot_api.py`
1. Open a browser to the `/query` endpoint followed by a question, such as `http://localhost:8001/query/tell me a joke`

### Run commandline app

1. Run `./cmdline.py`

To quickly set variables and run the app on a single line:
`CHATBOT_LOG_LEVEL=INFO CHAT_ENGINE=Direct LLM_MODEL_NAME='langchain.ollama :: openhermes' ./cmdline.py`

To see more logs, adjust the log level like `CHATBOT_LOG_LEVEL=DEBUG`.


## Development

- The chatbot package `chatbot/__init__.py` is run for all apps because they `import chatbot`.
- It initializes settings (`_init_settings()`) and creates a specified chat engine (`create_chat_engine(settings)`).

### Adding a chat engine

A chat engine specifies a process that interfaces with an LLM (or multiple LLMs) and ultimately produces a response.
To create a chat engine, add a new Python file under `chatbot/engines` with:
- a constant `ENGINE_NAME` set to a unique chat engine name; this name is used as a value for the `CHAT_ENGINE` setting or environment variable.
- an `init_engine(settings)` function to instantiate a chat engine class
- a chat engine class that:
- creates a client to an LLM (`create_llm_client(settings)`), then
- uses the LLM client to generate a response to specified query (`gen_response(self, query)`)

The `chat_engine.gen_response(query)` function is called by the apps when a user submits a query.
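
As a sketch of the steps above, a minimal engine module might look like the following. The `Echo` engine name and the stub client are invented for illustration; a real engine would call `engines.create_llm_client(settings)` instead of the stub.

```python
# chatbot/engines/echo_engine.py (hypothetical example)
ENGINE_NAME = "Echo"  # unique engine name; used as the CHAT_ENGINE value


def init_engine(settings):
    return EchoChatEngine(settings)


class EchoChatEngine:
    def __init__(self, settings):
        self.settings = settings
        # The real package would call engines.create_llm_client(settings);
        # a stub client stands in here so the sketch is self-contained.
        self.client = _StubLlmClient()

    def gen_response(self, query):
        # Use the LLM client to generate a response to the query
        return self.client.submit(query)


class _StubLlmClient:
    def submit(self, message):
        return f"Echo> {message}"
```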

### Adding an LLM client

An LLM client enables interacting with a specified language model via some LLM API. To add a new LLM model to an existing LLM client, add the model's name to `MODEL_NAMES` of the corresponding `*_client.py` file.

To create a new LLM client, add a new Python file under `chatbot/llms` with:
- a constant `CLIENT_NAME` set to a unique LLM provider name; this name is used as a value for the `LLM_MODEL_NAME` setting or environment variable;
- a constant `MODEL_NAMES` set to a list of language model names recognized by the LLM provider;
- an `init_client(model_name, settings)` function to instantiate an LLM client class;
- an LLM client class that:
- sets `self.client` based on the provided `settings`, and
- implements a `submit(self, message)` function that uses `self.client` to generate a response; the response may need to be parsed so that a plain string is returned to `chat_engine.gen_response(self, query)`.

An LLM client can be used in any arbitrary program by:
- setting `client = init_client(model_name, settings)`, then
- calling `client.submit(message)`

See `client_example_usage()` in `chatbot/llms/mock_llm_client.py`.
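
A minimal client module following the bullets above might look like this hypothetical example; the `echo` provider name and `echo-small` model name are invented, and a real `__init__` would connect to an actual LLM API.

```python
# chatbot/llms/echo_llm_client.py (hypothetical example)
CLIENT_NAME = "echo"          # unique LLM provider name
MODEL_NAMES = ["echo-small"]  # model names recognized by this provider


def init_client(model_name, settings):
    return EchoLlmClient(model_name, settings)


class EchoLlmClient:
    def __init__(self, model_name, settings):
        self.model_name = model_name
        self.settings = settings
        # A real client would connect to an LLM API here
        self.client = None

    def submit(self, message):
        # A real implementation would call self.client and parse the API
        # response so that a plain string is returned
        return f"{self.model_name}> {message}"
```

Usage then follows the pattern above: `client = init_client("echo-small", settings)` and `client.submit(message)`.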
7 changes: 6 additions & 1 deletion 05-assistive-chatbot/chatbot-chainlit.py
@@ -1,5 +1,10 @@
#!/usr/bin/env chainlit run -h

"""
ChainLit-based chatbot, providing a web user interface for the selected chat engine and settings.
See README.md for instructions to enable user feedback.
"""

import logging
import pprint

@@ -99,7 +104,7 @@ async def message_submitted(message: cl.Message):
# settings = cl.user_session.get("settings")

chat_engine = cl.user_session.get("chat_engine")
response = chat_engine.get_response(message.content)
response = chat_engine.gen_response(message.content)

await cl.Message(content=f"*Response*: {response}").send()

4 changes: 2 additions & 2 deletions 05-assistive-chatbot/chatbot/__init__.py
@@ -10,7 +10,7 @@
# - add unit tests


## Logging
## Initialize logging


def _configure_logging():
@@ -28,7 +28,7 @@ def _configure_logging():
logger = logging.getLogger(__name__)


## Settings
## Initialize settings


@utils.verbose_timer(logger)
2 changes: 1 addition & 1 deletion 05-assistive-chatbot/chatbot/engines/__init__.py
@@ -28,7 +28,7 @@ def _discover_chat_engines(force=False):
return _engines


## Factory methods
## Factory functions


@utils.timer
10 changes: 5 additions & 5 deletions 05-assistive-chatbot/chatbot/engines/direct_engine.py
@@ -7,17 +7,17 @@
logger = logging.getLogger(__name__)


def init_engine(settings):
return DirectChatEngine(settings)


class DirectChatEngine:
def __init__(self, settings):
self.settings = settings
self.client = engines.create_llm_client(settings)

@utils.timer
def get_response(self, query):
def gen_response(self, query):
logger.debug("Query: %s", query)
response = self.client.submit(query)
return response


def init_engine(settings):
return DirectChatEngine(settings)
10 changes: 5 additions & 5 deletions 05-assistive-chatbot/chatbot/engines/v2_household_engine.py
@@ -3,17 +3,17 @@
ENGINE_NAME = "Summaries"


def init_engine(settings):
return SummariesChatEngine(settings)


class SummariesChatEngine:
def __init__(self, settings):
self.settings = settings
self.client = engines.create_llm_client(settings)

@utils.timer
def get_response(self, query):
def gen_response(self, query):
response = self.client.submit(query)
# TODO: create summaries
return response


def init_engine(settings):
return SummariesChatEngine(settings)
13 changes: 13 additions & 0 deletions 05-assistive-chatbot/chatbot/llms/mock_llm_client.py
@@ -15,3 +15,16 @@ def __init__(self, model_name, settings):

def submit(self, message):
return self.mock_responses.get(message, f"Mock LLM> Your query was: {message}")


if __name__ == "__main__":
# An LLM client can be used in any arbitrary program
def client_example_usage():
settings = {"temperature": 1}
client = init_client("llm", settings)
while True:
message = input("You> ")
response = client.submit(message)
print(f"Mock LLM> {response}")

client_example_usage()
10 changes: 7 additions & 3 deletions 05-assistive-chatbot/chatbot/utils.py
@@ -9,6 +9,7 @@


def timer(func):
"A decorator that logs the time it takes for the decorated function to run"
module = inspect.getmodule(func)
if module:
logger = logging.getLogger(module.__name__)
@@ -29,6 +30,8 @@ def wrapper_timer(*args, **kwargs):

# https://stackoverflow.com/a/10176276/23458508
def verbose_timer(logger):
"A decorator that logs the time it takes for the decorated function to run and the return value"

def timer_decorator(func):
@functools.wraps(func)
def wrapper_timer(*args, **kwargs):
@@ -46,14 +49,15 @@ def wrapper_timer(*args, **kwargs):


def scan_modules(ns_pkg):
"Return a dictionary of Python modules found in the given namespace package"
# From https://packaging.python.org/en/latest/guides/creating-and-discovering-plugins/#using-namespace-packages
itr = pkgutil.iter_modules(ns_pkg.__path__, ns_pkg.__name__ + ".")
return {name: import_module_if_possible(name) for _, name, _ in itr}
return {name: _import_module_if_possible(name) for _, name, _ in itr}


def import_module_if_possible(name):
def _import_module_if_possible(name):
try:
return importlib.import_module(name)
except ImportError:
# logging.warning("Could not import module: %s", name)
# logger.warn("Could not import module: %s", name)
return None
53 changes: 46 additions & 7 deletions 05-assistive-chatbot/chatbot_api.py
100644 → 100755
@@ -1,20 +1,59 @@
#!/usr/bin/env python3

"""
This is a sample API file that demonstrates how to create an API using FastAPI,
which is compatible with ChainLit. This file is a starting point for creating
an API that can be deployed with the ChainLit chatbot.
"""

import logging

from chainlit.server import app
from fastapi import Request
from fastapi import FastAPI, Request
from fastapi.responses import HTMLResponse

import chatbot

if __name__ == "__main__":
# If running directly, define the FastAPI app
app = FastAPI()
else:
# Otherwise use ChainLit's app
from chainlit.server import app

logger = logging.getLogger(f"chatbot.{__name__}")

_chat_engine = None


def chat_engine():
# pylint: disable=global-statement
global _chat_engine
if not _chat_engine:
# Load the initial settings
settings = chatbot.initial_settings
chatbot.validate_settings(settings)

# Create the chat engine
_chat_engine = chatbot.create_chat_engine(settings)

return _chat_engine


# See https://docs.chainlit.io/deploy/api#how-it-works
@app.get("/hello")
def hello(request: Request):
logger.info(request.headers)
return HTMLResponse("Hello World")
@app.get("/query/{message}")
def query(message: str):
response = chat_engine().gen_response(message)
return HTMLResponse(response)


@app.get("/healthcheck")
def healthcheck():
def healthcheck(request: Request):
logger.info(request.headers)
# TODO: Add a health check - https://pypi.org/project/fastapi-healthchecks/
return HTMLResponse("Healthy")


if __name__ == "__main__":
import uvicorn

uvicorn.run("__main__:app", host="0.0.0.0", port=8001, log_level="info")
13 changes: 11 additions & 2 deletions 05-assistive-chatbot/cmdline.py
@@ -1,19 +1,28 @@
#!/usr/bin/env python3

"""
Commandline interface for running the same operations as the chatbot.
Useful for development, debugging, and testing.
"""

import logging

import chatbot

logger = logging.getLogger(f"chatbot.{__name__}")


# Load the initial settings
settings = chatbot.initial_settings
chatbot.validate_settings(settings)

# Create the chat engine
chat_engine = chatbot.create_chat_engine(settings)

# Query the chat engine
message = "Hello, what's your name?"
response = chat_engine.get_response(message)
response = chat_engine.gen_response(message)

# Check the response type in case the chat_engine returns a non-string object
if not isinstance(response, str):
logger.error("Unexpected type: %s", type(response))

