# Llama2 Self-hosted Chatbot API
This repository contains a FastAPI application that serves as a local personal chatbot API. It allows you to interact with the Llama2 LLM (and other open-source LLMs) to have natural language conversations, generate text, and perform various language-related tasks.
- Chat with the Llama2 LLM or other open-source LLMs.
- Perform text generation, translation, and more.
- Customizable and extendable for your specific needs.
1. Clone this repository to your local machine:

   ```shell
   git clone https://github.com/ehsanghaffar/llm-practice
   ```

2. Change the working directory to the project folder:

   ```shell
   cd llm-practice
   ```

3. Install the required dependencies:

   ```shell
   pip install -r requirements.txt
   ```

   Don't forget to move your Llama2 model into the `/static` directory.
4. Run the FastAPI application:

   ```shell
   uvicorn main:app --reload
   ```

5. Access the API at `http://localhost:8000` in your web browser or through API client tools like `curl`, `httpie`, or Postman.
Alternatively, build and run the application with Docker (the API is then served on `http://localhost:8800`):

```shell
docker build -t llm .
docker run -d -p 8800:8000 --name llm_app llm
```
- `/chatting`: Start a chat session with the LLM.
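As a sketch of calling the `/chatting` endpoint from Python using only the standard library: the JSON field names (`prompt` in the request body) and the response shape are assumptions here, not the API's documented schema — check `main.py` for the actual models.

```python
import json
from urllib import request

BASE_URL = "http://localhost:8000"  # use port 8800 if running via Docker


def build_chat_request(prompt: str, base_url: str = BASE_URL) -> request.Request:
    """Build a POST request for the /chatting endpoint.

    The {"prompt": ...} body is an illustrative assumption; adjust it to
    match the request model defined in main.py.
    """
    payload = json.dumps({"prompt": prompt}).encode("utf-8")
    return request.Request(
        f"{base_url}/chatting",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


def chat(prompt: str) -> dict:
    """Send the prompt to the running server and decode the JSON reply."""
    with request.urlopen(build_chat_request(prompt)) as resp:
        return json.loads(resp.read())
```

With the server running, `chat("Hello!")` returns the decoded JSON body of the model's reply.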
For detailed usage examples, refer to the documentation.
[Link to detailed API documentation goes here]
- You can customize the behavior and settings of the LLM in the `config.py` file.
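The contents of `config.py` are not shown in this README; the following is only a sketch of the kind of settings such a file typically holds, with every field name, default, and file path being an illustrative assumption:

```python
from dataclasses import dataclass


@dataclass
class LLMConfig:
    # All fields below are illustrative assumptions -- align them with
    # the real config.py in this repository.
    model_path: str = "static/llama-2-7b-chat.bin"  # model file under /static
    max_tokens: int = 256       # cap on generated tokens per reply
    temperature: float = 0.7    # higher values = more diverse sampling
    context_window: int = 2048  # token budget for prompt + response


# A single shared instance the app can import.
config = LLMConfig()
```

Keeping tunables in one dataclass like this makes overrides explicit, e.g. `LLMConfig(temperature=0.1)` for more deterministic output.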
We welcome contributions to enhance the functionality of this FastAPI chatbot API. Feel free to submit issues, feature requests, or pull requests.
This project is licensed under the MIT License.
- Thanks to the Llama2 LLM and the other open-source LLMs used in this project.
For questions or feedback, please contact Ehsan Ghaffar.