Chat with your GPT-4 model via a Streamlit web app using the OpenAI API, built on the latest `openai` and `streamlit` packages
- Set up your OpenAI API key as a Streamlit secret:
  - Create a `.streamlit/secrets.toml` file in the project/current directory and add the following line to it:

    ```toml
    OPENAI_API_KEY = "YOUR_API_KEY"
    ```

  - NOTE: You need to procure your API key from the OpenAI website by adding the required payment; you could start small with $5.
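  For reference, here is a minimal sketch of how the app can pick up this secret and initialize the client; the exact wiring in `chat_with_gpt4_streamlit/main.py` may differ:

  ```python
  import streamlit as st
  from openai import OpenAI

  # Streamlit automatically loads .streamlit/secrets.toml into st.secrets
  client = OpenAI(api_key=st.secrets["OPENAI_API_KEY"])
  ```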
- Set up a virtual environment using `venv` (Unix/macOS) and install the dependencies:

  ```bash
  python -m venv .venv
  source .venv/bin/activate
  pip install -r requirements.txt
  ```
- Start the app by following the steps below:
  - Activate the virtual environment (if not already activated): `source .venv/bin/activate`
  - Then use the Makefile command `make run`, or start the app directly with `streamlit run chat_with_gpt4_streamlit/main.py`
  - Navigate to http://localhost:8501, where Streamlit runs by default
- More features of the chat app (see the sketch after this list):
  - Streaming generation of the response - no waiting!
  - Export the current conversation to save on API calls - outputs are written to a folder called `exports` inside the project directory
  - Get the total number of tokens used via the `tiktoken` library
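  For illustration, here is a minimal sketch of how the streaming chat loop and token counting could be wired together with the current `openai`, `streamlit`, and `tiktoken` packages. The model name and overall structure are assumptions; the actual `chat_with_gpt4_streamlit/main.py` may differ.

  ```python
  import streamlit as st
  import tiktoken
  from openai import OpenAI

  client = OpenAI(api_key=st.secrets["OPENAI_API_KEY"])
  MODEL = "gpt-4"  # assumption: in this project the model comes from OAI_MODEL in configs.py

  if "messages" not in st.session_state:
      st.session_state.messages = []

  # Replay the conversation so far
  for msg in st.session_state.messages:
      with st.chat_message(msg["role"]):
          st.markdown(msg["content"])

  if prompt := st.chat_input("Say something"):
      st.session_state.messages.append({"role": "user", "content": prompt})
      with st.chat_message("user"):
          st.markdown(prompt)

      with st.chat_message("assistant"):
          # stream=True yields chunks as they are generated - no waiting for the full reply
          stream = client.chat.completions.create(
              model=MODEL,
              messages=st.session_state.messages,
              stream=True,
          )
          # st.write_stream renders the chunks live and returns the full text
          reply = st.write_stream(stream)
      st.session_state.messages.append({"role": "assistant", "content": reply})

      # Rough total token count of the conversation using tiktoken
      enc = tiktoken.encoding_for_model(MODEL)
      total_tokens = sum(len(enc.encode(m["content"])) for m in st.session_state.messages)
      st.caption(f"Approximate total tokens used: {total_tokens}")
  ```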
- Advanced configurations (a sketch of `configs.py` follows this list):
  - Adjust the OpenAI text generation model by changing the `OAI_MODEL` parameter in the `configs.py` file. A list of all available models can be found in the OpenAI documentation.
  - Adjust the location where chat exports are saved by changing the `EXPORT_DIR` parameter in the `configs.py` file. This path is relative to the project root directory.
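  For reference, `configs.py` presumably contains something along these lines; the parameter names come from this README, while the specific values below are illustrative assumptions:

  ```python
  # configs.py - central configuration for the chat app (values are illustrative)

  # Name of the OpenAI text generation model the app calls
  OAI_MODEL = "gpt-4"

  # Directory where chat exports are saved, relative to the project root
  EXPORT_DIR = "exports"
  ```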