The pgGPT Discord Bot is an LLM-driven system designed for Discord servers. With a primary focus on leveraging the capabilities of Subgrounds, the bot provides a seamless interface for querying and downloading decentralized subgraph data directly within Discord. Its capabilities are further enhanced by the integration of a sophisticated AI language model, enabling intelligent interactions and in-depth assistance with Subgrounds. Integrating the Playgrounds Docs with an LLM yields a dynamic conversational system with an augmented understanding of the Playgrounds ecosystem.
```
+--------------------------+
|      Discord Server      |
|                          |
|  +--------------------+  |
|  |     pgGPT Bot      |<-+--- GraphQL Queries ---> [Playgrounds GraphQL API]
|  +--------------------+  |
|            |             |
|            |  API Key    |
|            |             |
+------------v-------------+
             |
             v
+--------------------------+
|    User's DM Channel     |
+--------------------------+
```
- Discord Server: The primary environment where the bot operates.
- pgGPT Bot: Our main bot module, responsible for processing commands and interactions.
- Playgrounds GraphQL API: External GraphQL API for querying decentralized subgraphs.
- User's DM Channel: Private channel for sensitive interactions like API key collection.
- Command Initiation: Users trigger the AI with the `/ask_ai` command.
- AI Backend Processing: Using a combination of retrieval and generative methods, the bot fetches or constructs responses based on the provided Subgrounds documentation.
- Response Delivery: The AI's response is presented to the user in a structured embed format.
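This flow maps naturally onto discord.py's slash commands. Below is a minimal sketch of what such a handler could look like; the handler body, embed layout, and the `answer_question()` helper are illustrative assumptions rather than the bot's actual code.

```python
# Minimal sketch of an /ask_ai slash command (discord.py 2.x app commands).
import discord
from discord import app_commands

intents = discord.Intents.default()
client = discord.Client(intents=intents)
tree = app_commands.CommandTree(client)

async def answer_question(question: str) -> str:
    """Hypothetical stand-in for the retrieval + generation backend in pgGPT.py."""
    return "..."

@tree.command(name="ask_ai", description="Ask the AI about Subgrounds")
async def ask_ai(interaction: discord.Interaction, question: str):
    # Defer first: LLM calls can easily exceed Discord's 3-second ack window.
    await interaction.response.defer()
    answer = await answer_question(question)
    # Deliver the response in a structured embed, as described above.
    embed = discord.Embed(title="pgGPT", description=answer)
    await interaction.followup.send(embed=embed)
```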
- Command Start: Users initiate with the `/query_subgraph` command.
- API Key Collection: A DM is sent to the user to securely gather their Playgrounds API key.
- Entity Listing: Post key retrieval, the bot fetches and displays available entities from the specified subgraph.
- Entity Querying: Users select an entity. The bot performs the query and generates the results.
- Result Presentation: Query results are presented to users in a downloadable CSV format.
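As a rough illustration of what the query step might look like under the hood, here is a minimal Subgrounds sketch that queries a subgraph through the Playgrounds proxy and exports the result to CSV. The subgraph ID, the `pairs` entity, and the field selection are placeholders, and the endpoint and header name should be checked against the current Playgrounds docs.

```python
# Illustrative Subgrounds sketch, not the bot's actual code.
from subgrounds import Subgrounds

# Authenticate against the Playgrounds proxy (header name per Playgrounds docs).
sg = Subgrounds(headers={"Playgrounds-Api-Key": "your_playgrounds_api_key"})
subgraph = sg.load_subgraph(
    "https://api.playgrounds.network/v1/proxy/subgraphs/id/<subgraph_id>"
)

# Query the first 10 rows of a hypothetical `pairs` entity into a DataFrame,
# then export it as the downloadable CSV the bot returns to the user.
df = sg.query_df(subgraph.Query.pairs(first=10))
df.to_csv("results.csv", index=False)
```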
- main.py: Entry point of the bot. Handles command registration, Discord events, and utility functions.
- pgGPT.py: Contains the AI's backend logic, integrating OpenAI language models and the retrieval mechanism.
- graphql_handler.py: Responsible for GraphQL interactions, schema introspections, and query executions.
- Repository Setup: Clone the repository.
- Environment Prep: Activate a Python virtual environment.
- Dependency Management: Run `pip install -r requirements.txt`.
- Environment Variables: Configure the necessary environment variables (e.g., bot token, OpenAI API key).
- Starting the Bot: Execute `python main.py`.
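Put together, a typical local setup might look like the following. The repository URL and directory name are placeholders; the variable names match those used in the Docker instructions below.

```bash
git clone <repository-url>        # replace with the project's repository URL
cd pgGPT-discord-bot              # placeholder directory name
python -m venv venv
source venv/bin/activate          # on Windows: venv\Scripts\activate
pip install -r requirements.txt
export DISCORD_BOT_TOKEN=your_discord_bot_token
export OPENAI_API_KEY=your_openai_api_key
python main.py
```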
- Ensure Docker is installed. If not, get it from Docker's official website.
- Navigate to the project root (where the Dockerfile is) and run `docker build -t pggpt_bot_image .`. Replace `pggpt_bot_image` if you wish to use a different image name.
- With the image built, you can now run the bot in a container. Remember to replace the placeholders with your actual values:

```bash
docker run \
  -e DISCORD_BOT_TOKEN=your_discord_bot_token \
  -e BOT_TOKEN=your_bot_token \
  -e DISCORD_CLIENT_ID=your_discord_client_id \
  -e ALLOWED_SERVER_IDS=your_allowed_server_ids \
  -e SERVER_TO_MODERATION_CHANNEL=your_server_to_moderation_channel \
  -e OPENAI_API_KEY=your_openai_api_key \
  pggpt_bot_image
```
- If the application in the Docker container writes logs to stdout or stderr, view them with `docker logs container_name_or_id`.
- To copy files (e.g., logs) from the container to your host, use `docker cp container_name_or_id:/path/in/container /path/on/host`.
- If you update the bot's code and wish to reflect the changes in Docker:
  - Rebuild the Docker image as shown above.
  - Run the container using the new image.
Notes:
When running Docker locally, the containers consume your machine's resources and stop when the machine shuts down. For continuous service, deploy the containers to a cloud provider.
The pgGPT Discord Bot leverages Large Language Models (LLMs) to provide a deep and interactive AI experience for users. The AI backend, designed around OpenAI's LLMs, can process and generate responses based on structured and unstructured data sources. This section delves into the AI system's architecture and how it integrates with the bot.
- Document Loading: The bot uses loaders to ingest unstructured data, like markdown documents, into the system. This data is stored as LangChain Documents.
- Text Splitting: Once loaded, documents are split into manageable chunks. This allows for more efficient embedding and storage.
- Vector Storage: Chunks are embedded into vectors and stored in vector stores. The bot uses FAISS for efficient vector storage and retrieval.
- Retrieval: When a question is posed, relevant chunks are retrieved from the vector store using similarity search. This ensures that the AI has contextually appropriate data to generate responses.
- Answer Generation: With relevant chunks retrieved, an LLM produces answers. The bot uses a ChatOpenAI model for this, which integrates the LLM and the retrieval mechanism.
- Custom Prompts: The bot uses a custom prompt template to guide the LLM's interactions, ensuring responses are consistent and contextually aware.
- Source Citations: The bot can also return the source documents used to generate an answer, providing users with the context and origin of the information.
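To make the pipeline concrete, here is a minimal sketch of the steps above using LangChain with FAISS and ChatOpenAI. The file path, chunk sizes, model name, and prompt wording are illustrative assumptions, not the bot's actual configuration.

```python
# Illustrative retrieval-augmented QA pipeline (classic LangChain API).
from langchain.document_loaders import TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.chat_models import ChatOpenAI
from langchain.chains import RetrievalQA
from langchain.prompts import PromptTemplate

# 1. Document loading: ingest a markdown document as LangChain Documents
#    (placeholder file name).
docs = TextLoader("subgrounds_docs.md").load()

# 2. Text splitting: break the document into manageable chunks.
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_documents(docs)

# 3. Vector storage: embed the chunks and index them with FAISS.
store = FAISS.from_documents(chunks, OpenAIEmbeddings())

# 4-6. Retrieval and answer generation with a custom prompt; the chain also
#      returns the source documents used to produce the answer.
prompt = PromptTemplate(
    input_variables=["context", "question"],
    template=(
        "You are pgGPT, an assistant for the Playgrounds ecosystem.\n"
        "Answer using only the context below.\n\n"
        "Context: {context}\n\nQuestion: {question}"
    ),
)
qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0),
    retriever=store.as_retriever(),
    chain_type_kwargs={"prompt": prompt},
    return_source_documents=True,
)

result = qa({"query": "How do I load a subgraph with Subgrounds?"})
print(result["result"])
print([doc.metadata for doc in result["source_documents"]])
```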
For a deeper dive into the LLM system's architecture and capabilities, refer to the provided API and library documentation.
- Bot Onboarding: Invite the bot to your Discord server.
- Hello Command: Begin a conversation using `/hello`.
- Subgraph Queries: Use `/query_subgraph` to start a query process.
- AI Interactions: Engage with the AI via `/ask_ai` followed by a question.
- Help: Unsure about commands? Just type `/help`.
Developers are encouraged to contribute. Start by forking the repository and then submit your pull requests. Ensure that you follow code style guidelines and provide adequate documentation.
- License: MIT.
- Contact: For questions or technical issues, open an issue on our GitHub repository.