HiveMind is an AI-powered code editor designed for large organizations, built on Continue, an open-source alternative to Cursor. It addresses a common challenge: multiple team members independently encountering and solving the same issues, especially in complex codebases with many dependencies. By maintaining a shared central context across all users, HiveMind enables efficient knowledge sharing and problem-solving within teams.
For a demo, the project architecture, and updates, visit our Devpost page. We will be releasing our extension for VS Code soon!
Architecture of HiveMind. More details below.
Mixture of Experts framework in HiveMind with OpenAI Swarm.
- Shared Memory: Solutions to common issues are stored and accessible to all team members, reducing redundant problem-solving.
- Mixture of Experts: Uses an OpenAI Swarm-based model with specialized experts (e.g., Systems, ML, Frontend) to provide tailored solutions.
- Real-time Collaboration: Seamless integration with Continue (open-source alternative to Cursor) for a collaborative coding experience.
- Advanced Context Retrieval: Implements Retrieval-Augmented Generation (RAG) using Pinecone for enhanced context-aware responses.
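As a rough sketch of the retrieval step behind this RAG flow: embeddings for past solutions live in a vector index, and the closest matches are pulled back for each new query. The in-memory store below stands in for Pinecone (and the vectors for an OpenAI embeddings call); names like `InMemoryContextStore` are illustrative, not the actual code.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

class InMemoryContextStore:
    """Stands in for the Pinecone index: upsert vectors, query by similarity."""

    def __init__(self):
        self.records = {}  # id -> (vector, metadata)

    def upsert(self, rec_id, vector, metadata):
        self.records[rec_id] = (vector, metadata)

    def query(self, vector, top_k=3):
        # Score every stored vector against the query and keep the best top_k.
        scored = [
            (cosine_similarity(vector, vec), rec_id, meta)
            for rec_id, (vec, meta) in self.records.items()
        ]
        scored.sort(key=lambda item: item[0], reverse=True)
        return [(rec_id, meta) for _, rec_id, meta in scored[:top_k]]
```

In HiveMind the metadata would carry a teammate's stored solution, so the best match can be injected into the model's context before answering.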
HiveMind consists of two main components:
- Frontend: Built on Continue, providing the user interface for code editing and AI interactions.
- Backend: Python-based server using FastAPI, interfacing with Pinecone for vector storage and OpenAI for AI model interactions.
To set up HiveMind, follow these steps:
- Clone the repository:
git clone https://github.com/Sam-Fatehmanesh/Cerebral_hive_backend.git
cd Cerebral_hive_backend
- Install dependencies:
pip install -r requirements.txt
- Set up environment variables: Create a `.env` file in the root directory and add the following:
OPENAI_API_KEY=your_openai_api_key
PINECONE_API_KEY=your_pinecone_api_key
PINECONE_ENVIRONMENT=your_pinecone_environment
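A minimal sketch of how the backend might validate these variables at startup (assuming the `.env` file is loaded with something like python-dotenv; `load_settings` is a hypothetical helper, not part of the actual code):

```python
import os

def load_settings():
    """Read the required keys from the environment, failing fast if one is missing."""
    required = ("OPENAI_API_KEY", "PINECONE_API_KEY", "PINECONE_ENVIRONMENT")
    missing = [name for name in required if not os.getenv(name)]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
    return {name: os.environ[name] for name in required}
```

Failing fast here gives a clear error message instead of an opaque authentication failure on the first Pinecone or OpenAI call.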
To run the HiveMind backend:
uvicorn main:app --reload
This will start the FastAPI server, which handles requests from the Continue frontend and manages interactions with OpenAI and Pinecone.
- Main FastAPI application integrating all components.
- Key features:
- Pinecone vector database integration for context storage and retrieval
- OpenAI API integration for generating embeddings
- `/chat/completions` endpoint for processing queries and streaming responses
- Answer storage functionality
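For illustration, streamed chat completions are typically sent as OpenAI-style server-sent events; below is a sketch of how one response chunk could be framed before being handed to FastAPI's `StreamingResponse` (the field names are assumptions, not the project's actual payloads):

```python
import json

def format_sse_chunk(delta_text, model="hivemind"):
    """Wrap one segment of model output as an OpenAI-style SSE data line."""
    payload = {
        "object": "chat.completion.chunk",
        "model": model,
        "choices": [{"index": 0, "delta": {"content": delta_text}}],
    }
    # SSE frames are "data: <json>" terminated by a blank line.
    return f"data: {json.dumps(payload)}\n\n"
```

Streaming chunk by chunk lets the Continue frontend render tokens as they arrive rather than waiting for the full answer.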
- Defines the core logic for the Mixture of Experts system.
- Key components:
- Various expert agents (Python, Data Structures, Algorithms, Web Development, Database, Machine Learning)
- Web Scraper Agent for gathering online information
- Router Agent for directing queries to appropriate experts
- `get_response` function to process user queries
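The routing step can be sketched as follows. The real Router Agent uses an LLM (OpenAI Swarm) to pick an expert; a simple keyword heuristic stands in here so the control flow is visible, and `route_query` plus the keyword table are illustrative, not the actual code:

```python
# Keyword hints per expert; the expert names follow the list above.
EXPERT_KEYWORDS = {
    "Machine Learning": ["model", "training", "dataset", "gradient"],
    "Web Development": ["http", "html", "css", "endpoint"],
    "Database": ["sql", "query", "index", "schema"],
}

def route_query(query):
    """Return the expert whose keywords best match the query (default: Python)."""
    words = query.lower().split()
    scores = {
        expert: sum(kw in words for kw in keywords)
        for expert, keywords in EXPERT_KEYWORDS.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "Python"
```

The chosen expert's system prompt and tools would then shape the final answer returned by `get_response`.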
- Implements a FastAPI server for handling chat completions and text completions.
- Key features:
- `/v1/chat/completions` endpoint for chat model interactions
- `/v1/completions` endpoint for text completion model interactions
- Mock response generation for demonstration purposes
- Preprocesses the CodeQA dataset for use in the system.
- Functions:
- `download_file`: Downloads the dataset from GitHub
- `process_codeqa_dataset`: Processes the raw data into a usable format
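As a hedged sketch of what this preprocessing step might look like: CodeQA ships questions and answers as parallel line-aligned files, so processing amounts to pairing them into records ready for embedding. The function name and field names below are assumptions, and the real `process_codeqa_dataset` may differ:

```python
def process_codeqa_pairs(question_lines, answer_lines):
    """Zip parallel question/answer lines into records, skipping blank pairs."""
    records = []
    for i, (q, a) in enumerate(zip(question_lines, answer_lines)):
        q, a = q.strip(), a.strip()
        if q and a:
            records.append({"id": f"codeqa-{i}", "question": q, "answer": a})
    return records
```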
- Contains tests for the API endpoints.
- Tests:
- Chat completion functionality
- Answer storage functionality
- Prints vector database contents for verification
- Comprehensive unit tests for the Context API.
- Tests various aspects including:
- Context retrieval
- Context length
- Repeated queries
- Special character handling
- Concurrent request handling
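A concurrent-request check like the one described can be sketched as below, with a stub standing in for the live Context API client (`fetch_context` and `check_concurrent_requests` are hypothetical names, not the project's test code):

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_context(query):
    # Stand-in for an HTTP call to the Context API.
    return f"context for: {query}"

def check_concurrent_requests(queries, workers=8):
    """Fire many queries in parallel and verify every one gets an answer."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(fetch_context, queries))  # preserves input order
    assert all(r.startswith("context for:") for r in results)
    return results
```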
- Implements web scraping functionality.
- Key functions:
- `get_html_content`: Extracts and cleans content from web pages
- `scrape_web`: Performs Google searches and retrieves content from top results
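A minimal sketch of the kind of extraction `get_html_content` performs, using only the stdlib `HTMLParser` (the actual implementation may use a different library; `TextExtractor` and `get_text` are illustrative names):

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect visible text, skipping <script> and <style> content."""
    SKIP = {"script", "style"}

    def __init__(self):
        super().__init__()
        self._skip_depth = 0
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        # Keep only non-blank text outside skipped tags.
        if not self._skip_depth and data.strip():
            self.chunks.append(data.strip())

def get_text(html):
    """Return the cleaned visible text of an HTML document."""
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.chunks)
```

Cleaning out script/style noise keeps the scraped context compact before it is embedded and stored.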
- FastAPI for backend API development
- Pinecone for vector database storage and retrieval
- OpenAI API for generating embeddings and language model interactions
- Mixture of Experts architecture using specialized agent models
- Web scraping capabilities for real-time information retrieval
- Streaming responses for efficient communication with the frontend
- Integration with Continue (open-source Cursor alternative) for the user interface