diff --git a/Assets/DocsGPT tee-back.jpeg b/Assets/DocsGPT tee-back.jpeg
new file mode 100644
index 000000000..b13fe53d7
Binary files /dev/null and b/Assets/DocsGPT tee-back.jpeg differ
diff --git a/Assets/DocsGPT tee-front.jpeg b/Assets/DocsGPT tee-front.jpeg
new file mode 100644
index 000000000..8a4b7374c
Binary files /dev/null and b/Assets/DocsGPT tee-front.jpeg differ
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index d864e73f2..c32117a16 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -19,19 +19,31 @@ Thank you for choosing to contribute to DocsGPT! We are all very grateful!
- We value contributions in the form of discussions or suggestions. We recommend taking a look at existing issues and our [roadmap](https://github.com/orgs/arc53/projects/2).
+
- If you're interested in contributing code, here are some important things to know:
- We have a frontend built on React (Vite) and a backend in Python.
+Before creating issues, please check out how the latest version of our app looks and works by launching it via [Quickstart](https://github.com/arc53/DocsGPT#quickstart); note that the version on our live demo is slightly modified, with a login. Your issues should relate to the version that you can launch via [Quickstart](https://github.com/arc53/DocsGPT#quickstart).
+
+### πŸ‘¨β€πŸ’» If you're interested in contributing code, here are some important things to know:
+
+Tech Stack Overview:
+
+- 🌐 Frontend: Built with React (Vite) βš›οΈ
-### If you are looking to contribute to frontend (βš›οΈReact, Vite):
+- πŸ–₯ Backend: Developed in Python 🐍
+
+### 🌐 If you are looking to contribute to the frontend (βš›οΈ React, Vite):
- The current frontend is being migrated from [`/application`](https://github.com/arc53/DocsGPT/tree/main/application) to [`/frontend`](https://github.com/arc53/DocsGPT/tree/main/frontend) with a new design, so please contribute to the new one.
- Check out this [milestone](https://github.com/arc53/DocsGPT/milestone/1) and its issues.
-- The Figma design can be found [here](https://www.figma.com/file/OXLtrl1EAy885to6S69554/DocsGPT?node-id=0%3A1&t=hjWVuxRg9yi5YkJ9-1).
+- The updated Figma design can be found [here](https://www.figma.com/file/OXLtrl1EAy885to6S69554/DocsGPT?node-id=0%3A1&t=hjWVuxRg9yi5YkJ9-1). Please try to follow the guidelines.
-### If you are looking to contribute to Backend (🐍 Python):
+### πŸ–₯ If you are looking to contribute to the backend (🐍 Python):
- Review our issues and contribute to [`/application`](https://github.com/arc53/DocsGPT/tree/main/application) or [`/scripts`](https://github.com/arc53/DocsGPT/tree/main/scripts) (please disregard old [`ingest_rst.py`](https://github.com/arc53/DocsGPT/blob/main/scripts/old/ingest_rst.py) [`ingest_rst_sphinx.py`](https://github.com/arc53/DocsGPT/blob/main/scripts/old/ingest_rst_sphinx.py) files; they will be deprecated soon).
- All new code should be covered with unit tests ([pytest](https://github.com/pytest-dev/pytest)). Please find tests under [`/tests`](https://github.com/arc53/DocsGPT/tree/main/tests) folder.
@@ -44,9 +56,62 @@ To run unit tests from the root of the repository, execute:
python -m pytest
```
-### Workflow:
-Fork the repository, make your changes on your forked version, and then submit those changes as a pull request.
+## Workflow πŸ“ˆ
+
+Here's a step-by-step guide on how to contribute to DocsGPT:
+
+1. **Fork the Repository:**
+   - Click the "Fork" button at the top-right of this repository to create your fork.
+
+2. 
**Create and Switch to a New Branch:**
+   - Create a new branch for your contribution using:
+   ```shell
+   git checkout -b your-branch-name
+   ```
+
+3. **Make Changes:**
+   - Make the required changes in your branch.
+
+4. **Add Changes to the Staging Area:**
+   - Add your changes to the staging area using:
+   ```shell
+   git add .
+   ```
+
+5. **Commit Your Changes:**
+   - Commit your changes with a descriptive commit message using:
+   ```shell
+   git commit -m "Your descriptive commit message"
+   ```
+
+6. **Push Your Changes to the Remote Repository:**
+   - Push your branch with changes to your fork on GitHub using:
+   ```shell
+   git push origin your-branch-name
+   ```
+
+7. **Submit a Pull Request (PR):**
+   - Create a Pull Request from your branch to the main repository. Make sure to include a detailed description of your changes and reference any related issues.
+
+8. **Collaborate:**
+   - Be responsive to comments and feedback on your PR.
+   - Make necessary updates as suggested.
+   - Once your PR is approved, it will be merged into the main repository.
+
+9. **Testing:**
+   - Before submitting a Pull Request, ensure your code passes all unit tests.
+   - To run unit tests from the root of the repository, execute:
+   ```shell
+   python -m pytest
+   ```
+
+*Note: You only need to run the unit tests if you have made changes to the backend code.*
+
+10. **Questions and Collaboration:**
+    - Feel free to join our Discord. We're very friendly and welcoming to new contributors, so don't hesitate to reach out.
+
+Thank you for considering contributing to DocsGPT! πŸ™

## Questions/collaboration
Feel free to join our [Discord](https://discord.gg/n5BX8dh8rU). We're very friendly and welcoming to new contributors, so don't hesitate to reach out.

-# Thank you so much for considering contributing to DocsGPT!πŸ™
+# Thank you so much for considering contributing to DocsGPT! πŸ™
diff --git a/HACKTOBERFEST.md b/HACKTOBERFEST.md
index b16466190..5b693fe94 100644
--- a/HACKTOBERFEST.md
+++ b/HACKTOBERFEST.md
@@ -32,4 +32,10 @@ Once you have created your PR and our maintainers have merged it, please fill in

Feel free to join our Discord server. We're here to help newcomers, so don't hesitate to jump in! [Join us here](https://discord.gg/n5BX8dh8rU).

-Thank you very much for considering contributing to DocsGPT during Hacktoberfest! πŸ™ Your contributions could earn you a stylish new t-shirt as a token of our appreciation. 🎁 Join us, and let's code together! πŸš€
+Thank you very much for considering contributing to DocsGPT during Hacktoberfest! πŸ™ Your contributions (not just simple typo fixes) could earn you a stylish new t-shirt as a token of our appreciation. 🎁 Join us, and let's code together! πŸš€
+
+Here is a preview of the shirts:
+

+<img src="Assets/DocsGPT tee-front.jpeg" alt="DocsGPT tee (front)" width="350"/>
+<img src="Assets/DocsGPT tee-back.jpeg" alt="DocsGPT tee (back)" width="350"/>

diff --git a/README.md b/README.md index 03a3fe830..8105e67e5 100644 --- a/README.md +++ b/README.md @@ -7,7 +7,7 @@

- DocsGPT is a cutting-edge open-source solution that streamlines the process of finding information in project documentation. With its integration of the powerful GPT models, developers can easily ask questions about a project and receive accurate answers. + DocsGPT is a cutting-edge open-source solution that streamlines the process of finding information in the project documentation. With its integration of the powerful GPT models, developers can easily ask questions about a project and receive accurate answers. Say goodbye to time-consuming manual searches, and let DocsGPT help you quickly find the information you need. Try it out and see how it revolutionizes your project documentation experience. Contribute to its development and be a part of the future of AI-powered assistance.

@@ -21,61 +21,56 @@ Say goodbye to time-consuming manual searches, and let - + + Contributors ## License + The source code license is [MIT](https://opensource.org/license/mit/), as described in the [LICENSE](LICENSE) file. Built with [πŸ¦œοΈπŸ”— LangChain](https://github.com/hwchase17/langchain) diff --git a/application/api/user/routes.py b/application/api/user/routes.py index 02a0876c3..b80562b20 100644 --- a/application/api/user/routes.py +++ b/application/api/user/routes.py @@ -53,6 +53,15 @@ def get_single_conversation(): conversation = conversations_collection.find_one({"_id": ObjectId(conversation_id)}) return jsonify(conversation['queries']) +@user.route("/api/update_conversation_name", methods=["POST"]) +def update_conversation_name(): + # update data for a conversation + data = request.get_json() + id = data["id"] + name = data["name"] + conversations_collection.update_one({"_id": ObjectId(id)},{"$set":{"name":name}}) + return {"status": "ok"} + @user.route("/api/feedback", methods=["POST"]) def api_feedback(): @@ -75,6 +84,19 @@ def api_feedback(): ) return {"status": http.client.responses.get(response.status_code, "ok")} +@user.route("/api/delete_by_ids", methods=["get"]) +def delete_by_ids(): + """Delete by ID. These are the IDs in the vectorstore""" + + ids = request.args.get("path") + if not ids: + return {"status": "error"} + + if settings.VECTOR_STORE == "faiss": + result = vectors_collection.delete_index(ids=ids) + if result: + return {"status": "ok"} + return {"status": "error"} @user.route("/api/delete_old", methods=["get"]) def delete_old(): diff --git a/application/requirements.txt b/application/requirements.txt index b4c712f44..9c60e4219 100644 --- a/application/requirements.txt +++ b/application/requirements.txt @@ -41,7 +41,7 @@ Jinja2==3.1.2 jmespath==1.0.1 joblib==1.2.0 kombu==5.2.4 -langchain==0.0.308 +langchain==0.0.312 loguru==0.6.0 lxml==4.9.2 MarkupSafe==2.1.2 @@ -104,3 +104,4 @@ urllib3==1.26.17 vine==5.0.0 wcwidth==0.2.6 yarl==1.8.2 +sentence-transformers==2.2.2 \ No newline at end of file diff --git a/application/vectorstore/faiss.py b/application/vectorstore/faiss.py index 217b04571..3a0a7b823 100644 --- a/application/vectorstore/faiss.py +++ b/application/vectorstore/faiss.py @@ -1,5 +1,5 @@ -from application.vectorstore.base import BaseVectorStore from langchain.vectorstores import FAISS +from application.vectorstore.base import BaseVectorStore from application.core.settings import settings class FaissStore(BaseVectorStore): @@ -7,20 +7,40 @@ class FaissStore(BaseVectorStore): def __init__(self, path, embeddings_key, docs_init=None): super().__init__() self.path = path + embeddings = self._get_embeddings(settings.EMBEDDINGS_NAME, embeddings_key) if docs_init: self.docsearch = FAISS.from_documents( - docs_init, self._get_embeddings(settings.EMBEDDINGS_NAME, embeddings_key) + docs_init, embeddings ) else: self.docsearch = FAISS.load_local( - self.path, self._get_embeddings(settings.EMBEDDINGS_NAME, settings.EMBEDDINGS_KEY) + self.path, embeddings ) + self.assert_embedding_dimensions(embeddings) def search(self, *args, **kwargs): return self.docsearch.similarity_search(*args, **kwargs) def add_texts(self, *args, **kwargs): return self.docsearch.add_texts(*args, **kwargs) - + def save_local(self, *args, **kwargs): return self.docsearch.save_local(*args, **kwargs) + + def delete_index(self, *args, **kwargs): + return self.docsearch.delete(*args, **kwargs) + + def assert_embedding_dimensions(self, embeddings): + """ + Check that the word embedding dimension 
of the docsearch index matches
+        the dimension of the word embeddings used
+        """
+        if settings.EMBEDDINGS_NAME == "huggingface_sentence-transformers/all-mpnet-base-v2":
+            try:
+                word_embedding_dimension = embeddings.client[1].word_embedding_dimension
+            except AttributeError as e:
+                raise AttributeError("word_embedding_dimension not found in embeddings.client[1]") from e
+            docsearch_index_dimension = self.docsearch.index.d
+            if word_embedding_dimension != docsearch_index_dimension:
+                raise ValueError(f"word_embedding_dimension ({word_embedding_dimension}) " +
+                                 f"!= docsearch_index_word_embedding_dimension ({docsearch_index_dimension})")
\ No newline at end of file
diff --git a/application/worker.py b/application/worker.py
index 5c87c7072..71fcd6158 100644
--- a/application/worker.py
+++ b/application/worker.py
@@ -21,8 +21,7 @@ def metadata_from_filename(title):
-    store = title.split('/')
-    store = store[1] + '/' + store[2]
+    store = '/'.join(title.split('/')[1:3])
     return {'title': title, 'store': store}
diff --git a/docs/README.md b/docs/README.md
index 5b0ef6a47..4e41a0de7 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -1 +1,53 @@
-# nextra-docsgpt
\ No newline at end of file
+# nextra-docsgpt
+
+## Setting Up Docs Folder of DocsGPT Locally
+
+### 1. Clone the DocsGPT repository:
+
+```
+git clone https://github.com/arc53/DocsGPT.git
+```
+
+### 2. Navigate to the docs folder:
+
+```
+cd DocsGPT/docs
+```
+
+The docs folder contains the markdown files that make up the documentation. The majority of the files are in the `pages` directory. Some notable files in this folder include:
+
+- `index.mdx`: The main documentation file.
+- `_app.js`: This file is used to customize the default Next.js application shell.
+- `theme.config.jsx`: This file is for configuring the Nextra theme for the documentation.
+
+### 3. Verify that you have Node.js and npm installed on your system. You can check by running:
+
+```
+node --version
+npm --version
+```
+
+### 4. If not installed, download Node.js and npm from the respective official websites.
+
+### 5. Once you have Node.js and npm running, proceed to install Yarn, another package manager that helps manage project dependencies:
+
+```
+npm install --global yarn
+```
+
+### 6. Install the project dependencies using yarn:
+
+```
+yarn install
+```
+
+### 7. After the successful installation of the project dependencies, start the local server:
+
+```
+yarn dev
+```
+
+- Now, you should be able to view the docs on your local environment by visiting `http://localhost:3000`. You can explore the different markdown files and make changes as you see fit.
+
+- **Footnotes:** This guide assumes you have Node.js and npm installed. The guide involves running a local server using yarn and previewing the documentation locally. If you encounter any issues, it may be worth verifying your Node.js and npm installations and whether you have installed yarn correctly.
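+
+For reference, `theme.config.jsx` follows the standard Nextra docs-theme configuration shape. The snippet below is an illustrative sketch only; the fields in the repository's actual file may differ:
+
+```jsx
+// theme.config.jsx - a minimal, hypothetical Nextra theme configuration.
+// Field names follow nextra-theme-docs conventions; adjust to match the real file.
+export default {
+  logo: <span>DocsGPT</span>,
+  project: {
+    link: 'https://github.com/arc53/DocsGPT', // repository link shown in the navbar
+  },
+  docsRepositoryBase: 'https://github.com/arc53/DocsGPT/tree/main/docs', // base for "edit this page" links
+};
+```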
diff --git a/docs/package.json b/docs/package.json
index 1585c0a39..c74059739 100644
--- a/docs/package.json
+++ b/docs/package.json
@@ -1,4 +1,10 @@
{
+  "scripts": {
+    "dev": "next dev",
+    "build": "next build",
+    "start": "next start"
+  },
+  "license": "MIT",
"dependencies": {
"@vercel/analytics": "^1.0.2",
"docsgpt": "^0.2.4",
@@ -8,4 +14,4 @@
"react": "^18.2.0",
"react-dom": "^18.2.0"
}
-}
+}
\ No newline at end of file
diff --git a/docs/pages/Deploying/Hosting-the-app.md b/docs/pages/Deploying/Hosting-the-app.md
index 13296b49f..31c3f55a3 100644
--- a/docs/pages/Deploying/Hosting-the-app.md
+++ b/docs/pages/Deploying/Hosting-the-app.md
@@ -4,39 +4,32 @@ Here's a step-by-step guide on how to set up an Amazon Lightsail instance to host

## Configuring your instance

-(If you know how to create a Lightsail instance, you can skip to the recommended configuration part by clicking here).
+(If you know how to create a Lightsail instance, you can skip ahead to connecting to your instance by clicking [here](#connecting-to-your-newly-created-instance)).

-### 1. Create an account or login to https://lightsail.aws.amazon.com
+### 1. Create an AWS Account:
+If you haven't already, create or log in to your AWS account at https://lightsail.aws.amazon.com.

-### 2. Click on "Create instance"
+### 2. Create an Instance:

-### 3. Create your instance
+a. Click "Create Instance."

-The first step is to select the "Instance location". In most cases, there's no need to switch locations as the default one will work well.
+b. Select the "Instance location." In most cases, the default location works fine.

-After that, it is time to pick your Instance Image. We recommend using "Linux/Unix" as the image and "Ubuntu 20.04 LTS" as the Operating System.
+c. Choose "Linux/Unix" as the image and "Ubuntu 20.04 LTS" as the Operating System.

-As for instance plan, it'll vary depending on your unique demands, but a "1 GB, 1vCPU, 40GB SSD and 2TB transfer" setup should cover most scenarios.
+d. Configure the instance plan based on your requirements. A "1 GB, 1vCPU, 40GB SSD, and 2TB transfer" setup is recommended for most scenarios.

-Lastly, identify your instance by giving it a unique name and then hit "Create instance".
+e. Give your instance a unique name and click "Create Instance."

-PS: Once you create your instance, it'll likely take a few minutes for the setup to be completed.
+PS: It may take a few minutes for the instance setup to complete.

-#### The recommended configuration is as follows:
+### Connecting to Your Newly Created Instance

-- Ubuntu 20.04 LTS
-- 1GB RAM
-- 1vCPU
-- 40GB SSD Hard Drive
-- 2TB transfer
+Your instance will be ready a few minutes after creation. To access it, open the instance and click "Connect using SSH."

-### Connecting to your newly created instance
+#### Clone the DocsGPT Repository

-Your instance will be ready for use a few minutes after being created. To access it, just open it up and click on "Connect using SSH".
-
-#### Clone the repository
-
-A terminal window will pop up, and the first step will be to clone the DocsGPT git repository:
+A terminal window will pop up, and the first step will be to clone the DocsGPT Git repository:

`git clone https://github.com/arc53/DocsGPT.git`

@@ -56,15 +49,15 @@ And now install docker-compose:

`sudo apt install docker-compose`

-#### Access the DocsGPT folder
+#### Access the DocsGPT Folder

-Enter the following command to access the folder in which DocsGPT docker-compose file is present. 
+Enter the following command to access the folder in which the DocsGPT docker-compose file is present.

`cd DocsGPT/`

-#### Prepare the environment
+#### Prepare the Environment

-Inside the DocsGPT folder, create a `.env` file and copy the contents of `.env_sample` into it.
+Inside the DocsGPT folder, create a `.env` file and copy the contents of `.env_sample` into it.

`nano .env`

@@ -78,16 +71,16 @@ SELF_HOSTED_MODEL=false

To save the file, press CTRL+X, then Y, and then ENTER.

-Next, we need to set a correct IP for our Backend. To do so, open the docker-compose.yml file:
+Next, set the correct IP for the Backend by opening the docker-compose.yml file:

`nano docker-compose.yml`

-And change this line 7 `VITE_API_HOST=http://localhost:7091`
+Change line 7 from `VITE_API_HOST=http://localhost:7091`

to this: `VITE_API_HOST=http://<your instance public IP>:7091`

This will allow the frontend to connect to the backend.

-#### Running the app
+#### Running the Application

You're almost there! Now that all the necessary bits and pieces have been installed, it is time to run the application. To do so, use the following command:

@@ -95,18 +88,21 @@ You're almost there! Now that all the necessary bits and pieces have been instal

Launching it for the first time will take a few minutes to download all the necessary dependencies and build.

-Once this is done, you can go ahead and close the terminal window.
+Once this is done, you can go ahead and close the terminal window.

-#### Enabling ports
+#### Enabling Ports

-Before you are able to access your live instance, you must first enable the port that it is using.
+a. Before you are able to access your live instance, you must first enable the port that it is using.

-Open your Lightsail instance and head to "Networking".
+b. Open your Lightsail instance and head to "Networking".

-Then click on "Add rule" under "IPv4 Firewall", enter `5173` as your port, and hit "Create".
+c. Then click on "Add rule" under "IPv4 Firewall", enter `5173` as your port, and hit "Create".

Repeat the process for port `7091`.

#### Access your instance

-Your instance will now be available under your Public IP Address and port `5173`. Enjoy!
+Your instance is now available at your Public IP Address on port `5173`. Enjoy using DocsGPT!
+
+## Other Deployment Options
+- [Deploy DocsGPT on Civo Compute Cloud](https://dev.to/rutamhere/deploying-docsgpt-on-civo-compute-c)
diff --git a/docs/pages/Deploying/Quickstart.md b/docs/pages/Deploying/Quickstart.md
index 5ed37a5b4..411e3234c 100644
--- a/docs/pages/Deploying/Quickstart.md
+++ b/docs/pages/Deploying/Quickstart.md
@@ -1,24 +1,107 @@
## Launching Web App
-Note: Make sure you have Docker installed
+**Note**: Make sure you have Docker installed

-On macOS or Linux, just write:
+**On macOS or Linux:**
+Just run the following command:

`./setup.sh`

-It will install all the dependencies and give you an option to download the local model or use OpenAI
+This command will install all the necessary dependencies and provide you with an option to download the local model or use OpenAI.

-Otherwise, refer to this Guide:
+If you prefer to follow manual steps, refer to this guide:

-1. Open and download this repository with `git clone https://github.com/arc53/DocsGPT.git`.
+1. Open and download this repository with
+`git clone https://github.com/arc53/DocsGPT.git`.
2. Create a `.env` file in your root directory and set your `API_KEY` with your [OpenAI API key](https://platform.openai.com/account/api-keys).
-3. Run `docker-compose build && docker-compose up`.
+3. 
Run the following commands:
+`docker-compose build && docker-compose up`.
4. Navigate to `http://localhost:5173/`.

-To stop, just run `Ctrl + C`.
+To stop, simply press `Ctrl + C`.
+
+**For WINDOWS:**
+
+To run the setup on Windows, you have two options: using the Windows Subsystem for Linux (WSL) or using Git Bash or Command Prompt.
+
+**Option 1: Using Windows Subsystem for Linux (WSL):**
+
+1. Install WSL if you haven't already. You can follow the official Microsoft documentation for installation: (https://learn.microsoft.com/en-us/windows/wsl/install).
+2. After setting up WSL, open the WSL terminal.
+3. Clone the repository and create the `.env` file:
+   ```
+   git clone https://github.com/arc53/DocsGPT.git
+   cd DocsGPT
+   echo "API_KEY=Yourkey" > .env
+   echo "VITE_API_STREAMING=true" >> .env
+   ```
+4. Run the following command to start the setup with Docker Compose:
+   `./run-with-docker-compose.sh`
+5. Open your web browser and navigate to (http://localhost:5173/).
+6. To stop the setup, just press `Ctrl + C` in the WSL terminal.
+
+**Option 2: Using Git Bash or Command Prompt (CMD):**
+
+1. Install Git for Windows if you haven't already. Download it from the official website: (https://gitforwindows.org/).
+2. Open Git Bash or Command Prompt.
+3. Clone the repository and create the `.env` file:
+   ```
+   git clone https://github.com/arc53/DocsGPT.git
+   cd DocsGPT
+   echo "API_KEY=Yourkey" > .env
+   echo "VITE_API_STREAMING=true" >> .env
+   ```
+4. Run the following command to start the setup with Docker Compose:
+   `./run-with-docker-compose.sh`
+5. Open your web browser and navigate to (http://localhost:5173/).
+6. To stop the setup, just press `Ctrl + C` in the Git Bash or Command Prompt terminal.
+
+These steps should help you set up and run the project on Windows using either WSL or Git Bash/Command Prompt.
+**Important:** Ensure that Docker is installed and properly configured on your Windows system for these steps to work.
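+
+If you are not sure whether Docker is set up correctly, a quick sanity check on any platform is to run the standard Docker CLI commands below before starting the setup (these are plain Docker commands, not DocsGPT-specific scripts):
+
+```
+docker --version
+docker-compose --version
+docker run hello-world
+```
+
+If the `hello-world` container runs, Docker is working and the steps above should go through.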
+
### Chrome Extension

-To install the Chrome extension:
+#### Installing the Chrome extension:
+To enhance your DocsGPT experience, you can install the DocsGPT Chrome extension. Here's how:

1. In the DocsGPT GitHub repository, click on the "Code" button and select "Download ZIP".
2. Unzip the downloaded file to a location you can easily access.
diff --git a/docs/pages/Developing/API-docs.md b/docs/pages/Developing/API-docs.md
index 099a94261..09e4f875d 100644
--- a/docs/pages/Developing/API-docs.md
+++ b/docs/pages/Developing/API-docs.md
@@ -1,9 +1,25 @@
-Currently, the application provides the following main API endpoints:
+# API Endpoints Documentation

-### /api/answer
-It's a POST request that sends a JSON in body with 4 values. It will receive an answer for a user provided question.
-Here is a JavaScript fetch example:
+*Currently, the application provides the following main API endpoints:*
+
+### 1. /api/answer
+**Description:**
+
+This endpoint is used to request answers to user-provided questions.
+
+**Request:**
+
+Method: POST
+Headers: Content-Type should be set to "application/json; charset=utf-8"
+Request Body: JSON object with the following fields:
+* **question:** The user's question
+* **history:** (Optional) Previous conversation history
+* **api_key:** Your API key
+* **embeddings_key:** Your embeddings key
+* **active_docs:** The location of active documentation
+
+Here is a JavaScript Fetch Request example:
```js
// answer (POST http://127.0.0.1:5000/api/answer)
fetch("http://127.0.0.1:5000/api/answer", {
@@ -18,8 +34,9 @@ fetch("http://127.0.0.1:5000/api/answer", {
  .then(console.log.bind(console))
```

-In response, you will get a JSON document like this one:
+**Response:**

+In response, you will get a JSON document containing the answer, query, and result:
```json
{
  "answer": " Hi there! How can I help you?\n",
@@ -28,10 +45,17 @@ In response, you will get a JSON document like this one:
}
```

-### /api/docs_check
-It will make sure documentation is loaded on a server (just run it every time user is switching between libraries (documentations)).
-It's a POST request that sends a JSON in a body with 1 value. Here is a JavaScript fetch example:
+### 2. /api/docs_check
+**Description:**
+
+This endpoint will make sure documentation is loaded on the server (run it every time the user switches between libraries (documentations)).
+
+**Request:**
+
+Method: POST
+Headers: Content-Type should be set to "application/json; charset=utf-8"
+Request Body: JSON object with the field:
+* **docs:** The location of the documentation
```js
// answer (POST http://127.0.0.1:5000/api/docs_check)
fetch("http://127.0.0.1:5000/api/docs_check", {
@@ -45,7 +69,9 @@ fetch("http://127.0.0.1:5000/api/docs_check", {
  .then(console.log.bind(console))
```

-In response, you will get a JSON document like this one:
+**Response:**
+
+In response, you will get a JSON document like this one, indicating whether the documentation exists or not:
```json
{
  "status": "exists"
}
```

-### /api/combine
-Provides JSON that tells UI which vectors are available and where they are located with a simple get request.
+### 3. 
/api/combine
+**Description:**
+
+This endpoint provides information about available vectors and their locations with a simple GET request.
+
+**Request:**
+
+Method: GET
+
+**Response:**
+
+Response will include: `date`, `description`, `docLink`, `fullName`, `language`, `location` (local or docshub), `model`, `name`, `version`.
+
+Example of the JSON response for both DocsHub and local locations:
+
+*(example response screenshot omitted)*

-### /api/upload
-Uploads file that needs to be trained, response is JSON with task ID, which can be used to check on task's progress
+### 4. /api/upload
+**Description:**
+
+This endpoint is used to upload a file that needs to be trained; the response is JSON with a task ID, which can be used to check the task's progress.
+
+**Request:**
+
+Method: POST
+Request Body: A multipart/form-data form with file upload and additional fields, including "user" and "name."
+
HTML example:

```html

```

-Response:
-```json
-{
-  "status": "ok",
-  "task_id": "b2684988-9047-428b-bd47-08518679103c"
-}
+**Response:**

-```
+JSON response with a status and a task ID that can be used to check the task's progress.
+
+### 5. /api/task_status
+**Description:**
+
+This endpoint is used to get the status of a task (`task_id`) from `/api/upload`.
+
+**Request:**
+Method: GET
+Query Parameter: task_id (task ID to check)

-### /api/task_status
-Gets task status (`task_id`) from `/api/upload`:
+**Sample JavaScript Fetch Request:**
```js
// Task status (GET http://127.0.0.1:5000/api/task_status)
-fetch("http://localhost:5001/api/task_status?task_id=b2d2a0f4-387c-44fd-a443-e4fe2e7454d1", {
+fetch("http://localhost:5001/api/task_status?task_id=YOUR_TASK_ID", {
  "method": "GET",
  "headers": {
    "Content-Type": "application/json; charset=utf-8"
  },
})
  .then((res) => res.text())
  .then(console.log.bind(console))
```

-Responses:
+**Response:**
+
There are two types of responses:
+
+1. While the task is still running, the 'current' value will show progress from 0 to 100.
+
```json
{
  "result": {
@@ -132,8 +183,14 @@ There are two types of responses:
  }
}
```

-### /api/delete_old
-Deletes old Vector stores:
+### 6. /api/delete_old
+**Description:**
+
+This endpoint is used to delete old Vector Stores.
+
+**Request:**
+
+Method: GET
```js
// Delete old vector stores (GET http://127.0.0.1:5000/api/delete_old)
fetch("http://localhost:5001/api/delete_old", {
  "method": "GET",
  "headers": {
    "Content-Type": "application/json; charset=utf-8"
  },
})
  .then((res) => res.text())
  .then(console.log.bind(console))
```
+**Response:**
+
+JSON response indicating the status of the operation.
```json
{
  "status": "ok"
}
```
diff --git a/docs/pages/Extensions/Chatwoot-extension.md b/docs/pages/Extensions/Chatwoot-extension.md
index e95891a4f..f9db022b4 100644
--- a/docs/pages/Extensions/Chatwoot-extension.md
+++ b/docs/pages/Extensions/Chatwoot-extension.md
@@ -1,29 +1,42 @@
-### To start chatwoot extension:
-1. Prepare and start the DocsGPT itself (load your documentation too). Follow our [wiki](https://github.com/arc53/DocsGPT/wiki) to start it and to [ingest](https://github.com/arc53/DocsGPT/wiki/How-to-train-on-other-documentation) data.
-2. Go to chatwoot, **Navigate** to your profile (bottom left), click on profile settings, scroll to the bottom and copy **Access Token**.
-3. Navigate to `/extensions/chatwoot`. 
Copy `.env_sample` and create `.env` file. -4. Fill in the values. +### To Start Chatwoot Extension: -``` -docsgpt_url= -chatwoot_url= -docsgpt_key= -chatwoot_token= -``` +1. **Prepare and Start DocsGPT:** + - Launch DocsGPT using the instructions in our [wiki](https://github.com/arc53/DocsGPT/wiki). + - Make sure to load your documentation. -5. Start with `flask run` command. +2. **Get Access Token from Chatwoot:** + - Navigate to Chatwoot. + - Go to your profile (bottom left), click on profile settings. + - Scroll to the bottom and copy the **Access Token**. -If you want for bot to stop responding to questions for a specific user or session, just add a label `human-requested` in your conversation. +3. **Set Up Chatwoot Extension:** + - Navigate to `/extensions/chatwoot`. + - Copy `.env_sample` and create a `.env` file. + - Fill in the values in the `.env` file: + ```env + docsgpt_url= + chatwoot_url= + docsgpt_key= + chatwoot_token= + ``` -### Optional (extra validation) -In `app.py` uncomment lines 12-13 and 71-75 +4. **Start the Extension:** + - Use the command `flask run` to start the extension. -in your `.env` file add: +5. **Optional: Extra Validation** + - In `app.py`, uncomment lines 12-13 and 71-75. + - Add the following lines to your `.env` file: -``` -account_id=(optional) 1 -assignee_id=(optional) 1 -``` + ```env + account_id=(optional) 1 + assignee_id=(optional) 1 + ``` -Those are chatwoot values and will allow you to check if you are responding to correct widget and responding to questions assigned to specific user. + These Chatwoot values help ensure you respond to the correct widget and handle questions assigned to a specific user. + +### Stopping Bot Responses for Specific User or Session: +- If you want the bot to stop responding to questions for a specific user or session, add a label `human-requested` in your conversation. + +### Additional Notes: +- For further details on training on other documentation, refer to our [wiki](https://github.com/arc53/DocsGPT/wiki/How-to-train-on-other-documentation). diff --git a/docs/pages/Extensions/react-widget.md b/docs/pages/Extensions/react-widget.md index 1cc11321b..a31306a26 100644 --- a/docs/pages/Extensions/react-widget.md +++ b/docs/pages/Extensions/react-widget.md @@ -14,9 +14,9 @@ import "docsgpt/dist/style.css"; Then you can use it like this: `` DocsGPTWidget takes 3 props: -- `apiHost` β€” URL of your DocsGPT API. -- `selectDocs` β€” documentation that you want to use for your widget (e.g. `default` or `local/docs1.zip`). -- `apiKey` β€” usually it's empty. +1. `apiHost` β€” URL of your DocsGPT API. +2. `selectDocs` β€” documentation that you want to use for your widget (e.g. `default` or `local/docs1.zip`). +3. `apiKey` β€” usually it's empty. ### How to use DocsGPTWidget with [Nextra](https://nextra.site/) (Next.js + MDX) Install your widget as described above and then go to your `pages/` folder and create a new file `_app.js` with the following content: diff --git a/docs/pages/Guides/Customising-prompts.md b/docs/pages/Guides/Customising-prompts.md index 19dcdefda..6cfbbff76 100644 --- a/docs/pages/Guides/Customising-prompts.md +++ b/docs/pages/Guides/Customising-prompts.md @@ -1,4 +1,27 @@ -## To customize a main prompt, navigate to `/application/prompt/combine_prompt.txt` +# Customizing the Main Prompt -You can try editing it to see how the model responses. +To customize the main prompt for DocsGPT, follow these steps: + +1. Navigate to `/application/prompt/combine_prompt.txt`. + +2. 
Edit the `combine_prompt.txt` file to modify the prompt text. You can experiment with different phrasings and structures to see how the model responds.
+
+## Example Prompt Modification
+
+**Original Prompt:**
+```markdown
+You are a DocsGPT, friendly and helpful AI assistant by Arc53 that provides help with documents. You give thorough answers with code examples if possible.
+Use the following pieces of context to help answer the users question. If its not relevant to the question, provide friendly responses.
+You have access to chat history, and can use it to help answer the question.
+When using code examples, use the following format:
+
+(code)
+{summaries}
+```
+
+## Conclusion
+
+Customizing the main prompt for DocsGPT allows you to tailor the AI's responses to your unique requirements. Whether you need in-depth explanations, code examples, or specific insights, you can achieve it by modifying the main prompt. Remember to experiment and fine-tune your prompts to get the best results.
diff --git a/docs/pages/Guides/How-to-train-on-other-documentation.md b/docs/pages/Guides/How-to-train-on-other-documentation.md
index 2e8e4afab..aa1ff41d1 100644
--- a/docs/pages/Guides/How-to-train-on-other-documentation.md
+++ b/docs/pages/Guides/How-to-train-on-other-documentation.md
@@ -12,28 +12,28 @@ It currently uses OPEN_AI to create the vector store, so make sure your document

You can usually find documentation on Github in `docs/` folder for most open-source projects.

### 1. Find documentation in .rst/.md and create a folder with it in your scripts directory
-- Name it `inputs/`
-- Put all your .rst/.md files in there
-- The search is recursive, so you don't need to flatten them
+- Name it `inputs/`.
+- Put all your .rst/.md files in there.
+- The search is recursive, so you don't need to flatten them.

-If there are no .rst/.md files just convert whatever you find to .txt and feed it. (don't forget to change the extension in script)
+If there are no .rst/.md files, just convert whatever you find to a .txt file and feed it (don't forget to change the extension in the script).

### 2. Create .env file in `scripts/` folder
And write your OpenAI API key inside
-`OPENAI_API_KEY=`
+`OPENAI_API_KEY=`.

### 3. Run scripts/ingest.py

`python ingest.py ingest`

-It will tell you how much it will cost
+It will tell you how much it will cost.

### 4. Move `index.faiss` and `index.pkl` generated in `scripts/output` to `application/` folder.

### 5. Run web app
-Once you run it will use new context that is relevant to your documentation
-Make sure you select default in the dropdown in the UI
+Once you run it, it will use the new context that is relevant to your documentation.
+Make sure you select `default` in the dropdown in the UI.

## Customization
You can learn more about options while running ingest.py by running:
diff --git a/docs/pages/Guides/How-to-use-different-LLM.md b/docs/pages/Guides/How-to-use-different-LLM.md
index c0245a15f..8d7ccccec 100644
--- a/docs/pages/Guides/How-to-use-different-LLM.md
+++ b/docs/pages/Guides/How-to-use-different-LLM.md
@@ -1,10 +1,10 @@
-Fortunately, there are many providers for LLM's and some of them can even be run locally
+Fortunately, there are many providers for LLMs, and some of them can even be run locally.

There are two models used in the app:
1. Embeddings.
2. Text generation.

-By default, we use OpenAI's models but if you want to change it or even run it locally, it's very simple! 
+By default, we use OpenAI's models, but if you want to change it or even run it locally, it's very simple!

### Go to .env file or set environment variables:

@@ -21,12 +21,16 @@ By default, we use OpenAI's models but if you want to change it or even run it l

You don't need to provide keys if you are happy with users providing theirs, so make sure you set `LLM_NAME` and `EMBEDDINGS_NAME`.

Options:
-LLM_NAME (openai, manifest, cohere, Arc53/docsgpt-14b, Arc53/docsgpt-7b-falcon)
+LLM_NAME (openai, manifest, cohere, Arc53/docsgpt-14b, Arc53/docsgpt-7b-falcon, llama.cpp)
EMBEDDINGS_NAME (openai_text-embedding-ada-002, huggingface_sentence-transformers/all-mpnet-base-v2, huggingface_hkunlp/instructor-large, cohere_medium)

+If using Llama, set the `EMBEDDINGS_NAME` to `huggingface_sentence-transformers/all-mpnet-base-v2` and be sure to download [this model](https://d3dg1063dc54p9.cloudfront.net/models/docsgpt-7b-f16.gguf) into the `models/` folder.
+
+Alternatively, if you wish to run Llama locally, you can run `setup.sh` and choose option 1 when prompted. You do not need to manually add the DocsGPT model mentioned above to your `models/` folder if you use `setup.sh`, as the script will manage that step for you.
+
That's it!

### Hosting everything locally and privately (for using our optimised open-source models)

-If you are working with important data and don't want anything to leave your premises.
+If you are working with critical data and don't want anything to leave your premises, this setup is for you.

-Make sure you set `SELF_HOSTED_MODEL` as true in your `.env` variable and for your `LLM_NAME` you can use anything that's on Hugging Face.
+Make sure you set `SELF_HOSTED_MODEL` to true in your `.env` file, and for your `LLM_NAME`, you can use anything that is on Hugging Face.
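+
+Putting the variables above together, a fully self-hosted `.env` might look like this (illustrative values only; pick the `LLM_NAME` and `EMBEDDINGS_NAME` options from the lists above that fit your setup):
+
+```
+LLM_NAME=Arc53/docsgpt-7b-falcon
+EMBEDDINGS_NAME=huggingface_sentence-transformers/all-mpnet-base-v2
+SELF_HOSTED_MODEL=true
+```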
diff --git a/frontend/package-lock.json b/frontend/package-lock.json index 7c08dfb73..5fed84513 100644 --- a/frontend/package-lock.json +++ b/frontend/package-lock.json @@ -61,12 +61,13 @@ } }, "node_modules/@babel/code-frame": { - "version": "7.18.6", - "resolved": "https://registry.npmjs.org/@babel/code-frame/-/code-frame-7.18.6.tgz", - "integrity": "sha512-TDCmlK5eOvH+eH7cdAFlNXeVJqWIQ7gW9tY1GJIpUtFb6CmjVyq2VM3u71bOyR8CRihcCgMUYoDNyLXao3+70Q==", + "version": "7.22.13", + "resolved": "https://registry.npmjs.org/@babel/code-frame/-/code-frame-7.22.13.tgz", + "integrity": "sha512-XktuhWlJ5g+3TJXc5upd9Ks1HutSArik6jf2eAjYFyIOf4ej3RN+184cZbzDvbPnuTJIUhPKKJE3cIsYTiAT3w==", "dev": true, "dependencies": { - "@babel/highlight": "^7.18.6" + "@babel/highlight": "^7.22.13", + "chalk": "^2.4.2" }, "engines": { "node": ">=6.9.0" @@ -112,13 +113,14 @@ } }, "node_modules/@babel/generator": { - "version": "7.20.14", - "resolved": "https://registry.npmjs.org/@babel/generator/-/generator-7.20.14.tgz", - "integrity": "sha512-AEmuXHdcD3A52HHXxaTmYlb8q/xMEhoRP67B3T4Oq7lbmSoqroMZzjnGj3+i1io3pdnF8iBYVu4Ilj+c4hBxYg==", + "version": "7.23.0", + "resolved": "https://registry.npmjs.org/@babel/generator/-/generator-7.23.0.tgz", + "integrity": "sha512-lN85QRR+5IbYrMWM6Y4pE/noaQtg4pNiqeNGX60eqOfo6gtEj6uw/JagelB8vVztSd7R6M5n1+PQkDbHbBRU4g==", "dev": true, "dependencies": { - "@babel/types": "^7.20.7", + "@babel/types": "^7.23.0", "@jridgewell/gen-mapping": "^0.3.2", + "@jridgewell/trace-mapping": "^0.3.17", "jsesc": "^2.5.1" }, "engines": { @@ -159,34 +161,34 @@ } }, "node_modules/@babel/helper-environment-visitor": { - "version": "7.18.9", - "resolved": "https://registry.npmjs.org/@babel/helper-environment-visitor/-/helper-environment-visitor-7.18.9.tgz", - "integrity": "sha512-3r/aACDJ3fhQ/EVgFy0hpj8oHyHpQc+LPtJoY9SzTThAsStm4Ptegq92vqKoE3vD706ZVFWITnMnxucw+S9Ipg==", + "version": "7.22.20", + "resolved": "https://registry.npmjs.org/@babel/helper-environment-visitor/-/helper-environment-visitor-7.22.20.tgz", + "integrity": "sha512-zfedSIzFhat/gFhWfHtgWvlec0nqB9YEIVrpuwjruLlXfUSnA8cJB0miHKwqDnQ7d32aKo2xt88/xZptwxbfhA==", "dev": true, "engines": { "node": ">=6.9.0" } }, "node_modules/@babel/helper-function-name": { - "version": "7.19.0", - "resolved": "https://registry.npmjs.org/@babel/helper-function-name/-/helper-function-name-7.19.0.tgz", - "integrity": "sha512-WAwHBINyrpqywkUH0nTnNgI5ina5TFn85HKS0pbPDfxFfhyR/aNQEn4hGi1P1JyT//I0t4OgXUlofzWILRvS5w==", + "version": "7.23.0", + "resolved": "https://registry.npmjs.org/@babel/helper-function-name/-/helper-function-name-7.23.0.tgz", + "integrity": "sha512-OErEqsrxjZTJciZ4Oo+eoZqeW9UIiOcuYKRJA4ZAgV9myA+pOXhhmpfNCKjEH/auVfEYVFJ6y1Tc4r0eIApqiw==", "dev": true, "dependencies": { - "@babel/template": "^7.18.10", - "@babel/types": "^7.19.0" + "@babel/template": "^7.22.15", + "@babel/types": "^7.23.0" }, "engines": { "node": ">=6.9.0" } }, "node_modules/@babel/helper-hoist-variables": { - "version": "7.18.6", - "resolved": "https://registry.npmjs.org/@babel/helper-hoist-variables/-/helper-hoist-variables-7.18.6.tgz", - "integrity": "sha512-UlJQPkFqFULIcyW5sbzgbkxn2FKRgwWiRexcuaR8RNJRy8+LLveqPjwZV/bwrLZCN0eUHD/x8D0heK1ozuoo6Q==", + "version": "7.22.5", + "resolved": "https://registry.npmjs.org/@babel/helper-hoist-variables/-/helper-hoist-variables-7.22.5.tgz", + "integrity": "sha512-wGjk9QZVzvknA6yKIUURb8zY3grXCcOZt+/7Wcy8O2uctxhplmUPkOdlgoNhmdVee2c92JXbf1xpMtVNbfoxRw==", "dev": true, "dependencies": { - "@babel/types": "^7.18.6" + "@babel/types": "^7.22.5" }, "engines": { "node": 
">=6.9.0" @@ -245,30 +247,30 @@ } }, "node_modules/@babel/helper-split-export-declaration": { - "version": "7.18.6", - "resolved": "https://registry.npmjs.org/@babel/helper-split-export-declaration/-/helper-split-export-declaration-7.18.6.tgz", - "integrity": "sha512-bde1etTx6ZyTmobl9LLMMQsaizFVZrquTEHOqKeQESMKo4PlObf+8+JA25ZsIpZhT/WEd39+vOdLXAFG/nELpA==", + "version": "7.22.6", + "resolved": "https://registry.npmjs.org/@babel/helper-split-export-declaration/-/helper-split-export-declaration-7.22.6.tgz", + "integrity": "sha512-AsUnxuLhRYsisFiaJwvp1QF+I3KjD5FOxut14q/GzovUe6orHLesW2C7d754kRm53h5gqrz6sFl6sxc4BVtE/g==", "dev": true, "dependencies": { - "@babel/types": "^7.18.6" + "@babel/types": "^7.22.5" }, "engines": { "node": ">=6.9.0" } }, "node_modules/@babel/helper-string-parser": { - "version": "7.19.4", - "resolved": "https://registry.npmjs.org/@babel/helper-string-parser/-/helper-string-parser-7.19.4.tgz", - "integrity": "sha512-nHtDoQcuqFmwYNYPz3Rah5ph2p8PFeFCsZk9A/48dPc/rGocJ5J3hAAZ7pb76VWX3fZKu+uEr/FhH5jLx7umrw==", + "version": "7.22.5", + "resolved": "https://registry.npmjs.org/@babel/helper-string-parser/-/helper-string-parser-7.22.5.tgz", + "integrity": "sha512-mM4COjgZox8U+JcXQwPijIZLElkgEpO5rsERVDJTc2qfCDfERyob6k5WegS14SX18IIjv+XD+GrqNumY5JRCDw==", "dev": true, "engines": { "node": ">=6.9.0" } }, "node_modules/@babel/helper-validator-identifier": { - "version": "7.19.1", - "resolved": "https://registry.npmjs.org/@babel/helper-validator-identifier/-/helper-validator-identifier-7.19.1.tgz", - "integrity": "sha512-awrNfaMtnHUr653GgGEs++LlAvW6w+DcPrOliSMXWCKo597CwL5Acf/wWdNkf/tfEQE3mjkeD1YOVZOUV/od1w==", + "version": "7.22.20", + "resolved": "https://registry.npmjs.org/@babel/helper-validator-identifier/-/helper-validator-identifier-7.22.20.tgz", + "integrity": "sha512-Y4OZ+ytlatR8AI+8KZfKuL5urKp7qey08ha31L8b3BwewJAoJamTzyvxPR/5D+KkdJCGPq/+8TukHBlY10FX9A==", "dev": true, "engines": { "node": ">=6.9.0" @@ -298,13 +300,13 @@ } }, "node_modules/@babel/highlight": { - "version": "7.18.6", - "resolved": "https://registry.npmjs.org/@babel/highlight/-/highlight-7.18.6.tgz", - "integrity": "sha512-u7stbOuYjaPezCuLj29hNW1v64M2Md2qupEKP1fHc7WdOA3DgLh37suiSrZYY7haUB7iBeQZ9P1uiRF359do3g==", + "version": "7.22.20", + "resolved": "https://registry.npmjs.org/@babel/highlight/-/highlight-7.22.20.tgz", + "integrity": "sha512-dkdMCN3py0+ksCgYmGG8jKeGA/8Tk+gJwSYYlFGxG5lmhfKNoAy004YpLxpS1W2J8m/EK2Ew+yOs9pVRwO89mg==", "dev": true, "dependencies": { - "@babel/helper-validator-identifier": "^7.18.6", - "chalk": "^2.0.0", + "@babel/helper-validator-identifier": "^7.22.20", + "chalk": "^2.4.2", "js-tokens": "^4.0.0" }, "engines": { @@ -312,9 +314,9 @@ } }, "node_modules/@babel/parser": { - "version": "7.20.15", - "resolved": "https://registry.npmjs.org/@babel/parser/-/parser-7.20.15.tgz", - "integrity": "sha512-DI4a1oZuf8wC+oAJA9RW6ga3Zbe8RZFt7kD9i4qAspz3I/yHet1VvC3DiSy/fsUvv5pvJuNPh0LPOdCcqinDPg==", + "version": "7.23.0", + "resolved": "https://registry.npmjs.org/@babel/parser/-/parser-7.23.0.tgz", + "integrity": "sha512-vvPKKdMemU85V9WE/l5wZEmImpCtLqbnTvqDS2U1fJ96KrxoW7KrXhNsNCblQlg8Ck4b85yxdTyelsMUgFUXiw==", "dev": true, "bin": { "parser": "bin/babel-parser.js" @@ -365,33 +367,33 @@ } }, "node_modules/@babel/template": { - "version": "7.20.7", - "resolved": "https://registry.npmjs.org/@babel/template/-/template-7.20.7.tgz", - "integrity": "sha512-8SegXApWe6VoNw0r9JHpSteLKTpTiLZ4rMlGIm9JQ18KiCtyQiAMEazujAHrUS5flrcqYZa75ukev3P6QmUwUw==", + "version": "7.22.15", + "resolved": 
"https://registry.npmjs.org/@babel/template/-/template-7.22.15.tgz", + "integrity": "sha512-QPErUVm4uyJa60rkI73qneDacvdvzxshT3kksGqlGWYdOTIUOwJ7RDUL8sGqslY1uXWSL6xMFKEXDS3ox2uF0w==", "dev": true, "dependencies": { - "@babel/code-frame": "^7.18.6", - "@babel/parser": "^7.20.7", - "@babel/types": "^7.20.7" + "@babel/code-frame": "^7.22.13", + "@babel/parser": "^7.22.15", + "@babel/types": "^7.22.15" }, "engines": { "node": ">=6.9.0" } }, "node_modules/@babel/traverse": { - "version": "7.20.13", - "resolved": "https://registry.npmjs.org/@babel/traverse/-/traverse-7.20.13.tgz", - "integrity": "sha512-kMJXfF0T6DIS9E8cgdLCSAL+cuCK+YEZHWiLK0SXpTo8YRj5lpJu3CDNKiIBCne4m9hhTIqUg6SYTAI39tAiVQ==", - "dev": true, - "dependencies": { - "@babel/code-frame": "^7.18.6", - "@babel/generator": "^7.20.7", - "@babel/helper-environment-visitor": "^7.18.9", - "@babel/helper-function-name": "^7.19.0", - "@babel/helper-hoist-variables": "^7.18.6", - "@babel/helper-split-export-declaration": "^7.18.6", - "@babel/parser": "^7.20.13", - "@babel/types": "^7.20.7", + "version": "7.23.2", + "resolved": "https://registry.npmjs.org/@babel/traverse/-/traverse-7.23.2.tgz", + "integrity": "sha512-azpe59SQ48qG6nu2CzcMLbxUudtN+dOM9kDbUqGq3HXUJRlo7i8fvPoxQUzYgLZ4cMVmuZgm8vvBpNeRhd6XSw==", + "dev": true, + "dependencies": { + "@babel/code-frame": "^7.22.13", + "@babel/generator": "^7.23.0", + "@babel/helper-environment-visitor": "^7.22.20", + "@babel/helper-function-name": "^7.23.0", + "@babel/helper-hoist-variables": "^7.22.5", + "@babel/helper-split-export-declaration": "^7.22.6", + "@babel/parser": "^7.23.0", + "@babel/types": "^7.23.0", "debug": "^4.1.0", "globals": "^11.1.0" }, @@ -400,13 +402,13 @@ } }, "node_modules/@babel/types": { - "version": "7.20.7", - "resolved": "https://registry.npmjs.org/@babel/types/-/types-7.20.7.tgz", - "integrity": "sha512-69OnhBxSSgK0OzTJai4kyPDiKTIe3j+ctaHdIGVbRahTLAT7L3R9oeXHC2aVSuGYt3cVnoAMDmOCgJ2yaiLMvg==", + "version": "7.23.0", + "resolved": "https://registry.npmjs.org/@babel/types/-/types-7.23.0.tgz", + "integrity": "sha512-0oIyUfKoI3mSqMvsxBdclDwxXKXAUA8v/apZbc+iSyARYou1o8ZGDxbUYyLFoW2arqS2jDGqJuZvv1d/io1axg==", "dev": true, "dependencies": { - "@babel/helper-string-parser": "^7.19.4", - "@babel/helper-validator-identifier": "^7.19.1", + "@babel/helper-string-parser": "^7.22.5", + "@babel/helper-validator-identifier": "^7.22.20", "to-fast-properties": "^2.0.0" }, "engines": { diff --git a/frontend/src/About.tsx b/frontend/src/About.tsx index ca318a6ef..53a87bc8c 100644 --- a/frontend/src/About.tsx +++ b/frontend/src/About.tsx @@ -4,7 +4,7 @@ export default function About() { return (
-
+

About DocsGPT

πŸ¦–

@@ -54,7 +54,7 @@ export default function About() { Currently It uses{' '} DocsGPT{' '} documentation, so it will respond to information relevant to{' '} - DocsGPT . If you + DocsGPT. If you want to train it on different documentation - please follow diff --git a/frontend/src/Navigation.tsx b/frontend/src/Navigation.tsx index e065d65df..79383bbf5 100644 --- a/frontend/src/Navigation.tsx +++ b/frontend/src/Navigation.tsx @@ -32,6 +32,7 @@ import { useMediaQuery, useOutsideAlerter } from './hooks'; import Upload from './upload/Upload'; import { Doc, getConversations } from './preferences/preferenceApi'; import SelectDocsModal from './preferences/SelectDocsModal'; +import ConversationTile from './conversation/ConversationTile'; interface NavigationProps { navOpen: boolean; @@ -68,27 +69,26 @@ export default function Navigation({ navOpen, setNavOpen }: NavigationProps) { useEffect(() => { if (!conversations) { - getConversations() - .then((fetchedConversations) => { - dispatch(setConversations(fetchedConversations)); - }) - .catch((error) => { - console.error('Failed to fetch conversations: ', error); - }); + fetchConversations(); } }, [conversations, dispatch]); + async function fetchConversations() { + return await getConversations() + .then((fetchedConversations) => { + dispatch(setConversations(fetchedConversations)); + }) + .catch((error) => { + console.error('Failed to fetch conversations: ', error); + }); + } + const handleDeleteConversation = (id: string) => { fetch(`${apiHost}/api/delete_conversation?id=${id}`, { method: 'POST', }) .then(() => { - // remove the image element from the DOM - const imageElement = document.querySelector( - `#img-${id}`, - ) as HTMLElement; - const parentElement = imageElement.parentNode as HTMLElement; - parentElement.parentNode?.removeChild(parentElement); + fetchConversations(); }) .catch((error) => console.error(error)); }; @@ -126,6 +126,29 @@ export default function Navigation({ navOpen, setNavOpen }: NavigationProps) { ); }); }; + + async function updateConversationName(updatedConversation: { + name: string; + id: string; + }) { + await fetch(`${apiHost}/api/update_conversation_name`, { + method: 'POST', + headers: { + 'Content-Type': 'application/json', + }, + body: JSON.stringify(updatedConversation), + }) + .then((response) => response.json()) + .then((data) => { + if (data) { + navigate('/'); + fetchConversations(); + } + }) + .catch((err) => { + console.error(err); + }); + } useOutsideAlerter( navRef, () => { @@ -142,11 +165,7 @@ export default function Navigation({ navOpen, setNavOpen }: NavigationProps) { */ useEffect(() => { - if (isMobile) { - setNavOpen(false); - return; - } - setNavOpen(true); + setNavOpen(!isMobile); }, [isMobile]); return ( @@ -171,7 +190,7 @@ export default function Navigation({ navOpen, setNavOpen }: NavigationProps) { ref={navRef} className={`${ !navOpen && '-ml-96 md:-ml-[18rem]' - } duration-20 fixed z-20 flex h-full w-72 flex-col border-r-2 bg-gray-50 transition-all`} + } duration-20 fixed top-0 z-20 flex h-full w-72 flex-col border-r-2 bg-gray-50 transition-all`} >
@@ -290,7 +281,7 @@ export default function Navigation({ navOpen, setNavOpen }: NavigationProps) {

{doc.name} {doc.version}

- {doc.location === 'local' ? ( + {doc.location === 'local' && ( Exit - ) : null} + )}
); } @@ -371,7 +362,7 @@ export default function Navigation({ navOpen, setNavOpen }: NavigationProps) {
-
+
- ) : null} + )} {queries.length > 0 && (
@@ -152,7 +155,9 @@ export default function Conversation() {
)}
-

+

This is a chatbot that uses GPT-3, Faiss, and LangChain to answer questions.

diff --git a/frontend/src/conversation/ConversationBubble.module.css b/frontend/src/conversation/ConversationBubble.module.css new file mode 100644 index 000000000..4a8d3a12f --- /dev/null +++ b/frontend/src/conversation/ConversationBubble.module.css @@ -0,0 +1,11 @@ +.list p { + display: inline; +} + +.list li:not(:first-child) { + margin-top: 1em; +} + +.list li > .list { + margin-top: 1em; +} diff --git a/frontend/src/conversation/ConversationBubble.tsx b/frontend/src/conversation/ConversationBubble.tsx index 190196130..37b93f2e3 100644 --- a/frontend/src/conversation/ConversationBubble.tsx +++ b/frontend/src/conversation/ConversationBubble.tsx @@ -1,6 +1,7 @@ import { forwardRef, useState } from 'react'; import Avatar from '../Avatar'; import { FEEDBACK, MESSAGE_TYPE } from './conversationModels'; +import classes from './ConversationBubble.module.css'; import Alert from './../assets/alert.svg'; import { ReactComponent as Like } from './../assets/like.svg'; import { ReactComponent as Dislike } from './../assets/dislike.svg'; @@ -27,7 +28,6 @@ const ConversationBubble = forwardRef< { message, type, className, feedback, handleFeedback, sources }, ref, ) { - const [showFeedback, setShowFeedback] = useState(false); const [openSource, setOpenSource] = useState(null); const [copied, setCopied] = useState(false); @@ -40,16 +40,6 @@ const ConversationBubble = forwardRef< }, 2000); }; - const List = ({ - ordered, - children, - }: { - ordered?: boolean; - children: React.ReactNode; - }) => { - const Tag = ordered ? 'ol' : 'ul'; - return {children}; - }; let bubble; if (type === 'QUESTION') { @@ -65,26 +55,21 @@ const ConversationBubble = forwardRef< ); } else { bubble = ( -
setShowFeedback(true)} - onMouseLeave={() => setShowFeedback(false)} - > +
{type === 'ERROR' && ( alert )} ); }, - ul({ node, children }) { - return {children}; + ul({ children }) { + return ( +
    + {children} +
+ ); }, - ol({ node, children }) { - return {children}; + ol({ children }) { + return ( +
    + {children} +
+ ); }, }} > {message}
{DisableSourceFE || type === 'ERROR' ? null : ( - - )} -
- {DisableSourceFE || type === 'ERROR' ? null : ( -
- Sources: -
- )} -
- {DisableSourceFE - ? null - : sources?.map((source, index) => ( + <> + +
+
Sources:
+
+ {sources?.map((source, index) => (
))} -
-
+
+
+ + )}
{copied ? ( - + ) : ( { handleCopyClick(message); }} @@ -169,14 +162,14 @@ const ConversationBubble = forwardRef< )}
void; + onDeleteConversation: (arg1: string) => void; + onSave: ({ name, id }: ConversationProps) => void; +} + +export default function ConversationTile({ + conversation, + selectConversation, + onDeleteConversation, + onSave, +}: ConversationTileProps) { + const conversationId = useSelector(selectConversationId); + const tileRef = useRef(null); + + const [isEdit, setIsEdit] = useState(false); + const [conversationName, setConversationsName] = useState(''); + useOutsideAlerter( + tileRef, + () => + handleSaveConversation({ + id: conversationId || conversation.id, + name: conversationName, + }), + [conversationName], + ); + + useEffect(() => { + setConversationsName(conversation.name); + }, [conversation.name]); + + function handleEditConversation() { + setIsEdit(true); + } + + function handleSaveConversation(changedConversation: ConversationProps) { + if (changedConversation.name.trim().length) { + onSave(changedConversation); + setIsEdit(false); + } else { + onClear(); + } + } + + function onClear() { + setConversationsName(conversation.name); + setIsEdit(false); + } + return ( +
{ + selectConversation(conversation.id); + }} + className={`my-auto mx-4 mt-4 flex h-9 cursor-pointer items-center justify-between gap-4 rounded-3xl hover:bg-gray-100 ${ + conversationId === conversation.id ? 'bg-gray-100' : '' + }`} + > +
+ + {isEdit ? ( + setConversationsName(e.target.value)} + /> + ) : ( +

+ {conversationName} +

+ )} +
+ {conversationId === conversation.id && ( +
+ Edit { + event.stopPropagation(); + isEdit + ? handleSaveConversation({ + id: conversationId, + name: conversationName, + }) + : handleEditConversation(); + }} + /> + Exit { + event.stopPropagation(); + isEdit ? onClear() : onDeleteConversation(conversation.id); + }} + /> +
+ )} +
+ ); +} diff --git a/frontend/src/upload/Upload.tsx b/frontend/src/upload/Upload.tsx index 755a9ad43..ecde22507 100644 --- a/frontend/src/upload/Upload.tsx +++ b/frontend/src/upload/Upload.tsx @@ -206,7 +206,10 @@ export default function Upload({
diff --git a/frontend/tailwind.config.cjs b/frontend/tailwind.config.cjs index 8e395a07d..b76b02223 100644 --- a/frontend/tailwind.config.cjs +++ b/frontend/tailwind.config.cjs @@ -1,6 +1,7 @@ /** @type {import('tailwindcss').Config} */ module.exports = { content: ['./index.html', './src/**/*.{js,ts,jsx,tsx}'], + darkMode: 'class', theme: { extend: { spacing: { diff --git a/scripts/ingest.py b/scripts/ingest.py index 6ab9cce1a..8c74fd03d 100644 --- a/scripts/ingest.py +++ b/scripts/ingest.py @@ -78,14 +78,12 @@ def process_one_docs(directory, folder_name): # Here we check for command line arguments for bot calls. # If no argument exists or the yes is not True, then the # user permission is requested to call the API. - if len(sys.argv) > 1: - if yes: - call_openai_api(docs, folder_name) - else: - get_user_permission(docs, folder_name) + if len(sys.argv) > 1 and yes: + call_openai_api(docs, folder_name) else: get_user_permission(docs, folder_name) + folder_counts = defaultdict(int) folder_names = [] for dir_path in dir: @@ -110,14 +108,19 @@ def convert(dir: Optional[str] = typer.Option("inputs", Creates documentation linked to original functions from specified location. By default /inputs folder is used, .py is parsed. """ - if formats == 'py': - functions_dict, classes_dict = extract_py(dir) - elif formats == 'js': - functions_dict, classes_dict = extract_js(dir) - elif formats == 'java': - functions_dict, classes_dict = extract_java(dir) + # Using a dictionary to map between the formats and their respective extraction functions + # makes the code more scalable. When adding more formats in the future, + # you only need to update the extraction_functions dictionary. + extraction_functions = { + 'py': extract_py, + 'js': extract_js, + 'java': extract_java + } + + if formats in extraction_functions: + functions_dict, classes_dict = extraction_functions[formats](dir) else: - raise Exception("Sorry, language not supported yet") + raise Exception("Sorry, language not supported yet") transform_to_docs(functions_dict, classes_dict, formats, dir) diff --git a/scripts/requirements.txt b/scripts/requirements.txt index 7d48373f8..5ead1d072 100644 --- a/scripts/requirements.txt +++ b/scripts/requirements.txt @@ -47,7 +47,7 @@ javalang==0.13.0 Jinja2==3.1.2 jmespath==1.0.1 joblib==1.3.1 -langchain==0.0.308 +langchain==0.0.312 lxml==4.9.3 manifest-ml==0.1.8 MarkupSafe==2.1.3 diff --git a/tests/test_vector_store.py b/tests/test_vector_store.py new file mode 100644 index 000000000..297a225ce --- /dev/null +++ b/tests/test_vector_store.py @@ -0,0 +1,19 @@ +""" +Tests regarding the vector store class, including checking +compatibility between different transformers and local vector +stores (index.faiss) +""" +import pytest +from application.vectorstore.faiss import FaissStore +from application.core.settings import settings + +def test_init_local_faiss_store_huggingface(): + """ + Test that asserts that trying to initialize a FaissStore with + the huggingface sentence transformer below together with the + index.faiss file in the application/ folder results in a + dimension mismatch error. + """ + settings.EMBEDDINGS_NAME = "huggingface_sentence-transformers/all-mpnet-base-v2" + with pytest.raises(ValueError): + FaissStore("application/", "", None)
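+
+# Note: this test can be run on its own from the repository root with
+#   python -m pytest tests/test_vector_store.py
+# It assumes the index.faiss shipped under application/ was built with
+# OpenAI embeddings (1536 dimensions), so loading it with the 768-dimension
+# sentence-transformers model above triggers the ValueError raised by
+# FaissStore.assert_embedding_dimensions.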