Merge pull request #441 from ka1bi4/update/documentation-update-formatting

Improved docs readability and fixed some typos.
dartpain authored Oct 5, 2023
2 parents d13e5e7 + d37885e commit 627dc2d
Showing 8 changed files with 68 additions and 69 deletions.
14 changes: 7 additions & 7 deletions docs/pages/Deploying/Hosting-the-app.md
@@ -4,7 +4,7 @@ Here's a step-by-step guide on how to setup an Amazon Lightsail instance to host

## Configuring your instance

(If you know how to create a Lightsail instance, you can skip to the recommended configuration part by clicking here)
(If you know how to create a Lightsail instance, you can skip to the recommended configuration part by clicking here).

### 1. Create an account or login to https://lightsail.aws.amazon.com

@@ -36,7 +36,7 @@ Your instance will be ready for use a few minutes after being created. To access

#### Clone the repository

A terminal window will pop up, and the first step will be to clone the DocsGPT git repository.
A terminal window will pop up, and the first step will be to clone the DocsGPT git repository:

`git clone https://github.com/arc53/DocsGPT.git`

@@ -64,11 +64,11 @@ Enter the following command to access the folder in which DocsGPT docker-compose

#### Prepare the environment

Inside the DocsGPT folder create a .env file and copy the contents of .env_sample into it.
Inside the DocsGPT folder create a `.env` file and copy the contents of `.env_sample` into it.

`nano .env`

Make sure your .env file looks like this:
Make sure your `.env` file looks like this:

```
OPENAI_API_KEY=(Your OpenAI API key)
@@ -103,10 +103,10 @@ Before you are able to access your live instance, you must first enable the port

Open your Lightsail instance and head to "Networking".

Then click on "Add rule" under "IPv4 Firewall", enter 5173 as your port, and hit "Create".
Repeat the process for port 7091.
Then click on "Add rule" under "IPv4 Firewall", enter `5173` as your port, and hit "Create".
Repeat the process for port `7091`.

#### Access your instance

Your instance will now be available under your Public IP Address and port 5173. Enjoy!
Your instance will now be available under your Public IP Address and port `5173`. Enjoy!

26 changes: 13 additions & 13 deletions docs/pages/Deploying/Quickstart.md
@@ -9,23 +9,23 @@ It will install all the dependencies and give you an option to download the loca

Otherwise, refer to this Guide:

1. Open and download this repository with `git clone https://github.com/arc53/DocsGPT.git`
2. Create a .env file in your root directory and set your `API_KEY` with your openai api key
3. Run `docker-compose build && docker-compose up`
4. Navigate to `http://localhost:5173/`
1. Open and download this repository with `git clone https://github.com/arc53/DocsGPT.git`.
2. Create a `.env` file in your root directory and set your `API_KEY` with your OpenAI API key.
3. Run `docker-compose build && docker-compose up`.
4. Navigate to `http://localhost:5173/`.

To stop just run Ctrl + C
To stop just run `Ctrl + C`.

### Chrome Extension

To install the Chrome extension:

1. In the DocsGPT GitHub repository, click on the "Code" button and select Download ZIP
2. Unzip the downloaded file to a location you can easily access
3. Open the Google Chrome browser and click on the three dots menu (upper right corner)
4. Select "More Tools" and then "Extensions"
5. Turn on the "Developer mode" switch in the top right corner of the Extensions page
6. Click on the "Load unpacked" button
7. Select the "Chrome" folder where the DocsGPT files have been unzipped (docsgpt-main > extensions > chrome)
8. The extension should now be added to Google Chrome and can be managed on the Extensions page
1. In the DocsGPT GitHub repository, click on the "Code" button and select "Download ZIP".
2. Unzip the downloaded file to a location you can easily access.
3. Open the Google Chrome browser and click on the three dots menu (upper right corner).
4. Select "More Tools" and then "Extensions".
5. Turn on the "Developer mode" switch in the top right corner of the Extensions page.
6. Click on the "Load unpacked" button.
7. Select the "Chrome" folder where the DocsGPT files have been unzipped (docsgpt-main > extensions > chrome).
8. The extension should now be added to Google Chrome and can be managed on the Extensions page.
9. To disable or remove the extension, simply turn off the toggle switch on the extension card or click the "Remove" button.
42 changes: 21 additions & 21 deletions docs/pages/Developing/API-docs.md
@@ -1,8 +1,8 @@
App currently has two main api endpoints:
Currently, the application provides the following main API endpoints:

### /api/answer
Its a POST request that sends a JSON in body with 4 values. Here is a JavaScript fetch example
It will receive an answer for a user provided question
It's a POST request that sends a JSON body with 4 values. It will receive an answer for a user-provided question.
Here is a JavaScript fetch example:

```js
// answer (POST http://127.0.0.1:5000/api/answer)
@@ -29,8 +29,8 @@ In response you will get a json document like this one:
```

### /api/docs_check
It will make sure documentation is loaded on a server (just run it every time user is switching between libraries (documentations)
Its a POST request that sends a JSON in body with 1 value. Here is a JavaScript fetch example
It will make sure documentation is loaded on a server (just run it every time the user switches between libraries (documentations)).
It's a POST request that sends a JSON body with 1 value. Here is a JavaScript fetch example:

```js
// answer (POST http://127.0.0.1:5000/api/docs_check)
@@ -54,10 +54,10 @@ In response you will get a json document like this one:


### /api/combine
Provides json that tells UI which vectors are available and where they are located with a simple get request
Provides JSON that tells the UI which vectors are available and where they are located, via a simple GET request.

Respsonse will include:
date, description, docLink, fullName, language, location (local or docshub), model, name, version
Response will include:
`date`, `description`, `docLink`, `fullName`, `language`, `location` (local or docshub), `model`, `name`, `version`.

Example of json in Docshub and local:
<img width="295" alt="image" src="https://user-images.githubusercontent.com/15183589/224714085-f09f51a4-7a9a-4efb-bd39-798029bb4273.png">
@@ -69,15 +69,14 @@ HTML example:

```html
<form action="/api/upload" method="post" enctype="multipart/form-data" class="mt-2">
<input type="file" name="file" class="py-4" id="file-upload">
<input type="text" name="user" value="local" hidden>
<input type="text" name="name" placeholder="Name:">


<button type="submit" class="py-2 px-4 text-white bg-blue-500 rounded-md hover:bg-blue-600 focus:outline-none focus:ring-2 focus:ring-offset-2 focus:ring-blue-500">
Upload
</button>
</form>
<input type="file" name="file" class="py-4" id="file-upload">
<input type="text" name="user" value="local" hidden>
<input type="text" name="name" placeholder="Name:">

<button type="submit" class="py-2 px-4 text-white bg-blue-500 rounded-md hover:bg-blue-600 focus:outline-none focus:ring-2 focus:ring-offset-2 focus:ring-blue-500">
Upload
</button>
</form>
```
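The same upload can also be triggered programmatically; here is a hedged fetch + FormData sketch (field names mirror the form above; the host, port and `fileInput` element are assumptions):

```js
// upload (POST http://localhost:5001/api/upload); host and port assumed to match the other examples
const data = new FormData();
data.append("file", fileInput.files[0]); // fileInput is assumed to be your <input type="file"> element
data.append("user", "local");
data.append("name", "my-docs");          // placeholder name

fetch("http://localhost:5001/api/upload", {
  "method": "POST",
  "body": data // the browser sets the multipart boundary automatically
})
.then((res) => res.json())
.then(console.log.bind(console))
```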

Response:
@@ -90,7 +89,7 @@ Response:
```

### /api/task_status
Gets task status (task_id) from /api/upload
Gets task status (`task_id`) from `/api/upload`:
```js
// Task status (Get http://127.0.0.1:5000/api/task_status)
fetch("http://localhost:5001/api/task_status?task_id=b2d2a0f4-387c-44fd-a443-e4fe2e7454d1", {
@@ -105,7 +104,7 @@ fetch("http://localhost:5001/api/task_status?task_id=b2d2a0f4-387c-44fd-a443-e4f

Responses:
There are two types of responses:
1. while task it still running, where "current" will show progress from 0 - 100
1. while the task is still running, where "current" will show progress from 0 to 100
```json
{
"result": {
@@ -134,7 +133,7 @@ There are two types of responses:
```

### /api/delete_old
deletes old vecotstores
Deletes old vectorstores:
```js
// Task status (GET http://127.0.0.1:5000/api/docs_check)
fetch("http://localhost:5001/api/task_status?task_id=b2d2a0f4-387c-44fd-a443-e4fe2e7454d1", {
@@ -146,7 +145,8 @@ fetch("http://localhost:5001/api/task_status?task_id=b2d2a0f4-387c-44fd-a443-e4f
.then((res) => res.text())
.then(console.log.bind(console))
```
response:

Response:

```json
{ "status": "ok" }
26 changes: 13 additions & 13 deletions docs/pages/Extensions/Chatwoot-extension.md
@@ -1,9 +1,8 @@
### To start chatwoot extension:
1. Prepare and start the DocsGPT itself (load your documentation too)
Follow our [wiki](https://github.com/arc53/DocsGPT/wiki) to start it and to [ingest](https://github.com/arc53/DocsGPT/wiki/How-to-train-on-other-documentation) data
2. Go to chatwoot, Navigate to your profile (bottom left), click on profile settings, scroll to the bottom and copy Access Token
2. Navigate to `/extensions/chatwoot`. Copy .env_sample and create .env file
3. Fill in the values
1. Prepare and start DocsGPT itself (load your documentation too). Follow our [wiki](https://github.com/arc53/DocsGPT/wiki) to start it and to [ingest](https://github.com/arc53/DocsGPT/wiki/How-to-train-on-other-documentation) data.
2. Go to Chatwoot, navigate to your profile (bottom left), click on profile settings, scroll to the bottom, and copy the **Access Token**.
3. Navigate to `/extensions/chatwoot`. Copy `.env_sample` and create `.env` file.
4. Fill in the values.

```
docsgpt_url=<docsgpt_api_url>
@@ -12,18 +11,19 @@ docsgpt_key=<openai_api_key or other llm key>
chatwoot_token=<from part 2>
```

4. start with `flask run` command
5. Start with `flask run` command.

If you want for bot to stop responding to questions for a specific user or session just add label `human-requested` in your conversation
If you want the bot to stop responding to questions for a specific user or session, just add the label `human-requested` in your conversation.


### Optional (extra validation)
In app.py uncomment lines 12-13 and 71-75
In `app.py`, uncomment lines 12-13 and 71-75.

in your .env file add:
in your `.env` file add:

`account_id=(optional) 1 `

`assignee_id=(optional) 1`
```
account_id=(optional) 1
assignee_id=(optional) 1
```

Those are chatwoot values and will allow you to check if you are responding to correct widget and responding to questions assigned to specific user
Those are Chatwoot values and will allow you to check if you are responding to the correct widget and to questions assigned to a specific user.
8 changes: 4 additions & 4 deletions docs/pages/Extensions/react-widget.md
@@ -1,7 +1,7 @@
### How to set up react docsGPT widget on your website:

### Installation
Got to your project and install a new dependency: `npm install docsgpt`
Go to your project and install a new dependency: `npm install docsgpt`.

### Usage
Go to your project and in the file where you want to use the widget import it:
@@ -14,9 +14,9 @@ import "docsgpt/dist/style.css";
Then you can use it like this: `<DocsGPTWidget />`

DocsGPTWidget takes 3 props:
- `apiHost` - url of your DocsGPT API
- `selectDocs` - documentation that you want to use for your widget (eg. `default` or `local/docs1.zip`)
- `apiKey` - usually its empty
- `apiHost`: URL of your DocsGPT API.
- `selectDocs`: documentation that you want to use for your widget (e.g. `default` or `local/docs1.zip`).
- `apiKey`: usually it's empty.
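A hedged usage sketch with all three props filled in (the values shown are placeholders, not defaults):

```jsx
<DocsGPTWidget
  apiHost="http://localhost:7091" // placeholder: point this at your own DocsGPT API
  selectDocs="default"            // or e.g. "local/docs1.zip"
  apiKey=""                       // usually empty
/>
```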

### How to use DocsGPTWidget with [Nextra](https://nextra.site/) (Next.js + MDX)
Install your widget as described above and then go to your `pages/` folder and create a new file `_app.js` with the following content:
2 changes: 1 addition & 1 deletion docs/pages/Guides/Customising-prompts.md
@@ -1,4 +1,4 @@
## To customise a main prompt navigate to `/application/prompt/combine_prompt.txt`

You can try editing it to see how the model responds.
You can try editing it to see how the model responds.

7 changes: 3 additions & 4 deletions docs/pages/Guides/How-to-train-on-other-documentation.md
@@ -3,14 +3,13 @@ This AI can use any documentation, but first it needs to be prepared for similar

![video-example-of-how-to-do-it](https://d3dg1063dc54p9.cloudfront.net/videos/how-to-vectorise.gif)

Start by going to
`/scripts/` folder
Start by going to the `/scripts/` folder.

If you open this file you will see that it uses RST files from the folder to create `index.faiss` and `index.pkl`.

It currently uses OPEN_AI to create vector store, so make sure your documentation is not too big. Pandas cost me around 3-4$
It currently uses OPEN_AI to create the vector store, so make sure your documentation is not too big. Pandas cost me around $3-4.

You can usually find documentation on github in docs/ folder for most open-source projects.
You can usually find documentation on GitHub in the `docs/` folder for most open-source projects.

### 1. Find documentation in .rst/.md and create a folder with it in your scripts directory
Name it `inputs/`
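For illustration, the resulting layout might look like this (file names are placeholders):

```
scripts/
└── inputs/
    ├── index.rst   <- your .rst / .md documentation files go here
    └── ...
```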
12 changes: 6 additions & 6 deletions docs/pages/Guides/How-to-use-different-LLM.md
@@ -1,10 +1,10 @@
Fortunately, there are many providers for LLMs, and some of them can even be run locally.

There are two models used in the app:
1. Embeddings
2. Text generation
1. Embeddings.
2. Text generation.

By default we use OpenAI's models but if you want to change it or even run it locally, its very simple!
By default, we use OpenAI's models but if you want to change it or even run it locally, it's very simple!

### Go to .env file or set environment variables:

@@ -18,7 +18,7 @@ By default we use OpenAI's models but if you want to change it or even run it lo

`VITE_API_STREAMING=<true or false (true if using openai, false for all others)>`

You dont need to provide keys if you are happy with users providing theirs, so make sure you set LLM_NAME and EMBEDDINGS_NAME
You don't need to provide keys if you are happy with users providing theirs, so make sure you set `LLM_NAME` and `EMBEDDINGS_NAME`.

Options:
LLM_NAME (openai, manifest, cohere, Arc53/docsgpt-14b, Arc53/docsgpt-7b-falcon)
@@ -27,6 +27,6 @@ EMBEDDINGS_NAME (openai_text-embedding-ada-002, huggingface_sentence-transformer
That's it!

### Hosting everything locally and privately (for using our optimised open-source models)
If you are working with important data and dont want anything to leave your premises.
If you are working with important data and don't want anything to leave your premises.

Make sure you set SELF_HOSTED_MODEL as true in you .env variable and for your LLM_NAME you can use anything that's on Hugging Face
Make sure you set `SELF_HOSTED_MODEL` to true in your `.env` file, and for your `LLM_NAME` you can use anything that's on Hugging Face.
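For example, a minimal sketch of the relevant `.env` lines (the model name is just one of the options listed above):

```
# sketch only; pick any Hugging Face model for LLM_NAME
SELF_HOSTED_MODEL=true
LLM_NAME=Arc53/docsgpt-7b-falcon
```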

2 comments on commit 627dc2d


@vercel vercel bot commented on 627dc2d Oct 5, 2023


Successfully deployed to the following URLs:

docs-gpt – ./frontend

docs-gpt-git-main-arc53.vercel.app
docs-gpt-arc53.vercel.app
docs-gpt-brown.vercel.app


@vercel vercel bot commented on 627dc2d Oct 5, 2023


Successfully deployed to the following URLs:

nextra-docsgpt – ./docs

nextra-docsgpt-git-main-arc53.vercel.app
nextra-docsgpt.vercel.app
nextra-docsgpt-arc53.vercel.app
docs.docsgpt.co.uk
