# Nisaba

<img src="images/preview.png" width="800" />

Nisaba is an IRC bot written in Go, designed to interact with users in a chat channel, using llamafile for generating responses.

## Background

[Nisaba](https://en.wikipedia.org/wiki/Nisaba) is named after the Mesopotamian goddess of writing and grain.

This project began as a way to learn [Go](https://go.dev/learn/), aimed at creating a frontend for interacting with local OpenAI or similar endpoints.

Initially, the project used [PrivateGPT](https://github.com/zylon-ai/private-gpt) as its backend for generating responses.

As the project evolved, the need for more flexible API options led to a transition to [llamafile](https://github.com/Mozilla-Ocho/llamafile).

This switch was motivated by llamafile's [ease of use](https://justine.lol/oneliners/) and its API endpoint being [llama.cpp](https://github.com/ggerganov/llama.cpp) compatible.

The included Docker container bundles the llamafile binary and serves an API endpoint for the Nisaba bot.

## Features

- Responds to messages directed at it by consulting llamafile for generating responses.
## Requirements

<details>
<summary><strong>General</strong> (Automated Setup, Docker and Building)</summary>

These requirements apply to all setup methods.

- Linux, Mac, or Windows computer capable of running an LLM model for the AI backend.
- Fully configured llamafile API endpoint.
  - This is automatically downloaded and configured by the setup script.

</details>

<details>
<summary><strong>Docker</strong></summary>

The optional Docker container can be built to include all requirements.

- [Install Docker](https://docs.docker.com/engine/install/)

</details>

<details>
<summary><strong>Building</strong></summary>

To build the standalone Go binary, you will need the following:

- [Install Go](https://go.dev/doc/install)
- Go Dependencies
  - [github.com/thoj/go-ircevent](https://github.com/thoj/go-ircevent)
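
For orientation, here is a minimal sketch of a go-ircevent client — connect, join on the server welcome, and react to messages that address the bot. It is illustrative only, not Nisaba's actual source; the nick, server, and channel are placeholders:

```go
package main

import (
	"log"
	"strings"

	irc "github.com/thoj/go-ircevent"
)

func main() {
	// Placeholder identity; Nisaba reads its real settings from config.json.
	conn := irc.IRC("Nisaba", "nisaba")

	// "001" is the IRC welcome numeric; join the channel once registered.
	conn.AddCallback("001", func(e *irc.Event) { conn.Join("#mychannel") })

	// Respond to channel messages that start with the bot's name.
	conn.AddCallback("PRIVMSG", func(e *irc.Event) {
		if strings.HasPrefix(e.Message(), "Nisaba") {
			// A real bot would query the llamafile endpoint here.
			conn.Privmsg(e.Arguments[0], "Hello!")
		}
	})

	if err := conn.Connect("irc.example.net:6667"); err != nil {
		log.Fatal(err)
	}
	conn.Loop()
}
```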

</details>

Nisaba can be run either as a standalone application or within a Docker container.

Each method requires a configured `config.json` file, and optionally `options.json`, located in the `./config` directory.
- These files can be created automatically by the `setup.sh` or `setup.bat` script, explained in the Automated Setup instructions.

Choose one of the setup methods below and follow the directions to configure Nisaba.

<details>
<summary><strong>Automated (Pre-Built) Setup</strong> - Simple setup using prepared scripts and binaries for Windows/Linux/Mac.</summary>

Follow these detailed steps to get Nisaba running quickly using the pre-built scripts included with the releases:

1. **Download the Pre-Built Binary Archive**
   - Visit the [Releases page](https://github.com/sourcebunny/nisaba/releases) on GitHub.
   - Download the appropriate archive for your operating system:
     - `nisaba-linux.tar.gz` for Linux
     - `nisaba-mac.tar.gz` for Mac
     - `nisaba-windows.zip` for Windows

2. **Prepare the Setup Script**
   - **For Linux or Mac**:
     - Extract the contents of the `.tar.gz` archive.
     - Open a terminal and navigate to the extracted directory.
     - Make the setup script executable:
       ```bash
       chmod +x setup.sh
       ```
   - **For Windows**:
     - Extract the contents of the `.zip` archive.
     - Open Command Prompt and navigate to the extracted directory.

3. **Run the Setup Script**
   - **For Linux or Mac**:
     - In your terminal, execute the script by running:
       ```bash
       ./setup.sh
       ```
   - **For Windows**:
     - In Command Prompt, execute the script by running:
       ```cmd
       setup.bat
       ```
   - Follow the on-screen prompts to configure your setup. The script will guide you through several steps:
     - **Download llamafile Binary**: The script will ask if you want to download the llamafile binary required for the API endpoint. Answer `y` for yes.
     - **Configure Required Settings**: You will be prompted to configure the required settings and create a `config.json` file. Answer `y` to proceed.
     - **Enter Configuration Details**: The script will then prompt you for configuration details such as nickname, server, port, etc. Press Enter to accept the default values or type your own.
     - **Choose API Endpoint Options**: You will have the option to select a default options preset for the API endpoint. Answer `y` and choose between the provided presets, such as "LLaMA Precise" or "Divine Intellect".
     - **Make the Binaries Executable**: You will be prompted to make the Nisaba and llamafile binaries executable. Answer `y` to proceed.
     - **Model Download**: Finally, the script will ask if you want to download a model and save it as `model.gguf`. Answer `y` and select the LLM model to download.
4. **Run Nisaba and llamafile**
   - After configuration, start the services:
     - **For Linux**:
       - Run the llamafile binary first to start the endpoint:
         ```bash
         ./llamafile -m model.gguf -ngl 0
         ```
       - Then run the Nisaba binary:
         ```bash
         ./nisaba-linux-amd64.bin
         ```
     - **For Mac**:
       - Run the llamafile binary first to start the endpoint:
         ```bash
         ./llamafile -m model.gguf -ngl 0
         ```
       - Then run the Nisaba binary:
         ```bash
         ./nisaba-mac-amd64.bin
         ```
     - **For Windows**:
       - Run the llamafile binary first to start the endpoint:
         ```cmd
         .\llamafile.exe -m model.gguf -ngl 0
         ```
       - Then run the Nisaba binary:
         ```cmd
         .\nisaba-windows-amd64.exe
         ```
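
   Note: `-ngl` is llamafile's `--n-gpu-layers` flag; `0` keeps every model layer on the CPU, while a higher value offloads layers to a supported GPU.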
</details>
<details>
<summary><strong>Building Instructions and Setup</strong> - Instructions for manually building and running Nisaba from source.</summary>

1. **Install Go**
   - If you haven't already, follow the instructions on the official [Go website](https://golang.org/dl/).
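2. **Get the Source**
   - Clone the repository and change into it (assuming the repository from the Releases page above):
     ```bash
     git clone https://github.com/sourcebunny/nisaba.git
     cd nisaba
     ```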

3. **Configure the Bot**
   - Manually create a `config` directory in your project root and place your `config.json` file within it. Optionally, add an `options.json` for API parameters.
   - Alternatively, use the `setup.sh` or `setup.bat` script to generate these files automatically.
   - Example `config.json` and `options.json` files are provided under `config/` for reference, including popular API presets:
     - `config.json.example` as a reference for the required settings file
     - `options.precise.json.example` for "LLaMA Precise"
     - `options.divine.json.example` for "Divine Intellect"
     - `options.json.example` to reference all available options
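4. **Build the Binary**
   - One way to compile Nisaba so the output matches the run step below:
     ```bash
     go build -o nisaba.bin .
     ```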

5. **Run the Bot**
   - Ensure that you have a llamafile API endpoint running (see the example after this step).
   - Start the bot by running the binary:
     ```
     ./nisaba.bin
     ```
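
   If you downloaded llamafile separately, starting an endpoint can look like the run step from the automated setup, assuming `llamafile` and `model.gguf` sit in the current directory:

   ```bash
   ./llamafile -m model.gguf -ngl 0
   ```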

</details>

<details>
<summary><strong>Docker Setup</strong> - Guide for deploying Nisaba with Docker, including llamafile.</summary>

1. **Prepare Configurations**
   - Place `config.json` and `options.json` (if used) in a directory named `config`, and place `model.gguf` next to your `docker-compose.yml` (matching the volume mounts below).
   - Example `config.json` and `options.json` files are provided under `config/` for reference, including popular API presets:
     - `config.json.example` as a reference for the required settings file
     - `options.precise.json.example` for "LLaMA Precise"
     - `options.divine.json.example` for "Divine Intellect"
     - `options.json.example` to reference all available options

2. **Build and Run with Docker Compose**
   - Ensure the Docker Compose file is set to mount the `config` directory correctly:
     ```yaml
     version: '3.8'
     services:
       nisaba:
         build: .
         volumes:
           - ./model.gguf:/app/model.gguf
           - ./config:/app/config
     ```
   - Run the following command in the directory containing `docker-compose.yml`:
     ```
     docker-compose up --build
     ```
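
   To leave the container running in the background after it builds, Compose's standard detached flag works as usual:

   ```
   docker-compose up --build -d
   ```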

</details>

## Configuration

These configuration files can be placed in the `config/` directory, or the same directory as the Nisaba binary.

<details>
<summary><strong>Configuration Files</strong> - Overview of various configuration files used by Nisaba.</summary>

- **config.json**: Required main configuration for the IRC bot, specifying connection details and API settings.
- **options.json**: Optional parameters file designed to adjust llamafile's behavior, with settings like `temperature`, `top_k`, etc.
- **systemprompt.txt**: System prompt for Nisaba sent to the llamafile endpoint.
- **blocklist.txt**: Blocks specific IRC nicknames from interacting with Nisaba.
- **history.txt**: Stores message context dynamically; should not be edited manually.
- **llamafile_args.txt** (Docker only): Custom arguments to replace default llamafile settings under Docker.
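
A `config.json` might look like the sketch below; the field names are illustrative guesses and the values placeholders, so consult `config.json.example` for the authoritative keys:

```json
{
  "nickname": "Nisaba",
  "server": "irc.example.net",
  "port": "6667",
  "channel": "#mychannel",
  "api_url": "http://localhost:8080/completion"
}
```

Likewise, an `options.json` passes sampling parameters through to llamafile. `temperature` and `top_k` are named above; the others shown are common llama.cpp parameters with placeholder values, not the contents of the bundled presets:

```json
{
  "temperature": 0.7,
  "top_k": 40,
  "top_p": 0.9,
  "repeat_penalty": 1.1
}
```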

</details>

## Usage

<details>
<summary><strong>Basic Interaction</strong> - How to interact with Nisaba.</summary>

To get a response from Nisaba, simply prefix your message with the bot's name, followed by your query.

For example: `Nisaba, how are you?`

After you send a message or command, Nisaba will use the API endpoint to generate a response, and then send that response back to you in the designated IRC channel.

</details>

<details>
<summary><strong>Using Commands</strong> - Commands available for controlling Nisaba.</summary>

Nisaba supports several commands that can be used to control the bot or modify its behavior dynamically.

These commands should be prefixed with the bot's name, optionally followed by a comma or colon, and the command:
- **!clear**: Clears the message history stored by the bot. Useful for resetting the context in "chat" mode.
  - `Nisaba, !clear`