diff --git a/README.md b/README.md
index f836a7b..95cdb7a 100644
--- a/README.md
+++ b/README.md
@@ -1,12 +1,14 @@
-# nisaba
+# Nisaba
 
-Nisaba is an IRC bot written in Go, designed to interact with users in a chat channel, using [llamafile](https://github.com/Mozilla-Ocho/llamafile) for generating responses.
+Nisaba is an IRC bot written in Go, designed to interact with users in a chat channel, using llamafile for generating responses.
 
 ## Background
 
-Nisaba began as a project to learn [Go](https://go.dev/learn/), aimed at creating a frontend for interacting with local OpenAI or similar endpoints.
+[Nisaba](https://en.wikipedia.org/wiki/Nisaba) is named after the Mesopotamian goddess of writing and grain.
+
+This project began as a way to learn [Go](https://go.dev/learn/), aimed at creating a frontend for interacting with local OpenAI or similar endpoints.
 
 Initially, the project used [PrivateGPT](https://github.com/zylon-ai/private-gpt) as its backend for generating responses.
 
@@ -14,8 +16,6 @@ As the project evolved, the need for more flexible API options led to a transiti
 This switch was motivated by llamafile's [ease of use](https://justine.lol/oneliners/) and its API endpoint being [llama.cpp](https://github.com/ggerganov/llama.cpp) compatible.
 
-The included Docker container for Nisaba is built to include the llamafile binary, and serve an API endpoint for the Nisaba bot.
-
 ## Features
 
 - Responds to messages directed at it by consulting llamafile for generating responses.
@@ -28,21 +28,33 @@ The included Docker container for Nisaba is built to include the llamafile binar
 
 ## Requirements
 
 <details>
-<summary>Docker Setup</summary>
+<summary>General (Automated Setup, Docker and Building)</summary>
+
+These requirements apply to all setup methods.
+
+- Linux, Mac, or Windows computer capable of running an LLM model for the AI backend.
+- Fully configured llamafile API endpoint.
+  - This is automatically downloaded and configured by the setup script.
+
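+For example, once an endpoint is up, you can verify it with a quick request. This is a minimal sketch that assumes llamafile's default llama.cpp-compatible server address of `http://127.0.0.1:8080`; adjust the host and port if your setup differs.
+
+```bash
+# Send a tiny completion request to the llama.cpp-compatible API.
+# Assumes the default llamafile listen address of 127.0.0.1:8080.
+curl http://127.0.0.1:8080/completion \
+  -H "Content-Type: application/json" \
+  -d '{"prompt": "Hello", "n_predict": 16}'
+```
+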
+</details>
+
+<details>
+<summary>Docker</summary>
 
 The optional Docker container can be built to include all requirements.
+
 - [Install Docker](https://docs.docker.com/engine/install/)
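+
+If Docker is already installed, you can confirm the daemon is reachable before building anything. This is a generic Docker sanity check, not specific to Nisaba:
+
+```bash
+# Prints server details if the Docker daemon is running and accessible.
+docker info
+```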
 
 </details>
 
 <details>
-<summary>Standalone Setup</summary>
+<summary>Building</summary>
 
 To build the standalone Go binary, you will need the build requirements.
+
 - [Install Go](https://go.dev/doc/install)
 - Go Dependencies
-  - [github.com/thoj/go-ircevent](https://github.com/thoj/go-ircevent)
-- Fully configured llamafile API endpoint
+  - [github.com/thoj/go-ircevent](https://github.com/thoj/go-ircevent)
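+
+This dependency is normally resolved by `go build`, but as a sketch (assuming the repository uses Go modules and that you have already cloned it, per the Setup section below) it can also be fetched explicitly:
+
+```bash
+# Explicitly fetch the IRC event library used by Nisaba.
+go get github.com/thoj/go-ircevent
+```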
@@ -51,38 +63,87 @@ To build the standalone Go binary, you will need the build requirements.
 
 Nisaba can be run either as a standalone application or within a Docker container.
 
 Each method requires a configured `config.json` file, and optionally `options.json`, located in the `./config` directory.
+- These files can be created automatically by the `setup.sh` or `setup.bat` script, explained in the Automated Setup instructions.
 
-<details>
-<summary>Docker Setup - Guide for deploying Nisaba with Docker, including llamafile.</summary>
-
-1. **Prepare Configurations**
-   - Place `config.json`, `options.json` (if used), and `model.gguf` in a directory named `config` in the same directory as your `docker-compose.yml`.
-   - Example `options.json` files are provided under `config/` for popular API presets:
-     - `options.precise.json.example` for "LLaMA Precise"
-     - `options.divine.json.example` for "Divine Intellect"
-     - `options.json.example` to reference all available options
-   - Choose the configuration appropriate for your use case and rename it to `options.json`.
+Choose one of the setup methods below and follow the directions to configure Nisaba.
 
-2. **Build and Run with Docker Compose**
-   - Ensure the Docker Compose file is set to mount the `config` directory correctly:
-     ```yaml
-     version: '3.8'
-     services:
-       nisaba:
-         build: .
-         volumes:
-           - ./model.gguf:/app/model.gguf
-           - ./config:/app/config
-     ```
-   - Run the following command in the directory containing `docker-compose.yml`:
-     ```
-     docker-compose up --build
-     ```
+<details>
+<summary>Automated (Pre-Built) Setup - Simple setup using prepared scripts and binaries for Windows/Linux/Mac.</summary>
+
+Follow these detailed steps to get Nisaba running quickly using the pre-built scripts included with the releases:
+
+1. **Download the Pre-Built Binary Archive**
+   - Visit the [Releases page](https://github.com/sourcebunny/nisaba/releases) on GitHub.
+   - Download the appropriate archive for your operating system:
+     - `nisaba-linux.tar.gz` for Linux
+     - `nisaba-mac.tar.gz` for Mac
+     - `nisaba-windows.zip` for Windows
+
+2. **Prepare the Setup Script**
+   - **For Linux or Mac**:
+     - Extract the contents of the `.tar.gz` archive.
+     - Open a terminal and navigate to the extracted directory.
+     - Make the setup script executable:
+       ```bash
+       chmod +x setup.sh
+       ```
+   - **For Windows**:
+     - Extract the contents of the `.zip` archive.
+     - Open Command Prompt and navigate to the extracted directory.
+
+3. **Run the Setup Script**
+   - **For Linux or Mac**:
+     - In your terminal, execute the script by running:
+       ```bash
+       ./setup.sh
+       ```
+   - **For Windows**:
+     - In Command Prompt, execute the script by running:
+       ```cmd
+       setup.bat
+       ```
+   - Follow the on-screen prompts to configure your setup. The script will guide you through several steps:
+     - **Download llamafile Binary**: The script will ask if you want to download the llamafile binary required for the API endpoint. Answer `y` for yes.
+     - **Configure Required Settings**: You will be prompted to configure required settings to create a `config.json` file. Answer `y` to proceed.
+     - **Enter Configuration Details**: The script will then prompt you to enter various configuration details such as nickname, server, port, etc. Press 'Enter' to accept default values or enter your custom settings.
+     - **Choose API Endpoint Options**: You'll have the option to select a default options preset for the API endpoint. Answer `y` and choose between provided presets like "LLaMA Precise" or "Divine Intellect".
+     - **Make the Binaries Executable**: You will be prompted to make the binaries for Nisaba and llamafile executable. Answer `y` to proceed.
+     - **Model Download**: Finally, the script will ask if you want to download a model and save it as `model.gguf`. Answer `y` and select the LLM model to download.
+
+4. **Run Nisaba and llamafile**
+   - After configuration, start the services:
+     - **For Linux**:
+       - Run the llamafile binary first to start the endpoint:
+         ```bash
+         ./llamafile -m model.gguf -ngl 0
+         ```
+       - Then run the Nisaba binary:
+         ```bash
+         ./nisaba-linux-amd64.bin
+         ```
+     - **For Mac**:
+       - Run the llamafile binary first to start the endpoint:
+         ```bash
+         ./llamafile -m model.gguf -ngl 0
+         ```
+       - Then run the Nisaba binary:
+         ```bash
+         ./nisaba-mac-amd64.bin
+         ```
+     - **For Windows**:
+       - Run the llamafile binary first to start the endpoint:
+         ```cmd
+         .\llamafile.exe -m model.gguf -ngl 0
+         ```
+       - Then run the Nisaba binary:
+         ```cmd
+         .\nisaba-windows-amd64.exe
+         ```
+
 </details>
 
 <details>
-<summary>Standalone Setup - Instructions for setting up Nisaba without Docker.</summary>
+<summary>Building Instructions and Setup - Instructions for manually building and running Nisaba from source.</summary>
 
 1. **Install Go**
    - If you haven't already, follow the instructions on the official [Go website](https://golang.org/dl/).
@@ -94,8 +155,10 @@ Each method requires a configured `config.json` file, and optionally `options.js
      ```
 
 3. **Configure the Bot**
-   - Create a `config` directory in your project root and place your `config.json` file within this directory. Optionally, add an `options.json` for additional parameters.
-   - Example `options.json` files are provided under `config/` for popular API presets:
+   - Manually create a `config` directory in your project root and place your `config.json` file within this directory. Optionally, add an `options.json` for API parameters.
+   - Alternatively, use the `setup.sh` or `setup.bat` script to generate these files automatically.
+   - Example `config.json` and `options.json` files are provided under `config/` for reference, including popular API presets:
+     - `config.json.example` to reference the required settings file
      - `options.precise.json.example` for "LLaMA Precise"
      - `options.divine.json.example` for "Divine Intellect"
      - `options.json.example` to reference all available options
@@ -108,6 +171,7 @@ Each method requires a configured `config.json` file, and optionally `options.js
      ```
 
 5. **Run the Bot**:
+   - Ensure that you have a llamafile API endpoint running.
    - Start the bot by running the binary:
      ```
      ./nisaba.bin
      ```
@@ -115,20 +179,49 @@
 
 </details>
 
+<details>
+<summary>Docker Setup - Guide for deploying Nisaba with Docker, including llamafile.</summary>
+
+1. **Prepare Configurations**
+   - Place `config.json` and `options.json` (if used) in a directory named `config`, and place `model.gguf` in the same directory as your `docker-compose.yml`, where the volume mounts below expect it.
+   - Example `config.json` and `options.json` files are provided under `config/` for reference, including popular API presets:
+     - `config.json.example` to reference the required settings file
+     - `options.precise.json.example` for "LLaMA Precise"
+     - `options.divine.json.example` for "Divine Intellect"
+     - `options.json.example` to reference all available options
+
+2. **Build and Run with Docker Compose**
+   - Ensure the Docker Compose file is set to mount the `config` directory correctly:
+     ```yaml
+     version: '3.8'
+     services:
+       nisaba:
+         build: .
+         volumes:
+           - ./model.gguf:/app/model.gguf
+           - ./config:/app/config
+     ```
+   - Run the following command in the directory containing `docker-compose.yml`:
+     ```
+     docker-compose up --build
+     ```
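+
+If you prefer to keep the stack in the background, a common variant (standard Docker Compose usage, not specific to Nisaba) is to start detached and follow the logs of the `nisaba` service defined above:
+
+```bash
+# Start detached, then stream logs to watch llamafile and the bot start up.
+docker-compose up -d --build
+docker-compose logs -f nisaba
+```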
+
+</details>
+
 ## Configuration
 
+The configuration files described below can be placed in the `config/` directory, or the same directory as the Nisaba binary.
+
 <details>
 <summary>Configuration Files - Overview of various configuration files used by Nisaba.</summary>
 
-- **config.json**: Main configuration for the IRC bot, specifying connection details and API settings.
+- **config.json**: Required main configuration for the IRC bot, specifying connection details and API settings.
 - **options.json**: Optional parameters file designed to adjust llamafile's behavior, with settings like `temperature`, `top_k`, etc.
 - **systemprompt.txt**: System prompt for Nisaba sent to the llamafile endpoint.
 - **blocklist.txt**: Blocks specific IRC nicknames from interacting with Nisaba.
 - **history.txt**: Stores message context dynamically; should not be edited manually.
 - **llamafile_args.txt** (Docker only): Custom arguments to replace default llamafile settings under Docker.
 
-These configuration files can also be placed in the same directory as the compiled binary.
-
 </details>
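+
+As an illustrative sketch, creating a minimal `config.json` might look like the following. The field names here are hypothetical; copy `config.json.example` from the repository for the authoritative keys.
+
+```bash
+# Create a minimal config.json (illustrative field names only --
+# consult config.json.example for the real schema).
+cat > config/config.json <<'EOF'
+{
+  "nickname": "Nisaba",
+  "server": "irc.example.net",
+  "port": "6667",
+  "channel": "#nisaba"
+}
+EOF
+```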
 
 ## Usage
 
 <details>
 <summary>Basic Interaction - How to interact with Nisaba.</summary>
 
-To get a response from Nisaba, simply prefix your message with the bot's name, followed by your query. For example:
+To get a response from Nisaba, simply prefix your message with the bot's name, followed by your query.
 
-```
-Nisaba, how are you?
-```
-
-After sending a query or command, Nisaba will process the input and respond in the IRC channel based on the current configuration and any active settings modified by commands.
+For example: `Nisaba, how are you?`
+
+After you send a message or command, Nisaba will use the API endpoint to generate a response, and then send that response back to you in the designated IRC channel.
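+
+A hypothetical exchange might look like this; the reply text is purely illustrative, since the actual response depends on your model and system prompt:
+
+```
+<user> Nisaba, how are you?
+<Nisaba> I'm doing well, thank you for asking!
+```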
 
 </details>
 
 <details>
 <summary>Using Commands - Commands available for controlling Nisaba.</summary>
 
-Nisaba supports several commands that can be used to control the bot or modify its behavior dynamically. These commands should be prefixed with the bot's name, followed by a comma or colon, and the command:
+Nisaba supports several commands that can be used to control the bot or modify its behavior dynamically.
+
+These commands should be prefixed with the bot's name, optionally followed by a comma or colon, and the command:
 
 - **!clear**: Clears the message history stored by the bot. Useful for resetting the context in "chat" mode.
   - `Nisaba, !clear`
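+
+For instance, because the comma or colon is optional, each of these forms of the `!clear` command above should reach the bot:
+
+```
+Nisaba !clear
+Nisaba, !clear
+Nisaba: !clear
+```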