Chat with AI roles; they'll get things done.
The ChatRoles-repo platform is a place to gather user-defined LLM role-bots (called `chatroles`) that leverage the power of state-of-the-art (SOTA) large language models (LLMs) such as ChatGPT, along with many other multimodal deep learning models.
You may assign different tasks to specific `chatroles`, which will collaborate with other roles in the repo to finally accomplish the goal.
You may also define your own `chatroles` and easily integrate them into your own systems through the chat-actors APIs.
A question is a task, while a big task may involve many questions.
In an organization, we usually first decompose a task into sub-tasks and forward them to professionals to handle.
If we formalize these processes into the `chatroles` repo and let the roles cooperate as expected,
then we can ask one BIG question/task, let the `chatroles` process it as thousands of internal QAs, and receive the final BIG result.
This is a bit like Auto-GPT, except that the auto process is formalized as `chatroles` for reuse.
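The decomposition idea can be sketched roughly as below. Note that `Task`, `answer`, and `solve` are hypothetical names for illustration only, not part of the platform's API:

```typescript
// A minimal sketch of recursive task decomposition: each task is either
// small enough to answer directly, or is split into sub-tasks whose
// answers are merged upward. All names here are illustrative.
interface Task {
  question: string;
  subTasks?: Task[];
}

// Pretend "answer" is one internal QA handled by a professional chatrole.
function answer(question: string): string {
  return `answer(${question})`;
}

// A BIG task becomes many internal QAs; partial results are merged upward.
function solve(task: Task): string {
  if (!task.subTasks || task.subTasks.length === 0) {
    return answer(task.question);
  }
  const partials = task.subTasks.map(solve);
  return `merge[${partials.join(", ")}]`;
}

const big: Task = {
  question: "build a chat platform",
  subTasks: [
    { question: "design the API" },
    { question: "implement storage" },
  ],
};
const result = solve(big);
```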
You may define your own `chatrole`, or just chat with existing ones in the repo.
A `chatrole` is structured as below; a chatrole definition consists of:
- chatrole: the professional role with a specific task goal
- host: the programmed prompts of the role, which invoke tools and members
- toolbox: the platform offers several tools, e.g. LLM, REST API, vector DB...
- members: each role has several `chatroles` members as a team
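A definition with the four parts above might look roughly like this. The `ChatRole` type and all field values are hypothetical illustrations, not the platform's actual schema:

```typescript
// Hypothetical shape of a chatrole definition; the real platform schema
// may differ. Field names follow the four-part list above.
interface ChatRole {
  chatrole: string;   // the professional role and its task goal
  host: string[];     // programmed prompts invoking tools and members
  toolbox: string[];  // tools offered by the platform, e.g. "llm"
  members: string[];  // member chatroles forming the team
}

const translator: ChatRole = {
  chatrole: "translator: translate user text into English",
  host: ["#0: invoke llm with the translation prompt"],
  toolbox: ["llm"],
  members: [],
};
```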
There are some predefined `chatroles`; please read more here.
To chat with a role, you need to instantiate an actor from the role, then send a ChatDto to an actor entry.
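As a rough sketch, posting a ChatDto to an actor entry over HTTP might look like the following. The endpoint path and the ChatDto fields shown here are assumptions for illustration, not the documented API:

```typescript
// Hypothetical sketch: build a ChatDto and post it to an actor entry.
// The field names and URL path are assumptions, not the real API.
interface ChatDto {
  text: string;                      // the question/task for the actor
  options?: Record<string, unknown>; // optional chat options
}

function buildChatDto(text: string): ChatDto {
  return { text };
}

async function chat(actorId: string, dto: ChatDto): Promise<unknown> {
  // URL is illustrative only
  return fetch(`//repo.roles.chat/api/actors/${actorId}/chat`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(dto),
  });
}

const dto = buildChatDto("summarize this article");
```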
You need to register to the repo to invoke the APIs:
- OAuth with your GitHub account
- Configure your own LLM access tokens. Currently, we offer 3k free ChatGPT tokens for testing each day.
- Define or chat with `chatroles`
If you want to deploy chatrole-repo locally, please follow the instructions below.
Clone the repository, then:
```bash
pnpm install
cp .env.dev .env
docker run -dt -e POSTGRES_PASSWORD=postgres -p 5432:5432 --name postgres-pgvector ankane/pgvector
```
```bash
pnpm run generate  # generate Prisma Client, dto/entity
pnpm run seed      # execute seed.ts, generate db seed data
npx prisma migrate dev  # all in one: generate db migration files; apply db schema changes; generate Prisma Client, dto/entity, db seed
# pnpm run migrate  # to apply migrate deployment
```
```bash
# development
pnpm run start

# watch mode
pnpm run start:dev

# production mode
pnpm run start:prod
```
```bash
# unit tests
npm run test

# TDD: run specific e2e test cases
npm run test:e2e -- --watch --coverage --testPathPattern 'actors' --testNamePattern actors

# e2e test with sql query logs, loglevel=[0,6], from silent to verbose
loglevel=1 npm run test:e2e [...]

# all e2e tests
npm run test:e2e

# test coverage
npm run test:cov
```
The access token is sent in the request header. It is auto-refreshed within the `JWT_REFRESH_EXPIRES_IN` duration; the new token is set into the response header.
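Client code therefore needs to watch responses for a refreshed token. A minimal sketch, assuming the token arrives in an `authorization` response header (the actual header name is an assumption here and may differ):

```typescript
// Minimal sketch of picking up a refreshed access token from response
// headers. The "authorization" header name is an assumption for
// illustration; the platform may use a different header.
let accessToken = "initial-token";

function updateTokenFromHeaders(headers: Map<string, string>): string {
  const refreshed = headers.get("authorization");
  if (refreshed) {
    // keep the newest token for subsequent requests
    accessToken = refreshed.replace(/^Bearer\s+/i, "");
  }
  return accessToken;
}

const responseHeaders = new Map([["authorization", "Bearer refreshed-token"]]);
const token = updateTokenFromHeaders(responseHeaders);
```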
The repo server may send event logs or local LLM requests to the client; the client can handle these events as follows:

```js
const evtSource = new EventSource("//repo.roles.chat/api/events/subscribe", {
  withCredentials: true,
});
evtSource.addEventListener("llm", (event) => {
  const { actorId, payload } = JSON.parse(event.data);
  // handle the event, then respond, e.g.:
  // $.post(`//repo.roles.chat/api/events/respond/${event.lastEventId}`)
});
```
- deprecate singleton
- event logging & error handling
- private roles?
- bind scopedStorage tx to db
- bug: auth is not changed on internal chat; it only corresponds to the request JWT
- support maxQAs counting and negative maxQAs
- support `escape` for templite, e.g. `'{ "query": "{ Get { ChatRoles(limit: 2; search: <<escape(req.data)>>) } }" }'`
- test 2 simultaneous long conversations; they should be independent in different scopes with the same var name
- message response: content/hints/ops
chatroles-repo is MIT licensed.