LLM AI Agent multi-session HTTP/WebSocket service
- Supports pure text agents without a JSON spec.
- Supports OpenAPI/OpenRPC/OpenModbus/OpenTool JSON specs.
- Supports LLM function calling to HTTP APIs, JSON-RPC 2.0 over HTTP, Modbus, and other custom tools.
- HTTP server wrapper around the Lite Agent Core Dart package: based on the Lite Agent Core `AgentService` (DTOs included), it adds a Controller and Router and wraps the service as an HTTP/WS API.
- Prepare OpenSpec JSON files that can be called, following the examples in `/example/json/open*/*.json`.
- Run the tool server described in your JSON file.
- Add a `.env` file in the `example` folder with the following content:

  baseUrl = https://xxx.xxx.com        # LLM API BaseURL
  apiKey = sk-xxxxxxxxxxxxxxxxxxxx     # LLM API ApiKey
- Run the `main()` function in the `/bin/server.dart` file in debug or run mode.
- Session control commands, all served under the base URL:
http://127.0.0.1:9527/api
- Feature: get the version number, to confirm the server is running
- Request params: empty
- Response body sample:
{ "version": "0.3.0" }
- Feature: initialize a new agent session
- Request body:
- LLM config: baseUrl, apiKey, model
- System prompt: the agent's character, plus a description of what it should and should not do
- Tools description: (optional) an OpenAPI, OpenRPC, or OpenModbus spec. Set `apiKey` or not depending on the third-party APIs described in the spec.
- Session list: (optional) for multi-agent support. Initialize the other agents first, then add their sessionIds in this field.
- Timeout: 3600 seconds by default. When the agent stops, the message context is cleared.
- Sample:
{ "llmConfig": { "baseUrl": "<LLM API baseUrl, e.g. https://api.openai.com/v1>", "apiKey": "<LLM API apiKey, e.g. sk-xxxxxxxxxx>", "model": "<LLM API model name, e.g. gpt-3.5-turbo. And temperature、maxTokens、topP can be changed below >", "temperature": 0, "maxTokens": 4096, "topP": 1 }, "systemPrompt": "<System Prompt. LLM character, capabilities, need to help user fixed what problems>", "openSpecList": [ { "openSpec": "<(Optional) tool spec json string, support openapi、openmodbus、openrpc>", "apiKey": { "type": "<basic or bearer>", "apiKey": "<Third APIs apiKey>" }, "protocol": "Support openapi, openmodbus, jsonrpcHttp" }, { "openSpec": "<(Optional) Another spec json string, can be another protocol>", "protocol": "Support openapi, openmodbus, jsonrpcHttp" } ], "sessionList": [ { "id": "<Sub Agent sessionId 1>" }, { "id": "<Sub Agent sessionId 2>" } ], "timeoutSeconds": 3600 }
- Response body:
- sessionId: used to subscribe to the session over WebSocket and for the stop and clear operations
- Sample:
{ "id": "b2ac9280-70d6-4651-bd3a-45eb81cd8c30" }
- Response body:
- the agent's message context, as a list
- Sample:
[ { "sessionId": "b2ac9280-70d6-4651-bd3a-45eb81cd8c30", "from": "system | user | agent | llm | tool", "to": "user | agent | llm | tool | client", "type": "text | imageUrl | functionCallList | toolReturn | contentList", "message": "<need to parse according type>", "completions": { "tokenUsage": { "promptTokens": 100, "completionTokens": 522, "totalTokens": 622 }, "id": "chatcmpl-9bgYkOjpdtLV0o0JugSmnNzGrRFMG", "model": "gpt-3.5-turbo" }, "createTime": "2023-06-18T15:45:30.000+0800" } ]
- Response body:
- sessionId, confirming which session the operation applied to
- Sample:
{ "id": "b2ac9280-70d6-4651-bd3a-45eb81cd8c30" }
- Response body:
- sessionId, confirming which session the operation applied to
- Sample:
{ "id": "b2ac9280-70d6-4651-bd3a-45eb81cd8c30" }
- Send and subscribe to session AgentMessages at:
ws://127.0.0.1:9527/api/chat?id=<sessionId>
- client(ping) -> server: send "ping" to the server
- client <- server(pong): the server responds with "pong"
- client([UserMessageDto]) -> server: wrap the user messages and send them to the server
- Sample (a [UserMessageDto] list, and the same content wrapped as a UserTaskDto):
[
{
"type": "text",
"message": "Get some tool status"
}
]
{
"taskId": "Optional. For identify which task AgentMessage from. If NULL, server will create one.",
"contentList": [
{
"type": "text",
"message": "Get some tool status"
}
]
}
- client <- server(AgentMessage): the server keeps pushing AgentMessages to the client
- Sample:
{
"sessionId": "b2ac9280-70d6-4651-bd3a-45eb81cd8c30",
"taskId": "0b127f1d-4667-4a52-bbcb-0b636f9a471a",
"from": "system | user | agent | llm | tool",
"to": "user | agent | llm | tool | client",
"type": "text | imageUrl | functionCallList | toolReturn | contentList",
"message": "<need to parse according type>",
"completions": {
"tokenUsage": {
"promptTokens": 100,
"completionTokens": 522,
"totalTokens": 622
},
"id": "chatcmpl-9bgYkOjpdtLV0o0JugSmnNzGrRFMG",
"model": "gpt-3.5-turbo"
},
"createTime": "2023-06-18T15:45:30.000+0800"
}
- Parse the `message` according to its `type`:
- text, imageUrl:
- String
- Sample:
"Tool result: PASS"
- functionCallList:
- Struct:
[ { "id":"<LLM respond id in function call>", "name":"<function name>", "parameters": "<LLM respond parameters in map>" } ]
- Sample:
[ { "id":"call_z5FK2dAfU8TXzn61IJXzRl5I", "name":"SomeFunction", "parameters": { "operation":"result" } } ]
- toolReturn:
- Struct:
{ "id":"<LLM respond id in function call>", "result": "<JSON Map, different tools in defferent result>" }
- Sample:
{ "id":"call_z5FK2dAfU8TXzn61IJXzRl5I", "result": { "statusCode":200, "body":"{\"code\":200,\"message\":\"PASS\"}" } }
- contentList:
- Struct:
[ { "type":"text | imageUrl", "message":"String" } ]
- Sample:
[ { "type":"text", "message":"What’s in this image?" }, { "type":"imageUrl", "message":"https://www.xxx.com/xxx.jpg" } ]
- text, imageUrl:
- When to=client, the message carries one of the following statuses:
- "[TASK_START]": the agent has received the user messages and is ready to run the task
- "[TOOLS_START]": about to call tools
- "[TOOLS_DONE]": tool calls have finished
- "[TASK_STOP]": the agent received a stop or clear command and stopped the task
- "[TASK_DONE]": the agent finished running the task
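As a sketch of the dispatch described above, the following Dart function parses `message` according to `type`, using the field names from the samples; whether `message` arrives already decoded or as a JSON string may vary, so both cases are handled:

```dart
import 'dart:convert';

// Sketch: parse the `message` field of an AgentMessage according to `type`.
// Field names follow the samples above; error handling is omitted.
void handleAgentMessage(Map<String, dynamic> agentMessage) {
  final type = agentMessage['type'] as String;
  final message = agentMessage['message'];
  switch (type) {
    case 'text':
    case 'imageUrl':
      print('$type: $message'); // plain String
      break;
    case 'functionCallList':
      final calls = message is String ? jsonDecode(message) : message;
      for (final call in calls) {
        print('call ${call['id']}: ${call['name']}(${call['parameters']})');
      }
      break;
    case 'toolReturn':
      final ret = message is String ? jsonDecode(message) : message;
      print('tool ${ret['id']} returned: ${ret['result']}');
      break;
    case 'contentList':
      final contents = message is String ? jsonDecode(message) : message;
      for (final content in contents) {
        print('${content['type']}: ${content['message']}');
      }
      break;
  }
}
```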
[/init request] {llmConfig: ..., systemPrompt:..., openSpecList: [...]}
[/init response SessionId] {id: eccdacc8-a1a8-463f-b0af-7aebc278c842, taskId: 0b127f1d-4667-4a52-bbcb-0b636f9a471a}
[After /chat connect ws, send userTaskDto] {taskId: 0b127f1d-4667-4a52-bbcb-0b636f9a471a, contentList: [{type: text, message: Get some tool status}]}
[ws push] id: eccdacc8-a1a8-463f-b0af-7aebc278c842, taskId: 0b127f1d-4667-4a52-bbcb-0b636f9a471a# 🤖AGENT -> 🔗CLIENT: [text] [TASK_START]
[ws push] id: eccdacc8-a1a8-463f-b0af-7aebc278c842, taskId: 0b127f1d-4667-4a52-bbcb-0b636f9a471a# 👤USER -> 🤖AGENT: [text] Get some tool status
[ws push] id: eccdacc8-a1a8-463f-b0af-7aebc278c842, taskId: 0b127f1d-4667-4a52-bbcb-0b636f9a471a# 🤖AGENT -> 💡LLM: [text] Get some tool status
[ws push] id: eccdacc8-a1a8-463f-b0af-7aebc278c842, taskId: 0b127f1d-4667-4a52-bbcb-0b636f9a471a# 💡LLM -> 🤖AGENT: [functionCallList] [{"id":"call_73xLVZDe70QgLHsURgY5BNT0","name":"SomeFunction","parameters":{"operation":"result"}}]
[ws push] id: eccdacc8-a1a8-463f-b0af-7aebc278c842, taskId: 0b127f1d-4667-4a52-bbcb-0b636f9a471a# 🤖AGENT -> 🔧TOOL: [functionCallList] [{"id":"call_73xLVZDe70QgLHsURgY5BNT0","name":"SomeFunction","parameters":{"operation":"result"}}]
[ws push] id: eccdacc8-a1a8-463f-b0af-7aebc278c842, taskId: 0b127f1d-4667-4a52-bbcb-0b636f9a471a# 🤖AGENT -> 🔗CLIENT: [text] [TOOLS_START]
[ws push] id: eccdacc8-a1a8-463f-b0af-7aebc278c842, taskId: 0b127f1d-4667-4a52-bbcb-0b636f9a471a# 🔧TOOL -> 🤖AGENT: [toolReturn] {"id":"call_73xLVZDe70QgLHsURgY5BNT0","result":{"statusCode":200,"body":"{\"code\":200,\"message\":\"FAIL\"}"}}
[ws push] id: eccdacc8-a1a8-463f-b0af-7aebc278c842, taskId: 0b127f1d-4667-4a52-bbcb-0b636f9a471a# 🤖AGENT -> 💡LLM: [toolReturn] {"id":"call_73xLVZDe70QgLHsURgY5BNT0","result":{"statusCode":200,"body":"{\"code\":200,\"message\":\"FAIL\"}"}}
[ws push] id: eccdacc8-a1a8-463f-b0af-7aebc278c842, taskId: 0b127f1d-4667-4a52-bbcb-0b636f9a471a# 🔧TOOL -> 🤖AGENT: [text] [TOOLS_DONE]
[ws push] id: eccdacc8-a1a8-463f-b0af-7aebc278c842, taskId: 0b127f1d-4667-4a52-bbcb-0b636f9a471a# 🤖AGENT -> 🔗CLIENT: [text] [TOOLS_DONE]
[ws push] id: eccdacc8-a1a8-463f-b0af-7aebc278c842, taskId: 0b127f1d-4667-4a52-bbcb-0b636f9a471a# 💡LLM -> 🤖AGENT: [text] Tool status: FAIL.
[ws push] id: eccdacc8-a1a8-463f-b0af-7aebc278c842, taskId: 0b127f1d-4667-4a52-bbcb-0b636f9a471a# 🤖AGENT -> 👤USER: [text] Tool status: FAIL.
[ws push] id: eccdacc8-a1a8-463f-b0af-7aebc278c842, taskId: 0b127f1d-4667-4a52-bbcb-0b636f9a471a# 🤖AGENT -> 🔗CLIENT: [text] [TASK_DONE]
[ws push] id: eccdacc8-a1a8-463f-b0af-7aebc278c842, taskId: 0b127f1d-4667-4a52-bbcb-0b636f9a471a# 🤖AGENT -> 🔗CLIENT: [text] [TASK_STOP]
[/stop request] {id: eccdacc8-a1a8-463f-b0af-7aebc278c842}
[/clear request] {id: eccdacc8-a1a8-463f-b0af-7aebc278c842}
[ws close] WebSocket connection closed
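Putting the flow above together, a minimal Dart WebSocket client: connect to `/chat` with the sessionId, send a ping, submit a UserTaskDto, and print every AgentMessage pushed back. Closing on `[TASK_DONE]` is an assumption based on the status list above:

```dart
import 'dart:convert';
import 'dart:io';

// Sketch: subscribe to a session over WebSocket and run one task.
// Assumes the session was already created via /init (see above).
Future<void> main() async {
  const sessionId = 'b2ac9280-70d6-4651-bd3a-45eb81cd8c30';
  final ws = await WebSocket.connect(
      'ws://127.0.0.1:9527/api/chat?id=$sessionId');

  ws.listen((data) {
    final text = data as String;
    if (text == 'pong') return; // heartbeat reply
    final agentMessage = jsonDecode(text) as Map<String, dynamic>;
    print('${agentMessage['from']} -> ${agentMessage['to']}: '
        '[${agentMessage['type']}] ${agentMessage['message']}');
    if (agentMessage['message'] == '[TASK_DONE]') ws.close();
  }, onDone: () => print('WebSocket connection closed'));

  ws.add('ping'); // heartbeat

  // Send one task; taskId is omitted so the server creates one.
  ws.add(jsonEncode({
    'contentList': [
      {'type': 'text', 'message': 'Get some tool status'}
    ]
  }));
}
```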
- Build with the shell command:
dart compile exe bin/server.dart -o build/lite_agent_core_dart_server
- The `lite_agent_core_dart_server` binary will then be in the `build` folder.
- Copy the `config` folder (containing `config.json`) into the same folder as `lite_agent_core_dart_server`.
- Run with the shell command:
./lite_agent_core_dart_server
- The terminal will show:
INFO: 2024-06-24 14:48:05.862057: PID 34567: [HTTP] Start Server - http://0.0.0.0:9527/api
- Once the server is running, it creates a `log` folder containing an `agent.log` file to record the server's runtime logs.
- Build the image: `cd` to the project root directory, then run:
docker build -t lite_agent_core_dart_server .
- Run the container:
docker run -d -p 9527:9527 lite_agent_core_dart_server
- Or mount the config and log folders to update the config and collect log output:
docker run -d -p 9527:9527 -v ./log:/app/log -v ./config:/app/config lite_agent_core_dart_server