English · 中文
# LLM AI Agent

A multi-session LLM AI Agent service.
- Supports pure text agents without any JSON Spec.
- Supports OpenAPI/OpenRPC/OpenModbus/OpenTool JSON Specs.
- Supports LLM function calling to HTTP APIs, JSON-RPC 2.0 over HTTP, Modbus, and more custom tools.
- Prepare some OpenSpec JSON files that describe the callable tools, following the examples in `/example/json/open*/*.json`.
- Run your tool server, which is described by the JSON file.
- Add a `.env` file in the `example` folder, with the following content:

```
baseUrl = https://xxx.xxx.com     # LLM API BaseURL
apiKey = sk-xxxxxxxxxxxxxxxxxxxx  # LLM API ApiKey
```
- Use one of the methods below to run the agent service.
- Method 1: run the agent service, following `/example/service_example`.
  - Supports multiple agent sessions, distinguished by session id.
  - Supports multiple tasks within the same agent, identified by `taskId`. After a task finishes, its messages can be added to the session as context for new tasks.
```dart
Future<void> main() async {
  CapabilityDto capabilityDto = CapabilityDto(
    llmConfig: _buildLLMConfig(),            // LLM config
    systemPrompt: _buildSystemPrompt(),      // system prompt
    openSpecList: await _buildOpenSpecList() // OpenSpec description string list
  );
  SessionDto sessionDto = await agentService.initChat(
    capabilityDto,
    listen // subscribe to AgentMessage: the agent chats with the User/Client/LLM/Tool roles
  ); // returns the session id
  String prompt = "<USER PROMPT, e.g. call any one tool>";
  UserTaskDto userTaskDto = UserTaskDto(
    taskId: "<Identify different tasks, NOT more than 36 chars>",
    contentList: [UserMessageDto(type: UserMessageDtoType.text, message: prompt)] // user content list; supports type text/imageUrl
  );
  await agentService.startChat(sessionDto.id, userTaskDto);
}
```
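The `_build…` helpers and the `listen` callback are left to the caller. Below is a minimal sketch, assuming the `dotenv` package for reading the `.env` file above; the `LLMConfig` field names, the model name, the prompt wording, and the spec file path are illustrative assumptions to adapt, and package imports for the DTO types are omitted as in the snippet above:

```dart
import 'dart:io';
import 'package:dotenv/dotenv.dart';

LLMConfig _buildLLMConfig() {
  // Read baseUrl/apiKey from the .env file described above.
  final env = DotEnv()..load(['example/.env']);
  return LLMConfig(
    baseUrl: env['baseUrl']!,
    apiKey: env['apiKey']!,
    model: 'gpt-4o', // illustrative model name; pick your own
  );
}

String _buildSystemPrompt() =>
    'You are an assistant that calls tools on behalf of the user.'; // illustrative prompt

Future<List<String>> _buildOpenSpecList() async {
  // Load spec files matching /example/json/open*/*.json; the filename is illustrative.
  final openApiJson =
      await File('example/json/openapi/my_tool.json').readAsString();
  return [openApiJson]; // per the comment above, a plain list of spec strings
}

void listen(AgentMessage agentMessage) {
  // Observe every message exchanged between the User/Client/LLM/Tool roles.
  print(agentMessage);
}
```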
- Multi-agent support:
```dart
Future<void> main() async {
  SessionDto sessionDto1 = await _buildTextAgent(); // a pure text sub-agent
  SessionDto sessionDto2 = await _buildToolAgent(); // a tool-calling sub-agent
  CapabilityDto capabilityDto = CapabilityDto(
    llmConfig: llmConfig,       // built as in the first example
    systemPrompt: systemPrompt,
    sessionList: [sessionDto1, sessionDto2] // attach the sub-agent sessions
  );
  SessionDto sessionDto = await agentService.initChat(capabilityDto, listen);
  String prompt = "<USER PROMPT, e.g. call any one tool>";
  UserTaskDto userTaskDto = UserTaskDto(contentList: [UserMessageDto(type: UserMessageDtoType.text, message: prompt)]);
  await agentService.startChat(sessionDto.id, userTaskDto);
}
```
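`_buildTextAgent()` and `_buildToolAgent()` are left to the caller. A minimal sketch of the text variant, assuming a sub-agent session is simply another `initChat` result and that a pure text agent needs no `openSpecList` (the prompt wording is illustrative); `_buildToolAgent()` would do the same with an `openSpecList` attached:

```dart
Future<SessionDto> _buildTextAgent() async {
  // A pure text sub-agent: LLM config and system prompt only, no tool specs.
  CapabilityDto capabilityDto = CapabilityDto(
    llmConfig: _buildLLMConfig(),
    systemPrompt: 'You answer general questions in plain text.', // illustrative
  );
  return await agentService.initChat(capabilityDto, listen);
}
```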
- Reflection support:
```dart
Future<void> main() async {
  CapabilityDto capabilityDto = CapabilityDto(
    llmConfig: _buildLLMConfig(),
    systemPrompt: _buildSystemPrompt(),
    openSpecList: await _buildOpenSpecList(),
    toolReflectionList: await _buildToolReflectionList() // add the reflection prompt list here
  );
  SessionDto sessionDto = await agentService.initChat(capabilityDto, listen);
  String prompt = "<USER PROMPT, e.g. call any one tool>";
  UserTaskDto userTaskDto = UserTaskDto(
    taskId: "<Identify different tasks, NOT more than 36 chars>",
    contentList: [UserMessageDto(type: UserMessageDtoType.text, message: prompt)]
  );
  await agentService.startChat(sessionDto.id, userTaskDto);
}
```
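`_buildToolReflectionList()` supplies the reflection prompts. The sketch below assumes, per the "reflection prompt list" comment in the snippet, that the list holds plain prompt strings; the real package may wrap them in a DTO, so check the declared type of `toolReflectionList`. The prompt wording is illustrative:

```dart
Future<List<String>> _buildToolReflectionList() async {
  // One reflection prompt per check the agent should run on tool results.
  return [
    'Check whether the tool call succeeded; if not, correct the parameters and retry.',
  ];
}
```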
- Method 2: pure native calling, following `/example/agent_example`. Supports a single session. `AgentService` in Method 1 is a friendly encapsulation of this layer.
```dart
Future<void> main() async {
  ToolAgent toolAgent = ToolAgent(
    llmRunner: _buildLLMRunner(),
    session: _buildSession(),
    toolRunnerList: await _buildToolRunnerList(),
    systemPrompt: _buildSystemPrompt()
  );
  String prompt = "<USER PROMPT, e.g. call any one tool>";
  toolAgent.userToAgent(
    taskId: "<Identify different tasks, NOT more than 36 chars>",
    [Content(type: ContentType.text, message: prompt)]
  );
}
```