It is indeed a problem with this version; with 3.4.0 I don't get the error.
I hit the same problem on 3.4.1, and rolling back to 3.4.0 crashes on startup.
Fixed; the fix will be included in the next release.
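For readers hitting this before the fixed release: the traceback pattern suggests code that treats the model's response as a dict while the client now returns a typed object. Below is a minimal stdlib sketch of the failure mode and a dict-conversion style of fix. The class names are stand-ins for the real client types (which I believe are pydantic models), not the actual ollama API:

```python
# Sketch of the failure (assumption: newer ollama Python clients wrap chat
# results in a typed ChatResponse object instead of returning a plain dict,
# so dict methods such as .pop() raise AttributeError).
from dataclasses import dataclass, asdict


@dataclass
class Message:  # stand-in for the client's message type
    role: str
    content: str


@dataclass
class ChatResponse:  # stand-in for the client's typed response
    model: str
    message: Message


resp = ChatResponse(model="gemma2:27b",
                    message=Message(role="assistant", content="hi"))

# Dict-style access no longer works on the typed object:
assert not hasattr(resp, "pop")  # 'ChatResponse' object has no attribute 'pop'

# Fix: use attribute access, or convert the object back to a dict first.
text = resp.message.content
data = asdict(resp)        # analogous to pydantic's model_dump()
msg = data.pop("message")  # dict-style code works again on the dict
```

With the real pydantic-based client, `resp.model_dump()` would play the role of `asdict(resp)` here.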
Message platform adapter
aiocqhttp (connected via the OneBot protocol)
Runtime environment
Windows Server 2022, Python 3.10.11
LangBot version
3.4.0.2
Details of the exception
Following the documentation, I have set up the WebSocket connection between NapCat and LangBot.
On top of that, I configured a local ollama instance (with LAN access enabled) and am using gemma2:27b.
Now, when a message is sent from QQ and LangBot talks to ollama, I can see the GPU (CUDA) working on a reply, but what comes back is the log below.
[12-08 14:21:49.048] chat.py (95) - [ERROR] : 对话(1)请求失败: 'ChatResponse' object has no attribute 'pop'
[12-08 14:21:49.353] controller.py (98) - [ERROR] : 'ChatResponse' object has no attribute 'pop'
Before the exception was raised, I inspected the query:
[12-08 14:21:48.137] chat.py (73) - [INFO] : 看看參數 query_id=1 launcher_type=<LauncherTypes.PERSON: 'person'> launcher_id=111111111 sender_id=111111111 message_event=FriendMessage(message_chain=MessageChain([Source(id=269009979, time=datetime.datetime(2024, 12, 8, 14, 21, 48, 133059)), Plain('你好')]), sender=Friend(id=111111111, nickname='🐠', remark='')) message_chain=MessageChain([Source(id=269009979, time=datetime.datetime(2024, 12, 8, 14, 21, 48, 133059)), Plain('你好')]) adapter=<pkg.platform.sources.aiocqhttp.AiocqhttpAdapter object at 0x0000024D0540EB30> session=Session(launcher_type=<LauncherTypes.PERSON: 'person'>, launcher_id=857082026, sender_id=0, use_prompt_name='default', using_conversation=Conversation(prompt=Prompt(name='default', messages=[Message(role='system', name=None, content='', tool_calls=None, tool_call_id=None)]), messages=[], create_time=datetime.datetime(2024, 12, 8, 14, 20, 56, 917743), update_time=datetime.datetime(2024, 12, 8, 14, 20, 56, 917743), use_model=LLMModelInfo(name='gemma2:27b', model_name=None, token_mgr=<pkg.provider.modelmgr.token.TokenManager object at 0x0000024D053E0970>, requester=<pkg.provider.modelmgr.requesters.ollamachat.OllamaChatCompletions object at 0x0000024D053FCE20>, tool_call_supported=False, vision_supported=False), use_funcs=[]), conversations=[Conversation(prompt=Prompt(name='default', messages=[Message(role='system', name=None, content='', tool_calls=None, tool_call_id=None)]), messages=[], create_time=datetime.datetime(2024, 12, 8, 14, 20, 56, 917743), update_time=datetime.datetime(2024, 12, 8, 14, 20, 56, 917743), use_model=LLMModelInfo(name='gemma2:27b', model_name=None, token_mgr=<pkg.provider.modelmgr.token.TokenManager object at 0x0000024D053E0970>, requester=<pkg.provider.modelmgr.requesters.ollamachat.OllamaChatCompletions object at 0x0000024D053FCE20>, tool_call_supported=False, vision_supported=False), use_funcs=[])], create_time=datetime.datetime(2024, 12, 8, 14, 20, 56, 915741), 
update_time=datetime.datetime(2024, 12, 8, 14, 20, 56, 915741), semaphore=<asyncio.locks.Semaphore object at 0x0000024D05493F10 [locked]>) messages=[] prompt=Prompt(name='default', messages=[Message(role='system', name=None, content='', tool_calls=None, tool_call_id=None)]) user_message=Message(role='user', name=None, content=[ContentElement(type='text', text='你好', image_url=None)], tool_calls=None, tool_call_id=None) use_model=LLMModelInfo(name='gemma2:27b', model_name=None, token_mgr=<pkg.provider.modelmgr.token.TokenManager object at 0x0000024D053E0970>, requester=<pkg.provider.modelmgr.requesters.ollamachat.OllamaChatCompletions object at 0x0000024D053FCE20>, tool_call_supported=False, vision_supported=False) use_funcs=None resp_messages=[] resp_message_chain=[] current_stage=<pkg.pipeline.stagemgr.StageInstContainer object at 0x0000024D05451720>
I don't know what the problem is.
Below is a screenshot; I am certain that ollama is reachable.
Below is my configuration:
llm-models.json
{
    "name": "gemma2:27b",
    "requester": "ollama-chat"
}
provider.json
"ollama-chat": {
    "base-url": "http://192.168.10.100:11434",
    "args": {},
    "timeout": 600
}
},
"model": "gemma2:27b",
"prompt-mode": "normal",
"prompt": {
    "default": ""
},
"runner": "local-agent"
}
How can I fix this problem?
Enabled plugins
No response