
[Bug]: When using ollama as the model backend, the returned chat response apparently cannot be parsed correctly #945

Closed
Wanglanhua450 opened this issue Dec 8, 2024 · 3 comments
Labels
bug? (bug or bug-fix related) · m: Provider (OpenAI API or other LLM model related) · pd: upcoming release (pending: fixed/implemented on the main branch, to be included in the next release)

Comments

@Wanglanhua450

Messaging platform adapter

aiocqhttp (connected via the OneBot protocol)

Runtime environment

Windows Server 2022, Python 3.10.11

LangBot version

3.4.0.2

Problem description

Following the documentation, I have established the WebSocket connection between NapCat and LangBot.
On top of that, I configured a local Ollama instance (with LAN access enabled) running gemma2:27b.
Now, when a QQ message comes in and LangBot talks to Ollama, I can see the GPU (CUDA) working on the reply, but what comes back is the following log:

[12-08 14:21:49.048] chat.py (95) - [ERROR] : 对话(1)请求失败: 'ChatResponse' object has no attribute 'pop'
[12-08 14:21:49.353] controller.py (98) - [ERROR] : 'ChatResponse' object has no attribute 'pop'

Before the exception was raised, I inspected the query object:

[12-08 14:21:48.137] chat.py (73) - [INFO] : 看看參數 query_id=1 launcher_type=<LauncherTypes.PERSON: 'person'> launcher_id=111111111 sender_id=111111111 message_event=FriendMessage(message_chain=MessageChain([Source(id=269009979, time=datetime.datetime(2024, 12, 8, 14, 21, 48, 133059)), Plain('你好')]), sender=Friend(id=111111111, nickname='🐠', remark='')) message_chain=MessageChain([Source(id=269009979, time=datetime.datetime(2024, 12, 8, 14, 21, 48, 133059)), Plain('你好')]) adapter=<pkg.platform.sources.aiocqhttp.AiocqhttpAdapter object at 0x0000024D0540EB30> session=Session(launcher_type=<LauncherTypes.PERSON: 'person'>, launcher_id=857082026, sender_id=0, use_prompt_name='default', using_conversation=Conversation(prompt=Prompt(name='default', messages=[Message(role='system', name=None, content='', tool_calls=None, tool_call_id=None)]), messages=[], create_time=datetime.datetime(2024, 12, 8, 14, 20, 56, 917743), update_time=datetime.datetime(2024, 12, 8, 14, 20, 56, 917743), use_model=LLMModelInfo(name='gemma2:27b', model_name=None, token_mgr=<pkg.provider.modelmgr.token.TokenManager object at 0x0000024D053E0970>, requester=<pkg.provider.modelmgr.requesters.ollamachat.OllamaChatCompletions object at 0x0000024D053FCE20>, tool_call_supported=False, vision_supported=False), use_funcs=[]), conversations=[Conversation(prompt=Prompt(name='default', messages=[Message(role='system', name=None, content='', tool_calls=None, tool_call_id=None)]), messages=[], create_time=datetime.datetime(2024, 12, 8, 14, 20, 56, 917743), update_time=datetime.datetime(2024, 12, 8, 14, 20, 56, 917743), use_model=LLMModelInfo(name='gemma2:27b', model_name=None, token_mgr=<pkg.provider.modelmgr.token.TokenManager object at 0x0000024D053E0970>, requester=<pkg.provider.modelmgr.requesters.ollamachat.OllamaChatCompletions object at 0x0000024D053FCE20>, tool_call_supported=False, vision_supported=False), use_funcs=[])], create_time=datetime.datetime(2024, 12, 8, 14, 20, 56, 915741), 
update_time=datetime.datetime(2024, 12, 8, 14, 20, 56, 915741), semaphore=<asyncio.locks.Semaphore object at 0x0000024D05493F10 [locked]>) messages=[] prompt=Prompt(name='default', messages=[Message(role='system', name=None, content='', tool_calls=None, tool_call_id=None)]) user_message=Message(role='user', name=None, content=[ContentElement(type='text', text='你好', image_url=None)], tool_calls=None, tool_call_id=None) use_model=LLMModelInfo(name='gemma2:27b', model_name=None, token_mgr=<pkg.provider.modelmgr.token.TokenManager object at 0x0000024D053E0970>, requester=<pkg.provider.modelmgr.requesters.ollamachat.OllamaChatCompletions object at 0x0000024D053FCE20>, tool_call_supported=False, vision_supported=False) use_funcs=None resp_messages=[] resp_message_chain=[] current_stage=<pkg.pipeline.stagemgr.StageInstContainer object at 0x0000024D05451720>

I don't know what the problem is.
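A minimal sketch of the likely failure mode suggested by the traceback. This is my reading, not confirmed by the maintainers: newer versions of the ollama Python client return a typed `ChatResponse` model object rather than a plain dict, so dict-style calls such as `.pop()` on it raise exactly the `AttributeError` seen in the logs. The `ChatResponse` stand-in class below is hypothetical, written only to reproduce the symptom:

```python
from typing import Any


class ChatResponse:
    """Stand-in for the ollama client's typed response object (not a dict)."""

    def __init__(self, message: dict[str, Any]):
        self.message = message

    def model_dump(self) -> dict[str, Any]:
        """Convert the typed model back to a plain dict, Pydantic-style."""
        return {"message": self.message}


resp = ChatResponse(message={"role": "assistant", "content": "你好!"})

# Dict-style access, valid on the old dict return type, now fails:
try:
    resp.pop("message")  # type: ignore[attr-defined]
except AttributeError as e:
    print(e)  # 'ChatResponse' object has no attribute 'pop'

# A compatible caller converts the model to a dict before popping:
data = resp.model_dump()
message = data.pop("message")
print(message["content"])  # 你好!
```

If this reading is right, it also explains why rolling back the client library (or LangBot version) makes the error disappear: the older code path really did receive a dict.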

The screenshots below showed that Ollama itself is reachable; I am confident of that. [screenshots omitted]

Here is my configuration:

llm-models.json
{
    "name": "gemma2:27b",
    "requester": "ollama-chat"
}

provider.json (excerpt)
    "ollama-chat": {
        "base-url": "http://192.168.10.100:11434",
        "args": {},
        "timeout": 600
    }
},
"model": "gemma2:27b",
"prompt-mode": "normal",
"prompt": {
    "default": ""
},
"runner": "local-agent"
}
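As a sanity check independent of LangBot, the `base-url` from provider.json can be exercised directly against Ollama's `/api/chat` endpoint. A minimal sketch using only the standard library; the helper names here are mine, not LangBot's or ollama's:

```python
import json
import urllib.request


def build_chat_payload(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/chat endpoint (non-streaming)."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }


def ollama_chat(base_url: str, model: str, prompt: str) -> str:
    """Send one chat turn to /api/chat and return the assistant's reply text."""
    req = urllib.request.Request(
        f"{base_url}/api/chat",
        data=json.dumps(build_chat_payload(model, prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=600) as resp:
        return json.load(resp)["message"]["content"]


# Example, matching the base-url and model from the config above:
# print(ollama_chat("http://192.168.10.100:11434", "gemma2:27b", "你好"))
```

If this call succeeds from the machine running LangBot, the network path and model name are fine and the problem is in the response handling, not the configuration.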
How can I fix this?

Enabled plugins

No response

@Wanglanhua450 Wanglanhua450 added the bug? (bug or bug-fix related) label Dec 8, 2024
@Wanglanhua450
Author

It is indeed a problem with this version; with 3.4.0 I don't get the error.

@RockChinQ RockChinQ added the m: Provider (OpenAI API or other LLM model related) and pd: awaiting reproduction (pending: issue needs testing to reproduce; if you hit the same problem, please provide more useful information) labels Dec 9, 2024
@DudeGuuud

DudeGuuud commented Dec 14, 2024

Hit the same problem on 3.4.1; and after rolling back to 3.4.0 it crashes on startup.

@RockChinQ
Owner

Fixed; it will be included in the next release.

@RockChinQ RockChinQ added the pd: upcoming release (pending: fixed/implemented on the main branch, to be included in the next release) label and removed the pd: awaiting reproduction (pending: issue needs testing to reproduce) label Dec 16, 2024
3 participants