[Bug] Incomplete answer in vision mode (answer interrupted) #4084
Comments
+1
+1
+1
@devyujie Could you explain how to use vision mode? Where do I upload the image?
Switch to a model that supports vision, such as gpt4v or gemini-vision, and the icon for uploading images will appear.
Same
Same here. How to fix? Changing max_tokens to 4096 gives the same problem.
You need to use it, for example:
It doesn't work because it has been disabled by default in this repository: https://github.com/ChatGPTNextWeb/ChatGPT-Next-Web/blob/main/app/client/platforms/openai.ts#L109
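For context, here is a minimal sketch of what forwarding max_tokens from the request builder could look like. The names (buildRequestPayload, ModelConfig) and the payload shape are illustrative assumptions based on this thread, not the repository's actual code:

```ts
// Illustrative sketch (assumed names, not copied from openai.ts):
// forward the user's configured max_tokens instead of relying on the
// provider-side default, which is what cuts vision replies short.
interface ModelConfig {
  model: string;
  temperature: number;
  top_p: number;
  max_tokens: number;
}

function buildRequestPayload(messages: unknown[], modelConfig: ModelConfig) {
  return {
    messages,
    model: modelConfig.model,
    temperature: modelConfig.temperature,
    top_p: modelConfig.top_p,
    // Upstream ships this line disabled; re-enabling it overrides the
    // small default and lets long vision answers complete.
    max_tokens: Math.max(modelConfig.max_tokens, 1024),
  };
}
```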
Anyway, this bug can easily be fixed. However, I don't believe it will be merged into the main branch, since the owner has made changes.
It's really bad; the problem still reproduces even after updating to version 2.11.2.
Yes, I understand that there's nothing particularly remarkable about the latest version. It would be more beneficial to focus on bug fixes and performance improvements, rather than adding another AI that may not be entirely stable for everyone.
Thanks, but I don't see the "use max tokens" option...
OK, I got it.
Agree with your point : )
Currently GPT4-v has a very low max_tokens default value, which makes the replies very short and incomplete. Uncommenting the line and building from source again will pass the max_tokens value, overriding the default and solving the problem.
To minimize the impact, only the vision model is currently configured separately for max_tokens. If you encounter additional problems, please feel free to give feedback.
There is one problem: when the image is too large, an error is reported. Could the image be automatically compressed after uploading?
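The project may or may not do this today; as a sketch, client-side compression before upload could look like the following. The helper name compressImage and the size/quality limits are hypothetical, not values from the repository:

```ts
// Hypothetical browser-side helper: downscale and re-encode an image
// before sending it to a vision model, to stay under request size limits.
async function compressImage(file: File, maxDim = 2048, quality = 0.8): Promise<Blob> {
  const bitmap = await createImageBitmap(file);
  // Scale down only if the longer edge exceeds maxDim.
  const scale = Math.min(1, maxDim / Math.max(bitmap.width, bitmap.height));

  const canvas = document.createElement("canvas");
  canvas.width = Math.round(bitmap.width * scale);
  canvas.height = Math.round(bitmap.height * scale);
  canvas.getContext("2d")!.drawImage(bitmap, 0, 0, canvas.width, canvas.height);

  // Re-encode as JPEG at the given quality to shrink the payload further.
  return new Promise((resolve, reject) =>
    canvas.toBlob(
      (blob) => (blob ? resolve(blob) : reject(new Error("compression failed"))),
      "image/jpeg",
      quality,
    ),
  );
}
```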
I just solved the same problem; I hope this can help you. Change the code of isVisionModel to make sure your model name is included, and change the condition visionModel && modelConfig.model.includes("preview") so that your model name is included there as well.
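A rough sketch of the two changes the commenter describes. The keyword list and "my-custom-vision-model" are placeholders, and the helper applyVisionMaxTokens is an assumed wrapper, not the repository's exact code:

```ts
// Sketch only; names follow the thread's wording, not verified repository code.
const VISION_KEYWORDS = ["vision", "gpt-4v", "gemini-pro-vision", "my-custom-vision-model"];

// 1) Make vision detection match your model's name.
function isVisionModel(model: string): boolean {
  return VISION_KEYWORDS.some((keyword) => model.includes(keyword));
}

// 2) Widen the condition that forwards max_tokens so it is not limited
//    to model names containing "preview".
function applyVisionMaxTokens(
  payload: Record<string, unknown>,
  model: string,
  configuredMaxTokens: number,
): void {
  if (
    isVisionModel(model) &&
    (model.includes("preview") || model.includes("my-custom-vision-model"))
  ) {
    payload["max_tokens"] = Math.max(configuredMaxTokens, 4000);
  }
}
```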
Bug Description
Steps to Reproduce
No response
Expected Behavior
No response
Screenshots
No response
Deployment Method
Desktop OS
No response
Desktop Browser
No response
Desktop Browser Version
No response
Smartphone Device
No response
Smartphone OS
No response
Smartphone Browser
No response
Smartphone Browser Version
No response
Additional Logs
No response