
[Bug] custom model name does not work any more in the latest build version #4007

Closed
lizhe2004 opened this issue Feb 6, 2024 · 21 comments · Fixed by #4010

Comments

@lizhe2004

Describe the bug
Custom model names do not work any more in the latest build version; the console log shows "Model xxx not found in DEFAULT_MODELS array".

To Reproduce
Steps to reproduce the behavior:

  1. Go to the settings page
  2. Configure the API URL and a comma-separated list of custom model names
  3. Go to the chat page
  4. Select one of the custom model names
  5. Send a question on the chat page

Expected behavior
I expect a response from the custom model, rather than no response and a console error saying "Model xxx not found in DEFAULT_MODELS array".

I think the bug was introduced by the commit below, which only searches the DEFAULT_MODELS constant and does not take into account the custom models configured on the settings page.
a5517a1#diff-87cc040dd1bc561fde5c60b722355d5151ff86b7f70c325dfe403885603ea233R98
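
As a minimal sketch of the fix I have in mind (not the actual NextChat code; `isKnownModel` and the way the settings value is passed in are just illustrative), the lookup could accept both DEFAULT_MODELS and the user-configured custom list:

```typescript
interface Model {
  name: string;
}

// Stand-in for the real DEFAULT_MODELS constant in the repo.
const DEFAULT_MODELS: Model[] = [{ name: "gpt-3.5-turbo" }, { name: "gpt-4" }];

// `customModels` represents the comma-separated "Custom Models" value
// from the settings page, e.g. "qwen-turbo,qwen-plus".
function isKnownModel(name: string, customModels: string): boolean {
  const custom = customModels
    .split(",")
    .map((m) => m.trim())
    .filter((m) => m.length > 0);
  return DEFAULT_MODELS.some((m) => m.name === name) || custom.includes(name);
}

// Only warn when the model is in neither list.
if (!isKnownModel("qwen-turbo", "qwen-turbo,qwen-plus")) {
  console.warn("Model qwen-turbo not found in DEFAULT_MODELS array");
}
```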

Screenshots
If applicable, add screenshots to help explain your problem.
[screenshots attached]

Deployment

  • Docker
  • Vercel
  • Server

Desktop (please complete the following information):

  • OS: Windows
  • Browser: Chrome
  • Version: 121.0.6167.140
@nextchat-manager

Please follow the issue template to update the title and description of your issue.

@lizhe2004 lizhe2004 changed the title [custom model name does not work any more in the latest build version ] [Bug] custom model name does not work any more in the latest build version Feb 6, 2024
@H0llyW00dzZ
Contributor

At present, custom models from other AI providers are not available, particularly those we (contributors/devs) do not have access to. It's worth reconsidering how contributors, or any developers for that matter, can address bugs or make improvements for models they cannot access.

@H0llyW00dzZ
Contributor

Feel free to sponsor this project, for example by giving contributors/devs access; then they can easily fix and improve things.

@lizhe2004
Author

I use a service named one-api (https://github.com/songquanpeng/one-api) to convert some third-party model providers' APIs into an OpenAI-compatible API, so the Qwen model from Aliyun DashScope can provide OpenAI-format streaming responses, which the ChatGPT-Next-Web application accepts.
So I think the issue is not about third-party AI providers. It is that the model-name validation on the client side should take the custom model list from the settings page into consideration, rather than only the pre-configured DEFAULT_MODELS array; the custom model names worked in earlier versions, before the DEFAULT_MODELS check was added.

@H0llyW00dzZ
Contributor

> I use a service named one-api (https://github.com/songquanpeng/one-api) to convert some third-party model providers' APIs into an OpenAI-compatible API ... the custom model names worked in earlier versions, before the DEFAULT_MODELS check was added.

That's not actually an AI provider, so my earlier point still stands:

> At present, custom models from other AI providers are not available, particularly those we (contributors/devs) do not have access to. It's worth reconsidering how contributors, or any developers for that matter, can address bugs or make improvements for models they cannot access.

@lizhe2004
Author

[screenshot]

Sorry to bother you again.
What do you think is the goal or purpose of the "Custom Models" configuration on the settings page? Did I misunderstand how to use it? I think the "OpenAI Endpoint" and "Custom Models" settings can be used together to support an unknown new model that is compatible with the OpenAI API client, so that ChatGPT-Next-Web only has to focus on the OpenAI format.

@H0llyW00dzZ
Contributor

> Sorry to bother you again. What do you think is the goal or purpose of the "Custom Models" configuration on the settings page? ... so that ChatGPT-Next-Web only has to focus on the OpenAI format.

Well, I still don't have access to the other AI, so what can I do? Anyway, why should I fix it without being able to test anything when I don't have access, lmao.

@H0llyW00dzZ
Contributor

An alternative fix is to just disable the inject-system-prompts option, since it only applies to default models such as OpenAI and Google AI.

[screenshot]

@Algorithm5838
Contributor

Instead of throwing an error, it should simply omit the knowledge-cutoff date when one is not explicitly provided for the model.
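
Something like this sketch (the table and function names are illustrative, not NextChat's actual code):

```typescript
// Only the models we actually know a cutoff for are listed here.
const KNOWLEDGE_CUTOFF: Record<string, string> = {
  "gpt-3.5-turbo": "2021-09",
  "gpt-4": "2023-04",
};

function systemPrompt(model: string): string {
  const cutoff = KNOWLEDGE_CUTOFF[model];
  const lines = [`Current model: ${model}`];
  if (cutoff !== undefined) {
    // Only mention the cutoff when we actually know it.
    lines.push(`Knowledge cutoff: ${cutoff}`);
  }
  return lines.join("\n");
}

console.log(systemPrompt("qwen-turbo")); // no cutoff line, and no error
```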

@H0llyW00dzZ
Contributor

> explicitly

Explicit is better.

@Algorithm5838
Contributor

I believe a more effective implementation would be to have an option to select a default mask for new chats. This selection could be presented in the settings as 'Start chats with a default mask [ ].' Initially, a default ChatGPT mask might be selected, but users should have the capability to change this.

@H0llyW00dzZ
Contributor

> I believe a more effective implementation would be to have an option to select a default mask for new chats. ... users should have the capability to change this.

So 'Start chats with a default mask [ ]' could, for example, be customized by users instead of defaulting to the ChatGPT mask?

@lizhe2004
Author

i feel Algorithm5838 and H0llyW00dzZ you are talking about something about mask ,rather than my posted issue.

@Algorithm5838
Contributor

@H0llyW00dzZ Yes.

Currently, initiating a new chat requires either a default system message, a ChatGPT mask, or no system message. Although you can create masks, you must select your preferred one each time.

A proposed solution is to allow users to set a default mask for starting chats, via the revised setting 'Start chats with a default mask [ ]'. Users could select their preferred mask from the masks settings, which could include Bard, ChatGPT, or any of their creations like DictionaryGPT. This would prevent the raised issue, as users would write the system message for the default mask themselves.


@lizhe2004 It might appear so, but I'm recommending a different implementation for the code that gave rise to this issue. I'm currently using one-api, but I haven't updated my NextChat fork. If I did, I would likely encounter this same problem.

@H0llyW00dzZ
Contributor

H0llyW00dzZ commented Feb 6, 2024

> @H0llyW00dzZ Yes.
>
> A proposed solution is to allow users to set a default mask for starting chats, via the revised setting 'Start chats with a default mask [ ]'. ... users would write the system message for the default mask themselves.

It's easy to implement, just like writing a byte slice in Go.

I might try it later.
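
Something like this rough sketch of the setting (all names here are made up, not NextChat's actual types):

```typescript
interface Mask {
  id: string;
  name: string;
  systemMessage: string;
}

interface AppSettings {
  defaultMaskId?: string; // unset => start chats without a mask
}

// Resolve the mask a new chat should start with, if any.
function newChatMask(settings: AppSettings, masks: Mask[]): Mask | undefined {
  return masks.find((m) => m.id === settings.defaultMaskId);
}

const masks: Mask[] = [
  { id: "chatgpt", name: "ChatGPT", systemMessage: "You are ChatGPT." },
  { id: "dict", name: "DictionaryGPT", systemMessage: "Define words concisely." },
];

// The user picked DictionaryGPT once in settings; every new chat now
// starts with it instead of the default ChatGPT mask.
console.log(newChatMask({ defaultMaskId: "dict" }, masks)?.name); // "DictionaryGPT"
```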

@yangbod

yangbod commented Feb 6, 2024

v2.10.1 can use one-api; after updating to v2.10.2, it doesn't work.

@fred-bf
Contributor

fred-bf commented Feb 6, 2024

> v2.10.1 can use one-api; after updating to v2.10.2, it doesn't work.

Please create an individual issue for your question

@fred-bf
Contributor

fred-bf commented Feb 6, 2024

@lizhe2004 could you try the workaround in this PR (Preview link)? For the time being, the default system content falls back to OpenAI's defaults, and maintaining the relevant information when registering custom models on the server side will be considered in the future.

#4010
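
Roughly, the idea is something like this sketch (illustrative names only, not the actual PR diff):

```typescript
// When a model is missing from the known table, fall back to an
// OpenAI-style default cutoff so the request proceeds instead of
// erroring out.
const DEFAULT_CUTOFF = "2021-09";

const KNOWN_CUTOFF: Record<string, string> = {
  "gpt-4": "2023-04",
};

function cutoffFor(model: string): string {
  return KNOWN_CUTOFF[model] ?? DEFAULT_CUTOFF;
}

console.log(cutoffFor("qwen-turbo")); // "2021-09", no error for custom models
```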

@bbb3n

bbb3n commented Feb 7, 2024

> I feel @Algorithm5838 and @H0llyW00dzZ are talking about masks rather than the issue I posted.

True, dude, that's funny. I know what you mean.

[screenshot]

My solution is to hardcode the model list and set the provider to OpenAI. However, there are still some issues. For example, if I need to set a model with a custom display name, I must use the environment variable CUSTOM_MODEL.
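
Roughly what I did, as a sketch (the field names are assumptions, not the exact NextChat types):

```typescript
// Tag every hardcoded model with the OpenAI provider so the client
// treats them all as OpenAI-compatible (which is what one-api exposes).
interface ModelEntry {
  name: string;
  available: boolean;
  provider: { id: string; providerName: string };
}

const OPENAI = { id: "openai", providerName: "OpenAI" };

const HARDCODED_MODELS: ModelEntry[] = [
  { name: "qwen-turbo", available: true, provider: OPENAI },
  { name: "qwen-plus", available: true, provider: OPENAI },
];

console.log(HARDCODED_MODELS.map((m) => m.name).join(", "));
```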

@lizhe2004
Author

> @lizhe2004 could you try the workaround in this PR (Preview link)? ... will be considered in the future.
>
> #4010

Hi @fred-bf, your demo (Preview link) works for custom model names. Great.

@QAbot-zh

QAbot-zh commented Feb 7, 2024

> An alternative fix is to just disable the inject-system-prompts option, since it only applies to default models such as OpenAI and Google AI.

[screenshot]

It doesn't work. I disabled the inject-system-prompts option, but I still can't get a reply when I try to use a non-OpenAI/Google model. I'm using v2.10.1.

[screenshots attached]
