
update #4779

Closed — wants to merge 10 commits
1 change: 1 addition & 0 deletions .env.template
@@ -47,3 +47,4 @@ ENABLE_BALANCE_QUERY=
 # If you want to disable parse settings from url, set this value to 1.
 DISABLE_FAST_LINK=

+CUSTOM_MODELS=+glm-3-turbo,+glm-4,+ERNIE-Bot=文心一言3.5,+ERNIE-Bot-turbo=文心一言3.5turbo,+ERNIE-Bot-4,+BLOOMZ-7B,+Qianfan-Chinese-Llama-2-7B,+Qianfan-Chinese-Llama-2-13B,+ChatGLM2-6B-32K,+AquilaChat-7B,+SQLCoder-7B,+CodeLlama-7B-Instruct,+XuanYuan-70B-Chat-4bit,+ChatLaw,+Yi-34B-Chat,+Mixtral-8x7B-Instruct
Contributor


Consider using a more structured format for CUSTOM_MODELS to enhance maintainability.

- CUSTOM_MODELS=+glm-3-turbo,+glm-4,+ERNIE-Bot=文心一言3.5,+ERNIE-Bot-turbo=文心一言3.5turbo,+ERNIE-Bot-4,+BLOOMZ-7B,+Qianfan-Chinese-Llama-2-7B,+Qianfan-Chinese-Llama-2-13B,+ChatGLM2-6B-32K,+AquilaChat-7B,+SQLCoder-7B,+CodeLlama-7B-Instruct,+XuanYuan-70B-Chat-4bit,+ChatLaw,+Yi-34B-Chat,+Mixtral-8x7B-Instruct
+ CUSTOM_MODELS=glm-3-turbo,glm-4,ERNIE-Bot=文心一言3.5,ERNIE-Bot-turbo=文心一言3.5turbo,ERNIE-Bot-4,BLOOMZ-7B,Qianfan-Chinese-Llama-2-7B,Qianfan-Chinese-Llama-2-13B,ChatGLM2-6B-32K,AquilaChat-7B,SQLCoder-7B,CodeLlama-7B-Instruct,XuanYuan-70B-Chat-4bit,ChatLaw,Yi-34B-Chat,Mixtral-8x7B-Instruct

Committable suggestion was skipped due to low confidence.
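To make the reviewer's point concrete, here is a hypothetical helper (not code from this PR) sketching one way the `CUSTOM_MODELS` string above could be parsed into structured data, assuming the apparent syntax: an optional `+`/`-` prefix to enable or disable a model, and an optional `=label` suffix for a display name.

```typescript
// Hypothetical parser for the CUSTOM_MODELS env string; the shapes and
// prefix semantics are assumptions read off the example value above.
interface CustomModel {
  name: string;        // internal model id
  displayName: string; // UI label from the "name=label" syntax
  enabled: boolean;    // "+" (or no prefix) enables, "-" disables
}

function parseCustomModels(raw: string): CustomModel[] {
  return raw
    .split(",")
    .map((entry) => entry.trim())
    .filter((entry) => entry.length > 0)
    .map((entry) => {
      const enabled = !entry.startsWith("-");
      const body = entry.replace(/^[+-]/, "");
      // If there is no "=" the display name defaults to the model id.
      const [name, displayName = name] = body.split("=");
      return { name, displayName, enabled };
    });
}

const models = parseCustomModels("+glm-4,+ERNIE-Bot=文心一言3.5,-gpt-3.5-turbo");
```

A structured parse like this keeps the single env var convenient while giving the rest of the app typed objects instead of raw string fragments.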

19 changes: 19 additions & 0 deletions app/constant.ts
@@ -10,6 +10,7 @@ export const RUNTIME_CONFIG_DOM = "danger-runtime-config";

 export const DEFAULT_API_HOST = "https://api.nextchat.dev";
 export const OPENAI_BASE_URL = "https://api.openai.com";
+export const ZHIPU_BASE_URL = "";

 export const GEMINI_BASE_URL = "https://generativelanguage.googleapis.com/";

@@ -118,6 +119,24 @@ export const KnowledgeCutOffDate: Record<string, string> = {
 };

 export const DEFAULT_MODELS = [
+  {
+    name: "glm-4",
+    available: true,
+    provider: {
+      id: "openai",
+      providerName: "OpenAI",
+      providerType: "openai",
+    },
+  },
+  {
+    name: "glm-3-turbo",
+    available: true,
+    provider: {
+      id: "openai",
+      providerName: "OpenAI",
+      providerType: "openai",
+    },
+  },
   {
     name: "gpt-4",
     available: true,
3 changes: 2 additions & 1 deletion app/locales/cn.ts
@@ -360,7 +360,8 @@ const cn = {
   },
   Store: {
     DefaultTopic: "新的聊天",
-    BotHello: "有什么可以帮你的吗",
+    BotHello:
+      "有什么可以帮你的吗\nTips:所有内容都将发送到对应外部服务器,注意保密。",
     Error: "出错了,稍后重试吧",
     Prompt: {
       History: (content: string) => "这是历史聊天总结作为前情提要:" + content,
10 changes: 5 additions & 5 deletions app/store/config.ts
@@ -46,15 +46,15 @@ export const DEFAULT_CONFIG = {
models: DEFAULT_MODELS as any as LLMModel[],
Contributor


Specify a more precise type instead of any to enhance type safety.

- models: DEFAULT_MODELS as any as LLMModel[],
+ models: DEFAULT_MODELS as LLMModel[],

Also applies to: 166-166


Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation.

Suggested change
- models: DEFAULT_MODELS as any as LLMModel[],
+ models: DEFAULT_MODELS as LLMModel[],


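A minimal sketch of the reviewer's type-safety point, using assumed shapes rather than the app's real `LLMModel` type: a single `as T` assertion still requires structural overlap, while `as any as T` disables checking entirely.

```typescript
// Assumed minimal model shape for illustration only.
interface LLMModel {
  name: string;
  available: boolean;
}

const defaults = [{ name: "glm-4", available: true }];

// Single assertion: compiles only because the shapes genuinely overlap.
const models: LLMModel[] = defaults as LLMModel[];

// Double assertion: would also compile for a completely wrong shape, e.g.
//   [{ title: "oops" }] as any as LLMModel[]  // no compile error, unsafe later
```

If `DEFAULT_MODELS` already conforms to `LLMModel[]`, the single assertion is enough; if it does not, the double assertion is hiding a real mismatch rather than fixing it.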
   modelConfig: {
-    model: "gpt-3.5-turbo" as ModelType,
+    model: "glm-4" as ModelType,
     temperature: 0.5,
-    top_p: 1,
-    max_tokens: 4000,
+    top_p: 0.7,
+    max_tokens: 32768,
     presence_penalty: 0,
     frequency_penalty: 0,
     sendMemory: true,
-    historyMessageCount: 4,
-    compressMessageLengthThreshold: 1000,
+    historyMessageCount: 8,
+    compressMessageLengthThreshold: 16384,
     enableInjectSystemPrompts: true,
     template: DEFAULT_INPUT_TEMPLATE,
   },
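For readability, the defaults this hunk produces can be collected in one place. Values are copied from the diff; surrounding `DEFAULT_CONFIG` fields are unchanged and omitted here.

```typescript
// Default modelConfig after this PR (values from the hunk above).
const modelConfig = {
  model: "glm-4",                         // was "gpt-3.5-turbo"
  temperature: 0.5,
  top_p: 0.7,                             // was 1
  max_tokens: 32768,                      // was 4000
  presence_penalty: 0,
  frequency_penalty: 0,
  sendMemory: true,
  historyMessageCount: 8,                 // was 4
  compressMessageLengthThreshold: 16384,  // was 1000
  enableInjectSystemPrompts: true,
};
```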