[Bug] Wrong understanding of the max_tokens parameter
@Dogtiti Please update the copy here to follow the correct logic; the multi-language translations will need updating as well.
📦 Deployment Method
Vercel
📌 Version
all
💻 Operating System
Windows
📌 System Version
all
🌐 Browser
Chrome
📌 Browser Version
all
🐛 Bug Description
The screenshot above reflects a wrong understanding. The correct meaning is: the maximum number of tokens the model may generate for this request, i.e. the upper limit on tokens produced by a single LLM call, including both visible output tokens and reasoning tokens.
See:
https://community.openai.com/t/clarification-for-max-tokens/19576/4
and also
https://platform.openai.com/docs/api-reference/chat/create#chat-create-max_tokens
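For reference, a minimal sketch of the intended behavior (assuming the official `openai` Node SDK; the model name and prompt are hypothetical, for illustration only): `max_tokens` caps what a single call generates, not the prompt, and the generated total includes any reasoning tokens.

```ts
import OpenAI from "openai";

const client = new OpenAI();

async function demo() {
  const completion = await client.chat.completions.create({
    model: "gpt-4o-mini", // hypothetical choice for illustration
    messages: [
      { role: "user", content: "Explain max_tokens in one sentence." },
    ],
    // Upper bound on tokens generated by *this* call:
    // visible output tokens plus reasoning tokens (on reasoning models).
    // The prompt is not counted against this limit.
    max_tokens: 100,
  });

  // usage.completion_tokens counts everything the model generated,
  // so it should not exceed the max_tokens value above.
  console.log(completion.usage?.completion_tokens);
  console.log(completion.choices[0].message.content);
}

demo().catch(console.error);
```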
📷 Recurrence Steps
No response
🚦 Expected Behavior
No response
📝 Additional Information
No response