978ba3d
Server: Don't ignore llama.cpp params (#8754)

* Don't ignore llama.cpp params
* Add fallback for max_tokens
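The fix described in this commit — passing caller-supplied params through instead of dropping them, while filling in a default only when `max_tokens` is omitted — can be sketched as below. This is a minimal illustration, not the actual server code: the `Options` struct, field names, and the `defaultMaxTokens` value are all assumptions for the example.

```go
package main

import "fmt"

// Options stands in for per-request llama.cpp parameters.
// Field names are illustrative, not the project's real API.
type Options struct {
	Temperature *float64 // nil means the caller did not set it
	MaxTokens   int      // 0 means the caller did not set it
}

// defaultMaxTokens is an assumed fallback value for the sketch.
const defaultMaxTokens = 2048

// applyFallbacks preserves every param the caller supplied and only
// fills in max_tokens when the request omitted it, mirroring the
// behavior the commit message describes.
func applyFallbacks(o Options) Options {
	if o.MaxTokens == 0 {
		o.MaxTokens = defaultMaxTokens
	}
	return o
}

func main() {
	t := 0.7
	req := Options{Temperature: &t} // max_tokens omitted by the caller
	out := applyFallbacks(req)
	fmt.Println(out.MaxTokens)    // fallback applied for the missing field
	fmt.Println(*out.Temperature) // caller's param preserved, not ignored
}
```

The key point is that the fallback touches only the unset field; every explicitly provided parameter flows through untouched.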