Alias duplication for llama3 in config template #295

Open
joshbainbridge opened this issue Jun 23, 2024 · 1 comment
Labels: enhancement (New feature or request)

Comments

@joshbainbridge

In the default template the 'llama3' alias is used twice, once for groq and again for ollama. The ollama entry appears to take priority and masks the groq one. Should these have unique identifiers?
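For context, the relevant part of the default template looks roughly like this (a paraphrased sketch from memory; the exact field names, model IDs, and surrounding entries may differ from the shipped template). Both API sections register a 'llama3' alias:

```yaml
apis:
  groq:
    models:
      llama3-70b-8192:
        aliases: ["llama3-70b", "llama3"]   # 'llama3' alias defined here...
  ollama:
    models:
      "llama3:70b":
        aliases: ["llama3"]                 # ...and again here, masking the groq entry
```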

Also related to the ollama config: it currently targets the 'llama3:70b' model. I'd propose changing this to 'llama3', the default 8b model. Most users won't have the ~40GB of GPU memory needed to run 70b practically, and are more likely to already have the default model installed.
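A possible change along those lines might look like the following sketch (the 'llama3-ollama' alias is only a hypothetical example of a unique identifier, not something in the current template):

```yaml
  ollama:
    models:
      llama3:                        # default 8b model, more likely to be installed locally
        aliases: ["llama3-ollama"]   # hypothetical unique alias to avoid the clash with groq
```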

Great project by the way, really appreciate all the hard work.

caarlos0 added the enhancement (New feature or request) label on Jan 14, 2025
@caarlos0
Member

refs #300

If no --api is specified, it'll use the first one that matches.
