Amplify Gen2 Llama integration error #3058
On closer inspection, it looks like the models typed in Amplify are not available in "eu-central-1", and the models that are available there are not in the enum used to complete the model name.
Hey 👋, thanks for raising this! I'm going to transfer this over to our API repository for better assistance 🙂
I will add that I made it work with Llama 3.2 3B (available in eu-central-1) by updating the types and model mapping in the library, but I found that this specific model is only usable through inference profiles. That seems to be configurable only in conversations. Is there any plan to add it to generation? Thanks, and sorry for adding another dimension to this issue :)
@bogris refer to this example aws-amplify/docs#8121 (comment) for additional information while the team gets a chance to look into this issue.
@ykethan thanks for the update. I went through this example, but it is applicable only to conversations; in the case of generations, that parameter is not available. I am on backend 1.8, which is the latest on npm. Here is the typing for .generation():
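The typing snippet itself was lost from this comment. As a rough, hedged sketch (field names paraphrased from the Amplify Gen2 AI kit documentation, not copied from the package), the `.generation()` input shape is approximately:

```ts
// Approximate shape of the .generation() definition input as of backend 1.8.
// This is a paraphrase for illustration, not the actual exported type.
type GenerationInput = {
  aiModel: { resourcePath: string }; // produced by e.g. a.ai.model('Claude 3.5 Sonnet')
  systemPrompt: string;
  inferenceConfiguration?: {
    temperature?: number;
    topP?: number;
    maxTokens?: number;
  };
  // Note: unlike conversation routes, there is no field here for pointing the
  // route at a Bedrock inference profile, which is the commenter's point.
};
```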
Environment information
Describe the bug
With this config for a generation in Amplify Data:
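The original config block did not survive extraction. A hypothetical reconstruction of the kind of schema the report describes follows; the route name `aiCheckSong` comes from the call site mentioned later, while the model name, prompt, and argument/return shapes are assumptions:

```ts
// amplify/data/resource.ts — sketch, not the reporter's actual config
import { a, defineData, type ClientSchema } from '@aws-amplify/backend';

const schema = a.schema({
  aiCheckSong: a.generation({
    // A Llama model from the typed enum (exact name assumed)
    aiModel: a.ai.model('Llama 3.1 8B Instruct'),
    systemPrompt: 'Decide whether the given text is a song lyric.',
  })
    .arguments({ text: a.string() })
    .returns(a.customType({ isSong: a.boolean() }))
    .authorization((allow) => allow.authenticated()),
});

export type Schema = ClientSchema<typeof schema>;
export const data = defineData({ schema });
```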
A call from the frontend into api.generation.aiCheckSong will throw an error:
The same setup works with Anthropic Claude 3.5 as the model, and I get a correctly formatted object.
Looking at the docs, I don't see any such limitation for the Llama models.
Reproduction steps
Deploy a data API with the setup above.
From the frontend, call the API with:
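The actual call snippet is missing from the report. A hedged sketch of what such a client-side generation call looks like (the route name `aiCheckSong` is from the report; the argument shape and import path of the generated schema are assumptions):

```ts
// Sketch of the frontend call, assuming the schema sketched above
import { generateClient } from 'aws-amplify/data';
import type { Schema } from '../amplify/data/resource';

const client = generateClient<Schema>();

// Generation routes are exposed under client.generations in the AI kit
const { data, errors } = await client.generations.aiCheckSong({
  text: 'Never gonna give you up...',
});
```

With an Anthropic model this returns the typed object; with the Llama models in question, the reporter observes an error instead.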