Bad request? #9
I tried it today but couldn't reproduce this issue. "Bad request" is very generic; it could mean there is a problem with your key, or that an invalid request is being sent. Suppose there is no error with …
Hello, I have the exact same error. What is interesting is that I was able to query the API successfully exactly once before everything started failing. I asked it to explain a part of my code for testing, then codegpt started to return 400 no matter what.

I installed it with use-package:

```elisp
(use-package codegpt)
```

Then I customized the … EDIT: I just tried …

So I tried reproducing my error with generic code that has the same structure, and interestingly enough I couldn't make it fail until I had exactly this block; removing any one of those four lines fixes the issue:

```ruby
def action
  @model.assign_attributes({ attribute_name: 'value', **strong_params })
  @model.status = if @model.attribute_id == @other_model.relation.attribute_id # random comment
    'some_value'
  else
    'other_value'
  end
  @model.save!
  redirect_to :action_name
end
```
Stacktrace:
Note that my now revoked key was included in the above message (it wasn't revoked at the time I tried it).
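Since the customization that follows the use-package form was cut off above, here is a rough sketch of what such a setup might look like. The variable names `openai-key` and `openai-user` are assumptions based on the openai package that codegpt builds on, not something confirmed in this thread:

```elisp
;; Sketch only: the exact variables customized in the comment above were
;; truncated.  `openai-key' and `openai-user' are assumed from the openai
;; package; adjust to whatever your version actually defines.
(use-package codegpt
  :config
  (setq openai-key  "sk-...your-key..."   ; placeholder API key
        openai-user "you@example.com"))   ; placeholder account email
```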
I also asked it to improve my code, explaining (…). From the error messages, I see …; this seems like it should be well under 4097 tokens.
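As an aside on where the 4097 figure comes in (a general property of the model, not something stated explicitly at this point in the thread): text-davinci-003 shares its 4097-token context window between the prompt and the requested completion, so a request is rejected whenever

$$\text{prompt tokens} + \text{max\_tokens} > 4097,$$

even if the prompt itself is short.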
I think I once encountered something similar to this, and my best guess was escaping as well. But I eventually moved on since I couldn't pin down the culprit...
Thanks for posting your code here. I will give it a try, and see what I can do to resolve this!
I am not 100% sure how OpenAI calculates their tokens. I tried it today, but the token count seems a bit odd. 🤔
I also get the "peculiar error: 400" with well under 1000 tokens, pretty much for everything.
I suspect it has to do with having quotation marks in my prompt.
It looks good to me, so I have no idea why this ends up as Bad request 400. 😕
What did you run to see the request?
I printed it in the code, so there isn't a way by default. However, I've added a debug flag, so you can see it by …
According to https://platform.openai.com/tokenizer … (I didn't realize I could expand the …)
So it seems it is 236 tokens for the prompt using text-davinci-003. Where does the "4000 for the completion" come from?
If anyone wants to stare at the backtrace:
Ah, okay. Then I think this line is the culprit: line 66 in a8a8026.
Can you try to tweak the value down and see if it works? Everything kind of makes sense now. 🤔
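If it helps, a minimal sketch of the suggested tweak, assuming the value at the referenced line is exposed as a customizable variable named `codegpt-max-tokens` (the name is an assumption; check line 66 in a8a8026 for the actual symbol):

```elisp
;; A sketch of the suggested fix, not a confirmed API: lower the completion
;; budget so that prompt tokens plus requested completion tokens stay within
;; the model's 4097-token context.  With a default of 4000, even a 236-token
;; prompt overflows (236 + 4000 = 4236 > 4097), while 236 + 2000 fits easily.
(setq codegpt-max-tokens 2000)  ; assumed variable name; see the line linked above
```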
Thanks, I think that fixed it. It printed a version with updated comments. |
For example, running `M-x codegpt-improve`, I get in *Messages*:

…

Perhaps I have misconfigured codegpt and/or openai? Except of course `mysecretkey` and `myemailaddress` are the key from openai and the email address of the account, respectively. 400 suggests a client-side problem, making this look like an issue on my side?