
Bad request? #9

Closed
chriselrod opened this issue Mar 16, 2023 · 13 comments

Comments

@chriselrod

chriselrod commented Mar 16, 2023

For example, running M-x codegpt-improve, I get this in *Messages*:

[error] request--callback: peculiar error: 400
error in process sentinel: openai--handle-error: 400 - Bad request.  Please check error message and your parameters
error in process sentinel: 400 - Bad request.  Please check error message and your parameters

Perhaps I have misconfigured codegpt and/or openai?

(use-package openai
  :straight (openai :type git :host github :repo "emacs-openai/openai")
  :custom
  (openai-key "mysecretkey")
  (openai-user "myemailaddress"))

(use-package codegpt
  :straight (codegpt :type git :host github :repo "emacs-openai/codegpt"))

Except of course mysecretkey and myemailaddress are replaced with my actual OpenAI API key and account email address, respectively.

A 400 suggests a client-side problem, making this look like an issue on my side?

@jcs090218
Member

I tried it today but couldn't reproduce this issue. Bad request is very generic; it could mean there is a problem with your key, or that you are sending an invalid request. Assuming there is no error in the async library, and since I am not able to reproduce this, I suggest you check your environment: key, network connection, etc.
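
If it helps, here is a minimal sanity check you can evaluate in *scratch* to separate key/network problems from payload problems. It goes through the same request.el backend openai.el uses; this is only a sketch, assuming you have openai-key set (text-davinci-003 is just the model this thread's payloads use):

(require 'request)
(require 'json)
(require 'cl-lib)

;; Send the smallest possible completion request; a success means the
;; key and network are fine, so a 400 from codegpt would come from the
;; payload itself.
(request "https://api.openai.com/v1/completions"
  :type "POST"
  :headers `(("Content-Type" . "application/json")
             ("Authorization" . ,(concat "Bearer " openai-key)))
  :data (json-encode '(("model" . "text-davinci-003")
                       ("prompt" . "Say hello.")
                       ("max_tokens" . 16)))
  :parser 'json-read
  :success (cl-function
            (lambda (&key data &allow-other-keys)
              (message "OK: %S" data)))
  :error (cl-function
          (lambda (&key error-thrown data &allow-other-keys)
            (message "Failed %S: %S" error-thrown data))))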

@esilvert

esilvert commented Mar 16, 2023

Hello, I have the exact same error. What is interesting is that I was able to query the API successfully exactly once before anything failed. I asked it to explain a part of my code for testing, and then codegpt started returning 400 no matter what.

I've installed it with use-package:

(use-package codegpt)

Then I customized the OpenAI Key option in the openai customization group to set my secret key.

EDIT: I just tried codegpt-custom with a random question and it worked. Afterward, codegpt-explain kept working until I asked it to improve my Ruby code. Maybe some escaping is missing?

So I tried reproducing my error with generic code that has the same structure, and interestingly enough I couldn't make it fail until I had exactly this block; removing any one of these four lines fixes the issue...

  def action
    @model.assign_attributes({ attribute_name: 'value', **strong_params})

    @model.status = if @model.attribute_id == @other_model.relation.attribute_id # random comment
                      'some_value'
                    else
                      'other_value'
                    end

    @model.save!

    redirect_to :action_name
  end

@chriselrod
Author

chriselrod commented Mar 16, 2023

Stacktrace:

Debugger entered--Lisp error: (error "400 - Bad request.  Please check error message and...")
  signal(error ("400 - Bad request.  Please check error message and..."))
  error("400 - Bad request.  Please check error message and...")
  openai--handle-error(#s(request-response :status-code 400 :history nil :data ((error (message . "This model's maximum context length is 4097 tokens...") (type . "invalid_request_error") (param) (code))) :error-thrown (error http 400) :symbol-status error :url "https://api.openai.com/v1/completions" :done-p nil :settings (:error #f(compiled-function (&rest rest) #<bytecode 0xd9a45aac3277384>) :type "POST" :headers (("Content-Type" . "application/json") ("Authorization" . "Bearer sk-wwJZt6VXW8EqHjxEtSb0T3BlbkFJYQWJ2umf7rEF...")) :data "{\"model\":\"text-davinci-003\",\"prompt\":\"Please impro..." :parser json-read :success #f(compiled-function (&rest rest) #<bytecode -0x15797ceab376fcd9>) :url "https://api.openai.com/v1/completions" :response #1 :encoding utf-8) :-buffer #<killed buffer> :-raw-header "HTTP/2 400 \ndate: Thu, 16 Mar 2023 13:03:51 GMT\nco..." :-timer nil :-backend curl))
  #f(compiled-function (&rest rest) #<bytecode 0xd9a45aac3277384>)(:data ((error (message . "This model's maximum context length is 4097 tokens...") (type . "invalid_request_error") (param) (code))) :symbol-status error :error-thrown (error http 400) :response #s(request-response :status-code 400 :history nil :data ((error (message . "This model's maximum context length is 4097 tokens...") (type . "invalid_request_error") (param) (code))) :error-thrown (error http 400) :symbol-status error :url "https://api.openai.com/v1/completions" :done-p nil :settings (:error #f(compiled-function (&rest rest) #<bytecode 0xd9a45aac3277384>) :type "POST" :headers (("Content-Type" . "application/json") ("Authorization" . "Bearer sk-wwJZt6VXW8EqHjxEtSb0T3BlbkFJYQWJ2umf7rEF...")) :data "{\"model\":\"text-davinci-003\",\"prompt\":\"Please impro..." :parser json-read :success #f(compiled-function (&rest rest) #<bytecode -0x15797ceab376fcd9>) :url "https://api.openai.com/v1/completions" :response #8 :encoding utf-8) :-buffer #<killed buffer> :-raw-header "HTTP/2 400 \ndate: Thu, 16 Mar 2023 13:03:51 GMT\nco..." :-timer nil :-backend curl))
  apply(#f(compiled-function (&rest rest) #<bytecode 0xd9a45aac3277384>) (:data ((error (message . "This model's maximum context length is 4097 tokens...") (type . "invalid_request_error") (param) (code))) :symbol-status error :error-thrown (error http 400) :response #s(request-response :status-code 400 :history nil :data ((error (message . "This model's maximum context length is 4097 tokens...") (type . "invalid_request_error") (param) (code))) :error-thrown (error http 400) :symbol-status error :url "https://api.openai.com/v1/completions" :done-p nil :settings (:error #f(compiled-function (&rest rest) #<bytecode 0xd9a45aac3277384>) :type "POST" :headers (("Content-Type" . "application/json") ("Authorization" . "Bearer sk-wwJZt6VXW8EqHjxEtSb0T3BlbkFJYQWJ2umf7rEF...")) :data "{\"model\":\"text-davinci-003\",\"prompt\":\"Please impro..." :parser json-read :success #f(compiled-function (&rest rest) #<bytecode -0x15797ceab376fcd9>) :url "https://api.openai.com/v1/completions" :response #10 :encoding utf-8) :-buffer #<killed buffer> :-raw-header "HTTP/2 400 \ndate: Thu, 16 Mar 2023 13:03:51 GMT\nco..." :-timer nil :-backend curl)))
  request--callback(#<killed buffer> :error #f(compiled-function (&rest rest) #<bytecode 0xd9a45aac3277384>) :type "POST" :headers (("Content-Type" . "application/json") ("Authorization" . "Bearer sk-wwJZt6VXW8EqHjxEtSb0T3BlbkFJYQWJ2umf7rEF...")) :data "{\"model\":\"text-davinci-003\",\"prompt\":\"Please impro..." :parser json-read :success #f(compiled-function (&rest rest) #<bytecode -0x15797ceab376fcd9>) :url "https://api.openai.com/v1/completions" :response #s(request-response :status-code 400 :history nil :data ((error (message . "This model's maximum context length is 4097 tokens...") (type . "invalid_request_error") (param) (code))) :error-thrown (error http 400) :symbol-status error :url "https://api.openai.com/v1/completions" :done-p nil :settings (:error #f(compiled-function (&rest rest) #<bytecode 0xd9a45aac3277384>) :type "POST" :headers (("Content-Type" . "application/json") ("Authorization" . "Bearer sk-wwJZt6VXW8EqHjxEtSb0T3BlbkFJYQWJ2umf7rEF...")) :data "{\"model\":\"text-davinci-003\",\"prompt\":\"Please impro..." :parser json-read :success #f(compiled-function (&rest rest) #<bytecode -0x15797ceab376fcd9>) :url "https://api.openai.com/v1/completions" :response #17 :encoding utf-8) :-buffer #<killed buffer> :-raw-header "HTTP/2 400 \ndate: Thu, 16 Mar 2023 13:03:51 GMT\nco..." :-timer nil :-backend curl) :encoding utf-8)
  apply(request--callback #<killed buffer> (:error #f(compiled-function (&rest rest) #<bytecode 0xd9a45aac3277384>) :type "POST" :headers (("Content-Type" . "application/json") ("Authorization" . "Bearer sk-wwJZt6VXW8EqHjxEtSb0T3BlbkFJYQWJ2umf7rEF...")) :data "{\"model\":\"text-davinci-003\",\"prompt\":\"Please impro..." :parser json-read :success #f(compiled-function (&rest rest) #<bytecode -0x15797ceab376fcd9>) :url "https://api.openai.com/v1/completions" :response #s(request-response :status-code 400 :history nil :data ((error (message . "This model's maximum context length is 4097 tokens...") (type . "invalid_request_error") (param) (code))) :error-thrown (error http 400) :symbol-status error :url "https://api.openai.com/v1/completions" :done-p nil :settings #3 :-buffer #<killed buffer> :-raw-header "HTTP/2 400 \ndate: Thu, 16 Mar 2023 13:03:51 GMT\nco..." :-timer nil :-backend curl) :encoding utf-8))
  request--curl-callback("https://api.openai.com/v1/completions" #<process request curl> "finished\n")
  apply(request--curl-callback ("https://api.openai.com/v1/completions" #<process request curl> "finished\n"))
  #f(compiled-function (&rest args2) #<bytecode 0x18e231ee82eb3ddd>)(#<process request curl> "finished\n")

Note that my now-revoked key was included in the message above (it wasn't revoked at the time I tried it).

*CodeGPT*

Please improve the following.

constexpr auto getMaxDigits(PtrMatrix<Rational> A) -> Vector<size_t> {
  size_t M = size_t(A.numRow());
  size_t N = size_t(A.numCol());
  Vector<size_t> maxDigits{unsigned(N), 0};
  invariant(size_t(maxDigits.size()), N);
  // this is slow, because we count the digits of every element
  // we could optimize this by reducing the number of calls to countDigits
  for (Row i = 0; i < M; i++) {
    for (size_t j = 0; j < N; j++) {
      size_t c = countDigits(A(i, j));
      maxDigits[j] = std::max(maxDigits[j], c);
    }
  }
  return maxDigits;
}

I also asked it to improve my code. Explaining ("What is the following?") didn't work either.

From the error messages, I see

"This model's maximum context length is 4097 tokens..."

This seems like it should be well under 4097 tokens.
Is it passing a large amount of additional context?

@jcs090218
Member

EDIT: I just tried codegpt-custom with a random question and it worked. Afterward, codegpt-explain kept working until I asked it to improve my Ruby code. Maybe some escaping is missing?

I think I once encountered something similar to this, and my best guess was escaping as well. But I eventually moved on since I couldn't pin down the culprit...

So I tried reproducing my error with generic code that has the same structure, and interestingly enough I couldn't make it fail until I had exactly this block; removing any one of these four lines fixes the issue...

Thanks for posting your code here. I will give it a try, and see what I can do to resolve this!

This seems like it should be well under 4097 tokens.
Is it passing a large amount of additional context?

I am not 100% sure how OpenAI calculates their tokens. I tried it today, but the token "count" seems to be a bit odd. 🤔

@johanvts

I also get the "peculiar error: 400" with well under 1000 tokens, pretty much for everything.

@johanvts

I suspect it has to do with having quotation marks in my prompt.

@jcs090218
Member

openai.el uses json-encode to encode the value; here is a sample result from the Ruby code above (#9 (comment)).

{"model":"text-davinci-003","prompt":"Please improve the following.\n\ndef action\n  @model.assign_attributes({ attribute_name: 'value', **strong_params})\n\n  @model.status = if @model.attribute_id == @other_model.relation.attribute_id # random comment\n                    'some_value'\n                  else\n                    'other_value'\n                  end\n\n  @model.save!\n\n  redirect_to :action_name\nend\n\n","max_tokens":4000,"temperature":1.0}

It looks good to me, so I have no idea why this ends up as a 400 Bad Request. 😕
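
If anyone wants to double-check the escaping theory, json-encode can be exercised directly in *scratch*; the sample string below is made up, but it shows that quotation marks and newlines do get escaped:

(require 'json)

;; json-encode escapes quotes and newlines itself, so bare quotation
;; marks in the prompt alone should not corrupt the JSON payload.
(json-encode '(("prompt" . "code with \"quotes\" and\na newline")))
;; => {"prompt":"code with \"quotes\" and\na newline"}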

@chriselrod
Author

What did you run to see the request?

@jcs090218
Member

I printed it by modifying the code, so there wasn't a way to see it by default. However, I've added a debug flag, so you can now see it with (setq openai--show-log t). Make sure you update to the latest version!

@chriselrod
Author

chriselrod commented Mar 18, 2023

[ENCODED]: {"model":"text-davinci-003","prompt":"Please improve the following.\n\n/// \\brief Returns the maximum number of digits per column of a matrix.\nconstexpr auto getMaxDigits(PtrMatrix<Rational> A) -> Vector<size_t> {\n  size_t M = size_t(A.numRow());\n  size_t N = size_t(A.numCol());\n  Vector<size_t> maxDigits{unsigned(N), 0};\n  invariant(size_t(maxDigits.size()), N);\n  // this is slow, because we count the digits of every element\n  // we could optimize this by reducing the number of calls to countDigits\n  for (Row i = 0; i < M; i++) {\n    for (size_t j = 0; j < N; j++) {\n      size_t c = countDigits(A(i, j));\n      maxDigits[j] = std::max(maxDigits[j], c);\n    }\n  }\n  return maxDigits;\n}\n\n","max_tokens":4000,"temperature":1.0,"user":"[email protected]"}
[error] request--callback: peculiar error: 400
openai--handle-error: 400 - Bad request.  Please check error message and your parameters

According to https://platform.openai.com/tokenizer,
this corresponds to 265 tokens for GPT-3?

I didn't realize I could expand the ... in the error messages.
The full error message says:

"This model's maximum context length is 4097 tokens, however you requested 4236 tokens (236 in your prompt; 4000 for the completion). Please reduce your prompt; or completion length."

So it seems the prompt is 236 tokens using text-davinci-003.

Where does the "4000 for the completion" come from?
I see we're setting "max_tokens":4000, but your query had this too, and its prompt was also at least 97 tokens, so it doesn't seem to be simply prompt + 4000 vs 4097.

@chriselrod
Author

If anyone wants to stare at the backtrace:

Debugger entered--Lisp error: (error "400 - Bad request.  Please check error message and...")
  error("400 - Bad request.  Please check error message and your parameters")
  openai--handle-error(#s(request-response :status-code 400 :history nil :data ((error (message . "This model's maximum context length is 4097 tokens, however you requested 4236 tokens (236 in your prompt; 4000 for the completion). Please reduce your prompt; or completion length.") (type . "invalid_request_error") (param) (code))) :error-thrown (error http 400) :symbol-status error :url "https://api.openai.com/v1/completions" :done-p nil :settings (:error #<subr F616e6f6e796d6f75732d6c616d626461_anonymous_lambda_11> :type "POST" :headers (("Content-Type" . "application/json") ("Authorization" . "Bearer sk-mykey...")) :data "{\"model\":\"text-davinci-003\",\"prompt\":\"Please impro..." :parser json-read :success #f(compiled-function (&rest rest) #<bytecode -0x160b54aee9a256d9>) :url "https://api.openai.com/v1/completions" :response #1 :encoding utf-8) :-buffer #<killed buffer> :-raw-header "HTTP/2 400 \ndate: Sat, 18 Mar 2023 04:24:24 GMT\ncontent-type: application/json\ncontent-length: 294\naccess-control-allow-origin: *\nopenai-model: text-davinci-003\nopenai-organization: user-kfjwl04tenq80dxhlhnwmto6\nopenai-processing-ms: 3\nopenai-version: 2020-10-01\nstrict-transport-security: max-age=15724800; includeSubDomains\nx-ratelimit-limit-requests: 60\nx-ratelimit-limit-tokens: 150000\nx-ratelimit-remaining-requests: 59\nx-ratelimit-remaining-tokens: 146000\nx-ratelimit-reset-requests: 1s\nx-ratelimit-reset-tokens: 1.6s\nx-request-id: 7bef8a7e775e32e65ed6632c41e77ce7\n" :-timer nil :-backend curl))
  #<subr F616e6f6e796d6f75732d6c616d626461_anonymous_lambda_11>(:data ((error (message . "This model's maximum context length is 4097 tokens...") (type . "invalid_request_error") (param) (code))) :symbol-status error :error-thrown (error http 400) :response #s(request-response :status-code 400 :history nil :data ((error (message . "This model's maximum context length is 4097 tokens...") (type . "invalid_request_error") (param) (code))) :error-thrown (error http 400) :symbol-status error :url "https://api.openai.com/v1/completions" :done-p nil :settings (:error #<subr F616e6f6e796d6f75732d6c616d626461_anonymous_lambda_11> :type "POST" :headers (("Content-Type" . "application/json") ("Authorization" . "Bearer sk-mykey...")) :data "{\"model\":\"text-davinci-003\",\"prompt\":\"Please impro..." :parser json-read :success #f(compiled-function (&rest rest) #<bytecode -0x160b54aee9a256d9>) :url "https://api.openai.com/v1/completions" :response #8 :encoding utf-8) :-buffer #<killed buffer> :-raw-header "HTTP/2 400 \ndate: Sat, 18 Mar 2023 04:24:24 GMT\nco..." :-timer nil :-backend curl))
  apply(#<subr F616e6f6e796d6f75732d6c616d626461_anonymous_lambda_11> (:data ((error (message . "This model's maximum context length is 4097 tokens...") (type . "invalid_request_error") (param) (code))) :symbol-status error :error-thrown (error http 400) :response #s(request-response :status-code 400 :history nil :data ((error (message . "This model's maximum context length is 4097 tokens...") (type . "invalid_request_error") (param) (code))) :error-thrown (error http 400) :symbol-status error :url "https://api.openai.com/v1/completions" :done-p nil :settings (:error #<subr F616e6f6e796d6f75732d6c616d626461_anonymous_lambda_11> :type "POST" :headers (("Content-Type" . "application/json") ("Authorization" . "Bearer sk-mykey...")) :data "{\"model\":\"text-davinci-003\",\"prompt\":\"Please impro..." :parser json-read :success #f(compiled-function (&rest rest) #<bytecode -0x160b54aee9a256d9>) :url "https://api.openai.com/v1/completions" :response #10 :encoding utf-8) :-buffer #<killed buffer> :-raw-header "HTTP/2 400 \ndate: Sat, 18 Mar 2023 04:24:24 GMT\nco..." :-timer nil :-backend curl)))
  request--callback(#<killed buffer> :error #<subr F616e6f6e796d6f75732d6c616d626461_anonymous_lambda_11> :type "POST" :headers (("Content-Type" . "application/json") ("Authorization" . "Bearer sk-mykey...")) :data "{\"model\":\"text-davinci-003\",\"prompt\":\"Please impro..." :parser json-read :success #f(compiled-function (&rest rest) #<bytecode -0x160b54aee9a256d9>) :url "https://api.openai.com/v1/completions" :response #s(request-response :status-code 400 :history nil :data ((error (message . "This model's maximum context length is 4097 tokens...") (type . "invalid_request_error") (param) (code))) :error-thrown (error http 400) :symbol-status error :url "https://api.openai.com/v1/completions" :done-p nil :settings (:error #<subr F616e6f6e796d6f75732d6c616d626461_anonymous_lambda_11> :type "POST" :headers (("Content-Type" . "application/json") ("Authorization" . "Bearer sk-mykey...")) :data "{\"model\":\"text-davinci-003\",\"prompt\":\"Please improve the following.\\n\\n/// \\\\brief Returns the maximum number of digits per column of a matrix.\\nconstexpr auto getMaxDigits(PtrMatrix<Rational> A) -> Vector<size_t> {\\n  size_t M = size_t(A.numRow());\\n  size_t N = size_t(A.numCol());\\n  Vector<size_t..." :parser json-read :success #f(compiled-function (&rest rest) #<bytecode -0x160b54aee9a256d9>) :url "https://api.openai.com/v1/completions" :response #17 :encoding utf-8) :-buffer #<killed buffer> :-raw-header "HTTP/2 400 \ndate: Sat, 18 Mar 2023 04:24:24 GMT\nco..." :-timer nil :-backend curl) :encoding utf-8)
  apply(request--callback #<killed buffer> (:error #<subr F616e6f6e796d6f75732d6c616d626461_anonymous_lambda_11> :type "POST" :headers (("Content-Type" . "application/json") ("Authorization" . "Bearer sk-mykey...")) :data "{\"model\":\"text-davinci-003\",\"prompt\":\"Please improve the following.\\n\\n/// \\\\brief Returns the maximum number of digits per column of a matrix.\\nconstexpr auto getMaxDigits(PtrMatrix<Rational> A) -> Vector<size_t> {\\n  size_t M = size_t(A.numRow());\\n  size_t N = size_t(A.numCol());\\n  Vector<size_t> maxDigits{unsigned(N), 0};\\n  invariant(size_t(maxDigits.size()), N);\\n  // this is slow, because we count the digits of every element\\n  // we could optimize this by reducing the number of calls to countDigits\\n  for (Row i = 0; i < M; i++) {\\n    for (size_t j = 0; j < N; j++) {\\n      size_t c = countDigits(A(i, j));\\n      maxDigits[j] = std::max(maxDigits[j], c);\\n    }\\n  }\\n  return maxDigits;\\n}\\n\\n\",\"max_tokens\":4000,\"temperature\":1.0,\"user\":\"[email protected]\"}" :parser json-read :success #f(compiled-function (&rest rest) #<bytecode -0x160b54aee9a256d9>) :url "https://api.openai.com/v1/completions" :response #s(request-response :status-code 400 :history nil :data ((error (message . "This model's maximum context length is 4097 tokens, however you requested 4236 tokens (236 in your prompt; 4000 for the completion). Please reduce your prompt; or completion length.") (type . "invalid_request_error") (param) (code))) :error-thrown (error http 400) :symbol-status error :url "https://api.openai.com/v1/completions" :done-p nil :settings #3 :-buffer #<killed buffer> :-raw-header "HTTP/2 400 \ndate: Sat, 18 Mar 2023 04:24:24 GMT\ncontent-type: application/json\ncontent-length: 294\naccess-control-allow-origin: *\nopenai-model: text-davinci-003\nopenai-organization: user-kfjwl04tenq80dxhlhnwmto6\nopenai-processing-ms: 3\nopenai-version: 2020-10-01\nstrict-transport-security: max-age=15724800; includeSubDomains\nx-ratelimit-limit-requests: 60\nx-ratelimit-limit-tokens: 150000\nx-ratelimit-remaining-requests: 59\nx-ratelimit-remaining-tokens: 146000\nx-ratelimit-reset-requests: 1s\nx-ratelimit-reset-tokens: 1.6s\nx-request-id: 7bef8a7e775e32e65ed6632c41e77ce7\n" :-timer nil :-backend curl) :encoding utf-8))
  request--curl-callback("https://api.openai.com/v1/completions" #<process request curl> "finished\n")
  apply(request--curl-callback ("https://api.openai.com/v1/completions" #<process request curl> "finished\n"))
  #f(compiled-function (&rest args2) #<bytecode 0x18e21520d85a7ddd>)(#<process request curl> "finished\n")

@jcs090218
Member

Ah, okay. Then I think this line is the culprit?

(defcustom codegpt-max-tokens 4000

Can you try tweaking the value down and see if it works? Everything kinda makes sense now. 🤔
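
The numbers from the error message line up with this: 236 prompt tokens + 4000 reserved for the completion = 4236 > 4097, since the API appears to count max_tokens against the context window up front. A sketch of the workaround (3800 is an arbitrary smaller value; pick whatever leaves room for your longest prompt):

;; Keep prompt tokens + completion budget within the model's
;; 4097-token context window.
(setq codegpt-max-tokens 3800)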

@chriselrod
Copy link
Author

chriselrod commented Mar 18, 2023

Thanks, I think that fixed it. It printed a version with updated comments.

@jcs090218 jcs090218 pinned this issue Mar 20, 2023