This repository has been archived by the owner on May 17, 2024. It is now read-only.

Fix max_token=2 for togetherAI #89

Open
yujonglee opened this issue Sep 7, 2023 · 1 comment
Comments

@yujonglee
Owner

tests/evaluation/test_with_yelp_review.py::test_llm_grading_head[togethercomputer/llama-2-70b-chat-references1]
  /Users/yujonglee/dev/fastrepl/fastrepl/fastrepl/warnings.py:24: UnknownLLMExceptionWarning: ValueError: Traceback (most recent call last):
    File "/Users/yujonglee/dev/fastrepl/fastrepl/.venv/lib/python3.11/site-packages/litellm/utils.py", line 1755, in exception_type
      error_response = json.loads(error_str)
                       ^^^^^^^^^^^^^^^^^^^^^
    File "/Users/yujonglee/.pyenv/versions/3.11.3/lib/python3.11/json/__init__.py", line 346, in loads
      return _default_decoder.decode(s)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "/Users/yujonglee/.pyenv/versions/3.11.3/lib/python3.11/json/decoder.py", line 337, in decode
      obj, end = self.raw_decode(s, idx=_w(s, 0).end())
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "/Users/yujonglee/.pyenv/versions/3.11.3/lib/python3.11/json/decoder.py", line 355, in raw_decode
      raise JSONDecodeError("Expecting value", s, err.value) from None
  json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

  During handling of the above exception, another exception occurred:

  Traceback (most recent call last):
    File "/Users/yujonglee/dev/fastrepl/fastrepl/fastrepl/llm.py", line 138, in _completion
      result = litellm.gpt_cache.completion(  # pragma: no cover
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "/Users/yujonglee/dev/fastrepl/fastrepl/.venv/lib/python3.11/site-packages/gptcache/adapter/openai.py", line 100, in create
      return adapt(
             ^^^^^^
    File "/Users/yujonglee/dev/fastrepl/fastrepl/.venv/lib/python3.11/site-packages/gptcache/adapter/adapter.py", line 238, in adapt
      llm_data = time_cal(
                 ^^^^^^^^^
    File "/Users/yujonglee/dev/fastrepl/fastrepl/.venv/lib/python3.11/site-packages/gptcache/utils/time.py", line 9, in inner
      res = func(*args, **kwargs)
            ^^^^^^^^^^^^^^^^^^^^^
    File "/Users/yujonglee/dev/fastrepl/fastrepl/.venv/lib/python3.11/site-packages/litellm/gpt_cache.py", line 12, in _llm_handler
      return litellm.completion(*llm_args, **llm_kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "/Users/yujonglee/dev/fastrepl/fastrepl/.venv/lib/python3.11/site-packages/litellm/utils.py", line 565, in wrapper
      raise e
    File "/Users/yujonglee/dev/fastrepl/fastrepl/.venv/lib/python3.11/site-packages/litellm/utils.py", line 526, in wrapper
      result = original_function(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "/Users/yujonglee/dev/fastrepl/fastrepl/.venv/lib/python3.11/site-packages/litellm/timeout.py", line 44, in wrapper
      result = future.result(timeout=local_timeout_duration)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "/Users/yujonglee/.pyenv/versions/3.11.3/lib/python3.11/concurrent/futures/_base.py", line 456, in result
      return self.__get_result()
             ^^^^^^^^^^^^^^^^^^^
    File "/Users/yujonglee/.pyenv/versions/3.11.3/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result
      raise self._exception
    File "/Users/yujonglee/dev/fastrepl/fastrepl/.venv/lib/python3.11/site-packages/litellm/timeout.py", line 33, in async_func
      return func(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^
    File "/Users/yujonglee/dev/fastrepl/fastrepl/.venv/lib/python3.11/site-packages/litellm/main.py", line 825, in completion
      raise exception_type(
            ^^^^^^^^^^^^^^^
    File "/Users/yujonglee/dev/fastrepl/fastrepl/.venv/lib/python3.11/site-packages/litellm/utils.py", line 1826, in exception_type
      raise original_exception
    File "/Users/yujonglee/dev/fastrepl/fastrepl/.venv/lib/python3.11/site-packages/litellm/main.py", line 559, in completion
      model_response = together_ai.completion(
                       ^^^^^^^^^^^^^^^^^^^^^^^
    File "/Users/yujonglee/dev/fastrepl/fastrepl/.venv/lib/python3.11/site-packages/litellm/llms/together_ai.py", line 110, in completion
      model_response["choices"][0]["message"]["content"] = completion_response
      ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^
    File "/Users/yujonglee/dev/fastrepl/fastrepl/.venv/lib/python3.11/site-packages/openai/openai_object.py", line 71, in __setitem__
      raise ValueError(
  ValueError: You cannot set content to an empty string. We interpret empty strings as None in requests.You may set {
    "content": "default",
    "role": "assistant",
    "logprobs": null
  }.content = None to delete the property
   | https://docs.fastrepl.com/miscellaneous/warnings_and_errors#unknownllmexception

This is not a problem on LiteLLM's side.

@ishaan-jaff

This is because the response from TogetherAI was empty, btw.
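For context, the traceback shows litellm assigning the raw TogetherAI completion text to the OpenAI response object, which raises a ValueError when that text is an empty string (as happens when `max_tokens=2` leaves no room for output). A minimal sketch of a guard that would avoid the crash; the function name and the response dict shape below are assumptions for illustration, not LiteLLM's actual code:

```python
# Hypothetical guard against an empty completion string.
# The dict shape mirrors the traceback above; openai_object
# rejects "" as message content but accepts None.

def set_message_content(model_response: dict, completion_response: str) -> dict:
    message = model_response["choices"][0]["message"]
    if completion_response == "":
        # Empty strings are interpreted as None in requests,
        # so store None instead of raising.
        message["content"] = None
    else:
        message["content"] = completion_response
    return model_response

response = {"choices": [{"message": {"role": "assistant", "content": None}}]}
set_message_content(response, "")      # content stays None, no ValueError
set_message_content(response, "4/5")   # content set normally
```

The alternative (and arguably better) fix is upstream: detect the empty response from TogetherAI and raise a clear error or retry, rather than silently storing None.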
