'The model text-davinci-003 has been deprecated #502

Open · jerronl opened this issue Jun 18, 2024 · 2 comments

jerronl commented Jun 18, 2024

I tried the demo code in https://lablab.ai/t/ai-agents-tutorial-how-to-use-and-create-them and got the following error:

llm_chain.run("What is lablab.ai")
LangChainDeprecationWarning: The method `Chain.run` was deprecated in langchain 0.1.0 and will be removed in 0.3.0. Use invoke instead.
  warn_deprecated(

---------------------------------------------------------------------------
NotFoundError                             Traceback (most recent call last)
Cell In[6], line 1
----> 1 llm_chain.run("What is lablab.ai")

File c:\Users\jerron\.conda\envs\autogenstudio\Lib\site-packages\langchain_core\_api\deprecation.py:168, in deprecated.<locals>.deprecate.<locals>.warning_emitting_wrapper(*args, **kwargs)
    166     warned = True
    167     emit_warning()
--> 168 return wrapped(*args, **kwargs)

File c:\Users\jerron\.conda\envs\autogenstudio\Lib\site-packages\langchain\chains\base.py:600, in Chain.run(self, callbacks, tags, metadata, *args, **kwargs)
    598     if len(args) != 1:
    599         raise ValueError("`run` supports only one positional argument.")
--> 600     return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
    601         _output_key
    602     ]
    604 if kwargs and not args:
    605     return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
    606         _output_key
    607     ]

File c:\Users\jerron\.conda\envs\autogenstudio\Lib\site-packages\langchain_core\_api\deprecation.py:168, in deprecated.<locals>.deprecate.<locals>.warning_emitting_wrapper(*args, **kwargs)
    166     warned = True
    167     emit_warning()
--> 168 return wrapped(*args, **kwargs)

File c:\Users\jerron\.conda\envs\autogenstudio\Lib\site-packages\langchain\chains\base.py:383, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
    351 """Execute the chain.
    352
    353 Args:
   (...)
    374         `Chain.output_keys`.
    375 """
    376 config = {
    377     "callbacks": callbacks,
    378     "tags": tags,
    379     "metadata": metadata,
    380     "run_name": run_name,
    381 }
--> 383 return self.invoke(
    384     inputs,
    385     cast(RunnableConfig, {k: v for k, v in config.items() if v is not None}),
    386     return_only_outputs=return_only_outputs,
    387     include_run_info=include_run_info,
    388 )

File c:\Users\jerron\.conda\envs\autogenstudio\Lib\site-packages\langchain\chains\base.py:166, in Chain.invoke(self, input, config, **kwargs)
    164 except BaseException as e:
    165     run_manager.on_chain_error(e)
--> 166     raise e
    167 run_manager.on_chain_end(outputs)
    169 if include_run_info:

File c:\Users\jerron\.conda\envs\autogenstudio\Lib\site-packages\langchain\chains\base.py:156, in Chain.invoke(self, input, config, **kwargs)
    153 try:
    154     self._validate_inputs(inputs)
    155     outputs = (
--> 156         self._call(inputs, run_manager=run_manager)
    157         if new_arg_supported
    158         else self._call(inputs)
    159     )
    161     final_outputs: Dict[str, Any] = self.prep_outputs(
    162         inputs, outputs, return_only_outputs
    163     )
    164 except BaseException as e:

File c:\Users\jerron\.conda\envs\autogenstudio\Lib\site-packages\langchain\chains\llm.py:126, in LLMChain._call(self, inputs, run_manager)
    121 def _call(
    122     self,
    123     inputs: Dict[str, Any],
    124     run_manager: Optional[CallbackManagerForChainRun] = None,
    125 ) -> Dict[str, str]:
--> 126     response = self.generate([inputs], run_manager=run_manager)
    127     return self.create_outputs(response)[0]

File c:\Users\jerron\.conda\envs\autogenstudio\Lib\site-packages\langchain\chains\llm.py:138, in LLMChain.generate(self, input_list, run_manager)
    136 callbacks = run_manager.get_child() if run_manager else None
    137 if isinstance(self.llm, BaseLanguageModel):
--> 138     return self.llm.generate_prompt(
    139         prompts,
    140         stop,
    141         callbacks=callbacks,
    142         **self.llm_kwargs,
    143     )
    144 else:
    145     results = self.llm.bind(stop=stop, **self.llm_kwargs).batch(
    146         cast(List, prompts), {"callbacks": callbacks}
    147     )

File c:\Users\jerron\.conda\envs\autogenstudio\Lib\site-packages\langchain_core\language_models\llms.py:633, in BaseLLM.generate_prompt(self, prompts, stop, callbacks, **kwargs)
    625 def generate_prompt(
    626     self,
    627     prompts: List[PromptValue],
   (...)
    630     **kwargs: Any,
    631 ) -> LLMResult:
    632     prompt_strings = [p.to_string() for p in prompts]
--> 633     return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)

File c:\Users\jerron\.conda\envs\autogenstudio\Lib\site-packages\langchain_core\language_models\llms.py:803, in BaseLLM.generate(self, prompts, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
    788 if (self.cache is None and get_llm_cache() is None) or self.cache is False:
    789     run_managers = [
    790         callback_manager.on_llm_start(
    791             dumpd(self),
   (...)
    801         )
    802     ]
--> 803     output = self._generate_helper(
    804         prompts, stop, run_managers, bool(new_arg_supported), **kwargs
    805     )
    806     return output
    807 if len(missing_prompts) > 0:

File c:\Users\jerron\.conda\envs\autogenstudio\Lib\site-packages\langchain_core\language_models\llms.py:670, in BaseLLM._generate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs)
    668     for run_manager in run_managers:
    669         run_manager.on_llm_error(e, response=LLMResult(generations=[]))
--> 670     raise e
    671 flattened_outputs = output.flatten()
    672 for manager, flattened_output in zip(run_managers, flattened_outputs):

File c:\Users\jerron\.conda\envs\autogenstudio\Lib\site-packages\langchain_core\language_models\llms.py:657, in BaseLLM._generate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs)
    647 def _generate_helper(
    648     self,
    649     prompts: List[str],
   (...)
    653     **kwargs: Any,
    654 ) -> LLMResult:
    655     try:
    656         output = (
--> 657             self._generate(
    658                 prompts,
    659                 stop=stop,
    660                 # TODO: support multiple run managers
    661                 run_manager=run_managers[0] if run_managers else None,
    662                 **kwargs,
    663             )
    664             if new_arg_supported
    665             else self._generate(prompts, stop=stop)
    666         )
    667     except BaseException as e:
    668         for run_manager in run_managers:

File c:\Users\jerron\.conda\envs\autogenstudio\Lib\site-packages\langchain_community\llms\openai.py:460, in BaseOpenAI._generate(self, prompts, stop, run_manager, **kwargs)
    448     choices.append(
    449         {
    450             "text": generation.text,
   (...)
    457         }
    458     )
    459 else:
--> 460     response = completion_with_retry(
    461         self, prompt=_prompts, run_manager=run_manager, **params
    462     )
    463     if not isinstance(response, dict):
    464         # V1 client returns the response in an PyDantic object instead of
    465         # dict. For the transition period, we deep convert it to dict.
    466         response = response.dict()

File c:\Users\jerron\.conda\envs\autogenstudio\Lib\site-packages\langchain_community\llms\openai.py:115, in completion_with_retry(llm, run_manager, **kwargs)
    113 """Use tenacity to retry the completion call."""
    114 if is_openai_v1():
--> 115     return llm.client.create(**kwargs)
    117 retry_decorator = _create_retry_decorator(llm, run_manager=run_manager)
    119 @retry_decorator
    120 def _completion_with_retry(**kwargs: Any) -> Any:

File c:\Users\jerron\.conda\envs\autogenstudio\Lib\site-packages\openai\_utils\_utils.py:277, in required_args.<locals>.inner.<locals>.wrapper(*args, **kwargs)
    275             msg = f"Missing required argument: {quote(missing[0])}"
    276     raise TypeError(msg)
--> 277 return func(*args, **kwargs)

File c:\Users\jerron\.conda\envs\autogenstudio\Lib\site-packages\openai\resources\completions.py:528, in Completions.create(self, model, prompt, best_of, echo, frequency_penalty, logit_bias, logprobs, max_tokens, n, presence_penalty, seed, stop, stream, stream_options, suffix, temperature, top_p, user, extra_headers, extra_query, extra_body, timeout)
    499 @required_args(["model", "prompt"], ["model", "prompt", "stream"])
    500 def create(
    501     self,
   (...)
    526     timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
    527 ) -> Completion | Stream[Completion]:
--> 528     return self._post(
    529         "/completions",
    530         body=maybe_transform(
    531             {
    532                 "model": model,
    533                 "prompt": prompt,
    534                 "best_of": best_of,
    535                 "echo": echo,
    536                 "frequency_penalty": frequency_penalty,
    537                 "logit_bias": logit_bias,
    538                 "logprobs": logprobs,
    539                 "max_tokens": max_tokens,
    540                 "n": n,
    541                 "presence_penalty": presence_penalty,
    542                 "seed": seed,
    543                 "stop": stop,
    544                 "stream": stream,
    545                 "stream_options": stream_options,
    546                 "suffix": suffix,
    547                 "temperature": temperature,
    548                 "top_p": top_p,
    549                 "user": user,
    550             },
    551             completion_create_params.CompletionCreateParams,
    552         ),
    553         options=make_request_options(
    554             extra_headers=extra_headers, extra_query=extra_query, extra_body=extra_body, timeout=timeout
    555         ),
    556         cast_to=Completion,
    557         stream=stream or False,
    558         stream_cls=Stream[Completion],
    559     )

File c:\Users\jerron\.conda\envs\autogenstudio\Lib\site-packages\openai\_base_client.py:1240, in SyncAPIClient.post(self, path, cast_to, body, options, files, stream, stream_cls)
   1226 def post(
   1227     self,
   1228     path: str,
   (...)
   1235     stream_cls: type[_StreamT] | None = None,
   1236 ) -> ResponseT | _StreamT:
   1237     opts = FinalRequestOptions.construct(
   1238         method="post", url=path, json_data=body, files=to_httpx_files(files), **options
   1239     )
-> 1240     return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))

File c:\Users\jerron\.conda\envs\autogenstudio\Lib\site-packages\openai\_base_client.py:921, in SyncAPIClient.request(self, cast_to, options, remaining_retries, stream, stream_cls)
    912 def request(
    913     self,
    914     cast_to: Type[ResponseT],
   (...)
    919     stream_cls: type[_StreamT] | None = None,
    920 ) -> ResponseT | _StreamT:
--> 921     return self._request(
    922         cast_to=cast_to,
    923         options=options,
    924         stream=stream,
    925         stream_cls=stream_cls,
    926         remaining_retries=remaining_retries,
    927     )

File c:\Users\jerron\.conda\envs\autogenstudio\Lib\site-packages\openai\_base_client.py:1020, in SyncAPIClient._request(self, cast_to, options, remaining_retries, stream, stream_cls)
   1017         err.response.read()
   1019     log.debug("Re-raising status error")
-> 1020     raise self._make_status_error_from_response(err.response) from None
   1022 return self._process_response(
   1023     cast_to=cast_to,
   1024     options=options,
   (...)
   1027     stream_cls=stream_cls,
   1028 )

NotFoundError: Error code: 404 - {'error': {'message': 'The model `text-davinci-003` has been deprecated, learn more here: https://platform.openai.com/docs/deprecations', 'type': 'invalid_request_error', 'param': None, 'code': 'model_not_found'}}
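
For anyone hitting the same 404: before wiring a model name into the chain, it may help to check which models your API key can actually reach. A minimal sketch with the v1 `openai` client (variable names are illustrative):

```python
from openai import OpenAI  # openai>=1.0

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Print the model IDs this key can access; text-davinci-003
# no longer appears anywhere in this list.
for model_id in sorted(m.id for m in client.models.list()):
    print(model_id)
```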
abdibrokhim (Contributor) commented

It says you can't use text-davinci-003; use another model. Also, instead of calling Chain.run, use Chain.invoke.
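
For reference, here is a minimal sketch of both fixes applied to the tutorial's chain. It assumes the usual LLMChain setup from the tutorial; gpt-3.5-turbo-instruct is the replacement OpenAI recommends on its deprecations page, and the prompt template here is a stand-in:

```python
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
from langchain_openai import OpenAI  # or: from langchain_community.llms import OpenAI

# Swap the deprecated text-davinci-003 for its documented replacement.
llm = OpenAI(model_name="gpt-3.5-turbo-instruct", temperature=0)

prompt = PromptTemplate(input_variables=["query"], template="{query}")
llm_chain = LLMChain(llm=llm, prompt=prompt)

# Chain.invoke replaces the deprecated Chain.run and returns a dict
# keyed by the chain's input and output keys.
result = llm_chain.invoke({"query": "What is lablab.ai"})
print(result["text"])
```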

jerronl (Author) commented Jun 19, 2024

> It says you can't use text-davinci-003; use another model. Also, instead of calling Chain.run, use Chain.invoke.

Yes, I can fix this issue by replacing the deprecated model. Are you suggesting that I make the change myself and submit a pull request, rather than notifying the original author of the post?

But after this is fixed, we would hit the real issue: the "new" answers from the agents are no longer valid. Here is what I got:


> Entering new AgentExecutor chain...
 I should use Web Search since this is a general topic.
Action: Web Search
Action Input: lablab.ai
Observation: Welcome to Shap-e, an AI-based 3D model generator that converts natural language descriptions into stunning 3D shapes. With its unique, user-friendly interface, Shap-e makes the process of turning ideas into 3D objects a piece of cake. Shap-e is more than just a 3D model generator. Its advanced algorithm understands the context of your ... LabLab Next Hackathon. 🔧 Spend 10 days launching your startup! 🚀 Improve your prototype previously developed at hackathons. 🤜 Collaborate with peers or work independently throughout the Hackathon. 🤖 Gain complimentary access to learning resources about Generative AI models, AI Agents, and beyond. Explore lablab.ai/blog for more resources and support designed to help founders develop these essential skills. Join our community at lablab NEXT and equip yourself with the tools you need to transform your entrepreneurial mindset into one of your strongest assets. References. Dweck, C. S. (2006). Mindset: The new psychology of success. Lablab.ai is an initiative designed to support and stimulate the modern artificial intelligence ecosystem and the innovators behind it. Our collaborative approach with leading AI labs, Open source initiatives & technology companies helps unlock the state-of-the-art AI technologies and infrastructure. KoboldAI is an open-source project that allows users to run AI models locally on their own hardware. It is a client-server setup where the client is a web interface and the server runs the AI model. The client and server communicate with each other over a network connection. The project is designed to be user-friendly and easy to set up, even ...
Thought: This looks like a platform for AI enthusiasts and entrepreneurs.
Action: Web Search
Action Input: AI platform for entrepreneurs
Observation: Manatal: AI recruitment software. Motion: AI for productivity. Lumen5: AI for video creation. Otter AI: AI meeting assistant. 1. Upmetrics—AI Business Plan Generator. Upmetrics is the #1 AI business plan generator that helps startups, entrepreneurs, and small business owners write and create business plans in no time. Lovo.ai is an award-winning AI-based voice generator and text-to-speech platform. It is one of the most robust and easiest platform to use that produces voices that resemble the real human voice. Lovo.ai has provided a wide range of voices, servicing several industries, including entertainment, banking, education, gaming, documentary, news, etc ... Pictory pops up a transcript and lets you edit the video simply by deleting and moving text. The AI technology can also automatically remove silences and filler words like "um" and "uh.". The service offers a free trial for three video projects of up to 10 minutes each, and paid plans start at $19 per month. Feedly AI. Feedly is a news aggregator app that compiles news feeds from a variety of online sources to customize and share with others. "I use Feedly to simplify and narrow my daily research ... Here is a quick summary of the best AI startup tools: Bubble.io - No code app builder. Process Street - AI workflow management software. Taskade - AI project management software. Mixo.io - AI website builder for startups. AdCreative - AI ad generator. CustomGPT - ChatGPT chatbot for businesses.
Thought: These are various AI tools for different purposes.
Final Answer: Lablab.ai is a platform for AI enthusiasts and entrepreneurs, while Lovo.ai, Pictory, and Feedly are AI-based tools for voice/text-to-speech, video creation, and news aggregation respectively. Other useful AI tools for startups include Bubble.io, Process Street, Taskade, Mixo.io, AdCreative, and CustomGPT.

> Finished chain.
Final answer: Lablab.ai is a platform for AI enthusiasts and entrepreneurs, while Lovo.ai, Pictory, and Feedly are AI-based tools for voice/text-to-speech, video creation, and news aggregation respectively. Other useful AI tools for startups include Bubble.io, Process Street, Taskade, Mixo.io, AdCreative, and CustomGPT.


> Entering new AgentExecutor chain...
 I need to find the proper math tool to solve this problem.
Action: Wolfram|Alpha API
Action Input: integral of x*(log(x)^2)
Observation: x^2/4 + 1/2 x^2 log^2(x) - 1/2 x^2 log(x)
Thought: This seems to be the correct answer, but I should double check.
Action: Web Search
Action Input: "Integral of x * (log(x)^2)"
Observation: He provides courses for Maths, Science, Social Science, Physics, Chemistry, Computer Science at Teachoo. Ex 7.2, 2 Integrate the function: (log⁡𝑥 )^2/𝑥 Step 1: Let log 𝑥= 𝑡 Differentiate 𝑤.𝑟.𝑡.𝑥 𝑑𝑡/𝑑𝑥= 1/𝑥 𝑑𝑡.𝑥=𝑑𝑥 𝑑𝑥=𝑑𝑡.𝑥 Step 2: Integrating function ∫1 (log⁡𝑥 )^2 ... Figure 7.1.1 7.1. 1: (a) When x > 1 x > 1, the natural logarithm is the area under the curve y = 1/t y = 1 / t from 1 1 to x x. (b) When x < 1 x < 1, the natural logarithm is the negative of the area under the curve from x x to 1 1. Notice that ln 1 = 0 ln. ⁡. Consider the logarithmic integral $\operatorname{Li}(x):=\int_2^x \frac{dt}{\log t}.$. Then I found a result stating that we have $\operatorname{Li}(x)=x/\log x+O(x/\log^2(x))$ and another integration by parts gives $\operatorname{Li}(x)=x/\log x + x/\log^2 (x)+O(x/\log^3 (x))$.. I can see that integration by parts gives $\int_2^x dt/ \log t=x / \log x - 2/ \log x +\int_2^x dt/ log^2(t)$. Ex7.6, 6 𝑥2 log�𝑥� �� 𝑥�2� log�𝑥�� 𝑑𝑥 Hence, by ILATE First function :- 𝑓 𝑥�= log�𝑥� Second function :- g 𝑥�= 𝑥�2� �� 𝑥�2� log� 𝑥��� 𝑑𝑥 = �� log�𝑥�� 𝑥�2��𝑑𝑥 = log�𝑥� �� 𝑥�2��𝑑𝑥− �� 𝑑 log�𝑥���𝑑𝑥� �� 𝑥�2�.𝑑𝑥 ... Background: $$ \mbox{I am trying to solve the following integral:}\quad \int_{1}^{\infty}\frac{\log\left(x\right)}{x^{2}\left(x^{2} - 1\right)}\,{\rm d}x $$ This is a ...
Thought: This is a very detailed and thorough explanation.
Action: Web Search
Action Input: "Integrate the function: x * (log(x)^2)"
Observation: He provides courses for Maths, Science, Social Science, Physics, Chemistry, Computer Science at Teachoo. Ex 7.2, 2 Integrate the function: (log⁡𝑥 )^2/𝑥 Step 1: Let log 𝑥= 𝑡 Differentiate 𝑤.𝑟.𝑡.𝑥 𝑑𝑡/𝑑𝑥= 1/𝑥 𝑑𝑡.𝑥=𝑑𝑥 𝑑𝑥=𝑑𝑡.𝑥 Step 2: Integrating function ∫1 (log⁡𝑥 )^2 ... Hint: Apply ILATE Method: Integrate in the order from Inverse Logarithmic Algebra Trigonometry and exponential functions, so, integrate by taking logarithmic function as first function and algebraic function as second. Complete step by step answer: Given: the given function is \[x{(\log x)^2}\]. Given function with integration sign can be written as \[\int {x{{(\log x)}^2}dx} \]. Figure 7.1.1 7.1. 1: (a) When x > 1 x > 1, the natural logarithm is the area under the curve y = 1/t y = 1 / t from 1 1 to x x. (b) When x < 1 x < 1, the natural logarithm is the negative of the area under the curve from x x to 1 1. Notice that ln 1 = 0 ln. ⁡. In this case, ln(x) is a logarithmic function, making it our 'u'. The remaining part, x dx, becomes our 'dv'. Step 2: Differentiate u and Integrate dv. We differentiate 'u' and integrate 'dv': u = ln(x) ⇒ du = (1/x) dx; dv = x dx ⇒ v = (x^2)/2; Step 3: Apply the Integration by Parts Formula. The integration by parts formula ... Intergrating ln x | Integration | Integration by parts | Integrating logarithmic functionsSubTopics Covered:1. Integration of logarithmic functions2. Integra...
Thought:

> Finished chain.
Final answer: Agent stopped due to iteration limit or time limit.

So I guess it would be better to let the original author make sure this tutorial still gives the expected results?
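
As a stopgap, the "Agent stopped due to iteration limit or time limit" ending can at least be made less abrupt while the tutorial is being reworked. A sketch, assuming the tutorial's initialize_agent setup (the stub tool below stands in for the real Web Search and Wolfram|Alpha wrappers):

```python
from langchain.agents import AgentType, Tool, initialize_agent
from langchain_openai import OpenAI

llm = OpenAI(model_name="gpt-3.5-turbo-instruct", temperature=0)

# Stand-in for the tutorial's Web Search / Wolfram|Alpha tools.
tools = [
    Tool(
        name="Web Search",
        func=lambda q: "stub search result for: " + q,
        description="Searches the web for general topics.",
    )
]

agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
    max_iterations=8,                  # allow more Thought/Action cycles
    early_stopping_method="generate",  # force a final answer at the limit
)
agent.invoke({"input": "What is the integral of x * (log(x))^2?"})
```

That only changes how the run ends, though; it does not fix the weak tool observations, so having the original author properly update the tutorial still seems like the right call.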
