
need support for inhouse hosted models #607

Open
vasujammula opened this issue Sep 25, 2024 · 4 comments

Comments

@vasujammula

Is your feature request related to a problem? Please describe.
We are exploring LaVague for web automation, but a limitation is its reliance on public-facing models. Could LaVague support in-house hosted models to eliminate the cost constraints?

Describe the solution you'd like
LaVague should support custom, in-house deployed models.

Describe alternatives you've considered
NA

Additional context
NA

@dhuynh95
Collaborator

Hi there,
Can you tell me more about which model / infra would work best for you? Then I can suggest the best option.

@vasujammula
Author

vasujammula commented Sep 26, 2024

We are planning to deploy LLaVA on our private cloud hardware. Is it possible to use that model instead of interacting with ChatGPT (GPT-4o)?
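One common pattern for this (an assumption here, not something LaVague documents in this thread): serve LLaVA behind an OpenAI-compatible REST API (servers such as vLLM or llama.cpp expose one), so the client only needs a different base URL. A minimal sketch of the request shape, using only the standard library; the host and model name are placeholders:

```python
# Sketch of an OpenAI-compatible chat request aimed at an in-house server.
# "internal-gpu-host" and "llava-v1.6" are placeholders, not real endpoints.
import json

url = "http://internal-gpu-host:8000/v1/chat/completions"  # placeholder URL
payload = {
    "model": "llava-v1.6",  # whatever model name your server registers
    "messages": [{"role": "user", "content": "Describe this page."}],
}
body = json.dumps(payload).encode("utf-8")

# The payload round-trips cleanly and targets the standard chat endpoint,
# which is the only client-side change versus calling OpenAI directly.
print(json.loads(body)["model"])  # prints "llava-v1.6"
```

Any client that lets you override `api_base` (as LlamaIndex's OpenAI-style wrappers do) can then be pointed at such a server.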

@dscain

dscain commented Oct 17, 2024

In my case, when initializing

from lavague.core import WorldModel, ActionEngine
from lavague.core.agents import WebAgent
from lavague.drivers.selenium import SeleniumDriver
from llama_index.multi_modal_llms.huggingface import HuggingFaceMultiModal
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

and running
action_engine = ActionEngine(driver=selenium_driver, llm=llm, embedding=embed_model)
I get

Traceback (most recent call last):
  File "./test1.py", line 26, in <module>
    action_engine = ActionEngine(driver=selenium_driver, llm=llm, embedding=embed_model)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "./venv/lib/python3.12/site-packages/lavague/core/action_engine.py", line 84, in __init__
    python_engine = PythonEngine(driver, llm, embedding)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "./venv/lib/python3.12/site-packages/lavague/core/python_engine.py", line 66, in __init__
    self.ocr_mm_llm = ocr_mm_llm or OpenAIMultiModal(
                                    ^^^^^^^^^^^^^^^^^
  File "./venv/lib/python3.12/site-packages/llama_index/multi_modal_llms/openai/base.py", line 107, in __init__
    self._messages_to_prompt = messages_to_prompt or generic_messages_to_prompt
    ^^^^^^^^^^^^^^^^^^^^^^^^
  File "./venv/lib/python3.12/site-packages/pydantic/main.py", line 865, in __setattr__
    if self.__pydantic_private__ is None or name not in self.__private_attributes__:
       ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "./venv/lib/python3.12/site-packages/pydantic/main.py", line 853, in __getattr__
    return super().__getattribute__(item)  # Raises AttributeError if appropriate
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'OpenAIMultiModal' object has no attribute '__pydantic_private__'. Did you mean: '__pydantic_complete__'?

So it initializes OpenAIMultiModal even when using local models.

Is there a working non-OpenAI example I could follow to use local models? Thank you in advance!
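The traceback shows the cause: `python_engine.py` runs `self.ocr_mm_llm = ocr_mm_llm or OpenAIMultiModal(...)`, so the OpenAI client is constructed only when no `ocr_mm_llm` is supplied. Supplying an explicit multimodal LLM should therefore bypass the OpenAI default. A minimal, self-contained sketch of that pattern with stand-in classes (not the real LaVague/LlamaIndex types):

```python
# Stand-in for OpenAIMultiModal; constructing it represents the unwanted
# fallback that crashes in the traceback above.
class StubOpenAIMultiModal:
    def __init__(self):
        raise RuntimeError("fell back to OpenAI")

# Stand-in for a locally hosted multimodal LLM.
class StubLocalMultiModal:
    pass

def init_python_engine(ocr_mm_llm=None):
    # Mirrors the default-argument logic from python_engine.py line 66.
    return ocr_mm_llm or StubOpenAIMultiModal()

# Passing an explicit object skips the OpenAI construction entirely.
engine_llm = init_python_engine(ocr_mm_llm=StubLocalMultiModal())
print(type(engine_llm).__name__)  # prints "StubLocalMultiModal"
```

If `PythonEngine` is reachable from your setup (the traceback shows it accepts an `ocr_mm_llm` parameter), passing your local model there may be a workaround, though whether `ActionEngine` exposes that hook is an open question for the maintainers.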

@dscain

dscain commented Oct 17, 2024

This issue seems related: #565
