feat: adding llama support via replicate and ollama #345
Annotations (10 errors)
qa: examples/chat-llama-ollama.php#L29
Binary operation "." between PhpLlm\LlmChain\Response\ResponseInterface and "\n"|"\r\n" results in an error.

qa: examples/chat-llama-replicate.php#L29
Binary operation "." between PhpLlm\LlmChain\Response\ResponseInterface and "\n"|"\r\n" results in an error.
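Both chat examples fail on the same line shape: the value returned by the chain is a ResponseInterface object, and concatenating it with a newline is rejected because the interface declares no __toString(). The sketch below is a self-contained illustration only, not the library's real classes; the getContent() accessor on the stand-in interface is an assumption and should be checked against the actual ResponseInterface in this PR.

```php
<?php

// Stand-ins for illustration only; these are NOT the real PhpLlm\LlmChain
// classes, and getContent() is an assumed accessor name.
interface ResponseInterface
{
    public function getContent(): string;
}

final class TextResponse implements ResponseInterface
{
    public function __construct(private readonly string $content)
    {
    }

    public function getContent(): string
    {
        return $this->content;
    }
}

$response = new TextResponse('Hello from Llama');

// What the examples appear to do: "." on an object without __toString()
// is invalid, which is exactly what PHPStan reports.
// echo $response.\PHP_EOL;

// Pulling the string out of the response first satisfies the analyser.
echo $response->getContent().\PHP_EOL;
```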
qa: examples/image-describer-binary.php#L21
Instantiated class PhpLlm\LlmChain\OpenAI\Platform\OpenAI not found.

qa: examples/image-describer-binary.php#L22
Call to static method gpt4oMini() on an unknown class PhpLlm\LlmChain\OpenAI\Model\Gpt\Version.

qa: examples/image-describer-binary.php#L22
Instantiated class PhpLlm\LlmChain\OpenAI\Model\Gpt not found.

qa: examples/image-describer-binary.php#L24
Parameter #1 $llm of class PhpLlm\LlmChain\Chain constructor expects PhpLlm\LlmChain\LanguageModel, PhpLlm\LlmChain\OpenAI\Model\Gpt given.
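All four image-describer-binary.php findings stem from stale class references: the example still wires up the old OpenAI platform and GPT model classes, then hands the result to Chain, whose first constructor parameter is typed PhpLlm\LlmChain\LanguageModel. A hypothetical reconstruction of the offending lines, pieced together from the messages above (the constructor arguments are guesses, and the code cannot run because the imported classes no longer resolve):

```php
<?php

use PhpLlm\LlmChain\Chain;
use PhpLlm\LlmChain\OpenAI\Model\Gpt;          // reported as not found
use PhpLlm\LlmChain\OpenAI\Model\Gpt\Version;  // unknown class behind Version::gpt4oMini()
use PhpLlm\LlmChain\OpenAI\Platform\OpenAI;    // reported as not found

// L21: the platform class no longer exists at this path.
$platform = new OpenAI(/* HTTP client and API key, not shown in the report */);

// L22: both the Gpt model class and its Version helper are unresolved.
$llm = new Gpt($platform, Version::gpt4oMini());

// L24: Chain expects a PhpLlm\LlmChain\LanguageModel implementation,
// so the example needs to construct whatever model class replaces Gpt here.
$chain = new Chain($llm);
```

Updating the example to the class paths this PR (or the current main branch) actually ships should clear all four annotations at once.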
qa: examples/structured-output-clock.php#L30
Instantiated class PhpLlm\LlmChain\OpenAI\Platform\OpenAI not found.

qa: examples/structured-output-clock.php#L31
Call to static method gpt4oMini() on an unknown class PhpLlm\LlmChain\OpenAI\Model\Gpt\Version.

qa: examples/structured-output-clock.php#L31
Instantiated class PhpLlm\LlmChain\OpenAI\Model\Gpt not found.

qa: examples/structured-output-clock.php#L38
Parameter #1 $llm of class PhpLlm\LlmChain\Chain constructor expects PhpLlm\LlmChain\LanguageModel, PhpLlm\LlmChain\OpenAI\Model\Gpt given.
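The structured-output-clock.php group repeats the same pattern at lines 30, 31, and 38: the same missing OpenAI platform and Gpt/Version classes, and the same Chain parameter mismatch. Whatever class paths fix the image-describer example above should resolve this group in the same way.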