How can I make it run? #79

Open
Mamlesh18 opened this issue Oct 29, 2024 · 1 comment

Comments

@Mamlesh18

I am using Microsoft Azure OpenAI to implement it.

I am running the code with the Azure keys changed, but I am getting an error.

Thanks in advance for helping me out.

Code:

from pyzerox import zerox
import os
import json
import asyncio

custom_system_prompt = None

###################### Example for Azure OpenAI ######################
model = "gpt-35-turbo"  ## "azure/<your_deployment_name>" -> format /
os.environ["AZURE_API_KEY"] = ""      # "your-azure-api-key"
os.environ["AZURE_API_BASE"] = ""     # "https://example-endpoint.openai.azure.com"
os.environ["AZURE_API_VERSION"] = ""  # "2023-05-15"

# Placeholder for additional model kwargs (none needed here for Azure OpenAI)
kwargs = {}

async def main():
    file_path = "https://omni-demo-data.s3.amazonaws.com/test/cs101.pdf"  # Local file path and file URL supported

    # Process all pages (or specify select_pages as a list of page numbers, e.g., select_pages = [1, 2])
    select_pages = None

    output_dir = "./output_test"  # Directory to save the consolidated markdown file
    result = await zerox(
        file_path=file_path,
        model=model,
        output_dir=output_dir,
        custom_system_prompt=custom_system_prompt,
        select_pages=select_pages,
        **kwargs
    )
    return result

# Run the main function
result = asyncio.run(main())

# Print the markdown result
print(result)

ERROR:
raise MissingEnvironmentVariables(extra_info=env_config)
pyzerox.errors.exceptions.MissingEnvironmentVariables:
Required environment variable (keys) from the model are Missing. Please set the required environment variables for the model provider.
Refer: https://docs.litellm.ai/docs/providers
(Extra Info: {'keys_in_environment': False, 'missing_keys': []})

@pradhyumna85
Contributor

@Mamlesh18, the model variable should be your model deployment name in Azure OpenAI with the prefix "azure/". For example, for a deployment named "gpt-4o-mini", the model variable should be "azure/gpt-4o-mini". Note that only GPT-4o and GPT-4o mini models are supported with Azure OpenAI.

Also make sure the three environment variables are set correctly as per your Azure OpenAI configuration:

model = "azure/gpt-4o-mini"  ## "azure/<your_deployment_name>" -> format /
os.environ["AZURE_API_KEY"] = ""      # "your-azure-api-key"
os.environ["AZURE_API_BASE"] = ""     # "https://example-endpoint.openai.azure.com"
os.environ["AZURE_API_VERSION"] = ""  # "2023-05-15"

This assumes you have static API keys rather than service principal access. For service principal access, you'll need to use a fresh bearer token instead of the API key in the API key environment variable above.
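For the service-principal case, here is a minimal sketch of fetching a bearer token with the azure-identity package (an assumed dependency, not something pyzerox itself requires) and placing it in the AZURE_API_KEY variable as described above:

import os
from azure.identity import DefaultAzureCredential  # assumed dependency: pip install azure-identity

# Resolve the service principal credentials (e.g. AZURE_CLIENT_ID / AZURE_TENANT_ID /
# AZURE_CLIENT_SECRET, which DefaultAzureCredential picks up from the environment).
credential = DefaultAzureCredential()

# Request a token scoped to Azure Cognitive Services, which covers Azure OpenAI.
token = credential.get_token("https://cognitiveservices.azure.com/.default")

os.environ["AZURE_API_KEY"] = token.token  # bearer token in place of a static API key
os.environ["AZURE_API_BASE"] = "https://example-endpoint.openai.azure.com"
os.environ["AZURE_API_VERSION"] = "2023-05-15"

Bearer tokens obtained this way expire, so they need to be refreshed before each run or for long-running jobs.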
