examples : Rewrite pydantic_models_to_grammar_examples.py #8493

Merged
merged 1 commit into ggerganov:master from tool_example on Jul 21, 2024

Conversation

@maruel (Contributor) commented Jul 15, 2024

Changes:

  • Move each example into its own function. This makes the code much easier to read and understand.
  • Make it easy to run only one test by commenting out function calls in main().
  • Make the output easy to parse by indenting the output of each example.
  • Add a shebang and the +x bit to make it clear the script is executable.
  • Make the host configurable via --host, defaulting to 127.0.0.1:8080.
  • Look up the registered tool in the tools list instead of hardcoding the returned values, which makes the code more copy-pastable (see the sketch after this list).
  • Add error checking so the program exits 1 if the LLM didn't return the expected values. This is very useful for verifying correctness.
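
For reference, here is a minimal, self-contained sketch of the --host flag, the tool-lookup dispatch, and the exit-on-mismatch check described above. The tool, response shape, and names are hypothetical illustrations, not code copied from the actual script:

```python
#!/usr/bin/env python3
"""Sketch only: hypothetical tool registry and a simulated LLM response."""
import argparse
import sys


def get_current_weather(location: str) -> str:
    """Hypothetical registered tool."""
    return f"Sunny in {location}"


# The "tools list": map the function name returned by the LLM to the callable.
TOOLS = {fn.__name__: fn for fn in (get_current_weather,)}


def example_dispatch(host: str) -> bool:
    # The real script sends grammar-constrained requests to the server at
    # `host`; here the parsed response is simulated so the sketch runs alone.
    llm_response = {"function": "get_current_weather", "params": {"location": "Paris"}}
    fn = TOOLS.get(llm_response["function"])
    if fn is None:
        print(f"  unknown tool {llm_response['function']!r}")
        return False
    result = fn(**llm_response["params"])
    print(f"  {result}")  # indent the output of each example
    return "Paris" in result


def main() -> int:
    parser = argparse.ArgumentParser()
    parser.add_argument("--host", default="127.0.0.1:8080")
    args = parser.parse_args()
    # Comment out individual calls here to run only one example.
    if not example_dispatch(args.host):
        return 1
    return 0


if __name__ == "__main__":
    sys.exit(main())
```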

Testing:

  • Tested with Mistral-7B-Instruct-v0.3 (F16 and Q5_K_M) and Meta-Llama-3-8B-Instruct (F16 and Q5_K_M).
    • I did not observe a single failure with Mistral-7B-Instruct-v0.3.
    • Llama-3 failed about a third of the time in example_concurrent: it returned only one call instead of three, even at F16.

Potential follow ups:

  • Fix the prompt encoding per model (not done here); surprisingly, it mostly works even when the prompt encoding is not optimized for the model.
  • Add chained answer and response.

Test-only change.

@maruel changed the title from Rewrite examples/pydantic_models_to_grammar_examples.py to examples: Rewrite pydantic_models_to_grammar_examples.py Jul 15, 2024
@maruel changed the title from examples: Rewrite pydantic_models_to_grammar_examples.py to examples : Rewrite pydantic_models_to_grammar_examples.py Jul 15, 2024
@maruel (Contributor, Author) commented Jul 15, 2024

@compilade what do you think?

@github-actions bot added the examples and python labels Jul 15, 2024
@compilade (Collaborator) left a comment

This makes the pydantic example much nicer than before.

Two review comments on examples/pydantic_models_to_grammar_examples.py were resolved (outdated).
@mofosyne added the Review Complexity : Low label Jul 19, 2024
@maruel (Contributor, Author) commented Jul 20, 2024

Cool, the checks passed. Thanks @compilade for the review. Excited to get this merged.

@compilade merged commit 22f281a into ggerganov:master Jul 21, 2024
8 checks passed
@maruel deleted the tool_example branch July 21, 2024 14:22
arthw pushed a commit (…8493) to arthw/llama.cpp that referenced this pull request Jul 27, 2024

Labels: examples, python (python script changes), Review Complexity : Low
Projects: None yet
3 participants