This directory contains a collection of examples demonstrating various prompting techniques for use with the Gemini API. Each notebook focuses on a specific technique for guiding the model toward the output you want.
Here's a breakdown of the notebooks available and the concepts they cover:
- `Adding_context_information.ipynb`: Shows how to provide context for more relevant and accurate responses.
- `Basic_Classification.ipynb`: Demonstrates how to use prompting for classification tasks, categorizing content into labels.
- `Basic_Code_Generation.ipynb`: Demonstrates basic code generation, including error handling and generating code snippets.
- `Basic_Evaluation.ipynb`: Shows how to use the LLM for evaluation, providing feedback on and grading of text.
- `Basic_Information_Extraction.ipynb`: Demonstrates extracting information from text and returning it in a defined structure (sketch below).
- `Basic_Reasoning.ipynb`: Demonstrates instructing the model to solve reasoning problems.
- `Chain_of_thought_prompting.ipynb`: Guides the model through intermediate reasoning steps for complex problems (sketch below).
- `Few_shot_prompting.ipynb`: Provides a few input-output examples to guide the model (sketch below).
- `Providing_base_cases.ipynb`: Shows how providing base cases can influence the output.
- `Role_prompting.ipynb`: Demonstrates how to assign a specific role to the model to influence its responses (sketch below).
- `Self_ask_prompting.ipynb`: Demonstrates a technique where the model asks itself follow-up questions to help answer the query (sketch below).
- `Zero_shot_prompting.ipynb`: Demonstrates prompting the model without any examples.
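To give a flavor of what the notebooks walk through, here are a few minimal sketches. They assume the `google-generativeai` Python package is installed and an API key is available; the model name `gemini-1.5-flash` and all task details are illustrative choices, not ones prescribed by these notebooks. First, few-shot prompting: a handful of input-output examples pins down both the task and the answer format.

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # or read from an environment variable
model = genai.GenerativeModel("gemini-1.5-flash")  # illustrative model choice

# A few labeled examples show the model both the task and the output format.
few_shot_prompt = """Classify the sentiment of each review as POSITIVE, NEGATIVE, or NEUTRAL.

Review: "The battery lasts all day, fantastic!"
Sentiment: POSITIVE

Review: "It stopped working after a week."
Sentiment: NEGATIVE

Review: "Arrived on time, nothing special."
Sentiment: NEUTRAL

Review: "The screen is gorgeous and setup took two minutes."
Sentiment:"""

print(model.generate_content(few_shot_prompt).text)  # expected: POSITIVE
```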
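Chain-of-thought prompting often needs nothing more than an explicit request for intermediate steps. A sketch under the same assumptions:

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-flash")

# Requesting step-by-step reasoning before the answer tends to improve
# accuracy on multi-step problems.
cot_prompt = """A cafeteria had 23 apples. It used 20 of them to make lunch
and then bought 6 more. How many apples does it have now?

Think through the problem step by step, then give the final answer on its own line."""

print(model.generate_content(cot_prompt).text)
```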
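Self-ask prompting combines the two ideas above: a single worked example shows the model how to pose and answer its own follow-up questions before committing to a final answer. Again a sketch, same assumptions:

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-flash")

# One worked example teaches the model to decompose the question into
# follow-ups it can answer, then combine the intermediate answers.
self_ask_prompt = """Question: Who was president of the United States when the first iPhone was released?
Are follow-up questions needed here: Yes.
Follow-up: When was the first iPhone released?
Intermediate answer: The first iPhone was released in June 2007.
Follow-up: Who was president of the United States in June 2007?
Intermediate answer: George W. Bush.
Final answer: George W. Bush

Question: Who was prime minister of the United Kingdom when the Berlin Wall fell?
Are follow-up questions needed here:"""

print(model.generate_content(self_ask_prompt).text)
```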
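Role prompting fits naturally into a system instruction, which this SDK accepts when the model object is constructed:

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

# A system instruction assigns the model a persona that shapes tone and focus.
model = genai.GenerativeModel(
    "gemini-1.5-flash",
    system_instruction="You are a patient teacher who explains technical concepts "
    "to complete beginners using short, simple analogies.",
)

print(model.generate_content("Explain what an API is.").text)
```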
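Information extraction is mostly about specifying the output structure up front. The JSON shape below is a made-up example, not a schema taken from the notebook:

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-flash")

# Naming the exact keys you want makes the output easy to parse downstream.
extraction_prompt = """Extract the speaker's name, the city, and the date from the
sentence below. Return only JSON with the keys "name", "city", and "date".

Sentence: "Maria Silva will speak at the developer meetup in Lisbon on March 4th."
"""

print(model.generate_content(extraction_prompt).text)
```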
A few tips for working through these notebooks:

- **Experiment**: The best way to learn prompting is to try variations and see how the output changes.
- **Iterate**: Prompt engineering is an iterative process; refine your prompts based on the responses you get.
- **Explore other techniques**: The "Next steps" section of many of these notebooks points to further examples in the repository.