revise prompt design docs [zero-shot] (#865)
shreya-51 authored Jul 30, 2024
1 parent daee452 commit 8407377
Showing 10 changed files with 280 additions and 314 deletions.
8 changes: 4 additions & 4 deletions docs/prompting/index.md
@@ -13,12 +13,12 @@ How do we increase the performance of our model without any examples?

1. [Use Emotional Language](zero_shot/emotion_prompting.md)
2. [Assign a Role](zero_shot/role_prompting.md)
3. [Define a Writing Style](zero_shot/style_prompting.md)
3. [Define a Style](zero_shot/style_prompting.md)
4. [Auto-Refine The Prompt](zero_shot/s2a.md)
5. [Simulate a Perspective](zero_shot/simtom.md)
6. [Auto-Clarify The Prompt](zero_shot/rar.md)
7. [Ask Model To Repeat The Query](zero_shot/re2.md)
8. [Generate Follow-Up Questions](zero_shot/self-ask.md)
6. [Clarify Ambiguous Information](zero_shot/rar.md)
7. [Ask Model To Repeat Query](zero_shot/re2.md)
8. [Generate Follow-Up Questions](zero_shot/self_ask.md)

## Few-Shot

64 changes: 26 additions & 38 deletions docs/prompting/zero_shot/emotion_prompting.md
@@ -1,16 +1,21 @@
---
description: "Using emotional language, we can improve the results of our LLM calls and encourage more open-ended text generation"
title: "Emotion Prompting"
description: "Adding phrases with emotional significance to humans can help enhance the performance of a language model."
---

Use emotional language in prompts to enhance the performance of language models. This includes phrases such as
Do language models respond to emotional stimuli?

- This is very important for my career
Adding phrases with emotional significance to humans can help enhance the performance of a language model. This includes phrases such as:

- This is very important to my career.
- Take pride in your work.
- Are you sure?
- Are you sure that's your final answer? It might be worth taking another look.

We can implement this using `instructor` as seen below.
!!! info
For more examples of emotional stimuli to use in prompts, look into [EmotionPrompt](https://arxiv.org/abs/2307.11760) -- a set of prompts inspired by well-established human psychological phenomena.

```python hl_lines="25"
## Implementation
```python hl_lines="34"
import openai
import instructor
from pydantic import BaseModel
@@ -26,54 +31,37 @@ class Album(BaseModel):
client = instructor.from_openai(openai.OpenAI())


def get_albums():
def emotion_prompting(query, stimuli):
return client.chat.completions.create(
model="gpt-4o",
response_model=Iterable[Album],
messages=[
{
"role": "user",
"content": """
Provide me a list of 3 musical albums from the 2000s.
This is very important to my career.""", # (1)!
"content": f"""
{query}
{stimuli}
""",
}
],
)


if __name__ == "__main__":
albums = get_albums()
query = "Provide me with a list of 3 musical albums from the 2000s."
stimuli = "This is very important to my career." # (1)!

albums = emotion_prompting(query, stimuli)

for album in albums:
print(album)
#> name='Kid A' artist='Radiohead' year=2000
#> name='Stankonia' artist='OutKast' year=2000
#> name='Is This It' artist='The Strokes' year=2001
#> name='The Marshall Mathers LP' artist='Eminem' year=2000
#> name='The College Dropout' artist='Kanye West' year=2004
```

1. The phrase `This is very important to my career` is a simple example of a sentence that uses emotion prompting.

### Useful Tips

These are some phrases you can append to your prompt to use emotion prompting:

1. Write your answer and give me a confidence score between 0-1 for your answer.
2. This is very important to my career.
3. You'd better be sure.
4. Are you sure?
5. Are you sure that's your final answer? Believe in your abilities and strive for excellence. Your hard work will yield remarkable results.
6. Embrace challenges as opportunities for growth. Each obstacle you overcome brings you closer to success.
7. Stay focused and dedicated to your goals. Your consistent efforts will lead to outstanding achievements.
8. Take pride in your work and give it your best. Your commitment to excellence sets you apart.
9. Remember that progress is made one step at a time. Stay determined and keep moving forward.

We want phrases that do one of the following (a minimal helper that appends such a stimulus is sketched after this list):

- **Encourage Self-Monitoring**: phrases that encourage humans to reflect on their responses in anticipation of the judgement of others (e.g., "Are you sure?").
- **Set a higher bar**: phrases that encourage humans to hold themselves to a higher standard (e.g., "Take pride in your work and give it your best shot.").
- **Reframe the task**: phrases that help people see the task in a more positive and objective manner (e.g., "Are you sure that's your final answer? Believe in your abilities and strive for excellence. Your hard work will yield remarkable results.").
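
Below is a minimal sketch, not part of the original docs, of how one of these stimuli could be appended to an arbitrary query with `instructor`; the `Answer` response model, its confidence field, and the example query are illustrative assumptions.

```python
import openai
import instructor
from pydantic import BaseModel

client = instructor.from_openai(openai.OpenAI())


class Answer(BaseModel):
    answer: str
    confidence: float  # hypothetical self-reported confidence between 0 and 1


def with_emotional_stimulus(query: str, stimulus: str) -> Answer:
    # Append the emotional stimulus after the task description
    return client.chat.completions.create(
        model="gpt-4o",
        response_model=Answer,
        messages=[
            {
                "role": "user",
                "content": f"""{query}
                {stimulus}
                Write your answer and give me a confidence score between 0-1 for your answer.""",
            }
        ],
    )


if __name__ == "__main__":
    response = with_emotional_stimulus(
        "Which planet in our solar system has the most moons?",
        "This is very important to my career.",
    )
    print(response.answer)
    print(response.confidence)
```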

### References
1. The phrase `This is very important to my career` is used as an emotional stimulus in the prompt.

<sup id="ref-1">1</sup>: [Large Language Models Understand and Can be Enhanced by Emotional Stimuli](https://arxiv.org/abs/2307.11760)
## References

<sup id="ref-asterisk">\*</sup>: [The Prompt Report: A Systematic Survey of Prompting Techniques](https://arxiv.org/abs/2406.06608)
<sup id="ref-1">1</sup>: [Large Language Models Understand and Can be Enhanced by Emotional Stimuli](https://arxiv.org/abs/2307.11760)
94 changes: 34 additions & 60 deletions docs/prompting/zero_shot/rar.md
@@ -1,99 +1,73 @@
---
description: "Rephrase and Respond aims to reduce prompt ambiguity and align the question more closely with the LLM's existing frame"
description: "To help the model better infer human intention from ambigious prompts, we can ask the model to rephrase and respond (RaR)."
---

We can improve the performance of our LLM by getting the model to rewrite the prompt<sup><a href="https://arxiv.org/pdf/2311.04205">1</a></sup> so that it is less ambiguous.
How can we identify and clarify ambiguous information in the prompt?

This could look something like this:
Let's say we are given the query: *Was Ed Sheeran born on an odd month?*

!!! example "Rephrase and Respond Example"
There are many ways a model might interpret an *odd month*:

- February is *odd* because it has an irregular number of days.
- A month is *odd* if it has an odd number of days.
- A month is *odd* if its numerical order in the year is odd (i.e., January is the 1st month).

**User**: Take the last letters of the words in 'Edgar Bob' and concatenate them.
!!! note

**Rephrased Question**: Could you please form a new string or series of characters by joining together the final letters from each word in the phrase "Edgar Bob"?
Ambiguities might not always be so obvious!

**Assistant**: The last letters in the words "Edgar" and "Bob" are "r" and "b", hence when concatenated, it forms "rb".
To help the model better infer human intention from ambiguous prompts, we can ask the model to rephrase and respond (RaR).

We can implement this using `instructor` as seen below.
## Implementation

```python hl_lines="26-27"
from pydantic import BaseModel, Field
```python hl_lines="19"
from pydantic import BaseModel
import instructor
from openai import OpenAI

client = instructor.from_openai(OpenAI())


class ImprovedQuestion(BaseModel):
rewritten_question: str = Field(
...,
description="""An improved, more specific
version of the original question""",
)


class FinalResponse(BaseModel):
class Response(BaseModel):
rephrased_question: str
answer: str


def rewrite_question(question: str):
def rephrase_and_respond(query):
return client.chat.completions.create(
model="gpt-4o",
messages=[
{
"role": "system",
"content": """You excel at making questions clearer
and more specific.""",
},
{"role": "user", "content": f"The question is {question}"},
"role": "user",
"content": f"""{query}\nRephrase and expand the question, and respond.""", # (1)!
}
],
response_model=ImprovedQuestion,
response_model=Response,
)


def answer_question(question: str):
return client.chat.completions.create(
model="gpt-4o",
messages=[{"role": "user", "content": question}],
max_tokens=1000,
response_model=FinalResponse,
)
if __name__ == "__main__":
query = "Take the last letters of the words in 'Edgar Bob' and concatinate them."

response = rephrase_and_respond(query)

if __name__ == "__main__":
rewritten_query = rewrite_question(
"Take the last letters of the words in 'Elon Musk' and concatenate them"
)
print(rewritten_query.model_dump_json(indent=2))
print(response.rephrased_question)
"""
{
"rewritten_question": "What are the last letters of each word in 'Elon Musk',
and how would they look when concatenated together?"
}
What are the last letters of each word in the name 'Edgar Bob', and what do you get when you concatenate them?
"""

response = answer_question(rewritten_query.rewritten_question)
print(response.model_dump_json(indent=2))
print(response.answer)
"""
{
"answer": "The last letters of the words 'Elon Musk' are 'n' and 'k'. When
concatenated together, they look like 'nk'."
}
To find the last letters of each word in the name 'Edgar Bob', we look at 'Edgar' and 'Bob'. The last letter of 'Edgar' is 'r' and the last letter of 'Bob' is 'b'. Concatenating these letters gives us 'rb'.
"""
```

We can also achieve the same benefits by **using a better model to generate the question** before we prompt a weaker model. This is known as two-step RaR.

## Useful Tips
1. This prompt template comes from [this](https://arxiv.org/abs/2311.04205) paper.

Here are some phrases that you can add to your prompt to refine the question before you generate a response:
This can also be implemented as two-step RaR (see the sketch after the steps below):

- Reword and elaborate on the inquiry, then provide an answer.
- Reframe the question with additional context and detail, then provide an answer.
- Modify the original question for clarity and detail, then offer an answer.
- Restate and elaborate on the inquiry before proceeding with a response.
- Given the above question, rephrase and expand it to help you do better answering. Maintain all information in the original question.
1. Ask the model to rephrase the question.
2. Pass the rephrased question back to the model to generate the final response.
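
Here is a minimal sketch of that two-step variant, assuming a stronger model (e.g. `gpt-4o`) rephrases the question and a weaker model (e.g. `gpt-4o-mini`) answers it; the `RephrasedQuestion` and `Answer` models are illustrative, not part of the original docs.

```python
import instructor
from openai import OpenAI
from pydantic import BaseModel

client = instructor.from_openai(OpenAI())


class RephrasedQuestion(BaseModel):
    rephrased_question: str


class Answer(BaseModel):
    answer: str


def rephrase(query: str) -> RephrasedQuestion:
    # Step 1: ask the stronger model to rephrase and expand the question
    return client.chat.completions.create(
        model="gpt-4o",
        response_model=RephrasedQuestion,
        messages=[
            {
                "role": "user",
                "content": f"""{query}
                Rephrase and expand the question. Maintain all information in the original question.""",
            }
        ],
    )


def respond(question: str) -> Answer:
    # Step 2: pass the rephrased question to the weaker model for the final answer
    return client.chat.completions.create(
        model="gpt-4o-mini",
        response_model=Answer,
        messages=[{"role": "user", "content": question}],
    )


if __name__ == "__main__":
    query = "Take the last letters of the words in 'Edgar Bob' and concatenate them."
    rephrased = rephrase(query)
    print(respond(rephrased.rephrased_question).answer)
```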

### References
## References

<sup id="ref-1">1</sup>: [Rephrase and Respond: Let Large Language Models Ask Better Questions for Themselves](https://arxiv.org/pdf/2311.04205)
<sup id="ref-1">1</sup>: [Rephrase and Respond: Let Large Language Models Ask Better Questions for Themselves](https://arxiv.org/abs/2311.04205)
59 changes: 26 additions & 33 deletions docs/prompting/zero_shot/re2.md
@@ -1,23 +1,19 @@
---
description: "We can see a small improvement of <4% in different models by just appending the phrase - Read The Question Again."
description: "Re2 (Re-Reading) is a technique that asks the model to read the question again."
---

# Read the Prompt Again
How can we enhance a model's understanding of a query?

By appending the phrase "Read the question again", you can improve the reasoning abilities of Large Language Models<sup><a href="https://arxiv.org/pdf/2309.06275">1</a></sup>.
Re2 (**Re**-**Re**ading) is a technique that asks the model to read the question again.

This could look something like this:
!!! example "Re-Reading Prompting"
**Prompt Template**: Read the question again: <*query*> <*critical thinking prompt*><sup><a href="https://arxiv.org/abs/2309.06275">1</a></sup>

A common critical thinking prompt is: "Let's think step by step."

!!! example "Re-Reading Template"
## Implementation

**[ Input Query ]**
Read the question again: **[ Input Query ]**

**[ Critical Thinking Prompt ]**

We can implement this using `instructor` as seen below.

```python hl_lines="20-21"
```python hl_lines="20"
import instructor
from openai import OpenAI
from pydantic import BaseModel
@@ -26,39 +22,36 @@ from pydantic import BaseModel
client = instructor.from_openai(OpenAI())


class Solution(BaseModel):
final_answer: int
class Response(BaseModel):
answer: int


def solve_question(question: str) -> int:
response = client.chat.completions.create(
def re2(query, thinking_prompt):
return client.chat.completions.create(
model="gpt-4o",
response_model=Solution,
response_model=Response,
messages=[
{
"role": "system",
"content": f"""{question}. Read the question
again. {question}. Adhere to the provided
format when responding to the problem and
make sure to think through this step by
step.""",
"content": f"Read the question again: {query} {thinking_prompt}",
},
],
)
return response.final_answer


# Example usage
if __name__ == "__main__":
question = """Roger has 5 tennis balls. He buys 2 more cans of tennis
balls. Each can has 3 tennis balls. How many tennis balls
does he have now?"""

answer = solve_question(question)
print(answer)
query = """Roger has 5 tennis balls.
He buys 2 more cans of tennis balls.
Each can has 3 tennis balls.
How many tennis balls does he have now?
"""
thinking_prompt = "Let's think step by step."

response = re2(query=query, thinking_prompt=thinking_prompt)
print(response.answer)
#> 11
```

### References
## References

<sup id="ref-1">1</sup>: [Re-Reading Improves Reasoning in Large Language Models](https://arxiv.org/pdf/2309.06275)
<sup id="ref-1">1</sup>: [Re-Reading Improves Reasoning in Large Language Models](https://arxiv.org/abs/2309.06275)