
Commit

prompt engineering post small edits
lilianweng committed Mar 20, 2023
1 parent 0986539 commit be66057
Showing 16 changed files with 28 additions and 41 deletions.
3 changes: 1 addition & 2 deletions index.html
@@ -267,8 +267,7 @@ <h2>Prompt Engineering
</header>
<section class="entry-content">
<p>Prompt Engineering, also known as In-Context Prompting, refers to methods for how to communicate with LLM to steer its behavior for desired outcomes without updating the model weights. It is an empirical science and the effect of prompt engineering methods can vary a lot among models, thus requiring heavy experimentation and heuristics.
-Useful resources:
-OpenAI Cookbook has many in-depth examples for how to utilize LLM efficiently. Prompt Engineering Guide repo contains a pretty comprehensive collection of education materials on prompt engineering....</p>
+This post only focuses on prompt engineering for autoregressive language models, so nothing with Cloze tests, image generation or multimodality models....</p>
</section>
<footer class="entry-footer"><span title='2023-03-15 00:00:00 +0000 UTC'>March 15, 2023</span>&nbsp;·&nbsp;21 min&nbsp;·&nbsp;Lilian Weng</footer>
<a class="entry-link" aria-label="post link to Prompt Engineering" href="https://lilianweng.github.io/posts/2023-03-15-prompt-engineering/"></a>
2 changes: 1 addition & 1 deletion index.json

Large diffs are not rendered by default.

3 changes: 1 addition & 2 deletions index.xml
@@ -14,8 +14,7 @@

<guid>https://lilianweng.github.io/posts/2023-03-15-prompt-engineering/</guid>
<description>Prompt Engineering, also known as In-Context Prompting, refers to methods for how to communicate with LLM to steer its behavior for desired outcomes without updating the model weights. It is an empirical science and the effect of prompt engineering methods can vary a lot among models, thus requiring heavy experimentation and heuristics.
-Useful resources:
-OpenAI Cookbook has many in-depth examples for how to utilize LLM efficiently. Prompt Engineering Guide repo contains a pretty comprehensive collection of education materials on prompt engineering.</description>
+This post only focuses on prompt engineering for autoregressive language models, so nothing with Cloze tests, image generation or multimodality models.</description>
</item>

<item>
25 changes: 13 additions & 12 deletions posts/2023-03-15-prompt-engineering/index.html

Large diffs are not rendered by default.

3 changes: 1 addition & 2 deletions posts/index.html
@@ -202,8 +202,7 @@ <h2>Prompt Engineering
</header>
<section class="entry-content">
<p>Prompt Engineering, also known as In-Context Prompting, refers to methods for how to communicate with LLM to steer its behavior for desired outcomes without updating the model weights. It is an empirical science and the effect of prompt engineering methods can vary a lot among models, thus requiring heavy experimentation and heuristics.
-Useful resources:
-OpenAI Cookbook has many in-depth examples for how to utilize LLM efficiently. Prompt Engineering Guide repo contains a pretty comprehensive collection of education materials on prompt engineering....</p>
+This post only focuses on prompt engineering for autoregressive language models, so nothing with Cloze tests, image generation or multimodality models....</p>
</section>
<footer class="entry-footer"><span title='2023-03-15 00:00:00 +0000 UTC'>March 15, 2023</span>&nbsp;·&nbsp;21 min&nbsp;·&nbsp;Lilian Weng</footer>
<a class="entry-link" aria-label="post link to Prompt Engineering" href="https://lilianweng.github.io/posts/2023-03-15-prompt-engineering/"></a>
3 changes: 1 addition & 2 deletions posts/index.xml
@@ -14,8 +14,7 @@

<guid>https://lilianweng.github.io/posts/2023-03-15-prompt-engineering/</guid>
<description>Prompt Engineering, also known as In-Context Prompting, refers to methods for how to communicate with LLM to steer its behavior for desired outcomes without updating the model weights. It is an empirical science and the effect of prompt engineering methods can vary a lot among models, thus requiring heavy experimentation and heuristics.
-Useful resources:
-OpenAI Cookbook has many in-depth examples for how to utilize LLM efficiently. Prompt Engineering Guide repo contains a pretty comprehensive collection of education materials on prompt engineering.</description>
+This post only focuses on prompt engineering for autoregressive language models, so nothing with Cloze tests, image generation or multimodality models.</description>
</item>

<item>
3 changes: 1 addition & 2 deletions tags/alignment/index.html
@@ -187,8 +187,7 @@ <h2>Prompt Engineering
</header>
<section class="entry-content">
<p>Prompt Engineering, also known as In-Context Prompting, refers to methods for how to communicate with LLM to steer its behavior for desired outcomes without updating the model weights. It is an empirical science and the effect of prompt engineering methods can vary a lot among models, thus requiring heavy experimentation and heuristics.
-Useful resources:
-OpenAI Cookbook has many in-depth examples for how to utilize LLM efficiently. Prompt Engineering Guide repo contains a pretty comprehensive collection of education materials on prompt engineering....</p>
+This post only focuses on prompt engineering for autoregressive language models, so nothing with Cloze tests, image generation or multimodality models....</p>
</section>
<footer class="entry-footer"><span title='2023-03-15 00:00:00 +0000 UTC'>March 15, 2023</span>&nbsp;·&nbsp;21 min&nbsp;·&nbsp;Lilian Weng</footer>
<a class="entry-link" aria-label="post link to Prompt Engineering" href="https://lilianweng.github.io/posts/2023-03-15-prompt-engineering/"></a>
3 changes: 1 addition & 2 deletions tags/alignment/index.xml
@@ -14,8 +14,7 @@

<guid>https://lilianweng.github.io/posts/2023-03-15-prompt-engineering/</guid>
<description>Prompt Engineering, also known as In-Context Prompting, refers to methods for how to communicate with LLM to steer its behavior for desired outcomes without updating the model weights. It is an empirical science and the effect of prompt engineering methods can vary a lot among models, thus requiring heavy experimentation and heuristics.
-Useful resources:
-OpenAI Cookbook has many in-depth examples for how to utilize LLM efficiently. Prompt Engineering Guide repo contains a pretty comprehensive collection of education materials on prompt engineering.</description>
+This post only focuses on prompt engineering for autoregressive language models, so nothing with Cloze tests, image generation or multimodality models.</description>
</item>

<item>
3 changes: 1 addition & 2 deletions tags/language-model/index.html
@@ -187,8 +187,7 @@ <h2>Prompt Engineering
</header>
<section class="entry-content">
<p>Prompt Engineering, also known as In-Context Prompting, refers to methods for how to communicate with LLM to steer its behavior for desired outcomes without updating the model weights. It is an empirical science and the effect of prompt engineering methods can vary a lot among models, thus requiring heavy experimentation and heuristics.
-Useful resources:
-OpenAI Cookbook has many in-depth examples for how to utilize LLM efficiently. Prompt Engineering Guide repo contains a pretty comprehensive collection of education materials on prompt engineering....</p>
+This post only focuses on prompt engineering for autoregressive language models, so nothing with Cloze tests, image generation or multimodality models....</p>
</section>
<footer class="entry-footer"><span title='2023-03-15 00:00:00 +0000 UTC'>March 15, 2023</span>&nbsp;·&nbsp;21 min&nbsp;·&nbsp;Lilian Weng</footer>
<a class="entry-link" aria-label="post link to Prompt Engineering" href="https://lilianweng.github.io/posts/2023-03-15-prompt-engineering/"></a>
3 changes: 1 addition & 2 deletions tags/language-model/index.xml
@@ -14,8 +14,7 @@

<guid>https://lilianweng.github.io/posts/2023-03-15-prompt-engineering/</guid>
<description>Prompt Engineering, also known as In-Context Prompting, refers to methods for how to communicate with LLM to steer its behavior for desired outcomes without updating the model weights. It is an empirical science and the effect of prompt engineering methods can vary a lot among models, thus requiring heavy experimentation and heuristics.
-Useful resources:
-OpenAI Cookbook has many in-depth examples for how to utilize LLM efficiently. Prompt Engineering Guide repo contains a pretty comprehensive collection of education materials on prompt engineering.</description>
+This post only focuses on prompt engineering for autoregressive language models, so nothing with Cloze tests, image generation or multimodality models.</description>
</item>

<item>
3 changes: 1 addition & 2 deletions tags/nlp/index.html
@@ -187,8 +187,7 @@ <h2>Prompt Engineering
</header>
<section class="entry-content">
<p>Prompt Engineering, also known as In-Context Prompting, refers to methods for how to communicate with LLM to steer its behavior for desired outcomes without updating the model weights. It is an empirical science and the effect of prompt engineering methods can vary a lot among models, thus requiring heavy experimentation and heuristics.
-Useful resources:
-OpenAI Cookbook has many in-depth examples for how to utilize LLM efficiently. Prompt Engineering Guide repo contains a pretty comprehensive collection of education materials on prompt engineering....</p>
+This post only focuses on prompt engineering for autoregressive language models, so nothing with Cloze tests, image generation or multimodality models....</p>
</section>
<footer class="entry-footer"><span title='2023-03-15 00:00:00 +0000 UTC'>March 15, 2023</span>&nbsp;·&nbsp;21 min&nbsp;·&nbsp;Lilian Weng</footer>
<a class="entry-link" aria-label="post link to Prompt Engineering" href="https://lilianweng.github.io/posts/2023-03-15-prompt-engineering/"></a>
3 changes: 1 addition & 2 deletions tags/nlp/index.xml
@@ -14,8 +14,7 @@

<guid>https://lilianweng.github.io/posts/2023-03-15-prompt-engineering/</guid>
<description>Prompt Engineering, also known as In-Context Prompting, refers to methods for how to communicate with LLM to steer its behavior for desired outcomes without updating the model weights. It is an empirical science and the effect of prompt engineering methods can vary a lot among models, thus requiring heavy experimentation and heuristics.
-Useful resources:
-OpenAI Cookbook has many in-depth examples for how to utilize LLM efficiently. Prompt Engineering Guide repo contains a pretty comprehensive collection of education materials on prompt engineering.</description>
+This post only focuses on prompt engineering for autoregressive language models, so nothing with Cloze tests, image generation or multimodality models.</description>
</item>

<item>
3 changes: 1 addition & 2 deletions tags/prompting/index.html
@@ -187,8 +187,7 @@ <h2>Prompt Engineering
</header>
<section class="entry-content">
<p>Prompt Engineering, also known as In-Context Prompting, refers to methods for how to communicate with LLM to steer its behavior for desired outcomes without updating the model weights. It is an empirical science and the effect of prompt engineering methods can vary a lot among models, thus requiring heavy experimentation and heuristics.
-Useful resources:
-OpenAI Cookbook has many in-depth examples for how to utilize LLM efficiently. Prompt Engineering Guide repo contains a pretty comprehensive collection of education materials on prompt engineering....</p>
+This post only focuses on prompt engineering for autoregressive language models, so nothing with Cloze tests, image generation or multimodality models....</p>
</section>
<footer class="entry-footer"><span title='2023-03-15 00:00:00 +0000 UTC'>March 15, 2023</span>&nbsp;·&nbsp;21 min&nbsp;·&nbsp;Lilian Weng</footer>
<a class="entry-link" aria-label="post link to Prompt Engineering" href="https://lilianweng.github.io/posts/2023-03-15-prompt-engineering/"></a>
3 changes: 1 addition & 2 deletions tags/prompting/index.xml
@@ -14,8 +14,7 @@

<guid>https://lilianweng.github.io/posts/2023-03-15-prompt-engineering/</guid>
<description>Prompt Engineering, also known as In-Context Prompting, refers to methods for how to communicate with LLM to steer its behavior for desired outcomes without updating the model weights. It is an empirical science and the effect of prompt engineering methods can vary a lot among models, thus requiring heavy experimentation and heuristics.
-Useful resources:
-OpenAI Cookbook has many in-depth examples for how to utilize LLM efficiently. Prompt Engineering Guide repo contains a pretty comprehensive collection of education materials on prompt engineering.</description>
+This post only focuses on prompt engineering for autoregressive language models, so nothing with Cloze tests, image generation or multimodality models.</description>
</item>

</channel>
3 changes: 1 addition & 2 deletions tags/steerability/index.html
@@ -187,8 +187,7 @@ <h2>Prompt Engineering
</header>
<section class="entry-content">
<p>Prompt Engineering, also known as In-Context Prompting, refers to methods for how to communicate with LLM to steer its behavior for desired outcomes without updating the model weights. It is an empirical science and the effect of prompt engineering methods can vary a lot among models, thus requiring heavy experimentation and heuristics.
-Useful resources:
-OpenAI Cookbook has many in-depth examples for how to utilize LLM efficiently. Prompt Engineering Guide repo contains a pretty comprehensive collection of education materials on prompt engineering....</p>
+This post only focuses on prompt engineering for autoregressive language models, so nothing with Cloze tests, image generation or multimodality models....</p>
</section>
<footer class="entry-footer"><span title='2023-03-15 00:00:00 +0000 UTC'>March 15, 2023</span>&nbsp;·&nbsp;21 min&nbsp;·&nbsp;Lilian Weng</footer>
<a class="entry-link" aria-label="post link to Prompt Engineering" href="https://lilianweng.github.io/posts/2023-03-15-prompt-engineering/"></a>
3 changes: 1 addition & 2 deletions tags/steerability/index.xml
@@ -14,8 +14,7 @@

<guid>https://lilianweng.github.io/posts/2023-03-15-prompt-engineering/</guid>
<description>Prompt Engineering, also known as In-Context Prompting, refers to methods for how to communicate with LLM to steer its behavior for desired outcomes without updating the model weights. It is an empirical science and the effect of prompt engineering methods can vary a lot among models, thus requiring heavy experimentation and heuristics.
-Useful resources:
-OpenAI Cookbook has many in-depth examples for how to utilize LLM efficiently. Prompt Engineering Guide repo contains a pretty comprehensive collection of education materials on prompt engineering.</description>
+This post only focuses on prompt engineering for autoregressive language models, so nothing with Cloze tests, image generation or multimodality models.</description>
</item>

<item>

0 comments on commit be66057
