---
title: Quickstart
icon: flag-checkered
---

Get started using PromptLayer in minutes.

  • Create and version prompts visually
  • Programmatically retrieve and iterate on prompts
  • Run prompts automatically with OpenAI, Anthropic, and all major models
  • Log and evaluate prompt versions

First, make sure you have signed up for an account at promptlayer.com to begin 🍰

PromptLayer is available in Python, JavaScript, and REST.

Create your first prompt template

PromptLayer makes creating, versioning, and collaborating on prompt templates easy. They are model-agnostic and auditable.

  1. From PromptLayer's home screen, navigate to the prompt registry (read more).
  2. Click "Create Template" to create a new prompt template.

Find Registry

For this example, let's create a new prompt template called "ai-poet". Name it in the 'Title' field.

Next, paste the following snippet into the "SYSTEM" field:

```
You are a skilled poet specializing in haiku.

Your task is to write a haiku based on a topic provided by the user.

The haiku must have 17 syllables, structured in three lines of 5, 7, and 5 syllables respectively.
```

Next, add a new message with an input variable `{topic}`. This will be filled out by user input when interacting with the AI.

Haiku Prompt

Finally, choose a model to run the prompt on by clicking "Parameters".

🎉 You're all set!

**Save the prompt template,** and we are good to go! 🍰

Set Up PromptLayer Locally

Install PromptLayer & OpenAI

Install the required packages:

```bash
# Python
pip install promptlayer
pip install openai
```

```bash
# JavaScript
npm install promptlayer
npm install openai
```

Ensure you have the latest versions if already installed.
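If they are already installed, upgrading is straightforward (a quick sketch using standard package-manager flags):

```bash
# Python: upgrade existing installs
pip install --upgrade promptlayer openai

# JavaScript: upgrade existing installs
npm install promptlayer@latest openai@latest
```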

API Key Env Var Setup

  1. Retrieve your OpenAI API key
  2. Get a PromptLayer API key from settings (click the cog on the top right)
  3. Replace `<your_openai_api_key>` and `<your_promptlayer_api_key>` in the code snippets below with your actual keys

For better security, you can use environment variables from a `.env` file:

```bash
OPENAI_API_KEY=sk-<your_openai_api_key>
PROMPTLAYER_API_KEY=<your_promptlayer_api_key>
```

For more on setting up environment variables in Python, refer to this guide.

If you are using Python, we recommend python-dotenv; the JavaScript equivalent is dotenv:

```bash
pip install python-dotenv
```

```bash
npm install dotenv
```

Create a Python file called `app.py` and load the environment variables:

```python
from dotenv import load_dotenv

load_dotenv()  # Load env vars from .env file
```

In JavaScript:

```javascript
require('dotenv').config();
```

Import PromptLayer

This quickstart uses OpenAI, but PromptLayer supports most major LLM providers, including Anthropic, Llama, Google, Cohere, and Mistral.

First, import PromptLayer and create a PromptLayer client. We'll use this client to call OpenAI.

Behind the scenes, the PromptLayer library is a wrapper around OpenAI's SDK. This means that all LLM requests are still made locally from your machine. PromptLayer functions as a sidecar, logging the response after the request is made.

```python
# Make sure to `pip install promptlayer`
import os
os.environ["OPENAI_API_KEY"] = "sk-<your_openai_api_key>"

from promptlayer import PromptLayer
promptlayer_client = PromptLayer(api_key="<your_promptlayer_api_key>")

# Swap out your 'from openai import OpenAI'
OpenAI = promptlayer_client.openai.OpenAI
client = OpenAI()
```

```javascript
// Make sure to `npm install promptlayer`
import { PromptLayer } from "promptlayer";
const promptLayerClient = new PromptLayer();

// Make sure you have openai installed with `npm install openai`
const OpenAI = promptLayerClient.OpenAI;
const openai = new OpenAI();
```
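The same wrapper pattern applies to other providers. A minimal sketch for Anthropic, assuming you have run `pip install anthropic` and that your SDK version exposes the Anthropic wrapper:

```python
from promptlayer import PromptLayer

promptlayer_client = PromptLayer(api_key="<your_promptlayer_api_key>")

# Swap out your 'from anthropic import Anthropic'
Anthropic = promptlayer_client.anthropic.Anthropic
anthropic_client = Anthropic()
```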

Retrieve & Run the Prompt

Remember the `{topic}` user input variable we made to specify the haiku topic in our prompt? PromptLayer can format the f-string or Jinja2 template using this variable.

We will use PromptLayer to fetch the latest version of the prompt, inject input variables, and run it.

```python
input_variables = {
    "topic": "The American Revolution"
}

response = promptlayer_client.run(
    prompt_name="ai-poet", input_variables=input_variables)

# Using OpenAI format
print(response["raw_response"].choices[0].message.content)
```

```javascript
const input_variables = {
  topic: "The American Revolution"
};

const response = await promptLayerClient.run({
  promptName: "ai-poet",
  inputVariables: input_variables,
});

// Using OpenAI format
console.log(response["raw_response"].choices[0].message.content);
```
For simplicity, the code snippet above will work with OpenAI only. Use the [prompt blueprint](/quickstart-part-two#prompt-blueprint) return item for a model-agnostic response format:

```python
print(response["prompt_blueprint"]["prompt_template"]["messages"][-1]["content"])
```

Amazing! `promptlayer_client.run` will automatically format your prompt template into the format needed by OpenAI, Anthropic, and all other major model providers.

The LLM request runs locally on your machine.

**Running the code:** If you're new to running Python locally, save all the code above in a file called `app.py` and run it from your terminal:

```bash
python app.py
```

Make sure you're in the same directory as your `app.py` file when running this command.

Logs

In the background, PromptLayer will save the output of every request you make. Open the dashboard to see the logs.

Opening Log

Try opening the log in Playground!

No more redeploys!

Every time you update a prompt in the dashboard, simply re-run the code and PromptLayer will grab the latest version. No engineering redeploy is needed.

Redeploy Gif

Decoupling prompts from your code and using a prompt CMS will speed up development. Read more about prompt management best practices on our blog.

For large-scale production deployments, we recommend using webhooks to maintain a caching layer.
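For illustration, here is a minimal sketch of such a caching layer built around a webhook receiver. It assumes a Flask app, and the endpoint path and payload field names are hypothetical; check your webhook configuration for the actual schema:

```python
# A sketch of an in-memory prompt cache invalidated by a PromptLayer webhook.
# Assumptions: Flask is installed, and the webhook body carries the updated
# template's name in a "prompt_template_name" field (illustrative only).
from flask import Flask, request
from promptlayer import PromptLayer

app = Flask(__name__)
promptlayer_client = PromptLayer()
_prompt_cache = {}

def get_prompt(name: str):
    # Serve from the cache; fetch from PromptLayer on a miss.
    if name not in _prompt_cache:
        _prompt_cache[name] = promptlayer_client.templates.get(name)
    return _prompt_cache[name]

@app.route("/promptlayer-webhook", methods=["POST"])
def promptlayer_webhook():
    # Evict the updated prompt so the next call fetches the new version.
    payload = request.get_json(force=True)
    _prompt_cache.pop(payload.get("prompt_template_name"), None)
    return {"status": "ok"}
```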

Release Labels

PromptLayer allows you to use release labels as a way to manage prod/staging environments.

Back in the PromptLayer dashboard, hover over a version and click "Add Release Label". Name the release label "prod". Now you can pass `prompt_release_label` to the run call.

Release Label

Alternatively, use `prompt_version` to specify a version number directly (see the sketch after the snippets below).

```python
input_variables = {
    "topic": "The American Revolution"
}

response = promptlayer_client.run(
    prompt_name="ai-poet", input_variables=input_variables,
    prompt_release_label="prod")  # The release label

print(response["raw_response"].choices[0].message.content)
```

```javascript
const input_variables = {
  topic: "The American Revolution"
};

const response = await promptLayerClient.run({
  promptName: "ai-poet",
  inputVariables: input_variables,
  promptReleaseLabel: "prod" // The release label
});

console.log(response["raw_response"].choices[0].message.content);
```
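And a minimal sketch of pinning a version number instead, assuming version 2 of "ai-poet" exists:

```python
# Pin an exact prompt version rather than a release label.
# Version 2 is illustrative; use a version that exists for your prompt.
response = promptlayer_client.run(
    prompt_name="ai-poet",
    input_variables=input_variables,
    prompt_version=2,
)
```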

Logging Metadata

PromptLayer can be used to monitor user behavior, beta deployments, and much more.

Add a metadata object, tags, and a score to each log to better track requests:

```python
input_variables = {
    "topic": "The American Revolution"
}

response = promptlayer_client.run(
    prompt_name="ai-poet", input_variables=input_variables,
    prompt_release_label="prod",
    tags=["quickstart_tutorial"],  # Add tags
    metadata={"user_id": "abc123"})  # Add metadata

print(response["raw_response"].choices[0].message.content)

# Add a score to the log
promptlayer_client.track.score(
    request_id=response["request_id"],
    score=100,
)
```

```javascript
const input_variables = {
  topic: "The American Revolution"
};

const response = await promptLayerClient.run({
  promptName: "ai-poet",
  inputVariables: input_variables,
  promptReleaseLabel: "prod",
  tags: ["quickstart_tutorial"], // Add tags
  metadata: {"user_id": "abc123"} // Add metadata
});

console.log(response["raw_response"].choices[0].message.content);

// Add a score to the log
await promptLayerClient.track.score({
  request_id: response["request_id"],
  score: 100
});
```

This will help you triage error logs when using advanced search, creating datasets, or fine-tuning.

Analytics

Visit the dashboard to see analytics. Learn more

Analytics Screenshot

You can also take advantage of our advanced search by using metadata to search in the sidebar. This is useful for triaging errors by user ID, execution ID, or by score.

Evaluations

Each time you edit the prompt version, you can run an evaluation. Most teams use historical backtests and regression tests to prompt engineer more effectively.

You can build these eval datasets using the request logs we captured above. Learn more here.

Eval Scores

Tracing and Spans

PromptLayer provides powerful tracing capabilities to help you monitor and analyze the execution flow of your applications. With tracing enabled, you can visualize function calls, track LLM requests, measure durations, and inspect inputs and outputs.

Trace Details

To enable tracing, simply initialize the PromptLayer client with enable_tracing set to True:

```python
from promptlayer import PromptLayer

promptlayer_client = PromptLayer(enable_tracing=True)
```

```javascript
import { PromptLayer } from "promptlayer";

const promptlayer = new PromptLayer({
  apiKey: process.env.PROMPTLAYER_API_KEY,
  enableTracing: true,
});
```

You can then use the `@promptlayer_client.traceable` decorator in Python or `wrapWithSpan` in JavaScript to trace custom functions.
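For example, a minimal sketch of tracing a custom helper with the decorator; the helper itself is illustrative:

```python
from promptlayer import PromptLayer

promptlayer_client = PromptLayer(enable_tracing=True)

@promptlayer_client.traceable
def build_topic(user_input: str) -> str:
    # This call appears as a span in the trace, with inputs and outputs.
    return user_input.strip().title()

response = promptlayer_client.run(
    prompt_name="ai-poet",
    input_variables={"topic": build_topic("  the american revolution ")},
)
```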

Read more about tracing here.

Continue reading with Quickstart Part 2 ➡️

Further reading 📖

From the docs:

Articles on prompt engineering:

Technical tutorials: