Commit
Allow assistant-role prompts and multiple prompts
Closes #28. See also #7.
domenic committed Aug 14, 2024
1 parent 6371b94 commit eb6ec5d
Showing 1 changed file, README.md, with 72 additions and 5 deletions.
@@ -109,11 +109,70 @@ const result1 = await predictEmoji("Back to the drawing board");
const result2 = await predictEmoji("This code is so good you should get promoted");
```

(Note that merely creating a session does not cause any new responses from the assistant. We need to call `prompt()` or `promptStreaming()` to get a response.)

Some details on error cases:

* Using both `systemPrompt` and a `{ role: "system" }` prompt in `initialPrompts`, using multiple `{ role: "system" }` prompts, or placing a `{ role: "system" }` prompt anywhere besides the 0th position in `initialPrompts`, will reject with a `TypeError`.
* If the combined token length of all the initial prompts (including the separate `systemPrompt`, if provided) is too large, then the promise will be rejected with a `"QuotaExceededError"` `DOMException`.
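The system-prompt rules above can be sketched as a pure validation function. This is a hypothetical helper for illustration only, not part of the API:

```javascript
// Hypothetical sketch of the system-prompt validation rules, for illustration.
// Throws a TypeError on misuse of the system role; returns nothing on success.
function validateInitialPrompts(initialPrompts, systemPrompt) {
  const systemIndices = initialPrompts
    .map((p, i) => (p.role === "system" ? i : -1))
    .filter(i => i !== -1);

  // At most one { role: "system" } prompt, and only at position 0.
  if (systemIndices.length > 1 || (systemIndices.length === 1 && systemIndices[0] !== 0)) {
    throw new TypeError("A system prompt must be the 0th entry and appear at most once.");
  }

  // The separate systemPrompt option cannot be combined with a system-role initial prompt.
  if (systemPrompt !== undefined && systemIndices.length > 0) {
    throw new TypeError('Cannot use both systemPrompt and a { role: "system" } initial prompt.');
  }
}
```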

### Customizing the role per prompt

Our examples so far have provided `prompt()` and `promptStreaming()` with a single string; such input is assumed to come from the user role. These methods can also take objects in the `{ role, content }` format, or arrays of such objects, in case you want to provide multiple user or assistant messages before getting another assistant response:

```js
const multiUserSession = await ai.assistant.create({
  systemPrompt: "You are a mediator in a discussion between two departments."
});

const result = await multiUserSession.prompt([
  { role: "user", content: "Marketing: We need more budget for advertising campaigns." },
  { role: "user", content: "Finance: We need to cut costs and advertising is on the list." },
  { role: "assistant", content: "Let's explore a compromise that satisfies both departments." }
]);

// `result` will contain a compromise proposal from the assistant.
```

Because of their special behavior of being preserved on context window overflow, system prompts cannot be provided this way.

### Emulating tool use or function-calling via assistant-role prompts

A special case of the above is using the assistant role to emulate tool use or function-calling, by marking a response as coming from the assistant side of the conversation:

```js
const session = await ai.assistant.create({
  systemPrompt: `
    You are a helpful assistant. You have access to the following tools:
    - calculator: A calculator. To use it, write "CALCULATOR: <expression>" where <expression> is a valid mathematical expression.
  `
});

async function promptWithCalculator(prompt) {
  const result = await session.prompt(prompt);

  // Check if the assistant wants to use the calculator tool.
  const match = /^CALCULATOR: (.*)$/.exec(result);
  if (match) {
    const expression = match[1];
    const mathResult = evaluateMathExpression(expression);

    // Add the result to the session so it's in context going forward.
    await session.prompt({ role: "assistant", content: mathResult });

    // Return it as if that's what the assistant said to the user.
    return mathResult;
  }

  // The assistant didn't want to use the calculator. Just return its response.
  return result;
}

console.log(await promptWithCalculator("What is 2 + 2?"));
```
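The example above assumes an `evaluateMathExpression()` helper, which is not part of the API. A minimal sketch for simple arithmetic expressions might look like:

```javascript
// Hypothetical helper for the example above: evaluates a simple arithmetic
// expression, rejecting anything that isn't digits, whitespace, or + - * / ( ) .
function evaluateMathExpression(expression) {
  if (!/^[\d\s+\-*/().]+$/.test(expression)) {
    throw new Error(`Refusing to evaluate: ${expression}`);
  }
  // Safe here only because the input has been restricted to arithmetic characters.
  return String(Function(`"use strict"; return (${expression});`)());
}
```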

We'll likely explore more specific APIs for tool- and function-calling in the future; follow along in [issue #7](https://github.com/explainers-by-googlers/prompt-api/issues/7).

### Configuration of per-session options

In addition to the `systemPrompt` and `initialPrompts` options shown above, the currently-configurable options are [temperature](https://huggingface.co/blog/how-to-generate#sampling) and [top-K](https://huggingface.co/blog/how-to-generate#top-k-sampling). More information about the values for these parameters can be found using the `capabilities()` API explained [below](#capabilities-detection).
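For illustration, a hypothetical helper sketches how these two options might be filled in and clamped against a capabilities-like object. It assumes that object exposes `defaultTopK`, `maxTopK`, and `defaultTemperature` properties; this helper is not part of the API:

```javascript
// Hypothetical helper, for illustration only: fills in and clamps sampling
// options against a capabilities-like object. Assumes the object exposes
// defaultTopK, maxTopK, and defaultTemperature.
function resolveSamplingOptions(capabilities, { topK, temperature } = {}) {
  const resolvedTopK = Math.min(topK ?? capabilities.defaultTopK, capabilities.maxTopK);
  return {
    topK: Math.max(1, Math.floor(resolvedTopK)),
    temperature: temperature ?? capabilities.defaultTemperature
  };
}
```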
@@ -330,10 +389,10 @@ interface AIAssistantFactory {
[Exposed=(Window,Worker)]
interface AIAssistant : EventTarget {
-  Promise<DOMString> prompt(DOMString input, optional AIAssistantPromptOptions options = {});
-  ReadableStream promptStreaming(DOMString input, optional AIAssistantPromptOptions options = {});
+  Promise<DOMString> prompt(AIAssistantPromptInput input, optional AIAssistantPromptOptions options = {});
+  ReadableStream promptStreaming(AIAssistantPromptInput input, optional AIAssistantPromptOptions options = {});
-  Promise<unsigned long long> countPromptTokens(DOMString input, optional AIAssistantPromptOptions options = {});
+  Promise<unsigned long long> countPromptTokens(AIAssistantPromptInput input, optional AIAssistantPromptOptions options = {});
readonly attribute unsigned long long maxTokens;
readonly attribute unsigned long long tokensSoFar;
readonly attribute unsigned long long tokensLeft;
@@ -364,11 +423,16 @@ dictionary AIAssistantCreateOptions {
AICreateMonitorCallback monitor;
DOMString systemPrompt;
-  sequence<AIAssistantPrompt> initialPrompts;
+  sequence<AIAssistantInitialPrompt> initialPrompts;
[EnforceRange] unsigned long topK;
float temperature;
};
+dictionary AIAssistantInitialPrompt {
+  AIAssistantInitialPromptRole role;
+  DOMString content;
+};
dictionary AIAssistantPrompt {
AIAssistantPromptRole role;
DOMString content;
@@ -378,7 +442,10 @@ dictionary AIAssistantPromptOptions {
AbortSignal signal;
};
-enum AIAssistantPromptRole { "system", "user", "assistant" };
+typedef (DOMString or AIAssistantPrompt or sequence<AIAssistantPrompt>) AIAssistantPromptInput;
+enum AIAssistantInitialPromptRole { "system", "user", "assistant" };
+enum AIAssistantPromptRole { "user", "assistant" };
```

### Instruction-tuned versus base models
