diff --git a/docs/api/authentication.mdx b/docs/api/authentication.mdx new file mode 100644 index 00000000..969d131e --- /dev/null +++ b/docs/api/authentication.mdx @@ -0,0 +1,23 @@ +# Authentication + +Every Superface account has an authentication token assigned to it. This token is used to authenticate API calls, but also to determine which set of tools and their associated functions should be returned for use by your agent. + +## API token + +You can find the API token in the [Hub API](https://pod.superface.ai/hub/api) section of your Superface account. + +![The authentication token section of the Hub API in Superface](/img/api/hub-api-auth-token.png) + +## Authentication setup + +The Superface API uses `Bearer` authentication, and expects the token to be passed as part of the headers with each request, for every endpoint. + +``` +Authorization: Bearer + +``` + +For example: + +```curl +curl -H "Authorization: Bearer " https://pod.superface.ai/api/hub/fd +``` diff --git a/docs/api/endpoints.mdx b/docs/api/endpoints.mdx new file mode 100644 index 00000000..4fcd9d35 --- /dev/null +++ b/docs/api/endpoints.mdx @@ -0,0 +1,144 @@ +# API Endpoints + +In order to use the tools that Superface offers in your own agent, you don't have to use lots of different endpoints. In fact, using Superface significantly cuts down the amount of code you need to write to communicate with external APIs. + +## /fd + +`GET https://pod.superface.ai/api/hub/fd` + +Returns a list of the functions currently available for use by users. The API token used to authenticate this endpoint determines which Superface account is used. + +### Example + +```curl +curl -H "Authorization: Bearer " https://pod.superface.ai/api/hub/fd +``` + +### Response + +The response will be an array of function objects similar to this example for retrieving the current weather from Wttr.in. + +```json +[ + { + "type": "function", + "function": { + "name": "weather__current-weather__CurrentWeather", + "description": "Retrieve current weather information for a specified location.\n", + "parameters": { + "type": "object", + "required": ["city"], + "properties": { + "city": { + "type": "string", + "nullable": false, + "description": "Name of the city including state and country, e.g.: \"Prague, Czech Republic\" or \"New York City, NY, USA\"", + "title": "city" + }, + "units": { + "enum": ["C", "F", "K"], + "description": "Units used to represent temperature - Fahrenheit, Celsius, Kelvin\nCelsius by default", + "title": "units" + } + }, + "nullable": true + } + } + } +] +``` + +## /session + +`POST https://pod.superface.ai/api/hub/session` + +Users need to configure their own access credentials for the tools that you offer. In order to do this, we provide a temporary URL that you can use to prompt your users to set up their access. + +This URL will expire 15 minutes after generation. + +To ensure that users can configure, edit, or remove access at any time, you need to assign them an ID and use it when calling `/session`. We recommend that your user IDs are formatted as follows: `your_agent_name|unique_user_id`, and that you store this ID for your users so they can access their configuration in the future.
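+As a rough sketch (the helper function and the storage approach here are illustrative assumptions, not part of the Superface API), an agent might derive and reuse the recommended ID format like this:
+
+```python
+def superface_user_id(agent_name: str, internal_user_id: str) -> str:
+    # Recommended format: your_agent_name|unique_user_id
+    return f"{agent_name}|{internal_user_id}"
+
+# Reuse the same value as the x-superface-user-id header on every
+# /session and /perform call, and store it alongside the user's profile.
+print(superface_user_id("my_agent", "42"))  # -> my_agent|42
+```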
+ +### Example + +```curl +curl -X POST \ + -H "Content-Type: application/json" \ + -H "Authorization: Bearer " \ + -H "x-superface-user-id: " \ + https://pod.superface.ai/api/hub/session +``` + +### Response + +```json +{ + "status": "success", + "configuration_url": "https://pod.superface.ai/api/hub/session/psxis99ux9", + "assistant_hint": "Tell user to go to URL at 'configuration_url' to configure to open configuration. Always show the whole URL to the user. The 'configuration_url' expires in 15 minutes." +} +``` + +## /perform + +`POST https://pod.superface.ai/api/hub/perform/` + +Calls a specific function by the name defined in the function description. At a minimum, this endpoint expects the body object to contain any parameters that are required by this tool and function. Those parameters are also listed in the function description. + +The `x-superface-user-id` header is also required so Superface knows which user's configuration to use when performing the functions. + +### Example + +```curl +curl -X POST \ + -H "Content-Type: application/json" \ + -H "Authorization: Bearer " \ + -H "x-superface-user-id: " \ + -d '{"city": "prague, cz"}' \ + https://pod.superface.ai/api/hub/perform/ +``` + +### Response + +A successful response object will look similar to the JSON shown below. An `assistant_hint` is provided to help your agent understand how to process this response. + +```json +{ + "status": "success", + "assistant_hint": "Format the result in 'result' field to the user. If the user asked for a specific format, respect it", + "result": { + "description": "What a sense of achievement!", + "end": { + "dateTime": "2024-04-03T16:54:47+02:00", + "timeZone": "Europe/Prague" + }, + "kind": "calendar#event", + "organizer": { + "email": "martyn.davies@superface.ai", + "self": true + }, + "reminders": { + "useDefault": true + }, + "sequence": 0, + "start": { + "dateTime": "2024-04-03T16:39:47+02:00", + "timeZone": "Europe/Prague" + }, + "status": "confirmed", + "summary": "Feel successful with Superface" + } +} +``` + +If the user has not configured access to the tool they are trying to use, or the credentials they entered have now expired, the response body will be a prompt for the user that will include a new `action_url`. + +```json +{ + "status": "requires_action", + "assistant_hint": "Tell user to go to URL at 'action_url' to configure access to 'google-calendar'. Then try calling tool again. Always show the whole URL to the user. The 'action_url' expires in 15 minutes.", + "action_type": "configure_access", + "action_url": "https://pod.superface.ai/api/hub/session/9uzs6qz3t8" +} +``` + +Once this step has been completed, the `/perform` action can be run again to complete the task. diff --git a/docs/api/examples/anthropic.mdx b/docs/api/examples/anthropic.mdx new file mode 100644 index 00000000..001d6d29 --- /dev/null +++ b/docs/api/examples/anthropic.mdx @@ -0,0 +1,225 @@ +# Anthropic + +Anthropic is building a series of large language models under the name Claude. In this example we show how to use their approach to [function calls and tools](https://docs.anthropic.com/claude/docs/tool-use) (currently in Beta). + +The tool function definitions and the execution of the API calls for the selected tool(s) are handled by Superface using the [Hub API endpoints](../endpoints). The choice of which tool to use, and other decision making, is handled by Claude. + +You can download this example as a [runnable .ipynb notebook](/notebooks/superface_hub_api_anthropic_weather_example.ipynb).
+ +## Prerequisites + +The Anthropic Python SDK is required for this example. + +```python +pip install anthropic +``` + +## Setup + +Import the dependencies and configure the required constants. Some of these, such as those relating to the `SUPERFACE_USER_ID`, are for example purposes. In a production environment you would handle this differently, as outlined in the [Session Management guide](../sessions). + +```python +import anthropic +import json +import requests as r +from IPython.display import display, Markdown + +# Set a random number of your choice, but don't change it +# once you have run the notebook, otherwise you will create another user. +SUPERFACE_USER_ID_CONSTANT = + +# Use the number to create a unique ID +SUPERFACE_USER_ID = "sfoaidemo|" + str(SUPERFACE_USER_ID_CONSTANT) + +# Default URL for Superface +SUPERFACE_BASE_URL = "https://pod.superface.ai/api/hub" + +# Set the Superface authentication token +SUPERFACE_API_TOKEN=" +To get the current weather forecast for Prague, I should use the weather__current-weather__CurrentWeather function. Let's check if I have the required parameters: + +city: The user provided "Prague" as the city. To be more precise, I'll specify "Prague, Czech Republic". +units: This is an optional parameter. The user did not specify units, so I can omit this and the function will use the default of Celsius. + +I have the required city parameter, so I can proceed with calling the function. + +``` + +Additionally, as part of the response, Claude will provide the name of the selected tool and the required inputs so that a call to the Superface Hub API can be made. + +## Perform function call + +If Claude wants to use a tool (which in this case is true), add the last response to the message history, then extract the function name and the inputs, and use the `perform_action()` helper to pass them to the Hub API. + +```python +if (response.content[1] and response.content[1].type == "tool_use"): + claude_response = response.content[1] + messages.append({ + "role": "assistant", + "content": [ + { + "type": "text", + "text": response.content[0].text + }, + { + "type": claude_response.type, + "id": claude_response.id, + "name": claude_response.name, + "input": claude_response.input + } + ] + }) + + function_name = claude_response.name + function_inputs = claude_response.input + tool_use_id = claude_response.id + + superface_response = perform_action(function_name, function_inputs) + +superface_response +``` + +The response from Superface in this instance will look similar to this: + +```json +{ + "status": "success", + "assistant_hint": "Format the result in 'result' field to the user. If the user asked for a specific format, respect it", + "result": { + "description": "Partly cloudy", + "feelsLike": 16, + "temperature": 16 + } +} +``` + +## Final response + +Now that the Hub API has executed the function and returned a result, this needs to be added to the message history and sent back to Claude to determine a final response.
+ +```python +tool_response_content = [{ + "type": "tool_result", + "tool_use_id": tool_use_id, + "content": superface_response + +}] +claude_response = talk_to_claude("user", tool_response_content) +``` + +Claude will reply with the final response, which can also be added to the message history: + +```python +messages.append({"role": "assistant", "content": claude_response.content[0].text}) +``` + +In notebook form, this can be displayed nicely to the user: + +```python +display(Markdown(claude_response.content[0].text)) +``` + +```text +Current weather in Prague, Czech Republic: Temperature: 16°C Feels like: 16°C Description: Partly cloudy +``` + +## Summary + +This example builds on the function calling example that Anthropic provides; however, with Superface's Hub API you can access many varied APIs, including your own custom tools. + +The approach to implementing those would be similar, especially when you consider that just a single function is required to execute any of the functions an LLM selects, and Superface will handle it from there. + +For more information on how to implement function calling with Anthropic Claude3, take a look at their [documentation](https://docs.anthropic.com/claude/docs/tool-use). diff --git a/docs/api/examples/index.mdx b/docs/api/examples/index.mdx new file mode 100644 index 00000000..6da34e71 --- /dev/null +++ b/docs/api/examples/index.mdx @@ -0,0 +1,9 @@ +import DocCardList from '@theme/DocCardList'; + +# Examples + +In order to best demonstrate how Superface's API works with LLMs, we have created two basic implementations using function calling. + +Both examples are written in Python and can be downloaded as `.ipynb` notebooks that you can use to test using your own credentials. + + diff --git a/docs/api/examples/langchain.mdx b/docs/api/examples/langchain.mdx new file mode 100644 index 00000000..1862116e --- /dev/null +++ b/docs/api/examples/langchain.mdx @@ -0,0 +1,125 @@ +# LangChain + +LangChain is a framework for developing applications that are powered by language models. The LangChain ecosystem is always growing and has a vibrant community that is constantly providing new updates for the latest models and tools. + +In this example we will focus on building a simple agent that can consume and use the function definitions provided by Superface using the [LangChain.js library](https://js.langchain.com/docs/get_started/introduction), OpenAI and Node.js. + +You can get all the code for this example on [GitHub](https://github.com/superfaceai/hubapi-example-langchain). + +## Example breakdown + +This example expands on [LangChain's OpenAI Tool Calling](https://js.langchain.com/docs/integrations/chat/openai#tool-calling) example. Their hardcoded tool for weather has been removed, and replaced with real API calls to the Superface Hub API to get the weather in real time. + +We will call out the additions that we made below. + +```javascript +const { ChatOpenAI } = require('@langchain/openai'); +const { ToolMessage } = require('@langchain/core/messages'); +const axios = require('axios'); + +const OPENAI_API_KEY = ''; +const SUPERFACE_AUTH_TOKEN = ''; +const SUPERFACE_BASE_URL = 'https://pod.superface.ai/api/hub'; +const PROMPT = "What's the weather like in Prague and in Kosice?"; + +(async () => { +``` + +Below we define two helper functions. The first retrieves the list of available tools from the Superface account attached to the `SUPERFACE_AUTH_TOKEN` that was used. + +The second function handles calling the API with the specific function and required payload.
+ +```javascript +async function getSuperfaceTools() { + try { + const response = await axios.get(`${SUPERFACE_BASE_URL}/fd`, { + headers: { + Authorization: `Bearer ${SUPERFACE_AUTH_TOKEN}`, + }, + }); + return response.data; + } catch (error) { + console.error(error); + } +} + +async function performAction(functionName, toolCallArguments) { + try { + const actionResponse = await axios.post( + `${SUPERFACE_BASE_URL}/perform/${functionName}`, + toolCallArguments, + { + headers: { + Authorization: `Bearer ${SUPERFACE_AUTH_TOKEN}`, + 'Content-Type': 'application/json', + 'x-superface-user-id': 'sflangchainexample|123', + }, + } + ); + + let result = JSON.stringify(actionResponse.data); + console.log(`SUPERFACE RESPONSE: ${result}`); + return result; + } catch (error) { + console.error(`PERFORM ERROR: ${error.response}`); + return error.response.data; + } +} +``` + +Then we're back to setting up LangChain's OpenAI bindings. Below we set up the model we want to use, as well as ensure that the latest tools are loaded in via the `getSuperfaceTools()` helper function. + +```javascript +// Bind function to the model as a tool +const chat = new ChatOpenAI({ + modelName: 'gpt-4-1106-preview', + maxTokens: 128, + openAIApiKey: OPENAI_API_KEY, +}).bind({ + tools: await getSuperfaceTools(), + tool_choice: 'auto', +}); +``` + +Create an initial prompt from a "human". In this case the human wants to know about the weather. + +```javascript +// Ask initial question that requires multiple tool calls +const res = await chat.invoke([['human', PROMPT]]); +``` + +OpenAI will choose the tool that is most appropriate for the prompt that was submitted. This could require more than one tool call, so the code below handles this and passes each one over to the `performAction()` helper function. + +```javascript + // Format the results from calling the tool calls back to OpenAI as ToolMessages + const toolMessages = res.additional_kwargs.tool_calls?.map(async toolCall => { + const toolCallResult = await performAction( + toolCall.function.name, + JSON.parse(toolCall.function.arguments) + ); +``` + +Each response is re-formatted as a `ToolMessage`: + +```javascript + return new ToolMessage({ + tool_call_id: toolCall.id, + name: toolCall.function.name, + content: toolCallResult, + }); + }); +``` + +Finally, all the messages and the responses from the Superface Hub API for the selected tool are passed back to OpenAI so it can determine and present the final result for the submitted prompt. + +```javascript + // Send the results back as the next step in the conversation + const finalResponse = await chat.invoke([ + ['human', PROMPT], + res, + ...(await Promise.all(toolMessages ?? [])), + ]); + + console.log(finalResponse.content); +})(); +``` diff --git a/docs/api/examples/mistral.mdx b/docs/api/examples/mistral.mdx new file mode 100644 index 00000000..1ae2de4c --- /dev/null +++ b/docs/api/examples/mistral.mdx @@ -0,0 +1,167 @@ +# MistralAI + +The example below demonstrates how to use the Superface API to provide access to external tools in a MistralAI-powered agent. + +If you want to run this example for yourself as a Jupyter Notebook, you can [download the `.ipynb` file](/notebooks/superface_agent_hub_mistralai_example.ipynb).
+ +## Prerequisites + +Install the following dependencies: + +```bash +pip install pandas "mistralai>=0.1.2" +``` + +## Setup + +```python +import json +import random +import requests as r +from mistralai.client import MistralClient +from mistralai.models.chat_completion import ChatMessage +from IPython.display import display, Markdown + +# Set a random number of your choice, but don't change it +# once you have run the notebook, otherwise you will create another user. +SUPERFACE_USER_ID_CONSTANT = 123456789 + +# Use the number to create a unique ID +SUPERFACE_USER_ID = "sfmstrlaidemo|" + str(SUPERFACE_USER_ID_CONSTANT) + +# Default URL for Superface +SUPERFACE_BASE_URL = "https://pod.superface.ai/api/hub" + +# Set the Superface authentication token +SUPERFACE_AUTH_TOKEN="" + +# Mistral API Key +MISTRAL_API_KEY = "" + +# A new array for the user, system and LLM messages to be stored +messages = [] +``` + +## MistralAI Setup + +Using the MistralAI SDK, set up the client and the model: + +```python +# Setup MistralAI +model = "mistral-large-latest" +client = MistralClient(api_key=MISTRAL_API_KEY) +``` + +## Helper functions + +Below, we have defined two helper functions. + +- The first gets the list of available tools from your Superface account using the [`/fd`](../endpoints) endpoint. +- The second performs the action that the LLM selects using the [`/perform`](../endpoints#perform) endpoint. + +These helpers are for example purposes and you are welcome to build different ways to approach these in whatever manner you choose. + +```python +# Helper function to return the tool function descriptors +def get_superface_tools(): + headers = {"Authorization": "Bearer "+ SUPERFACE_AUTH_TOKEN} + tools = r.get(SUPERFACE_BASE_URL + "/fd", headers=headers) + return tools.json() + +# Helper function to perform the action for all the functions. +# This is the only API call required regardless of what the function is. +def perform_action(tool_name=None, tool_body=None): + headers = {"Authorization": "Bearer "+ SUPERFACE_AUTH_TOKEN, "x-superface-user-id": SUPERFACE_USER_ID} + perform = r.post(SUPERFACE_BASE_URL + "/perform/" + tool_name, headers=headers, json=tool_body) + return json.dumps(perform.json()) +``` + +## Prompt + +```python +# User prompt - The weather tool requires no authentication +prompt = "What is the weather in Prague?" +``` + +## Passing functions to MistralAI + +The code below represents starting a new chat session. + +- The initial user prompt is defined first. +- The model, the user prompt, and the list of function definitions from Superface are passed over to the LLM. The `tool_choice` is set to `auto` so the LLM has freedom to decide for itself what it wants to use. + +```python +messages = [ + ChatMessage(role="user", content=prompt) +] + +response = client.chat( + model=model, + messages=messages, + tools=get_superface_tools(), + tool_choice="auto" +) + +# Output the response so we see what Mistral is doing +response + +# Add the response message to the message history so it stays in context +messages.append(response.choices[0].message) +``` + +## Perform function call + +From the user prompt and list of function definitions above, Mistral will make a decision about which function definition it wants to use, and what the expected parameters are to complete the task. + +These can be extracted from the message history, and used to call the API to get the result using the `perform_action` helper function.
+ +The response is then added to the `messages` array, so that the Mistral LLM can determine a final result to present to the user. + +```python +# Extract tool intents and params from the assistant response +tool_call = response.choices[0].message.tool_calls[0] +function_name = tool_call.function.name +function_params = json.loads(tool_call.function.arguments) + +# Pass the function name and arguments to Superface +run_function = perform_action(function_name, function_params) + +messages.append(ChatMessage(role="tool", name=function_name, content=run_function)) + +# Show the complete message history so far +messages +``` + +## Final response + +All of the information required by Mistral's LLM is now in place, so we pass the full `messages` array back to the model so that it can determine the output for the prompt the user submitted. + +```python +response = client.chat( + model=model, + messages=messages +) + +# Prettify the response as Markdown +display(Markdown(response.choices[0].message.content)) +``` + +The output for this example will look something like this: + +```text +The current weather in Prague, Czech Republic is sunny with a temperature of 13°C. It feels like 13°C. +``` + +## Summary + +The code shown here is a basic example of how to set up MistralAI to accept a prompt and, using the supplied function definitions, decide which function is appropriate to use. + +It's up to you how you want to approach building the right elements into your agent. + +At a minimum, you need the following ways to interface with Superface's API: + +- A way to retrieve the function definitions from the `/fd` endpoint +- A way to create a user session using the `/session` endpoint +- A way to execute the selected function using the `/perform` endpoint + +For more detail on how Function Calling works with MistralAI, [check out their documentation](https://docs.mistral.ai/guides/function-calling/). diff --git a/docs/api/examples/openai.mdx b/docs/api/examples/openai.mdx new file mode 100644 index 00000000..19f1c8b7 --- /dev/null +++ b/docs/api/examples/openai.mdx @@ -0,0 +1,185 @@ +# OpenAI + +The following example outlines how to use OpenAI Function Calling and the Superface API. + +If you want to run this example for yourself as a Jupyter Notebook, you can [download the `.ipynb` file](/notebooks/superface_agent_hub_openai_example.ipynb). + +## Prerequisites + +```python +pip install openai +``` + +## Setup + +Import the dependencies, and set up the required constants. + +```python +import openai +import json +import random +import requests as r +from openai import OpenAI +from IPython.display import display, Markdown + +# Set a random number of your choice, but don't change it +# once you have run the notebook, otherwise you will create another user. +SUPERFACE_USER_ID_CONSTANT = + +# Use the number to create a unique ID +SUPERFACE_USER_ID = "sfoaidemo|" + str(SUPERFACE_USER_ID_CONSTANT) + +# Default URL for Superface +SUPERFACE_BASE_URL = "https://pod.superface.ai/api/hub" + +# Set the Superface authentication token +SUPERFACE_AUTH_TOKEN="" + +# Set the OpenAI API Key +OPENAI_API_KEY="" +``` + +## OpenAI setup + +Next, set up the basis of the OpenAI SDK, including which model to use. The system prompt below can be changed, but it is worth including if you have space for it, to ensure that the prompts returned by Superface's API are handled correctly in context.
+ +```python +# OpenAI Config +client = OpenAI(api_key=OPENAI_API_KEY) +GPT_MODEL = "gpt-4-turbo-preview" +INIT_INSTRUCTIONS = """ +You are a helpful assistant. +Respond to the following prompt by using function_call and then summarize actions. +Ask for clarification if a user request is ambiguous. +Display the agent_hint from the response to the user if it is present. +""" +``` + +## Helper functions + +Below, we have defined two helper functions. + +- The first gets the list of available tools from your Superface account using the [`/fd`](../endpoints) endpoint. +- The second performs the action that the LLM selects using the [`/perform`](../endpoints#perform) endpoint. + +These helpers are for example purposes and you are welcome to build different ways to approach these in whatever manner you choose. + +```python +# Helper function to return the tool function descriptors +def get_superface_tools(): + headers = {"Authorization": "Bearer "+ SUPERFACE_AUTH_TOKEN} + tools = r.get(SUPERFACE_BASE_URL + "/fd", headers=headers) + return tools.json() + +# Helper function to perform the action for all the functions. +# This is the only API call required regardless of what the function is. +def perform_action(tool_name=None, tool_body=None): + headers = {"Authorization": "Bearer "+ SUPERFACE_AUTH_TOKEN, "x-superface-user-id": SUPERFACE_USER_ID} + perform = r.post(SUPERFACE_BASE_URL + "/perform/" + tool_name, headers=headers, json=tool_body) + return json.dumps(perform.json()) + +# Helper function for calling the OpenAI Chat Completions API +def perform_chat_request(messages, tools=None, tool_choice=None, model=GPT_MODEL): + try: + response = client.chat.completions.create( + model=model, + messages=messages, + tools=tools, + tool_choice=tool_choice, + ) + return response + except Exception as e: + print("Unable to generate ChatCompletion response") + print(f"Exception: {e}") + return e +``` + +## Prompt + +```python +# User prompt - The weather tool requires no authentication +prompt = "What is the weather in Prague?" +``` + +## Passing functions to OpenAI + +The code below represents starting a new chat session, similar to when you first load ChatGPT. + +- The `system` prompt is loaded first to instruct the LLM on how to handle any further messages. +- The initial user prompt is defined as the subsequent message +- The system prompt, the user prompt, and the list of function definitions from Superface are passed over to the LLM + +```python +messages = [] +messages.append({"role": "system", "content": INIT_INSTRUCTIONS}) + +messages.append({ + "role": "user", + "content": prompt +}) + +chat_response = perform_chat_request( + messages, tools=get_superface_tools() +) +``` + +## Perform function call + +The OpenAI LLM responds with a decision that it needs to use a function to complete the task in the user's prompt. Good news though, you passed in a list of function definitions and it has selected one (or perhaps multiple) that seems appropriate to do it. 
+ +```python +assistant_message = chat_response.choices[0].message +messages.append(assistant_message) + +# Uncomment assistant_message if you want to see the response from OpenAI +#assistant_message + +tool_calls = assistant_message.tool_calls + +if (assistant_message.tool_calls): +    # Assistant wants to call a tool, run it + +    for tool_call in tool_calls: +        function_name = tool_call.function.name +        function_args = json.loads(tool_call.function.arguments) +        function_response = perform_action(function_name, function_args) +        #print(function_response) + +        messages.append( +            { +                "tool_call_id": tool_call.id, +                "role": "tool", +                "name": function_name, +                "content": function_response, +            } +        ) + +    second_chat_response = perform_chat_request(messages, tools=get_superface_tools()) +    #print(second_chat_response) + +    if second_chat_response.choices[0].message.content: +        display(Markdown(second_chat_response.choices[0].message.content)) +``` + +## Final response + +The final response will be something similar to: + +```text +The current weather in Prague, Czech Republic is partly cloudy with a temperature of 13°C, and it feels like 13°C. +``` + +## Summary + +The code shown here is a basic example of how to set up OpenAI to accept a prompt and, using the supplied function definitions, decide which function is appropriate to use. + +Almost all of the code could be written differently, so it's up to you how you want to approach building the right elements into your agent. + +At a minimum, you need: + +- A way to retrieve the function definitions from the `/fd` endpoint +- A way to create a user session using the `/session` endpoint +- A way to execute the selected function using the `/perform` endpoint + +For more information on how to work with Function Calling with OpenAI, see the [OpenAI Function Calling](https://platform.openai.com/docs/guides/function-calling) documentation. diff --git a/docs/api/index.mdx b/docs/api/index.mdx new file mode 100644 index 00000000..53b8ea65 --- /dev/null +++ b/docs/api/index.mdx @@ -0,0 +1,57 @@ +# Overview + +Superface's Hub API enables you to connect to and use the tools added to your account via API. + +This is particularly useful for developers who are building their own agents, and want to offer users the ability to connect to external APIs and services but don't want to build those integrations for themselves. + +## How it works + +LLMs such as [OpenAI](https://openai.com), [MistralAI](https://mistral.ai), [Anthropic](https://anthropic.com) and projects like [LangChain](https://langchain.com) allow developers to define and use functions that extend the functionality and connectivity of LLMs by providing user-specific, real-time, or more context-specific data in response to user prompts. + +### Example function + +These tools are defined and represented as JSON objects that outline what the tool is, what it does, and what parameters and value types are required in order to use it correctly.
+ +For example, a simple tool to get the current weather from the service Wttr.in looks like this when defined in this way: + +```json +{ + "type": "function", + "function": { + "name": "weather__current-weather__CurrentWeather", + "description": "Retrieve current weather information for a specified location.\n", + "parameters": { + "type": "object", + "required": ["city"], + "properties": { + "city": { + "type": "string", + "nullable": false, + "description": "Name of the city including state and country, e.g.: \"Prague, Czech Republic\" or \"New York City, NY, USA\"", + "title": "city" + }, + "units": { + "enum": ["C", "F", "K"], + "description": "Units used to represent temperature - Fahrenheit, Celsius, Kelvin\nCelsius by default", + "title": "units" + } + }, + "nullable": true + } + } +} +``` + +At a minimum, this description can be used by an LLM to determine which information it should gather before executing an API call to a specific service. Better still, it can use it to determine _which_ tool, or tools, should be used to complete the prompt a user entered. + +## Superface Hub API + +Superface provides the ability to connect various tools such as the Google Suite, Notion, Jira, Todoist, HubSpot, Salesforce and [many, many more](./tools/available-tools) to custom agents via our API. + +Any tools that are added to your account are made available to your end users, giving your agent a host of additional connectivity in just a few API calls. + +Using the Superface API you can: + +- Add function descriptions for all available tools with a single API call +- Allow users to securely authenticate their own accounts with any tools your agent offers +- Add, remove, or build custom tools and make them available to your agent at any time diff --git a/docs/api/sessions.mdx b/docs/api/sessions.mdx new file mode 100644 index 00000000..93d9d3c2 --- /dev/null +++ b/docs/api/sessions.mdx @@ -0,0 +1,72 @@ +# Session Management + +Superface provides a user interface that allows users to securely configure the accounts they will use with any of the tools you are providing via your agent. + +In order to manage this, we handle sessions in a way that allows users access to their configuration at any time, whilst providing your agent with the correct prompts to help the user get set up. + +## Identifying users + +In order to ensure that your users are able to access their tool configuration at any time, and to ensure Superface is able to identify which of your users is which, you must create a unique ID for each user. + +This ID is used when creating a session via the `/session` endpoint, and also when calling the `/perform` endpoint to execute a particular function on behalf of a user. + +### Unique ID format + +How you format the unique ID for your users is up to you, as long as the same ID is always used to identify a particular user. You may already have unique IDs assigned to your users, and you are welcome to use them with Superface as well. + +For example, an alphanumeric string such as `youragentname|unique_user_id` would be sufficient. + +:::tip Store the unique ID +Remember to store the unique ID as part of your user's profile. Superface's API will need it to continue to identify their tool configurations. +::: + +## Create a new session + +To create a new session via the API, use the [`/session`](./endpoints#session) endpoint, making sure to include the `x-superface-user-id` header with your user's unique ID as the value.
+ +Depending on the purpose of your agent, you can choose when to create a session for a user, but it needs to be done at the point you require a user to configure their access to a tool (or tools), or if you want to provide them with a URL to modify or revoke access. + +```curl +curl -X POST \ + -H "Content-Type: application/json" \ + -H "Authorization: Bearer " \ + -H "x-superface-user-id: " \ + https://pod.superface.ai/api/hub/session +``` + +The response to this request will look similar to this: + +```json +{ + "status": "success", + "configuration_url": "https://pod.superface.ai/api/hub/session/psxis99ux9", + "assistant_hint": "Tell user to go to URL at 'configuration_url' to configure to open configuration. Always show the whole URL to the user. The 'configuration_url' expires in 15 minutes." +} +``` + +The `configuration_url` is provided by Superface and allows the user to securely authenticate their own credentials for any of the tools you have made available. + +An `assistant_hint` is provided as a helper for your agent so that it can understand the context of the response to this request. + +### URL expiry + +The `configuration_url` will expire after 15 minutes. However, as long as `x-superface-user-id` is included in the headers of your request to `/session`, a new URL can be generated that will allow the user to access their tool authentications at any time. + +## Session Prompts + +The API will respond with prompts that enable the user to set up, or re-authenticate, tools in the following scenarios: + +- The `/perform` endpoint is called, but the user's authentication for the tool required is not set up, or has expired. +- No session exists for the user identified in the `x-superface-user-id` header. + +In these cases, the response will default to: + +```json +{ + "status": "success", + "configuration_url": "https://pod.superface.ai/api/hub/session/psxis99ux9", + "assistant_hint": "Tell user to go to URL at 'configuration_url' to configure to open configuration. Always show the whole URL to the user. The 'configuration_url' expires in 15 minutes." +} +``` + +The `assistant_hint` will change depending on the context. For example, if a user's credentials have expired for a particular service, such as Google, this detail will be included so your agent can present the most appropriate context to the user. diff --git a/docs/api/setup.mdx b/docs/api/setup.mdx new file mode 100644 index 00000000..4f2dbc58 --- /dev/null +++ b/docs/api/setup.mdx @@ -0,0 +1,45 @@ +# Setup a Hub + +The following steps will get you set up to use tools via the Hub API. + +## Step 1: Sign up + +Start by registering a new account for [Superface](https://pod.superface.ai). + +## Step 2: Add tools + +Think about which tools you want to be able to access via the Hub API. You can add any tools that Superface has already built from the Tools section. + +If you want to add your own tools, you can do this by using our Tool Authoring system. You can see more about how to do this in the [Create Tools documentation](../tools/create-tools). + +If you are expecting that users will provide their own credentials for these tools in order to use them in your agent, or application, you don't need to configure them any further. + +![The tools choice in the Superface UI](/img/api/hub-api-tool-choice.png) + +## Step 3: Check your function definitions + +After adding some tools (you'll have the Wttr.in tool by default), they will show up in your Tool Function Definitions schema.
This schema is used by LLMs to understand which tools are available and what they can do. + +You can find yours by clicking on Hub, and selecting Hub API. + +![The Hub API main screen](/img/api/hub-api-function-definitions.png) + +You can see what the definitions look like by clicking on _Function Definitions JSON_. This will show you the JSON schema that will be returned from the `/fd` endpoint when called. + +## Step 4: Set your authentication + +There are two ways to handle how tools are authenticated when using the Hub API. You must choose the option that best suits your end use case. + +![The user authentication choices](/img/api/hub-api-authentication.png) + +**Use my credentials**: If you plan on only using your own accounts, or private APIs that you and your colleagues have access to, then _Use my credentials_ is fine. You will need to configure these tools from the Tools section if you choose this. + +**Users provide their own credentials**: If you are going to offer public access to an agent, or application, and require that users provide their own account credentials for the tools on offer, choose _Users provide their own credentials_. This will ensure that they are prompted to authenticate securely before a tool can be used. + +## Step 5: Start building + +That's all the setup that is required in the Superface interface. Your Hub now has tools and is ready for you to start building your implementation. + +:::note Are multiple hubs possible? +Superface does not currently offer the ability to manage multiple Hub configurations under one account. If you want to build multiple projects, or use both a development and production hub, you will need to create an account for each project. +::: diff --git a/docs/index.mdx b/docs/index.mdx index fd15babe..494ac132 100644 --- a/docs/index.mdx +++ b/docs/index.mdx @@ -7,7 +7,7 @@ import DocCardList from '@theme/DocCardList'; # Welcome to Superface -**Superface allows GPT builders to easily connect to any API, enabling them to create, retrieve, and manage data from external platforms.** +**Superface allows GPT and agent builders to easily connect to any API, enabling them to create, retrieve, and manage data from external platforms.** ![A diagram that represents Superface as the connection between OpenAI Custom GPT and other tools](/img/HubIllustration2.webp) @@ -23,16 +23,25 @@ import DocCardList from '@theme/DocCardList';
+
=0.1.2\" --quiet" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "d5kgBfH2b55h" + }, + "outputs": [], + "source": [ + "import json\n", + "import requests as r\n", + "from mistralai.client import MistralClient\n", + "from mistralai.models.chat_completion import ChatMessage\n", + "from IPython.display import display, Markdown\n", + "\n", + "# Set a random number of your choice, but don't change it\n", + "# once you have run the notebook, otherwise you will create another user.\n", + "SUPERFACE_USER_ID_CONSTANT = \n", + "\n", + "# Use the number to create a unique ID\n", + "SUPERFACE_USER_ID = \"sfoaidemo|\" + str(SUPERFACE_USER_ID_CONSTANT)\n", + "\n", + "# Default URL for Superface\n", + "SUPERFACE_BASE_URL = \"https://pod.superface.ai/api/hub\"\n", + "\n", + "# Set the Superface authentication token\n", + "SUPERFACE_AUTH_TOKEN=\"\"\n", + "\n", + "# Mistral API Key\n", + "MISTRAL_API_KEY = \"\"\n", + "\n", + "messages = []" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Setup MistralAI\n", + "model = \"mistral-large-latest\"\n", + "client = MistralClient(api_key=\"MISTRAL_API_KEY\")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "dNtjQSYgEktu" + }, + "outputs": [], + "source": [ + "# Define the helper functions used to get and perform functions\n", + "\n", + "def get_superface_tools():\n", + " headers = {\"Authorization\": \"Bearer \"+ SUPERFACE_AUTH_TOKEN}\n", + " tools = r.get(SUPERFACE_BASE_URL + \"/fd\", headers=headers)\n", + " return tools.json()\n", + "\n", + "# Helper function to perform the action for all the functions.\n", + "# This is the only API call required regardless of what the function is.\n", + "def perform_action(tool_name=None, tool_body=None):\n", + " headers = {\"Authorization\": \"Bearer \"+ SUPERFACE_AUTH_TOKEN, \"x-superface-user-id\": SUPERFACE_USER_ID}\n", + " perform_result = r.post(SUPERFACE_BASE_URL + \"/perform/\" + tool_name, headers=headers, json=tool_body)\n", + " return json.dumps(perform_result.json())" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "AarnYwkJhQFs" + }, + "outputs": [], + "source": [ + "prompt = \"What is the weather in Prague?\" # @param {type:\"string\"}\n", + "\n", + "messages = [\n", + " ChatMessage(role=\"user\", content=prompt)\n", + "]\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/" + }, + "id": "mjaSrj0BhneA", + "outputId": "f13a9772-1c5b-4bcc-eb97-1042a1ee3246" + }, + "outputs": [ + { + "data": { + "text/plain": [ + "ChatCompletionResponse(id='7253f94922b04f1cbca70aad050868ac', object='chat.completion', created=1711469166, model='mistral-large-latest', choices=[ChatCompletionResponseChoice(index=0, message=ChatMessage(role='assistant', content='', name=None, tool_calls=[ToolCall(id='null', type=, function=FunctionCall(name='weather__current-weather__CurrentWeather', arguments='{\"city\": \"Prague, Czech Republic\"}'))]), finish_reason=)], usage=UsageInfo(prompt_tokens=4201, total_tokens=4234, completion_tokens=33))" + ] + }, + "execution_count": 6, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "response = client.chat(\n", + " model=model,\n", + " messages=messages,\n", + " tools=get_superface_tools(),\n", + " tool_choice=\"auto\"\n", + ")\n", + "\n", + "response" + ] + }, + { + "cell_type": "code", + "execution_count": null, + 
"metadata": { + "id": "beGN1uWbwHF8" + }, + "outputs": [], + "source": [ + "messages.append(response.choices[0].message)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/" + }, + "id": "72EBIb8KidnB", + "outputId": "5e3dcf01-4cbe-470a-f180-ae639fe080fe" + }, + "outputs": [ + { + "data": { + "text/plain": [ + "[ChatMessage(role='user', content='What is the weather in Prague?', name=None, tool_calls=None),\n", + " ChatMessage(role='assistant', content='', name=None, tool_calls=[ToolCall(id='null', type=, function=FunctionCall(name='weather__current-weather__CurrentWeather', arguments='{\"city\": \"Prague, Czech Republic\"}'))]),\n", + " ChatMessage(role='tool', content='{\"status\": \"success\", \"assistant_hint\": \"Format the result in \\'result\\' field to the user. If the user asked for a specific format, respect it\", \"result\": {\"description\": \"Sunny\", \"feelsLike\": 13, \"temperature\": 13}}', name='weather__current-weather__CurrentWeather', tool_calls=None)]" + ] + }, + "execution_count": 8, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "# Extract tool intents and params from the assistant response\n", + "tool_call = response.choices[0].message.tool_calls[0]\n", + "function_name = tool_call.function.name\n", + "function_params = json.loads(tool_call.function.arguments)\n", + "\n", + "# Pass the function name and arguments to Superface\n", + "run_function = perform_action(function_name, function_params)\n", + "\n", + "messages.append(ChatMessage(role=\"tool\", name=function_name, content=run_function))\n", + "messages" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/", + "height": 46 + }, + "id": "EDrcb3uyjdWr", + "outputId": "e8b1750e-6145-4841-cc95-def1acb8e5ef" + }, + "outputs": [ + { + "data": { + "text/markdown": [ + "The current weather in Prague, Czech Republic is sunny with a temperature of 13°C. It feels like 13°C." + ], + "text/plain": [ + "" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "response = client.chat(\n", + " model=model,\n", + " messages=messages\n", + ")\n", + "\n", + "display(Markdown(response.choices[0].message.content))" + ] + } + ], + "metadata": { + "colab": { + "provenance": [] + }, + "kernelspec": { + "display_name": "Python 3", + "name": "python3" + }, + "language_info": { + "name": "python" + } + }, + "nbformat": 4, + "nbformat_minor": 0 +} diff --git a/static/notebooks/superface_agent_hub_openai_example.ipynb b/static/notebooks/superface_agent_hub_openai_example.ipynb new file mode 100644 index 00000000..9bf9546c --- /dev/null +++ b/static/notebooks/superface_agent_hub_openai_example.ipynb @@ -0,0 +1,228 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": { + "id": "5kMCn52hDeNF" + }, + "source": [ + "# Superface Agent Hub - Open AI Function Calling Example\n", + "In this notebook we demonstrate how to use the Superface Agent Hub to connect your OpenAI powered agent to external tools and APIs in a way that allows for both personal and third-party use." + ] + }, + { + "cell_type": "code", + "execution_count": 1, + "metadata": { + "id": "9gsOeblMhm1j" + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "\u001b[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. 
This behaviour is the source of the following dependency conflicts.\n", + "jupyter-server 1.11.0 requires anyio<4,>=3.1.0, but you have anyio 4.3.0 which is incompatible.\u001b[0m\n", + "Note: you may need to restart the kernel to use updated packages.\n" + ] + } + ], + "source": [ + "%pip install openai --quiet" + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "metadata": { + "id": "12W1L6n1h5Z7" + }, + "outputs": [], + "source": [ + "import openai\n", + "import json\n", + "import requests as r\n", + "from openai import OpenAI\n", + "from IPython.display import display, Markdown\n", + "\n", + "# Set a random number of your choice, but don't change it\n", + "# once you have run the notebook, otherwise you will create another user.\n", + "SUPERFACE_USER_ID_CONSTANT = \n", + "\n", + "# Use the number to create a unique ID\n", + "SUPERFACE_USER_ID = \"sfoaidemo|\" + str(SUPERFACE_USER_ID_CONSTANT)\n", + "\n", + "# Default URL for Superface\n", + "SUPERFACE_BASE_URL = \"https://pod.superface.ai/api/hub\"\n", + "\n", + "# Set the Superface authentication token\n", + "SUPERFACE_AUTH_TOKEN=\"\"\n", + "\n", + "# Set the OpenAI API Key\n", + "OPENAI_API_KEY=\"\"\n", + "\n", + "\n", + "# OpenAI Config\n", + "client = OpenAI(api_key=OPENAI_API_KEY)\n", + "GPT_MODEL = \"gpt-4-turbo-preview\"\n", + "INIT_INSTRUCTIONS = \"\"\"\n", + "You are a helpful assistant.\n", + "Respond to the following prompt by using function_call and then summarize actions.\n", + "Ask for clarification if a user request is ambiguous.\n", + "Display the agent_hint from the response to the user if it is present.\n", + "\"\"\"" + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "metadata": { + "id": "i4Ve_YBxEOlY" + }, + "outputs": [], + "source": [ + "# Helper function to return the tool function descriptors\n", + "def get_superface_tools():\n", + " headers = {\"Authorization\": \"Bearer \"+ SUPERFACE_AUTH_TOKEN}\n", + " tools = r.get(SUPERFACE_BASE_URL + \"/fd\", headers=headers)\n", + " return tools.json()\n", + "\n", + "# Helper function to perform the action for all the functions.\n", + "# This is the only API call required regardless of what the function is.\n", + "def perform_action(tool_name=None, tool_body=None):\n", + " headers = {\"Authorization\": \"Bearer \"+ SUPERFACE_AUTH_TOKEN, \"x-superface-user-id\": SUPERFACE_USER_ID}\n", + " perform = r.post(SUPERFACE_BASE_URL + \"/perform/\" + tool_name, headers=headers, json=tool_body)\n", + " return json.dumps(perform.json())\n", + "\n", + "# Helper function for calling the OpenAI Chat Completions API\n", + "def perform_chat_request(messages, tools=None, tool_choice=None, model=GPT_MODEL):\n", + " try:\n", + " response = client.chat.completions.create(\n", + " model=model,\n", + " messages=messages,\n", + " tools=tools,\n", + " tool_choice=tool_choice,\n", + " )\n", + " return response\n", + " except Exception as e:\n", + " print(\"Unable to generate ChatCompletion response\")\n", + " print(f\"Exception: {e}\")\n", + " return e" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "metadata": { + "id": "JyvEKQqvSq9E" + }, + "outputs": [], + "source": [ + "# User prompt - The weather tool requires no authentication\n", + "prompt = \"What is the weather in Prague?\"\n" + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/", + "height": 81 + }, + "id": "5zG-Lpbkld-A", + "outputId": "9a109b34-5682-4449-a516-45b88a796b86" + }, + "outputs": [ + { + "name": "stdout", + 
"output_type": "stream", + "text": [ + "ChatCompletion(id='chatcmpl-99u9jr7AmC358uN6FtFYumvuf720A', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content='The current weather in Prague, Czech Republic is partly cloudy with a temperature of 13°C, and it feels like 13°C.', role='assistant', function_call=None, tool_calls=None))], created=1712147843, model='gpt-4-0125-preview', object='chat.completion', system_fingerprint='fp_a7daf7c51e', usage=CompletionUsage(completion_tokens=28, prompt_tokens=2314, total_tokens=2342))\n" + ] + }, + { + "data": { + "text/markdown": [ + "The current weather in Prague, Czech Republic is partly cloudy with a temperature of 13°C, and it feels like 13°C." + ], + "text/plain": [ + "" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "messages = []\n", + "messages.append({\"role\": \"system\", \"content\": INIT_INSTRUCTIONS})\n", + "\n", + "\n", + "messages.append({\n", + " \"role\": \"user\",\n", + " \"content\": prompt\n", + "})\n", + "\n", + "chat_response = perform_chat_request(\n", + " messages, tools=get_superface_tools()\n", + ")\n", + "\n", + "assistant_message = chat_response.choices[0].message\n", + "messages.append(assistant_message)\n", + "\n", + "#assistant_message\n", + "\n", + "tool_calls = assistant_message.tool_calls\n", + "\n", + "if (assistant_message.tool_calls):\n", + " # Assistant wants to call a tool, run it\n", + "\n", + " for tool_call in tool_calls:\n", + " function_name = tool_call.function.name\n", + " function_args = json.loads(tool_call.function.arguments)\n", + " function_response = perform_action(function_name, function_args)\n", + " #print(function_response)\n", + "\n", + " messages.append(\n", + " {\n", + " \"tool_call_id\": tool_call.id,\n", + " \"role\": \"tool\",\n", + " \"name\": function_name,\n", + " \"content\": function_response,\n", + " }\n", + " )\n", + "\n", + " second_chat_response = perform_chat_request(messages, tools=get_superface_tools())\n", + " print(second_chat_response)\n", + "\n", + " if second_chat_response.choices[0].message.content:\n", + " display(Markdown(second_chat_response.choices[0].message.content))" + ] + } + ], + "metadata": { + "colab": { + "provenance": [] + }, + "kernelspec": { + "display_name": "pytorch-env", + "language": "python", + "name": "pytorch-env" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.9.7" + } + }, + "nbformat": 4, + "nbformat_minor": 1 +} diff --git a/static/notebooks/superface_hub_api_anthropic_weather_example.ipynb b/static/notebooks/superface_hub_api_anthropic_weather_example.ipynb new file mode 100644 index 00000000..bb559507 --- /dev/null +++ b/static/notebooks/superface_hub_api_anthropic_weather_example.ipynb @@ -0,0 +1,356 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": { + "id": "qabP5mEbU86f" + }, + "source": [ + "# Superface Hub API - Anthropic Claude3 Example\n", + "\n", + "This notebook demonstrates how to use Superface's Hub API to with the function calling capability of the Claude3 LLMs." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 2, + "metadata": { + "id": "iGEDJkt9VCxe" + }, + "outputs": [], + "source": [ + "%pip install anthropic --quiet" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "D6uZXG6lU4xG" + }, + "source": [ + "## Setup and configuration" + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "metadata": { + "id": "v5uK6K_oWMP1" + }, + "outputs": [], + "source": [ + "import anthropic\n", + "import json\n", + "import requests as r\n", + "from IPython.display import display, Markdown\n", + "\n", + "# Set a random number of your choice, but don't change it\n", + "# once you have run the notebook, otherwise you will create another user.\n", + "SUPERFACE_USER_ID_CONSTANT = 123456789\n", + "\n", + "# Use the number to create a unique ID\n", + "SUPERFACE_USER_ID = \"sfoaidemo|\" + str(SUPERFACE_USER_ID_CONSTANT)\n", + "\n", + "# Default URL for Superface\n", + "SUPERFACE_BASE_URL = \"https://pod.superface.ai/api/hub\"\n", + "\n", + "# Set the Superface authentication token\n", + "SUPERFACE_AUTH_TOKEN=\"\"\n", + "\n", + "# Set the OpenAI API Key\n", + "ANTHROPIC_API_KEY=\"\"\n", + "\n", + "client = anthropic.Anthropic(api_key=ANTHROPIC_API_KEY)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "TSkiGsdSVSHx" + }, + "source": [ + "## Helper functions\n", + "Defines helpers for communicating with the Superface HUB API, and also with Claude.\n", + "\n", + "Anthropic expects a slightly different JSON schema for function definition than that used by OpenAI, MistralAI and LangChain, which is what Hub API delivers by default.\n", + "\n", + "To combat this, the `get_formatted_tools()` function is required to alter the structure and keep Claude happy." + ] + }, + { + "cell_type": "code", + "execution_count": 6, + "metadata": { + "id": "De6vpHZuW6rk" + }, + "outputs": [], + "source": [ + "# Helper function to return the tool function descriptors\n", + "def get_superface_tools():\n", + " headers = {\"Authorization\": \"Bearer \"+ SUPERFACE_AUTH_TOKEN}\n", + " tools = r.get(SUPERFACE_BASE_URL + \"/fd\", headers=headers)\n", + " return tools.json()\n", + "\n", + "# Helper function to perform the action for all the functions.\n", + "# This is the only API call required regardless of what the function is.\n", + "def perform_action(tool_name=None, tool_body=None):\n", + " headers = {\"Authorization\": \"Bearer \"+ SUPERFACE_AUTH_TOKEN, \"x-superface-user-id\": SUPERFACE_USER_ID}\n", + " perform = r.post(SUPERFACE_BASE_URL + \"/perform/\" + tool_name, headers=headers, json=tool_body)\n", + " return json.dumps(perform.json())\n", + "\n", + "# Anthropic uses a slightly different schema, so reformat the function definitions JSON to suit\n", + "def get_formatted_tools():\n", + " original_tools = get_superface_tools()\n", + " formatted_tools = []\n", + "\n", + " for tool in original_tools:\n", + " formatted_tools.append(tool['function'])\n", + "\n", + " for tool in formatted_tools:\n", + " tool['input_schema'] = tool.pop(\"parameters\")\n", + "\n", + " return formatted_tools\n", + "\n", + "def talk_to_claude(role=None, message=None):\n", + " messages.append({\"role\": role, \"content\": message})\n", + " response = client.beta.tools.messages.create(\n", + " model=\"claude-3-opus-20240229\",\n", + " max_tokens=1024,\n", + " system=\"Today is April 5, 2024\",\n", + " tools=get_formatted_tools(),\n", + " messages=messages,\n", + " )\n", + " return response" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "T-KIowX7V7IU" + }, + 
"source": [ + "## Message history\n", + "The cell below sets up the message history. If you want to start your session again, you can just re-run this cell. There is no need to start again from the top of the notebook." + ] + }, + { + "cell_type": "code", + "execution_count": 7, + "metadata": { + "id": "Wnm_GgOUznxT" + }, + "outputs": [], + "source": [ + "messages = []" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "KrHGHmIwWI8V" + }, + "source": [ + "## Submit an initial prompt" + ] + }, + { + "cell_type": "code", + "execution_count": 8, + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/" + }, + "id": "AynWL3B0WvGq", + "outputId": "0fbd120a-84e2-4816-fc2a-8499cbb16fca" + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "ToolsBetaMessage(id='msg_01KL4PHnAeMiY13AaV3fLMQN', content=[TextBlock(text='\\nTo get the current weather forecast for Prague, I should use the weather__current-weather__CurrentWeather function. Let\\'s check if I have the required parameters:\\n\\ncity: The user provided \"Prague\" as the city. To be more precise, I\\'ll specify \"Prague, Czech Republic\".\\nunits: This is an optional parameter. The user did not specify units, so I can omit this and the function will use the default of Celsius.\\n\\nI have the required city parameter, so I can proceed with calling the function.\\n', type='text'), ToolUseBlock(id='toolu_01SBw1Bov8VnjnL7Q6u16hYG', input={'city': 'Prague, Czech Republic'}, name='weather__current-weather__CurrentWeather', type='tool_use')], model='claude-3-opus-20240229', role='assistant', stop_reason='tool_use', stop_sequence=None, type='message', usage=Usage(input_tokens=4595, output_tokens=179))\n" + ] + } + ], + "source": [ + "response = talk_to_claude(\"user\", \"What's the weather like in Prague?\")\n", + "print(response)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "T-ViCHmwoWMv" + }, + "source": [ + "## Find out what Claude is thinking" + ] + }, + { + "cell_type": "code", + "execution_count": 9, + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/" + }, + "id": "LatXYvG7lrPQ", + "outputId": "75aec956-793f-4a71-d74e-5d4a627684cc" + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "\n", + "To get the current weather forecast for Prague, I should use the weather__current-weather__CurrentWeather function. Let's check if I have the required parameters:\n", + "\n", + "city: The user provided \"Prague\" as the city. To be more precise, I'll specify \"Prague, Czech Republic\".\n", + "units: This is an optional parameter. The user did not specify units, so I can omit this and the function will use the default of Celsius.\n", + "\n", + "I have the required city parameter, so I can proceed with calling the function.\n", + "\n" + ] + } + ], + "source": [ + "print(response.content[0].text)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "JV4TlECh6v1m" + }, + "source": [ + "## Run the tool Claude chose with Hub API" + ] + }, + { + "cell_type": "code", + "execution_count": 10, + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/", + "height": 53 + }, + "id": "Ft8sKHApo-jz", + "outputId": "91b47aaa-949d-445e-95c9-a3d534fcc5fc" + }, + "outputs": [ + { + "data": { + "application/vnd.google.colaboratory.intrinsic+json": { + "type": "string" + }, + "text/plain": [ + "'{\"status\": \"success\", \"assistant_hint\": \"Format the result in \\'result\\' field to the user. 
If the user asked for a specific format, respect it\", \"result\": {\"description\": \"Partly cloudy\", \"feelsLike\": 16, \"temperature\": 16}}'" + ] + }, + "execution_count": 10, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "if (response.content[1] and response.content[1].type == \"tool_use\"):\n", + " claude_response = response.content[1]\n", + " messages.append({\n", + " \"role\": \"assistant\",\n", + " \"content\": [\n", + " {\n", + " \"type\": \"text\",\n", + " \"text\": response.content[0].text\n", + " },\n", + " {\n", + " \"type\": claude_response.type,\n", + " \"id\": claude_response.id,\n", + " \"name\": claude_response.name,\n", + " \"input\": claude_response.input\n", + " }\n", + " ]\n", + " })\n", + "\n", + " function_name = claude_response.name\n", + " function_inputs = claude_response.input\n", + " tool_use_id = claude_response.id\n", + "\n", + " superface_response = perform_action(function_name, function_inputs)\n", + "\n", + "superface_response" + ] + }, + { + "cell_type": "code", + "execution_count": 11, + "metadata": { + "id": "2IWPuDVjsVOr" + }, + "outputs": [], + "source": [ + "tool_response_content = [{\n", + " \"type\": \"tool_result\",\n", + " \"tool_use_id\": tool_use_id,\n", + " \"content\": superface_response\n", + "\n", + "}]\n", + "claude_response = talk_to_claude(\"user\", tool_response_content)\n", + "messages.append({\"role\": \"assistant\", \"content\": claude_response.content[0].text})" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "qCAaNj4THGGp" + }, + "source": [ + "## Final response" + ] + }, + { + "cell_type": "code", + "execution_count": 12, + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/", + "height": 46 + }, + "id": "wN0oxy24zHRo", + "outputId": "ea3de838-bbb0-46e7-bc94-adf607415424" + }, + "outputs": [ + { + "data": { + "text/markdown": [ + "Current weather in Prague, Czech Republic:\n", + "Temperature: 16°C\n", + "Feels like: 16°C\n", + "Description: Partly cloudy" + ], + "text/plain": [ + "" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "display(Markdown(claude_response.content[0].text))" + ] + } + ], + "metadata": { + "colab": { + "provenance": [] + }, + "kernelspec": { + "display_name": "Python 3", + "name": "python3" + }, + "language_info": { + "name": "python" + } + }, + "nbformat": 4, + "nbformat_minor": 0 +}