Updates to tool use and switch to Converse API #7

Open · wants to merge 1 commit into base: master
@@ -15,7 +15,7 @@
"source": [
"## How to get started\n",
"\n",
"1. Clone this repository to your local machine.\n",
"1. If you are attending an instructor-led workshop, or have deployed the workshop infrastructure using the provided [CloudFormation Template](https://raw.githubusercontent.com/aws-samples/prompt-engineering-with-anthropic-claude-v-3/main/cloudformation/workshop-v1-final-cfn.yml), you can proceed to step 2; otherwise, download the workshop [GitHub Repository](https://github.com/aws-samples/prompt-engineering-with-anthropic-claude-v-3) to your local machine.\n",
"\n",
"2. Install the required dependencies by running the following command:\n",
" "
@@ -31,36 +31,20 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"%pip install -qU pip\n",
"%pip install -qr ../requirements.txt"
"%pip install -qUr requirements.txt --force-reinstall"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"3. Restart the kernel after installing dependencies"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# restart kernel\n",
"from IPython.core.display import HTML\n",
"HTML(\"<script>Jupyter.notebook.kernel.restart()</script>\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"4. Run the notebook cells in order, following the instructions provided."
"3. Run the notebook cells in order, following the instructions provided."
]
},
{
@@ -77,8 +61,8 @@
"\n",
"- When you reach the bottom of a tutorial page, navigate to the next numbered file in the folder, or to the next numbered folder if you're finished with the content within that chapter file.\n",
"\n",
"### The Anthropic SDK & the Messages API\n",
"We will be using the [Anthropic python SDK](https://docs.anthropic.com/claude/reference/client-sdks) and the [Messages API](https://docs.anthropic.com/claude/reference/messages_post) throughout this tutorial. \n",
"### The Boto3 SDK & the Converse API\n",
"We will be using the [Amazon Boto3 SDK](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/bedrock-runtime.html) and the [Converse API](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/bedrock-runtime/client/converse.html) throughout this tutorial. \n",
"\n",
"Below is an example of what running a prompt will look like in this tutorial. First, we create `get_completion`, which is a helper function that sends a prompt to Claude and returns Claude's generated response. Run that cell now."
]
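The request shape the Converse API expects can be sketched as follows. This is a minimal illustration of the payload the notebook builds (the prompt text is a placeholder, and the live `bedrock_client.converse(...)` call, which requires AWS credentials, is shown only in a comment):

```python
# Sketch of a Converse API request payload, assuming the same Haiku
# modelId the notebook stores. Only the payload is built here; the
# network call is left commented out.
model_id = "anthropic.claude-3-haiku-20240307-v1:0"

converse_api_params = {
    "modelId": model_id,
    "messages": [{"role": "user", "content": [{"text": "Hello, Claude"}]}],
    "inferenceConfig": {"temperature": 0.0, "maxTokens": 200},
}

# With a live boto3 client this would be:
#   import boto3
#   bedrock_client = boto3.client("bedrock-runtime", region_name=region)
#   response = bedrock_client.converse(**converse_api_params)

print(sorted(converse_api_params))
```

Note that, unlike the older Messages API body, `messages[].content` is a list of content blocks (`{"text": ...}`) rather than a bare string.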
@@ -97,13 +81,30 @@
"outputs": [],
"source": [
"import boto3\n",
"session = boto3.Session() # create a boto3 session to dynamically get and set the region name\n",
"AWS_REGION = session.region_name\n",
"print(\"AWS Region:\", AWS_REGION)\n",
"MODEL_NAME = \"anthropic.claude-3-haiku-20240307-v1:0\"\n",
"import json\n",
"from datetime import datetime\n",
"from botocore.exceptions import ClientError\n",
"\n",
"%store MODEL_NAME\n",
"%store AWS_REGION"
"session = boto3.Session()\n",
"region = session.region_name"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#modelId = 'anthropic.claude-3-sonnet-20240229-v1:0'\n",
"modelId = 'anthropic.claude-3-haiku-20240307-v1:0'\n",
"\n",
"%store modelId\n",
"%store region\n",
"\n",
"print(f'Using modelId: {modelId}')\n",
"print('Using region: ', region)\n",
"\n",
"bedrock_client = boto3.client(service_name='bedrock-runtime', region_name=region)"
]
},
{
@@ -116,29 +117,46 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"import boto3\n",
"import json\n",
"def get_completion(prompt, system_prompt=None):\n",
" # Define the inference configuration\n",
" inference_config = {\n",
" \"temperature\": 0.0, # Temperature 0.0 yields deterministic responses\n",
" \"maxTokens\": 200 # Set the maximum number of tokens to generate\n",
" }\n",
" # Define additional model fields\n",
" additional_model_fields = {\n",
" \"top_p\": 1, # Set the top_p value for nucleus sampling\n",
" }\n",
" # Create the converse method parameters\n",
" converse_api_params = {\n",
" \"modelId\": modelId, # Specify the model ID to use\n",
" \"messages\": [{\"role\": \"user\", \"content\": [{\"text\": prompt}]}], # Provide the user's prompt\n",
" \"inferenceConfig\": inference_config, # Pass the inference configuration\n",
" \"additionalModelRequestFields\": additional_model_fields # Pass additional model fields\n",
" }\n",
" # Check if a system prompt is provided\n",
" if system_prompt:\n",
" # If so, add the system parameter to converse_api_params\n",
" converse_api_params[\"system\"] = [{\"text\": system_prompt}]\n",
"\n",
" # Send a request to the Bedrock client to generate a response\n",
" try:\n",
" response = bedrock_client.converse(**converse_api_params)\n",
"\n",
"bedrock = boto3.client('bedrock-runtime',region_name=AWS_REGION)\n",
"\n",
"def get_completion(prompt):\n",
" body = json.dumps(\n",
" {\n",
" \"anthropic_version\": '',\n",
" \"max_tokens\": 2000,\n",
" \"messages\": [{\"role\": \"user\", \"content\": prompt}],\n",
" \"temperature\": 0.0,\n",
" \"top_p\": 1,\n",
" \"system\": ''\n",
" }\n",
" )\n",
" response = bedrock.invoke_model(body=body, modelId=MODEL_NAME)\n",
" response_body = json.loads(response.get('body').read())\n",
"\n",
" return response_body.get('content')[0].get('text')"
" # Extract the generated text content from the response\n",
" text_content = response['output']['message']['content'][0]['text']\n",
"\n",
" # Return the generated text content\n",
" return text_content\n",
"\n",
" except ClientError as err:\n",
" message = err.response['Error']['Message']\n",
" print(f\"A client error occurred: {message}\")"
]
},
{
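The new helper digs the generated text out of a nested response structure. The sketch below shows that extraction path against an illustrative response dict (the field values are made up for the example; only the keys mirror what `get_completion` reads):

```python
# Illustrative shape of a Converse API response, limited to the fields
# the get_completion helper above actually touches. Values are invented
# for the sketch.
sample_response = {
    "output": {
        "message": {
            "role": "assistant",
            "content": [{"text": "The ocean appears blue because ..."}],
        }
    },
    "stopReason": "end_turn",
    "usage": {"inputTokens": 12, "outputTokens": 42},
}

# Same extraction path the helper uses:
text_content = sample_response["output"]["message"]["content"][0]["text"]
print(text_content)
```

Compare this with the old `invoke_model` path, which required `json.loads(response.get('body').read())` before indexing into `content`; `converse` returns an already-parsed dict.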
@@ -153,7 +171,9 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# Prompt\n",
@@ -167,15 +187,15 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"The `MODEL_NAME` and `AWS_REGION` variables defined earlier will be used throughout the tutorial. Just make sure to run the cells for each tutorial page from top to bottom."
"The `modelId` and `region` variables defined earlier will be used throughout the tutorial. Just make sure to run the cells for each tutorial page from top to bottom."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "py310",
"display_name": "conda_tensorflow2_p310",
"language": "python",
"name": "python3"
"name": "conda_tensorflow2_p310"
},
"language_info": {
"codemirror_mode": {
@@ -187,9 +207,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.0"
"version": "3.10.14"
}
},
"nbformat": 4,
"nbformat_minor": 2
"nbformat_minor": 4
}
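Putting the pieces of the diff together, the reworked helper can be exercised end to end against a stubbed client, so the control flow runs without AWS credentials. The stub class and its echo behavior are inventions for this sketch; only the parameter-building and response-parsing logic mirror the notebook's `get_completion`:

```python
# End-to-end sketch of the get_completion flow from the diff, run
# against a stub standing in for the bedrock-runtime client. The stub
# mimics only the response fields the helper reads.

class StubBedrockClient:
    def converse(self, **params):
        # Echo the user prompt back so the round trip is visible.
        prompt = params["messages"][0]["content"][0]["text"]
        return {"output": {"message": {"content": [{"text": f"echo: {prompt}"}]}}}

def get_completion(client, prompt, system_prompt=None):
    converse_api_params = {
        "modelId": "anthropic.claude-3-haiku-20240307-v1:0",
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"temperature": 0.0, "maxTokens": 200},
    }
    if system_prompt:
        # System prompts are a top-level list of text blocks in Converse.
        converse_api_params["system"] = [{"text": system_prompt}]
    response = client.converse(**converse_api_params)
    return response["output"]["message"]["content"][0]["text"]

result = get_completion(StubBedrockClient(), "Hi there")
print(result)  # → echo: Hi there
```

Swapping the stub for a real `boto3.client('bedrock-runtime', region_name=region)` gives the behavior of the notebook's helper, minus its `ClientError` handling.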