You can use the Amazon Bedrock API to give a model access to tools that can help it generate
responses for messages that you send to the model. For example, you might have a chat
application that lets users find out the most popular song played on a radio station. To
answer a request for the most popular song, a model needs a tool that can query and return
the song information.
Tool use with models is also known as function calling.
In Amazon Bedrock, the model doesn't directly call the tool. Rather, when you send a message to
a model, you also supply a definition for one or more tools that could potentially help the
model generate a response. In this example, you would supply a definition for a tool that
returns the most popular song for a specified radio station. If the model determines that it
needs the tool to generate a response for the message, the model responds with a request for
you to call the tool. It also includes the input parameters (the required radio station) to
pass to the tool.
In your code, you call the tool on the model's behalf. In this scenario, assume the tool
implementation is an API. The tool could just as easily be a database, Lambda function, or
some other software. You decide how you want to implement the tool. You then continue the
conversation with the model by supplying a message with the result from the tool. Finally,
the model generates a response for the original message that includes the tool results that
you sent to the model.
To use tools with a model, you can use the Converse API (Converse or ConverseStream). The example
code in this topic uses the Converse API to show how to use a tool that gets the most
popular song for a radio station. For general information about calling the Converse API,
see Use the Converse API.
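As a minimal sketch of the request shape (the model ID and field values below follow the examples in this topic; the actual call to `converse` requires an AWS account and credentials, so it is shown commented out), a tool-enabled request bundles the messages and the tool configuration together:

```python
# Tool definition and user message, as described later in this topic.
tool_config = {
    "tools": [{
        "toolSpec": {
            "name": "top_song",
            "description": "Get the most popular song played on a radio station.",
            "inputSchema": {
                "json": {
                    "type": "object",
                    "properties": {
                        "sign": {
                            "type": "string",
                            "description": "The radio station call sign."
                        }
                    },
                    "required": ["sign"]
                }
            }
        }
    }]
}

messages = [{
    "role": "user",
    "content": [{"text": "What is the most popular song on WZPZ?"}]
}]

# Build the request arguments once; the same arguments work for both
# converse() and converse_stream().
request = {
    "modelId": "cohere.command-r-v1:0",
    "messages": messages,
    "toolConfig": tool_config,
}

# import boto3
# client = boto3.client("bedrock-runtime")
# response = client.converse(**request)  # requires AWS credentials
```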
It is possible to use tools with the base inference operations (InvokeModel or
InvokeModelWithResponseStream). To find the inference parameters that you pass in the
request body, see the inference parameters for the model that you want to use. We recommend
using the Converse API because it provides a consistent API that works with all Amazon
Bedrock models that support tool use.
Amazon Bedrock supports tool calling with the following models:
- Anthropic Claude 3 models
- Mistral AI Mistral Large and Mistral Small
- Cohere Command R and Command R+
For more information, see Supported models and model features.
The following steps show how to use a tool with the Converse API.
To send the message and tool definition, you use the Converse or ConverseStream (for streaming responses) operations.
The definition of the tool is a JSON schema that you pass in the toolConfig
(ToolConfiguration) request parameter to the Converse
operation. For
information about the schema, see JSON schema. The following is an example
schema for a tool that gets the most popular song played on a radio station.
{
    "tools": [
        {
            "toolSpec": {
                "name": "top_song",
                "description": "Get the most popular song played on a radio station.",
                "inputSchema": {
                    "json": {
                        "type": "object",
                        "properties": {
                            "sign": {
                                "type": "string",
                                "description": "The call sign for the radio station for which you want the most popular song. Example call signs are WZPZ and WKRP."
                            }
                        },
                        "required": [
                            "sign"
                        ]
                    }
                }
            }
        }
    ]
}
In the same request, you also pass a user message in the messages
(Message) request parameter.
[
    {
        "role": "user",
        "content": [
            {
                "text": "What is the most popular song on WZPZ?"
            }
        ]
    }
]
If you are using an Anthropic Claude 3 model, you can force the use of a tool
by specifying the toolChoice
(ToolChoice) field in the toolConfig
request parameter. Forcing
the use of a tool is useful for testing your tool during development. The following
example shows how to force the use of a tool called
top_song.
{"tool" : {"name" : "top_song"}}
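To show where this fragment belongs (a sketch; the tool definitions themselves are unchanged from the earlier example), toolChoice sits alongside the tools list inside the toolConfig request parameter:

```python
# toolChoice sits next to "tools" inside the toolConfig request parameter.
# Forcing a specific tool this way is supported for Anthropic Claude 3 models.
tool_config = {
    "tools": [
        # ... the top_song toolSpec shown earlier ...
    ],
    "toolChoice": {"tool": {"name": "top_song"}}
}
```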
For information about other parameters that
you can pass, see Use the Converse API.
When you invoke the Converse
operation with the message and tool definition, the model uses
the tool definition to determine if the tool is needed to answer the
message. For example, if your chat app user sends
the message What's the most popular song on WZPZ?, the model matches the message with
the schema in the top_song tool definition and determines that the
tool can help generate a response.
When the model decides that it needs a tool to generate a response, the model sets the
stopReason response field to tool_use. The response also identifies the tool (top_song)
that the model wants you to run and the radio station (WZPZ) that it wants you to query
with the tool.
Information about the requested tool is in the message that the model returns in the
output (ConverseOutput) field, specifically in the toolUse (ToolUseBlock) field. You use
the toolUseId field to identify the tool request in later calls.
The following example shows the response from Converse
when you pass the message discussed in Step 1: Send the message and tool definition.
{
    "output": {
        "message": {
            "role": "assistant",
            "content": [
                {
                    "toolUse": {
                        "toolUseId": "tooluse_hbTgdi0CSLq_hM4P8csZJA",
                        "name": "top_song",
                        "input": {
                            "sign": "WZPZ"
                        }
                    }
                }
            ]
        }
    },
    "stopReason": "tool_use"
}
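One way to pull the tool request out of the response (a sketch that reuses the example response above as a plain Python dictionary) is to check stopReason and then scan the content blocks for toolUse:

```python
# The example response from Converse, as shown in this topic.
response = {
    "output": {
        "message": {
            "role": "assistant",
            "content": [
                {
                    "toolUse": {
                        "toolUseId": "tooluse_hbTgdi0CSLq_hM4P8csZJA",
                        "name": "top_song",
                        "input": {"sign": "WZPZ"}
                    }
                }
            ]
        }
    },
    "stopReason": "tool_use"
}

tool_requests = []
if response["stopReason"] == "tool_use":
    # A message can contain several content blocks; collect the toolUse ones.
    for block in response["output"]["message"]["content"]:
        if "toolUse" in block:
            tool_requests.append(block["toolUse"])

for tool in tool_requests:
    print(tool["name"], tool["input"])  # prints: top_song {'sign': 'WZPZ'}
```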
From the toolUse
field in the model response, use the
name
field to identify the name of the tool. Then call your
implementation of the tool and pass the input parameters from the input
field.
Next, construct a user message that includes a toolResult
(ToolResultBlock) content block. In the content block, include the
response from the tool and the ID for the tool request that you got in the previous
step.
{
    "role": "user",
    "content": [
        {
            "toolResult": {
                "toolUseId": "tooluse_kZJMlvQmRJ6eAyJE5GIl7Q",
                "content": [
                    {
                        "json": {
                            "song": "Elemental Hotel",
                            "artist": "8 Storey Hike"
                        }
                    }
                ]
            }
        }
    ]
}
Should an error occur in the tool, such as a request for a nonexistent radio station, you
can send error information to the model in the toolResult field. To indicate an error,
specify error in the status field. The following example error is for when the tool can't
find the radio station.
{
    "role": "user",
    "content": [
        {
            "toolResult": {
                "toolUseId": "tooluse_kZJMlvQmRJ6eAyJE5GIl7Q",
                "content": [
                    {
                        "text": "Station WZPA not found."
                    }
                ],
                "status": "error"
            }
        }
    ]
}
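A small helper (an illustrative sketch, not part of the Amazon Bedrock API) can build both the success and the error form of the toolResult message shown above:

```python
def make_tool_result_message(tool_use_id, result=None, error_text=None):
    """Builds a user message containing a toolResult content block.

    Pass either a JSON-serializable result dict, or an error_text string
    to report a tool failure via the status field.
    """
    if error_text is not None:
        tool_result = {
            "toolUseId": tool_use_id,
            "content": [{"text": error_text}],
            "status": "error",
        }
    else:
        tool_result = {
            "toolUseId": tool_use_id,
            "content": [{"json": result}],
        }
    return {"role": "user", "content": [{"toolResult": tool_result}]}


# Success and error variants, matching the JSON examples in this topic.
ok = make_tool_result_message(
    "tooluse_kZJMlvQmRJ6eAyJE5GIl7Q",
    result={"song": "Elemental Hotel", "artist": "8 Storey Hike"})
bad = make_tool_result_message(
    "tooluse_kZJMlvQmRJ6eAyJE5GIl7Q",
    error_text="Station WZPA not found.")
```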
Continue the conversation with the model by including the user message that you created in
the previous step in a call to Converse. The model then generates a response that answers
the original message (What's the most popular song on WZPZ?) with the information that you
provided in the toolResult field of the message.
{
    "output": {
        "message": {
            "role": "assistant",
            "content": [
                {
                    "text": "The most popular song on WZPZ is Elemental Hotel by 8 Storey Hike."
                }
            ]
        }
    },
    "stopReason": "end_turn"
}
The following examples show you how to use a tool with the Converse API. The
tool returns the most popular song on a fictional radio station.
- Converse
This example shows how to use a tool with the Converse operation and the Cohere Command R
model.
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to use tools with the Converse API and the Cohere Command R model.
"""

import logging
import json
import boto3
from botocore.exceptions import ClientError


class StationNotFoundError(Exception):
    """Raised when a radio station isn't found."""
    pass


logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def get_top_song(call_sign):
    """Returns the most popular song for the requested station.

    Args:
        call_sign (str): The call sign for the station for which you want
            the most popular song.

    Returns:
        song (str), artist (str): The most popular song and its artist.
    """
    song = ""
    artist = ""
    if call_sign == 'WZPZ':
        song = "Elemental Hotel"
        artist = "8 Storey Hike"
    else:
        raise StationNotFoundError(f"Station {call_sign} not found.")

    return song, artist


def generate_text(bedrock_client, model_id, tool_config, input_text):
    """Generates text using the supplied Amazon Bedrock model. If necessary,
    the function handles tool use requests and sends the result to the model.

    Args:
        bedrock_client: The Boto3 Bedrock runtime client.
        model_id (str): The Amazon Bedrock model ID.
        tool_config (dict): The tool configuration.
        input_text (str): The input text.

    Returns:
        Nothing.
    """
    logger.info("Generating text with model %s", model_id)

    # Create the initial message from the user input.
    messages = [{
        "role": "user",
        "content": [{"text": input_text}]
    }]

    response = bedrock_client.converse(
        modelId=model_id,
        messages=messages,
        toolConfig=tool_config
    )

    output_message = response['output']['message']
    messages.append(output_message)
    stop_reason = response['stopReason']

    if stop_reason == 'tool_use':
        # Tool use requested. Call the tool and send the result to the model.
        tool_requests = response['output']['message']['content']
        for tool_request in tool_requests:
            if 'toolUse' in tool_request:
                tool = tool_request['toolUse']
                logger.info("Requesting tool %s. Request: %s",
                            tool['name'], tool['toolUseId'])

                if tool['name'] == 'top_song':
                    tool_result = {}
                    try:
                        song, artist = get_top_song(tool['input']['sign'])
                        tool_result = {
                            "toolUseId": tool['toolUseId'],
                            "content": [{"json": {"song": song, "artist": artist}}]
                        }
                    except StationNotFoundError as err:
                        tool_result = {
                            "toolUseId": tool['toolUseId'],
                            "content": [{"text": err.args[0]}],
                            "status": 'error'
                        }

                    tool_result_message = {
                        "role": "user",
                        "content": [
                            {
                                "toolResult": tool_result
                            }
                        ]
                    }
                    messages.append(tool_result_message)

                    # Send the tool result to the model.
                    response = bedrock_client.converse(
                        modelId=model_id,
                        messages=messages,
                        toolConfig=tool_config
                    )
                    output_message = response['output']['message']

    # Print the final response from the model.
    for content in output_message['content']:
        print(json.dumps(content, indent=4))


def main():
    """
    Entrypoint for tool use example.
    """
    logging.basicConfig(level=logging.INFO,
                        format="%(levelname)s: %(message)s")

    model_id = "cohere.command-r-v1:0"
    input_text = "What is the most popular song on WZPZ?"

    tool_config = {
        "tools": [
            {
                "toolSpec": {
                    "name": "top_song",
                    "description": "Get the most popular song played on a radio station.",
                    "inputSchema": {
                        "json": {
                            "type": "object",
                            "properties": {
                                "sign": {
                                    "type": "string",
                                    "description": "The call sign for the radio station for which you want the most popular song. Example call signs are WZPZ and WKRP."
                                }
                            },
                            "required": [
                                "sign"
                            ]
                        }
                    }
                }
            }
        ]
    }

    bedrock_client = boto3.client(service_name='bedrock-runtime')

    try:
        print(f"Question: {input_text}")
        generate_text(bedrock_client, model_id, tool_config, input_text)

    except ClientError as err:
        message = err.response['Error']['Message']
        logger.error("A client error occurred: %s", message)
        print(f"A client error occurred: {message}")

    else:
        print(
            f"Finished generating text with model {model_id}.")


if __name__ == "__main__":
    main()
- ConverseStream
This example shows how to use a tool with the ConverseStream streaming operation and the
Anthropic Claude 3 Haiku model.
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to use a tool with a streaming conversation.
"""

import logging
import json
import boto3
from botocore.exceptions import ClientError

logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


class StationNotFoundError(Exception):
    """Raised when a radio station isn't found."""
    pass


def get_top_song(call_sign):
    """Returns the most popular song for the requested station.

    Args:
        call_sign (str): The call sign for the station for which you want
            the most popular song.

    Returns:
        song (str), artist (str): The most popular song and its artist.
    """
    song = ""
    artist = ""
    if call_sign == 'WZPZ':
        song = "Elemental Hotel"
        artist = "8 Storey Hike"
    else:
        raise StationNotFoundError(f"Station {call_sign} not found.")

    return song, artist


def stream_messages(bedrock_client,
                    model_id,
                    messages,
                    tool_config):
    """
    Sends a message to a model and streams the response.

    Args:
        bedrock_client: The Boto3 Bedrock runtime client.
        model_id (str): The model ID to use.
        messages (JSON): The messages to send to the model.
        tool_config: Tool information to send to the model.

    Returns:
        stop_reason (str): The reason why the model stopped generating text.
        message (JSON): The message that the model generated.
    """
    logger.info("Streaming messages with model %s", model_id)

    response = bedrock_client.converse_stream(
        modelId=model_id,
        messages=messages,
        toolConfig=tool_config
    )

    stop_reason = ""

    message = {}
    content = []
    message['content'] = content
    text = ''
    tool_use = {}

    # Stream the response into a message.
    for chunk in response['stream']:
        if 'messageStart' in chunk:
            message['role'] = chunk['messageStart']['role']
        elif 'contentBlockStart' in chunk:
            tool = chunk['contentBlockStart']['start']['toolUse']
            tool_use['toolUseId'] = tool['toolUseId']
            tool_use['name'] = tool['name']
        elif 'contentBlockDelta' in chunk:
            delta = chunk['contentBlockDelta']['delta']
            if 'toolUse' in delta:
                if 'input' not in tool_use:
                    tool_use['input'] = ''
                tool_use['input'] += delta['toolUse']['input']
            elif 'text' in delta:
                text += delta['text']
                print(delta['text'], end='')
        elif 'contentBlockStop' in chunk:
            if 'input' in tool_use:
                tool_use['input'] = json.loads(tool_use['input'])
                content.append({'toolUse': tool_use})
                tool_use = {}
            else:
                content.append({'text': text})
                text = ''
        elif 'messageStop' in chunk:
            stop_reason = chunk['messageStop']['stopReason']

    return stop_reason, message


def main():
    """
    Entrypoint for streaming tool use example.
    """
    logging.basicConfig(level=logging.INFO,
                        format="%(levelname)s: %(message)s")

    model_id = "anthropic.claude-3-haiku-20240307-v1:0"
    input_text = "What is the most popular song on WZPZ?"

    try:
        bedrock_client = boto3.client(service_name='bedrock-runtime')

        # Create the initial message from the user input.
        messages = [{
            "role": "user",
            "content": [{"text": input_text}]
        }]

        # Define the tool to send to the model.
        tool_config = {
            "tools": [
                {
                    "toolSpec": {
                        "name": "top_song",
                        "description": "Get the most popular song played on a radio station.",
                        "inputSchema": {
                            "json": {
                                "type": "object",
                                "properties": {
                                    "sign": {
                                        "type": "string",
                                        "description": "The call sign for the radio station for which you want the most popular song. Example call signs are WZPZ and WKRP."
                                    }
                                },
                                "required": ["sign"]
                            }
                        }
                    }
                }
            ]
        }

        # Send the message and get the tool use request from the response.
        stop_reason, message = stream_messages(
            bedrock_client, model_id, messages, tool_config)
        messages.append(message)

        if stop_reason == "tool_use":
            for content in message['content']:
                if 'toolUse' in content:
                    tool = content['toolUse']

                    if tool['name'] == 'top_song':
                        tool_result = {}
                        try:
                            song, artist = get_top_song(tool['input']['sign'])
                            tool_result = {
                                "toolUseId": tool['toolUseId'],
                                "content": [{"json": {"song": song, "artist": artist}}]
                            }
                        except StationNotFoundError as err:
                            tool_result = {
                                "toolUseId": tool['toolUseId'],
                                "content": [{"text": err.args[0]}],
                                "status": 'error'
                            }

                        tool_result_message = {
                            "role": "user",
                            "content": [
                                {
                                    "toolResult": tool_result
                                }
                            ]
                        }
                        # Add the tool result to the conversation.
                        messages.append(tool_result_message)

                        # Send the messages, including the tool result, to the model.
                        stop_reason, message = stream_messages(
                            bedrock_client, model_id, messages, tool_config)

    except ClientError as err:
        message = err.response['Error']['Message']
        logger.error("A client error occurred: %s", message)
        print("A client error occurred: " + format(message))

    else:
        print(
            f"\nFinished streaming messages with model {model_id}.")


if __name__ == "__main__":
    main()