Anthropic Claude Messages API
This section provides inference parameters and code examples for using the Anthropic Claude Messages API.
Anthropic Claude Messages API overview
You can use the Messages API to create chat bots or virtual assistant applications. The API manages the conversational exchanges between a user and an Anthropic Claude model (assistant).
Tip
This topic shows how to use the Anthropic Claude Messages API with the base inference operations (InvokeModel or InvokeModelWithResponseStream). However, we recommend that you use the Converse API to implement messages in your application. The Converse API provides a unified set of parameters that work across all models that support messages. For more information, see Carry out a conversation with the Converse API operations.
Anthropic trains Claude models to operate on alternating user and assistant conversational turns. When creating a new message, you specify the prior conversational turns with the messages parameter. The model then generates the next Message in the conversation.
Each input message must be an object with a role and content. You can specify a single user-role message, or you can include multiple user and assistant messages. The first message must always use the user role.
If you use the technique of prefilling Claude's response (supplying the beginning of Claude's response in a final assistant-role message), Claude responds by picking up from where that content leaves off, and the response still uses the assistant role. You can use this technique to constrain part of the model's response.
Example with a single user message:
[{"role": "user", "content": "Hello, Claude"}]
Example with multiple conversational turns:
[ {"role": "user", "content": "Hello there."}, {"role": "assistant", "content": "Hi, I'm Claude. How can I help you?"}, {"role": "user", "content": "Can you explain LLMs in plain English?"}, ]
Example with a partially-filled response from Claude:
[ {"role": "user", "content": "Please describe yourself using only JSON"}, {"role": "assistant", "content": "Here is my JSON description:\n{"}, ]
Each input message content may be either a single string or an array of content blocks, where each block has a specific type. Using a string is shorthand for an array of one content block of type "text". The following input messages are equivalent:
{"role": "user", "content": "Hello, Claude"}
{"role": "user", "content": [{"type": "text", "text": "Hello, Claude"}]}
For information about creating prompts for Anthropic Claude models, see Intro to prompting.
System prompts
You can also include a system prompt in the request. A system prompt lets you provide context and instructions to Anthropic Claude, such as specifying a particular goal or role. Specify a system prompt in the system field, as shown in the following example.
"system": "You are Claude, an AI assistant created by Anthropic to be helpful, harmless, and honest. Your goal is to provide informative and substantive responses to queries while avoiding potential harms."
For more information, see System prompts.
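To show where the field sits, the following is a minimal sketch of a complete request that includes a system prompt; the prompt text and token count are illustrative.

import json

import boto3

bedrock_runtime = boto3.client("bedrock-runtime")

body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    # The system prompt is a top-level field, separate from messages.
    "system": "You are Claude, an AI assistant. Answer concisely.",
    "messages": [{"role": "user", "content": "Hello, Claude"}],
})

response = bedrock_runtime.invoke_model(
    body=body,
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
)
print(json.loads(response["body"].read())["content"][0]["text"])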
Multimodal prompts
A multimodal prompt combines multiple modalities (images and text) in a single prompt. You specify the modalities in the content input field. The following example shows how you could ask Anthropic Claude to describe the content of a supplied image. For example code, see Multimodal code examples.
{ "anthropic_version": "bedrock-2023-05-31", "max_tokens": 1024, "messages": [ { "role": "user", "content": [ { "type": "image", "source": { "type": "base64", "media_type": "image/jpeg", "data": "iVBORw..." } }, { "type": "text", "text": "What's in these images?" } ] } ] }
Note
The following restrictions pertain to the content field:
- You can include up to 20 images. Each image's size, height, and width must be no more than 3.75 MB, 8,000 px, and 8,000 px, respectively (see the validation sketch after this note).

- You can include up to five documents. Each document's size must be no more than 4.5 MB.

- You can only include images and documents if the role is user.
Each image you include in a request counts towards your token usage. For more information, see Image costs.
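If you assemble image content blocks dynamically, it can help to check them against the limits above before calling the model. The following is a minimal sketch of such a pre-flight check; the constants mirror the restrictions in the note, and the function name is illustrative. Checking pixel dimensions would additionally require an image library such as Pillow.

import base64

MAX_IMAGES = 20
MAX_IMAGE_BYTES = int(3.75 * 1024 * 1024)  # 3.75 MB per image


def validate_image_blocks(image_blocks):
    """Check image content blocks against the documented size limits."""
    if len(image_blocks) > MAX_IMAGES:
        raise ValueError(f"Too many images: {len(image_blocks)} > {MAX_IMAGES}")
    for block in image_blocks:
        raw_bytes = base64.b64decode(block["source"]["data"])
        if len(raw_bytes) > MAX_IMAGE_BYTES:
            raise ValueError("Image exceeds the 3.75 MB size limit")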
Tool use (function calling)
With Anthropic Claude 3 models, you can specify a tool that the model can use to answer a message. For example, you could specify a tool that gets the most popular song played on a radio station. If the user passes the message What's the most popular song on WZPZ?, the model determines that the tool you specified can help answer the question. In its response, the model requests that you run the tool on its behalf. You then run the tool and pass the tool result to the model, which then generates a response for the original message. For more information, see Tool use (function calling).
Tip
We recommend that you use the Converse API for integrating tool use into your application. For more information, see Use a tool to complete an Amazon Bedrock model response.
You specify the tools that you want to make available to a model in the tools field. The following example is for a tool that gets the most popular song played on a radio station.
[ { "name": "top_song", "description": "Get the most popular song played on a radio station.", "input_schema": { "type": "object", "properties": { "sign": { "type": "string", "description": "The call sign for the radio station for which you want the most popular song. Example calls signs are WZPZ and WKRP." } }, "required": [ "sign" ] } } ]
When the model needs a tool to generate a response to a message, it returns information about the requested tool, and the input to the tool, in the message content field. It also sets the stop reason for the response to tool_use.
{ "id": "msg_bdrk_01USsY5m3XRUF4FCppHP8KBx", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 375, "output_tokens": 36 }, "content": [ { "type": "tool_use", "id": "toolu_bdrk_01SnXQc6YVWD8Dom5jz7KhHy", "name": "top_song", "input": { "sign": "WZPZ" } } ], "stop_reason": "tool_use" }
In your code, you call the tool on the model's behalf. You then pass the tool result (tool_result) in a user message to the model.
{ "role": "user", "content": [ { "type": "tool_result", "tool_use_id": "toolu_bdrk_01SnXQc6YVWD8Dom5jz7KhHy", "content": "Elemental Hotel" } ] }
In its response, the model uses the tool result to generate a response for the original message.
{ "id": "msg_bdrk_012AaqvTiKuUSc6WadhUkDLP", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "content": [ { "type": "text", "text": "According to the tool, the most popular song played on radio station WZPZ is \"Elemental Hotel\"." } ], "stop_reason": "end_turn" }
Computer use (Beta)
Computer use is a new Anthropic Claude model capability (in beta) available with Claude 3.5 Sonnet v2 only. With computer use, Claude can help you automate tasks through basic GUI actions.
Warning
The computer use feature is made available to you as a ‘Beta Service’ as defined in the AWS Service Terms. It is subject to your Agreement with AWS and the AWS Service Terms, and the applicable model EULA. Be aware that the Computer Use API poses unique risks that are distinct from standard API features or chat interfaces. These risks are heightened when using the Computer Use API to interact with the internet. To minimize risks, consider taking precautions such as the following:
- Operate computer use functionality in a dedicated virtual machine or container with minimal privileges to prevent direct system attacks or accidents.

- To prevent information theft, avoid giving the Computer Use API access to sensitive accounts or data.

- Limit the Computer Use API’s internet access to required domains to reduce exposure to malicious content.

- To ensure proper oversight, keep a human in the loop for sensitive tasks (such as making decisions that could have meaningful real-world consequences) and for anything requiring affirmative consent (such as accepting cookies, executing financial transactions, or agreeing to terms of service).
Any content that you enable Claude to see or access can potentially override instructions or cause Claude to make mistakes or perform unintended actions. Taking proper precautions, such as isolating Claude from sensitive surfaces, is essential — including to avoid risks related to prompt injection. Before enabling or requesting permissions necessary to enable computer use features in your own products, please inform end users of any relevant risks, and obtain their consent as appropriate.
The computer use API offers several pre-defined computer use tools (computer_20241022, bash_20241022, and text_editor_20241022) for you to use. You can then create a prompt with your request, such as “send an email to Ben with the notes from my last meeting”, and a screenshot (when required). The response contains a list of tool_use actions in JSON format (for example, scroll_down, left_button_press, screenshot). Your code runs the computer actions and provides Claude with screenshots showing the outputs (when requested).
The tools parameter has been updated to accept polymorphic tool types; a new tool.type property distinguishes them. type is optional; if omitted, the tool is assumed to be a custom tool (previously the only supported tool type). Additionally, a new parameter, anthropic_beta, has been added, with a corresponding enum value: computer-use-2024-10-22. Only requests that include this parameter and enum value can use the new computer use tools. You can specify it as follows: "anthropic_beta": ["computer-use-2024-10-22"].
For more information, see Computer use (beta).
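As a sketch of where these pieces fit in a request body, the following shows the beta flag together with a pre-defined tool. The display_width_px and display_height_px parameters are assumptions based on the Anthropic computer use documentation, so verify the exact tool schema there; all values are illustrative.

import json

# Sketch of a computer use request body (illustrative values).
request_body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 1024,
    # Opt in to the beta; required for the pre-defined tools below.
    "anthropic_beta": ["computer-use-2024-10-22"],
    "tools": [
        {
            # Pre-defined tool: "type" selects it; the assumed display
            # parameters describe the screen that Claude will act on.
            "type": "computer_20241022",
            "name": "computer",
            "display_width_px": 1024,
            "display_height_px": 768,
        }
    ],
    "messages": [
        {"role": "user", "content": "Open Firefox and check the weather."}
    ],
})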
The following is an example response that assumes the request contained a screenshot of your desktop with a Firefox icon.
{ "id": "msg_123", "type": "message", "role": "assistant", "model": "anthropic.claude-3-5-sonnet-20241022-v2:0", "content": [ { "type": "text", "text": "I see the Firefox icon. Let me click on it and then navigate to a weather website." }, { "type": "tool_use", "id": "toolu_123", "name": "computer", "input": { "action": "mouse_move", "coordinate": [ 708, 736 ] } }, { "type": "tool_use", "id": "toolu_234", "name": "computer", "input": { "action": "left_click" } } ], "stop_reason": "tool_use", "stop_sequence": null, "usage": { "input_tokens": 3391, "output_tokens": 132 } }
Supported models
You can use the Messages API with the following Anthropic Claude models.
- Anthropic Claude Instant v1.2

- Anthropic Claude 2 v2

- Anthropic Claude 2 v2.1

- Anthropic Claude 3 Sonnet

- Anthropic Claude 3.5 Sonnet

- Anthropic Claude 3.5 Sonnet v2

- Anthropic Claude 3 Haiku

- Anthropic Claude 3 Opus
Request and Response
The request body is passed in the body field of a request to InvokeModel or InvokeModelWithResponseStream. The maximum size of the payload that you can send in a request is 20 MB. For more information, see https://docs.anthropic.com/claude/reference/messages_post.
Code examples
The following code examples show how to use the Messages API.
Messages code example
This example shows how to send a single-turn user message, and a user turn with a prefilled assistant message, to an Anthropic Claude 3 Sonnet model.
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to generate a message with Anthropic Claude (on demand).
"""

import json
import logging

import boto3
from botocore.exceptions import ClientError

logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def generate_message(bedrock_runtime, model_id, system_prompt, messages, max_tokens):
    body = json.dumps(
        {
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": max_tokens,
            "system": system_prompt,
            "messages": messages
        }
    )

    response = bedrock_runtime.invoke_model(body=body, modelId=model_id)
    response_body = json.loads(response.get('body').read())

    return response_body


def main():
    """
    Entrypoint for Anthropic Claude message example.
    """

    try:
        bedrock_runtime = boto3.client(service_name='bedrock-runtime')

        model_id = 'anthropic.claude-3-sonnet-20240229-v1:0'
        system_prompt = "Please respond only with emoji."
        max_tokens = 1000

        # Prompt with user turn only.
        user_message = {"role": "user", "content": "Hello World"}
        messages = [user_message]

        response = generate_message(
            bedrock_runtime, model_id, system_prompt, messages, max_tokens)
        print("User turn only.")
        print(json.dumps(response, indent=4))

        # Prompt with both user turn and prefilled assistant response.
        # Anthropic Claude continues by using the prefilled assistant text.
        assistant_message = {"role": "assistant", "content": "<emoji>"}
        messages = [user_message, assistant_message]

        response = generate_message(
            bedrock_runtime, model_id, system_prompt, messages, max_tokens)
        print("User turn and prefilled assistant response.")
        print(json.dumps(response, indent=4))

    except ClientError as err:
        message = err.response["Error"]["Message"]
        logger.error("A client error occurred: %s", message)
        print("A client error occurred: " + format(message))


if __name__ == "__main__":
    main()
Multimodal code examples
The following examples show how to pass an image and prompt text in a multimodal message to an Anthropic Claude 3 Sonnet model.
Multimodal prompt with InvokeModel
The following example shows how to send a multimodal prompt to Anthropic Claude 3 Sonnet with InvokeModel.
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to run a multimodal prompt with Anthropic Claude (on demand) and InvokeModel.
"""

import base64
import json
import logging

import boto3
from botocore.exceptions import ClientError

logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def run_multi_modal_prompt(bedrock_runtime, model_id, messages, max_tokens):
    """
    Invokes a model with a multimodal prompt.
    Args:
        bedrock_runtime: The Amazon Bedrock boto3 client.
        model_id (str): The model ID to use.
        messages (JSON): The messages to send to the model.
        max_tokens (int): The maximum number of tokens to generate.
    Returns:
        The response from the model.
    """

    body = json.dumps(
        {
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": max_tokens,
            "messages": messages
        }
    )

    response = bedrock_runtime.invoke_model(body=body, modelId=model_id)
    response_body = json.loads(response.get('body').read())

    return response_body


def main():
    """
    Entrypoint for Anthropic Claude multimodal prompt example.
    """

    try:
        bedrock_runtime = boto3.client(service_name='bedrock-runtime')

        model_id = 'anthropic.claude-3-sonnet-20240229-v1:0'
        max_tokens = 1000
        input_image = "/path/to/image"
        input_text = "What's in this image?"

        # Read reference image from file and encode as a base64 string.
        with open(input_image, "rb") as image_file:
            content_image = base64.b64encode(image_file.read()).decode('utf8')

        message = {
            "role": "user",
            "content": [
                {"type": "image", "source": {"type": "base64",
                    "media_type": "image/jpeg", "data": content_image}},
                {"type": "text", "text": input_text}
            ]
        }

        messages = [message]

        response = run_multi_modal_prompt(
            bedrock_runtime, model_id, messages, max_tokens)
        print(json.dumps(response, indent=4))

    except ClientError as err:
        message = err.response["Error"]["Message"]
        logger.error("A client error occurred: %s", message)
        print("A client error occurred: " + format(message))


if __name__ == "__main__":
    main()
Streaming multimodal prompt with InvokeModelWithResponseStream
The following example shows how to stream the response from a multimodal prompt sent to Anthropic Claude 3 Sonnet with InvokeModelWithResponseStream.
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to stream the response from Anthropic Claude Sonnet (on demand) for a multimodal request.
"""

import base64
import json
import logging

import boto3
from botocore.exceptions import ClientError

logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def stream_multi_modal_prompt(bedrock_runtime, model_id, input_text, image, max_tokens):
    """
    Streams the response from a multimodal prompt.
    Args:
        bedrock_runtime: The Amazon Bedrock boto3 client.
        model_id (str): The model ID to use.
        input_text (str): The prompt text.
        image (str): The path to an image that you want in the prompt.
        max_tokens (int): The maximum number of tokens to generate.
    Returns:
        None.
    """

    with open(image, "rb") as image_file:
        encoded_string = base64.b64encode(image_file.read())

    body = json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": input_text},
                    {"type": "image", "source": {"type": "base64",
                        "media_type": "image/jpeg",
                        "data": encoded_string.decode('utf-8')}}
                ]
            }
        ]
    })

    response = bedrock_runtime.invoke_model_with_response_stream(
        body=body, modelId=model_id)

    for event in response.get("body"):
        chunk = json.loads(event["chunk"]["bytes"])

        if chunk['type'] == 'message_delta':
            print(f"\nStop reason: {chunk['delta']['stop_reason']}")
            print(f"Stop sequence: {chunk['delta']['stop_sequence']}")
            print(f"Output tokens: {chunk['usage']['output_tokens']}")

        if chunk['type'] == 'content_block_delta':
            if chunk['delta']['type'] == 'text_delta':
                print(chunk['delta']['text'], end="")


def main():
    """
    Entrypoint for Anthropic Claude Sonnet multimodal prompt example.
    """

    model_id = "anthropic.claude-3-sonnet-20240229-v1:0"
    input_text = "What can you tell me about this image?"
    image = "/path/to/image"
    max_tokens = 100

    try:
        bedrock_runtime = boto3.client('bedrock-runtime')

        stream_multi_modal_prompt(
            bedrock_runtime, model_id, input_text, image, max_tokens)

    except ClientError as err:
        message = err.response["Error"]["Message"]
        logger.error("A client error occurred: %s", message)
        print("A client error occurred: " + format(message))


if __name__ == "__main__":
    main()