Anthropic Claude Messages API
This section provides inference parameters and code examples for using the Anthropic Claude Messages API.
Anthropic Claude Messages API overview
You can use the Messages API to create chat bots or virtual assistant applications. The API manages the conversational exchanges between a user and an Anthropic Claude model (assistant).
Tip
This topic shows how to use the Anthropic Claude Messages API with the base inference operations (InvokeModel or InvokeModelWithResponseStream). However, we recommend that you use the Converse API to implement messages in your application. The Converse API provides a unified set of parameters that work across all models that support messages. For more information, see Carry out a conversation with the Converse API operations.
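For comparison, a roughly equivalent call through the Converse API looks like the following sketch (assuming a recent boto3 version that includes the converse operation; the prompt text and token limit are illustrative):

import boto3

# Minimal Converse API sketch: one user turn sent to an Anthropic Claude model.
client = boto3.client("bedrock-runtime")
response = client.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    messages=[{"role": "user", "content": [{"text": "Hello, Claude"}]}],
    inferenceConfig={"maxTokens": 512},
)
# The generated text is returned in the output message's content blocks.
print(response["output"]["message"]["content"][0]["text"])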
Anthropic trained Claude models to operate on alternating user and assistant conversational turns. When you create a new message, you specify the prior conversational turns with the messages parameter. The model then generates the next message in the conversation.
Each input message must be an object with a role and content. You can specify a single user-role message, or you can include multiple user and assistant messages. The first message must always use the user role.
If you are using the technique of prefilling the response from Claude (filling in the beginning of Claude's response by using a final assistant-role message), Claude will respond by picking up from where you left off. With this technique, Claude still returns a response with the assistant role.
If the final message uses the assistant role, the response content continues immediately from the content in that message. You can use this to constrain part of the model's response.
Example with a single user message:
[{"role": "user", "content": "Hello, Claude"}]
Example with multiple conversational turns:
[ {"role": "user", "content": "Hello there."}, {"role": "assistant", "content": "Hi, I'm Claude. How can I help you?"}, {"role": "user", "content": "Can you explain LLMs in plain English?"}, ]
Example with a partially prefilled response from Claude:
[ {"role": "user", "content": "Please describe yourself using only JSON"}, {"role": "assistant", "content": "Here is my JSON description:\n{"}, ]
Each input message content can be either a single string or an array of content blocks, where each block has a specific type. Using a string is shorthand for an array of one content block of type "text". The following input messages are equivalent:
{"role": "user", "content": "Hello, Claude"}
{"role": "user", "content": [{"type": "text", "text": "Hello, Claude"}]}
For information about creating prompts for Anthropic Claude models, see Intro to prompting in the Anthropic documentation.
System prompt
You can also include a system prompt in the request. A system prompt lets you provide context and instructions to Anthropic Claude, such as specifying a particular goal or role. Specify a system prompt in the system field, as shown in the following example.
"system": "You are Claude, an AI assistant created by Anthropic to be helpful, harmless, and honest. Your goal is to provide informative and substantive responses to queries while avoiding potential harms."
For more information, see System prompts in the Anthropic documentation.
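In a full request, the system field sits alongside the other Messages API fields in the request body. A minimal sketch in Python (the message text and token limit are illustrative):

import json

# Request body that combines a system prompt with a user message;
# field values other than the field names are illustrative.
body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 1024,
    "system": "You are Claude, an AI assistant created by Anthropic to be helpful, harmless, and honest.",
    "messages": [{"role": "user", "content": "Hello, Claude"}]
})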
Multimodal prompts
A multimodal prompt combines multiple modalities (images and text) in a single prompt. You specify the modalities in the content input field. The following example shows how you could ask Anthropic Claude to describe the content of a supplied image. For example code, see Multimodal code examples.
{
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 1024,
    "messages": [
        {
            "role": "user",
            "content": [
                {
                    "type": "image",
                    "source": {
                        "type": "base64",
                        "media_type": "image/jpeg",
                        "data": "iVBORw..."
                    }
                },
                {
                    "type": "text",
                    "text": "What's in these images?"
                }
            ]
        }
    ]
}
Note
The following restrictions apply to the content field:
- You can include up to 20 images. Each image's size, height, and width must be no more than 3.75 MB, 8,000 px, and 8,000 px, respectively.
- You can include up to five documents. Each document's size must be no more than 4.5 MB.
- You can only include images and documents if the role is user.
Each image you include in a request counts towards your token usage. For more information, see Image costs in the Anthropic documentation.
Tool use (function calling)
With Anthropic Claude 3 models, you can specify a tool that the model can use to answer a message. For example, you can specify a tool that gets the most popular song played on a radio station. If the user passes the message What is the most popular song on WZPZ?, the model determines that the tool you specified can help answer the question. In its response, the model requests that you run the tool on its behalf. You then run the tool and pass the tool result to the model, which then generates a response to the original message. For more information, see Tool use (function calling) in the Anthropic documentation.
Tip
We recommend that you use the Converse API to integrate tool use into your application. For more information, see Use a tool to complete an Amazon Bedrock model response.
You specify the tools that you want to make available to the model in the tools field. The following example is for a tool that gets the most popular song played on a radio station.
[
    {
        "name": "top_song",
        "description": "Get the most popular song played on a radio station.",
        "input_schema": {
            "type": "object",
            "properties": {
                "sign": {
                    "type": "string",
                    "description": "The call sign for the radio station for which you want the most popular song. Example call signs are WZPZ and WKRP."
                }
            },
            "required": [
                "sign"
            ]
        }
    }
]
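The tools list is passed as a top-level field of the request body, alongside messages. A minimal sketch in Python (assuming tools holds the list shown above; the question text is illustrative):

import json

# Request body that makes the top_song tool available to the model.
body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 1024,
    "tools": tools,  # the tool definition list shown above
    "messages": [{"role": "user", "content": "What is the most popular song on WZPZ?"}]
})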
When the model needs a tool to generate a response to a message, it returns information about the requested tool, and the input for the tool, in the message content field. It also sets the stop reason for the response to tool_use.
{
    "id": "msg_bdrk_01USsY5m3XRUF4FCppHP8KBx",
    "type": "message",
    "role": "assistant",
    "model": "claude-3-sonnet-20240229",
    "stop_sequence": null,
    "usage": {
        "input_tokens": 375,
        "output_tokens": 36
    },
    "content": [
        {
            "type": "tool_use",
            "id": "toolu_bdrk_01SnXQc6YVWD8Dom5jz7KhHy",
            "name": "top_song",
            "input": {
                "sign": "WZPZ"
            }
        }
    ],
    "stop_reason": "tool_use"
}
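In application code, you can scan the returned content blocks for tool_use entries, run the matching local function, and collect its output. A minimal sketch (the get_top_song helper is hypothetical and stands in for your own lookup logic):

def get_top_song(sign):
    # Hypothetical helper: replace with your own radio-station lookup.
    return "Elemental Hotel"

# response_body is the parsed JSON response shown above.
tool_results = []
for block in response_body["content"]:
    if block["type"] == "tool_use" and block["name"] == "top_song":
        song = get_top_song(block["input"]["sign"])
        tool_results.append({
            "type": "tool_result",
            "tool_use_id": block["id"],
            "content": song
        })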
In your code, you call the tool on the model's behalf. You then pass the tool result (tool_result) to the model in a user message.
{
    "role": "user",
    "content": [
        {
            "type": "tool_result",
            "tool_use_id": "toolu_bdrk_01SnXQc6YVWD8Dom5jz7KhHy",
            "content": "Elemental Hotel"
        }
    ]
}
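Note that this follow-up request needs to carry the whole conversation so far: the original user message, the assistant message containing the tool_use block, and the new user message with the tool_result. A rough sketch of that messages list in Python (the question text is illustrative; the IDs must match the tool_use block):

messages = [
    {"role": "user", "content": "What is the most popular song on WZPZ?"},
    {"role": "assistant", "content": [
        {"type": "tool_use", "id": "toolu_bdrk_01SnXQc6YVWD8Dom5jz7KhHy",
         "name": "top_song", "input": {"sign": "WZPZ"}}
    ]},
    {"role": "user", "content": [
        {"type": "tool_result", "tool_use_id": "toolu_bdrk_01SnXQc6YVWD8Dom5jz7KhHy",
         "content": "Elemental Hotel"}
    ]}
]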
In its response, the model uses the tool result to generate a response for the original message.
{
    "id": "msg_bdrk_012AaqvTiKuUSc6WadhUkDLP",
    "type": "message",
    "role": "assistant",
    "model": "claude-3-sonnet-20240229",
    "content": [
        {
            "type": "text",
            "text": "According to the tool, the most popular song played on radio station WZPZ is \"Elemental Hotel\"."
        }
    ],
    "stop_reason": "end_turn"
}
Supported models
You can use the Messages API with the following Anthropic Claude models:
- Anthropic Claude Instant v1.2
- Anthropic Claude 2 v2
- Anthropic Claude 2 v2.1
- Anthropic Claude 3 Sonnet
- Anthropic Claude 3.5 Sonnet
- Anthropic Claude 3 Haiku
- Anthropic Claude 3 Opus
Request and response
The request body is passed in the body field of a request to InvokeModel or InvokeModelWithResponseStream. The maximum size of the payload that you can send in a request is 20 MB.
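Putting this together, a minimal invocation with the required anthropic_version, max_tokens, and messages fields looks roughly like the following sketch (full, runnable examples follow in the code examples section; the prompt text is illustrative):

import json
import boto3

# Minimal InvokeModel call with the Anthropic Claude Messages API request body.
client = boto3.client("bedrock-runtime")
body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 1024,
    "messages": [{"role": "user", "content": "Hello, Claude"}]
})
response = client.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    body=body
)
# The response body is a JSON document with the generated content blocks.
response_body = json.loads(response["body"].read())
print(response_body["content"][0]["text"])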
For more information, see https://docs.anthropic.com/claude/reference/.
Code examples
The following code examples show how to use the Messages API.
Messages code example
This example shows how to send a single-turn user message, and a user turn along with a prefilled assistant message, to the Anthropic Claude 3 Sonnet model.
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to generate a message with Anthropic Claude (on demand).
"""

import boto3
import json
import logging
from botocore.exceptions import ClientError

logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def generate_message(bedrock_runtime, model_id, system_prompt, messages, max_tokens):
    """
    Sends a Messages API request to the model and returns the parsed response body.
    """
    body = json.dumps(
        {
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": max_tokens,
            "system": system_prompt,
            "messages": messages
        }
    )

    response = bedrock_runtime.invoke_model(body=body, modelId=model_id)
    response_body = json.loads(response.get('body').read())

    return response_body


def main():
    """
    Entrypoint for Anthropic Claude message example.
    """
    try:
        bedrock_runtime = boto3.client(service_name='bedrock-runtime')

        model_id = 'anthropic.claude-3-sonnet-20240229-v1:0'
        system_prompt = "Please respond only with emoji."
        max_tokens = 1000

        # Prompt with user turn only.
        user_message = {"role": "user", "content": "Hello World"}
        messages = [user_message]

        response = generate_message(
            bedrock_runtime, model_id, system_prompt, messages, max_tokens)
        print("User turn only.")
        print(json.dumps(response, indent=4))

        # Prompt with both user turn and prefilled assistant response.
        # Anthropic Claude continues by using the prefilled assistant text.
        assistant_message = {"role": "assistant", "content": "<emoji>"}
        messages = [user_message, assistant_message]

        response = generate_message(
            bedrock_runtime, model_id, system_prompt, messages, max_tokens)
        print("User turn and prefilled assistant response.")
        print(json.dumps(response, indent=4))

    except ClientError as err:
        message = err.response["Error"]["Message"]
        logger.error("A client error occurred: %s", message)
        print("A client error occurred: " + format(message))


if __name__ == "__main__":
    main()
Multimodal code examples
The following examples show how to pass an image and prompt text in a multimodal message to the Anthropic Claude 3 Sonnet model.
Multimodal prompt with InvokeModel
The following example shows how to send a multimodal prompt to Anthropic Claude 3 Sonnet with InvokeModel.
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to run a multimodal prompt with Anthropic Claude (on demand) and InvokeModel.
"""

import json
import logging
import base64
import boto3
from botocore.exceptions import ClientError

logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def run_multi_modal_prompt(bedrock_runtime, model_id, messages, max_tokens):
    """
    Invokes a model with a multimodal prompt.
    Args:
        bedrock_runtime: The Amazon Bedrock boto3 client.
        model_id (str): The model ID to use.
        messages (JSON): The messages to send to the model.
        max_tokens (int): The maximum number of tokens to generate.
    Returns:
        The parsed response body from the model.
    """
    body = json.dumps(
        {
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": max_tokens,
            "messages": messages
        }
    )

    response = bedrock_runtime.invoke_model(
        body=body, modelId=model_id)
    response_body = json.loads(response.get('body').read())

    return response_body


def main():
    """
    Entrypoint for Anthropic Claude multimodal prompt example.
    """
    try:
        bedrock_runtime = boto3.client(service_name='bedrock-runtime')

        model_id = 'anthropic.claude-3-sonnet-20240229-v1:0'
        max_tokens = 1000
        input_image = "/path/to/image"
        input_text = "What's in this image?"

        # Read reference image from file and encode as a base64 string.
        with open(input_image, "rb") as image_file:
            content_image = base64.b64encode(image_file.read()).decode('utf8')

        message = {
            "role": "user",
            "content": [
                {"type": "image", "source": {"type": "base64",
                    "media_type": "image/jpeg", "data": content_image}},
                {"type": "text", "text": input_text}
            ]
        }

        messages = [message]

        response = run_multi_modal_prompt(
            bedrock_runtime, model_id, messages, max_tokens)
        print(json.dumps(response, indent=4))

    except ClientError as err:
        message = err.response["Error"]["Message"]
        logger.error("A client error occurred: %s", message)
        print("A client error occurred: " + format(message))


if __name__ == "__main__":
    main()
Streaming multimodal prompt with InvokeModelWithResponseStream
The following example shows how to stream the response from a multimodal prompt sent to Anthropic Claude 3 Sonnet with InvokeModelWithResponseStream.
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to stream the response from Anthropic Claude Sonnet (on demand) for a multimodal request.
"""

import json
import base64
import logging
import boto3
from botocore.exceptions import ClientError

logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def stream_multi_modal_prompt(bedrock_runtime, model_id, input_text, image, max_tokens):
    """
    Streams the response from a multimodal prompt.
    Args:
        bedrock_runtime: The Amazon Bedrock boto3 client.
        model_id (str): The model ID to use.
        input_text (str): The prompt text.
        image (str): The path to an image that you want in the prompt.
        max_tokens (int): The maximum number of tokens to generate.
    Returns:
        None.
    """
    # Read the reference image from file and encode it as a base64 string.
    with open(image, "rb") as image_file:
        encoded_string = base64.b64encode(image_file.read())

    body = json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": input_text},
                    {"type": "image", "source": {"type": "base64",
                        "media_type": "image/jpeg",
                        "data": encoded_string.decode('utf-8')}}
                ]
            }
        ]
    })

    response = bedrock_runtime.invoke_model_with_response_stream(
        body=body, modelId=model_id)

    # Process the streamed events as they arrive.
    for event in response.get("body"):
        chunk = json.loads(event["chunk"]["bytes"])

        if chunk['type'] == 'message_delta':
            print(f"\nStop reason: {chunk['delta']['stop_reason']}")
            print(f"Stop sequence: {chunk['delta']['stop_sequence']}")
            print(f"Output tokens: {chunk['usage']['output_tokens']}")

        if chunk['type'] == 'content_block_delta':
            if chunk['delta']['type'] == 'text_delta':
                print(chunk['delta']['text'], end="")


def main():
    """
    Entrypoint for Anthropic Claude Sonnet multimodal prompt example.
    """
    model_id = "anthropic.claude-3-sonnet-20240229-v1:0"
    input_text = "What can you tell me about this image?"
    image = "/path/to/image"
    max_tokens = 100

    try:
        bedrock_runtime = boto3.client('bedrock-runtime')

        stream_multi_modal_prompt(
            bedrock_runtime, model_id, input_text, image, max_tokens)

    except ClientError as err:
        message = err.response["Error"]["Message"]
        logger.error("A client error occurred: %s", message)
        print("A client error occurred: " + format(message))


if __name__ == "__main__":
    main()