Anthropic Claude Messages API
This section provides inference parameters and code examples for using the Anthropic Claude Messages API.
Anthropic Claude Messages API overview
You can use the Messages API to create chat bots or virtual assistant applications. The API manages the conversational exchanges between a user and an Anthropic Claude model (assistant).
Anthropic trains Claude models to operate on alternating user and assistant conversational turns. When creating a new message, you specify the prior conversational turns with the messages parameter. The model then generates the next message in the conversation.
Each input message must be an object with a role and content. You can specify a single user-role message, or you can include multiple user and assistant messages. The first message must always use the user role.
If you are using the technique of prefilling Claude's response (filling in the beginning of Claude's response by using a final assistant-role message), Claude responds by picking up from where you left off. With this technique, Claude still returns a response with the assistant role.
If the final message uses the assistant role, the response content continues immediately from the content in that message. You can use this to constrain part of the model's response.
Example with a single user message:
[{"role": "user", "content": "Hello, Claude"}]
Example with multiple conversational turns:
[
  {"role": "user", "content": "Hello there."},
  {"role": "assistant", "content": "Hi, I'm Claude. How can I help you?"},
  {"role": "user", "content": "Can you explain LLMs in plain English?"}
]
Example with a partially-filled response from Claude:
[
  {"role": "user", "content": "Please describe yourself using only JSON"},
  {"role": "assistant", "content": "Here is my JSON description:\n{"}
]
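Because the model continues from the prefilled assistant text, the prefill and the returned continuation concatenate into one document. The following sketch reassembles the full JSON client-side; the continuation string is a hypothetical model output, not a real response.

```python
import json

# The prefilled assistant text from the request above.
prefill = "Here is my JSON description:\n{"

# Hypothetical continuation returned by the model; the response
# picks up exactly where the prefill left off.
continuation = '"name": "Claude", "role": "assistant"}'

# The opening brace came from the prefill, so join the two pieces
# before parsing the JSON object.
full_text = prefill + continuation
description = json.loads(full_text[full_text.index("{"):])
print(description["name"])
```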
The content of each input message can be a single string or an array of content blocks, where each block has a specific type. Using a string is shorthand for an array of one content block of type "text". The following input messages are equivalent:
{"role": "user", "content": "Hello, Claude"}
{"role": "user", "content": [{"type": "text", "text": "Hello, Claude"}]}
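The equivalence can be made concrete with a small helper that expands the string shorthand into the block form. This helper is a hypothetical client-side utility, not part of the API.

```python
def normalize_content(message):
    """Expand the string shorthand into a list with one text block."""
    content = message["content"]
    if isinstance(content, str):
        return {**message, "content": [{"type": "text", "text": content}]}
    return message


short_form = {"role": "user", "content": "Hello, Claude"}
long_form = {"role": "user",
             "content": [{"type": "text", "text": "Hello, Claude"}]}

print(normalize_content(short_form) == long_form)  # prints True
```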
For information about creating prompts for Anthropic Claude models, see Introduction to prompting in the Anthropic documentation.
System prompts
You can also include a system prompt in the request. A system prompt lets you provide context and instructions to Anthropic Claude, such as specifying a particular goal or role. Specify the system prompt in the system field, as shown in the following example.
"system": "You are Claude, an AI assistant created by Anthropic to be helpful, harmless, and honest. Your goal is to provide informative and substantive responses to queries while avoiding potential harms."
For more information, see System prompts in the Anthropic documentation.
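Putting the pieces together, a request body with a system prompt might be assembled as follows. This is a sketch: the token limit and the user message are illustrative values, not requirements.

```python
import json

system_prompt = ("You are Claude, an AI assistant created by Anthropic "
                 "to be helpful, harmless, and honest.")

# The system prompt goes in the top-level "system" field,
# not in the "messages" array.
body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 1000,
    "system": system_prompt,
    "messages": [
        {"role": "user",
         "content": "Summarize the water cycle in two sentences."}
    ]
})
```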
Multimodal prompts
A multimodal prompt combines multiple modalities (images and text) in a single prompt. You specify the modalities in the content input field. The following example shows how you could ask Anthropic Claude to describe the content of a supplied image. For example code, see Multimodal code examples.
{
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 1024,
    "messages": [
        {
            "role": "user",
            "content": [
                {
                    "type": "image",
                    "source": {
                        "type": "base64",
                        "media_type": "image/jpeg",
                        "data": "iVBORw..."
                    }
                },
                {
                    "type": "text",
                    "text": "What's in these images?"
                }
            ]
        }
    ]
}
You can supply up to 20 images to the model. You can't put an image in the assistant role.
Each image you include in a request counts toward your token usage. For more information, see Image costs in the Anthropic documentation.
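The two constraints above (at most 20 images, none in the assistant role) can be enforced client-side before sending the request. This validation helper is a hypothetical sketch, not part of the API.

```python
def validate_images(messages, max_images=20):
    """Raise ValueError if the messages break the documented image limits."""
    image_count = 0
    for message in messages:
        content = message["content"]
        if not isinstance(content, list):
            continue  # the string shorthand can't contain images
        for block in content:
            if block.get("type") == "image":
                if message["role"] == "assistant":
                    raise ValueError("Images can't appear in assistant messages.")
                image_count += 1
    if image_count > max_images:
        raise ValueError(f"Too many images: {image_count} > {max_images}")
    return image_count


messages = [{"role": "user", "content": [
    {"type": "image", "source": {"type": "base64",
                                 "media_type": "image/jpeg",
                                 "data": "iVBORw..."}},
    {"type": "text", "text": "What's in this image?"}
]}]
print(validate_images(messages))  # prints 1
```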
Supported models
You can use the Messages API with the following Anthropic Claude models:
Anthropic Claude Instant v1.2
Anthropic Claude 2 v2
Anthropic Claude 2.1
Anthropic Claude 3 Sonnet
Anthropic Claude 3 Haiku
Anthropic Claude 3 Opus
Request and response
The request body is passed in the body field of a request to InvokeModel or InvokeModelWithResponseStream. The maximum size of the payload you can send in a request is 20MB.
For more information, see https://docs.anthropic.com/claude/reference/messages_post.
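The 20MB payload limit above can be checked before calling InvokeModel. This guard is a sketch; the helper name is illustrative and the limit value is taken from the text above.

```python
import json

MAX_PAYLOAD_BYTES = 20 * 1024 * 1024  # 20MB limit from the text above


def check_payload_size(body):
    """Return the encoded size of the request body, or raise if too large."""
    size = len(body.encode("utf-8"))
    if size > MAX_PAYLOAD_BYTES:
        raise ValueError(
            f"Request body is {size} bytes; limit is {MAX_PAYLOAD_BYTES}.")
    return size


body = json.dumps({"anthropic_version": "bedrock-2023-05-31",
                   "max_tokens": 256,
                   "messages": [{"role": "user", "content": "Hello, Claude"}]})
print(check_payload_size(body), "bytes")
```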
Code examples
The following code examples show how to use the Messages API.
Messages code example
This example shows how to send a single-turn user message, as well as a user turn with a prefilled assistant message, to the Anthropic Claude 3 Sonnet model.
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to generate a message with Anthropic Claude (on demand).
"""

import boto3
import json
import logging
from botocore.exceptions import ClientError

logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def generate_message(bedrock_runtime, model_id, system_prompt, messages, max_tokens):
    body = json.dumps(
        {
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": max_tokens,
            "system": system_prompt,
            "messages": messages
        }
    )

    response = bedrock_runtime.invoke_model(body=body, modelId=model_id)
    response_body = json.loads(response.get('body').read())

    return response_body


def main():
    """
    Entrypoint for Anthropic Claude message example.
    """
    try:
        bedrock_runtime = boto3.client(service_name='bedrock-runtime')

        model_id = 'anthropic.claude-3-sonnet-20240229-v1:0'
        system_prompt = "Please respond only with emoji."
        max_tokens = 1000

        # Prompt with user turn only.
        user_message = {"role": "user", "content": "Hello World"}
        messages = [user_message]

        response = generate_message(
            bedrock_runtime, model_id, system_prompt, messages, max_tokens)
        print("User turn only.")
        print(json.dumps(response, indent=4))

        # Prompt with both user turn and prefilled assistant response.
        # Anthropic Claude continues by using the prefilled assistant text.
        assistant_message = {"role": "assistant", "content": "<emoji>"}
        messages = [user_message, assistant_message]
        response = generate_message(
            bedrock_runtime, model_id, system_prompt, messages, max_tokens)
        print("User turn and prefilled assistant response.")
        print(json.dumps(response, indent=4))

    except ClientError as err:
        message = err.response["Error"]["Message"]
        logger.error("A client error occurred: %s", message)
        print("A client error occurred: " + format(message))


if __name__ == "__main__":
    main()
Multimodal code examples
The following examples show how to pass an image and prompt text in a multimodal message to the Anthropic Claude 3 Sonnet model.
Multimodal prompt with InvokeModel
The following example shows how to send a multimodal prompt to Anthropic Claude 3 Sonnet with InvokeModel.
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to run a multimodal prompt with Anthropic Claude (on demand) and
InvokeModel.
"""

import json
import logging
import base64
import boto3
from botocore.exceptions import ClientError

logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def run_multi_modal_prompt(bedrock_runtime, model_id, messages, max_tokens):
    """
    Invokes a model with a multimodal prompt.
    Args:
        bedrock_runtime: The Amazon Bedrock boto3 client.
        model_id (str): The model ID to use.
        messages (JSON): The messages to send to the model.
        max_tokens (int): The maximum number of tokens to generate.
    Returns:
        The response from the model.
    """
    body = json.dumps(
        {
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": max_tokens,
            "messages": messages
        }
    )

    response = bedrock_runtime.invoke_model(body=body, modelId=model_id)
    response_body = json.loads(response.get('body').read())

    return response_body


def main():
    """
    Entrypoint for Anthropic Claude multimodal prompt example.
    """
    try:
        bedrock_runtime = boto3.client(service_name='bedrock-runtime')

        model_id = 'anthropic.claude-3-sonnet-20240229-v1:0'
        max_tokens = 1000
        input_image = "/path/to/image"
        input_text = "What's in this image?"

        # Read the reference image from a file and encode it as a base64 string.
        with open(input_image, "rb") as image_file:
            content_image = base64.b64encode(image_file.read()).decode('utf8')

        message = {
            "role": "user",
            "content": [
                {"type": "image",
                 "source": {"type": "base64",
                            "media_type": "image/jpeg",
                            "data": content_image}},
                {"type": "text", "text": input_text}
            ]
        }

        messages = [message]

        response = run_multi_modal_prompt(
            bedrock_runtime, model_id, messages, max_tokens)
        print(json.dumps(response, indent=4))

    except ClientError as err:
        message = err.response["Error"]["Message"]
        logger.error("A client error occurred: %s", message)
        print("A client error occurred: " + format(message))


if __name__ == "__main__":
    main()
Streaming a multimodal prompt with InvokeModelWithResponseStream
The following example shows how to stream the response to a multimodal prompt sent to Anthropic Claude 3 Sonnet with InvokeModelWithResponseStream.
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to stream the response from Anthropic Claude 3 Sonnet (on demand)
for a multimodal request.
"""

import json
import base64
import logging
import boto3
from botocore.exceptions import ClientError

logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def stream_multi_modal_prompt(bedrock_runtime, model_id, input_text, image, max_tokens):
    """
    Streams the response from a multimodal prompt.
    Args:
        bedrock_runtime: The Amazon Bedrock boto3 client.
        model_id (str): The model ID to use.
        input_text (str): The prompt text.
        image (str): The path to an image that you want in the prompt.
        max_tokens (int): The maximum number of tokens to generate.
    Returns:
        None.
    """
    with open(image, "rb") as image_file:
        encoded_string = base64.b64encode(image_file.read())

    body = json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": input_text},
                    {"type": "image",
                     "source": {"type": "base64",
                                "media_type": "image/jpeg",
                                "data": encoded_string.decode('utf-8')}}
                ]
            }
        ]
    })

    response = bedrock_runtime.invoke_model_with_response_stream(
        body=body, modelId=model_id)

    for event in response.get("body"):
        chunk = json.loads(event["chunk"]["bytes"])

        if chunk['type'] == 'message_delta':
            print(f"\nStop reason: {chunk['delta']['stop_reason']}")
            print(f"Stop sequence: {chunk['delta']['stop_sequence']}")
            print(f"Output tokens: {chunk['usage']['output_tokens']}")

        if chunk['type'] == 'content_block_delta':
            if chunk['delta']['type'] == 'text_delta':
                print(chunk['delta']['text'], end="")


def main():
    """
    Entrypoint for Anthropic Claude 3 Sonnet multimodal prompt example.
    """
    model_id = "anthropic.claude-3-sonnet-20240229-v1:0"
    input_text = "What can you tell me about this image?"
    image = "/path/to/image"
    max_tokens = 100

    try:
        bedrock_runtime = boto3.client('bedrock-runtime')

        stream_multi_modal_prompt(
            bedrock_runtime, model_id, input_text, image, max_tokens)

    except ClientError as err:
        message = err.response["Error"]["Message"]
        logger.error("A client error occurred: %s", message)
        print("A client error occurred: " + format(message))


if __name__ == "__main__":
    main()