Anthropic Claude Messages API
This section provides inference parameters and code examples for using the Anthropic Claude Messages API.
Anthropic Claude Messages API overview
You can use the Messages API to create chat bots or virtual assistant applications. The API manages the conversational exchanges between a user and an Anthropic Claude model (assistant).
Tip
This topic describes how to use the Anthropic Claude Messages API with the base inference operations (InvokeModel or InvokeModelWithResponseStream). However, we recommend that you use the Converse API to implement messages in your application. The Converse API provides a unified set of parameters that works across all models that support messages. For more information, see Carry out a conversation with the Converse API operations.
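As a rough illustration, a minimal Converse API call with boto3 might look like the following sketch (the model ID and prompt text are placeholders; see the Converse API documentation for the full parameter set):

import boto3

# Minimal sketch of the Converse API with the boto3 "bedrock-runtime" client.
# The model ID and prompt text below are placeholders.
bedrock_runtime = boto3.client("bedrock-runtime")

response = bedrock_runtime.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    messages=[{"role": "user", "content": [{"text": "Hello, Claude"}]}],
    inferenceConfig={"maxTokens": 512},
)

# The generated text is returned in the output message's content blocks.
print(response["output"]["message"]["content"][0]["text"])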
Anthropic trains Claude models to operate on alternating user and assistant conversational turns. When you create a new message, you specify the prior conversational turns with the messages parameter. The model then generates the next message in the conversation.
Each input message must be an object with a role and content. You can specify a single user-role message, or you can include multiple user and assistant messages. The first message must always use the user role.
If you are using the technique of prefilling the response from Claude (filling in the beginning of Claude's response with a final assistant-role message), Claude responds by picking up from where you left off. With this technique, Claude still returns a response with the assistant role.
If the final message uses the assistant role, the response content continues immediately from the content of that message. You can use this to constrain part of the model's response.
Example with a single user message:
[{"role": "user", "content": "Hello, Claude"}]
Example with multiple conversational turns:
[ {"role": "user", "content": "Hello there."}, {"role": "assistant", "content": "Hi, I'm Claude. How can I help you?"}, {"role": "user", "content": "Can you explain LLMs in plain English?"}, ]
Example with a partially-filled response from Claude:
[ {"role": "user", "content": "Please describe yourself using only JSON"}, {"role": "assistant", "content": "Here is my JSON description:\n{"}, ]
Each input message content may be either a single string or an array of content blocks, where each block has a specific type. Using a string is shorthand for an array of one content block of type "text". The following input messages are equivalent:
{"role": "user", "content": "Hello, Claude"}
{"role": "user", "content": [{"type": "text", "text": "Hello, Claude"}]}
For information about creating prompts for Anthropic Claude models, see Introduction to prompting in the Anthropic Claude documentation.
System prompt
You can also include a system prompt in the request. A system prompt lets you provide context and instructions to Anthropic Claude, such as specifying a particular goal or role. Specify the system prompt in the system field, as shown in the following example.
"system": "You are Claude, an AI assistant created by Anthropic to be helpful, harmless, and honest. Your goal is to provide informative and substantive responses to queries while avoiding potential harms."
For more information, see System prompts in the Anthropic documentation.
Multimodal prompts
A multimodal prompt combines multiple modalities (images and text) in a single prompt. You specify the modalities in the content input field. The following example shows how you could ask Anthropic Claude to describe the content of a supplied image. For example code, see Multimodal code examples.
{ "anthropic_version": "bedrock-2023-05-31", "max_tokens": 1024, "messages": [ { "role": "user", "content": [ { "type": "image", "source": { "type": "base64", "media_type": "image/jpeg", "data": "iVBORw..." } }, { "type": "text", "text": "What's in these images?" } ] } ] }
Note
The following restrictions pertain to the content field:

- You can include up to 20 images. Each image's size, height, and width must be no more than 3.75 MB, 8,000 px, and 8,000 px, respectively.
- You can include up to five documents. Each document's size must be no more than 4.5 MB.
- You can only include images and documents if the role is user.
Each image you include in a request counts towards your token usage. For more information, see Image costs in the Anthropic documentation.
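Anthropic's image-cost guidance describes a rough rule of thumb of approximately (width x height) / 750 tokens per image; the helper below is only an illustrative sketch of that approximation, not an exact accounting.

def approximate_image_tokens(width_px: int, height_px: int) -> int:
    # Rough estimate based on Anthropic's published guidance:
    # tokens ~= (width * height) / 750. Treat the result as an approximation.
    return round((width_px * height_px) / 750)

# Example: a 1092 x 1092 pixel image is roughly 1,590 tokens.
print(approximate_image_tokens(1092, 1092))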
Tool use (function calling)
With Anthropic Claude 3 models, you can specify a tool that the model can use to answer a message. For example, you could specify a tool that gets the most popular song played on a radio station. If a user passes the message What is the most popular song played on WZPZ?, the model determines that the tool you specified can help answer the question. In its response, the model requests that you run the tool on its behalf. You then run the tool and pass the tool result back to the model, which then generates a response for the original message. For more information, see Tool use (function calling) in the Anthropic documentation.
Tip
We recommend that you use the Converse API for integrating tool use into your application. For more information, see Use a tool to complete an Amazon Bedrock model response.
You specify the tools that you want to make available to the model in the tools field. The following example is for a tool that gets the most popular song played on a radio station.
[ { "name": "top_song", "description": "Get the most popular song played on a radio station.", "input_schema": { "type": "object", "properties": { "sign": { "type": "string", "description": "The call sign for the radio station for which you want the most popular song. Example calls signs are WZPZ and WKRP." } }, "required": [ "sign" ] } } ]
When the model needs a tool to generate a response to a message, it returns information about the requested tool, and the input to the tool, in the message content field. It also sets the stop reason for the response to tool_use.
{ "id": "msg_bdrk_01USsY5m3XRUF4FCppHP8KBx", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "stop_sequence": null, "usage": { "input_tokens": 375, "output_tokens": 36 }, "content": [ { "type": "tool_use", "id": "toolu_bdrk_01SnXQc6YVWD8Dom5jz7KhHy", "name": "top_song", "input": { "sign": "WZPZ" } } ], "stop_reason": "tool_use" }
In your code, you call the tool on the model's behalf. You then pass the tool result (tool_result) to the model in a user message.
{ "role": "user", "content": [ { "type": "tool_result", "tool_use_id": "toolu_bdrk_01SnXQc6YVWD8Dom5jz7KhHy", "content": "Elemental Hotel" } ] }
In its response, the model uses the tool result to generate a response for the original message.
{ "id": "msg_bdrk_012AaqvTiKuUSc6WadhUkDLP", "type": "message", "role": "assistant", "model": "claude-3-sonnet-20240229", "content": [ { "type": "text", "text": "According to the tool, the most popular song played on radio station WZPZ is \"Elemental Hotel\"." } ], "stop_reason": "end_turn" }
Supported models
You can use the Messages API with the following Anthropic Claude models. (A sketch for looking up the corresponding model IDs follows the list.)
Anthropic Claude Instant v1.2
Anthropic Claude 2 v2
Anthropic Claude 2 v2.1
Anthropic Claude 3 Sonnet
Anthropic Claude 3.5 Sonnet
Anthropic Claude 3 Haiku
Anthropic Claude 3 Opus
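The model names above correspond to model IDs that you pass in the modelId parameter (for example, 'anthropic.claude-3-sonnet-20240229-v1:0' in the code examples below). One way to discover the Anthropic model IDs available in your Region is a sketch like the following, which uses the Amazon Bedrock control-plane client:

import boto3

# Sketch: list the Anthropic model IDs available in your Region.
# Uses the "bedrock" control-plane client, not "bedrock-runtime".
bedrock = boto3.client("bedrock")
for model in bedrock.list_foundation_models(byProvider="Anthropic")["modelSummaries"]:
    print(model["modelId"])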
Request and Response
The request body is passed in the body field of a request to InvokeModel or InvokeModelWithResponseStream. The maximum size of the payload that you can send in a request is 20 MB.
For more information, see https://docs.anthropic.com/claude/reference/messages_post.
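As a rough sketch, a request body combines the required anthropic_version and max_tokens fields with messages, the optional system and tools fields described above, and optional sampling parameters. The optional parameters below (temperature, top_p, top_k, stop_sequences) are listed as an assumption; consult the reference linked above for the authoritative field list and defaults.

import json

# Illustrative request body only. Optional parameter support and default values
# can vary by model version; check the Anthropic Messages API reference.
body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",   # required
    "max_tokens": 1024,                          # required
    "system": "You are a concise assistant.",    # optional system prompt
    "messages": [
        {"role": "user", "content": "Hello, Claude"}
    ],
    "temperature": 0.5,                          # optional sampling parameters
    "top_p": 0.9,
    "top_k": 250,
    "stop_sequences": ["\n\nHuman:"]
})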
Code examples
The following code examples show how to use the Messages API.
Messages code example
This example shows how to send a single-turn user message, and a user turn together with a prefilled assistant message, to an Anthropic Claude 3 Sonnet model.
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to generate a message with Anthropic Claude (on demand).
"""

import boto3
import json
import logging
from botocore.exceptions import ClientError

logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def generate_message(bedrock_runtime, model_id, system_prompt, messages, max_tokens):
    """Generates a message with the supplied system prompt and conversation turns."""

    body = json.dumps(
        {
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": max_tokens,
            "system": system_prompt,
            "messages": messages
        }
    )

    response = bedrock_runtime.invoke_model(body=body, modelId=model_id)
    response_body = json.loads(response.get('body').read())

    return response_body


def main():
    """
    Entrypoint for Anthropic Claude message example.
    """

    try:
        bedrock_runtime = boto3.client(service_name='bedrock-runtime')

        model_id = 'anthropic.claude-3-sonnet-20240229-v1:0'
        system_prompt = "Please respond only with emoji."
        max_tokens = 1000

        # Prompt with user turn only.
        user_message = {"role": "user", "content": "Hello World"}
        messages = [user_message]

        response = generate_message(
            bedrock_runtime, model_id, system_prompt, messages, max_tokens)
        print("User turn only.")
        print(json.dumps(response, indent=4))

        # Prompt with both user turn and prefilled assistant response.
        # Anthropic Claude continues by using the prefilled assistant text.
        assistant_message = {"role": "assistant", "content": "<emoji>"}
        messages = [user_message, assistant_message]
        response = generate_message(
            bedrock_runtime, model_id, system_prompt, messages, max_tokens)
        print("User turn and prefilled assistant response.")
        print(json.dumps(response, indent=4))

    except ClientError as err:
        message = err.response["Error"]["Message"]
        logger.error("A client error occurred: %s", message)
        print("A client error occurred: " + format(message))


if __name__ == "__main__":
    main()
Multimodal code examples
The following examples show how to pass an image and prompt text in a multimodal message to an Anthropic Claude 3 Sonnet model.
Multimodal prompt with InvokeModel
The following example shows how to send a multimodal prompt to Anthropic Claude 3 Sonnet with InvokeModel.
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to run a multimodal prompt with Anthropic Claude (on demand) and InvokeModel.
"""

import json
import logging
import base64
import boto3
from botocore.exceptions import ClientError

logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def run_multi_modal_prompt(bedrock_runtime, model_id, messages, max_tokens):
    """
    Invokes a model with a multimodal prompt.
    Args:
        bedrock_runtime: The Amazon Bedrock boto3 client.
        model_id (str): The model ID to use.
        messages (JSON): The messages to send to the model.
        max_tokens (int): The maximum number of tokens to generate.
    Returns:
        The response from the model.
    """

    body = json.dumps(
        {
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": max_tokens,
            "messages": messages
        }
    )

    response = bedrock_runtime.invoke_model(
        body=body, modelId=model_id)
    response_body = json.loads(response.get('body').read())

    return response_body


def main():
    """
    Entrypoint for Anthropic Claude multimodal prompt example.
    """

    try:
        bedrock_runtime = boto3.client(service_name='bedrock-runtime')

        model_id = 'anthropic.claude-3-sonnet-20240229-v1:0'
        max_tokens = 1000
        input_image = "/path/to/image"
        input_text = "What's in this image?"

        # Read reference image from file and encode as base64 strings.
        with open(input_image, "rb") as image_file:
            content_image = base64.b64encode(image_file.read()).decode('utf8')

        message = {
            "role": "user",
            "content": [
                {"type": "image", "source": {"type": "base64",
                    "media_type": "image/jpeg", "data": content_image}},
                {"type": "text", "text": input_text}
            ]
        }

        messages = [message]

        response = run_multi_modal_prompt(
            bedrock_runtime, model_id, messages, max_tokens)
        print(json.dumps(response, indent=4))

    except ClientError as err:
        message = err.response["Error"]["Message"]
        logger.error("A client error occurred: %s", message)
        print("A client error occurred: " + format(message))


if __name__ == "__main__":
    main()
Streaming multimodal prompt with InvokeModelWithResponseStream
The following example shows how to stream the response from a multimodal prompt sent to Anthropic Claude 3 Sonnet with InvokeModelWithResponseStream.
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to stream the response from Anthropic Claude Sonnet (on demand) for a multimodal request.
"""

import json
import base64
import logging
import boto3
from botocore.exceptions import ClientError

logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def stream_multi_modal_prompt(bedrock_runtime, model_id, input_text, image, max_tokens):
    """
    Streams the response from a multimodal prompt.
    Args:
        bedrock_runtime: The Amazon Bedrock boto3 client.
        model_id (str): The model ID to use.
        input_text (str): The prompt text.
        image (str): The path to an image that you want in the prompt.
        max_tokens (int): The maximum number of tokens to generate.
    Returns:
        None.
    """

    with open(image, "rb") as image_file:
        encoded_string = base64.b64encode(image_file.read())

    body = json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": input_text},
                    {"type": "image", "source": {"type": "base64",
                        "media_type": "image/jpeg", "data": encoded_string.decode('utf-8')}}
                ]
            }
        ]
    })

    response = bedrock_runtime.invoke_model_with_response_stream(
        body=body, modelId=model_id)

    for event in response.get("body"):
        chunk = json.loads(event["chunk"]["bytes"])

        if chunk['type'] == 'message_delta':
            print(f"\nStop reason: {chunk['delta']['stop_reason']}")
            print(f"Stop sequence: {chunk['delta']['stop_sequence']}")
            print(f"Output tokens: {chunk['usage']['output_tokens']}")

        if chunk['type'] == 'content_block_delta':
            if chunk['delta']['type'] == 'text_delta':
                print(chunk['delta']['text'], end="")


def main():
    """
    Entrypoint for Anthropic Claude Sonnet multimodal prompt example.
    """

    model_id = "anthropic.claude-3-sonnet-20240229-v1:0"
    input_text = "What can you tell me about this image?"
    image = "/path/to/image"
    max_tokens = 100

    try:
        bedrock_runtime = boto3.client('bedrock-runtime')

        stream_multi_modal_prompt(
            bedrock_runtime, model_id, input_text, image, max_tokens)

    except ClientError as err:
        message = err.response["Error"]["Message"]
        logger.error("A client error occurred: %s", message)
        print("A client error occurred: " + format(message))


if __name__ == "__main__":
    main()