There are more AWS SDK examples available in the AWS Doc SDK Examples GitHub repository.


Amazon Bedrock Runtime examples using SDK for Python (Boto3)

The following code examples show you how to perform actions and implement common scenarios by using the AWS SDK for Python (Boto3) with Amazon Bedrock Runtime.

Scenarios are code examples that show you how to accomplish a specific task by calling multiple functions within the same service, or combined with other AWS services.

Each example includes a link to the complete source code, where you can find instructions on how to set up and run the code in context.
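Every example on this page assumes the same prelude: a Bedrock Runtime client created with Boto3. As a minimal sketch, assuming your AWS credentials are already configured (for example, through the AWS CLI) and that the chosen Region offers the model you want to call:

# Shared setup assumed by the examples that follow: a Bedrock Runtime client.
# Assumes configured AWS credentials and a Region where you have model access.
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

Model access is granted per account and Region in the Amazon Bedrock console, so a request can fail even with valid credentials if the model hasn't been enabled.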

Scenarios

The following code example shows how to create playgrounds to interact with Amazon Bedrock foundation models through different modalities.

SDK for Python (Boto3)

The Python Foundation Model (FM) Playground is a Python/FastAPI sample application that showcases how to use Amazon Bedrock with Python. This example shows how Python developers can use Amazon Bedrock to build generative AI-enabled applications. You can test and interact with Amazon Bedrock foundation models by using the following three playgrounds:

  • A text playground.

  • A chat playground.

  • An image playground.

The example also lists and displays the foundation models you have access to, along with their characteristics. For source code and deployment instructions, see the project on GitHub.
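The model listing that the playground performs goes through the Amazon Bedrock control plane rather than the Runtime client. A minimal sketch of that call (not taken from the project itself), assuming default credentials:

# Sketch: list the foundation models available to your account and Region.
# Note the "bedrock" control-plane client, not "bedrock-runtime".
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

for model in bedrock.list_foundation_models()["modelSummaries"]:
    print(model["modelId"], model["inputModalities"], model["outputModalities"])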

Services used in this example
  • Amazon Bedrock Runtime

The following code example shows how to build and orchestrate generative AI applications with Amazon Bedrock and Step Functions.

SDK for Python (Boto3)

The Amazon Bedrock Serverless Prompt Chaining scenario demonstrates how AWS Step Functions, Amazon Bedrock, and Agents for Amazon Bedrock (https://docs.aws.amazon.com/bedrock/latest/userguide/agents.html) can be used to build and orchestrate complex, serverless, and highly scalable generative AI applications. It contains the following working examples:

  • Write an analysis of a given novel for a literature blog. This example illustrates a simple, sequential chain of prompts.

  • Generate a short story about a given topic. This example illustrates how the AI can iteratively process a list of items that it previously generated.

  • Create an itinerary for a weekend vacation to a given destination. This example illustrates parallelizing multiple distinct prompts.

  • Pitch a movie idea to a human user acting as a movie producer. This example illustrates parallelizing the same prompt with different inference parameters, backtracking to a previous step in the chain, and including human input as part of the workflow.

  • Plan a meal based on ingredients the user has at hand. This example illustrates how a prompt chain can incorporate two distinct AI conversations, with two AI personas engaging in a debate with each other to improve the final outcome.

  • Find and summarize today's highest trending GitHub repository. This example illustrates chaining multiple AI agents that interact with external APIs.

For complete source code and instructions to set up and run, see the full project on GitHub. A minimal plain-Python sketch of the sequential-chaining idea follows the service list below.

Services used in this example
  • Amazon Bedrock

  • Amazon Bedrock Runtime

  • Amazon Bedrock Agents

  • Amazon Bedrock Agents Runtime

  • Step Functions
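The project expresses each of these chains as an AWS Step Functions state machine, but the core sequential-chaining idea can be sketched in plain Python: the text returned by one model call is embedded in the prompt of the next. The model ID and prompts below are illustrative placeholders, not code from the project:

# A minimal plain-Python sketch of a sequential prompt chain.
# The model ID and prompts are placeholders, not taken from the project.
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")
model_id = "anthropic.claude-3-haiku-20240307-v1:0"


def ask(prompt):
    """Send a single user message and return the model's text response."""
    response = client.converse(
        modelId=model_id,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
        inferenceConfig={"maxTokens": 512, "temperature": 0.5},
    )
    return response["output"]["message"]["content"][0]["text"]


# Step 1 generates an outline; step 2 chains that outline into the next prompt.
outline = ask("Write a three-bullet outline for a blog post about 'hello world' programs.")
post = ask(f"Write a short blog post based on this outline:\n{outline}")
print(post)

In the actual scenario, each ask-style step is a Step Functions state, which adds the retries, branching, and parallelism that plain sequential code doesn't provide.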

AI21 Labs Jurassic-2

The following code example shows how to send a text message to AI21 Labs Jurassic-2 using Bedrock's Converse API.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Send a text message to AI21 Labs Jurassic-2 using Bedrock's Converse API.

# Use the Conversation API to send a text message to AI21 Labs Jurassic-2.

import boto3
from botocore.exceptions import ClientError

# Create a Bedrock Runtime client in the AWS Region you want to use.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Set the model ID, e.g., Jurassic-2 Mid.
model_id = "ai21.j2-mid-v1"

# Start a conversation with the user message.
user_message = "Describe the purpose of a 'hello world' program in one line."
conversation = [
    {
        "role": "user",
        "content": [{"text": user_message}],
    }
]

try:
    # Send the message to the model, using a basic inference configuration.
    response = client.converse(
        modelId=model_id,
        messages=conversation,
        inferenceConfig={"maxTokens": 512, "temperature": 0.5, "topP": 0.9},
    )

    # Extract and print the response text.
    response_text = response["output"]["message"]["content"][0]["text"]
    print(response_text)

except (ClientError, Exception) as e:
    print(f"ERROR: Can't invoke '{model_id}'. Reason: {e}")
    exit(1)
  • For API details, see Converse in the AWS SDK for Python (Boto3) API Reference.

The following code example shows how to send a text message to AI21 Labs Jurassic-2 using the Invoke Model API.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Send a text message using the Invoke Model API.

# Use the native inference API to send a text message to AI21 Labs Jurassic-2.

import boto3
import json
from botocore.exceptions import ClientError

# Create a Bedrock Runtime client in the AWS Region of your choice.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Set the model ID, e.g., Jurassic-2 Mid.
model_id = "ai21.j2-mid-v1"

# Define the prompt for the model.
prompt = "Describe the purpose of a 'hello world' program in one line."

# Format the request payload using the model's native structure.
native_request = {
    "prompt": prompt,
    "maxTokens": 512,
    "temperature": 0.5,
}

# Convert the native request to JSON.
request = json.dumps(native_request)

try:
    # Invoke the model with the request.
    response = client.invoke_model(modelId=model_id, body=request)

except (ClientError, Exception) as e:
    print(f"ERROR: Can't invoke '{model_id}'. Reason: {e}")
    exit(1)

# Decode the response body.
model_response = json.loads(response["body"].read())

# Extract and print the response text.
response_text = model_response["completions"][0]["data"]["text"]
print(response_text)
  • For API details, see InvokeModel in the AWS SDK for Python (Boto3) API Reference.

Amazon Titan Image Generator

The following code example shows how to invoke Amazon Titan Image on Amazon Bedrock to generate an image.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Create an image with Amazon Titan Image Generator.

# Use the native inference API to create an image with Amazon Titan Image Generator

import base64
import boto3
import json
import os
import random

# Create a Bedrock Runtime client in the AWS Region of your choice.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Set the model ID, e.g., Titan Image Generator G1.
model_id = "amazon.titan-image-generator-v1"

# Define the image generation prompt for the model.
prompt = "A stylized picture of a cute old steampunk robot."

# Generate a random seed.
seed = random.randint(0, 2147483647)

# Format the request payload using the model's native structure.
native_request = {
    "taskType": "TEXT_IMAGE",
    "textToImageParams": {"text": prompt},
    "imageGenerationConfig": {
        "numberOfImages": 1,
        "quality": "standard",
        "cfgScale": 8.0,
        "height": 512,
        "width": 512,
        "seed": seed,
    },
}

# Convert the native request to JSON.
request = json.dumps(native_request)

# Invoke the model with the request.
response = client.invoke_model(modelId=model_id, body=request)

# Decode the response body.
model_response = json.loads(response["body"].read())

# Extract the image data.
base64_image_data = model_response["images"][0]

# Save the generated image to a local folder.
i, output_dir = 1, "output"
if not os.path.exists(output_dir):
    os.makedirs(output_dir)
while os.path.exists(os.path.join(output_dir, f"titan_{i}.png")):
    i += 1

image_data = base64.b64decode(base64_image_data)

image_path = os.path.join(output_dir, f"titan_{i}.png")
with open(image_path, "wb") as file:
    file.write(image_data)

print(f"The generated image has been saved to {image_path}")
  • For API details, see InvokeModel in the AWS SDK for Python (Boto3) API Reference.

Amazon Titan Text

The following code example shows how to send a text message to Amazon Titan Text using Bedrock's Converse API.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Send a text message to Amazon Titan Text using Bedrock's Converse API.

# Use the Conversation API to send a text message to Amazon Titan Text.

import boto3
from botocore.exceptions import ClientError

# Create a Bedrock Runtime client in the AWS Region you want to use.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Set the model ID, e.g., Titan Text Premier.
model_id = "amazon.titan-text-premier-v1:0"

# Start a conversation with the user message.
user_message = "Describe the purpose of a 'hello world' program in one line."
conversation = [
    {
        "role": "user",
        "content": [{"text": user_message}],
    }
]

try:
    # Send the message to the model, using a basic inference configuration.
    response = client.converse(
        modelId=model_id,
        messages=conversation,
        inferenceConfig={"maxTokens": 512, "temperature": 0.5, "topP": 0.9},
    )

    # Extract and print the response text.
    response_text = response["output"]["message"]["content"][0]["text"]
    print(response_text)

except (ClientError, Exception) as e:
    print(f"ERROR: Can't invoke '{model_id}'. Reason: {e}")
    exit(1)
  • For API details, see Converse in the AWS SDK for Python (Boto3) API Reference.

The following code example shows how to send a text message to Amazon Titan Text using Bedrock's Converse API and process the response stream in real time.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Send a text message to Amazon Titan Text using Bedrock's Converse API and process the response stream in real time.

# Use the Conversation API to send a text message to Amazon Titan Text
# and print the response stream.

import boto3
from botocore.exceptions import ClientError

# Create a Bedrock Runtime client in the AWS Region you want to use.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Set the model ID, e.g., Titan Text Premier.
model_id = "amazon.titan-text-premier-v1:0"

# Start a conversation with the user message.
user_message = "Describe the purpose of a 'hello world' program in one line."
conversation = [
    {
        "role": "user",
        "content": [{"text": user_message}],
    }
]

try:
    # Send the message to the model, using a basic inference configuration.
    streaming_response = client.converse_stream(
        modelId=model_id,
        messages=conversation,
        inferenceConfig={"maxTokens": 512, "temperature": 0.5, "topP": 0.9},
    )

    # Extract and print the streamed response text in real-time.
    for chunk in streaming_response["stream"]:
        if "contentBlockDelta" in chunk:
            text = chunk["contentBlockDelta"]["delta"]["text"]
            print(text, end="")

except (ClientError, Exception) as e:
    print(f"ERROR: Can't invoke '{model_id}'. Reason: {e}")
    exit(1)
  • For API details, see ConverseStream in the AWS SDK for Python (Boto3) API Reference.

The following code example shows how to send a text message to Amazon Titan Text using the Invoke Model API.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Send a text message using the Invoke Model API.

# Use the native inference API to send a text message to Amazon Titan Text.

import boto3
import json
from botocore.exceptions import ClientError

# Create a Bedrock Runtime client in the AWS Region of your choice.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Set the model ID, e.g., Titan Text Premier.
model_id = "amazon.titan-text-premier-v1:0"

# Define the prompt for the model.
prompt = "Describe the purpose of a 'hello world' program in one line."

# Format the request payload using the model's native structure.
native_request = {
    "inputText": prompt,
    "textGenerationConfig": {
        "maxTokenCount": 512,
        "temperature": 0.5,
    },
}

# Convert the native request to JSON.
request = json.dumps(native_request)

try:
    # Invoke the model with the request.
    response = client.invoke_model(modelId=model_id, body=request)

except (ClientError, Exception) as e:
    print(f"ERROR: Can't invoke '{model_id}'. Reason: {e}")
    exit(1)

# Decode the response body.
model_response = json.loads(response["body"].read())

# Extract and print the response text.
response_text = model_response["results"][0]["outputText"]
print(response_text)
  • For API details, see InvokeModel in the AWS SDK for Python (Boto3) API Reference.

The following code example shows how to send a text message to Amazon Titan Text models using the Invoke Model API and print the response stream.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Send a text message using the Invoke Model API and process the response stream in real time.

# Use the native inference API to send a text message to Amazon Titan Text
# and print the response stream.

import boto3
import json

# Create a Bedrock Runtime client in the AWS Region of your choice.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Set the model ID, e.g., Titan Text Premier.
model_id = "amazon.titan-text-premier-v1:0"

# Define the prompt for the model.
prompt = "Describe the purpose of a 'hello world' program in one line."

# Format the request payload using the model's native structure.
native_request = {
    "inputText": prompt,
    "textGenerationConfig": {
        "maxTokenCount": 512,
        "temperature": 0.5,
    },
}

# Convert the native request to JSON.
request = json.dumps(native_request)

# Invoke the model with the request.
streaming_response = client.invoke_model_with_response_stream(
    modelId=model_id, body=request
)

# Extract and print the response text in real-time.
for event in streaming_response["body"]:
    chunk = json.loads(event["chunk"]["bytes"])
    if "outputText" in chunk:
        print(chunk["outputText"], end="")

Amazon Titan Text Embeddings

The following code example shows how to:

  • Get started creating your first embedding.

  • Create embeddings, configuring the number of dimensions and normalization (V2 only). A short sketch of those V2 options follows the example below.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Create your first embedding with Amazon Titan Text Embeddings.

# Generate and print an embedding with Amazon Titan Text Embeddings V2.

import boto3
import json

# Create a Bedrock Runtime client in the AWS Region of your choice.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Set the model ID, e.g., Titan Text Embeddings V2.
model_id = "amazon.titan-embed-text-v2:0"

# The text to convert to an embedding.
input_text = "Please recommend books with a theme similar to the movie 'Inception'."

# Create the request for the model.
native_request = {"inputText": input_text}

# Convert the native request to JSON.
request = json.dumps(native_request)

# Invoke the model with the request.
response = client.invoke_model(modelId=model_id, body=request)

# Decode the model's native response body.
model_response = json.loads(response["body"].read())

# Extract and print the generated embedding and the input text token count.
embedding = model_response["embedding"]
input_token_count = model_response["inputTextTokenCount"]

print("\nYour input:")
print(input_text)
print(f"Number of input tokens: {input_token_count}")
print(f"Size of the generated embedding: {len(embedding)}")
print("Embedding:")
print(embedding)
  • For API details, see InvokeModel in the AWS SDK for Python (Boto3) API Reference.
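For the second bullet above, Titan Text Embeddings V2 also accepts optional dimensions and normalize fields in its native request. A minimal sketch with illustrative values:

# Sketch of the V2-only options: choose the output dimension count and
# request a normalized (unit-length) vector. Values here are illustrative.
import boto3
import json

client = boto3.client("bedrock-runtime", region_name="us-east-1")

request = json.dumps(
    {
        "inputText": "Example input for a compact, normalized embedding.",
        "dimensions": 256,  # V2 supports 256, 512, or 1024.
        "normalize": True,
    }
)

response = client.invoke_model(modelId="amazon.titan-embed-text-v2:0", body=request)
embedding = json.loads(response["body"].read())["embedding"]
print(f"Size of the generated embedding: {len(embedding)}")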

Anthropic Claude

The following code example shows how to send a text message to Anthropic Claude using Bedrock's Converse API.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Send a text message to Anthropic Claude using Bedrock's Converse API.

# Use the Conversation API to send a text message to Anthropic Claude.

import boto3
from botocore.exceptions import ClientError

# Create a Bedrock Runtime client in the AWS Region you want to use.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Set the model ID, e.g., Claude 3 Haiku.
model_id = "anthropic.claude-3-haiku-20240307-v1:0"

# Start a conversation with the user message.
user_message = "Describe the purpose of a 'hello world' program in one line."
conversation = [
    {
        "role": "user",
        "content": [{"text": user_message}],
    }
]

try:
    # Send the message to the model, using a basic inference configuration.
    response = client.converse(
        modelId=model_id,
        messages=conversation,
        inferenceConfig={"maxTokens": 512, "temperature": 0.5, "topP": 0.9},
    )

    # Extract and print the response text.
    response_text = response["output"]["message"]["content"][0]["text"]
    print(response_text)

except (ClientError, Exception) as e:
    print(f"ERROR: Can't invoke '{model_id}'. Reason: {e}")
    exit(1)
  • For API details, see Converse in the AWS SDK for Python (Boto3) API Reference.

The following code example shows how to send a text message to Anthropic Claude using Bedrock's Converse API and process the response stream in real time.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Send a text message to Anthropic Claude using Bedrock's Converse API and process the response stream in real time.

# Use the Conversation API to send a text message to Anthropic Claude
# and print the response stream.

import boto3
from botocore.exceptions import ClientError

# Create a Bedrock Runtime client in the AWS Region you want to use.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Set the model ID, e.g., Claude 3 Haiku.
model_id = "anthropic.claude-3-haiku-20240307-v1:0"

# Start a conversation with the user message.
user_message = "Describe the purpose of a 'hello world' program in one line."
conversation = [
    {
        "role": "user",
        "content": [{"text": user_message}],
    }
]

try:
    # Send the message to the model, using a basic inference configuration.
    streaming_response = client.converse_stream(
        modelId=model_id,
        messages=conversation,
        inferenceConfig={"maxTokens": 512, "temperature": 0.5, "topP": 0.9},
    )

    # Extract and print the streamed response text in real-time.
    for chunk in streaming_response["stream"]:
        if "contentBlockDelta" in chunk:
            text = chunk["contentBlockDelta"]["delta"]["text"]
            print(text, end="")

except (ClientError, Exception) as e:
    print(f"ERROR: Can't invoke '{model_id}'. Reason: {e}")
    exit(1)
  • For API details, see ConverseStream in the AWS SDK for Python (Boto3) API Reference.

The following code example shows how to send a text message to Anthropic Claude using the Invoke Model API.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Send a text message using the Invoke Model API.

# Use the native inference API to send a text message to Anthropic Claude.

import boto3
import json
from botocore.exceptions import ClientError

# Create a Bedrock Runtime client in the AWS Region of your choice.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Set the model ID, e.g., Claude 3 Haiku.
model_id = "anthropic.claude-3-haiku-20240307-v1:0"

# Define the prompt for the model.
prompt = "Describe the purpose of a 'hello world' program in one line."

# Format the request payload using the model's native structure.
native_request = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 512,
    "temperature": 0.5,
    "messages": [
        {
            "role": "user",
            "content": [{"type": "text", "text": prompt}],
        }
    ],
}

# Convert the native request to JSON.
request = json.dumps(native_request)

try:
    # Invoke the model with the request.
    response = client.invoke_model(modelId=model_id, body=request)

except (ClientError, Exception) as e:
    print(f"ERROR: Can't invoke '{model_id}'. Reason: {e}")
    exit(1)

# Decode the response body.
model_response = json.loads(response["body"].read())

# Extract and print the response text.
response_text = model_response["content"][0]["text"]
print(response_text)
  • For API details, see InvokeModel in the AWS SDK for Python (Boto3) API Reference.

The following code example shows how to send a text message to Anthropic Claude models using the Invoke Model API and print the response stream.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Send a text message using the Invoke Model API and process the response stream in real time.

# Use the native inference API to send a text message to Anthropic Claude
# and print the response stream.

import boto3
import json

# Create a Bedrock Runtime client in the AWS Region of your choice.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Set the model ID, e.g., Claude 3 Haiku.
model_id = "anthropic.claude-3-haiku-20240307-v1:0"

# Define the prompt for the model.
prompt = "Describe the purpose of a 'hello world' program in one line."

# Format the request payload using the model's native structure.
native_request = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 512,
    "temperature": 0.5,
    "messages": [
        {
            "role": "user",
            "content": [{"type": "text", "text": prompt}],
        }
    ],
}

# Convert the native request to JSON.
request = json.dumps(native_request)

# Invoke the model with the request.
streaming_response = client.invoke_model_with_response_stream(
    modelId=model_id, body=request
)

# Extract and print the response text in real-time.
for event in streaming_response["body"]:
    chunk = json.loads(event["chunk"]["bytes"])
    if chunk["type"] == "content_block_delta":
        print(chunk["delta"].get("text", ""), end="")

The following code example shows how to build a typical interaction between an application, a generative AI model, and connected tools or APIs that mediate the AI's interactions with the outside world. It uses the example of connecting an external weather API to the AI model so it can provide real-time weather information based on user input.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

The primary execution script of the demo. This script orchestrates the conversation between the user, the Amazon Bedrock Converse API, and the weather tool.

""" This demo illustrates a tool use scenario using Amazon Bedrock's Converse API and a weather tool. The script interacts with a foundation model on Amazon Bedrock to provide weather information based on user input. It uses the Open-Meteo API (https://open-meteo.com) to retrieve current weather data for a given location. """ import boto3 import logging from enum import Enum import utils.tool_use_print_utils as output import weather_tool logging.basicConfig(level=logging.INFO, format="%(message)s") AWS_REGION = "us-east-1" # For the most recent list of models supported by the Converse API's tool use functionality, visit: # https://docs.aws.amazon.com/bedrock/latest/userguide/conversation-inference.html class SupportedModels(Enum): CLAUDE_OPUS = "anthropic.claude-3-opus-20240229-v1:0" CLAUDE_SONNET = "anthropic.claude-3-sonnet-20240229-v1:0" CLAUDE_HAIKU = "anthropic.claude-3-haiku-20240307-v1:0" COHERE_COMMAND_R = "cohere.command-r-v1:0" COHERE_COMMAND_R_PLUS = "cohere.command-r-plus-v1:0" # Set the model ID, e.g., Claude 3 Haiku. MODEL_ID = SupportedModels.CLAUDE_HAIKU.value SYSTEM_PROMPT = """ You are a weather assistant that provides current weather data for user-specified locations using only the Weather_Tool, which expects latitude and longitude. Infer the coordinates from the location yourself. If the user provides coordinates, infer the approximate location and refer to it in your response. To use the tool, you strictly apply the provided tool specification. - Explain your step-by-step process, and give brief updates before each step. - Only use the Weather_Tool for data. Never guess or make up information. - Repeat the tool use for subsequent requests if necessary. - If the tool errors, apologize, explain weather is unavailable, and suggest other options. - Report temperatures in °C (°F) and wind in km/h (mph). Keep weather reports concise. Sparingly use emojis where appropriate. - Only respond to weather queries. Remind off-topic users of your purpose. - Never claim to search online, access external data, or use tools besides Weather_Tool. - Complete the entire process until you have all required data before sending the complete response. """ # The maximum number of recursive calls allowed in the tool_use_demo function. # This helps prevent infinite loops and potential performance issues. MAX_RECURSIONS = 5 class ToolUseDemo: """ Demonstrates the tool use feature with the Amazon Bedrock Converse API. """ def __init__(self): # Prepare the system prompt self.system_prompt = [{"text": SYSTEM_PROMPT}] # Prepare the tool configuration with the weather tool's specification self.tool_config = {"tools": [weather_tool.get_tool_spec()]} # Create a Bedrock Runtime client in the specified AWS Region. self.bedrockRuntimeClient = boto3.client( "bedrock-runtime", region_name=AWS_REGION ) def run(self): """ Starts the conversation with the user and handles the interaction with Bedrock. 
""" # Print the greeting and a short user guide output.header() # Start with an emtpy conversation conversation = [] # Get the first user input user_input = self._get_user_input() while user_input is not None: # Create a new message with the user input and append it to the conversation message = {"role": "user", "content": [{"text": user_input}]} conversation.append(message) # Send the conversation to Amazon Bedrock bedrock_response = self._send_conversation_to_bedrock(conversation) # Recursively handle the model's response until the model has returned # its final response or the recursion counter has reached 0 self._process_model_response( bedrock_response, conversation, max_recursion=MAX_RECURSIONS ) # Repeat the loop until the user decides to exit the application user_input = self._get_user_input() output.footer() def _send_conversation_to_bedrock(self, conversation): """ Sends the conversation, the system prompt, and the tool spec to Amazon Bedrock, and returns the response. :param conversation: The conversation history including the next message to send. :return: The response from Amazon Bedrock. """ output.call_to_bedrock(conversation) # Send the conversation, system prompt, and tool configuration, and return the response return self.bedrockRuntimeClient.converse( modelId=MODEL_ID, messages=conversation, system=self.system_prompt, toolConfig=self.tool_config, ) def _process_model_response( self, model_response, conversation, max_recursion=MAX_RECURSIONS ): """ Processes the response received via Amazon Bedrock and performs the necessary actions based on the stop reason. :param model_response: The model's response returned via Amazon Bedrock. :param conversation: The conversation history. :param max_recursion: The maximum number of recursive calls allowed. """ if max_recursion <= 0: # Stop the process, the number of recursive calls could indicate an infinite loop logging.warning( "Warning: Maximum number of recursions reached. Please try again." ) exit(1) # Append the model's response to the ongoing conversation message = model_response["output"]["message"] conversation.append(message) if model_response["stopReason"] == "tool_use": # If the stop reason is "tool_use", forward everything to the tool use handler self._handle_tool_use(message, conversation, max_recursion) if model_response["stopReason"] == "end_turn": # If the stop reason is "end_turn", print the model's response text, and finish the process output.model_response(message["content"][0]["text"]) return def _handle_tool_use( self, model_response, conversation, max_recursion=MAX_RECURSIONS ): """ Handles the tool use case by invoking the specified tool and sending the tool's response back to Bedrock. The tool response is appended to the conversation, and the conversation is sent back to Amazon Bedrock for further processing. :param model_response: The model's response containing the tool use request. :param conversation: The conversation history. :param max_recursion: The maximum number of recursive calls allowed. 
""" # Initialize an empty list of tool results tool_results = [] # The model's response can consist of multiple content blocks for content_block in model_response["content"]: if "text" in content_block: # If the content block contains text, print it to the console output.model_response(content_block["text"]) if "toolUse" in content_block: # If the content block is a tool use request, forward it to the tool tool_response = self._invoke_tool(content_block["toolUse"]) # Add the tool use ID and the tool's response to the list of results tool_results.append( { "toolResult": { "toolUseId": (tool_response["toolUseId"]), "content": [{"json": tool_response["content"]}], } } ) # Embed the tool results in a new user message message = {"role": "user", "content": tool_results} # Append the new message to the ongoing conversation conversation.append(message) # Send the conversation to Amazon Bedrock response = self._send_conversation_to_bedrock(conversation) # Recursively handle the model's response until the model has returned # its final response or the recursion counter has reached 0 self._process_model_response(response, conversation, max_recursion - 1) def _invoke_tool(self, payload): """ Invokes the specified tool with the given payload and returns the tool's response. If the requested tool does not exist, an error message is returned. :param payload: The payload containing the tool name and input data. :return: The tool's response or an error message. """ tool_name = payload["name"] if tool_name == "Weather_Tool": input_data = payload["input"] output.tool_use(tool_name, input_data) # Invoke the weather tool with the input data provided by response = weather_tool.fetch_weather_data(input_data) else: error_message = ( f"The requested tool with name '{tool_name}' does not exist." ) response = {"error": "true", "message": error_message} return {"toolUseId": payload["toolUseId"], "content": response} @staticmethod def _get_user_input(prompt="Your weather info request"): """ Prompts the user for input and returns the user's response. Returns None if the user enters 'x' to exit. :param prompt: The prompt to display to the user. :return: The user's input or None if the user chooses to exit. """ output.separator() user_input = input(f"{prompt} (x to exit): ") if user_input == "": prompt = "Please enter your weather info request, e.g. the name of a city" return ToolUseDemo._get_user_input(prompt) elif user_input.lower() == "x": return None else: return user_input if __name__ == "__main__": tool_use_demo = ToolUseDemo() tool_use_demo.run()

The weather tool used by the demo. This script defines the tool specification and implements the logic for retrieving weather data using the Open-Meteo API.

import requests
from requests.exceptions import RequestException


def get_tool_spec():
    """
    Returns the JSON Schema specification for the Weather tool. The tool specification
    defines the input schema and describes the tool's functionality.
    For more information, see https://json-schema.org/understanding-json-schema/reference.

    :return: The tool specification for the Weather tool.
    """
    return {
        "toolSpec": {
            "name": "Weather_Tool",
            "description": "Get the current weather for a given location, based on its WGS84 coordinates.",
            "inputSchema": {
                "json": {
                    "type": "object",
                    "properties": {
                        "latitude": {
                            "type": "string",
                            "description": "Geographical WGS84 latitude of the location.",
                        },
                        "longitude": {
                            "type": "string",
                            "description": "Geographical WGS84 longitude of the location.",
                        },
                    },
                    "required": ["latitude", "longitude"],
                }
            },
        }
    }


def fetch_weather_data(input_data):
    """
    Fetches weather data for the given latitude and longitude using the Open-Meteo API.
    Returns the weather data or an error message if the request fails.

    :param input_data: The input data containing the latitude and longitude.
    :return: The weather data or an error message.
    """
    endpoint = "https://api.open-meteo.com/v1/forecast"
    latitude = input_data.get("latitude")
    longitude = input_data.get("longitude", "")
    params = {"latitude": latitude, "longitude": longitude, "current_weather": True}

    try:
        response = requests.get(endpoint, params=params)
        weather_data = {"weather_data": response.json()}
        response.raise_for_status()
        return weather_data
    except RequestException as e:
        return e.response.json()
    except Exception as e:
        return {"error": type(e), "message": str(e)}
  • For API details, see Converse in the AWS SDK for Python (Boto3) API Reference.

Cohere Command

The following code example shows how to send a text message to Cohere Command using Bedrock's Converse API.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Send a text message to Cohere Command using Bedrock's Converse API.

# Use the Conversation API to send a text message to Cohere Command.

import boto3
from botocore.exceptions import ClientError

# Create a Bedrock Runtime client in the AWS Region you want to use.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Set the model ID, e.g., Command R.
model_id = "cohere.command-r-v1:0"

# Start a conversation with the user message.
user_message = "Describe the purpose of a 'hello world' program in one line."
conversation = [
    {
        "role": "user",
        "content": [{"text": user_message}],
    }
]

try:
    # Send the message to the model, using a basic inference configuration.
    response = client.converse(
        modelId=model_id,
        messages=conversation,
        inferenceConfig={"maxTokens": 512, "temperature": 0.5, "topP": 0.9},
    )

    # Extract and print the response text.
    response_text = response["output"]["message"]["content"][0]["text"]
    print(response_text)

except (ClientError, Exception) as e:
    print(f"ERROR: Can't invoke '{model_id}'. Reason: {e}")
    exit(1)
  • For API details, see Converse in the AWS SDK for Python (Boto3) API Reference.

The following code example shows how to send a text message to Cohere Command using Bedrock's Converse API and process the response stream in real time.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Send a text message to Cohere Command using Bedrock's Converse API and process the response stream in real time.

# Use the Conversation API to send a text message to Cohere Command
# and print the response stream.

import boto3
from botocore.exceptions import ClientError

# Create a Bedrock Runtime client in the AWS Region you want to use.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Set the model ID, e.g., Command R.
model_id = "cohere.command-r-v1:0"

# Start a conversation with the user message.
user_message = "Describe the purpose of a 'hello world' program in one line."
conversation = [
    {
        "role": "user",
        "content": [{"text": user_message}],
    }
]

try:
    # Send the message to the model, using a basic inference configuration.
    streaming_response = client.converse_stream(
        modelId=model_id,
        messages=conversation,
        inferenceConfig={"maxTokens": 512, "temperature": 0.5, "topP": 0.9},
    )

    # Extract and print the streamed response text in real-time.
    for chunk in streaming_response["stream"]:
        if "contentBlockDelta" in chunk:
            text = chunk["contentBlockDelta"]["delta"]["text"]
            print(text, end="")

except (ClientError, Exception) as e:
    print(f"ERROR: Can't invoke '{model_id}'. Reason: {e}")
    exit(1)
  • For API details, see ConverseStream in the AWS SDK for Python (Boto3) API Reference.

The following code example shows how to send a text message to Cohere Command R and R+ using the Invoke Model API.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Send a text message using the Invoke Model API.

# Use the native inference API to send a text message to Cohere Command R and R+.

import boto3
import json
from botocore.exceptions import ClientError

# Create a Bedrock Runtime client in the AWS Region of your choice.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Set the model ID, e.g., Command R.
model_id = "cohere.command-r-v1:0"

# Define the prompt for the model.
prompt = "Describe the purpose of a 'hello world' program in one line."

# Format the request payload using the model's native structure.
native_request = {
    "message": prompt,
    "max_tokens": 512,
    "temperature": 0.5,
}

# Convert the native request to JSON.
request = json.dumps(native_request)

try:
    # Invoke the model with the request.
    response = client.invoke_model(modelId=model_id, body=request)

except (ClientError, Exception) as e:
    print(f"ERROR: Can't invoke '{model_id}'. Reason: {e}")
    exit(1)

# Decode the response body.
model_response = json.loads(response["body"].read())

# Extract and print the response text.
response_text = model_response["text"]
print(response_text)
  • For API details, see InvokeModel in the AWS SDK for Python (Boto3) API Reference.

The following code example shows how to send a text message to Cohere Command using the Invoke Model API.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Send a text message using the Invoke Model API.

# Use the native inference API to send a text message to Cohere Command.

import boto3
import json
from botocore.exceptions import ClientError

# Create a Bedrock Runtime client in the AWS Region of your choice.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Set the model ID, e.g., Command Light.
model_id = "cohere.command-light-text-v14"

# Define the prompt for the model.
prompt = "Describe the purpose of a 'hello world' program in one line."

# Format the request payload using the model's native structure.
native_request = {
    "prompt": prompt,
    "max_tokens": 512,
    "temperature": 0.5,
}

# Convert the native request to JSON.
request = json.dumps(native_request)

try:
    # Invoke the model with the request.
    response = client.invoke_model(modelId=model_id, body=request)

except (ClientError, Exception) as e:
    print(f"ERROR: Can't invoke '{model_id}'. Reason: {e}")
    exit(1)

# Decode the response body.
model_response = json.loads(response["body"].read())

# Extract and print the response text.
response_text = model_response["generations"][0]["text"]
print(response_text)
  • For API details, see InvokeModel in the AWS SDK for Python (Boto3) API Reference.

The following code example shows how to send a text message to Cohere Command R and R+ using the Invoke Model API with a response stream.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Send a text message using the Invoke Model API and process the response stream in real time.

# Use the native inference API to send a text message to Cohere Command R and R+
# and print the response stream.

import boto3
import json
from botocore.exceptions import ClientError

# Create a Bedrock Runtime client in the AWS Region of your choice.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Set the model ID, e.g., Command R.
model_id = "cohere.command-r-v1:0"

# Define the prompt for the model.
prompt = "Describe the purpose of a 'hello world' program in one line."

# Format the request payload using the model's native structure.
native_request = {
    "message": prompt,
    "max_tokens": 512,
    "temperature": 0.5,
}

# Convert the native request to JSON.
request = json.dumps(native_request)

try:
    # Invoke the model with the request.
    streaming_response = client.invoke_model_with_response_stream(
        modelId=model_id, body=request
    )

    # Extract and print the response text in real-time.
    for event in streaming_response["body"]:
        chunk = json.loads(event["chunk"]["bytes"])
        if "generations" in chunk:
            print(chunk["generations"][0]["text"], end="")

except (ClientError, Exception) as e:
    print(f"ERROR: Can't invoke '{model_id}'. Reason: {e}")
    exit(1)
  • For API details, see InvokeModel in the AWS SDK for Python (Boto3) API Reference.

The following code example shows how to send a text message to Cohere Command using the Invoke Model API with a response stream.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Send a text message using the Invoke Model API and process the response stream in real time.

# Use the native inference API to send a text message to Cohere Command
# and print the response stream.

import boto3
import json
from botocore.exceptions import ClientError

# Create a Bedrock Runtime client in the AWS Region of your choice.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Set the model ID, e.g., Command Light.
model_id = "cohere.command-light-text-v14"

# Define the prompt for the model.
prompt = "Describe the purpose of a 'hello world' program in one line."

# Format the request payload using the model's native structure.
native_request = {
    "prompt": prompt,
    "max_tokens": 512,
    "temperature": 0.5,
}

# Convert the native request to JSON.
request = json.dumps(native_request)

try:
    # Invoke the model with the request.
    streaming_response = client.invoke_model_with_response_stream(
        modelId=model_id, body=request
    )

    # Extract and print the response text in real-time.
    for event in streaming_response["body"]:
        chunk = json.loads(event["chunk"]["bytes"])
        if "generations" in chunk:
            print(chunk["generations"][0]["text"], end="")

except (ClientError, Exception) as e:
    print(f"ERROR: Can't invoke '{model_id}'. Reason: {e}")
    exit(1)
  • For API details, see InvokeModel in the AWS SDK for Python (Boto3) API Reference.

The following code example shows how to build a typical interaction between an application, a generative AI model, and connected tools or APIs that mediate the AI's interactions with the outside world. It uses the example of connecting an external weather API to the AI model so it can provide real-time weather information based on user input.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

The primary execution script of the demo. This script orchestrates the conversation between the user, the Amazon Bedrock Converse API, and the weather tool.

""" This demo illustrates a tool use scenario using Amazon Bedrock's Converse API and a weather tool. The script interacts with a foundation model on Amazon Bedrock to provide weather information based on user input. It uses the Open-Meteo API (https://open-meteo.com) to retrieve current weather data for a given location. """ import boto3 import logging from enum import Enum import utils.tool_use_print_utils as output import weather_tool logging.basicConfig(level=logging.INFO, format="%(message)s") AWS_REGION = "us-east-1" # For the most recent list of models supported by the Converse API's tool use functionality, visit: # https://docs.aws.amazon.com/bedrock/latest/userguide/conversation-inference.html class SupportedModels(Enum): CLAUDE_OPUS = "anthropic.claude-3-opus-20240229-v1:0" CLAUDE_SONNET = "anthropic.claude-3-sonnet-20240229-v1:0" CLAUDE_HAIKU = "anthropic.claude-3-haiku-20240307-v1:0" COHERE_COMMAND_R = "cohere.command-r-v1:0" COHERE_COMMAND_R_PLUS = "cohere.command-r-plus-v1:0" # Set the model ID, e.g., Claude 3 Haiku. MODEL_ID = SupportedModels.CLAUDE_HAIKU.value SYSTEM_PROMPT = """ You are a weather assistant that provides current weather data for user-specified locations using only the Weather_Tool, which expects latitude and longitude. Infer the coordinates from the location yourself. If the user provides coordinates, infer the approximate location and refer to it in your response. To use the tool, you strictly apply the provided tool specification. - Explain your step-by-step process, and give brief updates before each step. - Only use the Weather_Tool for data. Never guess or make up information. - Repeat the tool use for subsequent requests if necessary. - If the tool errors, apologize, explain weather is unavailable, and suggest other options. - Report temperatures in °C (°F) and wind in km/h (mph). Keep weather reports concise. Sparingly use emojis where appropriate. - Only respond to weather queries. Remind off-topic users of your purpose. - Never claim to search online, access external data, or use tools besides Weather_Tool. - Complete the entire process until you have all required data before sending the complete response. """ # The maximum number of recursive calls allowed in the tool_use_demo function. # This helps prevent infinite loops and potential performance issues. MAX_RECURSIONS = 5 class ToolUseDemo: """ Demonstrates the tool use feature with the Amazon Bedrock Converse API. """ def __init__(self): # Prepare the system prompt self.system_prompt = [{"text": SYSTEM_PROMPT}] # Prepare the tool configuration with the weather tool's specification self.tool_config = {"tools": [weather_tool.get_tool_spec()]} # Create a Bedrock Runtime client in the specified AWS Region. self.bedrockRuntimeClient = boto3.client( "bedrock-runtime", region_name=AWS_REGION ) def run(self): """ Starts the conversation with the user and handles the interaction with Bedrock. 
""" # Print the greeting and a short user guide output.header() # Start with an emtpy conversation conversation = [] # Get the first user input user_input = self._get_user_input() while user_input is not None: # Create a new message with the user input and append it to the conversation message = {"role": "user", "content": [{"text": user_input}]} conversation.append(message) # Send the conversation to Amazon Bedrock bedrock_response = self._send_conversation_to_bedrock(conversation) # Recursively handle the model's response until the model has returned # its final response or the recursion counter has reached 0 self._process_model_response( bedrock_response, conversation, max_recursion=MAX_RECURSIONS ) # Repeat the loop until the user decides to exit the application user_input = self._get_user_input() output.footer() def _send_conversation_to_bedrock(self, conversation): """ Sends the conversation, the system prompt, and the tool spec to Amazon Bedrock, and returns the response. :param conversation: The conversation history including the next message to send. :return: The response from Amazon Bedrock. """ output.call_to_bedrock(conversation) # Send the conversation, system prompt, and tool configuration, and return the response return self.bedrockRuntimeClient.converse( modelId=MODEL_ID, messages=conversation, system=self.system_prompt, toolConfig=self.tool_config, ) def _process_model_response( self, model_response, conversation, max_recursion=MAX_RECURSIONS ): """ Processes the response received via Amazon Bedrock and performs the necessary actions based on the stop reason. :param model_response: The model's response returned via Amazon Bedrock. :param conversation: The conversation history. :param max_recursion: The maximum number of recursive calls allowed. """ if max_recursion <= 0: # Stop the process, the number of recursive calls could indicate an infinite loop logging.warning( "Warning: Maximum number of recursions reached. Please try again." ) exit(1) # Append the model's response to the ongoing conversation message = model_response["output"]["message"] conversation.append(message) if model_response["stopReason"] == "tool_use": # If the stop reason is "tool_use", forward everything to the tool use handler self._handle_tool_use(message, conversation, max_recursion) if model_response["stopReason"] == "end_turn": # If the stop reason is "end_turn", print the model's response text, and finish the process output.model_response(message["content"][0]["text"]) return def _handle_tool_use( self, model_response, conversation, max_recursion=MAX_RECURSIONS ): """ Handles the tool use case by invoking the specified tool and sending the tool's response back to Bedrock. The tool response is appended to the conversation, and the conversation is sent back to Amazon Bedrock for further processing. :param model_response: The model's response containing the tool use request. :param conversation: The conversation history. :param max_recursion: The maximum number of recursive calls allowed. 
""" # Initialize an empty list of tool results tool_results = [] # The model's response can consist of multiple content blocks for content_block in model_response["content"]: if "text" in content_block: # If the content block contains text, print it to the console output.model_response(content_block["text"]) if "toolUse" in content_block: # If the content block is a tool use request, forward it to the tool tool_response = self._invoke_tool(content_block["toolUse"]) # Add the tool use ID and the tool's response to the list of results tool_results.append( { "toolResult": { "toolUseId": (tool_response["toolUseId"]), "content": [{"json": tool_response["content"]}], } } ) # Embed the tool results in a new user message message = {"role": "user", "content": tool_results} # Append the new message to the ongoing conversation conversation.append(message) # Send the conversation to Amazon Bedrock response = self._send_conversation_to_bedrock(conversation) # Recursively handle the model's response until the model has returned # its final response or the recursion counter has reached 0 self._process_model_response(response, conversation, max_recursion - 1) def _invoke_tool(self, payload): """ Invokes the specified tool with the given payload and returns the tool's response. If the requested tool does not exist, an error message is returned. :param payload: The payload containing the tool name and input data. :return: The tool's response or an error message. """ tool_name = payload["name"] if tool_name == "Weather_Tool": input_data = payload["input"] output.tool_use(tool_name, input_data) # Invoke the weather tool with the input data provided by response = weather_tool.fetch_weather_data(input_data) else: error_message = ( f"The requested tool with name '{tool_name}' does not exist." ) response = {"error": "true", "message": error_message} return {"toolUseId": payload["toolUseId"], "content": response} @staticmethod def _get_user_input(prompt="Your weather info request"): """ Prompts the user for input and returns the user's response. Returns None if the user enters 'x' to exit. :param prompt: The prompt to display to the user. :return: The user's input or None if the user chooses to exit. """ output.separator() user_input = input(f"{prompt} (x to exit): ") if user_input == "": prompt = "Please enter your weather info request, e.g. the name of a city" return ToolUseDemo._get_user_input(prompt) elif user_input.lower() == "x": return None else: return user_input if __name__ == "__main__": tool_use_demo = ToolUseDemo() tool_use_demo.run()

The weather tool used by the demo. This script defines the tool specification and implements the logic for retrieving weather data using the Open-Meteo API.

import requests
from requests.exceptions import RequestException


def get_tool_spec():
    """
    Returns the JSON Schema specification for the Weather tool. The tool specification
    defines the input schema and describes the tool's functionality.
    For more information, see https://json-schema.org/understanding-json-schema/reference.

    :return: The tool specification for the Weather tool.
    """
    return {
        "toolSpec": {
            "name": "Weather_Tool",
            "description": "Get the current weather for a given location, based on its WGS84 coordinates.",
            "inputSchema": {
                "json": {
                    "type": "object",
                    "properties": {
                        "latitude": {
                            "type": "string",
                            "description": "Geographical WGS84 latitude of the location.",
                        },
                        "longitude": {
                            "type": "string",
                            "description": "Geographical WGS84 longitude of the location.",
                        },
                    },
                    "required": ["latitude", "longitude"],
                }
            },
        }
    }


def fetch_weather_data(input_data):
    """
    Fetches weather data for the given latitude and longitude using the Open-Meteo API.
    Returns the weather data or an error message if the request fails.

    :param input_data: The input data containing the latitude and longitude.
    :return: The weather data or an error message.
    """
    endpoint = "https://api.open-meteo.com/v1/forecast"
    latitude = input_data.get("latitude")
    longitude = input_data.get("longitude", "")
    params = {"latitude": latitude, "longitude": longitude, "current_weather": True}

    try:
        response = requests.get(endpoint, params=params)
        weather_data = {"weather_data": response.json()}
        response.raise_for_status()
        return weather_data
    except RequestException as e:
        return e.response.json()
    except Exception as e:
        return {"error": type(e), "message": str(e)}
  • For API details, see Converse in the AWS SDK for Python (Boto3) API Reference.

Meta Llama

The following code example shows how to send a text message to Meta Llama using Bedrock's Converse API.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Send a text message to Meta Llama using Bedrock's Converse API.

# Use the Conversation API to send a text message to Meta Llama.

import boto3
from botocore.exceptions import ClientError

# Create a Bedrock Runtime client in the AWS Region you want to use.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Set the model ID, e.g., Llama 3 8b Instruct.
model_id = "meta.llama3-8b-instruct-v1:0"

# Start a conversation with the user message.
user_message = "Describe the purpose of a 'hello world' program in one line."
conversation = [
    {
        "role": "user",
        "content": [{"text": user_message}],
    }
]

try:
    # Send the message to the model, using a basic inference configuration.
    response = client.converse(
        modelId=model_id,
        messages=conversation,
        inferenceConfig={"maxTokens": 512, "temperature": 0.5, "topP": 0.9},
    )

    # Extract and print the response text.
    response_text = response["output"]["message"]["content"][0]["text"]
    print(response_text)

except (ClientError, Exception) as e:
    print(f"ERROR: Can't invoke '{model_id}'. Reason: {e}")
    exit(1)
  • For API details, see Converse in the AWS SDK for Python (Boto3) API Reference.

The following code example shows how to send a text message to Meta Llama using Bedrock's Converse API and process the response stream in real time.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Send a text message to Meta Llama using Bedrock's Converse API and process the response stream in real time.

# Use the Conversation API to send a text message to Meta Llama
# and print the response stream.

import boto3
from botocore.exceptions import ClientError

# Create a Bedrock Runtime client in the AWS Region you want to use.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Set the model ID, e.g., Llama 3 8b Instruct.
model_id = "meta.llama3-8b-instruct-v1:0"

# Start a conversation with the user message.
user_message = "Describe the purpose of a 'hello world' program in one line."
conversation = [
    {
        "role": "user",
        "content": [{"text": user_message}],
    }
]

try:
    # Send the message to the model, using a basic inference configuration.
    streaming_response = client.converse_stream(
        modelId=model_id,
        messages=conversation,
        inferenceConfig={"maxTokens": 512, "temperature": 0.5, "topP": 0.9},
    )

    # Extract and print the streamed response text in real-time.
    for chunk in streaming_response["stream"]:
        if "contentBlockDelta" in chunk:
            text = chunk["contentBlockDelta"]["delta"]["text"]
            print(text, end="")

except (ClientError, Exception) as e:
    print(f"ERROR: Can't invoke '{model_id}'. Reason: {e}")
    exit(1)
  • For API details, see ConverseStream in the AWS SDK for Python (Boto3) API Reference.

The following code example shows how to send a text message to Meta Llama 2 using the Invoke Model API.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Send a text message using the Invoke Model API.

# Use the native inference API to send a text message to Meta Llama 2.

import boto3
import json
from botocore.exceptions import ClientError

# Create a Bedrock Runtime client in the AWS Region of your choice.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Set the model ID, e.g., Llama 2 Chat 13B.
model_id = "meta.llama2-13b-chat-v1"

# Define the prompt for the model.
prompt = "Describe the purpose of a 'hello world' program in one line."

# Embed the prompt in Llama 2's instruction format.
formatted_prompt = f"<s>[INST] {prompt} [/INST]"

# Format the request payload using the model's native structure.
native_request = {
    "prompt": formatted_prompt,
    "max_gen_len": 512,
    "temperature": 0.5,
}

# Convert the native request to JSON.
request = json.dumps(native_request)

try:
    # Invoke the model with the request.
    response = client.invoke_model(modelId=model_id, body=request)

except (ClientError, Exception) as e:
    print(f"ERROR: Can't invoke '{model_id}'. Reason: {e}")
    exit(1)

# Decode the response body.
model_response = json.loads(response["body"].read())

# Extract and print the response text.
response_text = model_response["generation"]
print(response_text)
  • For API details, see InvokeModel in the AWS SDK for Python (Boto3) API Reference.

The following code example shows how to send a text message to Meta Llama 3 using the Invoke Model API.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Send a text message using the Invoke Model API.

# Use the native inference API to send a text message to Meta Llama 3.

import boto3
import json
from botocore.exceptions import ClientError

# Create a Bedrock Runtime client in the AWS Region of your choice.
client = boto3.client("bedrock-runtime", region_name="us-west-2")

# Set the model ID, e.g., Llama 3 70b Instruct.
model_id = "meta.llama3-70b-instruct-v1:0"

# Define the prompt for the model.
prompt = "Describe the purpose of a 'hello world' program in one line."

# Embed the prompt in Llama 3's instruction format.
formatted_prompt = f"""
<|begin_of_text|><|start_header_id|>user<|end_header_id|>
{prompt}
<|eot_id|>
<|start_header_id|>assistant<|end_header_id|>
"""

# Format the request payload using the model's native structure.
native_request = {
    "prompt": formatted_prompt,
    "max_gen_len": 512,
    "temperature": 0.5,
}

# Convert the native request to JSON.
request = json.dumps(native_request)

try:
    # Invoke the model with the request.
    response = client.invoke_model(modelId=model_id, body=request)

except (ClientError, Exception) as e:
    print(f"ERROR: Can't invoke '{model_id}'. Reason: {e}")
    exit(1)

# Decode the response body.
model_response = json.loads(response["body"].read())

# Extract and print the response text.
response_text = model_response["generation"]
print(response_text)
  • For API details, see InvokeModel in the AWS SDK for Python (Boto3) API Reference.

The following code example shows how to send a text message to Meta Llama 2 using the Invoke Model API and print the response stream.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Send a text message using the Invoke Model API and process the response stream in real time.

# Use the native inference API to send a text message to Meta Llama 2
# and print the response stream.

import boto3
import json
from botocore.exceptions import ClientError

# Create a Bedrock Runtime client in the AWS Region of your choice.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Set the model ID, e.g., Llama 2 Chat 13B.
model_id = "meta.llama2-13b-chat-v1"

# Define the prompt for the model.
prompt = "Describe the purpose of a 'hello world' program in one line."

# Embed the prompt in Llama 2's instruction format.
formatted_prompt = f"<s>[INST] {prompt} [/INST]"

# Format the request payload using the model's native structure.
native_request = {
    "prompt": formatted_prompt,
    "max_gen_len": 512,
    "temperature": 0.5,
}

# Convert the native request to JSON.
request = json.dumps(native_request)

try:
    # Invoke the model with the request.
    streaming_response = client.invoke_model_with_response_stream(
        modelId=model_id, body=request
    )

    # Extract and print the response text in real-time.
    for event in streaming_response["body"]:
        chunk = json.loads(event["chunk"]["bytes"])
        if "generation" in chunk:
            print(chunk["generation"], end="")

except (ClientError, Exception) as e:
    print(f"ERROR: Can't invoke '{model_id}'. Reason: {e}")
    exit(1)

The following code example shows how to send a text message to Meta Llama 3 using the Invoke Model API and print the response stream.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Send a text message using the Invoke Model API and process the response stream in real time.

# Use the native inference API to send a text message to Meta Llama 3
# and print the response stream.

import boto3
import json
from botocore.exceptions import ClientError

# Create a Bedrock Runtime client in the AWS Region of your choice.
client = boto3.client("bedrock-runtime", region_name="us-west-2")

# Set the model ID, e.g., Llama 3 70b Instruct.
model_id = "meta.llama3-70b-instruct-v1:0"

# Define the prompt for the model.
prompt = "Describe the purpose of a 'hello world' program in one line."

# Embed the prompt in Llama 3's instruction format.
formatted_prompt = f"""
<|begin_of_text|><|start_header_id|>user<|end_header_id|>
{prompt}
<|eot_id|>
<|start_header_id|>assistant<|end_header_id|>
"""

# Format the request payload using the model's native structure.
native_request = {
    "prompt": formatted_prompt,
    "max_gen_len": 512,
    "temperature": 0.5,
}

# Convert the native request to JSON.
request = json.dumps(native_request)

try:
    # Invoke the model with the request.
    streaming_response = client.invoke_model_with_response_stream(
        modelId=model_id, body=request
    )

    # Extract and print the response text in real-time.
    for event in streaming_response["body"]:
        chunk = json.loads(event["chunk"]["bytes"])
        if "generation" in chunk:
            print(chunk["generation"], end="")

except (ClientError, Exception) as e:
    print(f"ERROR: Can't invoke '{model_id}'. Reason: {e}")
    exit(1)

Mistral AI

The following code example shows how to send a text message to Mistral using Bedrock's Converse API.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Send a text message to Mistral using Bedrock's Converse API.

# Use the Conversation API to send a text message to Mistral.

import boto3
from botocore.exceptions import ClientError

# Create a Bedrock Runtime client in the AWS Region you want to use.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Set the model ID, e.g., Mistral Large.
model_id = "mistral.mistral-large-2402-v1:0"

# Start a conversation with the user message.
user_message = "Describe the purpose of a 'hello world' program in one line."
conversation = [
    {
        "role": "user",
        "content": [{"text": user_message}],
    }
]

try:
    # Send the message to the model, using a basic inference configuration.
    response = client.converse(
        modelId=model_id,
        messages=conversation,
        inferenceConfig={"maxTokens": 512, "temperature": 0.5, "topP": 0.9},
    )

    # Extract and print the response text.
    response_text = response["output"]["message"]["content"][0]["text"]
    print(response_text)

except (ClientError, Exception) as e:
    print(f"ERROR: Can't invoke '{model_id}'. Reason: {e}")
    exit(1)
  • For API details, see Converse in the AWS SDK for Python (Boto3) API Reference.

The following code example shows how to send a text message to Mistral using Bedrock's Converse API and process the response stream in real time.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Send a text message to Mistral using Bedrock's Converse API and process the response stream in real time.

# Use the Conversation API to send a text message to Mistral
# and print the response stream.

import boto3
from botocore.exceptions import ClientError

# Create a Bedrock Runtime client in the AWS Region you want to use.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Set the model ID, e.g., Mistral Large.
model_id = "mistral.mistral-large-2402-v1:0"

# Start a conversation with the user message.
user_message = "Describe the purpose of a 'hello world' program in one line."
conversation = [
    {
        "role": "user",
        "content": [{"text": user_message}],
    }
]

try:
    # Send the message to the model, using a basic inference configuration.
    streaming_response = client.converse_stream(
        modelId=model_id,
        messages=conversation,
        inferenceConfig={"maxTokens": 512, "temperature": 0.5, "topP": 0.9},
    )

    # Extract and print the streamed response text in real-time.
    for chunk in streaming_response["stream"]:
        if "contentBlockDelta" in chunk:
            text = chunk["contentBlockDelta"]["delta"]["text"]
            print(text, end="")

except (ClientError, Exception) as e:
    print(f"ERROR: Can't invoke '{model_id}'. Reason: {e}")
    exit(1)
  • For API details, see ConverseStream in the AWS SDK for Python (Boto3) API Reference.

The following code example shows how to send a text message to Mistral models using the Invoke Model API.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Send a text message using the Invoke Model API.

# Use the native inference API to send a text message to Mistral.

import boto3
import json
from botocore.exceptions import ClientError

# Create a Bedrock Runtime client in the AWS Region of your choice.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Set the model ID, e.g., Mistral Large.
model_id = "mistral.mistral-large-2402-v1:0"

# Define the prompt for the model.
prompt = "Describe the purpose of a 'hello world' program in one line."

# Embed the prompt in Mistral's instruction format.
formatted_prompt = f"<s>[INST] {prompt} [/INST]"

# Format the request payload using the model's native structure.
native_request = {
    "prompt": formatted_prompt,
    "max_tokens": 512,
    "temperature": 0.5,
}

# Convert the native request to JSON.
request = json.dumps(native_request)

try:
    # Invoke the model with the request.
    response = client.invoke_model(modelId=model_id, body=request)

except (ClientError, Exception) as e:
    print(f"ERROR: Can't invoke '{model_id}'. Reason: {e}")
    exit(1)

# Decode the response body.
model_response = json.loads(response["body"].read())

# Extract and print the response text.
response_text = model_response["outputs"][0]["text"]
print(response_text)
  • For API details, see InvokeModel in the AWS SDK for Python (Boto3) API Reference.

The following code example shows how to send a text message to Mistral AI models using the Invoke Model API and print the response stream.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Send a text message using the Invoke Model API and process the response stream in real time.

# Use the native inference API to send a text message to Mistral
# and print the response stream.

import boto3
import json
from botocore.exceptions import ClientError

# Create a Bedrock Runtime client in the AWS Region of your choice.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Set the model ID, e.g., Mistral Large.
model_id = "mistral.mistral-large-2402-v1:0"

# Define the prompt for the model.
prompt = "Describe the purpose of a 'hello world' program in one line."

# Embed the prompt in Mistral's instruction format.
formatted_prompt = f"<s>[INST] {prompt} [/INST]"

# Format the request payload using the model's native structure.
native_request = {
    "prompt": formatted_prompt,
    "max_tokens": 512,
    "temperature": 0.5,
}

# Convert the native request to JSON.
request = json.dumps(native_request)

try:
    # Invoke the model with the request.
    streaming_response = client.invoke_model_with_response_stream(
        modelId=model_id, body=request
    )

    # Extract and print the response text in real-time.
    for event in streaming_response["body"]:
        chunk = json.loads(event["chunk"]["bytes"])
        if "outputs" in chunk:
            print(chunk["outputs"][0].get("text"), end="")

except (ClientError, Exception) as e:
    print(f"ERROR: Can't invoke '{model_id}'. Reason: {e}")
    exit(1)

Stable Diffusion

The following code example shows how to invoke Stability.ai Stable Diffusion XL on Amazon Bedrock to generate an image.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Create an image with Stable Diffusion.

# Use the native inference API to create an image with Stability.ai Stable Diffusion

import base64
import boto3
import json
import os
import random

# Create a Bedrock Runtime client in the AWS Region of your choice.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Set the model ID, e.g., Stable Diffusion XL 1.
model_id = "stability.stable-diffusion-xl-v1"

# Define the image generation prompt for the model.
prompt = "A stylized picture of a cute old steampunk robot."

# Generate a random seed.
seed = random.randint(0, 4294967295)

# Format the request payload using the model's native structure.
native_request = {
    "text_prompts": [{"text": prompt}],
    "style_preset": "photographic",
    "seed": seed,
    "cfg_scale": 10,
    "steps": 30,
}

# Convert the native request to JSON.
request = json.dumps(native_request)

# Invoke the model with the request.
response = client.invoke_model(modelId=model_id, body=request)

# Decode the response body.
model_response = json.loads(response["body"].read())

# Extract the image data.
base64_image_data = model_response["artifacts"][0]["base64"]

# Save the generated image to a local folder.
i, output_dir = 1, "output"
if not os.path.exists(output_dir):
    os.makedirs(output_dir)
while os.path.exists(os.path.join(output_dir, f"stability_{i}.png")):
    i += 1

image_data = base64.b64decode(base64_image_data)

image_path = os.path.join(output_dir, f"stability_{i}.png")
with open(image_path, "wb") as file:
    file.write(image_data)

print(f"The generated image has been saved to {image_path}")
  • For API details, see InvokeModel in the AWS SDK for Python (Boto3) API Reference.