
There are more AWS SDK examples available in the AWS Doc SDK Examples GitHub repository.


Amazon Bedrock Runtime examples using SDK for Python (Boto3)

The following code examples show you how to perform actions and implement common scenarios by using the AWS SDK for Python (Boto3) with Amazon Bedrock Runtime.

Actions are code excerpts from larger programs and must be run in context. While actions show you how to call individual service functions, you can see actions in context in their related scenarios and cross-service examples.

Scenarios are code examples that show you how to accomplish a specific task by calling multiple functions within the same service.

Each example includes a link to GitHub, where you can find instructions on how to set up and run the code in context.

AI21 Labs Jurassic-2

The following code example shows how to send a text message to AI21 Labs Jurassic-2, using Bedrock's Converse API.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Send a text message to AI21 Labs Jurassic-2, using Bedrock's Converse API.

# Use the Conversation API to send a text message to AI21 Labs Jurassic-2.

import boto3
from botocore.exceptions import ClientError

# Create a Bedrock Runtime client in the AWS Region you want to use.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Set the model ID, e.g., Jurassic-2 Mid.
model_id = "ai21.j2-mid-v1"

# Start a conversation with the user message.
user_message = "Describe the purpose of a 'hello world' program in one line."
conversation = [
    {
        "role": "user",
        "content": [{"text": user_message}],
    }
]

try:
    # Send the message to the model, using a basic inference configuration.
    response = client.converse(
        modelId=model_id,
        messages=conversation,
        inferenceConfig={"maxTokens": 512, "temperature": 0.5, "topP": 0.9},
    )

    # Extract and print the response text.
    response_text = response["output"]["message"]["content"][0]["text"]
    print(response_text)

except (ClientError, Exception) as e:
    print(f"ERROR: Can't invoke '{model_id}'. Reason: {e}")
    exit(1)
  • For API details, see Converse in AWS SDK for Python (Boto3) API Reference.

The following code example shows how to send a text message to AI21 Labs Jurassic-2, using the InvokeModel API.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Use the InvokeModel API to send a text message.

# Use the native inference API to send a text message to AI21 Labs Jurassic-2.

import boto3
import json
from botocore.exceptions import ClientError

# Create a Bedrock Runtime client in the AWS Region of your choice.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Set the model ID, e.g., Jurassic-2 Mid.
model_id = "ai21.j2-mid-v1"

# Define the prompt for the model.
prompt = "Describe the purpose of a 'hello world' program in one line."

# Format the request payload using the model's native structure.
native_request = {
    "prompt": prompt,
    "maxTokens": 512,
    "temperature": 0.5,
}

# Convert the native request to JSON.
request = json.dumps(native_request)

try:
    # Invoke the model with the request.
    response = client.invoke_model(modelId=model_id, body=request)

except (ClientError, Exception) as e:
    print(f"ERROR: Can't invoke '{model_id}'. Reason: {e}")
    exit(1)

# Decode the response body.
model_response = json.loads(response["body"].read())

# Extract and print the response text.
response_text = model_response["completions"][0]["data"]["text"]
print(response_text)
  • For API details, see InvokeModel in AWS SDK for Python (Boto3) API Reference.

Amazon Titan Image Generator

The following code example shows how to invoke Amazon Titan Image on Amazon Bedrock to generate an image.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Create an image with the Amazon Titan Image Generator.

# Use the native inference API to create an image with Amazon Titan Image Generator

import base64
import boto3
import json
import os
import random

# Create a Bedrock Runtime client in the AWS Region of your choice.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Set the model ID, e.g., Titan Image Generator G1.
model_id = "amazon.titan-image-generator-v1"

# Define the image generation prompt for the model.
prompt = "A stylized picture of a cute old steampunk robot."

# Generate a random seed.
seed = random.randint(0, 2147483647)

# Format the request payload using the model's native structure.
native_request = {
    "taskType": "TEXT_IMAGE",
    "textToImageParams": {"text": prompt},
    "imageGenerationConfig": {
        "numberOfImages": 1,
        "quality": "standard",
        "cfgScale": 8.0,
        "height": 512,
        "width": 512,
        "seed": seed,
    },
}

# Convert the native request to JSON.
request = json.dumps(native_request)

# Invoke the model with the request.
response = client.invoke_model(modelId=model_id, body=request)

# Decode the response body.
model_response = json.loads(response["body"].read())

# Extract the image data.
base64_image_data = model_response["images"][0]

# Save the generated image to a local folder.
i, output_dir = 1, "output"
if not os.path.exists(output_dir):
    os.makedirs(output_dir)
while os.path.exists(os.path.join(output_dir, f"titan_{i}.png")):
    i += 1

image_data = base64.b64decode(base64_image_data)

image_path = os.path.join(output_dir, f"titan_{i}.png")
with open(image_path, "wb") as file:
    file.write(image_data)

print(f"The generated image has been saved to {image_path}")
  • For API details, see InvokeModel in AWS SDK for Python (Boto3) API Reference.

Amazon Titan Text

The following code example shows how to send a text message to Amazon Titan Text, using Bedrock's Converse API.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Send a text message to Amazon Titan Text, using Bedrock's Converse API.

# Use the Conversation API to send a text message to Amazon Titan Text.

import boto3
from botocore.exceptions import ClientError

# Create a Bedrock Runtime client in the AWS Region you want to use.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Set the model ID, e.g., Titan Text Premier.
model_id = "amazon.titan-text-premier-v1:0"

# Start a conversation with the user message.
user_message = "Describe the purpose of a 'hello world' program in one line."
conversation = [
    {
        "role": "user",
        "content": [{"text": user_message}],
    }
]

try:
    # Send the message to the model, using a basic inference configuration.
    response = client.converse(
        modelId=model_id,
        messages=conversation,
        inferenceConfig={"maxTokens": 512, "temperature": 0.5, "topP": 0.9},
    )

    # Extract and print the response text.
    response_text = response["output"]["message"]["content"][0]["text"]
    print(response_text)

except (ClientError, Exception) as e:
    print(f"ERROR: Can't invoke '{model_id}'. Reason: {e}")
    exit(1)
  • For API details, see Converse in AWS SDK for Python (Boto3) API Reference.

The following code example shows how to send a text message to Amazon Titan Text, using Bedrock's Converse API and process the response stream in real-time.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Send a text message to Amazon Titan Text, using Bedrock's Converse API and process the response stream in real-time.

# Use the Conversation API to send a text message to Amazon Titan Text
# and print the response stream.

import boto3
from botocore.exceptions import ClientError

# Create a Bedrock Runtime client in the AWS Region you want to use.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Set the model ID, e.g., Titan Text Premier.
model_id = "amazon.titan-text-premier-v1:0"

# Start a conversation with the user message.
user_message = "Describe the purpose of a 'hello world' program in one line."
conversation = [
    {
        "role": "user",
        "content": [{"text": user_message}],
    }
]

try:
    # Send the message to the model, using a basic inference configuration.
    streaming_response = client.converse_stream(
        modelId=model_id,
        messages=conversation,
        inferenceConfig={"maxTokens": 512, "temperature": 0.5, "topP": 0.9},
    )

    # Extract and print the streamed response text in real-time.
    for chunk in streaming_response["stream"]:
        if "contentBlockDelta" in chunk:
            text = chunk["contentBlockDelta"]["delta"]["text"]
            print(text, end="")

except (ClientError, Exception) as e:
    print(f"ERROR: Can't invoke '{model_id}'. Reason: {e}")
    exit(1)
  • For API details, see ConverseStream in AWS SDK for Python (Boto3) API Reference.

The following code example shows how to send a text message to Amazon Titan Text, using the InvokeModel API.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Use the InvokeModel API to send a text message.

# Use the native inference API to send a text message to Amazon Titan Text.

import boto3
import json
from botocore.exceptions import ClientError

# Create a Bedrock Runtime client in the AWS Region of your choice.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Set the model ID, e.g., Titan Text Premier.
model_id = "amazon.titan-text-premier-v1:0"

# Define the prompt for the model.
prompt = "Describe the purpose of a 'hello world' program in one line."

# Format the request payload using the model's native structure.
native_request = {
    "inputText": prompt,
    "textGenerationConfig": {
        "maxTokenCount": 512,
        "temperature": 0.5,
    },
}

# Convert the native request to JSON.
request = json.dumps(native_request)

try:
    # Invoke the model with the request.
    response = client.invoke_model(modelId=model_id, body=request)

except (ClientError, Exception) as e:
    print(f"ERROR: Can't invoke '{model_id}'. Reason: {e}")
    exit(1)

# Decode the response body.
model_response = json.loads(response["body"].read())

# Extract and print the response text.
response_text = model_response["results"][0]["outputText"]
print(response_text)
  • For API details, see InvokeModel in AWS SDK for Python (Boto3) API Reference.

The following code example shows how to send a text message to Amazon Titan Text models, using the InvokeModel API, and print the response stream.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Use the InvokeModel API to send a text message and process the response stream in real-time.

# Use the native inference API to send a text message to Amazon Titan Text
# and print the response stream.

import boto3
import json

# Create a Bedrock Runtime client in the AWS Region of your choice.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Set the model ID, e.g., Titan Text Premier.
model_id = "amazon.titan-text-premier-v1:0"

# Define the prompt for the model.
prompt = "Describe the purpose of a 'hello world' program in one line."

# Format the request payload using the model's native structure.
native_request = {
    "inputText": prompt,
    "textGenerationConfig": {
        "maxTokenCount": 512,
        "temperature": 0.5,
    },
}

# Convert the native request to JSON.
request = json.dumps(native_request)

# Invoke the model with the request.
streaming_response = client.invoke_model_with_response_stream(
    modelId=model_id, body=request
)

# Extract and print the response text in real-time.
for event in streaming_response["body"]:
    chunk = json.loads(event["chunk"]["bytes"])
    if "outputText" in chunk:
        print(chunk["outputText"], end="")

Amazon Titan Text Embeddings

The following code example shows how to:

  • Get started creating your first embedding.

  • Create embeddings configuring the number of dimensions and normalization (V2 only); a sketch of this variant follows the example below.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Create your first embedding with Amazon Titan Text Embeddings.

# Generate and print an embedding with Amazon Titan Text Embeddings V2.

import boto3
import json

# Create a Bedrock Runtime client in the AWS Region of your choice.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Set the model ID, e.g., Titan Text Embeddings V2.
model_id = "amazon.titan-embed-text-v2:0"

# The text to convert to an embedding.
input_text = "Please recommend books with a theme similar to the movie 'Inception'."

# Create the request for the model.
native_request = {"inputText": input_text}

# Convert the native request to JSON.
request = json.dumps(native_request)

# Invoke the model with the request.
response = client.invoke_model(modelId=model_id, body=request)

# Decode the model's native response body.
model_response = json.loads(response["body"].read())

# Extract and print the generated embedding and the input text token count.
embedding = model_response["embedding"]
input_token_count = model_response["inputTextTokenCount"]

print("\nYour input:")
print(input_text)
print(f"Number of input tokens: {input_token_count}")
print(f"Size of the generated embedding: {len(embedding)}")
print("Embedding:")
print(embedding)
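The example above uses the model's default output settings. As a minimal sketch of the second task, the request below assumes the Titan Text Embeddings V2 request fields dimensions and normalize; verify them against the current model documentation before relying on them.

# Sketch: request a smaller, normalized embedding with Titan Text Embeddings V2.
# The "dimensions" and "normalize" fields are assumed V2-only request parameters.

import boto3
import json

client = boto3.client("bedrock-runtime", region_name="us-east-1")

native_request = {
    "inputText": "Please recommend books with a theme similar to the movie 'Inception'.",
    "dimensions": 256,  # Assumed V2 option: 256, 512, or 1024 output dimensions.
    "normalize": True,  # Assumed V2 option: return a unit-length vector.
}

response = client.invoke_model(
    modelId="amazon.titan-embed-text-v2:0", body=json.dumps(native_request)
)

model_response = json.loads(response["body"].read())
print(f"Size of the generated embedding: {len(model_response['embedding'])}")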
  • For API details, see InvokeModel in AWS SDK for Python (Boto3) API Reference.

Anthropic Claude

The following code example shows how to send a text message to Anthropic Claude, using Bedrock's Converse API.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Send a text message to Anthropic Claude, using Bedrock's Converse API.

# Use the Conversation API to send a text message to Anthropic Claude.

import boto3
from botocore.exceptions import ClientError

# Create a Bedrock Runtime client in the AWS Region you want to use.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Set the model ID, e.g., Claude 3 Haiku.
model_id = "anthropic.claude-3-haiku-20240307-v1:0"

# Start a conversation with the user message.
user_message = "Describe the purpose of a 'hello world' program in one line."
conversation = [
    {
        "role": "user",
        "content": [{"text": user_message}],
    }
]

try:
    # Send the message to the model, using a basic inference configuration.
    response = client.converse(
        modelId=model_id,
        messages=conversation,
        inferenceConfig={"maxTokens": 512, "temperature": 0.5, "topP": 0.9},
    )

    # Extract and print the response text.
    response_text = response["output"]["message"]["content"][0]["text"]
    print(response_text)

except (ClientError, Exception) as e:
    print(f"ERROR: Can't invoke '{model_id}'. Reason: {e}")
    exit(1)
  • For API details, see Converse in AWS SDK for Python (Boto3) API Reference.

The following code example shows how to send a text message to Anthropic Claude, using Bedrock's Converse API and process the response stream in real-time.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Send a text message to Anthropic Claude, using Bedrock's Converse API and process the response stream in real-time.

# Use the Conversation API to send a text message to Anthropic Claude
# and print the response stream.

import boto3
from botocore.exceptions import ClientError

# Create a Bedrock Runtime client in the AWS Region you want to use.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Set the model ID, e.g., Claude 3 Haiku.
model_id = "anthropic.claude-3-haiku-20240307-v1:0"

# Start a conversation with the user message.
user_message = "Describe the purpose of a 'hello world' program in one line."
conversation = [
    {
        "role": "user",
        "content": [{"text": user_message}],
    }
]

try:
    # Send the message to the model, using a basic inference configuration.
    streaming_response = client.converse_stream(
        modelId=model_id,
        messages=conversation,
        inferenceConfig={"maxTokens": 512, "temperature": 0.5, "topP": 0.9},
    )

    # Extract and print the streamed response text in real-time.
    for chunk in streaming_response["stream"]:
        if "contentBlockDelta" in chunk:
            text = chunk["contentBlockDelta"]["delta"]["text"]
            print(text, end="")

except (ClientError, Exception) as e:
    print(f"ERROR: Can't invoke '{model_id}'. Reason: {e}")
    exit(1)
  • For API details, see ConverseStream in AWS SDK for Python (Boto3) API Reference.

The following code example shows how to send a text message to Anthropic Claude, using the InvokeModel API.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Use the InvokeModel API to send a text message.

# Use the native inference API to send a text message to Anthropic Claude.

import boto3
import json
from botocore.exceptions import ClientError

# Create a Bedrock Runtime client in the AWS Region of your choice.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Set the model ID, e.g., Claude 3 Haiku.
model_id = "anthropic.claude-3-haiku-20240307-v1:0"

# Define the prompt for the model.
prompt = "Describe the purpose of a 'hello world' program in one line."

# Format the request payload using the model's native structure.
native_request = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 512,
    "temperature": 0.5,
    "messages": [
        {
            "role": "user",
            "content": [{"type": "text", "text": prompt}],
        }
    ],
}

# Convert the native request to JSON.
request = json.dumps(native_request)

try:
    # Invoke the model with the request.
    response = client.invoke_model(modelId=model_id, body=request)

except (ClientError, Exception) as e:
    print(f"ERROR: Can't invoke '{model_id}'. Reason: {e}")
    exit(1)

# Decode the response body.
model_response = json.loads(response["body"].read())

# Extract and print the response text.
response_text = model_response["content"][0]["text"]
print(response_text)
  • For API details, see InvokeModel in AWS SDK for Python (Boto3) API Reference.

The following code example shows how to send a text message to Anthropic Claude models, using the InvokeModel API, and print the response stream.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Use the InvokeModel API to send a text message and process the response stream in real-time.

# Use the native inference API to send a text message to Anthropic Claude
# and print the response stream.

import boto3
import json

# Create a Bedrock Runtime client in the AWS Region of your choice.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Set the model ID, e.g., Claude 3 Haiku.
model_id = "anthropic.claude-3-haiku-20240307-v1:0"

# Define the prompt for the model.
prompt = "Describe the purpose of a 'hello world' program in one line."

# Format the request payload using the model's native structure.
native_request = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 512,
    "temperature": 0.5,
    "messages": [
        {
            "role": "user",
            "content": [{"type": "text", "text": prompt}],
        }
    ],
}

# Convert the native request to JSON.
request = json.dumps(native_request)

# Invoke the model with the request.
streaming_response = client.invoke_model_with_response_stream(
    modelId=model_id, body=request
)

# Extract and print the response text in real-time.
for event in streaming_response["body"]:
    chunk = json.loads(event["chunk"]["bytes"])
    if chunk["type"] == "content_block_delta":
        print(chunk["delta"].get("text", ""), end="")

Cohere Command

The following code example shows how to send a text message to Cohere Command, using Bedrock's Converse API.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Send a text message to Cohere Command, using Bedrock's Converse API.

# Use the Conversation API to send a text message to Cohere Command.

import boto3
from botocore.exceptions import ClientError

# Create a Bedrock Runtime client in the AWS Region you want to use.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Set the model ID, e.g., Command R.
model_id = "cohere.command-r-v1:0"

# Start a conversation with the user message.
user_message = "Describe the purpose of a 'hello world' program in one line."
conversation = [
    {
        "role": "user",
        "content": [{"text": user_message}],
    }
]

try:
    # Send the message to the model, using a basic inference configuration.
    response = client.converse(
        modelId=model_id,
        messages=conversation,
        inferenceConfig={"maxTokens": 512, "temperature": 0.5, "topP": 0.9},
    )

    # Extract and print the response text.
    response_text = response["output"]["message"]["content"][0]["text"]
    print(response_text)

except (ClientError, Exception) as e:
    print(f"ERROR: Can't invoke '{model_id}'. Reason: {e}")
    exit(1)
  • For API details, see Converse in AWS SDK for Python (Boto3) API Reference.

The following code example shows how to send a text message to Cohere Command, using Bedrock's Converse API and process the response stream in real-time.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Send a text message to Cohere Command, using Bedrock's Converse API and process the response stream in real-time.

# Use the Conversation API to send a text message to Cohere Command
# and print the response stream.

import boto3
from botocore.exceptions import ClientError

# Create a Bedrock Runtime client in the AWS Region you want to use.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Set the model ID, e.g., Command R.
model_id = "cohere.command-r-v1:0"

# Start a conversation with the user message.
user_message = "Describe the purpose of a 'hello world' program in one line."
conversation = [
    {
        "role": "user",
        "content": [{"text": user_message}],
    }
]

try:
    # Send the message to the model, using a basic inference configuration.
    streaming_response = client.converse_stream(
        modelId=model_id,
        messages=conversation,
        inferenceConfig={"maxTokens": 512, "temperature": 0.5, "topP": 0.9},
    )

    # Extract and print the streamed response text in real-time.
    for chunk in streaming_response["stream"]:
        if "contentBlockDelta" in chunk:
            text = chunk["contentBlockDelta"]["delta"]["text"]
            print(text, end="")

except (ClientError, Exception) as e:
    print(f"ERROR: Can't invoke '{model_id}'. Reason: {e}")
    exit(1)
  • For API details, see ConverseStream in AWS SDK for Python (Boto3) API Reference.

The following code example shows how to send a text message to Cohere Command R and R+, using the InvokeModel API.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Use the InvokeModel API to send a text message.

# Use the native inference API to send a text message to Cohere Command R and R+.

import boto3
import json
from botocore.exceptions import ClientError

# Create a Bedrock Runtime client in the AWS Region of your choice.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Set the model ID, e.g., Command R.
model_id = "cohere.command-r-v1:0"

# Define the prompt for the model.
prompt = "Describe the purpose of a 'hello world' program in one line."

# Format the request payload using the model's native structure.
native_request = {
    "message": prompt,
    "max_tokens": 512,
    "temperature": 0.5,
}

# Convert the native request to JSON.
request = json.dumps(native_request)

try:
    # Invoke the model with the request.
    response = client.invoke_model(modelId=model_id, body=request)

except (ClientError, Exception) as e:
    print(f"ERROR: Can't invoke '{model_id}'. Reason: {e}")
    exit(1)

# Decode the response body.
model_response = json.loads(response["body"].read())

# Extract and print the response text.
response_text = model_response["text"]
print(response_text)
  • For API details, see InvokeModel in AWS SDK for Python (Boto3) API Reference.

The following code example shows how to send a text message to Cohere Command, using the InvokeModel API.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Use the InvokeModel API to send a text message.

# Use the native inference API to send a text message to Cohere Command.

import boto3
import json
from botocore.exceptions import ClientError

# Create a Bedrock Runtime client in the AWS Region of your choice.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Set the model ID, e.g., Command Light.
model_id = "cohere.command-light-text-v14"

# Define the prompt for the model.
prompt = "Describe the purpose of a 'hello world' program in one line."

# Format the request payload using the model's native structure.
native_request = {
    "prompt": prompt,
    "max_tokens": 512,
    "temperature": 0.5,
}

# Convert the native request to JSON.
request = json.dumps(native_request)

try:
    # Invoke the model with the request.
    response = client.invoke_model(modelId=model_id, body=request)

except (ClientError, Exception) as e:
    print(f"ERROR: Can't invoke '{model_id}'. Reason: {e}")
    exit(1)

# Decode the response body.
model_response = json.loads(response["body"].read())

# Extract and print the response text.
response_text = model_response["generations"][0]["text"]
print(response_text)
  • For API details, see InvokeModel in AWS SDK for Python (Boto3) API Reference.

The following code example shows how to send a text message to Cohere Command R and R+, using the InvokeModel API with a response stream.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Use the InvokeModel API to send a text message and process the response stream in real-time.

# Use the native inference API to send a text message to Cohere Command R and R+
# and print the response stream.

import boto3
import json
from botocore.exceptions import ClientError

# Create a Bedrock Runtime client in the AWS Region of your choice.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Set the model ID, e.g., Command R.
model_id = "cohere.command-r-v1:0"

# Define the prompt for the model.
prompt = "Describe the purpose of a 'hello world' program in one line."

# Format the request payload using the model's native structure.
native_request = {
    "message": prompt,
    "max_tokens": 512,
    "temperature": 0.5,
}

# Convert the native request to JSON.
request = json.dumps(native_request)

try:
    # Invoke the model with the request.
    streaming_response = client.invoke_model_with_response_stream(
        modelId=model_id, body=request
    )

    # Extract and print the response text in real-time.
    for event in streaming_response["body"]:
        chunk = json.loads(event["chunk"]["bytes"])
        if "generations" in chunk:
            print(chunk["generations"][0]["text"], end="")

except (ClientError, Exception) as e:
    print(f"ERROR: Can't invoke '{model_id}'. Reason: {e}")
    exit(1)
  • For API details, see InvokeModelWithResponseStream in AWS SDK for Python (Boto3) API Reference.

The following code example shows how to send a text message to Cohere Command, using the InvokeModel API with a response stream.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Use the InvokeModel API to send a text message and process the response stream in real-time.

# Use the native inference API to send a text message to Cohere Command
# and print the response stream.

import boto3
import json
from botocore.exceptions import ClientError

# Create a Bedrock Runtime client in the AWS Region of your choice.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Set the model ID, e.g., Command Light.
model_id = "cohere.command-light-text-v14"

# Define the prompt for the model.
prompt = "Describe the purpose of a 'hello world' program in one line."

# Format the request payload using the model's native structure.
native_request = {
    "prompt": prompt,
    "max_tokens": 512,
    "temperature": 0.5,
}

# Convert the native request to JSON.
request = json.dumps(native_request)

try:
    # Invoke the model with the request.
    streaming_response = client.invoke_model_with_response_stream(
        modelId=model_id, body=request
    )

    # Extract and print the response text in real-time.
    for event in streaming_response["body"]:
        chunk = json.loads(event["chunk"]["bytes"])
        if "generations" in chunk:
            print(chunk["generations"][0]["text"], end="")

except (ClientError, Exception) as e:
    print(f"ERROR: Can't invoke '{model_id}'. Reason: {e}")
    exit(1)
  • For API details, see InvokeModelWithResponseStream in AWS SDK for Python (Boto3) API Reference.

Meta Llama

The following code example shows how to send a text message to Meta Llama, using Bedrock's Converse API.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Send a text message to Meta Llama, using Bedrock's Converse API.

# Use the Conversation API to send a text message to Meta Llama.

import boto3
from botocore.exceptions import ClientError

# Create a Bedrock Runtime client in the AWS Region you want to use.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Set the model ID, e.g., Llama 3 8b Instruct.
model_id = "meta.llama3-8b-instruct-v1:0"

# Start a conversation with the user message.
user_message = "Describe the purpose of a 'hello world' program in one line."
conversation = [
    {
        "role": "user",
        "content": [{"text": user_message}],
    }
]

try:
    # Send the message to the model, using a basic inference configuration.
    response = client.converse(
        modelId=model_id,
        messages=conversation,
        inferenceConfig={"maxTokens": 512, "temperature": 0.5, "topP": 0.9},
    )

    # Extract and print the response text.
    response_text = response["output"]["message"]["content"][0]["text"]
    print(response_text)

except (ClientError, Exception) as e:
    print(f"ERROR: Can't invoke '{model_id}'. Reason: {e}")
    exit(1)
  • For API details, see Converse in AWS SDK for Python (Boto3) API Reference.

The following code example shows how to send a text message to Meta Llama, using Bedrock's Converse API and process the response stream in real-time.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Send a text message to Meta Llama, using Bedrock's Converse API and process the response stream in real-time.

# Use the Conversation API to send a text message to Meta Llama
# and print the response stream.

import boto3
from botocore.exceptions import ClientError

# Create a Bedrock Runtime client in the AWS Region you want to use.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Set the model ID, e.g., Llama 3 8b Instruct.
model_id = "meta.llama3-8b-instruct-v1:0"

# Start a conversation with the user message.
user_message = "Describe the purpose of a 'hello world' program in one line."
conversation = [
    {
        "role": "user",
        "content": [{"text": user_message}],
    }
]

try:
    # Send the message to the model, using a basic inference configuration.
    streaming_response = client.converse_stream(
        modelId=model_id,
        messages=conversation,
        inferenceConfig={"maxTokens": 512, "temperature": 0.5, "topP": 0.9},
    )

    # Extract and print the streamed response text in real-time.
    for chunk in streaming_response["stream"]:
        if "contentBlockDelta" in chunk:
            text = chunk["contentBlockDelta"]["delta"]["text"]
            print(text, end="")

except (ClientError, Exception) as e:
    print(f"ERROR: Can't invoke '{model_id}'. Reason: {e}")
    exit(1)
  • For API details, see ConverseStream in AWS SDK for Python (Boto3) API Reference.

The following code example shows how to send a text message to Meta Llama 2, using the InvokeModel API.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Use the InvokeModel API to send a text message.

# Use the native inference API to send a text message to Meta Llama 2.

import boto3
import json
from botocore.exceptions import ClientError

# Create a Bedrock Runtime client in the AWS Region of your choice.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Set the model ID, e.g., Llama 2 Chat 13B.
model_id = "meta.llama2-13b-chat-v1"

# Define the prompt for the model.
prompt = "Describe the purpose of a 'hello world' program in one line."

# Embed the prompt in Llama 2's instruction format.
formatted_prompt = f"<s>[INST] {prompt} [/INST]"

# Format the request payload using the model's native structure.
native_request = {
    "prompt": formatted_prompt,
    "max_gen_len": 512,
    "temperature": 0.5,
}

# Convert the native request to JSON.
request = json.dumps(native_request)

try:
    # Invoke the model with the request.
    response = client.invoke_model(modelId=model_id, body=request)

except (ClientError, Exception) as e:
    print(f"ERROR: Can't invoke '{model_id}'. Reason: {e}")
    exit(1)

# Decode the response body.
model_response = json.loads(response["body"].read())

# Extract and print the response text.
response_text = model_response["generation"]
print(response_text)
  • For API details, see InvokeModel in AWS SDK for Python (Boto3) API Reference.

The following code example shows how to send a text message to Meta Llama 3, using the InvokeModel API.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Use the InvokeModel API to send a text message.

# Use the native inference API to send a text message to Meta Llama 3.

import boto3
import json
from botocore.exceptions import ClientError

# Create a Bedrock Runtime client in the AWS Region of your choice.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Set the model ID, e.g., Llama 3 8b Instruct.
model_id = "meta.llama3-8b-instruct-v1:0"

# Define the prompt for the model.
prompt = "Describe the purpose of a 'hello world' program in one line."

# Embed the prompt in Llama 3's instruction format.
formatted_prompt = f"""
<|begin_of_text|>
<|start_header_id|>user<|end_header_id|>
{prompt}
<|eot_id|>
<|start_header_id|>assistant<|end_header_id|>
"""

# Format the request payload using the model's native structure.
native_request = {
    "prompt": formatted_prompt,
    "max_gen_len": 512,
    "temperature": 0.5,
}

# Convert the native request to JSON.
request = json.dumps(native_request)

try:
    # Invoke the model with the request.
    response = client.invoke_model(modelId=model_id, body=request)

except (ClientError, Exception) as e:
    print(f"ERROR: Can't invoke '{model_id}'. Reason: {e}")
    exit(1)

# Decode the response body.
model_response = json.loads(response["body"].read())

# Extract and print the response text.
response_text = model_response["generation"]
print(response_text)
  • For API details, see InvokeModel in AWS SDK for Python (Boto3) API Reference.

The following code example shows how to send a text message to Meta Llama 2, using the InvokeModel API, and print the response stream.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Use the InvokeModel API to send a text message and process the response stream in real-time.

# Use the native inference API to send a text message to Meta Llama 2
# and print the response stream.

import boto3
import json
from botocore.exceptions import ClientError

# Create a Bedrock Runtime client in the AWS Region of your choice.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Set the model ID, e.g., Llama 2 Chat 13B.
model_id = "meta.llama2-13b-chat-v1"

# Define the prompt for the model.
prompt = "Describe the purpose of a 'hello world' program in one line."

# Embed the prompt in Llama 2's instruction format.
formatted_prompt = f"<s>[INST] {prompt} [/INST]"

# Format the request payload using the model's native structure.
native_request = {
    "prompt": formatted_prompt,
    "max_gen_len": 512,
    "temperature": 0.5,
}

# Convert the native request to JSON.
request = json.dumps(native_request)

try:
    # Invoke the model with the request.
    streaming_response = client.invoke_model_with_response_stream(
        modelId=model_id, body=request
    )

    # Extract and print the response text in real-time.
    for event in streaming_response["body"]:
        chunk = json.loads(event["chunk"]["bytes"])
        if "generation" in chunk:
            print(chunk["generation"], end="")

except (ClientError, Exception) as e:
    print(f"ERROR: Can't invoke '{model_id}'. Reason: {e}")
    exit(1)

The following code example shows how to send a text message to Meta Llama 3, using the InvokeModel API, and print the response stream.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Use the InvokeModel API to send a text message and process the response stream in real-time.

# Use the native inference API to send a text message to Meta Llama 3
# and print the response stream.

import boto3
import json
from botocore.exceptions import ClientError

# Create a Bedrock Runtime client in the AWS Region of your choice.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Set the model ID, e.g., Llama 3 8b Instruct.
model_id = "meta.llama3-8b-instruct-v1:0"

# Define the prompt for the model.
prompt = "Describe the purpose of a 'hello world' program in one line."

# Embed the prompt in Llama 3's instruction format.
formatted_prompt = f"""
<|begin_of_text|>
<|start_header_id|>user<|end_header_id|>
{prompt}
<|eot_id|>
<|start_header_id|>assistant<|end_header_id|>
"""

# Format the request payload using the model's native structure.
native_request = {
    "prompt": formatted_prompt,
    "max_gen_len": 512,
    "temperature": 0.5,
}

# Convert the native request to JSON.
request = json.dumps(native_request)

try:
    # Invoke the model with the request.
    streaming_response = client.invoke_model_with_response_stream(
        modelId=model_id, body=request
    )

    # Extract and print the response text in real-time.
    for event in streaming_response["body"]:
        chunk = json.loads(event["chunk"]["bytes"])
        if "generation" in chunk:
            print(chunk["generation"], end="")

except (ClientError, Exception) as e:
    print(f"ERROR: Can't invoke '{model_id}'. Reason: {e}")
    exit(1)

Mistral AI

The following code example shows how to send a text message to Mistral, using Bedrock's Converse API.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Send a text message to Mistral, using Bedrock's Converse API.

# Use the Conversation API to send a text message to Mistral.

import boto3
from botocore.exceptions import ClientError

# Create a Bedrock Runtime client in the AWS Region you want to use.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Set the model ID, e.g., Mistral Large.
model_id = "mistral.mistral-large-2402-v1:0"

# Start a conversation with the user message.
user_message = "Describe the purpose of a 'hello world' program in one line."
conversation = [
    {
        "role": "user",
        "content": [{"text": user_message}],
    }
]

try:
    # Send the message to the model, using a basic inference configuration.
    response = client.converse(
        modelId=model_id,
        messages=conversation,
        inferenceConfig={"maxTokens": 512, "temperature": 0.5, "topP": 0.9},
    )

    # Extract and print the response text.
    response_text = response["output"]["message"]["content"][0]["text"]
    print(response_text)

except (ClientError, Exception) as e:
    print(f"ERROR: Can't invoke '{model_id}'. Reason: {e}")
    exit(1)
  • For API details, see Converse in AWS SDK for Python (Boto3) API Reference.

The following code example shows how to send a text message to Mistral, using Bedrock's Converse API and process the response stream in real-time.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Send a text message to Mistral, using Bedrock's Converse API and process the response stream in real-time.

# Use the Conversation API to send a text message to Mistral
# and print the response stream.

import boto3
from botocore.exceptions import ClientError

# Create a Bedrock Runtime client in the AWS Region you want to use.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Set the model ID, e.g., Mistral Large.
model_id = "mistral.mistral-large-2402-v1:0"

# Start a conversation with the user message.
user_message = "Describe the purpose of a 'hello world' program in one line."
conversation = [
    {
        "role": "user",
        "content": [{"text": user_message}],
    }
]

try:
    # Send the message to the model, using a basic inference configuration.
    streaming_response = client.converse_stream(
        modelId=model_id,
        messages=conversation,
        inferenceConfig={"maxTokens": 512, "temperature": 0.5, "topP": 0.9},
    )

    # Extract and print the streamed response text in real-time.
    for chunk in streaming_response["stream"]:
        if "contentBlockDelta" in chunk:
            text = chunk["contentBlockDelta"]["delta"]["text"]
            print(text, end="")

except (ClientError, Exception) as e:
    print(f"ERROR: Can't invoke '{model_id}'. Reason: {e}")
    exit(1)
  • For API details, see ConverseStream in AWS SDK for Python (Boto3) API Reference.

The following code example shows how to send a text message to Mistral models, using the InvokeModel API.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Use the InvokeModel API to send a text message.

# Use the native inference API to send a text message to Mistral.

import boto3
import json
from botocore.exceptions import ClientError

# Create a Bedrock Runtime client in the AWS Region of your choice.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Set the model ID, e.g., Mistral Large.
model_id = "mistral.mistral-large-2402-v1:0"

# Define the prompt for the model.
prompt = "Describe the purpose of a 'hello world' program in one line."

# Embed the prompt in Mistral's instruction format.
formatted_prompt = f"<s>[INST] {prompt} [/INST]"

# Format the request payload using the model's native structure.
native_request = {
    "prompt": formatted_prompt,
    "max_tokens": 512,
    "temperature": 0.5,
}

# Convert the native request to JSON.
request = json.dumps(native_request)

try:
    # Invoke the model with the request.
    response = client.invoke_model(modelId=model_id, body=request)

except (ClientError, Exception) as e:
    print(f"ERROR: Can't invoke '{model_id}'. Reason: {e}")
    exit(1)

# Decode the response body.
model_response = json.loads(response["body"].read())

# Extract and print the response text.
response_text = model_response["outputs"][0]["text"]
print(response_text)
  • For API details, see InvokeModel in AWS SDK for Python (Boto3) API Reference.

The following code example shows how to send a text message to Mistral AI models, using the InvokeModel API, and print the response stream.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Use the InvokeModel API to send a text message and process the response stream in real-time.

# Use the native inference API to send a text message to Mistral
# and print the response stream.

import boto3
import json
from botocore.exceptions import ClientError

# Create a Bedrock Runtime client in the AWS Region of your choice.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Set the model ID, e.g., Mistral Large.
model_id = "mistral.mistral-large-2402-v1:0"

# Define the prompt for the model.
prompt = "Describe the purpose of a 'hello world' program in one line."

# Embed the prompt in Mistral's instruction format.
formatted_prompt = f"<s>[INST] {prompt} [/INST]"

# Format the request payload using the model's native structure.
native_request = {
    "prompt": formatted_prompt,
    "max_tokens": 512,
    "temperature": 0.5,
}

# Convert the native request to JSON.
request = json.dumps(native_request)

try:
    # Invoke the model with the request.
    streaming_response = client.invoke_model_with_response_stream(
        modelId=model_id, body=request
    )

    # Extract and print the response text in real-time.
    for event in streaming_response["body"]:
        chunk = json.loads(event["chunk"]["bytes"])
        if "outputs" in chunk:
            print(chunk["outputs"][0].get("text"), end="")

except (ClientError, Exception) as e:
    print(f"ERROR: Can't invoke '{model_id}'. Reason: {e}")
    exit(1)

Scenarios

The following code example shows how to create playgrounds to interact with Amazon Bedrock foundation models through different modalities.

SDK for Python (Boto3)

The Python Foundation Model (FM) Playground is a Python/FastAPI sample application that showcases how to use Amazon Bedrock with Python. This example shows how Python developers can use Amazon Bedrock to build generative AI-enabled applications. You can test and interact with Amazon Bedrock foundation models by using the following three playgrounds:

  • A text playground.

  • A chat playground.

  • An image playground.

The example also lists and displays the foundation models you have access to, along with their characteristics. For the source code and deployment instructions, see the project on GitHub. A minimal sketch of the pattern behind the text playground follows.
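To illustrate that pattern (this is not the project's actual code), the following sketch shows how a FastAPI route might proxy a prompt to the Converse API; the route path, request model, and model ID are assumptions made for this example.

# Hypothetical sketch of a FastAPI route that forwards a prompt to Amazon Bedrock.
# The "/text" route, PromptRequest model, and model ID are illustrative assumptions.

import boto3
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
client = boto3.client("bedrock-runtime", region_name="us-east-1")

class PromptRequest(BaseModel):
    prompt: str

@app.post("/text")
def text_playground(request: PromptRequest):
    # Forward the user's prompt to the model and return the completion.
    response = client.converse(
        modelId="amazon.titan-text-premier-v1:0",
        messages=[{"role": "user", "content": [{"text": request.prompt}]}],
        inferenceConfig={"maxTokens": 512, "temperature": 0.5},
    )
    return {"completion": response["output"]["message"]["content"][0]["text"]}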

Services used in this example
  • Amazon Bedrock Runtime

The following code example shows how to build and orchestrate generative AI applications with Amazon Bedrock and Step Functions.

SDK for Python (Boto3)

The Amazon Bedrock Serverless Prompt Chaining scenario demonstrates how AWS Step Functions, Amazon Bedrock, and Agents for Amazon Bedrock can be used to build and orchestrate complex, serverless, and highly scalable generative AI applications. It contains the following working examples:

  • Write an analysis of a given novel for a literature blog. This example illustrates a simple, sequential prompt chain (see the sketch after this list).

  • Generate a short story about a given topic. This example illustrates how the AI can iteratively process a list of items that it previously generated.

  • Create an itinerary for a weekend vacation to a given destination. This example illustrates parallelizing multiple distinct prompts.

  • Pitch movie ideas to a human user acting as a movie producer. This example illustrates parallelizing the same prompt with different inference parameters, backtracking to a previous step in the chain, and including human input as part of the workflow.

  • Plan a meal based on the ingredients the user has on hand. This example illustrates how a prompt chain can incorporate two distinct AI conversations, with two AI personas engaging in a debate with each other to improve the final outcome.

  • Find and summarize today's highest trending GitHub repository. This example illustrates chaining multiple AI agents that interact with external APIs.

For the complete source code and instructions to set it up and run, see the full project on GitHub.
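As a minimal sketch of the first pattern, a sequential prompt chain simply feeds one model response into the next prompt. The scenario itself orchestrates such steps with Step Functions state machines; the two-step chain, prompts, and model ID below are illustrative assumptions.

# Minimal sketch of a sequential prompt chain: the output of step 1 becomes
# part of the prompt for step 2. Illustrative only; the scenario orchestrates
# equivalent steps with Step Functions.

import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")
model_id = "anthropic.claude-3-haiku-20240307-v1:0"

def converse(prompt):
    # Send a single-turn message and return the model's text response.
    response = client.converse(
        modelId=model_id,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
        inferenceConfig={"maxTokens": 512, "temperature": 0.5},
    )
    return response["output"]["message"]["content"][0]["text"]

# Step 1: summarize the novel. Step 2: turn the summary into a blog analysis.
summary = converse("Summarize the plot of the novel 'Frankenstein' in three sentences.")
analysis = converse(f"Write a short literary analysis for a blog, based on this summary:\n{summary}")
print(analysis)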

Services used in this example
  • Amazon Bedrock

  • Amazon Bedrock Runtime

  • Agents for Amazon Bedrock

  • Agents for Amazon Bedrock Runtime

  • Step Functions

Stable Diffusion

The following code example shows how to invoke Stability.ai Stable Diffusion XL on Amazon Bedrock to generate an image.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Create an image with Stable Diffusion.

# Use the native inference API to create an image with Stability.ai Stable Diffusion

import base64
import boto3
import json
import os
import random

# Create a Bedrock Runtime client in the AWS Region of your choice.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Set the model ID, e.g., Stable Diffusion XL 1.
model_id = "stability.stable-diffusion-xl-v1"

# Define the image generation prompt for the model.
prompt = "A stylized picture of a cute old steampunk robot."

# Generate a random seed.
seed = random.randint(0, 4294967295)

# Format the request payload using the model's native structure.
native_request = {
    "text_prompts": [{"text": prompt}],
    "style_preset": "photographic",
    "seed": seed,
    "cfg_scale": 10,
    "steps": 30,
}

# Convert the native request to JSON.
request = json.dumps(native_request)

# Invoke the model with the request.
response = client.invoke_model(modelId=model_id, body=request)

# Decode the response body.
model_response = json.loads(response["body"].read())

# Extract the image data.
base64_image_data = model_response["artifacts"][0]["base64"]

# Save the generated image to a local folder.
i, output_dir = 1, "output"
if not os.path.exists(output_dir):
    os.makedirs(output_dir)
while os.path.exists(os.path.join(output_dir, f"stability_{i}.png")):
    i += 1

image_data = base64.b64decode(base64_image_data)

image_path = os.path.join(output_dir, f"stability_{i}.png")
with open(image_path, "wb") as file:
    file.write(image_data)

print(f"The generated image has been saved to {image_path}")
  • For API details, see InvokeModel in AWS SDK for Python (Boto3) API Reference.