
Invoke Amazon Nova on Amazon Bedrock using Bedrock's Converse API with a response stream

This text is a machine translation of the English original. If there is any ambiguity or inconsistency, the English version prevails.

The following code examples show how to send a text message to Amazon Nova using Bedrock's Converse API and process the response stream in real time.

.NET
SDK for .NET
Note

More examples are available on GitHub. Find the complete example, and learn how to set it up and run it, in the AWS Code Examples Repository.

Use Bedrock's Converse API to send a text message to Amazon Nova and process the response stream in real time.

// Use the Converse API to send a text message to Amazon Nova
// and print the response stream.

using System;
using System.Collections.Generic;
using System.Linq;
using Amazon;
using Amazon.BedrockRuntime;
using Amazon.BedrockRuntime.Model;

// Create a Bedrock Runtime client in the AWS Region you want to use.
var client = new AmazonBedrockRuntimeClient(RegionEndpoint.USEast1);

// Set the model ID, e.g., Amazon Nova Lite.
var modelId = "amazon.nova-lite-v1:0";

// Define the user message.
var userMessage = "Describe the purpose of a 'hello world' program in one line.";

// Create a request with the model ID, the user message, and an inference configuration.
var request = new ConverseStreamRequest
{
    ModelId = modelId,
    Messages = new List<Message>
    {
        new Message
        {
            Role = ConversationRole.User,
            Content = new List<ContentBlock> { new ContentBlock { Text = userMessage } }
        }
    },
    InferenceConfig = new InferenceConfiguration()
    {
        MaxTokens = 512,
        Temperature = 0.5F,
        TopP = 0.9F
    }
};

try
{
    // Send the request to the Bedrock Runtime and wait for the result.
    var response = await client.ConverseStreamAsync(request);

    // Extract and print the streamed response text in real-time.
    foreach (var chunk in response.Stream.AsEnumerable())
    {
        if (chunk is ContentBlockDeltaEvent)
        {
            Console.Write((chunk as ContentBlockDeltaEvent).Delta.Text);
        }
    }
}
catch (AmazonBedrockRuntimeException e)
{
    Console.WriteLine($"ERROR: Can't invoke '{modelId}'. Reason: {e.Message}");
    throw;
}
  • For API details, see ConverseStream in the AWS SDK for .NET API Reference.

Java
SDK for Java 2.x
Note

More examples are available on GitHub. Find the complete example, and learn how to set it up and run it, in the AWS Code Examples Repository.

Use Bedrock's Converse API to send a text message to Amazon Nova and process the response stream in real time.

import software.amazon.awssdk.auth.credentials.DefaultCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.bedrockruntime.BedrockRuntimeAsyncClient;
import software.amazon.awssdk.services.bedrockruntime.model.*;

import java.util.concurrent.ExecutionException;

/**
 * This example demonstrates how to use the Amazon Nova foundation models with an
 * asynchronous Amazon Bedrock runtime client to generate streaming text responses.
 * It shows how to:
 * - Set up the Amazon Bedrock runtime client
 * - Create a message
 * - Configure a streaming request
 * - Set up a stream handler to process the response chunks
 * - Process the streaming response
 */
public class ConverseStream {

    public static void converseStream() {

        // Step 1: Create the Amazon Bedrock runtime client
        // The runtime client handles the communication with AI models on Amazon Bedrock
        BedrockRuntimeAsyncClient client = BedrockRuntimeAsyncClient.builder()
                .credentialsProvider(DefaultCredentialsProvider.create())
                .region(Region.US_EAST_1)
                .build();

        // Step 2: Specify which model to use
        // Available Amazon Nova models and their characteristics:
        // - Amazon Nova Micro: Text-only model optimized for lowest latency and cost
        // - Amazon Nova Lite: Fast, low-cost multimodal model for image, video, and text
        // - Amazon Nova Pro: Advanced multimodal model balancing accuracy, speed, and cost
        //
        // For the latest available models, see:
        // https://docs.aws.amazon.com/bedrock/latest/userguide/models-supported.html
        String modelId = "amazon.nova-lite-v1:0";

        // Step 3: Create the message
        // The message includes the text prompt and specifies that it comes from the user
        var inputText = "Describe the purpose of a 'hello world' program in one paragraph";
        var message = Message.builder()
                .content(ContentBlock.fromText(inputText))
                .role(ConversationRole.USER)
                .build();

        // Step 4: Configure the request
        // Optional parameters to control the model's response:
        // - maxTokens: maximum number of tokens to generate
        // - temperature: randomness (max: 1.0, default: 0.7)
        //   OR
        // - topP: diversity of word choice (max: 1.0, default: 0.9)
        // Note: Use either temperature OR topP, but not both
        ConverseStreamRequest request = ConverseStreamRequest.builder()
                .modelId(modelId)
                .messages(message)
                .inferenceConfig(config -> config
                        .maxTokens(500)     // The maximum response length
                        .temperature(0.5F)  // Using temperature for randomness control
                        //.topP(0.9F)       // Alternative: use topP instead of temperature
                ).build();

        // Step 5: Set up the stream handler
        // The stream handler processes chunks of the response as they arrive
        // - onContentBlockDelta: Processes each text chunk
        // - onError: Handles any errors during streaming
        var streamHandler = ConverseStreamResponseHandler.builder()
                .subscriber(ConverseStreamResponseHandler.Visitor.builder()
                        .onContentBlockDelta(chunk -> {
                            System.out.print(chunk.delta().text());
                            System.out.flush(); // Ensure immediate output of each chunk
                        }).build())
                .onError(err -> System.err.printf("Can't invoke '%s': %s", modelId, err.getMessage()))
                .build();

        // Step 6: Send the streaming request and process the response
        // - Send the request to the model
        // - Attach the handler to process response chunks as they arrive
        // - Handle any errors during streaming
        try {
            client.converseStream(request, streamHandler).get();
        } catch (ExecutionException | InterruptedException e) {
            System.err.printf("Can't invoke '%s': %s", modelId, e.getCause().getMessage());
        }
    }

    public static void main(String[] args) {
        converseStream();
    }
}
  • For API details, see ConverseStream in the AWS SDK for Java 2.x API Reference.

JavaScript
SDK for JavaScript (v3)
Note

More examples are available on GitHub. Find the complete example, and learn how to set it up and run it, in the AWS Code Examples Repository.

Use Bedrock's Converse API to send a text message to Amazon Nova and process the response stream in real time.

// This example demonstrates how to use the Amazon Nova foundation models
// to generate streaming text responses.
// It shows how to:
// - Set up the Amazon Bedrock runtime client
// - Create a message
// - Configure a streaming request
// - Process the streaming response

import {
  BedrockRuntimeClient,
  ConversationRole,
  ConverseStreamCommand,
} from "@aws-sdk/client-bedrock-runtime";

// Step 1: Create the Amazon Bedrock runtime client
// Credentials will be automatically loaded from the environment
const client = new BedrockRuntimeClient({ region: "us-east-1" });

// Step 2: Specify which model to use
// Available Amazon Nova models and their characteristics:
// - Amazon Nova Micro: Text-only model optimized for lowest latency and cost
// - Amazon Nova Lite: Fast, low-cost multimodal model for image, video, and text
// - Amazon Nova Pro: Advanced multimodal model balancing accuracy, speed, and cost
//
// For the most current model IDs, see:
// https://docs.aws.amazon.com/bedrock/latest/userguide/models-supported.html
const modelId = "amazon.nova-lite-v1:0";

// Step 3: Create the message
// The message includes the text prompt and specifies that it comes from the user
const inputText =
  "Describe the purpose of a 'hello world' program in one paragraph";
const message = {
  content: [{ text: inputText }],
  role: ConversationRole.USER,
};

// Step 4: Configure the streaming request
// Optional parameters to control the model's response:
// - maxTokens: maximum number of tokens to generate
// - temperature: randomness (max: 1.0, default: 0.7)
//   OR
// - topP: diversity of word choice (max: 1.0, default: 0.9)
// Note: Use either temperature OR topP, but not both
const request = {
  modelId,
  messages: [message],
  inferenceConfig: {
    maxTokens: 500, // The maximum response length
    temperature: 0.5, // Using temperature for randomness control
    //topP: 0.9, // Alternative: use topP instead of temperature
  },
};

// Step 5: Send and process the streaming request
// - Send the request to the model
// - Process each chunk of the streaming response
try {
  const response = await client.send(new ConverseStreamCommand(request));
  for await (const chunk of response.stream) {
    if (chunk.contentBlockDelta) {
      // Print each text chunk as it arrives
      process.stdout.write(chunk.contentBlockDelta.delta?.text || "");
    }
  }
} catch (error) {
  console.error(`ERROR: Can't invoke '${modelId}'. Reason: ${error.message}`);
  process.exitCode = 1;
}
  • For API details, see ConverseStream in the AWS SDK for JavaScript API Reference.

Kotlin
SDK for Kotlin
Note

More examples are available on GitHub. Find the complete example, and learn how to set it up and run it, in the AWS Code Examples Repository.

Use Bedrock's Converse API to send a text message to Amazon Nova and process the response stream in real time.

import aws.sdk.kotlin.services.bedrockruntime.BedrockRuntimeClient
import aws.sdk.kotlin.services.bedrockruntime.model.ContentBlock
import aws.sdk.kotlin.services.bedrockruntime.model.ConversationRole
import aws.sdk.kotlin.services.bedrockruntime.model.ConverseStreamOutput
import aws.sdk.kotlin.services.bedrockruntime.model.ConverseStreamRequest
import aws.sdk.kotlin.services.bedrockruntime.model.Message

/**
 * This example demonstrates how to use the Amazon Nova foundation models
 * to generate streaming text responses.
 * It shows how to:
 * - Set up the Amazon Bedrock runtime client
 * - Create a message with a prompt
 * - Configure a streaming request with parameters
 * - Process the response stream in real time
 */
suspend fun main() {
    converseStream()
}

suspend fun converseStream(): String {
    // A buffer to collect the complete response
    val completeResponseBuffer = StringBuilder()

    // Create and configure the Bedrock runtime client
    BedrockRuntimeClient { region = "us-east-1" }.use { client ->

        // Specify the model ID. For the latest available models, see:
        // https://docs.aws.amazon.com/bedrock/latest/userguide/models-supported.html
        val modelId = "amazon.nova-lite-v1:0"

        // Create the message with the user's prompt
        val prompt = "Describe the purpose of a 'hello world' program in a paragraph."
        val message = Message {
            role = ConversationRole.User
            content = listOf(ContentBlock.Text(prompt))
        }

        // Configure the request with optional model parameters
        val request = ConverseStreamRequest {
            this.modelId = modelId
            messages = listOf(message)
            inferenceConfig {
                maxTokens = 500    // Maximum response length
                temperature = 0.5F // Lower values: more focused output
                // topP = 0.8F     // Alternative to temperature
            }
        }

        // Process the streaming response
        runCatching {
            client.converseStream(request) { response ->
                response.stream?.collect { chunk ->
                    when (chunk) {
                        is ConverseStreamOutput.ContentBlockDelta -> {
                            // Process each text chunk as it arrives
                            chunk.value.delta?.asText()?.let { text ->
                                print(text)
                                System.out.flush() // Ensure immediate output
                                completeResponseBuffer.append(text)
                            }
                        }
                        else -> {} // Other output block types can be handled as needed
                    }
                }
            }
        }.onFailure { error ->
            error.message?.let { e ->
                System.err.println("ERROR: Can't invoke '$modelId'. Reason: $e")
            }
            throw RuntimeException("Failed to generate text with model $modelId: $error", error)
        }
    }

    return completeResponseBuffer.toString()
}
  • For API details, see ConverseStream in the AWS SDK for Kotlin API Reference.

Python
SDK for Python (Boto3)
Note

More examples are available on GitHub. Find the complete example, and learn how to set it up and run it, in the AWS Code Examples Repository.

Use Bedrock's Converse API to send a text message to Amazon Nova and process the response stream in real time.

# Use the Converse API to send a text message to Amazon Nova
# and print the response stream.

import boto3
from botocore.exceptions import ClientError

# Create a Bedrock Runtime client in the AWS Region you want to use.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Set the model ID, e.g., Amazon Nova Lite.
model_id = "amazon.nova-lite-v1:0"

# Start a conversation with the user message.
user_message = "Describe the purpose of a 'hello world' program in one line."
conversation = [
    {
        "role": "user",
        "content": [{"text": user_message}],
    }
]

try:
    # Send the message to the model, using a basic inference configuration.
    streaming_response = client.converse_stream(
        modelId=model_id,
        messages=conversation,
        inferenceConfig={"maxTokens": 512, "temperature": 0.5, "topP": 0.9},
    )

    # Extract and print the streamed response text in real time.
    for chunk in streaming_response["stream"]:
        if "contentBlockDelta" in chunk:
            text = chunk["contentBlockDelta"]["delta"]["text"]
            print(text, end="")

except (ClientError, Exception) as e:
    print(f"ERROR: Can't invoke '{model_id}'. Reason: {e}")
    exit(1)
  • For API details, see ConverseStream in the AWS SDK for Python (Boto3) API Reference.
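The chunk-handling loop in the Python example can be exercised without AWS credentials by feeding it mock events shaped like those the `converse_stream` response stream yields (`contentBlockDelta` dicts with a nested `delta.text`). A minimal sketch; the helper name `collect_stream_text` and the mock events are illustrative, not part of Boto3:

```python
# Collect the text deltas from a Converse-style event stream into one string.
# The event shapes below mirror the Bedrock Runtime converse_stream response;
# the helper and the mock events are illustrative, not part of Boto3.

def collect_stream_text(stream):
    """Concatenate the text from every contentBlockDelta event."""
    parts = []
    for chunk in stream:
        if "contentBlockDelta" in chunk:
            parts.append(chunk["contentBlockDelta"]["delta"]["text"])
    return "".join(parts)

# Mock events in the same shape the real stream yields.
mock_stream = [
    {"messageStart": {"role": "assistant"}},
    {"contentBlockDelta": {"delta": {"text": "Hello, "}, "contentBlockIndex": 0}},
    {"contentBlockDelta": {"delta": {"text": "world!"}, "contentBlockIndex": 0}},
    {"contentBlockStop": {"contentBlockIndex": 0}},
    {"messageStop": {"stopReason": "end_turn"}},
]

print(collect_stream_text(mock_stream))  # → Hello, world!
```

Testing the delta-extraction logic this way also makes it easy to extend the handler for other event types (for example, `messageStop` carries the stop reason) before wiring it up to a live stream.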

For a complete list of AWS SDK developer guides and code examples, see Using Amazon Bedrock with an AWS SDK. That topic also covers getting started and details about previous SDK versions.
