Invoke Cohere Command on Amazon Bedrock using the Invoke Model API with a response stream

The following code examples show how to send a text message to Cohere Command using the Invoke Model API with a response stream.
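All of the examples below share the same wire format: a JSON request body in Cohere Command's native structure, and a stream of JSON chunks whose generated text is found at generations[0].text. The following minimal Python sketch summarizes that shared structure; the chunk contents shown are illustrative placeholders, not output produced by the service.

# Minimal sketch of the request/response structure used by the examples below.
# The example chunk is an illustrative placeholder, not real service output.
import json

# Native request payload for Cohere Command: a prompt plus optional inference parameters.
native_request = {
    "prompt": "Describe the purpose of a 'hello world' program in one line.",
    "max_tokens": 512,     # maximum number of tokens to generate
    "temperature": 0.5,    # sampling temperature
}
body = json.dumps(native_request)  # the request body is sent as a JSON string

# Each streamed event carries a JSON chunk; the generated text sits at generations[0].text.
example_chunk = b'{"generations": [{"text": "Prints a greeting to verify the toolchain works."}]}'
chunk = json.loads(example_chunk)
print(chunk["generations"][0]["text"])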

.NET
AWS SDK for .NET
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Use the Invoke Model API to send a text message and process the response stream in real time.

// Use the native inference API to send a text message to Cohere Command
// and print the response stream.

using System;
using System.IO;
using System.Text.Json;
using System.Text.Json.Nodes;
using Amazon;
using Amazon.BedrockRuntime;
using Amazon.BedrockRuntime.Model;

// Create a Bedrock Runtime client in the AWS Region you want to use.
var client = new AmazonBedrockRuntimeClient(RegionEndpoint.USEast1);

// Set the model ID, e.g., Command Light.
var modelId = "cohere.command-light-text-v14";

// Define the user message.
var userMessage = "Describe the purpose of a 'hello world' program in one line.";

// Format the request payload using the model's native structure.
var nativeRequest = JsonSerializer.Serialize(new
{
    prompt = userMessage,
    max_tokens = 512,
    temperature = 0.5
});

// Create a request with the model ID and the model's native request payload.
var request = new InvokeModelWithResponseStreamRequest()
{
    ModelId = modelId,
    Body = new MemoryStream(System.Text.Encoding.UTF8.GetBytes(nativeRequest)),
    ContentType = "application/json"
};

try
{
    // Send the request to the Bedrock Runtime and wait for the response.
    var streamingResponse = await client.InvokeModelWithResponseStreamAsync(request);

    // Extract and print the streamed response text in real-time.
    foreach (var item in streamingResponse.Body)
    {
        var chunk = JsonSerializer.Deserialize<JsonObject>((item as PayloadPart).Bytes);
        var text = chunk["generations"]?[0]?["text"] ?? "";
        Console.Write(text);
    }
}
catch (AmazonBedrockRuntimeException e)
{
    Console.WriteLine($"ERROR: Can't invoke '{modelId}'. Reason: {e.Message}");
    throw;
}
  • For API details, see InvokeModelWithResponseStream in the AWS SDK for .NET API Reference.

Java
SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Use the Invoke Model API to send a text message and process the response stream in real time.

// Use the native inference API to send a text message to Cohere Command
// and print the response stream.

import org.json.JSONObject;
import org.json.JSONPointer;
import software.amazon.awssdk.auth.credentials.DefaultCredentialsProvider;
import software.amazon.awssdk.core.SdkBytes;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.bedrockruntime.BedrockRuntimeAsyncClient;
import software.amazon.awssdk.services.bedrockruntime.model.InvokeModelWithResponseStreamRequest;
import software.amazon.awssdk.services.bedrockruntime.model.InvokeModelWithResponseStreamResponseHandler;

import java.util.concurrent.ExecutionException;

import static software.amazon.awssdk.services.bedrockruntime.model.InvokeModelWithResponseStreamResponseHandler.Visitor;

public class Command_InvokeModelWithResponseStream {

    public static String invokeModelWithResponseStream() throws ExecutionException, InterruptedException {

        // Create a Bedrock Runtime client in the AWS Region you want to use.
        // Replace the DefaultCredentialsProvider with your preferred credentials provider.
        var client = BedrockRuntimeAsyncClient.builder()
                .credentialsProvider(DefaultCredentialsProvider.create())
                .region(Region.US_EAST_1)
                .build();

        // Set the model ID, e.g., Command Light.
        var modelId = "cohere.command-light-text-v14";

        // The InvokeModelWithResponseStream API uses the model's native payload.
        // Learn more about the available inference parameters and response fields at:
        // https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-cohere-command.html
        var nativeRequestTemplate = "{ \"prompt\": \"{{prompt}}\" }";

        // Define the prompt for the model.
        var prompt = "Describe the purpose of a 'hello world' program in one line.";

        // Embed the prompt in the model's native request payload.
        String nativeRequest = nativeRequestTemplate.replace("{{prompt}}", prompt);

        // Create a request with the model ID and the model's native request payload.
        var request = InvokeModelWithResponseStreamRequest.builder()
                .body(SdkBytes.fromUtf8String(nativeRequest))
                .modelId(modelId)
                .build();

        // Prepare a buffer to accumulate the generated response text.
        var completeResponseTextBuffer = new StringBuilder();

        // Prepare a handler to extract, accumulate, and print the response text in real-time.
        var responseStreamHandler = InvokeModelWithResponseStreamResponseHandler.builder()
                .subscriber(Visitor.builder().onChunk(chunk -> {
                    // Extract and print the text from the model's native response.
                    var response = new JSONObject(chunk.bytes().asUtf8String());
                    var text = new JSONPointer("/generations/0/text").queryFrom(response);
                    System.out.print(text);

                    // Append the text to the response text buffer.
                    completeResponseTextBuffer.append(text);
                }).build()).build();

        try {
            // Send the request and wait for the handler to process the response.
            client.invokeModelWithResponseStream(request, responseStreamHandler).get();

            // Return the complete response text.
            return completeResponseTextBuffer.toString();

        } catch (ExecutionException | InterruptedException e) {
            System.err.printf("Can't invoke '%s': %s", modelId, e.getCause().getMessage());
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) throws ExecutionException, InterruptedException {
        invokeModelWithResponseStream();
    }
}
  • For API details, see InvokeModelWithResponseStream in the AWS SDK for Java 2.x API Reference.

Python
SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Use the Invoke Model API to send a text message and process the response stream in real time.

# Use the native inference API to send a text message to Cohere Command
# and print the response stream.

import boto3
import json

from botocore.exceptions import ClientError

# Create a Bedrock Runtime client in the AWS Region of your choice.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Set the model ID, e.g., Command Light.
model_id = "cohere.command-light-text-v14"

# Define the prompt for the model.
prompt = "Describe the purpose of a 'hello world' program in one line."

# Format the request payload using the model's native structure.
native_request = {
    "prompt": prompt,
    "max_tokens": 512,
    "temperature": 0.5,
}

# Convert the native request to JSON.
request = json.dumps(native_request)

try:
    # Invoke the model with the request.
    streaming_response = client.invoke_model_with_response_stream(
        modelId=model_id, body=request
    )

    # Extract and print the response text in real-time.
    for event in streaming_response["body"]:
        chunk = json.loads(event["chunk"]["bytes"])
        if "generations" in chunk:
            print(chunk["generations"][0]["text"], end="")

except (ClientError, Exception) as e:
    print(f"ERROR: Can't invoke '{model_id}'. Reason: {e}")
    exit(1)
  • For API details, see InvokeModelWithResponseStream in the AWS SDK for Python (Boto3) API Reference.

For a complete list of AWS SDK developer guides and code examples, see Using Amazon Bedrock with an AWS SDK. This topic also includes information about getting started and details about previous SDK versions.