Invoke Amazon Titan Text models on Amazon Bedrock using the Invoke Model API - Amazon Bedrock

This page was machine translated. If this translation conflicts with the original English content, the English version prevails.


The following code examples show how to use the Invoke Model API to send a text message to Amazon Titan Text models.

.NET
AWS SDK for .NET
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Use the Invoke Model API to send a text message.

/// <summary>
/// Asynchronously invokes the Amazon Titan Text G1 Express model to run an inference based on the provided input.
/// </summary>
/// <param name="prompt">The prompt that you want Amazon Titan Text G1 Express to complete.</param>
/// <returns>The inference response from the model</returns>
/// <remarks>
/// The different model providers have individual request and response formats.
/// For the format, ranges, and default values for Amazon Titan Text G1 Express, refer to:
/// https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-titan-text.html
/// </remarks>
public static async Task<string> InvokeTitanTextG1Async(string prompt)
{
    string titanTextG1ModelId = "amazon.titan-text-express-v1";

    AmazonBedrockRuntimeClient client = new(RegionEndpoint.USEast1);

    string payload = new JsonObject()
    {
        { "inputText", prompt },
        { "textGenerationConfig", new JsonObject()
            {
                { "maxTokenCount", 512 },
                { "temperature", 0f },
                { "topP", 1f }
            }
        }
    }.ToJsonString();

    string generatedText = "";
    try
    {
        InvokeModelResponse response = await client.InvokeModelAsync(new InvokeModelRequest()
        {
            ModelId = titanTextG1ModelId,
            Body = AWSSDKUtils.GenerateMemoryStreamFromString(payload),
            ContentType = "application/json",
            Accept = "application/json"
        });

        if (response.HttpStatusCode == System.Net.HttpStatusCode.OK)
        {
            var results = JsonNode.ParseAsync(response.Body).Result?["results"]?.AsArray();
            return results is null ? "" : string.Join(" ", results.Select(x => x?["outputText"]?.GetValue<string?>()));
        }
        else
        {
            Console.WriteLine("InvokeModelAsync failed with status code " + response.HttpStatusCode);
        }
    }
    catch (AmazonBedrockRuntimeException e)
    {
        Console.WriteLine(e.Message);
    }

    return generatedText;
}
  • For API details, see InvokeModel in the AWS SDK for .NET API Reference.

Go
SDK for Go V2
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Use the Invoke Model API to send a text message.

// Each model provider has their own individual request and response formats.
// For the format, ranges, and default values for Amazon Titan Text, refer to:
// https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-titan-text.html

type TitanTextRequest struct {
	InputText            string               `json:"inputText"`
	TextGenerationConfig TextGenerationConfig `json:"textGenerationConfig"`
}

type TextGenerationConfig struct {
	Temperature   float64  `json:"temperature"`
	TopP          float64  `json:"topP"`
	MaxTokenCount int      `json:"maxTokenCount"`
	StopSequences []string `json:"stopSequences,omitempty"`
}

type TitanTextResponse struct {
	InputTextTokenCount int      `json:"inputTextTokenCount"`
	Results             []Result `json:"results"`
}

type Result struct {
	TokenCount       int    `json:"tokenCount"`
	OutputText       string `json:"outputText"`
	CompletionReason string `json:"completionReason"`
}

func (wrapper InvokeModelWrapper) InvokeTitanText(prompt string) (string, error) {
	modelId := "amazon.titan-text-express-v1"

	body, err := json.Marshal(TitanTextRequest{
		InputText: prompt,
		TextGenerationConfig: TextGenerationConfig{
			Temperature:   0,
			TopP:          1,
			MaxTokenCount: 4096,
		},
	})

	if err != nil {
		log.Fatal("failed to marshal", err)
	}

	output, err := wrapper.BedrockRuntimeClient.InvokeModel(context.Background(), &bedrockruntime.InvokeModelInput{
		ModelId:     aws.String(modelId),
		ContentType: aws.String("application/json"),
		Body:        body,
	})

	if err != nil {
		ProcessError(err, modelId)
	}

	var response TitanTextResponse
	if err := json.Unmarshal(output.Body, &response); err != nil {
		log.Fatal("failed to unmarshal", err)
	}

	return response.Results[0].OutputText, nil
}
  • For API details, see InvokeModel in the AWS SDK for Go API Reference.

Java
SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Send your first prompt to Amazon Titan Text.

// Send a prompt to Amazon Titan Text and print the response.
public class TextQuickstart {

    public static void main(String[] args) {

        // Create a Bedrock Runtime client in the AWS Region of your choice.
        var client = BedrockRuntimeClient.builder()
                .region(Region.US_EAST_1)
                .build();

        // You can replace the modelId with any other Titan Text Model. All current model IDs
        // are documented at https://docs.aws.amazon.com/bedrock/latest/userguide/model-ids.html
        var modelId = "amazon.titan-text-premier-v1:0";

        // Define the prompt to send.
        var prompt = "Describe the purpose of a 'hello world' program in one line.";

        // Create a JSON payload using the model's native structure.
        var nativeRequest = new JSONObject().put("inputText", prompt);

        // Encode and send the request.
        var response = client.invokeModel(req -> req
                .body(SdkBytes.fromUtf8String(nativeRequest.toString()))
                .modelId(modelId));

        // Decode the response body.
        var responseBody = new JSONObject(response.body().asUtf8String());

        // Extract and print the response text.
        var responseText = responseBody.getJSONArray("results").getJSONObject(0).getString("outputText");
        System.out.println(responseText);
    }
}

Invoke Titan Text with a system prompt and additional inference parameters.

/**
 * Invoke Titan Text with a system prompt and additional inference parameters,
 * using Titan's native request/response structure.
 *
 * @param userPrompt - The text prompt to send to the model.
 * @param systemPrompt - A system prompt to provide additional context and instructions.
 * @return The {@link JSONObject} representing the model's response.
 */
public static JSONObject invokeWithSystemPrompt(String userPrompt, String systemPrompt) {

    // Create a Bedrock Runtime client in the AWS Region of your choice.
    var client = BedrockRuntimeClient.builder()
            .region(Region.US_EAST_1)
            .build();

    // Set the model ID, e.g., Titan Text Premier.
    var modelId = "amazon.titan-text-premier-v1:0";

    /* Assemble the input text.
     * For best results, use the following input text format:
     *     {{ system instruction }}
     *     User: {{ user input }}
     *     Bot:
     */
    var inputText = """
            %s
            User: %s
            Bot:
            """.formatted(systemPrompt, userPrompt);

    // Format the request payload using the model's native structure.
    var nativeRequest = new JSONObject()
            .put("inputText", inputText)
            .put("textGenerationConfig", new JSONObject()
                    .put("maxTokenCount", 512)
                    .put("temperature", 0.7F)
                    .put("topP", 0.9F)
            )
            .toString();

    // Encode and send the request.
    var response = client.invokeModel(request -> {
        request.body(SdkBytes.fromUtf8String(nativeRequest));
        request.modelId(modelId);
    });

    // Decode the native response body.
    var nativeResponse = new JSONObject(response.body().asUtf8String());

    // Extract and print the response text.
    var responseText = nativeResponse.getJSONArray("results").getJSONObject(0).getString("outputText");
    System.out.println(responseText);

    // Return the model's native response.
    return nativeResponse;
}

Create a chat-like experience with Titan Text, using a conversation history.

/**
 * Create a chat-like experience with a conversation history, using Titan's native
 * request/response structure.
 *
 * @param prompt - The text prompt to send to the model.
 * @param conversation - A String representing previous conversational turns in the format
 *                       User: {{ previous user prompt }}
 *                       Bot: {{ previous model response }}
 *                       ...
 * @return The {@link JSONObject} representing the model's response.
 */
public static JSONObject invokeWithConversation(String prompt, String conversation) {

    // Create a Bedrock Runtime client in the AWS Region of your choice.
    var client = BedrockRuntimeClient.builder()
            .region(Region.US_EAST_1)
            .build();

    // Set the model ID, e.g., Titan Text Premier.
    var modelId = "amazon.titan-text-premier-v1:0";

    /* Append the new prompt to the conversation.
     * For best results, use the following text format:
     *     User: {{ previous user prompt }}
     *     Bot: {{ previous model response }}
     *     User: {{ new user prompt }}
     *     Bot:
     */
    conversation = conversation + """
            %nUser: %s
            Bot:
            """.formatted(prompt);

    // Format the request payload using the model's native structure.
    var nativeRequest = new JSONObject().put("inputText", conversation);

    // Encode and send the request.
    var response = client.invokeModel(request -> {
        request.body(SdkBytes.fromUtf8String(nativeRequest.toString()));
        request.modelId(modelId);
    });

    // Decode the native response body.
    var nativeResponse = new JSONObject(response.body().asUtf8String());

    // Extract and print the response text.
    var responseText = nativeResponse.getJSONArray("results").getJSONObject(0).getString("outputText");
    System.out.println(responseText);

    // Return the model's native response.
    return nativeResponse;
}
  • For API details, see InvokeModel in the AWS SDK for Java 2.x API Reference.
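The "User:"/"Bot:" plain-text format used above for system prompts and conversation history is independent of any particular SDK. As a minimal sketch of that formatting in Python (the helpers `build_input_text` and `append_turn` are hypothetical names for illustration, not part of any AWS SDK):

```python
def build_input_text(user_prompt, system_prompt="", conversation=""):
    """Assemble the raw inputText for a Titan Text request.

    conversation holds previous turns, each in the format:
        User: {{ previous user prompt }}
        Bot: {{ previous model response }}
    """
    parts = []
    if system_prompt:
        parts.append(system_prompt)
    if conversation:
        parts.append(conversation)
    # End with an open "Bot:" line so the model continues as the assistant.
    parts.append(f"User: {user_prompt}\nBot:")
    return "\n".join(parts)


def append_turn(conversation, user_prompt, model_reply):
    """Record a completed turn so it can be replayed on the next request."""
    return conversation + f"User: {user_prompt}\nBot: {model_reply}\n"


# Example: carry one completed turn into the next request's inputText.
history = append_turn("", "Hello!", "Hi, how can I help?")
print(build_input_text("What is Amazon Bedrock?", conversation=history))
```

The resulting string would go into the `inputText` field of the request payload, exactly as in the examples on this page.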

JavaScript
SDK for JavaScript (v3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Use the Invoke Model API to send a text message.

// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
import { fileURLToPath } from "url";
import { FoundationModels } from "../../config/foundation_models.js";
import {
  BedrockRuntimeClient,
  InvokeModelCommand,
} from "@aws-sdk/client-bedrock-runtime";

/**
 * @typedef {Object} ResponseBody
 * @property {Object[]} results
 */

/**
 * Invokes an Amazon Titan Text generation model.
 *
 * @param {string} prompt - The input text prompt for the model to complete.
 * @param {string} [modelId] - The ID of the model to use. Defaults to "amazon.titan-text-express-v1".
 */
export const invokeModel = async (
  prompt,
  modelId = "amazon.titan-text-express-v1",
) => {
  // Create a new Bedrock Runtime client instance.
  const client = new BedrockRuntimeClient({ region: "us-east-1" });

  // Prepare the payload for the model.
  const payload = {
    inputText: prompt,
    textGenerationConfig: {
      maxTokenCount: 4096,
      stopSequences: [],
      temperature: 0,
      topP: 1,
    },
  };

  // Invoke the model with the payload and wait for the response.
  const command = new InvokeModelCommand({
    contentType: "application/json",
    body: JSON.stringify(payload),
    modelId,
  });
  const apiResponse = await client.send(command);

  // Decode and return the response.
  const decodedResponseBody = new TextDecoder().decode(apiResponse.body);
  /** @type {ResponseBody} */
  const responseBody = JSON.parse(decodedResponseBody);
  return responseBody.results[0].outputText;
};

// Invoke the function if this file was run directly.
if (process.argv[1] === fileURLToPath(import.meta.url)) {
  const prompt =
    'Complete the following in one sentence: "Once upon a time..."';
  const modelId = FoundationModels.TITAN_TEXT_G1_EXPRESS.modelId;
  console.log(`Prompt: ${prompt}`);
  console.log(`Model ID: ${modelId}`);
  try {
    console.log("-".repeat(53));
    const response = await invokeModel(prompt, modelId);
    console.log(response);
  } catch (err) {
    console.log(err);
  }
}
  • For API details, see InvokeModel in the AWS SDK for JavaScript API Reference.

Python
SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Use the Invoke Model API to send a text message.

# Use the native inference API to send a text message to Amazon Titan Text.

import boto3
import json

# Create a Bedrock Runtime client in the AWS Region of your choice.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Set the model ID, e.g., Titan Text Premier.
model_id = "amazon.titan-text-premier-v1:0"

# Define the prompt for the model.
prompt = "Describe the purpose of a 'hello world' program in one line."

# Format the request payload using the model's native structure.
native_request = {
    "inputText": prompt,
    "textGenerationConfig": {
        "maxTokenCount": 512,
        "temperature": 0.5,
    },
}

# Convert the native request to JSON.
request = json.dumps(native_request)

# Invoke the model with the request.
response = client.invoke_model(modelId=model_id, body=request)

# Decode the response body.
model_response = json.loads(response["body"].read())

# Extract and print the response text.
response_text = model_response["results"][0]["outputText"]
print(response_text)
  • For API details, see InvokeModel in the AWS SDK for Python (Boto3) API Reference.
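Whichever SDK you use, the model returns the same native response body: besides `outputText`, it reports token usage and a completion reason (the fields that the Go types on this page map). The following sketch parses a hard-coded sample response rather than making a real API call, so the values shown are illustrative only:

```python
import json

# A hypothetical response body, shaped like Titan Text's native structure:
# inputTextTokenCount plus a list of results, each with tokenCount,
# outputText, and completionReason.
sample_body = json.dumps({
    "inputTextTokenCount": 12,
    "results": [
        {
            "tokenCount": 15,
            "outputText": "A 'hello world' program verifies that the toolchain works.",
            "completionReason": "FINISH",
        }
    ],
})

# In a real call this would be json.loads(response["body"].read()).
model_response = json.loads(sample_body)

# Extract the generated text along with the token accounting.
result = model_response["results"][0]
print(result["outputText"])
print("input tokens: ", model_response["inputTextTokenCount"])
print("output tokens:", result["tokenCount"])
print("stop reason:  ", result["completionReason"])
```

Checking `completionReason` is a simple way to detect truncated generations, e.g. when `maxTokenCount` was reached before the model finished.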

For a complete list of AWS SDK developer guides and code examples, see Using this service with an AWS SDK. This topic also includes information about getting started and details about previous SDK versions.