
Invoke Amazon Titan Text models on Amazon Bedrock using the Invoke Model API

The following code examples show how to send a text message to Amazon Titan Text models using the Invoke Model API.
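Every SDK example below builds the same model-native JSON request body and reads the same response structure. As a minimal, illustrative sketch (field names follow the examples in this topic; the prompt, parameter values, and response values are placeholders), that shape looks like this in Python:

# Illustrative only: the Titan Text request/response shape used by the examples below.
import json

# Request body: a prompt plus optional inference parameters.
request_body = json.dumps({
    "inputText": "Your prompt here",
    "textGenerationConfig": {
        "maxTokenCount": 512,
        "temperature": 0.5,
        "topP": 0.9,
        "stopSequences": [],
    },
})

# Response body: token counts plus one or more results; the examples read results[0].outputText.
example_response = json.loads(
    '{"inputTextTokenCount": 5,'
    ' "results": [{"tokenCount": 20, "outputText": "...", "completionReason": "FINISH"}]}'
)
print(example_response["results"][0]["outputText"])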

.NET
AWS SDK for .NET
Note

There's more on GitHub. Find the complete example and learn how to set up and run it in the AWS Code Examples Repository.

Use the Invoke Model API to send a text message.

/// <summary>
/// Asynchronously invokes the Amazon Titan Text G1 Express model to run an inference based on the provided input.
/// </summary>
/// <param name="prompt">The prompt that you want Amazon Titan Text G1 Express to complete.</param>
/// <returns>The inference response from the model</returns>
/// <remarks>
/// The different model providers have individual request and response formats.
/// For the format, ranges, and default values for Amazon Titan Text G1 Express, refer to:
///     https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-titan-text.html
/// </remarks>
public static async Task<string> InvokeTitanTextG1Async(string prompt)
{
    string titanTextG1ModelId = "amazon.titan-text-express-v1";

    AmazonBedrockRuntimeClient client = new(RegionEndpoint.USEast1);

    string payload = new JsonObject()
    {
        { "inputText", prompt },
        { "textGenerationConfig", new JsonObject()
            {
                { "maxTokenCount", 512 },
                { "temperature", 0f },
                { "topP", 1f }
            }
        }
    }.ToJsonString();

    string generatedText = "";
    try
    {
        InvokeModelResponse response = await client.InvokeModelAsync(new InvokeModelRequest()
        {
            ModelId = titanTextG1ModelId,
            Body = AWSSDKUtils.GenerateMemoryStreamFromString(payload),
            ContentType = "application/json",
            Accept = "application/json"
        });

        if (response.HttpStatusCode == System.Net.HttpStatusCode.OK)
        {
            var results = JsonNode.ParseAsync(response.Body).Result?["results"]?.AsArray();
            return results is null ? "" : string.Join(" ", results.Select(x => x?["outputText"]?.GetValue<string?>()));
        }
        else
        {
            Console.WriteLine("InvokeModelAsync failed with status code " + response.HttpStatusCode);
        }
    }
    catch (AmazonBedrockRuntimeException e)
    {
        Console.WriteLine(e.Message);
    }
    return generatedText;
}
  • For API details, see InvokeModel in the AWS SDK for .NET API Reference.

Go
SDK for Go V2
Note

There's more on GitHub. Find the complete example and learn how to set up and run it in the AWS Code Examples Repository.

Use the Invoke Model API to send a text message.

// Each model provider has their own individual request and response formats.
// For the format, ranges, and default values for Amazon Titan Text, refer to:
// https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-titan-text.html

type TitanTextRequest struct {
    InputText            string               `json:"inputText"`
    TextGenerationConfig TextGenerationConfig `json:"textGenerationConfig"`
}

type TextGenerationConfig struct {
    Temperature   float64  `json:"temperature"`
    TopP          float64  `json:"topP"`
    MaxTokenCount int      `json:"maxTokenCount"`
    StopSequences []string `json:"stopSequences,omitempty"`
}

type TitanTextResponse struct {
    InputTextTokenCount int      `json:"inputTextTokenCount"`
    Results             []Result `json:"results"`
}

type Result struct {
    TokenCount       int    `json:"tokenCount"`
    OutputText       string `json:"outputText"`
    CompletionReason string `json:"completionReason"`
}

func (wrapper InvokeModelWrapper) InvokeTitanText(prompt string) (string, error) {
    modelId := "amazon.titan-text-express-v1"

    body, err := json.Marshal(TitanTextRequest{
        InputText: prompt,
        TextGenerationConfig: TextGenerationConfig{
            Temperature:   0,
            TopP:          1,
            MaxTokenCount: 4096,
        },
    })
    if err != nil {
        log.Fatal("failed to marshal", err)
    }

    output, err := wrapper.BedrockRuntimeClient.InvokeModel(context.Background(), &bedrockruntime.InvokeModelInput{
        ModelId:     aws.String(modelId),
        ContentType: aws.String("application/json"),
        Body:        body,
    })
    if err != nil {
        ProcessError(err, modelId)
    }

    var response TitanTextResponse
    if err := json.Unmarshal(output.Body, &response); err != nil {
        log.Fatal("failed to unmarshal", err)
    }

    return response.Results[0].OutputText, nil
}
  • For API details, see InvokeModel in the AWS SDK for Go API Reference.

Java
SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run it in the AWS Code Examples Repository.

Send your first prompt to Amazon Titan Text.

// Send a prompt to Amazon Titan Text and print the response.
public class TextQuickstart {

    public static void main(String[] args) {

        // Create a Bedrock Runtime client in the AWS Region of your choice.
        var client = BedrockRuntimeClient.builder()
                .region(Region.US_EAST_1)
                .build();

        // You can replace the modelId with any other Titan Text Model. All current model IDs
        // are documented at https://docs.aws.amazon.com/bedrock/latest/userguide/model-ids.html
        var modelId = "amazon.titan-text-premier-v1:0";

        // Define the prompt to send.
        var prompt = "Describe the purpose of a 'hello world' program in one line.";

        // Create a JSON payload using the model's native structure.
        var nativeRequest = new JSONObject().put("inputText", prompt);

        // Encode and send the request.
        var response = client.invokeModel(req -> req
                .body(SdkBytes.fromUtf8String(nativeRequest.toString()))
                .modelId(modelId));

        // Decode the response body.
        var responseBody = new JSONObject(response.body().asUtf8String());

        // Extract and print the response text.
        var responseText = responseBody.getJSONArray("results").getJSONObject(0).getString("outputText");
        System.out.println(responseText);
    }
}

Invoke Titan Text with a system prompt and additional inference parameters.

/**
 * Invoke Titan Text with a system prompt and additional inference parameters,
 * using Titan's native request/response structure.
 *
 * @param userPrompt   - The text prompt to send to the model.
 * @param systemPrompt - A system prompt to provide additional context and instructions.
 * @return The {@link JSONObject} representing the model's response.
 */
public static JSONObject invokeWithSystemPrompt(String userPrompt, String systemPrompt) {

    // Create a Bedrock Runtime client in the AWS Region of your choice.
    var client = BedrockRuntimeClient.builder()
            .region(Region.US_EAST_1)
            .build();

    // Set the model ID, e.g., Titan Text Premier.
    var modelId = "amazon.titan-text-premier-v1:0";

    /* Assemble the input text.
     * For best results, use the following input text format:
     *     {{ system instruction }}
     *     User: {{ user input }}
     *     Bot:
     */
    var inputText = """
            %s
            User: %s
            Bot:
            """.formatted(systemPrompt, userPrompt);

    // Format the request payload using the model's native structure.
    var nativeRequest = new JSONObject()
            .put("inputText", inputText)
            .put("textGenerationConfig", new JSONObject()
                    .put("maxTokenCount", 512)
                    .put("temperature", 0.7F)
                    .put("topP", 0.9F)
            ).toString();

    // Encode and send the request.
    var response = client.invokeModel(request -> {
        request.body(SdkBytes.fromUtf8String(nativeRequest));
        request.modelId(modelId);
    });

    // Decode the native response body.
    var nativeResponse = new JSONObject(response.body().asUtf8String());

    // Extract and print the response text.
    var responseText = nativeResponse.getJSONArray("results").getJSONObject(0).getString("outputText");
    System.out.println(responseText);

    // Return the model's native response.
    return nativeResponse;
}

Create a chat-like experience with Titan Text by using a conversation history.

/**
 * Create a chat-like experience with a conversation history, using Titan's native
 * request/response structure.
 *
 * @param prompt       - The text prompt to send to the model.
 * @param conversation - A String representing previous conversational turns in the format
 *                     User: {{ previous user prompt}}
 *                     Bot: {{ previous model response }}
 *                     ...
 * @return The {@link JSONObject} representing the model's response.
 */
public static JSONObject invokeWithConversation(String prompt, String conversation) {

    // Create a Bedrock Runtime client in the AWS Region of your choice.
    var client = BedrockRuntimeClient.builder()
            .region(Region.US_EAST_1)
            .build();

    // Set the model ID, e.g., Titan Text Premier.
    var modelId = "amazon.titan-text-premier-v1:0";

    /* Append the new prompt to the conversation.
     * For best results, use the following text format:
     *     User: {{ previous user prompt}}
     *     Bot: {{ previous model response }}
     *     User: {{ new user prompt }}
     *     Bot: """
     */
    conversation = conversation + """
            %nUser: %s
            Bot:
            """.formatted(prompt);

    // Format the request payload using the model's native structure.
    var nativeRequest = new JSONObject().put("inputText", conversation);

    // Encode and send the request.
    var response = client.invokeModel(request -> {
        request.body(SdkBytes.fromUtf8String(nativeRequest.toString()));
        request.modelId(modelId);
    });

    // Decode the native response body.
    var nativeResponse = new JSONObject(response.body().asUtf8String());

    // Extract and print the response text.
    var responseText = nativeResponse.getJSONArray("results").getJSONObject(0).getString("outputText");
    System.out.println(responseText);

    // Return the model's native response.
    return nativeResponse;
}
  • For API details, see InvokeModel in the AWS SDK for Java 2.x API Reference.

JavaScript
SDK for JavaScript (v3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run it in the AWS Code Examples Repository.

Use the Invoke Model API to send a text message.

// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
import { fileURLToPath } from "url";

import { FoundationModels } from "../../config/foundation_models.js";
import {
  BedrockRuntimeClient,
  InvokeModelCommand,
} from "@aws-sdk/client-bedrock-runtime";

/**
 * @typedef {Object} ResponseBody
 * @property {Object[]} results
 */

/**
 * Invokes an Amazon Titan Text generation model.
 *
 * @param {string} prompt - The input text prompt for the model to complete.
 * @param {string} [modelId] - The ID of the model to use. Defaults to "amazon.titan-text-express-v1".
 */
export const invokeModel = async (
  prompt,
  modelId = "amazon.titan-text-express-v1",
) => {
  // Create a new Bedrock Runtime client instance.
  const client = new BedrockRuntimeClient({ region: "us-east-1" });

  // Prepare the payload for the model.
  const payload = {
    inputText: prompt,
    textGenerationConfig: {
      maxTokenCount: 4096,
      stopSequences: [],
      temperature: 0,
      topP: 1,
    },
  };

  // Invoke the model with the payload and wait for the response.
  const command = new InvokeModelCommand({
    contentType: "application/json",
    body: JSON.stringify(payload),
    modelId,
  });
  const apiResponse = await client.send(command);

  // Decode and return the response.
  const decodedResponseBody = new TextDecoder().decode(apiResponse.body);
  /** @type {ResponseBody} */
  const responseBody = JSON.parse(decodedResponseBody);
  return responseBody.results[0].outputText;
};

// Invoke the function if this file was run directly.
if (process.argv[1] === fileURLToPath(import.meta.url)) {
  const prompt =
    'Complete the following in one sentence: "Once upon a time..."';
  const modelId = FoundationModels.TITAN_TEXT_G1_EXPRESS.modelId;
  console.log(`Prompt: ${prompt}`);
  console.log(`Model ID: ${modelId}`);
  try {
    console.log("-".repeat(53));
    const response = await invokeModel(prompt, modelId);
    console.log(response);
  } catch (err) {
    console.log(err);
  }
}
  • For API details, see InvokeModel in the AWS SDK for JavaScript API Reference.

Python
SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run it in the AWS Code Examples Repository.

Use the Invoke Model API to send a text message.

# Use the native inference API to send a text message to Amazon Titan Text.

import boto3
import json

# Create a Bedrock Runtime client in the AWS Region of your choice.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Set the model ID, e.g., Titan Text Premier.
model_id = "amazon.titan-text-premier-v1:0"

# Define the prompt for the model.
prompt = "Describe the purpose of a 'hello world' program in one line."

# Format the request payload using the model's native structure.
native_request = {
    "inputText": prompt,
    "textGenerationConfig": {
        "maxTokenCount": 512,
        "temperature": 0.5,
    },
}

# Convert the native request to JSON.
request = json.dumps(native_request)

# Invoke the model with the request.
response = client.invoke_model(modelId=model_id, body=request)

# Decode the response body.
model_response = json.loads(response["body"].read())

# Extract and print the response text.
response_text = model_response["results"][0]["outputText"]
print(response_text)
  • For API details, see InvokeModel in the AWS SDK for Python (Boto3) API Reference.
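The Python example above omits error handling for brevity. As a minimal sketch (mirroring the example above and assuming the same Region and model ID), you can catch botocore's ClientError, which invoke_model raises for problems such as access denial, validation errors, or throttling:

# A minimal error-handling sketch around invoke_model (illustrative; mirrors the example above).
import json

import boto3
from botocore.exceptions import ClientError

client = boto3.client("bedrock-runtime", region_name="us-east-1")
model_id = "amazon.titan-text-premier-v1:0"

request = json.dumps({"inputText": "Describe the purpose of a 'hello world' program in one line."})

try:
    response = client.invoke_model(modelId=model_id, body=request)
    model_response = json.loads(response["body"].read())
    print(model_response["results"][0]["outputText"])
except ClientError as err:
    # The error code (e.g., AccessDeniedException or ThrottlingException) identifies the cause.
    print(f"Couldn't invoke {model_id}: {err.response['Error']['Code']} - {err}")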

For a complete list of AWS SDK developer guides and code examples, see Using this service with an AWS SDK. This topic also includes information about getting started and details about previous SDK versions.