
Invoke Anthropic Claude on Amazon Bedrock using the Invoke Model API

The following code examples show how to send a text message to Anthropic Claude, using the Invoke Model API.

.NET
AWS SDK for .NET
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Use the Invoke Model API to send a text message.

// Use the native inference API to send a text message to Anthropic Claude.

using System;
using System.IO;
using System.Text.Json;
using System.Text.Json.Nodes;
using Amazon;
using Amazon.BedrockRuntime;
using Amazon.BedrockRuntime.Model;

// Create a Bedrock Runtime client in the AWS Region you want to use.
var client = new AmazonBedrockRuntimeClient(RegionEndpoint.USEast1);

// Set the model ID, e.g., Claude 3 Haiku.
var modelId = "anthropic.claude-3-haiku-20240307-v1:0";

// Define the user message.
var userMessage = "Describe the purpose of a 'hello world' program in one line.";

// Format the request payload using the model's native structure.
var nativeRequest = JsonSerializer.Serialize(new
{
    anthropic_version = "bedrock-2023-05-31",
    max_tokens = 512,
    temperature = 0.5,
    messages = new[]
    {
        new { role = "user", content = userMessage }
    }
});

// Create a request with the model ID and the model's native request payload.
var request = new InvokeModelRequest()
{
    ModelId = modelId,
    Body = new MemoryStream(System.Text.Encoding.UTF8.GetBytes(nativeRequest)),
    ContentType = "application/json"
};

try
{
    // Send the request to the Bedrock Runtime and wait for the response.
    var response = await client.InvokeModelAsync(request);

    // Decode the response body.
    var modelResponse = await JsonNode.ParseAsync(response.Body);

    // Extract and print the response text.
    var responseText = modelResponse["content"]?[0]?["text"] ?? "";
    Console.WriteLine(responseText);
}
catch (AmazonBedrockRuntimeException e)
{
    Console.WriteLine($"ERROR: Can't invoke '{modelId}'. Reason: {e.Message}");
    throw;
}
  • For API details, see InvokeModel in the AWS SDK for .NET API Reference.

Go
SDK for Go V2
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Use the Anthropic Claude 2 foundation model to generate text.

// Each model provider has their own individual request and response formats.
// For the format, ranges, and default values for Anthropic Claude, refer to:
// https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-claude.html

type ClaudeRequest struct {
    Prompt            string   `json:"prompt"`
    MaxTokensToSample int      `json:"max_tokens_to_sample"`
    Temperature       float64  `json:"temperature,omitempty"`
    StopSequences     []string `json:"stop_sequences,omitempty"`
}

type ClaudeResponse struct {
    Completion string `json:"completion"`
}

// Invokes Anthropic Claude on Amazon Bedrock to run an inference using the input
// provided in the request body.
func (wrapper InvokeModelWrapper) InvokeClaude(prompt string) (string, error) {
    modelId := "anthropic.claude-v2"

    // Anthropic Claude requires enclosing the prompt as follows:
    enclosedPrompt := "Human: " + prompt + "\n\nAssistant:"

    body, err := json.Marshal(ClaudeRequest{
        Prompt:            enclosedPrompt,
        MaxTokensToSample: 200,
        Temperature:       0.5,
        StopSequences:     []string{"\n\nHuman:"},
    })

    if err != nil {
        log.Fatal("failed to marshal", err)
    }

    output, err := wrapper.BedrockRuntimeClient.InvokeModel(context.TODO(), &bedrockruntime.InvokeModelInput{
        ModelId:     aws.String(modelId),
        ContentType: aws.String("application/json"),
        Body:        body,
    })

    if err != nil {
        ProcessError(err, modelId)
    }

    var response ClaudeResponse
    if err := json.Unmarshal(output.Body, &response); err != nil {
        log.Fatal("failed to unmarshal", err)
    }

    return response.Completion, nil
}
  • For API details, see InvokeModel in the AWS SDK for Go API Reference.
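The Go example above uses Claude 2's legacy text-completions format (a Human:/Assistant: prompt, max_tokens_to_sample, and a completion field in the response) rather than the Messages API shown in the other sections. For comparison, the following is a minimal sketch of the same request shape in Python; it assumes the Boto3 setup shown in the Python example later on this page and is not part of the official example set.

# Minimal sketch: the legacy Claude 2 text-completions request in Python.
# Assumption: Boto3 credentials and model access are configured as in the
# Python example on this page; not part of the official example set.
import json
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

prompt = "Tell me a joke."
body = json.dumps({
    # Claude 2 requires the prompt to be wrapped in Human:/Assistant: markers.
    "prompt": f"\n\nHuman: {prompt}\n\nAssistant:",
    "max_tokens_to_sample": 200,
    "temperature": 0.5,
    "stop_sequences": ["\n\nHuman:"],
})

response = client.invoke_model(
    modelId="anthropic.claude-v2",
    contentType="application/json",
    body=body,
)

# The legacy response format returns the generated text in "completion".
print(json.loads(response["body"].read())["completion"])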

Java
SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Use the Invoke Model API to send a text message.

// Use the native inference API to send a text message to Anthropic Claude.

import org.json.JSONObject;
import org.json.JSONPointer;
import software.amazon.awssdk.auth.credentials.DefaultCredentialsProvider;
import software.amazon.awssdk.core.SdkBytes;
import software.amazon.awssdk.core.exception.SdkClientException;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.bedrockruntime.BedrockRuntimeClient;

public class InvokeModel {

    public static String invokeModel() {

        // Create a Bedrock Runtime client in the AWS Region you want to use.
        // Replace the DefaultCredentialsProvider with your preferred credentials provider.
        var client = BedrockRuntimeClient.builder()
                .credentialsProvider(DefaultCredentialsProvider.create())
                .region(Region.US_EAST_1)
                .build();

        // Set the model ID, e.g., Claude 3 Haiku.
        var modelId = "anthropic.claude-3-haiku-20240307-v1:0";

        // The InvokeModel API uses the model's native payload.
        // Learn more about the available inference parameters and response fields at:
        // https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-anthropic-claude-messages.html
        var nativeRequestTemplate = """
                {
                    "anthropic_version": "bedrock-2023-05-31",
                    "max_tokens": 512,
                    "temperature": 0.5,
                    "messages": [{
                        "role": "user",
                        "content": "{{prompt}}"
                    }]
                }""";

        // Define the prompt for the model.
        var prompt = "Describe the purpose of a 'hello world' program in one line.";

        // Embed the prompt in the model's native request payload.
        String nativeRequest = nativeRequestTemplate.replace("{{prompt}}", prompt);

        try {
            // Encode and send the request to the Bedrock Runtime.
            var response = client.invokeModel(request -> request
                    .body(SdkBytes.fromUtf8String(nativeRequest))
                    .modelId(modelId)
            );

            // Decode the response body.
            var responseBody = new JSONObject(response.body().asUtf8String());

            // Retrieve the generated text from the model's response.
            var text = new JSONPointer("/content/0/text").queryFrom(responseBody).toString();
            System.out.println(text);

            return text;

        } catch (SdkClientException e) {
            System.err.printf("ERROR: Can't invoke '%s'. Reason: %s", modelId, e.getMessage());
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        invokeModel();
    }
}
  • For API details, see InvokeModel in the AWS SDK for Java 2.x API Reference.

JavaScript
SDK for JavaScript (v3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Use the Invoke Model API to send a text message.

import { fileURLToPath } from "url";

import { FoundationModels } from "../../config/foundation_models.js";
import {
  BedrockRuntimeClient,
  InvokeModelCommand,
  InvokeModelWithResponseStreamCommand,
} from "@aws-sdk/client-bedrock-runtime";

/**
 * @typedef {Object} ResponseContent
 * @property {string} text
 *
 * @typedef {Object} MessagesResponseBody
 * @property {ResponseContent[]} content
 *
 * @typedef {Object} Delta
 * @property {string} text
 *
 * @typedef {Object} Message
 * @property {string} role
 *
 * @typedef {Object} Chunk
 * @property {string} type
 * @property {Delta} delta
 * @property {Message} message
 */

/**
 * Invokes Anthropic Claude 3 using the Messages API.
 *
 * To learn more about the Anthropic Messages API, go to:
 * https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-anthropic-claude-messages.html
 *
 * @param {string} prompt - The input text prompt for the model to complete.
 * @param {string} [modelId] - The ID of the model to use. Defaults to "anthropic.claude-3-haiku-20240307-v1:0".
 */
export const invokeModel = async (
  prompt,
  modelId = "anthropic.claude-3-haiku-20240307-v1:0",
) => {
  // Create a new Bedrock Runtime client instance.
  const client = new BedrockRuntimeClient({ region: "us-east-1" });

  // Prepare the payload for the model.
  const payload = {
    anthropic_version: "bedrock-2023-05-31",
    max_tokens: 1000,
    messages: [
      {
        role: "user",
        content: [{ type: "text", text: prompt }],
      },
    ],
  };

  // Invoke Claude with the payload and wait for the response.
  const command = new InvokeModelCommand({
    contentType: "application/json",
    body: JSON.stringify(payload),
    modelId,
  });
  const apiResponse = await client.send(command);

  // Decode and return the response(s)
  const decodedResponseBody = new TextDecoder().decode(apiResponse.body);
  /** @type {MessagesResponseBody} */
  const responseBody = JSON.parse(decodedResponseBody);
  return responseBody.content[0].text;
};

/**
 * Invokes Anthropic Claude 3 and processes the response stream.
 *
 * To learn more about the Anthropic Messages API, go to:
 * https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-anthropic-claude-messages.html
 *
 * @param {string} prompt - The input text prompt for the model to complete.
 * @param {string} [modelId] - The ID of the model to use. Defaults to "anthropic.claude-3-haiku-20240307-v1:0".
 */
export const invokeModelWithResponseStream = async (
  prompt,
  modelId = "anthropic.claude-3-haiku-20240307-v1:0",
) => {
  // Create a new Bedrock Runtime client instance.
  const client = new BedrockRuntimeClient({ region: "us-east-1" });

  // Prepare the payload for the model.
  const payload = {
    anthropic_version: "bedrock-2023-05-31",
    max_tokens: 1000,
    messages: [
      {
        role: "user",
        content: [{ type: "text", text: prompt }],
      },
    ],
  };

  // Invoke Claude with the payload and wait for the API to respond.
  const command = new InvokeModelWithResponseStreamCommand({
    contentType: "application/json",
    body: JSON.stringify(payload),
    modelId,
  });
  const apiResponse = await client.send(command);

  let completeMessage = "";

  // Decode and process the response stream
  for await (const item of apiResponse.body) {
    /** @type Chunk */
    const chunk = JSON.parse(new TextDecoder().decode(item.chunk.bytes));
    const chunk_type = chunk.type;

    if (chunk_type === "content_block_delta") {
      const text = chunk.delta.text;
      completeMessage = completeMessage + text;
      process.stdout.write(text);
    }
  }

  // Return the final response
  return completeMessage;
};

// Invoke the function if this file was run directly.
if (process.argv[1] === fileURLToPath(import.meta.url)) {
  const prompt = 'Write a paragraph starting with: "Once upon a time..."';
  const modelId = FoundationModels.CLAUDE_3_HAIKU.modelId;
  console.log(`Prompt: ${prompt}`);
  console.log(`Model ID: ${modelId}`);

  try {
    console.log("-".repeat(53));
    const response = await invokeModel(prompt, modelId);
    console.log("\n" + "-".repeat(53));
    console.log("Final structured response:");
    console.log(response);
  } catch (err) {
    console.log(`\n${err}`);
  }
}
  • For API details, see InvokeModel in the AWS SDK for JavaScript API Reference.

PHP
SDK for PHP
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Use the Anthropic Claude 2 foundation model to generate text.

public function invokeClaude($prompt)
{
    # The different model providers have individual request and response formats.
    # For the format, ranges, and default values for Anthropic Claude, refer to:
    # https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-claude.html

    $completion = "";
    try {
        $modelId = 'anthropic.claude-v2';

        # Claude requires you to enclose the prompt as follows:
        $prompt = "\n\nHuman: {$prompt}\n\nAssistant:";

        $body = [
            'prompt' => $prompt,
            'max_tokens_to_sample' => 200,
            'temperature' => 0.5,
            'stop_sequences' => ["\n\nHuman:"],
        ];

        $result = $this->bedrockRuntimeClient->invokeModel([
            'contentType' => 'application/json',
            'body' => json_encode($body),
            'modelId' => $modelId,
        ]);

        $response_body = json_decode($result['body']);

        $completion = $response_body->completion;
    } catch (Exception $e) {
        echo "Error: ({$e->getCode()}) - {$e->getMessage()}\n";
    }

    return $completion;
}
  • For API details, see InvokeModel in the AWS SDK for PHP API Reference.

Python
SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Use the Invoke Model API to send a text message.

# Use the native inference API to send a text message to Anthropic Claude.

import boto3
import json

from botocore.exceptions import ClientError

# Create a Bedrock Runtime client in the AWS Region of your choice.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Set the model ID, e.g., Claude 3 Haiku.
model_id = "anthropic.claude-3-haiku-20240307-v1:0"

# Define the prompt for the model.
prompt = "Describe the purpose of a 'hello world' program in one line."

# Format the request payload using the model's native structure.
native_request = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 512,
    "temperature": 0.5,
    "messages": [
        {
            "role": "user",
            "content": [{"type": "text", "text": prompt}],
        }
    ],
}

# Convert the native request to JSON.
request = json.dumps(native_request)

try:
    # Invoke the model with the request.
    response = client.invoke_model(modelId=model_id, body=request)

except (ClientError, Exception) as e:
    print(f"ERROR: Can't invoke '{model_id}'. Reason: {e}")
    exit(1)

# Decode the response body.
model_response = json.loads(response["body"].read())

# Extract and print the response text.
response_text = model_response["content"][0]["text"]
print(response_text)
  • For API details, see InvokeModel in the AWS SDK for Python (Boto3) API Reference.
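The Python example above returns the complete response in a single call. As a complement to the JavaScript streaming example earlier on this page, the following is a minimal sketch of the same streaming pattern with Boto3's invoke_model_with_response_stream; it assumes the same model ID, Region, and credentials setup as the Python example, and the chunk handling mirrors the content_block_delta events shown in the JavaScript example. It is a sketch, not part of the official example set.

# Minimal streaming sketch (assumption: same Boto3 setup as the Python
# example above; not part of the official example set).
import boto3
import json

client = boto3.client("bedrock-runtime", region_name="us-east-1")
model_id = "anthropic.claude-3-haiku-20240307-v1:0"

# Same Messages API payload as the non-streaming Python example.
request = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 512,
    "messages": [
        {
            "role": "user",
            "content": [{"type": "text", "text": "Describe the purpose of a 'hello world' program in one line."}],
        }
    ],
})

# The streaming variant returns an event stream instead of a single response body.
streaming_response = client.invoke_model_with_response_stream(
    modelId=model_id, body=request
)

for event in streaming_response["body"]:
    chunk = json.loads(event["chunk"]["bytes"])
    # Text deltas arrive in "content_block_delta" chunks, as in the JavaScript example.
    if chunk["type"] == "content_block_delta":
        print(chunk["delta"]["text"], end="")
print()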

SAP ABAP
SDK for SAP ABAP
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Use the Anthropic Claude 2 foundation model to generate text. This example uses features of /UI2/CL_JSON that might not be available on some NetWeaver versions.

"Claude V2 Input Parameters should be in a format like this: * { * "prompt":"\n\nHuman:\\nTell me a joke\n\nAssistant:\n", * "max_tokens_to_sample":2048, * "temperature":0.5, * "top_k":250, * "top_p":1.0, * "stop_sequences":[] * } DATA: BEGIN OF ls_input, prompt TYPE string, max_tokens_to_sample TYPE /aws1/rt_shape_integer, temperature TYPE /aws1/rt_shape_float, top_k TYPE /aws1/rt_shape_integer, top_p TYPE /aws1/rt_shape_float, stop_sequences TYPE /aws1/rt_stringtab, END OF ls_input. "Leave ls_input-stop_sequences empty. ls_input-prompt = |\n\nHuman:\\n{ iv_prompt }\n\nAssistant:\n|. ls_input-max_tokens_to_sample = 2048. ls_input-temperature = '0.5'. ls_input-top_k = 250. ls_input-top_p = 1. "Serialize into JSON with /ui2/cl_json -- this assumes SAP_UI is installed. DATA(lv_json) = /ui2/cl_json=>serialize( data = ls_input pretty_name = /ui2/cl_json=>pretty_mode-low_case ). TRY. DATA(lo_response) = lo_bdr->invokemodel( iv_body = /aws1/cl_rt_util=>string_to_xstring( lv_json ) iv_modelid = 'anthropic.claude-v2' iv_accept = 'application/json' iv_contenttype = 'application/json' ). "Claude V2 Response format will be: * { * "completion": "Knock Knock...", * "stop_reason": "stop_sequence" * } DATA: BEGIN OF ls_response, completion TYPE string, stop_reason TYPE string, END OF ls_response. /ui2/cl_json=>deserialize( EXPORTING jsonx = lo_response->get_body( ) pretty_name = /ui2/cl_json=>pretty_mode-camel_case CHANGING data = ls_response ). DATA(lv_answer) = ls_response-completion. CATCH /aws1/cx_bdraccessdeniedex INTO DATA(lo_ex). WRITE / lo_ex->get_text( ). WRITE / |Don't forget to enable model access at https://console.aws.amazon.com/bedrock/home?#/modelaccess|. ENDTRY.

Invoke the Anthropic Claude 2 foundation model to generate text, using the L2 high-level client.

    TRY.
        DATA(lo_bdr_l2_claude) = /aws1/cl_bdr_l2_factory=>create_claude_2( lo_bdr ).
        " iv_prompt can contain a prompt like 'tell me a joke about Java programmers'.
        DATA(lv_answer) = lo_bdr_l2_claude->prompt_for_text( iv_prompt ).
      CATCH /aws1/cx_bdraccessdeniedex INTO DATA(lo_ex).
        WRITE / lo_ex->get_text( ).
        WRITE / |Don't forget to enable model access at https://console.aws.amazon.com/bedrock/home?#/modelaccess|.
    ENDTRY.

Invoke the Anthropic Claude 3 foundation model to generate text, using the L2 high-level client.

TRY. " Choose a model ID from Anthropic that supports the Messages API - currently this is " Claude v2, Claude v3 and v3.5. For the list of model ID, see: " https://docs.aws.amazon.com/bedrock/latest/userguide/model-ids.html " for the list of models that support the Messages API see: " https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-anthropic-claude-messages.html DATA(lo_bdr_l2_claude) = /aws1/cl_bdr_l2_factory=>create_anthropic_msg_api( io_bdr = lo_bdr iv_model_id = 'anthropic.claude-3-sonnet-20240229-v1:0' ). " choosing Claude v3 Sonnet " iv_prompt can contain a prompt like 'tell me a joke about Java programmers'. DATA(lv_answer) = lo_bdr_l2_claude->prompt_for_text( iv_prompt = iv_prompt iv_max_tokens = 100 ). CATCH /aws1/cx_bdraccessdeniedex INTO DATA(lo_ex). WRITE / lo_ex->get_text( ). WRITE / |Don't forget to enable model access at https://console.aws.amazon.com/bedrock/home?#/modelaccess|. ENDTRY.
  • For API details, see InvokeModel in the AWS SDK for SAP ABAP API Reference.

For a complete list of AWS SDK developer guides and code examples, see Using this service with an AWS SDK. This topic also includes information about getting started and details about previous SDK versions.