Invoke multiple foundation models on Amazon Bedrock - Amazon Bedrock


Invoke multiple foundation models on Amazon Bedrock

The following code examples show how to prepare and send a prompt to a variety of large language models (LLMs) on Amazon Bedrock.
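Each model family expects its own JSON request schema, which the language-specific wrapper classes used in the examples below hide behind helper methods. As a rough sketch of what such a wrapper builds internally (the field names follow the Anthropic Claude 2 text-completion schema; the type and function names here are illustrative and do not appear in the examples):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// claudeRequest mirrors the JSON body that Anthropic Claude 2 expects on
// Amazon Bedrock: the prompt text plus basic sampling parameters.
// The type name is ours, not from the examples below.
type claudeRequest struct {
	Prompt            string  `json:"prompt"`
	MaxTokensToSample int     `json:"max_tokens_to_sample"`
	Temperature       float64 `json:"temperature"`
}

// buildClaudeBody wraps a user prompt in Claude's Human/Assistant turn
// format and serializes the request body to JSON, ready to pass as the
// Body of an InvokeModel call.
func buildClaudeBody(prompt string) ([]byte, error) {
	return json.Marshal(claudeRequest{
		Prompt:            fmt.Sprintf("\n\nHuman: %s\n\nAssistant:", prompt),
		MaxTokensToSample: 200,
		Temperature:       0.5,
	})
}

func main() {
	body, err := buildClaudeBody("In one paragraph, who are you?")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}
```

The wrappers in the full examples then send this body to the Bedrock Runtime client and parse the model-specific response schema in the same way.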

Go
SDK for Go V2
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Invoke multiple foundation models on Amazon Bedrock.

// InvokeModelsScenario demonstrates how to use the Amazon Bedrock Runtime client
// to invoke various foundation models for text and image generation
//
// 1. Generate text with Anthropic Claude 2
// 2. Generate text with AI21 Labs Jurassic-2
// 3. Generate text with Meta Llama 2 Chat
// 4. Generate text and asynchronously process the response stream with Anthropic Claude 2
// 5. Generate an image with the Amazon Titan image generation model
// 6. Generate text with Amazon Titan Text G1 Express model
type InvokeModelsScenario struct {
	sdkConfig             aws.Config
	invokeModelWrapper    actions.InvokeModelWrapper
	responseStreamWrapper actions.InvokeModelWithResponseStreamWrapper
	questioner            demotools.IQuestioner
}

// NewInvokeModelsScenario constructs an InvokeModelsScenario instance from a configuration.
// It uses the specified config to get a Bedrock Runtime client and create wrappers for the
// actions used in the scenario.
func NewInvokeModelsScenario(sdkConfig aws.Config, questioner demotools.IQuestioner) InvokeModelsScenario {
	client := bedrockruntime.NewFromConfig(sdkConfig)
	return InvokeModelsScenario{
		sdkConfig:             sdkConfig,
		invokeModelWrapper:    actions.InvokeModelWrapper{BedrockRuntimeClient: client},
		responseStreamWrapper: actions.InvokeModelWithResponseStreamWrapper{BedrockRuntimeClient: client},
		questioner:            questioner,
	}
}

// Runs the interactive scenario.
func (scenario InvokeModelsScenario) Run() {
	defer func() {
		if r := recover(); r != nil {
			log.Printf("Something went wrong with the demo: %v\n", r)
		}
	}()

	log.Println(strings.Repeat("=", 77))
	log.Println("Welcome to the Amazon Bedrock Runtime model invocation demo.")
	log.Println(strings.Repeat("=", 77))

	log.Printf("First, let's invoke a few large-language models using the synchronous client:\n\n")

	text2textPrompt := "In one paragraph, who are you?"

	log.Println(strings.Repeat("-", 77))
	log.Printf("Invoking Claude with prompt: %v\n", text2textPrompt)
	scenario.InvokeClaude(text2textPrompt)

	log.Println(strings.Repeat("-", 77))
	log.Printf("Invoking Jurassic-2 with prompt: %v\n", text2textPrompt)
	scenario.InvokeJurassic2(text2textPrompt)

	log.Println(strings.Repeat("-", 77))
	log.Printf("Invoking Llama2 with prompt: %v\n", text2textPrompt)
	scenario.InvokeLlama2(text2textPrompt)

	log.Println(strings.Repeat("=", 77))
	log.Printf("Now, let's invoke Claude with the asynchronous client and process the response stream:\n\n")

	log.Println(strings.Repeat("-", 77))
	log.Printf("Invoking Claude with prompt: %v\n", text2textPrompt)
	scenario.InvokeWithResponseStream(text2textPrompt)

	log.Println(strings.Repeat("=", 77))
	log.Printf("Now, let's create an image with the Amazon Titan image generation model:\n\n")

	text2ImagePrompt := "stylized picture of a cute old steampunk robot"
	seed := rand.Int63n(2147483648)

	log.Println(strings.Repeat("-", 77))
	log.Printf("Invoking Amazon Titan with prompt: %v\n", text2ImagePrompt)
	scenario.InvokeTitanImage(text2ImagePrompt, seed)

	log.Println(strings.Repeat("-", 77))
	log.Printf("Invoking Titan Text Express with prompt: %v\n", text2textPrompt)
	scenario.InvokeTitanText(text2textPrompt)

	log.Println(strings.Repeat("=", 77))
	log.Println("Thanks for watching!")
	log.Println(strings.Repeat("=", 77))
}

func (scenario InvokeModelsScenario) InvokeClaude(prompt string) {
	completion, err := scenario.invokeModelWrapper.InvokeClaude(prompt)
	if err != nil {
		panic(err)
	}
	log.Printf("\nClaude : %v\n", strings.TrimSpace(completion))
}

func (scenario InvokeModelsScenario) InvokeJurassic2(prompt string) {
	completion, err := scenario.invokeModelWrapper.InvokeJurassic2(prompt)
	if err != nil {
		panic(err)
	}
	log.Printf("\nJurassic-2 : %v\n", strings.TrimSpace(completion))
}

func (scenario InvokeModelsScenario) InvokeLlama2(prompt string) {
	completion, err := scenario.invokeModelWrapper.InvokeLlama2(prompt)
	if err != nil {
		panic(err)
	}
	log.Printf("\nLlama 2 : %v\n\n", strings.TrimSpace(completion))
}

func (scenario InvokeModelsScenario) InvokeWithResponseStream(prompt string) {
	log.Println("\nClaude with response stream:")
	_, err := scenario.responseStreamWrapper.InvokeModelWithResponseStream(prompt)
	if err != nil {
		panic(err)
	}
	log.Println()
}

func (scenario InvokeModelsScenario) InvokeTitanImage(prompt string, seed int64) {
	base64ImageData, err := scenario.invokeModelWrapper.InvokeTitanImage(prompt, seed)
	if err != nil {
		panic(err)
	}
	imagePath := saveImage(base64ImageData, "amazon.titan-image-generator-v1")
	fmt.Printf("The generated image has been saved to %s\n", imagePath)
}

func (scenario InvokeModelsScenario) InvokeTitanText(prompt string) {
	completion, err := scenario.invokeModelWrapper.InvokeTitanText(prompt)
	if err != nil {
		panic(err)
	}
	log.Printf("\nTitan Text Express : %v\n\n", strings.TrimSpace(completion))
}
Java
SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Invoke multiple foundation models on Amazon Bedrock.

package com.example.bedrockruntime;

import software.amazon.awssdk.services.bedrockruntime.model.BedrockRuntimeException;

import java.io.FileOutputStream;
import java.net.URI;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Base64;
import java.util.Random;

import static com.example.bedrockruntime.InvokeModel.*;

/**
 * Demonstrates the invocation of the following models:
 * Anthropic Claude 2, AI21 Labs Jurassic-2, Mistral 7B, Mixtral 8x7B,
 * Stability.ai Stable Diffusion XL, and Amazon Titan Image Generator.
 */
public class BedrockRuntimeUsageDemo {

    private static final Random random = new Random();

    private static final String CLAUDE = "anthropic.claude-v2";
    private static final String JURASSIC2 = "ai21.j2-mid-v1";
    private static final String MISTRAL7B = "mistral.mistral-7b-instruct-v0:2";
    private static final String MIXTRAL8X7B = "mistral.mixtral-8x7b-instruct-v0:1";
    private static final String STABLE_DIFFUSION = "stability.stable-diffusion-xl";
    private static final String TITAN_IMAGE = "amazon.titan-image-generator-v1";

    public static void main(String[] args) {
        BedrockRuntimeUsageDemo.textToText();
        BedrockRuntimeUsageDemo.textToTextWithResponseStream();
        BedrockRuntimeUsageDemo.textToImage();
    }

    private static void textToText() {
        String prompt = "In one sentence, what is a large-language model?";
        BedrockRuntimeUsageDemo.invoke(CLAUDE, prompt);
        BedrockRuntimeUsageDemo.invoke(JURASSIC2, prompt);
        BedrockRuntimeUsageDemo.invoke(MISTRAL7B, prompt);
        BedrockRuntimeUsageDemo.invoke(MIXTRAL8X7B, prompt);
    }

    private static void invoke(String modelId, String prompt) {
        invoke(modelId, prompt, null);
    }

    private static void invoke(String modelId, String prompt, String stylePreset) {
        System.out.println("\n" + new String(new char[88]).replace("\0", "-"));
        System.out.println("Invoking: " + modelId);
        System.out.println("Prompt: " + prompt);
        try {
            switch (modelId) {
                case CLAUDE:
                    printResponse(invokeClaude(prompt));
                    break;
                case JURASSIC2:
                    printResponse(invokeJurassic2(prompt));
                    break;
                case MISTRAL7B:
                    for (String response : invokeMistral7B(prompt)) {
                        printResponse(response);
                    }
                    break;
                case MIXTRAL8X7B:
                    for (String response : invokeMixtral8x7B(prompt)) {
                        printResponse(response);
                    }
                    break;
                case STABLE_DIFFUSION:
                    createImage(STABLE_DIFFUSION, prompt, random.nextLong() & 0xFFFFFFFFL, stylePreset);
                    break;
                case TITAN_IMAGE:
                    createImage(TITAN_IMAGE, prompt, random.nextLong() & 0xFFFFFFFL);
                    break;
                default:
                    throw new IllegalStateException("Unexpected value: " + modelId);
            }
        } catch (BedrockRuntimeException e) {
            System.out.println("Couldn't invoke model " + modelId + ": " + e.getMessage());
            throw e;
        }
    }

    private static void createImage(String modelId, String prompt, long seed) {
        createImage(modelId, prompt, seed, null);
    }

    private static void createImage(String modelId, String prompt, long seed, String stylePreset) {
        String base64ImageData = (modelId.equals(STABLE_DIFFUSION))
                ? invokeStableDiffusion(prompt, seed, stylePreset)
                : invokeTitanImage(prompt, seed);
        String imagePath = saveImage(modelId, base64ImageData);
        System.out.printf("Success: The generated image has been saved to %s%n", imagePath);
    }

    private static void textToTextWithResponseStream() {
        String prompt = "What is a large-language model?";
        BedrockRuntimeUsageDemo.invokeWithResponseStream(CLAUDE, prompt);
    }

    private static void invokeWithResponseStream(String modelId, String prompt) {
        System.out.println(new String(new char[88]).replace("\0", "-"));
        System.out.printf("Invoking %s with response stream%n", modelId);
        System.out.println("Prompt: " + prompt);
        try {
            Claude2.invokeMessagesApiWithResponseStream(prompt);
        } catch (BedrockRuntimeException e) {
            System.out.println("Couldn't invoke model " + modelId + ": " + e.getMessage());
            throw e;
        }
    }

    private static void textToImage() {
        String imagePrompt = "stylized picture of a cute old steampunk robot";
        String stylePreset = "photographic";
        BedrockRuntimeUsageDemo.invoke(STABLE_DIFFUSION, imagePrompt, stylePreset);
        BedrockRuntimeUsageDemo.invoke(TITAN_IMAGE, imagePrompt);
    }

    private static void printResponse(String response) {
        System.out.printf("Generated text: %s%n", response);
    }

    private static String saveImage(String modelId, String base64ImageData) {
        try {
            String directory = "output";
            URI uri = InvokeModel.class.getProtectionDomain().getCodeSource().getLocation().toURI();
            Path outputPath = Paths.get(uri).getParent().getParent().resolve(directory);
            if (!Files.exists(outputPath)) {
                Files.createDirectories(outputPath);
            }
            int i = 1;
            String fileName;
            do {
                fileName = String.format("%s_%d.png", modelId, i);
                i++;
            } while (Files.exists(outputPath.resolve(fileName)));
            byte[] imageBytes = Base64.getDecoder().decode(base64ImageData);
            Path filePath = outputPath.resolve(fileName);
            try (FileOutputStream fileOutputStream = new FileOutputStream(filePath.toFile())) {
                fileOutputStream.write(imageBytes);
            }
            return filePath.toString();
        } catch (Exception e) {
            System.out.println(e.getMessage());
            System.exit(1);
        }
        return null;
    }
}
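One detail worth noticing across these examples: each image model accepts a different seed range. The Go and PHP examples bound the Titan Image Generator seed at 2147483647, while the Java and PHP examples allow Stable Diffusion XL seeds up to 4294967295. A small sketch mirroring those bounds (the helper names are ours, not from the examples):

```go
package main

import (
	"fmt"
	"math/rand"
)

// titanSeed returns a seed in 0..2^31-1, the range the Go and PHP
// examples use for the Amazon Titan image generation model.
func titanSeed() int64 {
	return rand.Int63n(2147483648)
}

// sdxlSeed returns a seed in 0..2^32-1, the range the Java and PHP
// examples use for Stability.ai Stable Diffusion XL.
func sdxlSeed() int64 {
	return rand.Int63n(4294967296)
}

func main() {
	fmt.Println("Titan seed:", titanSeed())
	fmt.Println("Stable Diffusion seed:", sdxlSeed())
}
```

Passing a seed outside a model's accepted range results in a validation error from the service, so it pays to bound the random draw per model as the examples do.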
JavaScript
SDK for JavaScript (v3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0

import { fileURLToPath } from "url";

import {
  Scenario,
  ScenarioAction,
  ScenarioInput,
  ScenarioOutput,
} from "@aws-doc-sdk-examples/lib/scenario/index.js";

import { FoundationModels } from "../config/foundation_models.js";

/**
 * @typedef {Object} ModelConfig
 * @property {Function} module
 * @property {Function} invoker
 * @property {string} modelId
 * @property {string} modelName
 */

const greeting = new ScenarioOutput(
  "greeting",
  "Welcome to the Amazon Bedrock Runtime client demo!",
  { header: true },
);

const selectModel = new ScenarioInput("model", "First, select a model:", {
  type: "select",
  choices: Object.values(FoundationModels).map((model) => ({
    name: model.modelName,
    value: model,
  })),
});

const enterPrompt = new ScenarioInput("prompt", "Now, enter your prompt:", {
  type: "input",
});

const printDetails = new ScenarioOutput(
  "print details",
  /**
   * @param {{ model: ModelConfig, prompt: string }} c
   */
  (c) => console.log(`Invoking ${c.model.modelName} with '${c.prompt}'...`),
  { slow: false },
);

const invokeModel = new ScenarioAction(
  "invoke model",
  /**
   * @param {{ model: ModelConfig, prompt: string, response: string }} c
   */
  async (c) => {
    const modelModule = await c.model.module();
    const invoker = c.model.invoker(modelModule);
    c.response = await invoker(c.prompt, c.model.modelId);
  },
);

const printResponse = new ScenarioOutput(
  "print response",
  /**
   * @param {{ response: string }} c
   */
  (c) => c.response,
  { slow: false },
);

const scenario = new Scenario("Amazon Bedrock Runtime Demo", [
  greeting,
  selectModel,
  enterPrompt,
  printDetails,
  invokeModel,
  printResponse,
]);

if (process.argv[1] === fileURLToPath(import.meta.url)) {
  scenario.run();
}
PHP
SDK for PHP
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Invoke multiple LLMs on Amazon Bedrock.

namespace BedrockRuntime;

class GettingStartedWithBedrockRuntime
{
    protected BedrockRuntimeService $bedrockRuntimeService;

    public function runExample()
    {
        echo "\n";
        echo "---------------------------------------------------------------------\n";
        echo "Welcome to the Amazon Bedrock Runtime getting started demo using PHP!\n";
        echo "---------------------------------------------------------------------\n";

        $clientArgs = [
            'region' => 'us-east-1',
            'version' => 'latest',
            'profile' => 'default',
        ];

        $bedrockRuntimeService = new BedrockRuntimeService($clientArgs);

        $prompt = 'In one paragraph, who are you?';
        echo "\nPrompt: " . $prompt;

        echo "\n\nAnthropic Claude:";
        echo $bedrockRuntimeService->invokeClaude($prompt);

        echo "\n\nAI21 Labs Jurassic-2: ";
        echo $bedrockRuntimeService->invokeJurassic2($prompt);

        echo "\n\nMeta Llama 2 Chat: ";
        echo $bedrockRuntimeService->invokeLlama2($prompt);

        echo "\n---------------------------------------------------------------------\n";

        $image_prompt = 'stylized picture of a cute old steampunk robot';
        echo "\nImage prompt: " . $image_prompt;

        echo "\n\nStability.ai Stable Diffusion XL:\n";
        $diffusionSeed = rand(0, 4294967295);
        $style_preset = 'photographic';
        $base64 = $bedrockRuntimeService->invokeStableDiffusion($image_prompt, $diffusionSeed, $style_preset);
        $image_path = $this->saveImage($base64, 'stability.stable-diffusion-xl');
        echo "The generated images have been saved to $image_path";

        echo "\n\nAmazon Titan Image Generation:\n";
        $titanSeed = rand(0, 2147483647);
        $base64 = $bedrockRuntimeService->invokeTitanImage($image_prompt, $titanSeed);
        $image_path = $this->saveImage($base64, 'amazon.titan-image-generator-v1');
        echo "The generated images have been saved to $image_path";
    }

    private function saveImage($base64_image_data, $model_id): string
    {
        $output_dir = "output";
        if (!file_exists($output_dir)) {
            mkdir($output_dir);
        }
        $i = 1;
        while (file_exists("$output_dir/$model_id" . '_' . "$i.png")) {
            $i++;
        }
        $image_data = base64_decode($base64_image_data);
        $file_path = "$output_dir/$model_id" . '_' . "$i.png";
        $file = fopen($file_path, 'wb');
        fwrite($file, $image_data);
        fclose($file);
        return $file_path;
    }
}

For a complete list of AWS SDK developer guides and code examples, see Using this service with an AWS SDK. This topic also includes information about getting started and details about previous SDK versions.