Run example Amazon Bedrock API requests through the AWS SDK for Python (Boto3)
This section guides you through trying out some common operations in Amazon Bedrock with the AWS SDK for Python (Boto3) to test that your permissions and authentication are set up properly. Before you run the following examples, check that you have fulfilled the following prerequisites:
Prerequisites
- You have an AWS account and have permissions to access a role with the necessary permissions for Amazon Bedrock. Otherwise, follow the steps at I already have an AWS account.
- You've requested access to the Amazon Titan Text G1 - Express model. Otherwise, follow the steps at Request access to an Amazon Bedrock foundation model.
- You've received access keys for your IAM user and configured a profile with them (see the session sketch after this list). Otherwise, follow the steps that are applicable to your use case at Get credentials to grant programmatic access to a user.
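If you configured a named profile rather than environment variables, you can have Boto3 use it through a session. The following is a minimal sketch, assuming a hypothetical profile named bedrock-user (substitute the profile name you configured):

import boto3

# Create a session from a named profile ("bedrock-user" is a placeholder).
session = boto3.Session(profile_name="bedrock-user")

# Clients created from the session use that profile's credentials.
bedrock = session.client("bedrock")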
Test that your permissions and access keys are set up properly for Amazon Bedrock, using the Amazon Bedrock role that you created. These examples assume that you have configured your environment with your access keys. Note the following:
- At a minimum, you must specify your AWS access key ID and an AWS secret access key.
- If you're using temporary credentials, you must also include an AWS session token.
If you don't specify your credentials in your environment, you can specify them with the aws_access_key_id, aws_secret_access_key, and (if you're using temporary credentials) aws_session_token arguments when you create the client.
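For example, the following is a minimal sketch of passing credentials directly when creating the client. The values shown are placeholders, not real keys:

import boto3

# Pass credentials explicitly instead of relying on the environment.
# Replace the placeholder values with your own keys; include
# aws_session_token only if you're using temporary credentials.
bedrock = boto3.client(
    service_name="bedrock",
    aws_access_key_id="YOUR_ACCESS_KEY_ID",
    aws_secret_access_key="YOUR_SECRET_ACCESS_KEY",
    aws_session_token="YOUR_SESSION_TOKEN",
)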
Topics
- List the foundation models that Amazon Bedrock has to offer
- Submit a text prompt to a model and generate a text response with InvokeModel
- Submit a text prompt to a model and generate a text response with Converse
List the foundation models that Amazon Bedrock has to offer
The following example runs the ListFoundationModels operation using an Amazon Bedrock client. ListFoundationModels lists the foundation models (FMs) that are available in Amazon Bedrock in your Region. Run the following SDK for Python script to create an Amazon Bedrock client and test the ListFoundationModels operation:
# Use the ListFoundationModels API to show the models that are available in your region.
import boto3

# Create an Amazon Bedrock client in the us-east-1 Region.
bedrock = boto3.client(
    service_name="bedrock",
    region_name="us-east-1"
)

bedrock.list_foundation_models()
If the script is successful, the response returns a list of foundation models that are available in Amazon Bedrock.
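To inspect the result, you can loop over the modelSummaries field of the response. A short sketch, reusing the bedrock client created in the script above:

# Print the ID of each foundation model available in the Region.
response = bedrock.list_foundation_models()

for model in response["modelSummaries"]:
    print(model["modelId"])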
Submit a text prompt to a model and generate a text response with InvokeModel
The following example runs the InvokeModel operation using an Amazon Bedrock client. InvokeModel lets you submit a prompt to generate a model response. Run the following SDK for Python script to create an Amazon Bedrock runtime client and generate a text response with the operation:
# Use the native inference API to send a text message to Amazon Titan Text G1 - Express.

import boto3
import json

from botocore.exceptions import ClientError

# Create an Amazon Bedrock Runtime client.
brt = boto3.client("bedrock-runtime")

# Set the model ID, e.g., Amazon Titan Text G1 - Express.
model_id = "amazon.titan-text-express-v1"

# Define the prompt for the model.
prompt = "Describe the purpose of a 'hello world' program in one line."

# Format the request payload using the model's native structure.
native_request = {
    "inputText": prompt,
    "textGenerationConfig": {
        "maxTokenCount": 512,
        "temperature": 0.5,
        "topP": 0.9
    },
}

# Convert the native request to JSON.
request = json.dumps(native_request)

try:
    # Invoke the model with the request.
    response = brt.invoke_model(modelId=model_id, body=request)

except (ClientError, Exception) as e:
    print(f"ERROR: Can't invoke '{model_id}'. Reason: {e}")
    exit(1)

# Decode the response body.
model_response = json.loads(response["body"].read())

# Extract and print the response text.
response_text = model_response["results"][0]["outputText"]
print(response_text)
If the command is successful, the response returns the text generated by the model in response to the prompt.
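InvokeModel also has a streaming counterpart, InvokeModelWithResponseStream, which returns the response in chunks as the model generates it. The following is a sketch, reusing brt, model_id, and request from the script above; the chunk format shown is the one Amazon Titan Text uses:

# Stream the response and print each chunk of text as it arrives.
streaming_response = brt.invoke_model_with_response_stream(
    modelId=model_id, body=request
)

for event in streaming_response["body"]:
    chunk = json.loads(event["chunk"]["bytes"])
    # Amazon Titan Text streaming chunks carry the text in "outputText".
    if "outputText" in chunk:
        print(chunk["outputText"], end="")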
Submit a text prompt to a model and generate a text response with Converse
The following example runs the Converse operation using an Amazon Bedrock client. We recommend using the Converse operation instead of InvokeModel when supported, because it unifies the inference request across Amazon Bedrock models and simplifies the management of multi-turn conversations. Run the following SDK for Python script to create an Amazon Bedrock runtime client and generate a text response with the Converse operation:
# Use the Conversation API to send a text message to Amazon Titan Text G1 - Express.

import boto3

from botocore.exceptions import ClientError

# Create an Amazon Bedrock Runtime client.
brt = boto3.client("bedrock-runtime")

# Set the model ID, e.g., Amazon Titan Text G1 - Express.
model_id = "amazon.titan-text-express-v1"

# Start a conversation with the user message.
user_message = "Describe the purpose of a 'hello world' program in one line."
conversation = [
    {
        "role": "user",
        "content": [{"text": user_message}],
    }
]

try:
    # Send the message to the model, using a basic inference configuration.
    response = brt.converse(
        modelId=model_id,
        messages=conversation,
        inferenceConfig={"maxTokens": 512, "temperature": 0.5, "topP": 0.9},
    )

    # Extract and print the response text.
    response_text = response["output"]["message"]["content"][0]["text"]
    print(response_text)

except (ClientError, Exception) as e:
    print(f"ERROR: Can't invoke '{model_id}'. Reason: {e}")
    exit(1)
If the command is successful, the response returns the text generated by the model in response to the prompt.
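Because Converse takes the full message history, continuing a conversation is a matter of appending messages and calling the operation again. The following is a sketch, reusing the variables from the script above; the follow-up prompt is illustrative:

# Append the model's reply, then a new user message, and call Converse again.
conversation.append(response["output"]["message"])
conversation.append(
    {
        "role": "user",
        "content": [{"text": "Now shorten that to five words."}],
    }
)

follow_up = brt.converse(
    modelId=model_id,
    messages=conversation,
    inferenceConfig={"maxTokens": 512, "temperature": 0.5, "topP": 0.9},
)

print(follow_up["output"]["message"]["content"][0]["text"])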