Run example Amazon Bedrock API requests through the AWS SDK for Python (Boto3)
This section guides you through trying out some common operations in Amazon Bedrock with the AWS SDK for Python (Boto3) to test that your permissions and authentication are set up properly. Before you run the following examples, check that you've fulfilled the following prerequisites:
Prerequisites
- You have an AWS account and a user or role with authentication set up and the necessary permissions for Amazon Bedrock. Otherwise, follow the steps at Getting started with the API.
- You've requested access to the Amazon Titan Text G1 - Express model. Otherwise, follow the steps at Request access to an Amazon Bedrock foundation model.
- You've installed and set up authentication for the AWS SDK for Python (Boto3). To install Boto3, follow the steps at Quickstart in the Boto3 documentation. Verify that you've set up your credentials to use Boto3 by following the steps at Get credentials to grant programmatic access.
Run these examples with a user or role that you set up with the proper permissions to confirm that your Amazon Bedrock access is configured correctly.
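Before you call Amazon Bedrock, you can confirm which identity your credentials resolve to. The following is a minimal sketch, assuming your default credential chain (environment variables, shared config file, or an attached role) is already configured. It calls the STS GetCallerIdentity operation, which requires no special permissions:

# Minimal sketch: confirm which AWS identity Boto3 resolves to before
# calling Amazon Bedrock. Assumes your default credential chain is set up.
import boto3

sts = boto3.client("sts")
identity = sts.get_caller_identity()

print(f"Account: {identity['Account']}")
print(f"Caller ARN: {identity['Arn']}")

If this prints the ARN of the user or role you intend to use, your credentials are in place and you can move on to the Amazon Bedrock examples.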
The Amazon Bedrock documentation also includes code examples for other programming languages. For more information, see Code examples for Amazon Bedrock using AWS SDKs.
List the foundation models that Amazon Bedrock has to offer
The following example runs the ListFoundationModels operation using an Amazon Bedrock client. ListFoundationModels lists the foundation models (FMs) that are available in Amazon Bedrock in your Region. Run the following SDK for Python script to create an Amazon Bedrock client and test the ListFoundationModels operation:
""" Lists the available Amazon Bedrock models in an AWS Region. """ import logging import json import boto3 from botocore.exceptions import ClientError logging.basicConfig(level=logging.INFO) logger = logging.getLogger(__name__) def list_foundation_models(bedrock_client): """ Gets a list of available Amazon Bedrock foundation models. :return: The list of available bedrock foundation models. """ try: response = bedrock_client.list_foundation_models() models = response["modelSummaries"] logger.info("Got %s foundation models.", len(models)) return models except ClientError: logger.error("Couldn't list foundation models.") raise def main(): """Entry point for the example. Change aws_region to the AWS Region that you want to use.""" aws_region = "us-east-1" bedrock_client = boto3.client(service_name="bedrock", region_name=aws_region) fm_models = list_foundation_models(bedrock_client) for model in fm_models: print(f"Model: {model["modelName"]}") print(json.dumps(model, indent=2)) print("---------------------------\n") logger.info("Done.") if __name__ == "__main__": main()
If the script is successful, the response returns a list of foundation models that are available in Amazon Bedrock.
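If you only need a subset of models, ListFoundationModels also accepts optional request filters. The following is a minimal sketch, assuming the same Region and credentials as the script above, that narrows the results to text-output models from a single provider:

# Minimal sketch: filter the ListFoundationModels results.
# Assumes the same Region and credentials as the script above.
import boto3

bedrock_client = boto3.client(service_name="bedrock", region_name="us-east-1")

# byProvider and byOutputModality are optional request filters.
response = bedrock_client.list_foundation_models(
    byProvider="Amazon",
    byOutputModality="TEXT",
)

for model in response["modelSummaries"]:
    print(model["modelId"])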
Submit a text prompt to a model and generate a text response with InvokeModel
The following example runs the InvokeModel operation using an Amazon Bedrock client. InvokeModel lets you submit a prompt to generate a model response. Run the following SDK for Python script to create an Amazon Bedrock Runtime client and generate a text response with the operation:
# Use the native inference API to send a text message to Amazon Titan Text G1 - Express.

import boto3
import json

from botocore.exceptions import ClientError

# Create an Amazon Bedrock Runtime client.
brt = boto3.client("bedrock-runtime")

# Set the model ID, e.g., Amazon Titan Text G1 - Express.
model_id = "amazon.titan-text-express-v1"

# Define the prompt for the model.
prompt = "Describe the purpose of a 'hello world' program in one line."

# Format the request payload using the model's native structure.
native_request = {
    "inputText": prompt,
    "textGenerationConfig": {
        "maxTokenCount": 512,
        "temperature": 0.5,
        "topP": 0.9,
    },
}

# Convert the native request to JSON.
request = json.dumps(native_request)

try:
    # Invoke the model with the request.
    response = brt.invoke_model(modelId=model_id, body=request)

except (ClientError, Exception) as e:
    print(f"ERROR: Can't invoke '{model_id}'. Reason: {e}")
    exit(1)

# Decode the response body.
model_response = json.loads(response["body"].read())

# Extract and print the response text.
response_text = model_response["results"][0]["outputText"]
print(response_text)
If the command is successful, the response returns the text generated by the model in response to the prompt.
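InvokeModel returns the full response in a single payload. To receive tokens as the model generates them, the runtime client also exposes InvokeModelWithResponseStream. The following is a minimal sketch of that variant; note that the chunk format shown (the outputText field) is specific to Amazon Titan Text models, so check your model's documentation if you use a different one:

# Minimal sketch: stream a Titan Text response instead of waiting for
# the full payload.
import boto3
import json

brt = boto3.client("bedrock-runtime")
model_id = "amazon.titan-text-express-v1"

request = json.dumps({
    "inputText": "Describe the purpose of a 'hello world' program in one line.",
    "textGenerationConfig": {"maxTokenCount": 512, "temperature": 0.5, "topP": 0.9},
})

streaming_response = brt.invoke_model_with_response_stream(
    modelId=model_id, body=request
)

# Each event in the stream carries a JSON chunk; for Amazon Titan Text
# models, the generated text arrives in the "outputText" field.
for event in streaming_response["body"]:
    chunk = json.loads(event["chunk"]["bytes"])
    if "outputText" in chunk:
        print(chunk["outputText"], end="")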
Submit a text prompt to a model and generate a text response with Converse
The following example runs the Converse operation using an Amazon Bedrock client. We recommend using the Converse operation over InvokeModel when supported, because it unifies the inference request across Amazon Bedrock models and simplifies the management of multi-turn conversations. Run the following SDK for Python script to create an Amazon Bedrock Runtime client and generate a text response with the Converse operation:
# Use the Conversation API to send a text message to Amazon Titan Text G1 - Express.

import boto3
from botocore.exceptions import ClientError

# Create an Amazon Bedrock Runtime client.
brt = boto3.client("bedrock-runtime")

# Set the model ID, e.g., Amazon Titan Text G1 - Express.
model_id = "amazon.titan-text-express-v1"

# Start a conversation with the user message.
user_message = "Describe the purpose of a 'hello world' program in one line."
conversation = [
    {
        "role": "user",
        "content": [{"text": user_message}],
    }
]

try:
    # Send the message to the model, using a basic inference configuration.
    response = brt.converse(
        modelId=model_id,
        messages=conversation,
        inferenceConfig={"maxTokens": 512, "temperature": 0.5, "topP": 0.9},
    )

    # Extract and print the response text.
    response_text = response["output"]["message"]["content"][0]["text"]
    print(response_text)

except (ClientError, Exception) as e:
    print(f"ERROR: Can't invoke '{model_id}'. Reason: {e}")
    exit(1)
If the command is successful, the response returns the text generated by the model in response to the prompt.
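Because Converse represents the conversation as a list of messages, continuing a multi-turn exchange is a matter of appending the model's reply and the next user message, then calling Converse again with the full history. The following is a minimal sketch of that pattern, assuming the same model access and credentials as the script above:

# Minimal sketch: a multi-turn exchange with the Converse operation.
# Assumes the same model access and credentials as the script above.
import boto3

brt = boto3.client("bedrock-runtime")
model_id = "amazon.titan-text-express-v1"
inference_config = {"maxTokens": 512, "temperature": 0.5, "topP": 0.9}

conversation = [
    {"role": "user", "content": [{"text": "Name one common use of Python."}]}
]

# First turn.
response = brt.converse(
    modelId=model_id, messages=conversation, inferenceConfig=inference_config
)

# Append the model's reply and the follow-up user message, then send the
# full conversation back to the model.
conversation.append(response["output"]["message"])
conversation.append(
    {"role": "user", "content": [{"text": "Give a one-line example of that use."}]}
)

response = brt.converse(
    modelId=model_id, messages=conversation, inferenceConfig=inference_config
)

print(response["output"]["message"]["content"][0]["text"])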