

# Submit prompts and generate responses using the API
<a name="inference-api"></a>

Amazon Bedrock offers the following API operations for carrying out model inference:
+ [InvokeModel](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModel.html) – Submit a prompt and generate a response. The request body is model-specific. To generate streaming responses, use [InvokeModelWithResponseStream](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModelWithResponseStream.html).
+ [Converse](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_Converse.html) – Submit a prompt and generate responses with a structure unified across all models. Model-specific request fields can be specified in the `additionalModelRequestFields` field. You can also include system prompts and previous conversation for context. To generate streaming responses, use [ConverseStream](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_ConverseStream.html).
+ [StartAsyncInvoke](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_StartAsyncInvoke.html) – Submit a prompt and generate a response asynchronously; you retrieve the result later. Used to generate videos.
+ [InvokeModelWithBidirectionalStream](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModelWithBidirectionalStream.html) – Submit prompts and receive responses over a bidirectional stream, so you can send and receive content while the connection stays open. Used for real-time, speech-to-speech conversations with supported models, such as Amazon Nova Sonic.
+ OpenAI Chat completions API – Use the [OpenAI Chat Completions API](https://platform.openai.com/docs/api-reference/chat/create) with models supported by Amazon Bedrock to generate a response.

**Note**  
Restrictions apply to the following operations: `InvokeModel`, `InvokeModelWithResponseStream`, `Converse`, and `ConverseStream`. See [API restrictions](inference-api-restrictions.md) for details.
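
The unified structure that `Converse` provides can be sketched as follows in the SDK for Python (Boto3). This is a minimal illustration, not the full API: the model ID and the `top_k` value are example choices, and the call at the end assumes AWS credentials are configured.

```python
# Sketch of the unified Converse request shape. The model ID and the
# top_k value below are illustrative; model-specific parameters go in
# additionalModelRequestFields instead of a model-specific request body.

def build_converse_kwargs(model_id, prompt):
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": 512, "temperature": 0.5},
        "additionalModelRequestFields": {"top_k": 200},  # model-specific field
    }

def run():
    import boto3  # requires AWS credentials to be configured
    client = boto3.client("bedrock-runtime", region_name="us-east-1")
    kwargs = build_converse_kwargs(
        "anthropic.claude-3-haiku-20240307-v1:0",
        "Describe the purpose of a 'hello world' program in one line.",
    )
    response = client.converse(**kwargs)
    print(response["output"]["message"]["content"][0]["text"])

# Call run() once credentials are set up.
```

The same `messages` and `inferenceConfig` shape works across models; only the contents of `additionalModelRequestFields` change per model.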

For model inference, you need to determine the following parameters:
+ Model ID – The ID or Amazon Resource Name (ARN) of the model or inference profile to use in the `modelId` field for inference. The following table describes how to find IDs for different types of resources:  
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/bedrock/latest/userguide/inference-api.html)
+ Request body – Contains the inference parameters for a model and other configurations. Each base model has its own inference parameters. The inference parameters for a custom or provisioned model depend on the base model from which it was created. For more information, see [Inference request parameters and response fields for foundation models](model-parameters.md).
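
As a minimal sketch of how these two parameters come together in the SDK for Python (Boto3): the `modelId` selects the model and the `body` carries that model's native inference parameters. The Amazon Titan Text request structure shown here is one example; other models use different body fields, and the call at the end assumes configured AWS credentials.

```python
# Sketch: modelId selects the model; body carries the model's native
# inference parameters (Amazon Titan Text's structure shown here).
import json

def build_titan_body(prompt, max_tokens=512):
    # Amazon Titan Text's native request structure; other models differ.
    return json.dumps({
        "inputText": prompt,
        "textGenerationConfig": {"maxTokenCount": max_tokens, "temperature": 0.5},
    })

def run():
    import boto3  # requires AWS credentials to be configured
    client = boto3.client("bedrock-runtime", region_name="us-east-1")
    response = client.invoke_model(
        modelId="amazon.titan-text-premier-v1:0",
        body=build_titan_body("Describe a 'hello world' program in one line."),
    )
    print(json.loads(response["body"].read())["results"][0]["outputText"])

# Call run() once credentials are set up.
```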

Select a topic to learn how to use the model invocation APIs.

**Topics**
+ [Submit a single prompt with InvokeModel](inference-invoke.md)
+ [Invoke a model with the OpenAI Chat Completions API](inference-chat-completions.md)
+ [Carry out a conversation with the Converse API operations](conversation-inference.md)
+ [API restrictions](inference-api-restrictions.md)

# Submit a single prompt with InvokeModel
<a name="inference-invoke"></a>

You run inference on a single prompt by using the [InvokeModel](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModel.html) and [InvokeModelWithResponseStream](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModelWithResponseStream.html) API operations and specifying a model. Amazon Bedrock models differ in whether they accept text, image, or video inputs and whether they produce text, image, or embeddings outputs. Some models can return the response in a stream. To check a model's support for inputs, outputs, and streaming, do one of the following:
+ Check the value in the **Input modalities**, **Output modalities**, or **Streaming supported** columns for a model at [Supported foundation models in Amazon Bedrock](models-supported.md).
+ Send a [GetFoundationModel](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_GetFoundationModel.html) request with the model ID and check the values in the `inputModalities`, `outputModalities`, and `responseStreamingSupported` fields.
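
The programmatic check can be sketched as follows with the SDK for Python (Boto3). Note the use of the `bedrock` control-plane client (not `bedrock-runtime`); the model ID is an example, and the API call at the end assumes configured AWS credentials.

```python
# Sketch: summarize a model's modalities and streaming support from the
# modelDetails object of a GetFoundationModel response.

def summarize_model(model_details):
    # model_details is the "modelDetails" object of a GetFoundationModel response.
    return {
        "inputs": model_details.get("inputModalities", []),
        "outputs": model_details.get("outputModalities", []),
        "streaming": bool(model_details.get("responseStreamingSupported", False)),
    }

def run():
    import boto3  # requires AWS credentials to be configured
    bedrock = boto3.client("bedrock", region_name="us-east-1")
    details = bedrock.get_foundation_model(
        modelIdentifier="amazon.titan-text-premier-v1:0"
    )["modelDetails"]
    print(summarize_model(details))

# Call run() once credentials are set up.
```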

Run model inference on a prompt by sending an [InvokeModel](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModel.html) or [InvokeModelWithResponseStream](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModelWithResponseStream.html) request with an [Amazon Bedrock runtime endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#br-rt).

**Note**  
Restrictions apply to the following operations: `InvokeModel`, `InvokeModelWithResponseStream`, `Converse`, and `ConverseStream`. See [API restrictions](inference-api-restrictions.md) for details.

The following fields are required:



| Field | Use case | 
| --- | --- | 
| modelId | To specify the model, inference profile, or prompt from Prompt management to use. To learn how to find this value, see [Submit prompts and generate responses using the API](inference-api.md). | 
| body | To specify the inference parameters for a model. To see inference parameters for different models, see [Inference request parameters and response fields for foundation models](model-parameters.md). If you specify a prompt from Prompt management in the modelId field, omit this field (if you include it, it will be ignored). | 

The following fields are optional:



| Field | Use case | 
| --- | --- | 
| accept | To specify the media type for the response body. For more information, see Media Types on the [Swagger website](https://swagger.io/specification/). | 
| contentType | To specify the media type for the request body. For more information, see Media Types on the [Swagger website](https://swagger.io/specification/). | 
| performanceConfigLatency | To specify whether to optimize a model for latency. For more information, see [Optimize model inference for latency](latency-optimized-inference.md). | 
| guardrailIdentifier | To specify a guardrail to apply to the prompt and response. For more information, see [Test your guardrail](guardrails-test.md). | 
| guardrailVersion | To specify a guardrail to apply to the prompt and response. For more information, see [Test your guardrail](guardrails-test.md). | 
| trace | To specify whether to return the trace for the guardrail you specify. For more information, see [Test your guardrail](guardrails-test.md). | 
| serviceTier | To specify the service tier for a request. For more information, see [Service tiers for optimizing performance and cost](service-tiers-inference.md). | 
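
As a sketch of how the optional fields attach to an `invoke_model` call in the SDK for Python (Boto3): the guardrail identifier and version below are hypothetical placeholders you would replace with your own, and the call at the end assumes configured AWS credentials.

```python
# Sketch: optional InvokeModel fields as Boto3 keyword arguments.
# The guardrail identifier and version used below are hypothetical.
import json

def build_invoke_kwargs(model_id, body, guardrail_id=None, guardrail_version=None):
    kwargs = {
        "modelId": model_id,
        "body": json.dumps(body),
        "contentType": "application/json",  # media type of the request body
        "accept": "application/json",       # media type of the response body
    }
    if guardrail_id and guardrail_version:
        kwargs["guardrailIdentifier"] = guardrail_id
        kwargs["guardrailVersion"] = guardrail_version
        kwargs["trace"] = "ENABLED"  # return the guardrail trace
    return kwargs

def run():
    import boto3  # requires AWS credentials to be configured
    client = boto3.client("bedrock-runtime", region_name="us-east-1")
    kwargs = build_invoke_kwargs(
        "amazon.titan-text-premier-v1:0",
        {"inputText": "Hello"},
        guardrail_id="your-guardrail-id",  # hypothetical placeholder
        guardrail_version="1",
    )
    response = client.invoke_model(**kwargs)
    print(json.loads(response["body"].read()))

# Call run() once credentials are set up.
```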

## Invoke model code examples
<a name="inference-example-invoke"></a>

This topic provides some basic examples for running inference using a single prompt with the [InvokeModel](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModel.html) API. For more examples with different models, visit the following resources:
+ Pick an example under the [Code examples for Amazon Bedrock Runtime using AWS SDKs](service_code_examples_bedrock-runtime.md) topic.
+ Visit the inference parameter reference for the desired model at [Inference request parameters and response fields for foundation models](model-parameters.md).

The following examples assume that you've set up programmatic access such that you automatically authenticate to the AWS CLI and the SDK for Python (Boto3) in a default AWS Region when you run these examples. For information on setting up programmatic access, see [Get started with the API](getting-started-api.md).

**Note**  
Review the following points before trying out the examples:  
+ Test these examples in US East (N. Virginia) (us-east-1), which supports all the models used in the examples.
+ The `body` parameter can be large, so for some CLI examples, you'll be asked to create a JSON file and pass that file to the `--body` argument instead of specifying the body on the command line.
+ For the image and video examples, you'll be asked to use your own image and video. The examples assume that your image file is named *image.png* and that your video file is named *video.mp4*.
+ You might have to convert images or videos into a base64-encoded string or upload them to an Amazon S3 location. In the examples, replace the placeholders with the actual base64-encoded string or S3 location.
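
The base64 conversion mentioned above can be done with a short, self-contained Python helper, equivalent to the `base64 -i image.png` CLI commands used in the examples below:

```python
# Sketch: convert a local media file into the base64-encoded string
# that some request bodies expect (no trailing newline).
import base64

def file_to_base64(path):
    with open(path, "rb") as f:
        # decode("utf-8") yields a plain str, ready to paste into a
        # JSON request body in place of the ${...} placeholder.
        return base64.b64encode(f.read()).decode("utf-8")
```

For example, `file_to_base64("image.png")` returns the string to substitute into an `inputImage` or `data` field.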

Expand a section to try some basic code examples.

### Generate text with a text prompt
<a name="w2aac15c32c33c17c19c13b1"></a>

The following examples generate a text response to a text prompt using the Amazon Titan Text Premier model. Choose the tab for your preferred method, and then follow the steps:

------
#### [ CLI ]

Run the following command in a terminal and find the generated response in a file called *invoke-model-output.txt*.

```
aws bedrock-runtime invoke-model \
    --model-id amazon.titan-text-premier-v1:0 \
    --body '{
        "inputText": "Describe the purpose of a 'hello world' program in one line.",
        "textGenerationConfig": {
            "maxTokenCount": 512,
            "temperature": 0.5
        }
    }' \
    --cli-binary-format raw-in-base64-out \
    invoke-model-output.txt
```

------
#### [ Python ]

Run the following Python code example to generate a text response:

```
# Use the native inference API to send a text message to Amazon Titan Text.

import boto3
import json

from botocore.exceptions import ClientError

# Create a Bedrock Runtime client in the AWS Region of your choice.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Set the model ID, e.g., Titan Text Premier.
model_id = "amazon.titan-text-premier-v1:0"

# Define the prompt for the model.
prompt = "Describe the purpose of a 'hello world' program in one line."

# Format the request payload using the model's native structure.
native_request = {
    "inputText": prompt,
    "textGenerationConfig": {
        "maxTokenCount": 512,
        "temperature": 0.5,
    },
}

# Convert the native request to JSON.
request = json.dumps(native_request)

try:
    # Invoke the model with the request.
    response = client.invoke_model(modelId=model_id, body=request)

except (ClientError, Exception) as e:
    print(f"ERROR: Can't invoke '{model_id}'. Reason: {e}")
    exit(1)

# Decode the response body.
model_response = json.loads(response["body"].read())

# Extract and print the response text.
response_text = model_response["results"][0]["outputText"]
print(response_text)
```

------

### Generate text with a text prompt using service tier
<a name="w2aac15c32c33c17c19c13b3"></a>

The following examples generate a text response to a text prompt using the OpenAI GPT model with a service tier to prioritize the request. Choose the tab for your preferred method, and then follow the steps:

------
#### [ CLI ]

Run the following command in a terminal and validate the service tier in the response.

```
aws bedrock-runtime invoke-model \
    --model-id openai.gpt-oss-120b-1:0 \
    --body '{
        "messages": [
            {
                "role": "user",
                "content": "Describe the purpose of a '\''hello world'\'' program in one line."
            }
        ],
        "max_tokens": 512,
        "temperature": 0.7
    }' \
    --content-type application/json \
    --accept application/json \
    --service-tier priority \
    --cli-binary-format raw-in-base64-out
```

------
#### [ Python ]

Run the following Python code example to generate a text response with service tier:

```
import boto3
import json

# Create a Bedrock Runtime client
bedrock_runtime = boto3.client(
    service_name="bedrock-runtime",
    region_name="us-east-1"
)

# Define the model ID and request body
model_id = "openai.gpt-oss-120b-1:0"
body = json.dumps({
    "messages": [
        {
            "role": "user",
            "content": "Describe the purpose of a 'hello world' program in one line."
        }
    ],
    "max_tokens": 512,
    "temperature": 0.7
})

# Make the request with service tier
response = bedrock_runtime.invoke_model(
    modelId=model_id,
    body=body,
    contentType="application/json",
    accept="application/json",
    serviceTier="priority"
)

# Parse and print the response
response_body = json.loads(response["body"])
print(response_body)
```

------

### Generate an image with a text prompt
<a name="w2aac15c32c33c17c19c13b5"></a>

The following code examples generate an image from a text prompt. The CLI example uses the Stable Diffusion XL 1.0 model and the Python example uses the Amazon Titan Image Generator G1 V2 model. Choose the tab for your preferred method, and then follow the steps:

------
#### [ CLI ]

Run the following command in a terminal and find the generated response in a file called *invoke-model-output.txt*. The bytes that represent the image can be found in the `base64` field in the response:

```
aws bedrock-runtime invoke-model \
    --model-id stability.stable-diffusion-xl-v1 \
    --body '{
        "text_prompts": [{"text": "A stylized picture of a cute old steampunk robot."}],
        "style_preset": "photographic",
        "seed": 0,
        "cfg_scale": 10,
        "steps": 30
    }' \
    --cli-binary-format raw-in-base64-out \
    invoke-model-output.txt
```

------
#### [ Python ]

Run the following Python code example to generate an image and find the resulting image file (for example, *titan\_1.png*) in a folder called *output*.

```
# Use the native inference API to create an image with Amazon Titan Image Generator

import base64
import boto3
import json
import os
import random

# Create a Bedrock Runtime client in the AWS Region of your choice.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Set the model ID, e.g., Titan Image Generator G1.
model_id = "amazon.titan-image-generator-v2:0"

# Define the image generation prompt for the model.
prompt = "A stylized picture of a cute old steampunk robot."

# Generate a random seed.
seed = random.randint(0, 2147483647)

# Format the request payload using the model's native structure.
native_request = {
    "taskType": "TEXT_IMAGE",
    "textToImageParams": {"text": prompt},
    "imageGenerationConfig": {
        "numberOfImages": 1,
        "quality": "standard",
        "cfgScale": 8.0,
        "height": 512,
        "width": 512,
        "seed": seed,
    },
}

# Convert the native request to JSON.
request = json.dumps(native_request)

# Invoke the model with the request.
response = client.invoke_model(modelId=model_id, body=request)

# Decode the response body.
model_response = json.loads(response["body"].read())

# Extract the image data.
base64_image_data = model_response["images"][0]

# Save the generated image to a local folder.
i, output_dir = 1, "output"
if not os.path.exists(output_dir):
    os.makedirs(output_dir)
while os.path.exists(os.path.join(output_dir, f"titan_{i}.png")):
    i += 1

image_data = base64.b64decode(base64_image_data)

image_path = os.path.join(output_dir, f"titan_{i}.png")
with open(image_path, "wb") as file:
    file.write(image_data)

print(f"The generated image has been saved to {image_path}")
```

------

### Generate embeddings from text
<a name="w2aac15c32c33c17c19c13b9"></a>

The following examples use the Amazon Titan Text Embeddings V2 model to generate binary embeddings for a text input. Choose the tab for your preferred method, and then follow the steps:

------
#### [ CLI ]

Run the following command in a terminal and find the generated response in a file called *invoke-model-output.txt*. The resulting embeddings are in the `binary` field.

```
aws bedrock-runtime invoke-model \
    --model-id amazon.titan-embed-text-v2:0 \
    --body '{
        "inputText": "What are the different services that you offer?",
        "embeddingTypes": ["binary"]
    }' \
    --cli-binary-format raw-in-base64-out \
    invoke-model-output.txt
```

------
#### [ Python ]

Run the following Python code example to generate embeddings for the provided text:

```
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to generate an embedding with the Amazon Titan Text Embeddings V2 Model
"""

import json
import logging
import boto3


from botocore.exceptions import ClientError


logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def generate_embedding(model_id, body):
    """
    Generate an embedding with the vector representation of a text input using Amazon Titan Text Embeddings V2 on demand.
    Args:
        model_id (str): The model ID to use.
        body (str) : The request body to use.
    Returns:
        response (JSON): The embedding created by the model and the number of input tokens.
    """

    logger.info("Generating an embedding with Amazon Titan Text Embeddings V2 model %s", model_id)

    bedrock = boto3.client(service_name='bedrock-runtime')

    accept = "application/json"
    content_type = "application/json"

    response = bedrock.invoke_model(
        body=body, modelId=model_id, accept=accept, contentType=content_type
    )

    response_body = json.loads(response.get('body').read())

    return response_body


def main():
    """
    Entrypoint for Amazon Titan Embeddings V2 - Text example.
    """

    logging.basicConfig(level=logging.INFO,
                        format="%(levelname)s: %(message)s")

    model_id = "amazon.titan-embed-text-v2:0"
    input_text = "What are the different services that you offer?"


    # Create request body.
    body = json.dumps({
        "inputText": input_text,
        "embeddingTypes": ["binary"]
    })


    try:

        response = generate_embedding(model_id, body)

        print(f"Generated an embedding: {response['embeddingsByType']['binary']}") # returns binary embedding
        print(f"Input text: {input_text}")
        print(f"Input Token count:  {response['inputTextTokenCount']}")

    except ClientError as err:
        message = err.response["Error"]["Message"]
        logger.error("A client error occurred: %s", message)
        print("A client error occured: " +
              format(message))

    else:
        print(f"Finished generating an embedding with Amazon Titan Text Embeddings V2 model {model_id}.")


if __name__ == "__main__":
    main()
```

------

### Generate embeddings from an image
<a name="w2aac15c32c33c17c19c13c11"></a>

The following examples use the Amazon Titan Multimodal Embeddings G1 model to generate embeddings for an image input. Choose the tab for your preferred method, and then follow the steps:

------
#### [ CLI ]

Open a terminal and do the following:

1. Convert an image titled *image.png* in your current folder into a base64-encoded string and write it to a file titled *image.txt* by running the following command:

   ```
   base64 -i image.png -o image.txt
   ```

1. Create a JSON file called *image-input-embeddings-output.json* and paste the following JSON, replacing *${image-base64}* with the contents of the *image.txt* file (make sure there is no new line at the end of the string):

   ```
   {
       "inputImage": "${image-base64}",
       "embeddingConfig": {
           "outputEmbeddingLength": 256
       }
   }
   ```

1. Run the following command, specifying the *image-input-embeddings-output.json* file as the body.

   ```
   aws bedrock-runtime invoke-model \
       --model-id amazon.titan-embed-image-v1 \
       --body file://image-input-embeddings-output.json \
       --cli-binary-format raw-in-base64-out \
       invoke-model-output.txt
   ```

1. Find the resulting embeddings in the *invoke-model-output.txt* file.

------
#### [ Python ]

In the following Python script, replace */path/to/image* with the path to an actual image. Then run the script to generate embeddings:

```
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to generate embeddings from an image with the Amazon Titan Multimodal Embeddings G1 model (on demand).
"""

import base64
import json
import logging
import boto3

from botocore.exceptions import ClientError

class EmbedError(Exception):
    "Custom exception for errors returned by Amazon Titan Multimodal Embeddings G1"

    def __init__(self, message):
        self.message = message

logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def generate_embeddings(model_id, body):
    """
    Generate a vector of embeddings for an image input using Amazon Titan Multimodal Embeddings G1 on demand.
    Args:
        model_id (str): The model ID to use.
        body (str) : The request body to use.
    Returns:
        response (JSON): The embeddings that the model generated, token information, and the
        reason the model stopped generating embeddings.
    """

    logger.info("Generating embeddings with Amazon Titan Multimodal Embeddings G1 model %s", model_id)

    bedrock = boto3.client(service_name='bedrock-runtime')

    accept = "application/json"
    content_type = "application/json"

    response = bedrock.invoke_model(
        body=body, modelId=model_id, accept=accept, contentType=content_type
    )

    response_body = json.loads(response.get('body').read())

    finish_reason = response_body.get("message")

    if finish_reason is not None:
        raise EmbedError(f"Embeddings generation error: {finish_reason}")

    return response_body


def main():
    """
    Entrypoint for Amazon Titan Multimodal Embeddings G1 example.
    """

    logging.basicConfig(level=logging.INFO,
                        format="%(levelname)s: %(message)s")

    # Read image from file and encode it as base64 string.
    with open("/path/to/image", "rb") as image_file:
        input_image = base64.b64encode(image_file.read()).decode('utf8')

    model_id = 'amazon.titan-embed-image-v1'
    output_embedding_length = 256

    # Create request body.
    body = json.dumps({
        "inputImage": input_image,
        "embeddingConfig": {
            "outputEmbeddingLength": output_embedding_length
        }
    })


    try:

        response = generate_embeddings(model_id, body)

        print(f"Generated image embeddings of length {output_embedding_length}: {response['embedding']}")

    except ClientError as err:
        message = err.response["Error"]["Message"]
        logger.error("A client error occurred: %s", message)
        print("A client error occured: " +
              format(message))
        
    except EmbedError as err:
        logger.error(err.message)
        print(err.message)

    else:
        print(f"Finished generating image embeddings with Amazon Titan Multimodal Embeddings G1 model {model_id}.")


if __name__ == "__main__":
    main()
```

------

### Generate a text response to an image with an accompanying text prompt
<a name="w2aac15c32c33c17c19c13c13"></a>

Choose the tab for your preferred method, and then follow the steps:

------
#### [ CLI ]

The following example uses the Anthropic Claude 3 Haiku model to generate a response, given an image and a text prompt that asks the contents of the image. Open a terminal and do the following:

1. Convert an image titled *image.png* in your current folder into a base64-encoded string and write it to a file titled *image.txt* by running the following command:

   ```
   base64 -i image.png -o image.txt
   ```

1. Create a JSON file called *image-text-input.json* and paste the following JSON, replacing *${image-base64}* with the contents of the *image.txt* file (make sure there is no new line at the end of the string):

   ```
   {
       "anthropic_version": "bedrock-2023-05-31",
       "max_tokens": 1000,
       "messages": [
           {               
               "role": "user",
               "content": [
                   {
                       "type": "image",
                       "source": {
                           "type": "base64",
                           "media_type": "image/png", 
                           "data": "${image-base64}"
                       }
                   },
                   {
                       "type": "text",
                       "text": "What's in this image?"
                   }
               ]
           }
       ]
   }
   ```

1. Run the following command to generate a text output, based on the image and the accompanying text prompt, to a file called *invoke-model-output.txt*:

   ```
   aws bedrock-runtime invoke-model \
       --model-id anthropic.claude-3-haiku-20240307-v1:0 \
       --body file://image-text-input.json \
       --cli-binary-format raw-in-base64-out \
       invoke-model-output.txt
   ```

1. Find the output in the *invoke-model-output.txt* file in the current folder.

------
#### [ Python ]

In the following Python script, replace */path/to/image.png* with the actual path to the image before running the script:

```
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to run a multimodal prompt with Anthropic Claude (on demand) and InvokeModel.
"""

import json
import logging
import base64
import boto3

from botocore.exceptions import ClientError


logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def run_multi_modal_prompt(bedrock_runtime, model_id, messages, max_tokens):
    """
    Invokes a model with a multimodal prompt.
    Args:
        bedrock_runtime: The Amazon Bedrock boto3 client.
        model_id (str): The model ID to use.
        messages (JSON) : The messages to send to the model.
        max_tokens (int) : The maximum number of tokens to generate.
    Returns:
        response_body (JSON): The response from the model.
    """



    body = json.dumps(
        {
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": max_tokens,
            "messages": messages
        }
    )

    response = bedrock_runtime.invoke_model(
        body=body, modelId=model_id)
    response_body = json.loads(response.get('body').read())

    return response_body


def main():
    """
    Entrypoint for Anthropic Claude multimodal prompt example.
    """

    try:

        bedrock_runtime = boto3.client(service_name='bedrock-runtime')

        model_id = 'anthropic.claude-3-sonnet-20240229-v1:0'
        max_tokens = 1000
        input_text = "What's in this image?"
        input_image = "/path/to/image" # Replace with actual path to image file
 
        # Read reference image from file and encode as base64 strings.
        image_ext = input_image.split(".")[-1]
        with open(input_image, "rb") as image_file:
            content_image = base64.b64encode(image_file.read()).decode('utf8')

        message = {
            "role": "user",
            "content": [
                {
                    "type": "image", 
                    "source": {
                        "type": "base64",
                        "media_type": f"image/{image_ext}", 
                        "data": content_image
                    }
                },
                {
                    "type": "text", 
                    "text": input_text
                }
            ]
        }

    
        messages = [message]

        response = run_multi_modal_prompt(
            bedrock_runtime, model_id, messages, max_tokens)
        print(json.dumps(response, indent=4))

    except ClientError as err:
        message = err.response["Error"]["Message"]
        logger.error("A client error occurred: %s", message)
        print("A client error occurred: " +
              format(message))


if __name__ == "__main__":
    main()
```

------

### Generate a text response to a video uploaded to Amazon S3 with an accompanying text prompt
<a name="w2aac15c32c33c17c19c13c15"></a>

The following examples show how to generate a response with the Amazon Nova Lite model, given a video you upload to an S3 bucket and an accompanying text prompt.

**Prerequisite:** Upload a video titled *video.mp4* to an Amazon S3 bucket in your account by following the steps at [Uploading objects](https://docs.aws.amazon.com/AmazonS3/latest/userguide/upload-objects.html#upload-objects-procedure) in the Amazon Simple Storage Service User Guide. Take note of the S3 URI of the video.

Choose the tab for your preferred method, and then follow the steps:

------
#### [ CLI ]

Open a terminal and run the following command, replacing *s3://amzn-s3-demo-bucket/video.mp4* with the actual S3 location of your video:

```
aws bedrock-runtime invoke-model \
    --model-id amazon.nova-lite-v1:0 \
    --body '{
        "messages": [          
            {               
                "role": "user",
                "content": [      
                    {                       
                        "video": {     
                            "format": "mp4",   
                            "source": {
                                "s3Location": {
                                    "uri": "s3://amzn-s3-demo-bucket/video.mp4"
                                }
                            }
                        }                                    
                    },
                    {
                        "text": "What happens in this video?"
                    }
                ]
            }                              
        ]                  
    }' \
    --cli-binary-format raw-in-base64-out \
    invoke-model-output.txt
```

Find the output in the *invoke-model-output.txt* file in the current folder.

------
#### [ Python ]

In the following Python script, replace *s3://amzn-s3-demo-bucket/video.mp4* with the actual S3 location of your video. Then run the script:

```
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to run a multimodal prompt with Nova Lite (on demand) and InvokeModel.
"""

import json
import logging
import base64
import boto3

from botocore.exceptions import ClientError


logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def run_multi_modal_prompt(bedrock_runtime, model_id, messages, max_tokens):
    """
    Invokes a model with a multimodal prompt.
    Args:
        bedrock_runtime: The Amazon Bedrock boto3 client.
        model_id (str): The model ID to use.
        messages (JSON) : The messages to send to the model.
        max_tokens (int) : The maximum number of tokens to generate.
    Returns:
        response_body (JSON): The response from the model.
    """

    body = json.dumps(
        {
            "messages": messages,
            "inferenceConfig": {
                "maxTokens": max_tokens
            }
        }
    )

    response = bedrock_runtime.invoke_model(
        body=body, modelId=model_id)
    response_body = json.loads(response.get('body').read())

    return response_body


def main():
    """
    Entrypoint for Nova Lite video prompt example.
    """

    try:

        bedrock_runtime = boto3.client(service_name='bedrock-runtime')

        model_id = "amazon.nova-lite-v1:0"
        max_tokens = 1000
        input_video_s3_uri = "s3://amzn-s3-demo-bucket/video.mp4" # Replace with real S3 URI
        video_ext = input_video_s3_uri.split(".")[-1]
        input_text = "What happens in this video?"

        message = {
            "role": "user",
            "content": [
                {
                    "video": {
                        "format": video_ext,
                        "source": {
                            "s3Location": {
                                "uri": input_video_s3_uri
                            }
                        }
                    }
                },
                {
                    "text": input_text
                }
            ]
        }

    
        messages = [message]

        response = run_multi_modal_prompt(
            bedrock_runtime, model_id, messages, max_tokens)
        print(json.dumps(response, indent=4))

    except ClientError as err:
        message = err.response["Error"]["Message"]
        logger.error("A client error occurred: %s", message)
        print(f"A client error occurred: {message}")


if __name__ == "__main__":
    main()
```

------

### Generate a text response to a video converted to a base64-encoded string with an accompanying text prompt
<a name="w2aac15c32c33c17c19c13c17"></a>

The following examples show how to generate a response with the Amazon Nova Lite model, given a video converted to a base64-encoded string and an accompanying text prompt. Choose the tab for your preferred method, and then follow the steps:

------
#### [ CLI ]

Do the following:

1. Convert a video titled *video.mp4* in your current folder into base64 by running the following command:

   ```
   base64 -i video.mp4 -o video.txt
   ```

1. Create a JSON file called *video-text-input.json* and paste the following JSON, replacing *${video-base64}* with the contents of the `video.txt` file (make sure there is no new line at the end):

   ```
   {
       "messages": [          
           {               
               "role": "user",
               "content": [      
                   {                       
                       "video": {     
                           "format": "mp4",   
                           "source": {
                               "bytes": ${video-base64}
                           }
                       }                                    
                   },
                   {
                       "text": "What happens in this video?"
                   }
               ]
           }                              
       ]                  
   }
   ```

1. Run the following command to generate a response based on the video and the accompanying text prompt, and write the output to a file called *invoke-model-output.txt*:

   ```
   aws bedrock-runtime invoke-model \
       --model-id amazon.nova-lite-v1:0 \
       --body file://video-text-input.json \
       --cli-binary-format raw-in-base64-out \
       invoke-model-output.txt
   ```

1. Find the output in the *invoke-model-output.txt* file in the current folder.

------
#### [ Python ]

In the following Python script, replace */path/to/video.mp4* with the actual path to the video. Then run the script:

```
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to run a multimodal prompt with Nova Lite (on demand) and InvokeModel.
"""

import json
import logging
import base64
import boto3

from botocore.exceptions import ClientError


logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def run_multi_modal_prompt(bedrock_runtime, model_id, messages, max_tokens):
    """
    Invokes a model with a multimodal prompt.
    Args:
        bedrock_runtime: The Amazon Bedrock boto3 client.
        model_id (str): The model ID to use.
        messages (JSON): The messages to send to the model.
        max_tokens (int): The maximum number of tokens to generate.
    Returns:
        The response body from the model.
    """

    body = json.dumps(
        {
            "messages": messages,
            "inferenceConfig": {
                "maxTokens": max_tokens
            }
        }
    )

    response = bedrock_runtime.invoke_model(
        body=body, modelId=model_id)
    response_body = json.loads(response.get('body').read())

    return response_body


def main():
    """
    Entrypoint for Nova Lite video prompt example.
    """

    try:

        bedrock_runtime = boto3.client(service_name='bedrock-runtime')

        model_id = "amazon.nova-lite-v1:0"
        max_tokens = 1000
        input_video = "/path/to/video.mp4" # Replace with real path to video
        video_ext = input_video.split(".")[-1]
        input_text = "What happens in this video?"

        # Read reference video from file and encode as base64 string.
        with open(input_video, "rb") as video_file:
            content_video = base64.b64encode(video_file.read()).decode('utf8')

        message = {
            "role": "user",
            "content": [
                {
                    "video": {
                        "format": video_ext,
                        "source": {
                            "bytes": content_video
                        }
                    }
                },
                {
                    "text": input_text
                }
            ]
        }

        messages = [message]

        response = run_multi_modal_prompt(
            bedrock_runtime, model_id, messages, max_tokens)
        print(json.dumps(response, indent=4))

    except ClientError as err:
        message = err.response["Error"]["Message"]
        logger.error("A client error occurred: %s", message)
        print(f"A client error occurred: {message}")


if __name__ == "__main__":
    main()
```

------

## Invoke model with streaming code example
<a name="inference-examples-stream"></a>

**Note**  
The AWS CLI does not support streaming.

The following example shows how to use the [InvokeModelWithResponseStream](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModelWithResponseStream.html) API to generate streaming text with Python using the prompt *write an essay for living on mars in 1000 words*.

```
import boto3
import json

brt = boto3.client(service_name='bedrock-runtime')

body = json.dumps({
    'prompt': '\n\nHuman: write an essay for living on mars in 1000 words\n\nAssistant:',
    'max_tokens_to_sample': 4000
})
                   
response = brt.invoke_model_with_response_stream(
    modelId='anthropic.claude-v2', 
    body=body
)
    
stream = response.get('body')
if stream:
    for event in stream:
        chunk = event.get('chunk')
        if chunk:
            print(json.loads(chunk.get('bytes').decode()))
```
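
Each printed chunk is a decoded JSON object; for this Anthropic Claude text-completion format, the generated text arrives in a `completion` field. The following is a minimal sketch of joining the streamed pieces into one string, using placeholder chunk payloads in place of a live stream:

```
# Sketch: join streamed chunk payloads into one string.
# The placeholder dicts below stand in for the decoded JSON objects that the
# streaming loop prints; for Anthropic Claude text completions, each carries
# the generated text in a "completion" field.
chunks = [
    {"completion": "Living on Mars "},
    {"completion": "would be a monumental challenge."},
]

essay = "".join(chunk.get("completion", "") for chunk in chunks)
print(essay)
```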

# Invoke a model with the OpenAI Chat Completions API
<a name="inference-chat-completions"></a>

You can run model inference using the [OpenAI Create chat completion API](https://platform.openai.com/docs/api-reference/chat/create) with Amazon Bedrock models.

You can call the Create chat completion API in the following ways:
+ Make an HTTP request with an Amazon Bedrock Runtime endpoint.
+ Use an OpenAI SDK request with an Amazon Bedrock Runtime endpoint.

Select a topic to learn more:

**Topics**
+ [Supported models and Regions for the OpenAI Chat Completions API](#inference-chat-completions-supported)
+ [Prerequisites to use the Chat Completions API](#inference-chat-completions-prereq)
+ [Create a chat completion](#inference-chat-completions-create)
+ [Include a guardrail in a chat completion](#inference-chat-completions-guardrails)

## Supported models and Regions for the OpenAI Chat Completions API
<a name="inference-chat-completions-supported"></a>

You can use the Create chat completion API with all OpenAI models supported in Amazon Bedrock and in the AWS Regions that support these models. For more information about supported models and regions, see [Supported foundation models in Amazon Bedrock](models-supported.md).

## Prerequisites to use the Chat Completions API
<a name="inference-chat-completions-prereq"></a>

To see prerequisites for using the Chat Completions API, choose the tab for your preferred method, and then follow the steps:

------
#### [ OpenAI SDK ]
+ **Authentication** – The OpenAI SDK only supports authentication with an Amazon Bedrock API key. Generate an Amazon Bedrock API key to authenticate your request. To learn about Amazon Bedrock API keys and how to generate them, see the API keys section in the Build chapter.
+ **Endpoint** – Find the endpoint that corresponds to the AWS Region to use in [Amazon Bedrock Runtime endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#br-rt). If you use an AWS SDK, you might only need to specify the region code and not the whole endpoint when you set up the client.
+ **Install an OpenAI SDK** – For more information, see [Libraries](https://platform.openai.com/docs/libraries) in the OpenAI documentation.

------
#### [ HTTP request ]
+ **Authentication** – You can authenticate with either your AWS credentials or with an Amazon Bedrock API key.

  Set up your AWS credentials or generate an Amazon Bedrock API key to authenticate your request.
  + To learn about setting up your AWS credentials, see [Programmatic access with AWS security credentials](https://docs.aws.amazon.com/IAM/latest/UserGuide/security-creds-programmatic-access.html).
  + To learn about Amazon Bedrock API keys and how to generate them, see the API keys section in the Build chapter.
+ **Endpoint** – Find the endpoint that corresponds to the AWS Region to use in [Amazon Bedrock Runtime endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#br-rt). If you use an AWS SDK, you might only need to specify the region code and not the whole endpoint when you set up the client.

------

## Create a chat completion
<a name="inference-chat-completions-create"></a>

Refer to the following resources in the OpenAI documentation for details about the Create chat completion API:
+ [Request body parameters](https://platform.openai.com/docs/api-reference/chat/create)
+ [Response body parameters](https://platform.openai.com/docs/api-reference/chat/object)

**Note**  
Amazon Bedrock currently doesn't support the other OpenAI Chat completion API operations.

To learn how to use the OpenAI Create chat completion API, choose the tab for your preferred method, and then follow the steps:

------
#### [ OpenAI SDK (Python) ]

To create a chat completion with the OpenAI SDK, do the following:

1. Import the OpenAI SDK and set up the client with the following fields:
   + `base_url` – Append `/openai/v1` to the Amazon Bedrock Runtime endpoint, as in the following format:

     ```
     https://${bedrock-runtime-endpoint}/openai/v1
     ```
   + `api_key` – Specify an Amazon Bedrock API key.
   + `default_headers` – If you need to include any headers, you can include them as key-value pairs in this object. You can alternatively specify headers in the `extra_headers` field when making a specific API call.

1. Use the `chat.completions.create()` method with the client and minimally specify the `model` and `messages` in the request body.

The following example calls the Create chat completion API in `us-west-2`. Replace *$AWS_BEARER_TOKEN_BEDROCK* with your actual API key.

```
from openai import OpenAI

client = OpenAI(
    base_url="https://bedrock-runtime.us-west-2.amazonaws.com/openai/v1", 
    api_key="$AWS_BEARER_TOKEN_BEDROCK" # Replace with actual API key
)

completion = client.chat.completions.create(
    model="openai.gpt-oss-20b-1:0",
    messages=[
        {
            "role": "developer",
            "content": "You are a helpful assistant."
        },
        {
            "role": "user",
            "content": "Hello!"
        }
    ]
)

print(completion.choices[0].message)
```

------
#### [ HTTP request ]

To create a chat completion with a direct HTTP request, do the following:

1. Specify the URL by appending `/openai/v1/chat/completions` to the Amazon Bedrock Runtime endpoint, as in the following format:

   ```
   https://${bedrock-runtime-endpoint}/openai/v1/chat/completions
   ```

1. Specify your AWS credentials or an Amazon Bedrock API key in the `Authorization` header.

1. In the request body, specify at least the `model` and `messages` fields.

The following example uses curl to call the Create chat completion API in `us-west-2`. Replace *$AWS_BEARER_TOKEN_BEDROCK* with your actual API key:

```
curl -X POST https://bedrock-runtime.us-west-2.amazonaws.com/openai/v1/chat/completions \
   -H "Content-Type: application/json" \
   -H "Authorization: Bearer $AWS_BEARER_TOKEN_BEDROCK" \
   -d '{
    "model": "openai.gpt-oss-20b-1:0",
    "messages": [
        {
            "role": "developer",
            "content": "You are a helpful assistant."
        },
        {
            "role": "user",
            "content": "Hello!"
        }
    ]
}'
```

------

## Include a guardrail in a chat completion
<a name="inference-chat-completions-guardrails"></a>

To include safeguards in model input and responses, apply a [guardrail](guardrails.md) when running model invocation by including the following [extra parameters](https://github.com/openai/openai-python#undocumented-request-params) as fields in the request body:
+ `extra_headers` – Maps to an object containing the following fields, which specify extra headers in the request:
  + `X-Amzn-Bedrock-GuardrailIdentifier` (required) – The ID of the guardrail.
  + `X-Amzn-Bedrock-GuardrailVersion` (required) – The version of the guardrail.
  + `X-Amzn-Bedrock-Trace` (optional) – Whether or not to enable the guardrail trace.
+ `extra_body` – Maps to an object. In that object, you can include the `amazon-bedrock-guardrailConfig` field, which maps to an object containing the following fields:
  + `tagSuffix` (optional) – Include this field for [input tagging](guardrails-tagging.md).

For more information about these parameters in Amazon Bedrock Guardrails, see [Test your guardrail](guardrails-test.md).

To see examples of using guardrails with OpenAI chat completions, choose the tab for your preferred method, and then follow the steps:

------
#### [ OpenAI SDK (Python) ]

```
import openai
from openai import OpenAIError

# Endpoint for Amazon Bedrock Runtime
bedrock_endpoint = "https://bedrock-runtime.us-west-2.amazonaws.com/openai/v1"

# Model ID
model_id = "openai.gpt-oss-20b-1:0"

# Replace with actual values
bedrock_api_key = "$AWS_BEARER_TOKEN_BEDROCK"
guardrail_id = "GR12345"
guardrail_version = "DRAFT"

client = openai.OpenAI(
    api_key=bedrock_api_key,
    base_url=bedrock_endpoint,
)

try:
    response = client.chat.completions.create(
        model=model_id,
        # Specify guardrail information in the header
        extra_headers={
            "X-Amzn-Bedrock-GuardrailIdentifier": guardrail_id,
            "X-Amzn-Bedrock-GuardrailVersion": guardrail_version,
            "X-Amzn-Bedrock-Trace": "ENABLED",
        },
        # Additional guardrail information can be specified in the body
        extra_body={
            "amazon-bedrock-guardrailConfig": {
                "tagSuffix": "xyz"  # Used for input tagging
            }
        },
        messages=[
            {
                "role": "system",
                "content": "You are a helpful assistant."
            },
            {
                "role": "assistant", 
                "content": "Hello! How can I help you today?"
            },
            {
                "role": "user",
                "content": "What is the weather like today?"
            }
        ]
    )

    request_id = response._request_id
    print(f"Request ID: {request_id}")
    print(response)
    
except OpenAIError as e:
    print(f"An error occurred: {e}")
    if hasattr(e, 'response') and e.response is not None:
        request_id = e.response.headers.get("x-request-id")
        print(f"Request ID: {request_id}")
```

------
#### [ OpenAI SDK (Java) ]

```
import com.openai.client.OpenAIClient;
import com.openai.client.okhttp.OpenAIOkHttpClient;
import com.openai.core.JsonValue;
import com.openai.core.http.HttpResponseFor;
import com.openai.models.chat.completions.ChatCompletion;
import com.openai.models.chat.completions.ChatCompletionCreateParams;

import java.util.Map;

// Endpoint for Amazon Bedrock Runtime
String bedrockEndpoint = "https://bedrock-runtime.us-west-2.amazonaws.com/openai/v1";

// Model ID
String modelId = "openai.gpt-oss-20b-1:0";

// Replace with actual values
String bedrockApiKey = "$AWS_BEARER_TOKEN_BEDROCK";
String guardrailId = "GR12345";
String guardrailVersion = "DRAFT";

OpenAIClient client = OpenAIOkHttpClient.builder()
        .apiKey(bedrockApiKey)
        .baseUrl(bedrockEndpoint)
        .build();

ChatCompletionCreateParams request = ChatCompletionCreateParams.builder()
        .addUserMessage("What is the temperature in Seattle?")
        .model(modelId)
        // Specify additional headers for the guardrail
        .putAdditionalHeader("X-Amzn-Bedrock-GuardrailIdentifier", guardrailId)
        .putAdditionalHeader("X-Amzn-Bedrock-GuardrailVersion", guardrailVersion)
        // Specify additional body parameters for the guardrail
        .putAdditionalBodyProperty(
                "amazon-bedrock-guardrailConfig",
                JsonValue.from(Map.of("tagSuffix", JsonValue.of("xyz"))) // Allows input tagging
        )
        .build();
        
HttpResponseFor<ChatCompletion> rawChatCompletionResponse =
        client.chat().completions().withRawResponse().create(request);

final ChatCompletion chatCompletion = rawChatCompletionResponse.parse();

System.out.println(chatCompletion);
```

------

# Carry out a conversation with the Converse API operations
<a name="conversation-inference"></a>

You can use the Amazon Bedrock Converse API to create conversational applications that send and receive messages to and from an Amazon Bedrock model. For example, you can create a chat bot that maintains a conversation over many turns and uses a persona or tone customization that is unique to your needs, such as a helpful technical support assistant.

To use the Converse API, you use the [Converse](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_Converse.html) or [ConverseStream](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_ConverseStream.html) (for streaming responses) operations to send messages to a model. It is possible to use the existing base inference operations ([InvokeModel](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModel.html) or [InvokeModelWithResponseStream](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModelWithResponseStream.html)) for conversation applications. However, we recommend using the Converse API because it provides a consistent API that works with all Amazon Bedrock models that support messages. This means you can write code once and use it with different models. If a model has unique inference parameters, the Converse API also lets you pass those parameters in a model-specific structure.
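
As a sketch of passing such a model-specific parameter, the following builds keyword arguments for a `Converse` call using `additionalModelRequestFields`. The model ID and the `top_k` field are illustrative assumptions, and the actual service call is shown as a comment:

```
import json

# Sketch: keyword arguments for a Converse call that passes a model-specific
# parameter through additionalModelRequestFields. The model ID and the
# "top_k" field are illustrative assumptions.
request = {
    "modelId": "anthropic.claude-3-haiku-20240307-v1:0",
    "messages": [{"role": "user", "content": [{"text": "Hello!"}]}],
    "inferenceConfig": {"maxTokens": 512, "temperature": 0.5},
    "additionalModelRequestFields": {"top_k": 200},
}

# With a bedrock-runtime client, the call would be:
# response = boto3.client("bedrock-runtime").converse(**request)
print(json.dumps(request, indent=2))
```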

You can use the Converse API to implement [tool use](tool-use.md) and [guardrails](guardrails-use-converse-api.md) in your applications. 

**Note**  
With Mistral AI and Meta models, the Converse API embeds your input in a model-specific prompt template that enables conversations. 
Restrictions apply to the following operations: `InvokeModel`, `InvokeModelWithResponseStream`, `Converse`, and `ConverseStream`. See [API restrictions](inference-api-restrictions.md) for details.

For code examples, see the following:
+ Python examples for this topic – [Converse API examples](conversation-inference-examples.md)
+ Various languages and models – [Code examples for Amazon Bedrock Runtime using AWS SDKs](service_code_examples_bedrock-runtime.md)
+ Java tutorial – [A Java developer's guide to Bedrock's new Converse API](https://community.aws/content/2hUiEkO83hpoGF5nm3FWrdfYvPt/amazon-bedrock-converse-api-java-developer-guide)
+ JavaScript tutorial – [A developer's guide to Bedrock's new Converse API](https://community.aws/content/2dtauBCeDa703x7fDS9Q30MJoBA/amazon-bedrock-converse-api-developer-guide)

**Topics**
+ [Using the Converse API](conversation-inference-call.md)
+ [Converse API examples](conversation-inference-examples.md)

# Using the Converse API
<a name="conversation-inference-call"></a>

To use the Converse API, you call the `Converse` or `ConverseStream` operations to send messages to a model. To call `Converse`, you require permission for the `bedrock:InvokeModel` operation. To call `ConverseStream`, you require permission for the `bedrock:InvokeModelWithResponseStream` operation.
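
For example, a minimal identity-based policy statement granting both permissions for a single model might look like the following sketch (the Region and model in the ARN are placeholders):

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "bedrock:InvokeModel",
                "bedrock:InvokeModelWithResponseStream"
            ],
            "Resource": "arn:aws:bedrock:us-west-2::foundation-model/amazon.nova-lite-v1:0"
        }
    ]
}
```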

**Topics**
+ [Request](#conversation-inference-call-request)
+ [Response](#conversation-inference-call-response)

**Note**  
Restrictions apply to the following operations: `InvokeModel`, `InvokeModelWithResponseStream`, `Converse`, and `ConverseStream`. See [API restrictions](inference-api-restrictions.md) for details.

## Request
<a name="conversation-inference-call-request"></a>

When you make a [Converse](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_Converse.html) request with an [Amazon Bedrock runtime endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#br-rt), you can include the following fields:
+ **modelId** – A required parameter in the header that lets you specify the resource to use for inference.
+ The following fields let you customize the prompt:
  + **messages** – Use to specify the content and role of the prompts.
  + **system** – Use to specify system prompts, which define instructions or context for the model.
  + **inferenceConfig** – Use to specify inference parameters that are common to all models. Inference parameters influence the generation of the response.
  + **additionalModelRequestFields** – Use to specify inference parameters that are specific to the model that you run inference with.
  + **promptVariables** – (If you use a prompt from Prompt management) Use this field to define the variables in the prompt to fill in and the values with which to fill them.
+ The following fields let you customize how the response is returned:
  + **guardrailConfig** – Use this field to include a guardrail to apply to the entire prompt.
  + **toolConfig** – Use this field to include a tool to help a model generate responses.
  + **additionalModelResponseFieldPaths** – Use this field to specify fields to return as a JSON pointer object.
  + **serviceTier** – Use this field to specify the service tier for a particular request.
+ **requestMetadata** – Use this field to include metadata that can be filtered on when using invocation logs.
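
As a sketch, a request body combining several of these fields might look like the following (the model ID, guardrail ID, and metadata values are placeholders, and the actual service call is shown as a comment):

```
import json

# Sketch: a Converse request that combines several of the fields above.
# The model ID, guardrail ID, and metadata values are placeholders.
request = {
    "modelId": "amazon.nova-lite-v1:0",
    "system": [{"text": "You are a concise technical support assistant."}],
    "messages": [{"role": "user", "content": [{"text": "My device won't turn on."}]}],
    "inferenceConfig": {"maxTokens": 512, "temperature": 0.2},
    "guardrailConfig": {"guardrailIdentifier": "GR12345", "guardrailVersion": "DRAFT"},
    "requestMetadata": {"sessionId": "abc-123"},
}

# With a bedrock-runtime client, the call would be:
# response = boto3.client("bedrock-runtime").converse(**request)
print(json.dumps(request, indent=2))
```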

**Note**  
The following restrictions apply when you use a Prompt management prompt with `Converse` or `ConverseStream`:  
You can't include the `additionalModelRequestFields`, `inferenceConfig`, `system`, or `toolConfig` fields.
If you include the `messages` field, the messages are appended after the messages defined in the prompt.
If you include the `guardrailConfig` field, the guardrail is applied to the entire prompt. If you include `guardContent` blocks in the [ContentBlock](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_ContentBlock.html) field, the guardrail will only be applied to those blocks.

Expand a section to learn more about a field in the `Converse` request body:

### messages
<a name="converse-messages"></a>

The `messages` field is an array of [Message](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_Message.html) objects, each of which defines a message between the user and the model. A `Message` object contains the following fields:
+ **role** – Defines whether the message is from the `user` (the prompt sent to the model) or `assistant` (the model response).
+ **content** – Defines the content in the prompt.
**Note**  
Amazon Bedrock doesn't store any text, images, or documents that you provide as content. The data is only used to generate the response.

You can maintain conversation context by including all the messages in the conversation in subsequent `Converse` requests and using the `role` field to specify whether the message is from the user or the model.
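
This pattern can be sketched as follows. The assistant reply here is a hardcoded placeholder standing in for the `output.message` field of a previous `Converse` response:

```
# Sketch: carry conversation context forward by appending each turn to `messages`.
# The assistant reply below is a placeholder; in practice it comes from
# response["output"]["message"] of the previous converse() call.
messages = [
    {"role": "user", "content": [{"text": "Name one planet."}]}
]

assistant_message = {"role": "assistant", "content": [{"text": "Mars."}]}  # placeholder reply
messages.append(assistant_message)

# The next user turn is appended so the model sees the full history.
messages.append({"role": "user", "content": [{"text": "Tell me more about it."}]})
print([m["role"] for m in messages])
```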

The `content` field maps to an array of [ContentBlock](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_ContentBlock.html) objects. Within each [ContentBlock](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_ContentBlock.html), you can specify one of the following fields (to see which models support which blocks, see [models at a glance](model-cards.md)):

------
#### [ text ]

The `text` field maps to a string specifying the prompt. The `text` field is interpreted alongside other fields that are specified in the same [ContentBlock](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_ContentBlock.html).

The following shows a [Message](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_Message.html) object with a `content` array containing only a text [ContentBlock](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_ContentBlock.html):

```
{
    "role": "user",
    "content": [
        {
            "text": "string"
        }
    ]
}
```

------
#### [ image ]

The `image` field maps to an [ImageBlock](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_ImageBlock.html). Pass the raw bytes, encoded in base64, for an image in the `bytes` field. If you use an AWS SDK, you don't need to encode the bytes in base64.

If you exclude the `text` field, the model describes the image.

The following shows an example [Message](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_Message.html) object with a `content` array containing only an image [ContentBlock](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_ContentBlock.html):

```
{
    "role": "user",
    "content": [
        {
            "image": {
                "format": "png",
                "source": {
                    "bytes": "image in bytes"
                }
            }
        }
    ]
}
```

You can also specify an Amazon S3 URI instead of passing the bytes directly in the request body. The following shows a sample `Message` object with a content array containing the source passed through an Amazon S3 URI.

```
{
    "role": "user",
    "content": [
        {
            "image": {
                "format": "png",
                "source": {
                    "s3Location": {
                        "uri": "s3://amzn-s3-demo-bucket/myImage",
                        "bucketOwner": "111122223333"
                    }
                }
            }
        }
    ]
}
```

------
#### [ document ]

The `document` field maps to a [DocumentBlock](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_DocumentBlock.html). If you include a `DocumentBlock`, check that your request conforms to the following restrictions:
+ In the `content` field of the [Message](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_Message.html) object, you must also include a `text` field with a prompt related to the document.
+ Pass the raw bytes, encoded in base64, for the document in the `bytes` field. If you use an AWS SDK, you don't need to encode the document bytes in base64.
+ The `name` field can only contain the following characters:
  + Alphanumeric characters
  + Whitespace characters (no more than one in a row)
  + Hyphens
  + Parentheses
  + Square brackets
**Note**  
The `name` field is vulnerable to prompt injections, because the model might inadvertently interpret it as instructions. Therefore, we recommend that you specify a neutral name.
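
As an illustrative sketch (this helper is not part of any SDK), you might normalize an arbitrary file name to satisfy these `name` restrictions like this:

```
import re

def sanitize_document_name(name: str) -> str:
    """Illustrative helper: replace characters the DocumentBlock `name` field
    disallows with hyphens, and collapse runs of whitespace to a single space."""
    name = re.sub(r"[^A-Za-z0-9\s\-()\[\]]", "-", name)  # keep only allowed characters
    return re.sub(r"\s+", " ", name).strip()             # no more than one space in a row

print(sanitize_document_name("Q3_report (final).pdf"))  # → Q3-report (final)-pdf
```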

When using a document, you can enable the `citations` field, which provides document-specific citations in the API response. For more details, see the [DocumentBlock](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_DocumentBlock.html) API reference.

The following shows a sample [Message](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_Message.html) object with a `content` array containing only a document [ContentBlock](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_ContentBlock.html) and a required accompanying text [ContentBlock](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_ContentBlock.html).

```
{
    "role": "user",
    "content": [
        {
            "text": "string"
        },
        {
            "document": {
                "format": "pdf",
                "name": "MyDocument",
                "source": {
                    "bytes": "document in bytes"
                }
            }
        }
    ]
}
```

You can also specify an Amazon S3 URI instead of passing the bytes directly in the request body. The following shows a sample `Message` object with a content array containing the source passed through an Amazon S3 URI.

```
{
    "role": "user",
    "content": [
        {
            "text": "string"
        },
        {
            "document": {
                "format": "pdf",
                "name": "MyDocument",
                "source": {
                    "s3Location": {
                      "uri": "s3://amzn-s3-demo-bucket/myDocument",
                      "bucketOwner": "111122223333"
                    }
                }
            }
        }
    ]
}
```

------
#### [ video ]

The `video` field maps to a [VideoBlock](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_VideoBlock.html) object. Pass the raw bytes in the `bytes` field, encoded in base64. If you use the AWS SDK, you don't need to encode the bytes in base64.

If you don't include the `text` field, the model will describe the video.

The following shows a sample [Message](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_Message.html) object with a `content` array containing only a video [ContentBlock](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_ContentBlock.html).

```
{
    "role": "user",
    "content": [
        {
            "video": {
                "format": "mp4",
                "source": {
                    "bytes": "video in bytes"
                }
            }
        }
    ]
}
```

You can also specify an Amazon S3 URI instead of passing the bytes directly in the request body. The following shows a sample `Message` object with a content array containing the source passed through an Amazon S3 URI.

```
{
    "role": "user",
    "content": [
        {
            "video": {
                "format": "mp4",
                "source": {
                    "s3Location": {
                        "uri": "s3://amzn-s3-demo-bucket/myVideo",
                        "bucketOwner": "111122223333"
                    }
                }
            }
        }
    ]
}
```

**Note**  
The assumed role must have the `s3:GetObject` permission for the Amazon S3 URI. The `bucketOwner` field is optional, but you must specify it if the account making the request doesn't own the bucket that contains the object. For more information, see [Configure access to Amazon S3 buckets](s3-bucket-access.md).
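
A sketch of an identity-based policy statement that grants that permission (the bucket name is a placeholder):

```
{
    "Effect": "Allow",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*"
}
```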

------
#### [ cachePoint ]

You can add cache checkpoints as a block in a message, alongside an accompanying prompt, by using `cachePoint` fields to take advantage of prompt caching. Prompt caching lets you cache the context of conversations to achieve cost and latency savings. For more information, see [Prompt caching for faster model inference](prompt-caching.md).

The following shows a sample [Message](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_Message.html) object with a `content` array containing a document [ContentBlock](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_ContentBlock.html) and a required accompanying text [ContentBlock](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_ContentBlock.html), as well as a **cachePoint** that adds both the document and text contents to the cache.

```
{
    "role": "user",
    "content": [
        {
            "text": "string"
        },
        {
            "document": {
                "format": "pdf",
                "name": "string",
                "source": {
                    "bytes": "document in bytes"
                }
            }
        },
        {
            "cachePoint": {
                "type": "default"
            }
        }
    ]
}
```
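
As a sketch, a hypothetical helper like the following assembles the content array above, placing the `cachePoint` block after the document and text so that both are added to the cache.

```python
def build_cached_document_message(question, doc_name, doc_bytes, doc_format="pdf"):
    """Build a Converse message whose document and text content are cached."""
    return {
        "role": "user",
        "content": [
            {"text": question},
            {
                "document": {
                    "format": doc_format,
                    "name": doc_name,
                    "source": {"bytes": doc_bytes},
                }
            },
            # Everything before this checkpoint is written to the prompt cache.
            {"cachePoint": {"type": "default"}},
        ],
    }
```

You would then pass the message to `bedrock_client.converse` with a model that supports prompt caching; on subsequent requests, the cached prefix is read from the cache instead of being reprocessed.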

------
#### [ guardContent ]

The `guardContent` field maps to a [GuardrailConverseContentBlock](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_GuardrailConverseContentBlock.html) object. You can use this field to target an input to be evaluated by the guardrail defined in the `guardrailConfig` field. If you don't specify this field, the guardrail evaluates all messages in the request body. You can pass the following types of content in a `GuardrailConverseContentBlock`:
+ **text** – The following shows an example [Message](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_Message.html) object with a `content` array containing only a text [GuardrailConverseContentBlock](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_GuardrailConverseContentBlock.html):

  ```
  {
      "role": "user",
      "content": [
          {
              "text": "Tell me what stocks to buy.",
              "qualifiers": [
                  "guard_content"
              ]
          }
      ]
  }
  ```

  You define the text to be evaluated and include any qualifiers to use for [contextual grounding](guardrails-contextual-grounding-check.md).
+ **image** – The following shows a [Message](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_Message.html) object with a `content` array containing only an image [GuardrailConverseContentBlock](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_GuardrailConverseContentBlock.html):

  ```
  {
      "role": "user",
      "content": [
          {
              "format": "png",
              "source": {
                  "bytes": "image in bytes"
              }
          }
      ]
  }
  ```

  You specify the format of the image and define the image in bytes.

For more information about using guardrails, see [Detect and filter harmful content by using Amazon Bedrock Guardrails](guardrails.md).

------
#### [ reasoningContent ]

The `reasoningContent` field maps to a [ReasoningContentBlock](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_ReasoningContentBlock.html). This block contains content regarding the reasoning that was carried out by the model to generate the response in the accompanying `ContentBlock`.

The following shows a `Message` object with a `content` array containing a `ReasoningContentBlock` and an accompanying text `ContentBlock`.

```
{
    "role": "user",
    "content": [
        {
            "text": "string"
        },
        {
            "reasoningContent": {
                "reasoningText": {
                    "text": "string",
                    "signature": "string"
                },
                "redactedContent": "base64-encoded binary data object"
            }
        }
    ]
}
```

The `ReasoningContentBlock` contains the reasoning used to generate the accompanying content in the `reasoningText` field, and any reasoning content that the model provider encrypted for trust and safety reasons in the `redactedContent` field.

Within the `reasoningText` field, the `text` field describes the reasoning. The `signature` field is a hash of all the messages in the conversation and is a safeguard against tampering with the reasoning used by the model. You must include the signature and all previous messages in subsequent `Converse` requests. If any of the messages are changed, the response throws an error.
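
As a sketch, a hypothetical helper like the following separates reasoning text from regular text in a response message, so that the full message, including the `signature`, can be appended unchanged to the conversation for the next request.

```python
def split_reasoning(message):
    """Return (reasoning_texts, answer_texts) from a response Message object."""
    reasoning, answers = [], []
    for block in message.get("content", []):
        if "reasoningContent" in block:
            reasoning_text = block["reasoningContent"].get("reasoningText", {})
            reasoning.append(reasoning_text.get("text", ""))
        elif "text" in block:
            answers.append(block["text"])
    return reasoning, answers
```

When continuing the conversation, append the original message object itself, not the extracted strings, to the `messages` list so that the signature is preserved.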

------
#### [ toolUse ]

Contains information about a tool for the model to use. For more information, see [Use a tool to complete an Amazon Bedrock model response](tool-use.md).

------
#### [ toolResult ]

Contains information about the result from the model using a tool. For more information, see [Use a tool to complete an Amazon Bedrock model response](tool-use.md).

------

In the following `messages` example, the user asks for a list of three pop songs, and the model generates a list of songs. 

```
[
    {
        "role": "user",
        "content": [
            {
                "text": "Create a list of 3 pop songs."
            }
        ]
    },
    {
        "role": "assistant",
        "content": [
            {
                "text": "Here is a list of 3 pop songs by artists from the United Kingdom:\n\n1. \"As It Was\" by Harry Styles\n2. \"Easy On Me\" by Adele\n3. \"Unholy\" by Sam Smith and Kim Petras"
            }
        ]
    }
]
```

### system
<a name="converse-system"></a>

A system prompt is a type of prompt that provides instructions or context to the model about the task it should perform, or the persona it should adopt during the conversation. You can specify a list of system prompts for the request in the `system` ([SystemContentBlock](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_SystemContentBlock.html)) field, as shown in the following example.

```
[
    {
        "text": "You are an app that creates playlists for a radio station that plays rock and pop music. Only return song names and the artist. "
    }
]
```

### inferenceConfig
<a name="converse-inference"></a>

The Converse API supports a base set of inference parameters that you set in the `inferenceConfig` field ([InferenceConfiguration](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InferenceConfiguration.html)). The base set of inference parameters are:
+ **maxTokens** – The maximum number of tokens to allow in the generated response. 
+ **stopSequences** – A list of stop sequences. A stop sequence is a sequence of characters that causes the model to stop generating the response. 
+ **temperature** – The likelihood of the model selecting higher-probability options while generating a response. 
+ **topP** – The percentage of most-likely candidates that the model considers for the next token.

For more information, see [Influence response generation with inference parameters](inference-parameters.md).

The following example JSON sets the `temperature` inference parameter. 

```
{"temperature": 0.5}
```

### additionalModelRequestFields
<a name="converse-additional-model-request-fields"></a>

If the model you are using has additional inference parameters, you can set those parameters by specifying them as JSON in the `additionalModelRequestFields` field. The following example JSON shows how to set `top_k`, which is available in Anthropic Claude models, but isn't a base inference parameter in the messages API. 

```
{"top_k": 200}
```

### promptVariables
<a name="converse-prompt-variables"></a>

If you specify a prompt from [Prompt management](prompt-management.md) in the `modelId` as the resource to run inference on, use this field to fill in the prompt variables with actual values. The `promptVariables` field maps to a JSON object with keys that correspond to variables defined in the prompts and values to replace the variables with.

For example, let's say that you have a prompt that says **Make me a *{{genre}}* playlist consisting of the following number of songs: *{{number}}*.** The prompt's ID is `PROMPT12345` and its version is `1`. You could send the following `Converse` request to replace the variables:

```
POST /model/arn:aws:bedrock:us-east-1:111122223333:prompt/PROMPT12345:1/converse HTTP/1.1
Content-type: application/json

{
   "promptVariables": { 
      "genre": {
         "text": "pop"
      },
      "number": {
         "text": "3"
      }
   }
}
```
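
As a sketch, the same request could be made with the `converse` method in the AWS SDK for Python (Boto3); the hypothetical helper below simply wraps plain strings in the `{"text": ...}` shape that the `promptVariables` field expects.

```python
def build_prompt_variables(**variables):
    """Wrap plain string values in the shape the promptVariables field expects."""
    return {name: {"text": value} for name, value in variables.items()}

prompt_variables = build_prompt_variables(genre="pop", number="3")
# → {'genre': {'text': 'pop'}, 'number': {'text': '3'}}
```

You would then call `bedrock_client.converse(modelId=prompt_arn, promptVariables=prompt_variables)`, where `prompt_arn` is the ARN of the prompt resource from Prompt management.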

### guardrailConfig
<a name="converse-guardrail"></a>

You can apply a guardrail that you created with [Amazon Bedrock Guardrails](guardrails.md) by including this field. To apply the guardrail to a specific message in the conversation, include the message in a [GuardrailConverseContentBlock](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_GuardrailConverseContentBlock.html). If you don't include any `GuardrailConverseContentBlock`s in the request body, the guardrail is applied to all the messages in the `messages` field. For an example, see [Include a guardrail with the Converse API](guardrails-use-converse-api.md).
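
As a sketch, a hypothetical helper like the following builds both request-level pieces: the `guardrailConfig` that identifies the guardrail, and a message whose text is wrapped in a `guardContent` block so that only that text is evaluated. The guardrail ID and version are placeholders.

```python
def build_guarded_request(guardrail_id, guardrail_version, user_text):
    """Build Converse keyword arguments that target one text block for evaluation."""
    return {
        "guardrailConfig": {
            "guardrailIdentifier": guardrail_id,
            "guardrailVersion": guardrail_version,
        },
        "messages": [
            {
                "role": "user",
                "content": [
                    # Only this block is evaluated by the guardrail.
                    {"guardContent": {"text": {"text": user_text}}}
                ],
            }
        ],
    }
```

You would then call `bedrock_client.converse(modelId=model_id, **build_guarded_request("abc123", "1", "Tell me what stocks to buy."))`.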

### toolConfig
<a name="converse-tool"></a>

This field lets you define a tool for the model to use to help it generate a response. For more information, see [Use a tool to complete an Amazon Bedrock model response](tool-use.md).

### additionalModelResponseFieldPaths
<a name="converse-additional-model-response-field-paths"></a>

You can specify the paths for additional model parameters in the `additionalModelResponseFieldPaths` field, as shown in the following example.

```
[ "/stop_sequence" ]
```

The API returns the additional fields that you request in the `additionalModelResponseFields` field. 
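
As a sketch, a hypothetical helper like the following reads one of those extra fields back out of the response; the exact keys that appear in `additionalModelResponseFields` depend on the model and the paths you requested.

```python
def get_additional_field(response, field_name, default=None):
    """Read a requested extra field out of additionalModelResponseFields, if present."""
    return response.get("additionalModelResponseFields", {}).get(field_name, default)
```

For example, after requesting `["/stop_sequence"]`, `get_additional_field(response, "stop_sequence")` would return the stop sequence the model matched, or `None` if the field wasn't returned.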

### requestMetadata
<a name="converse-request-metadata"></a>

This field maps to a JSON object in which you specify metadata keys and the values they map to. You can use request metadata to help you filter model invocation logs.
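
For example, a request that tags the invocation with a customer and session identifier might pass a `requestMetadata` object like the one built below; the key names are hypothetical and only need to match the filters you define for your invocation logs.

```python
def build_request_metadata(**metadata):
    """Validate that metadata values are strings and return the requestMetadata object."""
    for key, value in metadata.items():
        if not isinstance(value, str):
            raise TypeError(f"requestMetadata value for {key!r} must be a string")
    return metadata

request_metadata = build_request_metadata(customerId="1234", sessionId="abcd")
```

You would then pass it as `bedrock_client.converse(..., requestMetadata=request_metadata)`.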

### serviceTier
<a name="inference-service-tiers"></a>

This field maps to a JSON object. You can specify the service tier for a particular request.

The following example shows the `serviceTier` structure:

```
"serviceTier": {
  "type": "reserved" | "priority" | "default" | "flex"
}
```

For detailed information about service tiers, including pricing and performance characteristics, see [Service tiers for optimizing performance and cost](service-tiers-inference.md).

You can also optionally add cache checkpoints to the `system` or `tools` fields to use prompt caching, depending on which model you're using. For more information, see [Prompt caching for faster model inference](prompt-caching.md).

## Response
<a name="conversation-inference-call-response"></a>

The response you get from the Converse API depends on which operation you call, `Converse` or `ConverseStream`.

**Topics**
+ [Converse response](#conversation-inference-call-response-converse)
+ [ConverseStream response](#conversation-inference-call-response-converse-stream)

### Converse response
<a name="conversation-inference-call-response-converse"></a>

In the response from `Converse`, the `output` field ([ConverseOutput](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_ConverseOutput.html)) contains the message ([Message](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_Message.html)) that the model generates. The message content is in the `content` ([ContentBlock](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_ContentBlock.html)) field and the role (`user` or `assistant`) that the message corresponds to is in the `role` field. 

If you used [prompt caching](prompt-caching.md), then in the `usage` field, `cacheReadInputTokensCount` and `cacheWriteInputTokensCount` tell you how many total tokens were read from the cache and written to the cache, respectively.

If you used [service tiers](#inference-service-tiers), the `serviceTier` field in the response tells you which service tier was used for the request.

The `metrics` field ([ConverseMetrics](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_ConverseMetrics.html)) includes metrics for the call. To determine why the model stopped generating content, check the `stopReason` field. You can get information about the tokens passed to the model in the request, and the tokens generated in the response, by checking the `usage` field ([TokenUsage](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_TokenUsage.html)). If you specified additional response fields in the request, the API returns them as JSON in the `additionalModelResponseFields` field. 

The following example shows the response from `Converse` when you pass the prompt discussed in [Request](#conversation-inference-call-request).

```
{
    "output": {
        "message": {
            "role": "assistant",
            "content": [
                {
                    "text": "Here is a list of 3 pop songs by artists from the United Kingdom:\n\n1. \"Wannabe\" by Spice Girls\n2. \"Bitter Sweet Symphony\" by The Verve \n3. \"Don't Look Back in Anger\" by Oasis"
                }
            ]
        }
    },
    "stopReason": "end_turn",
    "usage": {
        "inputTokens": 125,
        "outputTokens": 60,
        "totalTokens": 185
    },
    "metrics": {
        "latencyMs": 1175
    }
}
```

### ConverseStream response
<a name="conversation-inference-call-response-converse-stream"></a>

If you call `ConverseStream` to stream the response from a model, the stream is returned in the `stream` response field. The stream emits the following events in the following order.

1. `messageStart` ([MessageStartEvent](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_MessageStartEvent.html)). The start event for a message. Includes the role for the message.

1. `contentBlockStart` ([ContentBlockStartEvent](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_ContentBlockStartEvent.html)). A content block start event. Tool use only. 

1. `contentBlockDelta` ([ContentBlockDeltaEvent](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_ContentBlockDeltaEvent.html)). A content block delta event. Includes one of the following:
   + `text` – The partial text that the model generates.
   + `reasoningContent` – The partial reasoning carried out by the model to generate the response. You must submit the returned `signature`, in addition to all previous messages in subsequent `Converse` requests. If any of the messages are changed, the response throws an error.
   + `toolUse` – The partial input JSON object for tool use.

1. `contentBlockStop` ([ContentBlockStopEvent](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_ContentBlockStopEvent.html)). A content block stop event.

1. `messageStop` ([MessageStopEvent](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_MessageStopEvent.html)). The stop event for the message. Includes the reason why the model stopped generating output. 

1. `metadata` ([ConverseStreamMetadataEvent](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_ConverseStreamMetadataEvent.html)). Metadata for the request. The metadata includes the token usage in `usage` ([TokenUsage](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_TokenUsage.html)) and metrics for the call in `metrics` ([ConverseStreamMetrics](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_ConverseStreamMetrics.html)).

ConverseStream streams a complete content block as a `ContentBlockStartEvent` event, one or more `ContentBlockDeltaEvent` events, and a `ContentBlockStopEvent` event. Use the `contentBlockIndex` field as an index to correlate the events that make up a content block.

The following example is a partial response from `ConverseStream`. 

```
{'messageStart': {'role': 'assistant'}}
{'contentBlockDelta': {'delta': {'text': ''}, 'contentBlockIndex': 0}}
{'contentBlockDelta': {'delta': {'text': ' Title'}, 'contentBlockIndex': 0}}
{'contentBlockDelta': {'delta': {'text': ':'}, 'contentBlockIndex': 0}}
.
.
.
{'contentBlockDelta': {'delta': {'text': ' The'}, 'contentBlockIndex': 0}}
{'messageStop': {'stopReason': 'max_tokens'}}
{'metadata': {'usage': {'inputTokens': 47, 'outputTokens': 20, 'totalTokens': 67}, 'metrics': {'latencyMs': 100.0}}}
```

# Converse API examples
<a name="conversation-inference-examples"></a>

The following examples show you how to use the `Converse` and `ConverseStream` operations.

------
#### [ Text ]

This example shows how to call the `Converse` operation with the *Anthropic Claude 3 Sonnet* model. The example shows how to send the input text, inference parameters, and additional parameters that are unique to the model. The code starts a conversation by asking the model to create a list of songs. It then continues the conversation by asking that the songs be by artists from the United Kingdom.

```
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to use the Converse API with Anthropic Claude 3 Sonnet (on demand).
"""

import logging
import boto3

from botocore.exceptions import ClientError


logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def generate_conversation(bedrock_client,
                          model_id,
                          system_prompts,
                          messages):
    """
    Sends messages to a model.
    Args:
        bedrock_client: The Boto3 Bedrock runtime client.
        model_id (str): The model ID to use.
        system_prompts (JSON) : The system prompts for the model to use.
        messages (JSON) : The messages to send to the model.

    Returns:
        response (JSON): The conversation that the model generated.

    """

    logger.info("Generating message with model %s", model_id)

    # Inference parameters to use.
    temperature = 0.5
    top_k = 200

    # Base inference parameters to use.
    inference_config = {"temperature": temperature}
    # Additional inference parameters to use.
    additional_model_fields = {"top_k": top_k}

    # Send the message.
    response = bedrock_client.converse(
        modelId=model_id,
        messages=messages,
        system=system_prompts,
        inferenceConfig=inference_config,
        additionalModelRequestFields=additional_model_fields
    )

    # Log token usage.
    token_usage = response['usage']
    logger.info("Input tokens: %s", token_usage['inputTokens'])
    logger.info("Output tokens: %s", token_usage['outputTokens'])
    logger.info("Total tokens: %s", token_usage['totalTokens'])
    logger.info("Stop reason: %s", response['stopReason'])

    return response

def main():
    """
    Entrypoint for Anthropic Claude 3 Sonnet example.
    """

    logging.basicConfig(level=logging.INFO,
                        format="%(levelname)s: %(message)s")

    model_id = "anthropic.claude-3-sonnet-20240229-v1:0"

    # Set up the system prompts and messages to send to the model.
    system_prompts = [{"text": "You are an app that creates playlists for a radio station that plays rock and pop music. Only return song names and the artist."}]
    message_1 = {
        "role": "user",
        "content": [{"text": "Create a list of 3 pop songs."}]
    }
    message_2 = {
        "role": "user",
        "content": [{"text": "Make sure the songs are by artists from the United Kingdom."}]
    }
    messages = []

    try:

        bedrock_client = boto3.client(service_name='bedrock-runtime')

        # Start the conversation with the 1st message.
        messages.append(message_1)
        response = generate_conversation(
            bedrock_client, model_id, system_prompts, messages)

        # Add the response message to the conversation.
        output_message = response['output']['message']
        messages.append(output_message)

        # Continue the conversation with the 2nd message.
        messages.append(message_2)
        response = generate_conversation(
            bedrock_client, model_id, system_prompts, messages)

        output_message = response['output']['message']
        messages.append(output_message)

        # Show the complete conversation.
        for message in messages:
            print(f"Role: {message['role']}")
            for content in message['content']:
                print(f"Text: {content['text']}")
            print()

    except ClientError as err:
        message = err.response['Error']['Message']
        logger.error("A client error occurred: %s", message)
        print(f"A client error occurred: {message}")

    else:
        print(
            f"Finished generating text with model {model_id}.")


if __name__ == "__main__":
    main()
```

------
#### [ Image ]

This example shows how to send an image as part of a message and request that the model describe the image. The example uses the `Converse` operation and the *Anthropic Claude 3 Sonnet* model. 

```
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to send an image with the Converse API with an accompanying text prompt to Anthropic Claude 3 Sonnet (on demand).
"""

import logging
import boto3


from botocore.exceptions import ClientError


logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def generate_conversation(bedrock_client,
                          model_id,
                          input_text,
                          input_image):
    """
    Sends a message to a model.
    Args:
        bedrock_client: The Boto3 Bedrock runtime client.
        model_id (str): The model ID to use.
        input_text (str): The text prompt accompanying the image.
        input_image (str): The path to the input image.

    Returns:
        response (JSON): The conversation that the model generated.

    """

    logger.info("Generating message with model %s", model_id)

    # Get image extension and read in image as bytes
    image_ext = input_image.split(".")[-1]
    with open(input_image, "rb") as f:
        image = f.read()

    message = {
        "role": "user",
        "content": [
            {
                "text": input_text
            },
            {
                "image": {
                    "format": image_ext,
                    "source": {
                        "bytes": image
                    }
                }
            }
        ]
    }

    messages = [message]

    # Send the message.
    response = bedrock_client.converse(
        modelId=model_id,
        messages=messages
    )

    return response


def main():
    """
    Entrypoint for Anthropic Claude 3 Sonnet example.
    """

    logging.basicConfig(level=logging.INFO,
                        format="%(levelname)s: %(message)s")

    model_id = "anthropic.claude-3-sonnet-20240229-v1:0"
    input_text = "What's in this image?"
    input_image = "path/to/image"

    try:

        bedrock_client = boto3.client(service_name="bedrock-runtime")

        response = generate_conversation(
            bedrock_client, model_id, input_text, input_image)

        output_message = response['output']['message']

        print(f"Role: {output_message['role']}")

        for content in output_message['content']:
            print(f"Text: {content['text']}")

        token_usage = response['usage']
        print(f"Input tokens:  {token_usage['inputTokens']}")
        print(f"Output tokens:  {token_usage['outputTokens']}")
        print(f"Total tokens:  {token_usage['totalTokens']}")
        print(f"Stop reason: {response['stopReason']}")

    except ClientError as err:
        message = err.response['Error']['Message']
        logger.error("A client error occurred: %s", message)
        print(f"A client error occurred: {message}")

    else:
        print(
            f"Finished generating text with model {model_id}.")


if __name__ == "__main__":
    main()
```

------
#### [ Document ]

This example shows how to send a document as part of a message and request that the model describe the contents of the document. The example uses the `Converse` operation and the *Anthropic Claude 3 Sonnet* model. 

```
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to send a document as part of a message to Anthropic Claude 3 Sonnet (on demand).
"""

import logging
import boto3


from botocore.exceptions import ClientError


logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def generate_message(bedrock_client,
                     model_id,
                     input_text,
                     input_document_path):
    """
    Sends a message to a model.
    Args:
        bedrock_client: The Boto3 Bedrock runtime client.
        model_id (str): The model ID to use.
        input_text (str): The input message.
        input_document_path (str): The path to the input document.

    Returns:
        response (JSON): The conversation that the model generated.

    """

    logger.info("Generating message with model %s", model_id)

    # Get format from path and read the path
    input_document_format = input_document_path.split(".")[-1]
    with open(input_document_path, 'rb') as input_document_file:
        input_document = input_document_file.read()

    # Message to send.
    message = {
        "role": "user",
        "content": [
            {
                "text": input_text
            },
            {
                "document": {
                    "name": "MyDocument",
                    "format": input_document_format,
                    "source": {
                        "bytes": input_document
                    }
                }
            }
        ]
    }

    messages = [message]

    # Send the message.
    response = bedrock_client.converse(
        modelId=model_id,
        messages=messages
    )

    return response


def main():
    """
    Entrypoint for Anthropic Claude 3 Sonnet example.
    """

    logging.basicConfig(level=logging.INFO,
                        format="%(levelname)s: %(message)s")

    model_id = "anthropic.claude-3-sonnet-20240229-v1:0"
    input_text = "What's in this document?"
    input_document_path = "path/to/document"

    try:

        bedrock_client = boto3.client(service_name="bedrock-runtime")


        response = generate_message(
            bedrock_client, model_id, input_text, input_document_path)

        output_message = response['output']['message']

        print(f"Role: {output_message['role']}")

        for content in output_message['content']:
            print(f"Text: {content['text']}")

        token_usage = response['usage']
        print(f"Input tokens:  {token_usage['inputTokens']}")
        print(f"Output tokens:  {token_usage['outputTokens']}")
        print(f"Total tokens:  {token_usage['totalTokens']}")
        print(f"Stop reason: {response['stopReason']}")

    except ClientError as err:
        message = err.response['Error']['Message']
        logger.error("A client error occurred: %s", message)
        print(f"A client error occurred: {message}")

    else:
        print(
            f"Finished generating text with model {model_id}.")


if __name__ == "__main__":
    main()
```

------
#### [ Streaming ]

This example shows how to call the `ConverseStream` operation with the *Anthropic Claude 3 Sonnet* model. The example shows how to send the input text, inference parameters, and additional parameters that are unique to the model.

```
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to use the Converse API to stream a response from Anthropic Claude 3 Sonnet (on demand).
"""

import logging
import boto3


from botocore.exceptions import ClientError


logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def stream_conversation(bedrock_client,
                    model_id,
                    messages,
                    system_prompts,
                    inference_config,
                    additional_model_fields):
    """
    Sends messages to a model and streams the response.
    Args:
        bedrock_client: The Boto3 Bedrock runtime client.
        model_id (str): The model ID to use.
        messages (JSON) : The messages to send.
        system_prompts (JSON) : The system prompts to send.
        inference_config (JSON) : The inference configuration to use.
        additional_model_fields (JSON) : Additional model fields to use.

    Returns:
        Nothing.

    """

    logger.info("Streaming messages with model %s", model_id)

    response = bedrock_client.converse_stream(
        modelId=model_id,
        messages=messages,
        system=system_prompts,
        inferenceConfig=inference_config,
        additionalModelRequestFields=additional_model_fields
    )

    stream = response.get('stream')
    if stream:
        for event in stream:

            if 'messageStart' in event:
                print(f"\nRole: {event['messageStart']['role']}")

            if 'contentBlockDelta' in event:
                print(event['contentBlockDelta']['delta']['text'], end="")

            if 'messageStop' in event:
                print(f"\nStop reason: {event['messageStop']['stopReason']}")

            if 'metadata' in event:
                metadata = event['metadata']
                if 'usage' in metadata:
                    print("\nToken usage")
                    print(f"Input tokens: {metadata['usage']['inputTokens']}")
                    print(
                        f"Output tokens: {metadata['usage']['outputTokens']}")
                    print(f"Total tokens: {metadata['usage']['totalTokens']}")
                if 'metrics' in event['metadata']:
                    print(
                        f"Latency: {metadata['metrics']['latencyMs']} milliseconds")


def main():
    """
    Entrypoint for streaming message API response example.
    """

    logging.basicConfig(level=logging.INFO,
                        format="%(levelname)s: %(message)s")

    model_id = "anthropic.claude-3-sonnet-20240229-v1:0"
    system_prompt = """You are an app that creates playlists for a radio station
      that plays rock and pop music. Only return song names and the artist."""

    # Message to send to the model.
    input_text = "Create a list of 3 pop songs."

    message = {
        "role": "user",
        "content": [{"text": input_text}]
    }
    messages = [message]
    
    # System prompts.
    system_prompts = [{"text" : system_prompt}]

    # inference parameters to use.
    temperature = 0.5
    top_k = 200
    # Base inference parameters.
    inference_config = {
        "temperature": temperature
    }
    # Additional model inference parameters.
    additional_model_fields = {"top_k": top_k}

    try:
        bedrock_client = boto3.client(service_name='bedrock-runtime')

        stream_conversation(bedrock_client, model_id, messages,
                        system_prompts, inference_config, additional_model_fields)

    except ClientError as err:
        message = err.response['Error']['Message']
        logger.error("A client error occurred: %s", message)
        print(f"A client error occurred: {message}")

    else:
        print(
            f"Finished streaming messages with model {model_id}.")


if __name__ == "__main__":
    main()
```

------
#### [ Video ]

This example shows how to send a video as part of a message and request that the model describe the video. The example uses the `Converse` operation and the *Amazon Nova Pro* model.

```
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to send a video with the Converse API to Amazon Nova Pro (on demand).
"""

import logging
import boto3


from botocore.exceptions import ClientError


logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def generate_conversation(bedrock_client,
                          model_id,
                          input_text,
                          input_video):
    """
    Sends a message to a model.
    Args:
        bedrock_client: The Boto3 Bedrock runtime client.
        model_id (str): The model ID to use.
        input_text (str): The input message.
        input_video (str): The path to the input video file.

    Returns:
        response (JSON): The conversation that the model generated.

    """

    logger.info("Generating message with model %s", model_id)

    # Message to send.

    with open(input_video, "rb") as f:
        video = f.read()

    message = {
        "role": "user",
        "content": [
            {
                "text": input_text
            },
            {
                "video": {
                    "format": "mp4",
                    "source": {
                        "bytes": video
                    }
                }
            }
        ]
    }

    messages = [message]

    # Send the message.
    response = bedrock_client.converse(
        modelId=model_id,
        messages=messages
    )

    return response


def main():
    """
    Entrypoint for Amazon Nova Pro example.
    """

    logging.basicConfig(level=logging.INFO,
                        format="%(levelname)s: %(message)s")

    model_id = "amazon.nova-pro-v1:0"
    input_text = "What's in this video?"
    input_video = "path/to/video"

    try:

        bedrock_client = boto3.client(service_name="bedrock-runtime")

        response = generate_conversation(
            bedrock_client, model_id, input_text, input_video)

        output_message = response['output']['message']

        print(f"Role: {output_message['role']}")

        for content in output_message['content']:
            print(f"Text: {content['text']}")

        token_usage = response['usage']
        print(f"Input tokens:  {token_usage['inputTokens']}")
        print(f"Output tokens:  {token_usage['outputTokens']}")
        print(f"Total tokens:  {token_usage['totalTokens']}")
        print(f"Stop reason: {response['stopReason']}")

    except ClientError as err:
        message = err.response['Error']['Message']
        logger.error("A client error occurred: %s", message)
        print(f"A client error occurred: {message}")

    else:
        print(
            f"Finished generating text with model {model_id}.")


if __name__ == "__main__":
    main()
```

------

# API restrictions
<a name="inference-api-restrictions"></a>

The following restrictions apply to the `InvokeModel`, `InvokeModelWithResponseStream`, `Converse`, and `ConverseStream` operations. Some restrictions vary by operation or model as noted below:
+ When using these operations, you can only include images and documents if the `role` is `user`.
+ **Video generation:** Video generation is not supported with `InvokeModel` and `InvokeModelWithResponseStream`. Instead, you can use the [StartAsyncInvoke](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_StartAsyncInvoke.html) operation. For an example, see [Use Amazon Nova Reel to generate a video from a text prompt](https://docs.aws.amazon.com/bedrock/latest/userguide/bedrock-runtime_example_bedrock-runtime_Scenario_AmazonNova_TextToVideo_section.html).
+ **Document support in request body:** Including documents in the request body is not supported when using `InvokeModel` and `InvokeModelWithResponseStream`. To include a document during inference, use the [Chat/text playground](playgrounds.md) in the AWS Management Console or use the `Converse` or `ConverseStream` operations.
+ **Document count and size:** You can include up to 5 documents per request. Each document can be no more than 4.5 MB in size. For Claude 4 and subsequent versions, the 4.5 MB document size restriction doesn't apply to `PDF` format. For Nova models, the 4.5 MB document size restriction doesn't apply to `PDF` and `DOCX` formats. These restrictions also apply in the Amazon Bedrock console. Individual models may have additional content restrictions beyond those applied by Amazon Bedrock. For more information, see **Third-party model provider requirements**.
+ **Image count and size**: Amazon Bedrock doesn't impose restrictions on image count and size. However, individual models may have specific image requirements. For more information, see **Third-party model provider requirements**.
+ **Third-party model provider requirements:** Third-party model provider requirements apply when you use `InvokeModel`, `InvokeModelWithResponseStream`, `Converse`, and `ConverseStream` operations, and may result in an error if not met. If you use a third-party model through Amazon Bedrock (for example, Anthropic Claude), review the provider's user guide and API documentation to avoid unexpected errors. For example, the Anthropic Messages standard endpoint supports a maximum request size of 32 MB. Claude also has specific content requirements, such as a maximum of 100 `PDF` pages per request and a maximum image size of 8000x8000 px. For the latest information about Anthropic Claude Messages requests and responses, including request size and content requirements, see the following Anthropic Claude documentation: [Anthropic Claude API Overview](https://platform.claude.com/docs/en/api/overview), [Anthropic Claude Messages API Reference](https://docs.anthropic.com/claude/reference/messages_post), [Build with Claude: Vision](https://platform.claude.com/docs/en/build-with-claude/vision) and [Build with Claude: PDF Support](https://platform.claude.com/docs/en/build-with-claude/pdf-support).
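Because documents can only be attached through `Converse` or `ConverseStream` (and only when the `role` is `user`), a request that includes a document must place the document block alongside the text in a user message. The following sketch builds such a message; the document name, bytes, and helper function are hypothetical placeholders, and sending the message requires valid AWS credentials.

```python
# Sketch: building a Converse-style user message that attaches a document.
# The document name, bytes, and helper function here are hypothetical
# placeholders, not part of the Amazon Bedrock SDK.

def build_document_message(input_text, doc_bytes, doc_name, doc_format="pdf"):
    """Return a Converse user message containing text and a document block."""
    return {
        # Documents are only allowed when the role is "user".
        "role": "user",
        "content": [
            {"text": input_text},
            {
                "document": {
                    "format": doc_format,
                    "name": doc_name,
                    "source": {"bytes": doc_bytes},
                }
            },
        ],
    }

message = build_document_message(
    "Summarize this document.", b"%PDF-1.4 ...", "example-report"
)
```

With real credentials, you would then pass the message to the operation, for example `bedrock_client.converse(modelId=model_id, messages=[message])`, as in the examples above.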

**Tip**  
Claude requires PDF documents to be a maximum of 100 pages per request. If you have larger PDF documents, we recommend splitting them into multiple PDFs under 100 pages each or consolidating more text into fewer pages.
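One way to plan such a split is to compute page ranges of at most 100 pages each before extracting them with a PDF library of your choice. The sketch below does only the page-range arithmetic; the 345-page total is an arbitrary example, and the helper function is not part of any AWS SDK.

```python
# Sketch: compute page ranges of at most 100 pages each for splitting a
# large PDF before sending it to Claude. Page numbers are 1-based and
# ranges are inclusive. The 345-page total is an arbitrary example.

MAX_PAGES_PER_REQUEST = 100

def plan_pdf_chunks(total_pages, max_pages=MAX_PAGES_PER_REQUEST):
    """Return a list of (first_page, last_page) tuples covering total_pages."""
    return [
        (start, min(start + max_pages - 1, total_pages))
        for start in range(1, total_pages + 1, max_pages)
    ]

chunks = plan_pdf_chunks(345)
# Four chunks: pages 1-100, 101-200, 201-300, and 301-345.
```

Each resulting range can then be extracted into its own PDF and sent as a separate request.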