

# Customize your model to improve its performance for your use case
<a name="custom-models"></a>

Model customization is the process of providing training data to a model to improve its performance for specific use cases. You can customize Amazon Bedrock foundation models to improve their performance and create a better customer experience. Amazon Bedrock currently provides the following customization methods.
+ **Supervised fine-tuning**

  Provide *labeled* data to train a model to improve performance on specific tasks. From a training dataset of labeled examples, the model learns which types of outputs to generate for certain types of inputs. The model's parameters are adjusted in the process, improving its performance on the tasks represented by the training dataset.

  For more information about using supervised fine-tuning, see [Customize a model with fine-tuning in Amazon Bedrock](custom-model-fine-tuning.md).
+ **Reinforcement fine-tuning**

  Reinforcement fine-tuning improves foundation model alignment with your specific use case through feedback-based learning. Instead of providing labeled input-output pairs, you define reward functions that evaluate response quality. The model learns iteratively by receiving feedback scores from these reward functions.

  You can upload your training prompt datasets or provide existing Amazon Bedrock invocation logs. You can define reward functions using AWS Lambda to evaluate response quality. Amazon Bedrock automates the training workflow and provides real-time metrics so that you can monitor the model's learning progress.

  For more information about using reinforcement fine-tuning, see [Customize a model with reinforcement fine-tuning in Amazon Bedrock](reinforcement-fine-tuning.md).
+ **Distillation**

  Use distillation to transfer knowledge from a larger, more capable model (known as the teacher) to a smaller, faster, more cost-efficient model (known as the student). Amazon Bedrock automates the distillation process by using the latest data synthesis techniques to generate diverse, high-quality responses from the teacher model, and then fine-tunes the student model.

  To use distillation, you select a teacher model whose accuracy you want to achieve for your use case, and a student model to fine-tune. Then, you provide use case-specific prompts as input data. Amazon Bedrock generates responses from the teacher model for the given prompts, and then uses the responses to fine-tune the student model. You can optionally provide labeled input data as prompt-response pairs. 

  For more information about using distillation, see [Customize a model with distillation in Amazon Bedrock](model-distillation.md).

For information about model customization quotas, see [Amazon Bedrock endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/bedrock.html) in the AWS General Reference.

**Note**  
You are charged for model training based on the number of tokens processed by the model (number of tokens in the training data corpus × number of epochs), and for model storage, charged per month per model. For more information, see [Amazon Bedrock pricing](https://aws.amazon.com/bedrock/pricing/).
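As a back-of-the-envelope sketch of the token count that training is billed on (the corpus size and epoch count below are made-up numbers, and no actual price is applied):

```python
def estimate_training_tokens(corpus_tokens: int, epochs: int) -> int:
    """Tokens billed for training = tokens in the training corpus x number of epochs."""
    return corpus_tokens * epochs

# Hypothetical example: a 2,000,000-token corpus trained for 3 epochs
billed = estimate_training_tokens(2_000_000, 3)
print(billed)  # 6000000
```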

# Customize a model with fine-tuning in Amazon Bedrock
<a name="custom-model-fine-tuning"></a>

With Amazon Bedrock, you can train a foundation model to improve its performance on specific tasks (known as fine-tuning). For information about fine-tuning Amazon Nova models, see [Fine-tuning Amazon Nova models](https://docs.aws.amazon.com/nova/latest/userguide/customize-fine-tune.html).

## Supported models and Regions for fine-tuning
<a name="custom-model-supported"></a>

The following table shows the foundation models that you can fine-tune:


| Provider | Model | Model ID | Supported Regions | 
| --- | --- | --- | --- | 
| Amazon | Nova 2 Lite | amazon.nova-2-lite-v1:0:256k |  us-east-1  | 
| Amazon | Nova Canvas | amazon.nova-canvas-v1:0 |  us-east-1  | 
| Amazon | Nova Lite | amazon.nova-lite-v1:0:300k |  us-east-1  | 
| Amazon | Nova Micro | amazon.nova-micro-v1:0:128k |  us-east-1  | 
| Amazon | Nova Pro | amazon.nova-pro-v1:0:300k |  us-east-1  | 
| Amazon | Titan Image Generator G1 v2 | amazon.titan-image-generator-v2:0 |  us-east-1, us-west-2  | 
| Amazon | Titan Multimodal Embeddings G1 | amazon.titan-embed-image-v1:0 |  us-east-1, us-west-2  | 
| Anthropic | Claude 3 Haiku | anthropic.claude-3-haiku-20240307-v1:0:200k |  us-west-2  | 
| Meta | Llama 3.1 70B Instruct | meta.llama3-1-70b-instruct-v1:0:128k |  us-west-2  | 
| Meta | Llama 3.1 8B Instruct | meta.llama3-1-8b-instruct-v1:0:128k |  us-west-2  | 
| Meta | Llama 3.2 11B Instruct | meta.llama3-2-11b-instruct-v1:0:128k |  us-west-2  | 
| Meta | Llama 3.2 1B Instruct | meta.llama3-2-1b-instruct-v1:0:128k |  us-west-2  | 
| Meta | Llama 3.2 3B Instruct | meta.llama3-2-3b-instruct-v1:0:128k |  us-west-2  | 
| Meta | Llama 3.2 90B Instruct | meta.llama3-2-90b-instruct-v1:0:128k |  us-west-2  | 
| Meta | Llama 3.3 70B Instruct | meta.llama3-3-70b-instruct-v1:0:128k |  us-west-2  | 

For information about model customization hyperparameters for each model, see [Custom model hyperparameters](custom-models-hp.md).

# Prepare data for fine-tuning your models
<a name="model-customization-prepare"></a>

To prepare training and validation datasets for your custom model, you create `.jsonl` files, where each line is a JSON object corresponding to a record. Before you can begin a model customization job, you must at minimum prepare a training dataset. The files you create must conform to the format for the customization method and model that you choose, and the records in them must conform to size requirements that depend on your model.

For information about model requirements, see [Model requirements for training and validation datasets](#model-training-validation-requirements). To see the default quotas that apply to training and validation datasets used for customizing different models, see the **Sum of training and validation records** quotas in [Amazon Bedrock endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/bedrock.html) in the AWS General Reference. 

Whether a validation dataset is supported and the format of your training and validation dataset depend on the following factors. 
+ The type of fine-tuning customization job.
+ The input and output modalities of the data.

For information about fine-tuning Amazon Nova models, see [Fine-tuning Amazon Nova models](https://docs.aws.amazon.com/nova/latest/userguide/customize-fine-tune.html).

## Supported modalities for fine-tuning
<a name="model-customization-data-support"></a>

The following sections describe the different fine-tuning capabilities supported by each model, organized by their input and output modalities. For information about fine-tuning Amazon Nova models, see [Fine-tuning Amazon Nova models](https://docs.aws.amazon.com/nova/latest/userguide/customize-fine-tune.html).

**Text-to-Text models**

Text-to-Text models can be fine-tuned for various text-based tasks, including both conversational and non-conversational applications. For information about preparing data for fine-tuning Text-to-Text models, see [Prepare data for fine-tuning text-to-text models](#preparing-text-data). 

The following non-conversational models are optimized for tasks like summarization, translation, and question answering:
+ Amazon Titan Text G1 - Express
+ Amazon Titan Text G1 - Lite
+ Amazon Titan Text Premier
+ Cohere Command
+ Cohere Command Light
+ Meta Llama 3.1 8B Instruct
+ Meta Llama 3.1 70B Instruct

The following conversational models are designed for single-turn and multi-turn interactions. If a model uses the Converse API, your fine-tuning dataset must follow the Converse API message format and include system, user, and assistant messages. For examples, see [Prepare data for fine-tuning text-to-text models](#preparing-text-data). For more information about Converse API operations, see [Carry out a conversation with the Converse API operations](conversation-inference.md).
+ Anthropic Claude 3 Haiku
+ Meta Llama 3.2 1B Instruct (Converse API format)
+ Meta Llama 3.2 3B Instruct (Converse API format)
+ Meta Llama 3.2 11B Instruct Vision (Converse API format)
+ Meta Llama 3.2 90B Instruct Vision (Converse API format)
+ Meta Llama 3.3 70B Vision Instruct (Converse API format)

**Text-Image-to-Text and Text-to-Image models**

The following models support fine-tuning for image generation and text-image processing. These models process or generate images based on textual input, or generate text based on both textual and image inputs. For information about preparing data for fine-tuning Text-Image-to-Text and Text-to-Image models, see [Prepare data for fine-tuning image and text processing models](#preparing-image-text-data).
+ Amazon Titan Image Generator G1 V1
+ Meta Llama 3.2 11B Instruct Vision
+ Meta Llama 3.2 90B Instruct Vision
+ Meta Llama 3.3 70B Vision Instruct

**Image-to-Embeddings**

The following models support fine-tuning for tasks like classification and retrieval. These models generate numerical representations (embeddings) from image inputs. For information about preparing data for fine-tuning Image-to-Embeddings models, see [Prepare data for fine-tuning image generation and embedding models](#preparing-image-generation-data).
+ Amazon Titan Multimodal Embeddings G1
+ Amazon Titan Image Generator G1 V1

## Model requirements for training and validation datasets
<a name="model-training-validation-requirements"></a>

The following sections list the requirements for training and validation datasets for a model. For information about dataset constraints for Amazon Nova models, see [Fine-tuning Amazon Nova models](https://docs.aws.amazon.com/nova/latest/userguide/customize-fine-tune.html).

### Amazon Titan Text Premier
<a name="quotas-cm-titan-premier"></a>



| Description | Maximum (Fine-tuning) | 
| --- | --- | 
| Sum of input and output tokens when batch size is 1 | 4,096 | 
| Sum of input and output tokens when batch size is 2, 3, or 4 | N/A | 
| Character quota per sample in dataset | Token quota x 6 (estimated) | 
| Training dataset file size | 1 GB | 
| Validation dataset file size | 100 MB | 

### Amazon Titan Text G1 - Express
<a name="quotas-cm-titan-text"></a>



| Description | Maximum (Fine-tuning) | 
| --- | --- | 
| Sum of input and output tokens when batch size is 1 | 4,096 | 
| Sum of input and output tokens when batch size is 2, 3, or 4 | 2,048 | 
| Character quota per sample in dataset | Token quota x 6 (estimated) | 
| Training dataset file size | 1 GB | 
| Validation dataset file size | 100 MB | 

### Amazon Titan Text G1 - Lite
<a name="quotas-cm-titan-text-lite"></a>



| Description | Maximum (Fine-tuning) | 
| --- | --- | 
| Sum of input and output tokens when batch size is 1 or 2 | 4,096 | 
| Sum of input and output tokens when batch size is 3, 4, 5, or 6 | 2,048 | 
| Character quota per sample in dataset | Token quota x 6 (estimated) | 
| Training dataset file size | 1 GB | 
| Validation dataset file size | 100 MB | 

### Amazon Titan Image Generator G1 V1
<a name="quotas-cm-titan-image"></a>



| Description | Minimum (Fine-tuning) | Maximum (Fine-tuning) | 
| --- | --- | --- | 
| Text prompt length in training sample, in characters | 3 | 1,024 | 
| Records in a training dataset | 5 | 10,000 | 
| Input image size | 0 | 50 MB | 
| Input image height in pixels | 512 | 4,096 | 
| Input image width in pixels | 512 | 4,096 | 
| Input image total pixels | 0 | 12,582,912 | 
| Input image aspect ratio | 1:4 | 4:1 | 

### Amazon Titan Multimodal Embeddings G1
<a name="quotas-cm-titan-mm-embed"></a>



| Description | Minimum (Fine-tuning) | Maximum (Fine-tuning) | 
| --- | --- | --- | 
| Text prompt length in training sample, in characters | 0 | 2,560 | 
| Records in a training dataset | 1,000 | 500,000 | 
| Input image size | 0 | 5 MB | 
| Input image height in pixels | 128 | 4,096 | 
| Input image width in pixels | 128 | 4,096 | 
| Input image total pixels | 0 | 12,582,912 | 
| Input image aspect ratio | 1:4 | 4:1 | 

### Meta Llama 3.1
<a name="quotas-cm-meta-llama-3-1"></a>



| Description | Minimum (Fine-tuning) | Maximum (Fine-tuning) | 
| --- | --- | --- | 
| Input tokens | 0 | 16,000 | 
| Output tokens | 0 | 16,000 | 
| Character quota per sample in dataset | 0 | Token quota x 6 (estimated) | 
| Sum of input and output tokens | 0 | 16,000 | 
| Sum of training and validation records | 100 | 10,000 (adjustable using service quotas) | 

### Meta Llama 3.2
<a name="quotas-cm-meta-llama-3-2"></a>

Supported image formats for Meta Llama 3.2 11B Vision Instruct and Meta Llama 3.2 90B Vision Instruct include `gif`, `jpeg`, `png`, and `webp`. To estimate the image-to-token conversion during fine-tuning of these models, you can use this formula as an approximation: `Tokens = min(2, max(Height // 560, 1)) * min(2, max(Width // 560, 1)) * 1601`. Images are converted into approximately 1,601 to 6,404 tokens, depending on their size.
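The approximation above translates directly into code (note the integer division, as in the formula):

```python
def estimate_image_tokens(height: int, width: int) -> int:
    """Approximate tokens consumed by one image during fine-tuning, using
    Tokens = min(2, max(Height // 560, 1)) * min(2, max(Width // 560, 1)) * 1601."""
    return min(2, max(height // 560, 1)) * min(2, max(width // 560, 1)) * 1601

print(estimate_image_tokens(512, 512))    # 1601 (smallest bucket)
print(estimate_image_tokens(1120, 1120))  # 6404 (largest bucket)
```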



| Description | Minimum (Fine-tuning) | Maximum (Fine-tuning) | 
| --- | --- | --- | 
| Sum of input and output tokens | 0 | 16,000 (10,000 for Meta Llama 3.2 90B) | 
| Sum of training and validation records | 100 | 10,000 (adjustable using service quotas) | 
| Input image size (Meta Llama 3.2 11B and 90B Instruct models) | 0 | 10 MB | 
| Input image height in pixels (Meta Llama 3.2 11B and 90B Instruct models) | 10 | 8,192 | 
| Input image width in pixels (Meta Llama 3.2 11B and 90B Instruct models) | 10 | 8,192 | 

### Meta Llama 3.3
<a name="quotas-cm-meta-llama-3-3"></a>



| Description | Minimum (Fine-tuning) | Maximum (Fine-tuning) | 
| --- | --- | --- | 
| Sum of input and output tokens | 0 | 16,000 | 
| Sum of training and validation records | 100 | 10,000 (adjustable using service quotas) | 

### Cohere Command
<a name="quotas-cm-cohere-command"></a>



| Description | Maximum (Fine-tuning) | 
| --- | --- | 
| Input tokens | 4,096 | 
| Output tokens | 2,048 | 
| Character quota per sample in dataset | Token quota x 6 (estimated) | 
| Records in a training dataset | 10,000 | 
| Records in a validation dataset | 1,000 | 

### Anthropic Claude 3 Haiku
<a name="anthropic-claude-3-haiku"></a>



| Description | Value (Fine-tuning) | 
| --- | --- | 
| Minimum number of records | 32 | 
| Maximum training records | 10,000 | 
| Maximum validation records | 1,000 | 
| Maximum total records | 10,000 (adjustable using service quotas) | 
| Maximum tokens | 32,000 | 
| Maximum training dataset size | 10 GB | 
| Maximum validation dataset size | 1 GB | 
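As a sketch, you could pre-check your dataset sizes against these quotas before submitting a job (the helper below is illustrative and hard-codes the record quotas from the table above):

```python
def check_haiku_dataset_sizes(training_records: int, validation_records: int) -> list:
    """Return a list of quota violations for an Anthropic Claude 3 Haiku fine-tuning job.
    Record quotas are copied from the table above."""
    problems = []
    if training_records < 32:
        problems.append("fewer than the 32-record minimum")
    if training_records > 10_000:
        problems.append("more than 10,000 training records")
    if validation_records > 1_000:
        problems.append("more than 1,000 validation records")
    if training_records + validation_records > 10_000:
        problems.append("more than 10,000 total records")
    return problems

print(check_haiku_dataset_sizes(31, 0))  # flags the 32-record minimum
```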

## Prepare data for fine-tuning text-to-text models
<a name="preparing-text-data"></a>

**Note**  
For information about fine-tuning Amazon Nova models, see [Fine-tuning Amazon Nova models](https://docs.aws.amazon.com/nova/latest/userguide/customize-fine-tune.html).

For fine-tuning text-to-text models, each JSON object is a sample containing structured fields designed to guide the model toward generating the desired textual output based on a provided textual prompt. The data format varies depending on the use case, broadly categorized into non-conversational and conversational use cases.

------
#### [ Non-conversational tasks ]

Non-conversational tasks involve generating a single output for a given input. Each dataset sample includes a `prompt` field containing the input text and a `completion` field with the expected output. This format supports a range of tasks, such as question answering, summarization, translation, text completion, and information extraction.

Example format

```
{"prompt": "What is the capital of France?", "completion": "The capital of France is Paris."}
{"prompt": "Summarize the article about climate change.", "completion": "Climate change refers to the long-term alteration of temperature and typical weather patterns in a place."}
```

To plan your dataset size, estimate the number of tokens in your data by assuming approximately 6 characters per token.
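For example, a rough pre-submission token estimate for a `prompt`/`completion` record, assuming the 6-characters-per-token heuristic above (actual tokenizer counts will differ):

```python
import json

CHARS_PER_TOKEN = 6  # rough heuristic; actual tokenization varies by model

def estimate_record_tokens(line: str) -> int:
    """Estimate the token count of one prompt/completion record in a .jsonl file."""
    record = json.loads(line)
    chars = len(record["prompt"]) + len(record["completion"])
    return max(1, chars // CHARS_PER_TOKEN)

sample = '{"prompt": "What is the capital of France?", "completion": "The capital of France is Paris."}'
print(estimate_record_tokens(sample))
```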

------
#### [ Converse API format (Single turn and Multi turn) ]

To use the Converse API, you call the `Converse` or `ConverseStream` operations to send messages to a model. To call `Converse`, you need permission for the `bedrock:InvokeModel` action. To call `ConverseStream`, you need permission for the `bedrock:InvokeModelWithResponseStream` action. For more information, see [Using the Converse API](conversation-inference-call.md) and [Carry out a conversation with the Converse API operations](conversation-inference.md).

Example format

```
{
    "schemaVersion": "bedrock-conversation-2024",
    "system": [
        {
            "text": "You are a digital assistant with a friendly personality"
        }
    ],
    "messages": [
        {
            "role": "user",
            "content": [
                {
                    "text": "What is the capital of Mars?"
                }
            ]
        },
        {
            "role": "assistant",
            "content": [
                {
                    "text": "Mars does not have a capital. Perhaps it will one day."
                }
            ]
        }
    ]
}
```
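As an illustrative sanity check, the structure shown above can be verified programmatically before you upload a dataset (this checks only the fields used in the example, not the full Converse schema):

```python
import json

def check_converse_record(line: str) -> None:
    """Assert that one .jsonl line matches the structure of the example above."""
    record = json.loads(line)
    assert record["schemaVersion"] == "bedrock-conversation-2024"
    roles = [m["role"] for m in record["messages"]]
    # Turns must alternate, starting with user and ending with assistant
    assert roles[0] == "user" and roles[-1] == "assistant"
    assert all(a != b for a, b in zip(roles, roles[1:]))
    for message in record["messages"]:
        assert all("text" in block for block in message["content"])

record = {
    "schemaVersion": "bedrock-conversation-2024",
    "system": [{"text": "You are a digital assistant with a friendly personality"}],
    "messages": [
        {"role": "user", "content": [{"text": "What is the capital of Mars?"}]},
        {"role": "assistant", "content": [{"text": "Mars does not have a capital."}]},
    ],
}
check_converse_record(json.dumps(record))  # raises AssertionError if malformed
```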

------
#### [ Anthropic Claude 3 Haiku: Single-turn conversations ]

Single-turn conversational tasks involve isolated exchanges, where the model generates a response based solely on the current user input, without considering prior context. Each dataset sample uses a `messages` array with alternating roles of `user` and `assistant`.

Format

```
{"system": "<system message>","messages":[{"role": "user", "content": "<user query>"},{"role": "assistant", "content": "<expected generated text>"}]}
```

Example

```
{"system": "You are a helpful assistant.","messages":[{"role": "user", "content": "what is AWS"},{"role": "assistant", "content": "it's Amazon Web Services."}]}
```

------
#### [ Anthropic Claude 3 Haiku: Multi-turn conversations ]

Multi-turn conversational tasks involve extended dialogues where the model must generate responses while preserving the context of previous exchanges. This format captures the dynamic nature of interactive tasks, such as customer support or complex discussions.

Format

```
{"system": "<system message>","messages":[{"role": "user", "content": "<user query 1>"},{"role": "assistant", "content": "<expected generated text 1>"}, {"role": "user", "content": "<user query 2>"},{"role": "assistant", "content": "<expected generated text 2>"}]}
```

Example

```
{"system": "system message","messages":[{"role": "user", "content": "Hello there."},{"role": "assistant", "content": "Hi, how can I help you?"},{"role": "user", "content": "what are LLMs?"},{"role": "assistant", "content": "LLM means large language model."}]}
```
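Rather than hand-writing these lines, you can build each record as a dictionary and serialize it, which guarantees valid JSON on every line (a minimal sketch using the example conversation above):

```python
import json

record = {
    "system": "You are a helpful assistant.",
    "messages": [
        {"role": "user", "content": "Hello there."},
        {"role": "assistant", "content": "Hi, how can I help you?"},
        {"role": "user", "content": "what are LLMs?"},
        {"role": "assistant", "content": "LLM means large language model."},
    ],
}
line = json.dumps(record)  # write one such line per record to the .jsonl file
```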

------

## Prepare data for fine-tuning image and text processing models
<a name="preparing-image-text-data"></a>

**Note**  
For information about fine-tuning Amazon Nova models, see [Fine-tuning Amazon Nova models](https://docs.aws.amazon.com/nova/latest/userguide/customize-fine-tune.html).

For fine-tuning image-text-to-text models, each JSON object is a sample containing a conversation structured as a `messages` array, which consists of alternating JSON objects representing the user's inputs and the assistant's responses. User inputs can include both text and images, while assistant responses are always textual. This structure supports both single-turn and multi-turn conversational flows, enabling the model to handle diverse tasks effectively. Supported image formats for Meta Llama 3.2 11B Vision Instruct and Meta Llama 3.2 90B Vision Instruct include `gif`, `jpeg`, `png`, and `webp`.

To allow Amazon Bedrock access to the image files, add an IAM policy similar to the one in [Permissions to access training and validation files and to write output files in S3](model-customization-iam-role.md#model-customization-iam-role-s3) to the Amazon Bedrock model customization service role that you set up or that was automatically set up for you in the console. The Amazon S3 paths you provide in the training dataset must be in folders that you specify in the policy.

**Single-turn conversations**

Each JSON object for single-turn conversations consists of a user message and an assistant message. The user message includes a `role` field set to *user* and a `content` field containing an array of objects that describe the input modality. For text inputs, a content object includes a `text` field with the user's question or prompt. For image inputs, a content object specifies the image `format` (for example, *jpeg* or *png*) and its `source` with a `uri` pointing to the Amazon S3 location of the image. The `uri` is the unique path to the image stored in an Amazon S3 bucket, typically in the format `s3://<bucket-name>/<path-to-file>`. The assistant message includes a `role` field set to *assistant* and a `content` field containing an array with a `text` field that holds the assistant's generated response.

Example format

```
{
    "schemaVersion": "bedrock-conversation-2024",
    "system": [{
        "text": "You are a smart assistant that answers questions respectfully"
    }],
    "messages": [{
            "role": "user",
            "content": [{
                    "text": "What does the text in this image say?"
                },
                {
                    "image": {
                        "format": "png",
                        "source": {
                            "s3Location": {
                                "uri": "s3://your-bucket/your-path/your-image.png",
                                "bucketOwner": "your-aws-account-id"
                            }
                        }
                    }
                }
            ]
        },
        {
            "role": "assistant",
            "content": [{
                "text": "The text in the attached image says 'LOL'."
            }]
        }
    ]
}
```

**Multi-turn conversations**

Each JSON object for multi-turn conversations contains a sequence of messages with alternating roles, where user messages and assistant messages are structured consistently to enable coherent exchanges. User messages include a `role` field set to *user* and a `content` field containing an array of objects that describe the input modality. For text inputs, a content object includes a `text` field with the user's question or follow-up; for image inputs, it specifies the image `format` and its `source` with a `uri` pointing to the Amazon S3 location of the image. The `uri` is a unique identifier in the format `s3://<bucket-name>/<path-to-file>` that allows the model to access the image from the designated Amazon S3 bucket. Assistant messages include a `role` field set to *assistant* and a `content` field containing an array with a `text` field that holds the assistant's generated response. Conversations can span multiple exchanges, allowing the assistant to maintain context and deliver coherent responses throughout.

Example format

```
{
    "schemaVersion": "bedrock-conversation-2024",
    "system": [{
        "text": "You are a smart assistant that answers questions respectfully"
    }],
    "messages": [{
            "role": "user",
            "content": [{
                    "text": "What does the text in this image say?"
                },
                {
                    "image": {
                        "format": "png",
                        "source": {
                            "s3Location": {
                                "uri": "s3://your-bucket/your-path/your-image.png",
                                "bucketOwner": "your-aws-account-id"
                            }
                        }
                    }
                }
            ]
        },
        {
            "role": "assistant",
            "content": [{
                "text": "The text in the attached image says 'LOL'."
            }]
        },
        {
            "role": "user",
            "content": [{
                    "text": "What does the text in this image say?"
                }
            ]
        },
        {
            "role": "assistant",
            "content": [{
                "text": "The text in the attached image says 'LOL'."
            }]
        }
        
    ]
}
```

## Prepare data for fine-tuning image generation and embedding models
<a name="preparing-image-generation-data"></a>

**Note**  
Amazon Nova models have different fine-tuning requirements. To fine-tune these models, follow the instructions at [Fine-tuning Amazon Nova models](https://docs.aws.amazon.com/nova/latest/userguide/customize-fine-tune.html).

For text-to-image or image-to-embedding models, prepare a training dataset. Validation datasets are not supported. Each JSON object is a sample containing an `image-ref`, the Amazon S3 URI for an image, and a `caption` that can serve as a prompt for the image.

The images must be in JPEG or PNG format.

```
{"image-ref": "s3://bucket/path/to/image001.png", "caption": "<prompt text>"}
{"image-ref": "s3://bucket/path/to/image002.png", "caption": "<prompt text>"}
{"image-ref": "s3://bucket/path/to/image003.png", "caption": "<prompt text>"}
```

The following is an example item:

```
{"image-ref": "s3://amzn-s3-demo-bucket/my-pets/cat.png", "caption": "an orange cat with white spots"}
```
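One way to assemble such a file, assuming you already have a mapping from S3 image URIs to captions (the bucket name and captions below are placeholders):

```python
import json

# Placeholder mapping of S3 image URIs to captions
captions = {
    "s3://amzn-s3-demo-bucket/my-pets/cat.png": "an orange cat with white spots",
    "s3://amzn-s3-demo-bucket/my-pets/dog.png": "a brown dog on a beach",
}

with open("train.jsonl", "w") as f:
    for uri, caption in captions.items():
        # One JSON object per line, as the .jsonl format requires
        f.write(json.dumps({"image-ref": uri, "caption": caption}) + "\n")
```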

To allow Amazon Bedrock access to the image files, add an IAM policy similar to the one in [Permissions to access training and validation files and to write output files in S3](model-customization-iam-role.md#model-customization-iam-role-s3) to the Amazon Bedrock model customization service role that you set up or that was automatically set up for you in the console. The Amazon S3 paths you provide in the training dataset must be in folders that you specify in the policy.

# Fine-tune Amazon Nova models with supervised fine-tuning
<a name="nova-2-sft-data-prep"></a>

Amazon Nova 2.0 SFT data uses the same Converse API format as Amazon Nova 1.0, with the addition of optional reasoning content fields. For complete format specifications, see [ReasoningContentBlock](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_ReasoningContentBlock.html) and [Converse API schema](https://docs.aws.amazon.com/bedrock/latest/userguide/conversation-inference-call.html).

**Supported features:**
+ **Input types** – Text, image, or video in user content blocks
+ **Assistant content** – Text-only responses and reasoning content
+ **Dataset composition** – Must be homogeneous. Choose one of the following: text-only turns, text + image turns, or text + video turns

**Important**  
You cannot mix images and videos within the same dataset or across different turns.

**Current limitations:**
+ **Tool usage** – Although tool usage is supported in the input format, it is not currently supported by Amazon Nova 2.0 SFT. Adding tool sections might cause your job to fail.
+ **Multimodal reasoning content** – Although the Converse format supports image-based reasoning content, this is not supported by Amazon Nova 2.0 SFT.
+ **Validation sets** – You might be able to provide a validation set through the UI, but it is not used during SFT training.

**Supported media formats:**
+ **Images** – PNG, JPEG, GIF
+ **Videos** – MOV, MKV, MP4

## Data format examples
<a name="nova-2-sft-data-examples"></a>

------
#### [ Text-only ]

This example shows a basic text-only format compatible with Amazon Nova 1.0.

```
{
  "schemaVersion": "bedrock-conversation-2024",
  "system": [
    {
      "text": "You are a digital assistant with a friendly personality"
    }
  ],
  "messages": [
    {
      "role": "user",
      "content": [
        {
          "text": "What country is right next to Australia?"
        }
      ]
    },
    {
      "role": "assistant",
      "content": [
        {
          "text": "The closest country is New Zealand"
        }
      ]
    }
  ]
}
```

------
#### [ Text with reasoning ]

This example shows text with optional reasoning content for Amazon Nova 2.0.

```
{
  "schemaVersion": "bedrock-conversation-2024",
  "system": [
    {
      "text": "You are a digital assistant with a friendly personality"
    }
  ],
  "messages": [
    {
      "role": "user",
      "content": [
        {
          "text": "What country is right next to Australia?"
        }
      ]
    },
    {
      "role": "assistant",
      "content": [
        {
          "reasoningContent": {
            "reasoningText": {
              "text": "I need to use my world knowledge of geography to answer this question"
            }
          }
        },
        {
          "text": "The closest country to Australia is New Zealand, located to the southeast across the Tasman Sea."
        }
      ]
    }
  ]
}
```

**Note**  
Currently, only `reasoningText` is supported within `reasoningContent`. Multimodal reasoning content is not yet available.

------
#### [ Image + text ]

This example shows how to include image input with text.

```
{
  "schemaVersion": "bedrock-conversation-2024",
  "system": [
    {
      "text": "You are a helpful assistant."
    }
  ],
  "messages": [
    {
      "role": "user",
      "content": [
        {
          "image": {
            "format": "jpeg",
            "source": {
              "s3Location": {
                "uri": "s3://your-bucket/your-path/your-image.jpg",
                "bucketOwner": "your-aws-account-id"
              }
            }
          }
        },
        {
          "text": "Which country is highlighted in the image?"
        }
      ]
    },
    {
      "role": "assistant",
      "content": [
        {
          "reasoningContent": {
            "reasoningText": {
              "text": "I will determine the highlighted country by examining its location on the map and using my geographical knowledge"
            }
          }
        },
        {
          "text": "The highlighted country is New Zealand"
        }
      ]
    }
  ]
}
```

------
#### [ Video + text ]

This example shows how to include video input with text.

```
{
  "schemaVersion": "bedrock-conversation-2024",
  "system": [
    {
      "text": "You are a helpful assistant."
    }
  ],
  "messages": [
    {
      "role": "user",
      "content": [
        {
          "video": {
            "format": "mp4",
            "source": {
              "s3Location": {
                "uri": "s3://your-bucket/your-path/your-video.mp4",
                "bucketOwner": "your-aws-account-id"
              }
            }
          }
        },
        {
          "text": "What is shown in this video?"
        }
      ]
    },
    {
      "role": "assistant",
      "content": [
        {
          "reasoningContent": {
            "reasoningText": {
              "text": "I will analyze the video content to identify key elements"
            }
          }
        },
        {
          "text": "The video shows a map with New Zealand highlighted"
        }
      ]
    }
  ]
}
```

------

## Reasoning content
<a name="nova-2-reasoning-modes"></a>

Reasoning content (also called chain-of-thought) captures the model's intermediate thinking steps before generating a final answer. In the `assistant` turn, use the `reasoningContent` field to include these reasoning traces.

**How loss is calculated:**
+ **With reasoning content** – Training loss includes both reasoning tokens and final output tokens
+ **Without reasoning content** – Training loss is calculated only on the final output tokens

**When to enable reasoning mode:** Set `reasoning_enabled: true` in your training configuration when you want the model to generate thinking tokens before producing final outputs or need improved performance on complex reasoning tasks. Set `reasoning_enabled: false` when you're training on straightforward tasks that don't benefit from explicit reasoning steps.

**Note**  
You can enable reasoning mode regardless of whether your training data contains reasoning content. However, including reasoning traces in your training data is recommended so the model can learn from these examples and improve reasoning quality.

**Formatting guidelines:**
+ Use plain text for reasoning content.
+ Avoid markup tags like `<thinking>` and `</thinking>` unless specifically required by your task.
+ Ensure reasoning content is clear and relevant to the problem-solving process.

**Effective reasoning content should include:**
+ Intermediate thoughts and analysis
+ Logical deductions and inference steps
+ Step-by-step problem-solving approaches
+ Explicit connections between steps and conclusions

If your dataset lacks reasoning traces, you can create them using a reasoning-capable model like Nova Premier. Provide your input-output pairs to the model and capture its reasoning process to build a reasoning-augmented dataset.
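If you generate reasoning traces this way, each augmented example must still follow the `bedrock-conversation-2024` schema shown in the examples above. The following is a minimal sketch of assembling one such record; the helper name is illustrative:

```python
import json

def build_reasoning_record(user_text, reasoning_text, answer_text):
    """Assemble one bedrock-conversation-2024 training record that pairs
    a captured reasoning trace with the final answer."""
    return {
        "schemaVersion": "bedrock-conversation-2024",
        "messages": [
            {"role": "user", "content": [{"text": user_text}]},
            {
                "role": "assistant",
                "content": [
                    # Reasoning trace goes first, in plain text without markup tags
                    {"reasoningContent": {"reasoningText": {"text": reasoning_text}}},
                    # Final answer follows as a separate content block
                    {"text": answer_text},
                ],
            },
        ],
    }

record = build_reasoning_record(
    "Which country is highlighted in the image?",
    "I will determine the highlighted country by examining its location on the map",
    "The highlighted country is New Zealand",
)
print(json.dumps(record))  # one line of the reasoning-augmented JSONL dataset
```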

## Dataset preparation guidelines
<a name="nova-2-dataset-preparation"></a>

The following table provides guidelines for preparing your training dataset.


**Dataset preparation guidelines**  

| Guideline | Description | 
| --- | --- | 
| Size and quality |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/bedrock/latest/userguide/nova-2-sft-data-prep.html)  | 
| Diversity |  Include diverse examples that do the following: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/bedrock/latest/userguide/nova-2-sft-data-prep.html)  | 
| Output formatting |  Clearly specify the desired output format in assistant responses. Examples include JSON structures, tables, CSV format, or custom formats specific to your application.  | 
| Multi-turn conversations |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/bedrock/latest/userguide/nova-2-sft-data-prep.html)  | 
| Quality checklist |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/bedrock/latest/userguide/nova-2-sft-data-prep.html)  | 

# Custom model hyperparameters
<a name="custom-models-hp"></a>

The following reference content covers the hyperparameters that are available for training each Amazon Bedrock custom model.

A hyperparameter is a parameter that controls the training process, such as the learning rate or epoch count. You set hyperparameters for custom model training when you [submit](model-customization-submit.md) the fine-tuning job with the Amazon Bedrock console or by calling the [CreateModelCustomizationJob](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_CreateModelCustomizationJob.html) API operation.

## Amazon Nova Understanding model customization hyperparameters
<a name="cm-hp-nova-mm"></a>

The Amazon Nova Lite, Amazon Nova Micro, and Amazon Nova Pro models support the following three hyperparameters for model customization. For more information, see [Customize your model to improve its performance for your use case](custom-models.md).

 For information about fine-tuning Amazon Nova models, see [Fine-tuning Amazon Nova models](https://docs.aws.amazon.com/nova/latest/userguide/customize-fine-tune.html).

The number of epochs you specify increases your model customization cost by processing more tokens. Each epoch processes the entire training dataset once. For information about pricing, see [Amazon Bedrock pricing](https://aws.amazon.com/bedrock/pricing).


****  

| Hyperparameter (console) | Hyperparameter (API) | Definition | Type | Minimum | Maximum | Default | 
| --- | --- | --- | --- | --- | --- | --- | 
| Epochs | epochCount | The number of iterations through the entire training dataset | integer | 1 | 5 | 2 | 
| Learning rate | learningRate | The rate at which model parameters are updated after each batch | float | 1.00E-6 | 1.00E-4 | 1.00E-5 | 
| Learning rate warmup steps | learningRateWarmupSteps | The number of iterations over which the learning rate is gradually increased to the specified rate | integer | 0 | 100 | 10 | 

The default epoch number is 2, which works for most cases. In general, larger data sets require fewer epochs to converge, while smaller data sets require more epochs to converge. A faster convergence might also be achieved by increasing the learning rate, but this is less desirable because it might lead to training instability at convergence. We recommend starting with the default hyperparameters, which are based on our assessment across tasks of different complexity and data sizes.

The learning rate gradually increases to the set value during warmup. Therefore, we recommend that you avoid a large warmup value when the training sample is small, because the learning rate might never reach the set value during the training process. We recommend setting the warmup steps by dividing the dataset size by 640 for Amazon Nova Micro, 160 for Amazon Nova Lite, and 320 for Amazon Nova Pro.
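This rule of thumb can be sketched as a small helper, with the divisor chosen per model and the result clamped to the hyperparameter's 0–100 range from the table above (the function and model keys are illustrative):

```python
# Divisors from the warmup recommendation above, keyed by model (keys are illustrative)
WARMUP_DIVISOR = {"nova-micro": 640, "nova-lite": 160, "nova-pro": 320}

def recommended_warmup_steps(dataset_size, model):
    """Return a suggested learningRateWarmupSteps value for the given model."""
    steps = dataset_size // WARMUP_DIVISOR[model]
    # learningRateWarmupSteps accepts values from 0 to 100
    return max(0, min(steps, 100))

print(recommended_warmup_steps(8000, "nova-lite"))  # 8000 / 160 = 50
```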

## Amazon Nova Canvas model customization hyperparameters
<a name="cm-hp-nova-canvas"></a>

The Amazon Nova Canvas model supports the following hyperparameters for model customization.


****  

| Hyperparameter (console) | Hyperparameter (API) | Definition | Minimum | Maximum | Default | 
| --- | --- | --- | --- | --- | --- | 
| Batch size | batchSize | Number of samples processed before updating model parameters | 8 | 192 | 8 | 
| Steps | stepCount | Number of times the model is exposed to each batch | 10 | 20,000 | 500 | 
| Learning rate | learningRate | Rate at which model parameters are updated after each batch | 1.00E-7 | 1.00E-4 | 1.00E-5 | 

## Amazon Titan text model customization hyperparameters
<a name="cm-hp-titan-text"></a>

Amazon Titan Text Premier model supports the following hyperparameters for model customization. The number of epochs you specify increases your model customization cost by processing more tokens. Each epoch processes the entire training dataset once. For information about pricing, see [Amazon Bedrock pricing](https://aws.amazon.com/bedrock/pricing).


****  

| Hyperparameter (console) | Hyperparameter (API) | Definition | Type | Minimum | Maximum | Default | 
| --- | --- | --- | --- | --- | --- | --- | 
| Epochs | epochCount | The number of iterations through the entire training dataset | integer | 1 | 5 | 2 | 
| Batch size (micro) | batchSize | The number of samples processed before updating model parameters | integer | 1 | 1 | 1 | 
| Learning rate | learningRate | The rate at which model parameters are updated after each batch | float | 1.00E-07 | 1.00E-05 | 1.00E-06 | 
| Learning rate warmup steps | learningRateWarmupSteps | The number of iterations over which the learning rate is gradually increased to the specified rate | integer | 0 | 20 | 5 | 

Amazon Titan Text models, such as Lite and Express, support the following hyperparameters for model customization. The number of epochs you specify increases your model customization cost by processing more tokens. Each epoch processes the entire training dataset once. For information about pricing, see [Amazon Bedrock pricing](https://aws.amazon.com/bedrock/pricing).


****  

| Hyperparameter (console) | Hyperparameter (API) | Definition | Type | Minimum | Maximum | Default | 
| --- | --- | --- | --- | --- | --- | --- | 
| Epochs | epochCount | The number of iterations through the entire training dataset | integer | 1 | 10 | 5 | 
| Batch size (micro) | batchSize | The number of samples processed before updating model parameters | integer | 1 | 64 | 1 | 
| Learning rate | learningRate | The rate at which model parameters are updated after each batch | float | 0.0 | 1 | 1.00E-5 | 
| Learning rate warmup steps | learningRateWarmupSteps | The number of iterations over which the learning rate is gradually increased to the specified rate | integer | 0 | 250 | 5 | 

## Amazon Titan Image Generator G1 models customization hyperparameters
<a name="cm-hp-titan-image"></a>

The Amazon Titan Image Generator G1 model supports the following hyperparameters for model customization.

**Note**  
`stepCount` has no default value and must be specified. `stepCount` supports the value `auto`. `auto` prioritizes model performance over training cost by automatically determining a number based on the size of your dataset. Training job costs depend on the number that `auto` determines. To understand how job cost is calculated and to see examples, see [Amazon Bedrock Pricing](https://aws.amazon.com/bedrock/pricing).


****  

| Hyperparameter (console) | Hyperparameter (API) | Definition | Minimum | Maximum | Default | 
| --- | --- | --- | --- | --- | --- | 
| Batch size | batchSize | Number of samples processed before updating model parameters | 8 | 192 | 8 | 
| Steps | stepCount | Number of times the model is exposed to each batch | 10 | 40,000 | N/A | 
| Learning rate | learningRate | Rate at which model parameters are updated after each batch | 1.00E-7 | 1 | 1.00E-5 | 

## Amazon Titan Multimodal Embeddings G1 customization hyperparameters
<a name="cm-hp-titan-mm"></a>

The Amazon Titan Multimodal Embeddings G1 model supports the following hyperparameters for model customization. The number of epochs you specify increases your model customization cost by processing more tokens. Each epoch processes the entire training dataset once. For information about pricing, see [Amazon Bedrock pricing](https://aws.amazon.com/bedrock/pricing).

**Note**  
`epochCount` has no default value and must be specified. `epochCount` supports the value `Auto`. `Auto` prioritizes model performance over training cost by automatically determining a number based on the size of your dataset. Training job costs depend on the number that `Auto` determines. To understand how job cost is calculated and to see examples, see [Amazon Bedrock Pricing](https://aws.amazon.com/bedrock/pricing).


****  

| Hyperparameter (console) | Hyperparameter (API) | Definition | Type | Minimum | Maximum | Default | 
| --- | --- | --- | --- | --- | --- | --- | 
| Epochs | epochCount | The number of iterations through the entire training dataset | integer | 1 | 100 | N/A | 
| Batch size | batchSize | The number of samples processed before updating model parameters | integer | 256 | 9,216 | 576 | 
| Learning rate | learningRate | The rate at which model parameters are updated after each batch | float | 5.00E-8 | 1 | 5.00E-5 | 

## Anthropic Claude 3 model customization hyperparameters
<a name="cm-hp-anth-claude-3"></a>

Anthropic Claude 3 models support the following hyperparameters for model customization. The number of epochs you specify increases your model customization cost by processing more tokens. Each epoch processes the entire training dataset once. For information about pricing, see [Amazon Bedrock pricing](https://aws.amazon.com/bedrock/pricing).


****  

| Console Name | API Name | Definition | Default | Minimum | Maximum | 
| --- | --- | --- | --- | --- | --- | 
| Epoch count | epochCount | The maximum number of iterations through the entire training dataset | 2 | 1 | 10 | 
| Batch size  | batchSize | Number of samples processed before updating model parameters | 32 | 4 | 256 | 
| Learning rate multiplier | learningRateMultiplier | Multiplier that influences the learning rate at which model parameters are updated after each batch | 1 | 0.1 | 2 | 
| Early stopping threshold | earlyStoppingThreshold | Minimum improvement in validation loss required to prevent premature termination of the training process | 0.001 | 0 | 0.1 | 
| Early stopping patience | earlyStoppingPatience | Tolerance for stagnation in the validation loss metric before stopping the training process | 2 | 1 | 10 | 

## Cohere Command model customization hyperparameters
<a name="cm-hp-cohere-command"></a>

The Cohere Command and Cohere Command Light models support the following hyperparameters for model customization. The number of epochs you specify increases your model customization cost by processing more tokens. Each epoch processes the entire training dataset once. For information about pricing, see [Amazon Bedrock pricing](https://aws.amazon.com/bedrock/pricing). For more information, see [Customize your model to improve its performance for your use case](custom-models.md).

 For information about fine-tuning Cohere models, see the Cohere documentation at [https://docs.cohere.com/docs/fine-tuning](https://docs.cohere.com/docs/fine-tuning).

**Note**  
The `epochCount` quota is adjustable.


****  

| Hyperparameter (console) | Hyperparameter (API) | Definition | Type | Minimum | Maximum | Default | 
| --- | --- | --- | --- | --- | --- | --- | 
| Epochs | epochCount | The number of iterations through the entire training dataset | integer | 1 | 100 | 1 | 
| Batch size | batchSize | The number of samples processed before updating model parameters | integer | 8 | 8 (Command), 32 (Light) | 8 | 
| Learning rate | learningRate | The rate at which model parameters are updated after each batch. If you use a validation dataset, we recommend that you don't provide a value for learningRate. | float | 5.00E-6 | 0.1 | 1.00E-5 | 
| Early stopping threshold | earlyStoppingThreshold | The minimum improvement in loss required to prevent premature termination of the training process | float | 0 | 0.1 | 0.01 | 
| Early stopping patience | earlyStoppingPatience | The tolerance for stagnation in the loss metric before stopping the training process | integer | 1 | 10 | 6 | 
| Evaluation percentage | evalPercentage |  The percentage of the dataset allocated for model evaluation, if you don't provide a separate validation dataset  | float | 5 | 50 | 20 | 

## Meta Llama 3.1 model customization hyperparameters
<a name="cm-hp-meta-llama31"></a>

The Meta Llama 3.1 8B and 70B models support the following hyperparameters for model customization. The number of epochs you specify increases your model customization cost by processing more tokens. Each epoch processes the entire training dataset once. For information about pricing, see [Amazon Bedrock pricing](https://aws.amazon.com/bedrock/pricing). For more information, see [Customize your model to improve its performance for your use case](custom-models.md).

 For information about fine-tuning Meta Llama models, see the Meta documentation at [https://ai.meta.com/llama/get-started/#fine-tuning](https://ai.meta.com/llama/get-started/#fine-tuning).

**Note**  
The `epochCount` quota is adjustable.


****  

| Hyperparameter (console) | Hyperparameter (API) | Definition | Minimum | Maximum | Default | 
| --- | --- | --- | --- | --- | --- | 
| Epochs | epochCount | The number of iterations through the entire training dataset | 1 | 10 | 5 | 
| Batch size | batchSize | The number of samples processed before updating model parameters | 1 | 1 | 1 | 
| Learning rate | learningRate | The rate at which model parameters are updated after each batch | 5.00E-6 | 0.1 | 1.00E-4 | 

## Meta Llama 3.2 model customization hyperparameters
<a name="cm-hp-meta-llama32"></a>

The Meta Llama 3.2 1B, 3B, 11B, and 90B models support the following hyperparameters for model customization. The number of epochs you specify increases your model customization cost by processing more tokens. Each epoch processes the entire training dataset once. For information about pricing, see [Amazon Bedrock pricing](https://aws.amazon.com/bedrock/pricing). For more information, see [Customize your model to improve its performance for your use case](custom-models.md).

 For information about fine-tuning Meta Llama models, see the Meta documentation at [https://ai.meta.com/llama/get-started/#fine-tuning](https://ai.meta.com/llama/get-started/#fine-tuning).


****  

| Hyperparameter (console) | Hyperparameter (API) | Definition | Minimum | Maximum | Default | 
| --- | --- | --- | --- | --- | --- | 
| Epochs | epochCount | The number of iterations through the entire training dataset | 1 | 10 | 5 | 
| Batch size | batchSize | The number of samples processed before updating model parameters | 1 | 1 | 1 | 
| Learning rate | learningRate | The rate at which model parameters are updated after each batch | 5.00E-6 | 0.1 | 1.00E-4 | 

# Submit a model customization job for fine-tuning
<a name="model-customization-submit"></a>

You can create a custom model by using fine-tuning in the Amazon Bedrock console or API. You can also further fine-tune an existing custom model. The customization job can take several hours. The duration of the job depends on the size of the training data (number of records, input tokens, and output tokens), the number of epochs, and the batch size.

## Prerequisites
<a name="submit-model-customization-job-prerequisites"></a>
+ Create an AWS Identity and Access Management (IAM) service role to access the S3 bucket where you want to store your model customization training and validation data. You can create this role automatically by using the AWS Management Console or manually. For more information on the manual option, see [Create an IAM service role for model customization](custom-model-job-access-security.md#custom-model-job-service-role).
+ (Optional) Encrypt input and output data, your customization job, or inference requests made to custom models. For more information, see [Encryption of custom models](encryption-custom-job.md).
+ (Optional) Create a virtual private cloud (VPC) to protect your customization job. For more information, see [(Optional) Protect your model customization jobs using a VPC](custom-model-job-access-security.md#vpc-model-customization).

## Submit your job
<a name="submit-model-customization-job-how-to"></a>

Choose the tab for your preferred method, and then follow the steps:

------
#### [ Console ]

To submit a model customization job in the console, carry out the following steps.

1. Sign in to the AWS Management Console with an IAM identity that has permissions to use the Amazon Bedrock console. Then, open the Amazon Bedrock console at [https://console.aws.amazon.com/bedrock](https://console.aws.amazon.com/bedrock).

1. From the left navigation pane, choose **Custom models** under **Tune**.

1. In the **Models** tab, choose **Customize model** and then **Create Fine-tuning job**.

1. In the **Model details** section, do the following.

   1. Choose the model that you want to customize with your own data and give your resulting model a name. You can choose either a foundation model or a previously customized model (fine-tuned or distilled) as your base model.

   1. (Optional) By default, Amazon Bedrock encrypts your model with a key owned and managed by AWS. To use a [custom KMS key](encryption-custom-job.md), select **Model encryption** and choose a key.

   1. (Optional) To associate [tags](tagging.md) with the custom model, expand the **Tags** section and select **Add new tag**.

1. In the **Job configuration** section, enter a name for the job and optionally add any tags to associate with the job.

1. (Optional) To use a [virtual private cloud (VPC) to protect your training data and customization job](custom-model-job-access-security.md#vpc-model-customization), select a VPC that contains the input data and output data Amazon S3 locations, its subnets, and security groups in the **VPC settings** section.
**Note**  
If you include a VPC configuration, the console cannot create a new service role for the job. [Create a custom service role](model-customization-iam-role.md) and add permissions similar to the example described in [Attach VPC permissions to a model customization role](custom-model-job-access-security.md#vpc-data-access-role).

1. In the **Input data** section, select the S3 location of the training dataset file and, if applicable, the validation dataset file.

1. In the **Hyperparameters** section, input values for [hyperparameters](custom-models-hp.md) to use in training.

1. In the **Output data** section, enter the Amazon S3 location where Amazon Bedrock should save the output of the job. Amazon Bedrock stores the training loss metrics and validation loss metrics for each epoch in separate files in the location that you specify.

1. In the **Service access** section, select one of the following: 
   + **Use an existing service role** – Select a service role from the dropdown list. For more information on setting up a custom role with the appropriate permissions, see [Create a service role for model customization](model-customization-iam-role.md).
   + **Create and use a new service role** – Enter a name for the service role.

1. Choose **Fine-tune model** to begin the job.

------
#### [ API ]

**Request**

Send a [CreateModelCustomizationJob](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_CreateModelCustomizationJob.html) (see link for request and response formats and field details) request with an [Amazon Bedrock control plane endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#br-cp) to submit a model customization job. Minimally, you must provide the following fields.
+ `roleArn` – The ARN of the service role with permissions to customize models. Amazon Bedrock can automatically create a role with the appropriate permissions if you use the console, or you can create a custom role by following the steps at [Create a service role for model customization](model-customization-iam-role.md).
**Note**  
If you include a `vpcConfig` field, make sure that the role has the proper permissions to access the VPC. For an example, see [Attach VPC permissions to a model customization role](custom-model-job-access-security.md#vpc-data-access-role).
+ `baseModelIdentifier` – The [model ID](models-supported.md) or ARN of the foundation model or previously customized model (fine-tuned or distilled) to customize.
+ `customModelName` – The name to give the newly customized model.
+ `jobName` – The name to give the training job.
+ `hyperParameters` – [Hyperparameters](custom-models-hp.md) that affect the model customization process.
+ `trainingDataConfig` – An object containing the Amazon S3 URI of the training dataset. Depending on the customization method and model, you can also include a `validationDataConfig`. For more information about preparing the datasets, see [Prepare data for fine-tuning your models](model-customization-prepare.md).
+ `validationDataConfig` – An object containing the Amazon S3 URI of the validation dataset.
+ `outputDataConfig` – An object containing the Amazon S3 URI to write the output data to.

If you don't specify the `customizationType`, the model customization method defaults to `FINE_TUNING`.

To prevent the request from being processed more than once, include a `clientRequestToken`.

You can include the following optional fields for extra configurations.
+ `jobTags` and/or `customModelTags` – Associate [tags](tagging.md) with the customization job or resulting custom model.
+ `customModelKmsKeyId` – Include a [custom KMS key](encryption-custom-job.md) to encrypt your custom model.
+ `vpcConfig` – Include the configuration for a [virtual private cloud (VPC) to protect your training data and customization job](custom-model-job-access-security.md#vpc-model-customization).

**Response**

The response returns a `jobArn` that you can use to [monitor](model-customization-monitor.md) or [stop](model-customization-stop.md) the job.

[See code examples](model-customization-code-samples.md)
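The following is a minimal sketch of assembling that request in Python. The ARNs, bucket names, and model ID are placeholders for your own values, and the helper name is illustrative:

```python
def build_customization_request(job_name, model_name, role_arn, base_model,
                                train_uri, output_uri, hyperparameters):
    """Assemble the minimal fields for a CreateModelCustomizationJob request."""
    return {
        "jobName": job_name,
        "customModelName": model_name,
        "roleArn": role_arn,
        "baseModelIdentifier": base_model,
        "customizationType": "FINE_TUNING",          # the default when omitted
        "clientRequestToken": job_name + "-token",   # idempotency guard
        "hyperParameters": hyperparameters,
        "trainingDataConfig": {"s3Uri": train_uri},
        "outputDataConfig": {"s3Uri": output_uri},
    }

request = build_customization_request(
    "my-finetune-job",
    "my-custom-model",
    "arn:aws:iam::111122223333:role/MyCustomizationRole",
    "amazon.titan-text-express-v1",
    "s3://amzn-s3-demo-bucket/train.jsonl",
    "s3://amzn-s3-demo-bucket/output/",
    {"epochCount": "2", "learningRate": "0.00001"},
)
# Submit with: boto3.client("bedrock").create_model_customization_job(**request)
```

The response to the submitted request contains the `jobArn` you can use to monitor or stop the job.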

------

# Customize a model with reinforcement fine-tuning in Amazon Bedrock
<a name="reinforcement-fine-tuning"></a>

Reinforcement fine-tuning is a model customization technique in Amazon Bedrock that improves foundation model performance by teaching models what constitutes a "good" response through feedback signals called rewards. Unlike traditional fine-tuning methods that depend on labeled datasets, reinforcement fine-tuning uses a feedback-driven approach that iteratively optimizes the model to maximize these rewards.

## Reinforcement fine-tuning applications and scenarios
<a name="reinforcement-fine-tuning-when"></a>

Use reinforcement fine-tuning when you can define clear, measurable success criteria for evaluating response quality. Reinforcement fine-tuning excels in domains where output quality can be objectively measured, especially when multiple valid responses exist or when optimal responses are difficult to define upfront. It's ideal for:
+ Mathematical problem-solving and code generation (using rule-based graders for objective evaluation)
+ Scientific reasoning and structured data analysis
+ Subjective tasks like instruction following, content moderation, and creative writing (using AI-based judges)
+ Tasks requiring step-by-step reasoning or multi-turn problem solving
+ Scenarios with multiple valid solutions where some are clearly better than others
+ Applications balancing multiple objectives (accuracy, efficiency, style)
+ Applications requiring iterative improvement, personalization, or adherence to complex business rules
+ Scenarios where success can be verified programmatically through execution results or performance metrics
+ Cases where collecting high-quality labeled examples is expensive or impractical

## Benefits of reinforcement fine-tuning
<a name="reinforcement-fine-tuning-benefits"></a>
+ **Improved model performance** – Reinforcement fine-tuning improves model accuracy by up to 66% on average compared to base models. This enables optimization for price and performance by fine-tuning smaller, faster, and more efficient model variants.
+ **Ease of use** – Amazon Bedrock automates the complexity of reinforcement fine-tuning, making it accessible to developers building AI applications. You can fine-tune models using your uploaded datasets or existing API invocation logs. You can define reward functions that grade model outputs with custom code using Lambda or model-as-a-judge grader, with built-in templates that help with quick setup.
+ **Security and compliance** – Your proprietary data never leaves AWS's secure, governed environment during the customization process.

## Supported models for reinforcement fine-tuning
<a name="rft-supported-models"></a>

The following table shows the foundation models that you can customize with reinforcement fine-tuning:


**Supported models for reinforcement fine-tuning**  

| Provider | Model | Model ID | Region name | Region | 
| --- | --- | --- | --- | --- | 
| Amazon | Nova 2 Lite | amazon.nova-2-lite-v1:0:256k |  US East (N. Virginia)  |  us-east-1  | 
| OpenAI | gpt-oss-20B | openai.gpt-oss-20b | US West (Oregon) | us-west-2 | 
| Qwen | Qwen3 32B | qwen.qwen3-32b | US West (Oregon) | us-west-2 | 

## How reinforcement fine-tuning works
<a name="rft-how-it-works"></a>

Amazon Bedrock fully automates the reinforcement fine-tuning workflow. The model receives prompts from your training dataset and generates several responses per prompt. These responses are then scored by a reward function. Amazon Bedrock uses the prompt-response pairs with scores to train the model through policy-based learning using Group Relative Policy Optimization (GRPO). The training loop continues until it reaches the end of your training data or you stop the job at a chosen checkpoint, producing a model optimized for the metric that matters to you.
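The group-relative scoring at the heart of GRPO can be illustrated with a short sketch. This shows only the idea of normalizing rewards within a group of responses to the same prompt, not Amazon Bedrock's internal implementation:

```python
from statistics import mean, stdev

def group_relative_advantages(rewards):
    """For one prompt's group of sampled responses, score each response
    relative to the group: (reward - group mean) / group std-dev."""
    mu = mean(rewards)
    sigma = stdev(rewards) or 1.0  # avoid division by zero for uniform groups
    return [(r - mu) / sigma for r in rewards]

# Four responses to the same prompt, scored by the reward function:
advantages = group_relative_advantages([0.2, 0.9, 0.5, 0.4])
# Responses scoring above the group mean get positive advantages
# and are reinforced; those below the mean are discouraged.
```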

## Reinforcement fine-tuning best practices
<a name="rft-best-practices"></a>
+ **Start small** – Begin with 100-200 examples, validate reward function correctness, and scale gradually based on results
+ **Pre fine-tuning evaluation** – Test baseline model performance before reinforcement fine-tuning. If rewards are consistently 0 percent, use supervised fine-tuning first to establish basic capabilities. If rewards are greater than 95 percent, reinforcement fine-tuning might be unnecessary
+ **Monitor training** – Track average reward scores and distribution. Watch for overfitting (training rewards increase while validation rewards decrease). Look for concerning patterns such as rewards plateauing below 0.15, increasing reward variance over time, and declining validation performance
+ **Optimize reward functions** – Execute within seconds (not minutes), minimize external API calls, use efficient algorithms, implement proper error handling, and take advantage of Lambda's parallel scaling
+ **Iteration strategy** – If rewards aren't improving, adjust reward function design, increase dataset diversity, add more representative examples, and verify reward signals are clear and consistent
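A reward function that follows these practices stays fast, deterministic, and robust to malformed output. The following Lambda handler is an illustrative sketch only; the event shape and the scoring rule are assumptions, and a real grader uses whatever payload contract you configure for your job:

```python
import json

def lambda_handler(event, context):
    """Illustrative reward function: score 1.0 when the model's output is
    valid JSON containing a required "answer" key, 0.5 for valid JSON
    without it, and 0.0 otherwise. The event shape here is an assumption."""
    scores = []
    for completion in event.get("completions", []):
        try:
            parsed = json.loads(completion["text"])
            scores.append(1.0 if "answer" in parsed else 0.5)
        except (json.JSONDecodeError, KeyError, TypeError):
            scores.append(0.0)  # malformed output earns no reward
    return {"scores": scores}
```

Scoring logic like this runs in milliseconds with no external calls, implements explicit error handling, and returns a graded (rather than binary) signal, which gives the training loop more to learn from.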

**Topics**
+ [Reinforcement fine-tuning applications and scenarios](#reinforcement-fine-tuning-when)
+ [Benefits of reinforcement fine-tuning](#reinforcement-fine-tuning-benefits)
+ [Supported models for reinforcement fine-tuning](#rft-supported-models)
+ [How reinforcement fine-tuning works](#rft-how-it-works)
+ [Reinforcement fine-tuning best practices](#rft-best-practices)
+ [Fine-tune Amazon Nova models with reinforcement fine-tuning](rft-nova-models.md)
+ [Fine-tune open-weight models using OpenAI-compatible APIs](fine-tuning-openai-apis.md)
+ [Evaluate your RFT model](rft-evaluate-model.md)

# Fine-tune Amazon Nova models with reinforcement fine-tuning
<a name="rft-nova-models"></a>

Before fine-tuning, make sure that you have the prerequisites in place, because Amazon Bedrock needs specific permissions to create and manage the fine-tuning process. For comprehensive security and permissions information, see [Access and security for Amazon Nova models](rft-access-security.md).

Run reinforcement fine-tuning for Amazon Nova models in 5 steps:

1. **Provide Training Dataset** – Upload prompts in the required format (for example, JSONL) as the reinforcement fine-tuning training dataset. For more information, see [Prepare data for Amazon Nova models](rft-prepare-data.md).

1. **Configure Reward Function (grader)** – Define a grader to score model responses based on correctness, structure, tone, or other objectives. The reward function can be executed using Lambda to compute objective scores. You can also choose a model as a judge (via console) and grade responses based on criteria and principles you configure (the console converts these into Lambda functions automatically). For more information, see [Setting up reward functions for Amazon Nova models](reward-functions.md).

1. **Submit reinforcement fine-tuning Job** – Launch the reinforcement fine-tuning job by specifying base model, dataset, reward function, and other optional settings such as hyperparameters. For more information, see [Create and manage fine-tuning jobs for Amazon Nova models](rft-submit-job.md).

1. **Monitor Training** – Track job status, reward metrics, and training progress until completion. For more information, see [Monitor your RFT training job](rft-submit-job.md#rft-monitor-job).

1. **Use Fine-Tuned Model** – After job completion, deploy the resulting RFT model with one click for on-demand inference. You can also use Provisioned Throughput for mission-critical workloads that require consistent performance. See [Set up inference for a custom model](model-customization-use.md). Use **Test in Playground** to evaluate and compare responses with the base model.

**Important**  
You can provide a maximum of 20,000 prompts to Amazon Bedrock for reinforcement fine-tuning a model.

## Supported Nova models
<a name="rft-nova-supported-models"></a>

The following table shows the Amazon Nova models that you can customize with reinforcement fine-tuning:

**Note**  
For information about additional supported models including open-weight models, see [Fine-tune open-weight models using OpenAI-compatible APIs](fine-tuning-openai-apis.md).


**Supported models for reinforcement fine-tuning**  

| Provider | Model | Model ID | Single-region model support | 
| --- | --- | --- | --- | 
| Amazon | Nova 2 Lite | amazon.nova-2-lite-v1:0:256k | us-east-1 | 

# Access and security for Amazon Nova models
<a name="rft-access-security"></a>

Before you begin reinforcement fine-tuning, make sure that you understand what kind of access Amazon Bedrock needs for RFT-specific operations. RFT requires additional permissions beyond standard fine-tuning due to its reward function execution capabilities.

For basic model customization security setup including trust relationships, Amazon S3 permissions, and KMS encryption, see [Create an IAM service role for model customization](custom-model-job-access-security.md#custom-model-job-service-role).

## Prerequisites
<a name="rft-access-prerequisites"></a>

Before adding RFT-specific IAM permissions, you must set up the following for your IAM service role:
+ [Trust relationship](custom-model-job-access-security.md#custom-model-job-service-role-trust-relationship)
+ [Permissions to access training and validation files and to write output files in S3](custom-model-job-access-security.md#custom-model-job-service-role-s3-permissions)

## RFT-specific IAM permissions
<a name="rft-iam-permissions"></a>

Add these permissions to your existing model customization service role for RFT functionality.

### Lambda permissions for reward functions
<a name="rft-lambda-permissions"></a>

You must add Lambda invocation permissions. The following shows an example policy you can use:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "lambda:InvokeFunction"
            ],
            "Resource": [
                "arn:aws:lambda:*:*:function:reward-function-name"
            ]
        }
    ]
}
```
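Before attaching a statement like this, it can help to build and inspect the policy document programmatically. The following is a minimal sketch; the function name, account ID, and `reward-function-name` ARN are placeholders you would replace with your own:

```python
import json

def lambda_invoke_policy(function_arns):
    """Build an IAM policy document that allows invoking only the
    listed reward-function Lambdas (least-privilege scoping)."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["lambda:InvokeFunction"],
                "Resource": list(function_arns),
            }
        ],
    }

# Scope the policy to a single (placeholder) reward function ARN.
policy = lambda_invoke_policy(
    ["arn:aws:lambda:us-east-1:111122223333:function:reward-function-name"]
)
print(json.dumps(policy, indent=2))
```

You could then attach the serialized document to your model customization service role, for example with the IAM `PutRolePolicy` operation.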

### Invocation log access
<a name="rft-api-log-permissions"></a>

To use existing Amazon Bedrock model invocation logs as training data, add permissions to access your Amazon S3 bucket where invocation logs are stored. 

You need to provide Amazon S3 bucket access permissions for the input bucket. The following shows an example policy you can use:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::s3-invocation-logs-bucket",
                "arn:aws:s3:::s3-invocation-logs-bucket/*"
            ]
        }
    ]
}
```

For security setup including basic IAM roles, Amazon S3 permissions, and encryption, see [Create an IAM service role for model customization](custom-model-job-access-security.md#custom-model-job-service-role).

## Grader Lambda function permissions for RLAIF
<a name="rft-grader-lambda-permissions"></a>

If you create your own Lambda function for Reinforcement Learning from AI Feedback (RLAIF) reward functions, you need to add specific permissions to the Lambda execution role.

### Bedrock permissions for LLM judges
<a name="rft-bedrock-permissions"></a>

For LLM-as-Judge reward functions (RLAIF), add permissions to invoke foundation models. The following shows an example policy you can use for your Lambda execution role.

**Note**  
Only add these permissions to your Lambda execution role if you create your own Lambda function. The console handles this automatically when it creates Lambda functions for you.

The following is an example policy for Amazon Bedrock LLM-as-Judge invocation using foundation models:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "bedrock:InvokeModel"
            ],
            "Resource": [
                "arn:aws:bedrock:*:*:foundation-model/*"
            ]
        }
    ]
}
```

The following is an example policy for Amazon Bedrock LLM-as-Judge invocation using an inference profile:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "bedrock:InvokeModel"
            ],
            "Resource": [
                "arn:aws:bedrock:us-east-1::foundation-model/amazon.nova-premier-v1:0",
                "arn:aws:bedrock:us-east-2::foundation-model/amazon.nova-premier-v1:0",
                "arn:aws:bedrock:us-west-2::foundation-model/amazon.nova-premier-v1:0"
            ],
            "Condition": {
                "StringLike": {
                    "bedrock:InferenceProfileArn": "arn:aws:bedrock:us-east-1:111122223333:inference-profile/us.amazon.nova-premier-v1:0"
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": [
                "bedrock:InvokeModel"
            ],
            "Resource": [
                "arn:aws:bedrock:us-east-1:111122223333:inference-profile/us.amazon.nova-premier-v1:0"
            ]
        }
    ]
}
```
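Inside such a Lambda, the LLM-as-Judge flow typically builds a grading prompt, sends it to the judge model, and parses a numeric score from the reply. The following pure-Python sketch covers the prompt-building and parsing halves; the prompt wording and 0–10 scale are illustrative assumptions, and the actual `bedrock-runtime` model invocation is elided:

```python
import re

def build_judge_prompt(question, answer, criteria):
    """Compose a grading prompt for an LLM judge (format is illustrative)."""
    return (
        f"Rate the following answer from 0 to 10 against these criteria: {criteria}\n\n"
        f"Question: {question}\nAnswer: {answer}\n\n"
        "Reply with only the number."
    )

def parse_judge_score(model_reply, scale=10.0):
    """Extract the first number from the judge's reply, normalized and
    clamped to [0, 1]; return 0.0 if no number is found."""
    match = re.search(r"\d+(\.\d+)?", model_reply)
    if match is None:
        return 0.0
    return max(0.0, min(1.0, float(match.group()) / scale))

# The Lambda would send build_judge_prompt(...) to the judge model
# (via the bedrock-runtime InvokeModel or Converse operation, which
# requires the permissions shown above), then run parse_judge_score
# on the text the model returns.
```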

For information about inference profile prerequisites, see [Prerequisites for inference profiles](https://docs.aws.amazon.com/bedrock/latest/userguide/inference-profiles-prereq.html).

# Prepare data for Amazon Nova models
<a name="rft-prepare-data"></a>

When you fine-tune an Amazon Nova model with reinforcement fine-tuning, you can bring your own prompts or use existing Amazon Bedrock API invocation logs as training data.

## Training data requirements and sources
<a name="rft-data-source-options"></a>

You can provide training data through one of the following options:

**Note**  
We only support the OpenAI chat completion format.

### Option 1: Provide your own prompts
<a name="w2aac15c25c17c15b5b7b1"></a>

Collect your prompts and store them in `.jsonl` file format. You can upload custom datasets in JSONL format or select existing datasets from Amazon S3. Each record in the JSONL file must use the OpenAI chat completion format with the following structure:
+ `messages`: Include the system, user, or assistant roles containing the input prompt provided to the model.
+ `reference_answer`: Contains the expected output or evaluation criteria that your reward function uses to score the model's response. It is not limited to structured outputs; it can contain any format that helps your reward function evaluate quality.
+ (Optional) You can add fields that your grader Lambda uses for grading.

**Requirements:**
+ JSONL format with prompts in OpenAI chat completion format (one prompt per line)
+ A minimum of 100 records in the training dataset
+ Amazon Bedrock automatically validates training dataset format
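You can pre-check your records locally before uploading. The following is a rough sketch of the checks listed above (Amazon Bedrock still performs its own validation, which may be stricter):

```python
import json

def validate_rft_record(line):
    """Return a list of problems with one JSONL training record
    (an empty list means the record passed these basic checks)."""
    problems = []
    try:
        record = json.loads(line)
    except json.JSONDecodeError:
        return ["line is not valid JSON"]
    messages = record.get("messages")
    if not isinstance(messages, list) or not messages:
        problems.append("missing or empty 'messages' list")
    else:
        for msg in messages:
            if msg.get("role") not in ("system", "user", "assistant"):
                problems.append(f"unexpected role: {msg.get('role')!r}")
            if "content" not in msg:
                problems.append("message missing 'content'")
    if "reference_answer" not in record:
        problems.append("missing 'reference_answer'")
    return problems

def validate_dataset(lines, min_records=100):
    """Check every record plus the minimum-size requirement."""
    issues = {i: p for i, line in enumerate(lines) if (p := validate_rft_record(line))}
    if len(lines) < min_records:
        issues["dataset"] = [f"only {len(lines)} records; at least {min_records} required"]
    return issues
```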

------
#### [ Example: General question-answering ]

```
{
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful assistant"
    },
    {
      "role": "user",
      "content": "What is machine learning?"
    }
  ],
  "reference_answer": "Machine learning is a subset of artificial intelligence that enables computers to learn and make decisions from data without being explicitly programmed."
}
```

------
#### [ Example: Math problem ]

```
{
  "id": "sample-001",
  "messages": [
    {
      "role": "system",
      "content": "You are a math tutor"
    },
    {
      "role": "user",
      "content": "Solve: 2x + 5 = 13"
    }
  ],
  "reference_answer": {
    "solution": "x = 4",
    "steps": ["2x = 13 - 5", "2x = 8", "x = 4"]
  }
}
```

------

### Option 2: Use invocation logs
<a name="w2aac15c25c17c15b5b7b3"></a>

When you create a reinforcement fine-tuning job, you can have Amazon Bedrock use existing invocation logs from your S3 bucket as training data. For Amazon Bedrock, an invocation log is a detailed record of model invocations.

You can use Invoke and Converse API invocation logs stored in your Amazon S3 bucket for training.

**Requirements:**
+ API logging must be enabled for your Amazon Bedrock usage
+ Logs must be in a supported format (Amazon Bedrock Invoke/Converse API)
+ A minimum of 100 prompt examples

To use invocation logs for reinforcement fine-tuning, turn on model invocation logging, use one of the model invocation operations, and make sure that you've set up an Amazon S3 bucket as the destination for the logs. For more information about setting up the invocation logs, see [Monitor model invocation using CloudWatch Logs and Amazon S3](https://docs.aws.amazon.com/bedrock/latest/userguide/model-invocation-logging.html).

Before you can start the reinforcement fine-tuning job with invocation logs from an S3 bucket as input, you must give Amazon Bedrock permissions to access the logs in the S3 bucket. For more information, see [Model customization access and security](custom-model-job-access-security.md).

You can optionally add request metadata to the prompt-response pairs in the invocation log using one of the model invocation operations and then later use it to filter the logs. Amazon Bedrock can use the filtered logs to fine-tune the model.

#### Add request metadata to prompts and responses in your invocation logs
<a name="rft-request-metadata"></a>

With invocation logs, you can use the request metadata attached to each log entry to identify which training dataset prompts Amazon Bedrock should use for reinforcement fine-tuning.

The following is an example of an invocation log from an `InvokeModel` call with the `requestMetadata`:

```
{
    "schemaType": "ModelInvocationLog",
    "schemaVersion": "1.0",
    "timestamp": "2024-09-11T17:16:18Z",
    "accountId": "XXXXX",
    "identity": {
        "arn": "arn:aws:sts::XXXXXXX:assumed-role/Admin/XXXXXX"
    },
    "region": "us-east-1",
    "requestId": "7ee514f3-9eff-467c-af75-8fd564ce58e5",
    "operation": "InvokeModel",
    "modelId": "meta.llama3-1-405b-instruct-v1:0",
    "input": {
        "inputContentType": "application/json",
        "inputBodyJson": {
            "prompt": "story of two dogs",
            "max_tokens_to_sample": 300
        },
        "inputTokenCount": 13
    },
    "output": {
        "outputContentType": "application/json",
        "outputBodyJson": {
            "type": "completion",
            "completion": " Here is a story about two dogs:\n\nRex and Charlie were best friends who lived next door to each other. Every morning, their owners would let them out into the backyard to play. Rex was a golden retriever with long, shaggy fur. He loved to run and chase sticks. Charlie was a corgi with short legs and a big smile. He was happy just sniffing the grass and rolling around. \n\nThe two dogs spent every day playing together. Rex would grab a stick and bring it over to Charlie, wagging his tail excitedly. Charlie would take the other end of the stick in his mouth and they'd have a friendly game of tug-of-war. After tiring themselves out, they'd flop down in the warm sunshine for a nap. \n\nAt lunchtime, their owners would call them in for food. Rex would gobble up his kibble hungrily while Charlie ate his at a more leisurely pace. After lunch, it was right back outside for more playtime. The afternoon sunbeams would light up their fur as they chased each other around the yard. \n\nWhen it started getting late, their owners called them in for the night. Rex and Charlie would head inside, tired but happy after another fun day of play. After slurping up fresh water from their bowls, they'd curl up on their beds, Rex's fluffy golden tail tucked over his nose and little",
            "stop_reason": "max_tokens",
            "stop": null
        },
        "outputTokenCount": 300
    },
    "requestMetadata": {
        "project": "CustomerService",
        "intent": "ComplaintResolution",
        "priority": "High"
    }
}
```

You can specify the invocation log as your input data source when you start a reinforcement fine-tuning job. You can start a reinforcement fine-tuning job through the Amazon Bedrock console, using the API, AWS CLI, or SDK.

##### Requirements for providing request metadata
<a name="rft-metadata-requirements"></a>

The request metadata must meet the following requirements:
+ Provide metadata in the JSON `key:value` format.
+ Each key and each value must be a string of at most 256 characters.
+ Provide a maximum of 16 key-value pairs.
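If you generate request metadata programmatically, you can check it against these limits before invoking the model. The following is a minimal local sketch of those checks:

```python
def validate_request_metadata(metadata):
    """Check a request metadata dict against the documented limits:
    at most 16 pairs, string keys/values of at most 256 characters.
    Returns a list of problems found (empty list = valid)."""
    problems = []
    if len(metadata) > 16:
        problems.append(f"{len(metadata)} pairs provided; maximum is 16")
    for key, value in metadata.items():
        if not (isinstance(key, str) and isinstance(value, str)):
            problems.append(f"{key!r}: keys and values must be strings")
        elif len(key) > 256 or len(value) > 256:
            problems.append(f"{key!r}: keys and values are limited to 256 characters")
    return problems
```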

##### Using request metadata filters
<a name="rft-metadata-filters"></a>

Once invocation logs with request metadata are available, you can apply filters based on the request metadata to selectively choose which prompts to include for fine-tuning the model. For example, you might want to include only those with `"project": "CustomerService"` and `"priority": "High"` request metadata.

To filter the logs on multiple request metadata keys, use a single Boolean operator, `AND` or `OR`; you cannot combine the two. To filter on a single request metadata key, use the `Equals` or `Not Equals` operator.
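The filtering behavior can be pictured with a small local sketch. Amazon Bedrock applies these filters server-side; the function below is only an illustration of `Equals` conditions combined with a single `AND` or `OR` operator:

```python
def matches(metadata, conditions, operator="AND"):
    """Apply Equals conditions to one log record's request metadata.
    conditions: dict of key -> expected value; operator: "AND" or "OR"."""
    results = [metadata.get(key) == value for key, value in conditions.items()]
    return all(results) if operator == "AND" else any(results)

logs = [
    {"project": "CustomerService", "priority": "High"},
    {"project": "CustomerService", "priority": "Low"},
    {"project": "Billing", "priority": "High"},
]
# AND semantics: both conditions must hold, so only the first record matches.
selected = [m for m in logs if matches(m, {"project": "CustomerService", "priority": "High"})]
```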

## Characteristics of effective training data
<a name="rft-data-characteristics"></a>

Effective RFT training data requires three key characteristics:
+ **Clarity and consistency** – Use clear, unambiguous prompts with consistent formatting. Avoid contradictory labels, ambiguous instructions, or conflicting reference answers that mislead training.
+ **Diversity** – Include varied input formats, edge cases, and difficulty levels that reflect production usage patterns across different user types and scenarios.
+ **Efficient reward functions** – Design functions that execute quickly (seconds, not minutes), parallelize with AWS Lambda, and return consistent scores for cost-effective training.

## Additional properties
<a name="rft-additional-properties"></a>

The RFT data format supports custom fields beyond the core schema requirements (`messages` and `reference_answer`). This flexibility allows you to add any additional data your reward function needs for proper evaluation.

**Note**  
You don't need to configure this in your recipe. The data format inherently supports additional fields. Simply include them in your training data JSON, and they will be passed to your reward function in the `metadata` field.

**Common additional properties**
+ `task_id` – Unique identifier for tracking
+ `difficulty_level` – Problem complexity indicator
+ `domain` – Subject area or category
+ `expected_reasoning_steps` – Number of steps in solution

These additional fields are passed to your reward function during evaluation, enabling sophisticated scoring logic tailored to your specific use case.

**Examples with additional properties**

------
#### [ Chemistry problem ]

```
{
  "id": "chem-001",
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful chemistry assistant"
    },
    {
      "role": "user",
      "content": "Predict hydrogen bond donors and acceptors for this SMILES: CCN(CC)CCC(=O)c1sc(N)nc1C"
    }
  ],
  "reference_answer": {
    "donor_bond_counts": 2,
    "acceptor_bond_counts": 4
  }
}
```

The `reference_answer` field contains the expected output or evaluation criteria that your reward function uses to score the model's response. It is not limited to structured outputs—it can contain any format that helps your reward function evaluate quality.

------
#### [ Math problem with metadata ]

```
{
  "messages": [
    {
      "role": "system",
      "content": "You are a math tutor"
    },
    {
      "role": "user",
      "content": "Solve: 2x + 5 = 13"
    }
  ],
  "reference_answer": {
    "solution": "x = 4",
    "steps": ["2x = 13 - 5", "2x = 8", "x = 4"]
  },
  "task_id": "algebra_001",
  "difficulty_level": "easy",
  "domain": "algebra",
  "expected_reasoning_steps": 3
}
```

------
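A grader can read these extra fields from the `metadata` it receives and factor them into scoring. The following sketch weights a raw correctness score by `difficulty_level`; the weighting scheme is purely hypothetical:

```python
# Hypothetical per-difficulty weights; tune these for your own task.
DIFFICULTY_WEIGHTS = {"easy": 1.0, "medium": 1.2, "hard": 1.5}

def weighted_score(base_score, metadata):
    """Scale a raw correctness score by the record's difficulty_level,
    defaulting to 1.0 when the field is absent. The result is clamped
    to [0, 1] so scaling stays stable and non-exploitable."""
    weight = DIFFICULTY_WEIGHTS.get(metadata.get("difficulty_level"), 1.0)
    return min(1.0, base_score * weight)
```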

# Setting up reward functions for Amazon Nova models
<a name="reward-functions"></a>

Reward functions evaluate response quality and provide feedback signals for model training. You can set up reward functions using custom Lambda functions or Amazon Bedrock-hosted foundation models as judges. Guided templates are available to simplify reward function creation for common tasks like instruction following and format validation. Choose the approach that matches your task requirements.

## Reinforcement Learning via Verifiable Rewards (RLVR)
<a name="rft-rlvr"></a>

RLVR optimizes models for objective tasks such as code generation or math reasoning using verifiable rule-based graders or ready-to-use templates.

You have two options for RLVR (Custom Code):

### Option 1: Use console-provided templates
<a name="w2aac15c25c17c17b5b7b1"></a>

Amazon Bedrock console provides sample templates for grader Lambda functions:
+ Mathematical reasoning with ground truth verification
+ Format validation and constraint checking
+ Generic grader Lambda template with boilerplate code

Follow the instructions in the provided template on the **Create RFT job** page in the [Amazon Bedrock console](https://console.aws.amazon.com/bedrock).

### Option 2: Bring your own Lambda function
<a name="w2aac15c25c17c17b5b7b3"></a>

Create custom reward functions using your own Lambda ARN for complex logic, external APIs, multi-step calculations, or combining multiple evaluation criteria.

**Note**  
If you bring your own Lambda function, keep the following in mind:  
Increase the Lambda timeout from the default of 3 seconds up to the maximum of 15 minutes for complex evaluations.
The Lambda execution role needs permissions to invoke models as described in [Access and security for Amazon Nova models](rft-access-security.md).

## Reinforcement Learning from AI Feedback (RLAIF)
<a name="rft-rlaif"></a>

RLAIF optimizes models for subjective tasks such as instruction following or chatbot interactions using AI-based judges with ready-to-use templates.

**For RLAIF (Model as Judge):**
+ Select an Amazon Bedrock hosted base Model as Judge
+ Configure instructions for evaluation
+ Define evaluation criteria and scoring guidelines

Available LLM-as-Judge prompt templates in the Amazon Bedrock console:
+ Instruction following (Judge model training)
+ Summarization (Multi-turn dialogs)
+ Reasoning evaluation (CoT for specialized domains)
+ RAG faithfulness (Context-grounded Q&A)

**Note**  
The console's **Model as Judge** option automatically converts your configuration into a Lambda function during training.

## Lambda function implementation details
<a name="rft-lambda-implementation"></a>

When implementing custom Lambda reward functions, your function must accept and return data in the following format.

------
#### [ Input structure ]

```
[{
  "id": "123",
  "messages": [
    {
      "role": "user",
      "content": "Do you have a dedicated security team?"
    },
    {
      "role": "assistant",
      "content": "As an AI developed by Amazon, I do not have a dedicated security team..."
    }
  ],
  "metadata": {
    "reference_answer": {
      "compliant": "No",
      "explanation": "As an AI developed by Company, I do not have a traditional security team..."
    },
    "my_key": "sample-001"
  }
}]
```

------
#### [ Output structure ]

```
[{
  "id": "123",
  "aggregate_reward_score": 0.85,
  "metrics_list": [
    {
      "name": "accuracy",
      "value": 0.9,
      "type": "Reward"
    },
    {
      "name": "policy_compliance",
      "value": 0.8,
      "type": "Metric"
    }
  ]
}]
```

------
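A minimal handler that follows these input and output structures might look like the following. The exact-substring scoring is only illustrative; substitute your own evaluation logic:

```python
def lambda_handler(event, context):
    """Grade a batch of records. For each record, score 1.0 if the last
    assistant message contains the reference answer's 'compliant' verdict,
    else 0.0 (an intentionally simple, illustrative check)."""
    results = []
    for record in event:
        # Find the most recent assistant turn (the model's response).
        response = next(
            (m["content"] for m in reversed(record["messages"]) if m["role"] == "assistant"),
            "",
        )
        reference = record.get("metadata", {}).get("reference_answer", {})
        verdict = str(reference.get("compliant", "")).lower()
        score = 1.0 if verdict and verdict in response.lower() else 0.0
        results.append({
            "id": record["id"],
            "aggregate_reward_score": score,
            "metrics_list": [{"name": "accuracy", "value": score, "type": "Reward"}],
        })
    return results
```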

**Design guidelines**
+ **Rank responses** – Give the best answer a clearly higher score
+ **Use consistent checks** – Evaluate task completion, format adherence, safety, and reasonable length
+ **Maintain stable scaling** – Keep scores normalized and non-exploitable

# Create and manage fine-tuning jobs for Amazon Nova models
<a name="rft-submit-job"></a>

You can create a reinforcement fine-tuning (RFT) job using the Amazon Bedrock console or API. The RFT job can take a few hours depending on the size of your training data, number of epochs, and complexity of your reward functions.

## Prerequisites
<a name="rft-prerequisites"></a>
+ Create an IAM service role with the required permissions. For comprehensive security and permissions information including RFT-specific permissions, see [Access and security for Amazon Nova models](rft-access-security.md).
+ (Optional) Encrypt input and output data, your RFT job, or inference requests made to custom models. For more information, see [ Encryption of custom models](https://docs.aws.amazon.com/bedrock/latest/userguide/encryption-custom-job.html).

## Create your RFT job
<a name="rft-submit-job-how-to"></a>

Choose the tab for your preferred method, and then follow the steps:

------
#### [ Console ]

To submit an RFT job in the console, carry out the following steps:

1. Open the Amazon Bedrock console and navigate to **Custom models** under **Tune**.

1. Choose **Create**, then **Create reinforcement fine-tuning job**.

1. In the **Model details** section, choose **Amazon Nova 2 Lite** as your base model.

1. In the **Customization details** section, enter the customization name.

1. In the **Training data** section, choose your data source: select from your available invocation logs stored in Amazon S3, select the Amazon S3 location of your training dataset file, or upload a file directly from your device.
**Note**  
Your training dataset should be in the OpenAI Chat Completions data format. If you provide invocation logs in the Amazon Bedrock invoke or converse format, Amazon Bedrock automatically converts them to the Chat Completions format.

1. In the **Reward function** section, set up your reward mechanism:
   + **Model as judge (RLAIF)** - Select a Bedrock hosted base model as judge and configure the instructions for evaluation. Use this for subjective tasks like content moderation.
**Note**  
The console's **Model as judge** option automatically converts your configuration into a Lambda function during training.
   + **Custom code (RLVR)** - Create custom reward functions using Python code executed through Lambda functions. Use this for objective tasks like code generation.

   For more information, see [Setting up reward functions for Amazon Nova models](reward-functions.md).

1. (Optional) In the **Hyperparameters** section, adjust training parameters or use default values.

1. In the **Output data** section, enter the Amazon S3 location where Amazon Bedrock should save job outputs.

1. In the **Role configuration** section, either choose an existing role from the dropdown list or enter a name for the service role to create.

1. (Optional) In the **Additional configuration** section, configure the validation data by pointing to an Amazon S3 bucket, KMS encryption settings, and job and model tags.

1. Choose **Create reinforcement fine-tuning job** to begin the job.

------
#### [ API ]

Send a CreateModelCustomizationJob request with `customizationType` set to `REINFORCEMENT_FINE_TUNING`.

**Required fields:** `roleArn`, `baseModelIdentifier`, `customModelName`, `jobName`, `trainingDataConfig`, `outputDataConfig`, `rftConfig`

**Example request:**

```
{
    "roleArn": "arn:aws:iam::123456789012:role/BedrockRFTRole",
    "baseModelIdentifier": "amazon.nova-2.0",
    "customModelName": "my-rft-model",
    "jobName": "my-rft-job",
    "customizationType": "REINFORCEMENT_FINE_TUNING",
    "trainingDataConfig": {
        "s3Uri": "s3://my-bucket/training-data.jsonl"
    },
    "customizationConfig": {
        "rftConfig" : {
            "graderConfig": {
                "lambdaGrader": {
                    "lambdaArn": "arn:aws:lambda:us-east-1:123456789012:function:function-name"
                }
            },
            "hyperParameters": {
                "batchSize": 64,
                "epochCount": 2,
                "evalInterval": 10,
                "inferenceMaxTokens": 8192,
                "learningRate": 0.00001,
                "maxPromptLength": 4096,
                "reasoningEffort": "high",
                "trainingSamplePerPrompt": 4
            }
        }
    },
    "outputDataConfig": {
        "s3Uri": "s3://my-bucket/rft-output/"
    }
}
```

**Python API sample request:**

```
import boto3

bedrock = boto3.client(service_name='bedrock')

# Set parameters
customizationType = "REINFORCEMENT_FINE_TUNING"
baseModelIdentifier = "arn:aws:bedrock:us-east-1::foundation-model/amazon.nova-2-lite-v1:0:256k"
roleArn = "${your-customization-role-arn}"
jobName = "MyFineTuningJob"
customModelName = "MyCustomModel"

customizationConfig = {
    'rftConfig' : {
        'graderConfig': {
            'lambdaGrader': {
                'lambdaArn': 'arn:aws:lambda:us-east-1:123456789012:function:function-name'
            }
        },
        'hyperParameters': {
            'batchSize': 64,
            'epochCount': 2,
            'evalInterval': 10,
            'inferenceMaxTokens': 8192,
            'learningRate': 0.00001,
            'maxPromptLength': 4096,
            'reasoningEffort': 'high',
            'trainingSamplePerPrompt': 4
        }
    }
}

trainingDataConfig = {"s3Uri": "s3://${training-bucket}/myInputData/train.jsonl"}
outputDataConfig = {"s3Uri": "s3://${output-bucket}/myOutputData"}

# Create job
response_ft = bedrock.create_model_customization_job(
    jobName=jobName, 
    customModelName=customModelName,
    roleArn=roleArn,
    baseModelIdentifier=baseModelIdentifier,
    customizationConfig=customizationConfig,
    trainingDataConfig=trainingDataConfig,
    outputDataConfig=outputDataConfig,
    customizationType=customizationType
)

jobArn = response_ft['jobArn']
```

------

## Monitor your RFT training job
<a name="rft-monitor-job"></a>

Amazon Bedrock provides real-time monitoring with visual graphs and metrics during RFT training. These metrics help you understand whether the model converges properly and if the reward function effectively guides the learning process.

### Job status tracking
<a name="rft-job-status"></a>

You can monitor your RFT job status through the validation and training phases in the Amazon Bedrock console.

**Completion indicators:**
+ Job status changes to **Completed** when training completes successfully
+ Custom model ARN becomes available for deployment
+ Training metrics reach convergence thresholds

### Real-time training metrics
<a name="rft-real-time-metrics"></a>

Amazon Bedrock provides real-time monitoring during RFT training with visual graphs displaying training and validation metrics.

#### Core training metrics
<a name="rft-core-metrics"></a>
+ **Training loss** - Measures how well the model is learning from the training data
+ **Training reward statistics** - Shows reward scores assigned by your reward functions
+ **Reward margin** - Measures the difference between good and bad response rewards
+ **Accuracy on training and validation sets** - Shows model performance on both the training and held-out data

**Detailed metric categories**
+ **Reward metrics** – `critic/rewards/mean`, `critic/rewards/max`, `critic/rewards/min` (reward distribution), and `val-score/rewards/mean@1` (validation rewards)
+ **Model behavior** – `actor/entropy` (policy variation; higher equals more exploratory)
+ **Training health** – `actor/pg_loss` (policy gradient loss), `actor/pg_clipfrac` (frequency of clipped updates), and `actor/grad_norm` (gradient magnitude)
+ **Response characteristics** – `prompt_length/mean`, `prompt_length/max`, `prompt_length/min` (input token statistics), `response_length/mean`, `response_length/max`, `response_length/min` (output token statistics), and `response/aborted_ratio` (incomplete generation rate; 0 equals all completed)
+ **Performance** – `perf/throughput` (training throughput), `perf/time_per_step` (time per training step), and `timing_per_token_ms/*` (per-token processing times)
+ **Resource usage** – `perf/max_memory_allocated_gb`, `perf/max_memory_reserved_gb` (GPU memory), and `perf/cpu_memory_used_gb` (CPU memory)

#### Training progress visualization
<a name="rft-progress-visualization"></a>

The console displays interactive graphs that update in real time as your RFT job progresses. These visualizations can help you:
+ Track convergence toward optimal performance
+ Identify potential training issues early
+ Determine optimal stopping points
+ Compare performance across different epochs

## Set up inference
<a name="rft-setup-inference"></a>

After job completion, deploy the RFT model for on-demand inference or use Provisioned Throughput for consistent performance. For setting up inference, see [Set up inference for a custom model](model-customization-use.md).

Use **Test in Playground** to evaluate and compare responses with the base model. For evaluating your completed RFT model, see [Evaluate your RFT model](rft-evaluate-model.md).

# Fine-tune open-weight models using OpenAI-compatible APIs
<a name="fine-tuning-openai-apis"></a>

Amazon Bedrock provides OpenAI-compatible API endpoints for fine-tuning foundation models. These endpoints allow you to use familiar OpenAI SDKs and tools to create, monitor, and manage fine-tuning jobs with Amazon Bedrock models. This page highlights using these APIs for reinforcement fine-tuning.

## Key capabilities
<a name="fine-tuning-openai-key-capabilities"></a>
+ **Upload training files** – Use the Files API to upload and manage training data for fine-tuning jobs
+ **Create fine-tuning jobs** – Start fine-tuning jobs with custom training data and reward functions
+ **List and retrieve jobs** – View all fine-tuning jobs and get detailed information about specific jobs
+ **Monitor job events** – Track fine-tuning progress through detailed event logs
+ **Access checkpoints** – Retrieve intermediate model checkpoints created during training
+ **Immediate inference** – After fine-tuning completes, use the resulting fine-tuned model for on-demand inference through Amazon Bedrock's OpenAI-compatible APIs (Responses/chat completions API) without additional deployment steps
+ **Easy migration** – Compatible with existing OpenAI SDK codebases

## Reinforcement fine-tuning workflow for open-weight models
<a name="fine-tuning-openai-workflow"></a>

Before fine-tuning, make sure that you meet the prerequisites; Amazon Bedrock needs specific permissions to create and manage the fine-tuning process. For comprehensive security and permissions information, see [Access and security for open-weight models](rft-open-weight-access-security.md).

Run reinforcement fine-tuning for open-weight models in 5 steps:

1. **Upload Training Dataset** – Use the Files API to upload prompts in the required JSONL format with purpose "fine-tune" as the reinforcement fine-tuning training dataset. For more information, see [Prepare data for open-weight models](rft-prepare-data-open-weight.md).

1. **Configure Reward Function** – Define a grader to score model responses based on correctness, structure, tone, or other objectives using Lambda functions. For more information, see [Setting up reward functions for open-weight models](reward-functions-open-weight.md).

1. **Create Fine-tuning Job** – Launch the reinforcement fine-tuning job using the OpenAI-compatible API by specifying base model, dataset, reward function, and other optional settings such as hyperparameters. For more information, see [Create fine-tuning job](fine-tuning-openai-job-create.md#fine-tuning-openai-create-job).

1. **Monitor Training Progress** – Track job status, events, and training metrics using the fine-tuning jobs APIs. For more information, see [List fine-tuning events](fine-tuning-openai-job-create.md#fine-tuning-openai-list-events). Access intermediate model checkpoints to evaluate performance at different training stages, see [List fine-tuning checkpoints](fine-tuning-openai-job-create.md#fine-tuning-openai-list-checkpoints).

1. **Run Inference** – Use the fine-tuned model ID directly for inference through Amazon Bedrock's OpenAI-compatible Responses or Chat Completions APIs. For more information, see [Run inference with fine-tuned model](fine-tuning-openai-job-create.md#fine-tuning-openai-inference).

## Supported regions and endpoints
<a name="fine-tuning-openai-supported-regions"></a>

The following table shows the foundation models and regions that support OpenAI-compatible fine-tuning APIs:


**Supported models and regions for OpenAI-compatible fine-tuning APIs**  

| Provider | Model | Model ID | Region name | Region | Endpoint | 
| --- | --- | --- | --- | --- | --- | 
| OpenAI | gpt-oss-20b | openai.gpt-oss-20b | US West (Oregon) | us-west-2 | bedrock-mantle.us-west-2.api.aws | 
| Qwen | Qwen3 32B | qwen.qwen3-32b | US West (Oregon) | us-west-2 | bedrock-mantle.us-west-2.api.aws | 

# Access and security for open-weight models
<a name="rft-open-weight-access-security"></a>

Before you begin reinforcement fine-tuning (RFT), make sure that you understand what kind of access Amazon Bedrock needs for RFT-specific operations. RFT requires additional permissions beyond standard fine-tuning due to its reward function execution capabilities.

## Prerequisites
<a name="fine-tuning-openai-prereq"></a>

Before using Amazon Bedrock's OpenAI-compatible fine-tuning APIs, ensure you have the following:

1. An AWS account with appropriate permissions to access Amazon Bedrock

1. **Authentication** – You can authenticate using:
   + Amazon Bedrock API key (required for OpenAI SDK and available for HTTP requests)
   + AWS credentials (supported for HTTP requests)
**Note**  
If you are using Amazon Bedrock short-term/long-term API keys, then make sure that your role has access to the following IAM policy permissions: `AmazonBedrockMantleFullAccess` and [AWSLambdaRole](https://docs.aws.amazon.com/bedrock/latest/ug/rft-open-weight-access-security#openai-fine-tuning-lambda-permissions).

1. **OpenAI SDK (optional)** – Install the OpenAI Python SDK if using SDK-based requests.

1. **Environment variables** – Set the following environment variables:
   + `OPENAI_API_KEY` – Set to your Amazon Bedrock API key
   + `OPENAI_BASE_URL` – Set to the Amazon Bedrock endpoint for your region (for example, `https://bedrock-mantle.us-west-2.api.aws/v1`)

   For more information, see [Responses API](bedrock-mantle.md#bedrock-mantle-responses).

1. **Training data** formatted as JSONL files with the purpose `fine-tune`. For more information, see [Prepare data for open-weight models](rft-prepare-data-open-weight.md).

## Lambda permissions for reward functions
<a name="openai-fine-tuning-lambda-permissions"></a>

You must add Lambda invocation permissions. The following shows an example policy you can use:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "lambda:InvokeFunction"
            ],
            "Resource": [
                "arn:aws:lambda:*:*:function:reward-function-name"
            ]
        }
    ]
}
```

You can also use Amazon Bedrock hosted models as judges in your reward functions. To do so, add permissions to invoke foundation models to the Lambda execution role. For example, you can attach the [AmazonBedrockLimitedAccess](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonBedrockLimitedAccess.html) managed policy to the role.

The following example policy allows invoking Amazon Bedrock foundation models as a judge through the Invoke API:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "bedrock:InvokeModel"
            ],
            "Resource": [
                "arn:aws:bedrock:*:*:foundation-model/*"
            ]
        }
    ]
}
```
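Within the grader Lambda, the call to a judge model can be sketched as follows. This is a minimal sketch: the prompt wording, the request body shape, and the helper name are assumptions that you should adapt to the request schema of the judge model you choose.

```python
import json

def build_judge_request(model_response, reference_answer):
    """Build an illustrative request body for scoring a response with a judge model.

    The prompt wording and body fields below are assumptions; match them to the
    request schema of the judge model you choose.
    """
    prompt = (
        "Score the following response from 0 to 1 against the reference answer.\n"
        f"Response: {model_response}\n"
        f"Reference: {reference_answer}\n"
        "Reply with only the numeric score."
    )
    return json.dumps({
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 10,
    })

# Inside the Lambda handler, a body like this would be passed to the Bedrock
# runtime, for example:
# boto3.client("bedrock-runtime").invoke_model(modelId=..., body=body)
body = build_judge_request("Paris is the capital of France.", "Paris")
print(body)
```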

# Prepare data for open-weight models
<a name="rft-prepare-data-open-weight"></a>

When you fine-tune open-weight models with reinforcement fine-tuning using OpenAI-compatible APIs, provide training data by bringing your own prompts in JSONL format with the purpose `fine-tune`.

## Training data format and requirements
<a name="rft-data-format-open-weight"></a>

Training data must follow the OpenAI chat completions format and contain between 100 and 20,000 examples. Each training example contains:
+ `messages`: The input prompt provided to the model, as a list of messages with `system`, `user`, or `assistant` roles.
+ `reference_answer`: The expected output or evaluation criteria that your reward function uses to score the model's response. It is not limited to structured outputs; it can contain any format that helps your reward function evaluate quality.
+ (Optional) Any additional fields that your grader Lambda function uses for grading.

**Requirements:**
+ JSONL format with prompts in OpenAI chat completion format (one prompt per line)
+ Purpose must be set to `fine-tune`
+ A minimum of 100 records in training dataset
+ Amazon Bedrock automatically validates training dataset format
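As a sketch, a quick local check of these requirements before uploading might look like this (the helper name is illustrative):

```python
import json

def validate_training_file(path):
    """Lightly check a JSONL training file against the requirements above."""
    records = []
    with open(path) as f:
        for line_number, line in enumerate(f, start=1):
            record = json.loads(line)  # fails if a line is not valid JSON
            assert "messages" in record, f"line {line_number}: missing 'messages'"
            assert "reference_answer" in record, (
                f"line {line_number}: missing 'reference_answer'"
            )
            records.append(record)
    # RFT requires at least 100 training records
    assert len(records) >= 100, f"need at least 100 records, found {len(records)}"
    return records
```

Amazon Bedrock validates the dataset again on upload; a local check like this just catches formatting mistakes earlier.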

------
#### [ Example: General question-answering ]

```
{
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful assistant"
    },
    {
      "role": "user",
      "content": "What is machine learning?"
    }
  ],
  "reference_answer": "Machine learning is a subset of artificial intelligence that enables computers to learn and make decisions from data without being explicitly programmed."
}
```

------
#### [ Example: Math problem ]

```
{
  "id": "sample-001",
  "messages": [
    {
      "role": "system",
      "content": "You are a math tutor"
    },
    {
      "role": "user",
      "content": "Solve: 2x + 5 = 13"
    }
  ],
  "reference_answer": {
    "solution": "x = 4",
    "steps": ["2x = 13 - 5", "2x = 8", "x = 4"]
  }
}
```

------
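Examples like these can be serialized into a JSONL training file with one JSON object per line; the file name below is illustrative:

```python
import json

examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a math tutor"},
            {"role": "user", "content": "Solve: 2x + 5 = 13"},
        ],
        "reference_answer": {"solution": "x = 4"},
    },
    # ...add at least 100 examples before uploading
]

with open("training_data.jsonl", "w") as f:
    for example in examples:
        # one JSON object per line, as the JSONL format requires
        f.write(json.dumps(example) + "\n")
```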

## Files API
<a name="fine-tuning-openai-files-api"></a>

You can use the OpenAI-compatible Files API to upload your training data for fine-tuning jobs. Files are stored securely in Amazon Bedrock, and are used when creating fine-tuning jobs. For complete API details, see the [OpenAI Files documentation](https://platform.openai.com/docs/api-reference/files).

### Upload training file
<a name="fine-tuning-openai-upload-file"></a>

To upload a training file, choose the tab for your preferred method, and then follow the steps:

------
#### [ OpenAI SDK (Python) ]

```
# Requires OPENAI_API_KEY and OPENAI_BASE_URL environment variables
from openai import OpenAI
client = OpenAI()

# Upload training file
with open(TRAINING_FILE_PATH, 'rb') as f:
    file_response = client.files.create(
        file=f,
        purpose='fine-tune'
    )

# Store file ID for next steps
training_file_id = file_response.id
print(f"✅ Training file uploaded successfully: {training_file_id}")
```

------
#### [ HTTP request ]

Make a POST request to `/v1/files`:

```
curl https://bedrock-mantle.us-west-2.api.aws/v1/files \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -F purpose="fine-tune" \
  -F file="@training_data.jsonl"
```

------

### Retrieve file details
<a name="fine-tuning-openai-retrieve-file"></a>

To retrieve details about a specific file, choose the tab for your preferred method, and then follow the steps:

------
#### [ OpenAI SDK (Python) ]

```
# Requires OPENAI_API_KEY and OPENAI_BASE_URL environment variables
import json

from openai import OpenAI
client = OpenAI()

# Retrieve file details
file_details = client.files.retrieve(training_file_id)

# Print raw response
print(json.dumps(file_details.model_dump(), indent=2))
```

------
#### [ HTTP request ]

Make a GET request to `/v1/files/{file_id}`:

```
curl https://bedrock-mantle.us-west-2.api.aws/v1/files/file-abc123 \
  -H "Authorization: Bearer $OPENAI_API_KEY"
```

------

### List files
<a name="fine-tuning-openai-list-files"></a>

To list uploaded files, choose the tab for your preferred method, and then follow the steps:

------
#### [ OpenAI SDK (Python) ]

```
# Requires OPENAI_API_KEY and OPENAI_BASE_URL environment variables
import json

from openai import OpenAI
client = OpenAI()

# List files
files_response = client.files.list(purpose='fine-tune')

# Print raw response
print(json.dumps(files_response.model_dump(), indent=2))
```

------
#### [ HTTP request ]

Make a GET request to `/v1/files`:

```
curl "https://bedrock-mantle.us-west-2.api.aws/v1/files?purpose=fine-tune" \
  -H "Authorization: Bearer $OPENAI_API_KEY"
```

------

### Delete file
<a name="fine-tuning-openai-delete-file"></a>

To delete a file, choose the tab for your preferred method, and then follow the steps:

------
#### [ OpenAI SDK (Python) ]

```
# Requires OPENAI_API_KEY and OPENAI_BASE_URL environment variables
from openai import OpenAI
client = OpenAI()

# Delete file
delete_response = client.files.delete(training_file_id)
```

------
#### [ HTTP request ]

Make a DELETE request to `/v1/files/{file_id}`:

```
curl -X DELETE https://bedrock-mantle.us-west-2.api.aws/v1/files/file-abc123 \
  -H "Authorization: Bearer $OPENAI_API_KEY"
```

------

## Characteristics of effective training data
<a name="rft-data-characteristics-open-weight"></a>

Effective RFT training data requires three key characteristics:
+ **Clarity and consistency** – Use clear, unambiguous prompts with consistent formatting. Avoid contradictory labels, ambiguous instructions, or conflicting reference answers that mislead training.
+ **Diversity** – Include varied input formats, edge cases, and difficulty levels that reflect production usage patterns across different user types and scenarios.
+ **Efficient reward functions** – Design functions that execute quickly (seconds, not minutes), parallelize with AWS Lambda, and return consistent scores for cost-effective training.

## Additional properties
<a name="rft-additional-properties-open-weight"></a>

The RFT data format supports custom fields beyond the core schema requirements (`messages` and `reference_answer`). This flexibility allows you to add any additional data your reward function needs for proper evaluation.

**Note**  
You don't need to configure this in your recipe. The data format inherently supports additional fields. Simply include them in your training data JSON, and they will be passed to your reward function in the `metadata` field.

**Common additional properties**
+ `task_id` – Unique identifier for tracking
+ `difficulty_level` – Problem complexity indicator
+ `domain` – Subject area or category
+ `expected_reasoning_steps` – Number of steps in solution

These additional fields are passed to your reward function during evaluation, enabling sophisticated scoring logic tailored to your specific use case.

**Examples with additional properties**

------
#### [ Chemistry problem ]

```
{
  "id": "chem-001",
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful chemistry assistant"
    },
    {
      "role": "user",
      "content": "Predict hydrogen bond donors and acceptors for this SMILES: CCN(CC)CCC(=O)c1sc(N)nc1C"
    }
  ],
  "reference_answer": {
    "donor_bond_counts": 2,
    "acceptor_bond_counts": 4
  }
}
```

The `reference_answer` field contains the expected output or evaluation criteria that your reward function uses to score the model's response. It is not limited to structured outputs—it can contain any format that helps your reward function evaluate quality.

------
#### [ Math problem with metadata ]

```
{
  "messages": [
    {
      "role": "system",
      "content": "You are a math tutor"
    },
    {
      "role": "user",
      "content": "Solve: 2x + 5 = 13"
    }
  ],
  "reference_answer": {
    "solution": "x = 4",
    "steps": ["2x = 13 - 5", "2x = 8", "x = 4"]
  },
  "task_id": "algebra_001",
  "difficulty_level": "easy",
  "domain": "algebra",
  "expected_reasoning_steps": 3
}
```

------

# Setting up reward functions for open-weight models
<a name="reward-functions-open-weight"></a>

Reward functions evaluate response quality and provide feedback signals for model training. You can set up reward functions using custom Lambda functions. Choose the approach that matches your task requirements.

## Custom Lambda functions for reward evaluation
<a name="rft-custom-lambda-functions"></a>

You can set up reward functions using custom Lambda functions. Within your Lambda function, you have flexibility in how you implement the evaluation logic:
+ **Objective tasks** – For objective tasks like code generation or math reasoning, use verifiable rule-based graders that check correctness against known standards or test cases.
+ **Subjective tasks** – For subjective tasks like instruction following or chatbot interactions, call Amazon Bedrock foundation models as judges within your Lambda function to evaluate response quality based on your criteria.

Your Lambda function can implement complex logic, integrate external APIs, perform multi-step calculations, or combine multiple evaluation criteria depending on your task requirements.

**Note**  
When using custom Lambda functions:  
Increase the Lambda timeout from the default of 3 seconds (up to the 15-minute maximum) for complex evaluations.
Your role needs permission to invoke the Lambda function, as described in [Lambda permissions for reward functions](rft-open-weight-access-security.md#openai-fine-tuning-lambda-permissions).

## Lambda function implementation details
<a name="rft-lambda-implementation-open-weight"></a>

When implementing custom Lambda reward functions, your function must accept and return data in the following format.

------
#### [ Input structure ]

```
[{
  "id": "123",
  "messages": [
    {
      "role": "user",
      "content": "Do you have a dedicated security team?"
    },
    {
      "role": "assistant",
      "content": "As an AI developed by Amazon, I do not have a dedicated security team..."
    }
  ],
  "metadata": {
    "reference_answer": {
      "compliant": "No",
      "explanation": "As an AI developed by Company, I do not have a traditional security team..."
    },
    "my_key": "sample-001"
  }
}]
```

------
#### [ Output structure ]

```
[{
  "id": "123",
  "aggregate_reward_score": 0.85,
  "metrics_list": [
    {
      "name": "accuracy",
      "value": 0.9,
      "type": "Reward"
    },
    {
      "name": "policy_compliance",
      "value": 0.8,
      "type": "Metric"
    }
  ]
}]
```

------

**Design guidelines**
+ **Rank responses** – Give the best answer a clearly higher score
+ **Use consistent checks** – Evaluate task completion, format adherence, safety, and reasonable length
+ **Maintain stable scaling** – Keep scores normalized and non-exploitable
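Putting the contract and guidelines together, a minimal grader Lambda might look like the following. The exact-match scoring is illustrative only; real graders use richer rule-based or judge-based logic:

```python
def lambda_handler(event, context):
    """Score a batch of records in the input/output format shown above."""
    results = []
    for record in event:
        # The last assistant message is the model response to grade
        response = next(
            (m["content"] for m in reversed(record["messages"])
             if m["role"] == "assistant"),
            "",
        )
        reference = record.get("metadata", {}).get("reference_answer", "")
        # Illustrative check: does the response contain the reference answer?
        accuracy = 1.0 if str(reference) and str(reference) in response else 0.0
        results.append({
            "id": record["id"],
            "aggregate_reward_score": accuracy,
            "metrics_list": [
                {"name": "accuracy", "value": accuracy, "type": "Reward"},
            ],
        })
    return results
```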

# Create and manage fine-tuning jobs for open-weight models using OpenAI APIs
<a name="fine-tuning-openai-job-create"></a>

The OpenAI-compatible fine-tuning job APIs allow you to create, monitor, and manage fine-tuning jobs. This page highlights using these APIs for reinforcement fine-tuning. For complete API details, see the [OpenAI Fine-tuning documentation](https://platform.openai.com/docs/api-reference/fine-tuning).

## Create fine-tuning job
<a name="fine-tuning-openai-create-job"></a>

Creates a fine-tuning job that begins the process of creating a new model from a given dataset. For complete API details, see the [OpenAI Create fine-tuning jobs documentation](https://developers.openai.com/api/reference/resources/fine_tuning/subresources/jobs/methods/create).

### Examples
<a name="fine-tuning-openai-create-job-examples"></a>

To create a fine-tuning job with RFT method, choose the tab for your preferred method, and then follow the steps:

------
#### [ OpenAI SDK (Python) ]

```
# Requires OPENAI_API_KEY and OPENAI_BASE_URL environment variables
from openai import OpenAI
client = OpenAI()

# Create fine-tuning job with RFT method
job_response = client.fine_tuning.jobs.create(
    model=MODEL_ID,
    training_file=training_file_id,
    # The suffix parameter is not supported for these jobs
    extra_body={
        "method": {
            "type": "reinforcement", 
            "reinforcement": {
                "grader": {
                    "type": "lambda",
                    "lambda": {
                        "function": "arn:aws:lambda:us-west-2:123456789012:function:my-reward-function"  # Replace with your Lambda ARN
                    }
                },
                "hyperparameters": {
                    "n_epochs": 1,  # Number of training epochs
                    "batch_size": 4,  # Batch size
                    "learning_rate_multiplier": 1.0  # Learning rate multiplier
                }
            }
        }
    }
)

# Store job ID for next steps
job_id = job_response.id
print(f"Fine-tuning job created: {job_id}")
```

------
#### [ HTTP request ]

Make a POST request to `/v1/fine_tuning/jobs`:

```
curl https://bedrock-mantle.us-west-2.api.aws/v1/fine_tuning/jobs \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "training_file": "file-abc123",
    "model": "openai.gpt-oss-20b",
    "method": {
      "type": "reinforcement",
      "reinforcement": {
        "grader": {
          "type": "lambda",
          "lambda": {
            "function": "arn:aws:lambda:us-west-2:123456789012:function:my-grader"
          }
        },
        "hyperparameters": {
          "n_epochs": 1,
          "batch_size": 4,
          "learning_rate_multiplier": 1.0
        }
      }
    }
  }'
```

------

## List fine-tuning events
<a name="fine-tuning-openai-list-events"></a>

Lists events for a fine-tuning job. Fine-tuning events provide detailed information about the progress of your job, including training metrics, checkpoint creation, and error messages. For complete API details, see the [OpenAI List fine-tuning events documentation](https://developers.openai.com/api/reference/resources/fine_tuning/subresources/jobs/methods/list_events).

### Examples
<a name="fine-tuning-openai-list-events-examples"></a>

To list fine-tuning events, choose the tab for your preferred method, and then follow the steps:

------
#### [ OpenAI SDK (Python) ]

```
# Requires OPENAI_API_KEY and OPENAI_BASE_URL environment variables
from openai import OpenAI
client = OpenAI()

# List fine-tuning events
events = client.fine_tuning.jobs.list_events(
    fine_tuning_job_id="ftjob-abc123",
    limit=50
)

for event in events.data:
    print(f"[{event.created_at}] {event.level}: {event.message}")
    if event.data:
        print(f"  Metrics: {event.data}")
```

------
#### [ HTTP request ]

Make a GET request to `/v1/fine_tuning/jobs/{fine_tuning_job_id}/events`:

```
curl "https://bedrock-mantle.us-west-2.api.aws/v1/fine_tuning/jobs/ftjob-abc123/events?limit=50" \
  -H "Authorization: Bearer $OPENAI_API_KEY"
```

------

Events include information such as:
+ Training started and completed messages
+ Checkpoint creation notifications
+ Training metrics (loss, accuracy) at each step
+ Error messages if the job fails

To paginate through all events, choose the tab for your preferred method, and then follow the steps:

------
#### [ OpenAI SDK (Python) ]

```
# Requires OPENAI_API_KEY and OPENAI_BASE_URL environment variables
from openai import OpenAI
client = OpenAI()

# Paginate through all events
all_events = []
after = None

while True:
    events = client.fine_tuning.jobs.list_events(
        fine_tuning_job_id="ftjob-abc123",
        limit=100,
        after=after
    )
    
    all_events.extend(events.data)
    
    if not events.has_more:
        break
    
    after = events.data[-1].id
```

------
#### [ HTTP request ]

Make multiple GET requests with the `after` parameter:

```
# First request
curl "https://bedrock-mantle.us-west-2.api.aws/v1/fine_tuning/jobs/ftjob-abc123/events?limit=100" \
  -H "Authorization: Bearer $OPENAI_API_KEY"

# Subsequent requests with 'after' parameter
curl "https://bedrock-mantle.us-west-2.api.aws/v1/fine_tuning/jobs/ftjob-abc123/events?limit=100&after=ft-event-abc123" \
  -H "Authorization: Bearer $OPENAI_API_KEY"
```

------

## Retrieve fine-tuning job
<a name="fine-tuning-openai-retrieve-job"></a>

Get detailed information about a fine-tuning job. For complete API details, see the [OpenAI Retrieve fine-tuning job documentation](https://developers.openai.com/api/reference/resources/fine_tuning/subresources/jobs/methods/retrieve).

### Examples
<a name="fine-tuning-openai-retrieve-job-examples"></a>

To retrieve specific job details, choose the tab for your preferred method, and then follow the steps:

------
#### [ OpenAI SDK (Python) ]

```
# Requires OPENAI_API_KEY and OPENAI_BASE_URL environment variables
import json

from openai import OpenAI
client = OpenAI()

# Retrieve specific job details
job_details = client.fine_tuning.jobs.retrieve(job_id)

# Print raw response
print(json.dumps(job_details.model_dump(), indent=2))
```

------
#### [ HTTP request ]

Make a GET request to `/v1/fine_tuning/jobs/{fine_tuning_job_id}`:

```
curl https://bedrock-mantle.us-west-2.api.aws/v1/fine_tuning/jobs/ftjob-abc123 \
  -H "Authorization: Bearer $OPENAI_API_KEY"
```

------

## List fine-tuning jobs
<a name="fine-tuning-openai-list-jobs"></a>

Lists your organization's fine-tuning jobs with pagination support. For complete API details, see the [OpenAI List fine-tuning jobs documentation](https://developers.openai.com/api/reference/resources/fine_tuning/subresources/jobs/methods/list).

### Examples
<a name="fine-tuning-openai-list-jobs-examples"></a>

To list fine-tuning jobs with limit and pagination, choose the tab for your preferred method, and then follow the steps:

------
#### [ OpenAI SDK (Python) ]

```
# Requires OPENAI_API_KEY and OPENAI_BASE_URL environment variables
import json

from openai import OpenAI
client = OpenAI()

# List fine-tuning jobs with limit and pagination
response = client.fine_tuning.jobs.list(
    limit=20  # Maximum number of jobs to return
)

# Print raw response
print(json.dumps(response.model_dump(), indent=2))
```

------
#### [ HTTP request ]

Make a GET request to `/v1/fine_tuning/jobs`:

```
curl "https://bedrock-mantle.us-west-2.api.aws/v1/fine_tuning/jobs?limit=20" \
  -H "Authorization: Bearer $OPENAI_API_KEY"
```

------

## Cancel fine-tuning job
<a name="fine-tuning-openai-cancel-job"></a>

Cancels a fine-tuning job that is in progress. Once cancelled, the job cannot be resumed. For complete API details, see the [OpenAI Cancel fine-tuning job documentation](https://developers.openai.com/api/reference/resources/fine_tuning/subresources/jobs/methods/cancel).

### Examples
<a name="fine-tuning-openai-cancel-job-examples"></a>

To cancel a fine-tuning job, choose the tab for your preferred method, and then follow the steps:

------
#### [ OpenAI SDK (Python) ]

```
# Requires OPENAI_API_KEY and OPENAI_BASE_URL environment variables
from openai import OpenAI
client = OpenAI()

# Cancel fine-tuning job
cancel_response = client.fine_tuning.jobs.cancel("ftjob-abc123")

print(f"Job ID: {cancel_response.id}")
print(f"Status: {cancel_response.status}")  # Should be "cancelled"
```

------
#### [ HTTP request ]

Make a POST request to `/v1/fine_tuning/jobs/{fine_tuning_job_id}/cancel`:

```
curl -X POST https://bedrock-mantle.us-west-2.api.aws/v1/fine_tuning/jobs/ftjob-abc123/cancel \
  -H "Authorization: Bearer $OPENAI_API_KEY"
```

------

## List fine-tuning checkpoints
<a name="fine-tuning-openai-list-checkpoints"></a>

Lists checkpoints for a fine-tuning job. Checkpoints are intermediate model snapshots created during fine-tuning that can be used for inference to evaluate performance at different training stages. For more information, see the [OpenAI List fine-tuning checkpoints documentation](https://developers.openai.com/api/reference/resources/fine_tuning/subresources/jobs/subresources/checkpoints/methods/list).

### Examples
<a name="fine-tuning-openai-list-checkpoints-examples"></a>

To list checkpoints for a fine-tuning job, choose the tab for your preferred method, and then follow the steps:

------
#### [ OpenAI SDK (Python) ]

```
# Requires OPENAI_API_KEY and OPENAI_BASE_URL environment variables
from openai import OpenAI
client = OpenAI()

# List checkpoints for a fine-tuning job
checkpoints = client.fine_tuning.jobs.checkpoints.list(
    fine_tuning_job_id="ftjob-abc123",
    limit=10
)

for checkpoint in checkpoints.data:
    print(f"Checkpoint ID: {checkpoint.id}")
    print(f"Step: {checkpoint.step_number}")
    print(f"Model: {checkpoint.fine_tuned_model_checkpoint}")
    print(f"Metrics: {checkpoint.metrics}")
    print("---")
```

------
#### [ HTTP request ]

Make a GET request to `/v1/fine_tuning/jobs/{fine_tuning_job_id}/checkpoints`:

```
curl "https://bedrock-mantle.us-west-2.api.aws/v1/fine_tuning/jobs/ftjob-abc123/checkpoints?limit=10" \
  -H "Authorization: Bearer $OPENAI_API_KEY"
```

------

Each checkpoint includes:
+ **Checkpoint ID** – Unique identifier for the checkpoint
+ **Step number** – Training step at which the checkpoint was created
+ **Model checkpoint** – Model identifier that can be used for inference
+ **Metrics** – Validation loss and accuracy at this checkpoint

To use a checkpoint model for inference, choose the tab for your preferred method, and then follow the steps:

------
#### [ OpenAI SDK (Python) ]

```
# Requires OPENAI_API_KEY and OPENAI_BASE_URL environment variables
from openai import OpenAI
client = OpenAI()

# Test inference with a checkpoint
response = client.chat.completions.create(
    model=checkpoint.fine_tuned_model_checkpoint,
    messages=[{"role": "user", "content": "What is AI?"}],
    max_tokens=100
)

print(response.choices[0].message.content)
```

------
#### [ HTTP request ]

Make a POST request to `/v1/chat/completions`:

```
curl https://bedrock-mantle.us-west-2.api.aws/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "ft:gpt-4o-mini:openai:custom:7p4lURel:ckpt-step-1000",
    "messages": [{"role": "user", "content": "What is AI?"}],
    "max_tokens": 100
  }'
```

------

## Run inference with fine-tuned model
<a name="fine-tuning-openai-inference"></a>

Once your fine-tuning job is complete, you can use the fine-tuned model for inference through the Responses API or Chat Completions API. For complete API details, see [Generate responses using OpenAI APIs](bedrock-mantle.md).

### Responses API
<a name="fine-tuning-openai-responses-api"></a>

Use the Responses API for single-turn text generation with your fine-tuned model:

------
#### [ OpenAI SDK (Python) ]

```
# Requires OPENAI_API_KEY and OPENAI_BASE_URL environment variables
from openai import OpenAI
client = OpenAI()

# Get the fine-tuned model ID
job_details = client.fine_tuning.jobs.retrieve("ftjob-abc123")

if job_details.status == 'succeeded' and job_details.fine_tuned_model:
    fine_tuned_model = job_details.fine_tuned_model
    print(f"Using fine-tuned model: {fine_tuned_model}")
    
    # Run inference with Responses API
    response = client.responses.create(
        model=fine_tuned_model,
        input="What is the capital of France?",
        max_output_tokens=100,
        temperature=0.7
    )
    
    print(f"Response: {response.output_text}")
else:
    print(f"Job status: {job_details.status}")
    print("Job must be in 'succeeded' status to run inference")
```

------
#### [ HTTP request ]

Make a POST request to `/v1/responses`:

```
curl https://bedrock-mantle.us-west-2.api.aws/v1/responses \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "ft:gpt-4o-mini:openai:custom-model:7p4lURel",
    "input": "What is the capital of France?",
    "max_output_tokens": 100,
    "temperature": 0.7
  }'
```

------

### Chat Completions API
<a name="fine-tuning-openai-inference-examples"></a>

Use the Chat Completions API for conversational interactions with your fine-tuned model:

------
#### [ OpenAI SDK (Python) ]

```
# Requires OPENAI_API_KEY and OPENAI_BASE_URL environment variables
from openai import OpenAI
client = OpenAI()

# Get the fine-tuned model ID
job_details = client.fine_tuning.jobs.retrieve("ftjob-abc123")

if job_details.status == 'succeeded' and job_details.fine_tuned_model:
    fine_tuned_model = job_details.fine_tuned_model
    print(f"Using fine-tuned model: {fine_tuned_model}")
    
    # Run inference
    inference_response = client.chat.completions.create(
        model=fine_tuned_model,
        messages=[
            {"role": "user", "content": "What is the capital of France?"}
        ],
        max_tokens=100
    )
    
    print(f"Response: {inference_response.choices[0].message.content}")
else:
    print(f"Job status: {job_details.status}")
    print("Job must be in 'succeeded' status to run inference")
```

------
#### [ HTTP request ]

Make a POST request to `/v1/chat/completions`:

```
curl https://bedrock-mantle.us-west-2.api.aws/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "ft:gpt-4o-mini:openai:custom-model:7p4lURel",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ],
    "max_tokens": 100
  }'
```

------

# Evaluate your RFT model
<a name="rft-evaluate-model"></a>

After your reinforcement fine-tuning job completes successfully, you can evaluate your custom model's performance using multiple assessment methods. Amazon Bedrock provides built-in evaluation tools to help you compare your RFT model against the base model and validate improvements.

**Topics**
+ [Evaluation methods](#rft-evaluation-methods)
+ [Setting up inference for evaluation](#rft-setup-inference-evaluation)
+ [Evaluation best practices](#rft-evaluation-best-practices)

## Evaluation methods
<a name="rft-evaluation-methods"></a>

Amazon Bedrock offers several ways to assess your RFT model performance.

### Validation metrics
<a name="rft-validation-metrics"></a>

If you upload a validation dataset, you will see two additional graphs in training metrics.
+ **Validation rewards** - Shows how well your model generalizes beyond training examples. Lower scores than training rewards are normal and expected.
+ **Validation episode lengths** - Average response length on unseen validation data. Shows how efficiently your model responds to new inputs compared to the training examples.

### Test in Playground
<a name="rft-test-playground"></a>

Use the Test in Playground feature for quick, ad hoc evaluations. To use it, you must first set up inference. For more information, see [Setting up inference for evaluation](#rft-setup-inference-evaluation).

This interactive tool allows you to:
+ Test prompts directly with your RFT model
+ Compare responses side-by-side between your custom model and the base model
+ Evaluate response quality improvements in real-time
+ Experiment with different prompts to assess model capabilities

### Bedrock Model Evaluation
<a name="rft-model-evaluation"></a>

Use Amazon Bedrock Model Evaluation to assess your RFT model using your own datasets. This provides comprehensive performance analysis with standardized metrics and benchmarks. Benefits of Amazon Bedrock Model Evaluation include:
+ Systematic evaluation using custom test datasets
+ Quantitative performance comparisons
+ Standardized metrics for consistent assessment
+ Integration with existing Amazon Bedrock evaluation workflows

## Setting up inference for evaluation
<a name="rft-setup-inference-evaluation"></a>

Before evaluating your RFT model, set up inference using one of these options:

### On-demand inference
<a name="rft-on-demand-inference"></a>

Create a custom model on-demand deployment for flexible, pay-per-use evaluation. This option includes token-based pricing that charges based on the number of tokens processed during inference.

## Evaluation best practices
<a name="rft-evaluation-best-practices"></a>
+ **Compare systematically** - Always evaluate your RFT model against the base model using the same test prompts and evaluation criteria.
+ **Use diverse test cases** - Include various prompt types and scenarios that represent your real-world use cases.
+ **Validate reward alignment** - Ensure your model improvements align with the reward functions used during training.
+ **Test edge cases** - Evaluate model behavior on challenging or unusual inputs to assess robustness.
+ **Monitor response consistency** - Check that your model provides consistent quality across multiple runs with similar prompts.
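
These practices can be scripted. The following is a minimal sketch of a side-by-side comparison harness; the `generate` callables and the `score` function are hypothetical placeholders that you would wire to your own inference setup (for example, `Converse` calls against your deployment) and your own evaluation criteria:

```python
# Minimal evaluation harness sketch. The generate callables and score()
# are hypothetical placeholders; connect them to your own inference and
# evaluation criteria.

def compare_models(prompts, base_generate, custom_generate, score, runs=3):
    """Run both models on the same prompts and average a quality score."""
    results = {"base": [], "custom": []}
    for prompt in prompts:
        for name, generate in (("base", base_generate), ("custom", custom_generate)):
            # Multiple runs per prompt to check response consistency.
            scores = [score(prompt, generate(prompt)) for _ in range(runs)]
            results[name].append(sum(scores) / len(scores))
    return {name: sum(s) / len(s) for name, s in results.items()}

# Example with stub models: the "custom" stub always answers; the "base"
# stub answers only a known prompt type.
prompts = ["What is your return policy?", "How do I reset my password?"]
base = lambda p: "I can help with returns." if "return" in p else "I'm not sure."
custom = lambda p: "Here are the exact steps..."
score = lambda p, r: 0.0 if "not sure" in r else 1.0
print(compare_models(prompts, base, custom, score))  # {'base': 0.5, 'custom': 1.0}
```

The key point is that both models see identical prompts and are scored with identical criteria, so any score difference reflects the models rather than the test setup.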

# Customize a model with distillation in Amazon Bedrock
<a name="model-distillation"></a>

*Model distillation* is the process of transferring knowledge from a larger, more intelligent model (known as the teacher) to a smaller, faster, cost-efficient model (known as the student). In this process, the student model's performance improves for a specific use case. Amazon Bedrock Model Distillation uses the latest data synthesis techniques to generate diverse, high-quality responses (known as synthetic data) from the teacher model, and fine-tunes the student model.

## How to use Amazon Bedrock Model Distillation
<a name="how-md-use"></a>

 To use Amazon Bedrock Model Distillation, you do the following: 

1. **Choose a teacher model and a student model** – For more information, see [Prerequisites for model distillation](prequisites-model-distillation.md).

1. **Prepare your training data for distillation** – Your training data is a collection of prompts stored in `.jsonl` files. Amazon Bedrock uses the input data to generate responses from the teacher model and uses the responses to fine-tune the student model.
   + **Optimize prompts** – Format your input prompts for the use case that you want. For more information, see [Optimize your input prompts for synthetic data generation](distillation-prepare-datasets.md#distillation-data-prep-prompt-optimization). 
   + **Use labeled examples** – Prepare labeled input data as prompt-response pairs. Amazon Bedrock can use these pairs as golden examples while generating responses from the teacher model. For more information, see [Option 1: Provide your own prompts for data preparation](distillation-data-prep-option-1.md). 
   + **Use invocation logs** – If you enable CloudWatch Logs invocation logging, you can use existing teacher responses from invocation logs stored in Amazon S3 as training data. An invocation log in Amazon Bedrock is a detailed record of model invocations. For more information, see [Option 2: Use invocation logs for data preparation](distillation-data-prep-option-2.md). 

1. **Create a distillation job** – This job creates a smaller, faster, and more cost-effective model for your use case. Only you can access the final distilled model. Amazon Bedrock doesn't use your data to train any other teacher or student model for public use. For more information, see [Submit a model distillation job in Amazon Bedrock](submit-model-distillation-job.md). For more information on setting up on demand inference, see [Set up inference for a custom model](model-customization-use.md).
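
As a sketch of step 2, the following shows how a small prompt dataset might be written as a `.jsonl` file in the `bedrock-conversation-2024` format described later in this guide. The file name, system prompt, and user prompt are illustrative:

```python
import json

# Sketch: write a distillation prompt dataset as .jsonl, one JSON record
# per line, in the bedrock-conversation-2024 format. Content is illustrative.
records = [
    {
        "schemaVersion": "bedrock-conversation-2024",
        "system": [{"text": "You are a helpful customer-service assistant."}],
        "messages": [
            {"role": "user", "content": [{"text": "How do I track my order?"}]}
        ],
    }
]

with open("distillation-prompts.jsonl", "w") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")
```

You would then upload the resulting file to the Amazon S3 location that you specify when you create the distillation job.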

## How Amazon Bedrock Model Distillation works
<a name="how-md-works"></a>

Amazon Bedrock Model Distillation is a single workflow that automates the process of creating a distilled model. In this workflow, Amazon Bedrock generates responses from a teacher model, adds data synthesis techniques to improve response generation, and fine-tunes the student model with the generated responses. The augmented dataset is split into separate datasets to use for training and validation. Amazon Bedrock uses only the data in the training dataset to fine-tune the student model.

After you've identified your teacher and student models, you can choose how you want Amazon Bedrock to create a distilled model for your use case. Amazon Bedrock can either generate teacher responses by using the prompts that you provide, or you can use responses from your production data via invocation logs. Amazon Bedrock Model Distillation uses these responses to fine-tune the student model.

**Note**  
If Amazon Bedrock Model Distillation uses its proprietary data synthesis techniques to generate higher-quality teacher responses, then your AWS account will incur additional charges for inference calls to the teacher model. These charges are billed at the on-demand inference rates of the teacher model. Data synthesis techniques may increase the size of the fine-tuning dataset to a maximum of 15,000 prompt-response pairs. For more information about Amazon Bedrock charges, see [Amazon Bedrock Pricing](https://aws.amazon.com/bedrock/pricing/).

### Creating a distilled model using prompts that you provide
<a name="using-prompts"></a>

Amazon Bedrock uses the input prompts that you provide to generate responses from the teacher model. Amazon Bedrock then uses the responses to fine-tune the student model that you've identified. Depending on your use case, Amazon Bedrock might add proprietary data synthesis techniques to generate diverse and higher-quality responses. For example, Amazon Bedrock might generate similar prompts to generate more diverse responses from the teacher model. Or, if you optionally provide a handful of labeled input data as prompt-response pairs, then Amazon Bedrock might use these pairs as golden examples to instruct the teacher to generate similar high-quality responses.

### Creating a distilled model using production data
<a name="using-prod-data"></a>

If you already have responses generated by the teacher model stored in your invocation logs, you can use those existing teacher responses to fine-tune the student model. To do this, you must provide Amazon Bedrock access to your invocation logs. An invocation log in Amazon Bedrock is a detailed record of model invocations. For more information, see [Monitor model invocation using CloudWatch Logs](https://docs.aws.amazon.com//bedrock/latest/userguide/model-invocation-logging.html).

If you choose this option, then you can continue to use Amazon Bedrock's inference API operations, such as [InvokeModel](https://docs.aws.amazon.com//bedrock/latest/APIReference/API_runtime_InvokeModel.html) or [Converse](https://docs.aws.amazon.com//bedrock/latest/APIReference/API_runtime_Converse.html), and collect the invocation logs, model input data (prompts), and model output data (responses) for all invocations used in Amazon Bedrock.

When you generate responses from the model using the `InvokeModel` or `Converse` API operations, you can optionally add `requestMetadata` to the responses. When you create a distillation job, you can filter by this metadata as part of the invocation logs configuration. You can filter by your specific use cases, and Amazon Bedrock then uses only the filtered responses to fine-tune your student model. When you choose to use invocation logs to fine-tune your student model, you can have Amazon Bedrock use the prompts only, or use prompt-response pairs.

------
#### [ Choosing prompts with invocation logs ]

If you choose to have Amazon Bedrock use only the prompts from the invocation logs, then Amazon Bedrock uses the prompts to generate responses from the teacher model. In this case, Amazon Bedrock uses the responses to fine-tune the student model that you've identified. Depending on your use case, Amazon Bedrock Model Distillation might add proprietary data synthesis techniques to generate diverse and higher-quality responses. 

------
#### [ Choosing prompt-response pairs with invocation logs ]

If you choose to have Amazon Bedrock use prompt-response pairs from the invocation logs, then Amazon Bedrock won't regenerate responses from the teacher model. Instead, it uses the responses from the invocation log to fine-tune the student model. For Amazon Bedrock to read the responses from the invocation logs, the teacher model specified in your model distillation job must match the model used in the invocation log. If they don't match, the invocation logs aren't used. If you've added request metadata to the responses in the invocation log, then you can specify request metadata filters so that Amazon Bedrock reads only the specific logs that are valid for your use case when fine-tuning the student model.

------

# Prerequisites for model distillation
<a name="prequisites-model-distillation"></a>

Before you can begin, make sure that you understand access and security controls for Model Distillation. You must also choose a teacher and student model for your distillation job.

## Permissions
<a name="model-distillation-access-security"></a>

You must have an IAM service role that can access the Amazon S3 bucket where you want to store your Model Distillation training and validation data. Amazon Bedrock also has options for encrypting and further securing your distillation jobs and artifacts. For more information, see [Model customization access and security](custom-model-job-access-security.md).

To use a cross-region inference profile for a teacher model in a Distillation job, your service role must have permissions to invoke the inference profile in an AWS Region, in addition to the model in each Region in the inference profile. For a policy example, see [(Optional) Permissions to create a Distillation job with a cross-region inference profile](custom-model-job-access-security.md#custom-models-cross-region-inference-profile-permissions). For more information about cross-region inference, see [Increase throughput with cross-Region inference](cross-region-inference.md).

## Choose teacher and student models for distillation
<a name="prequisites-teacher-student-models"></a>



| Model Type | Selection Criteria | Key Considerations | Requirements | 
| --- | --- | --- | --- | 
| Teacher Model | Choose a teacher model that's significantly larger and more capable than the student model, and whose accuracy you want to achieve for your use case. | To make distillation more effective, choose a model that's already trained on tasks similar to your use case. For some teacher models, you can choose a Cross-Region inference profile. | Must have permissions to invoke inference profiles and models in each Region. See cross-region inference documentation for policy examples. | 
| Student Model | Choose a student model that's significantly smaller in size than the teacher model. | The student model must be one of the student models paired with your teacher model in the supported models table. | Must be compatible with selected teacher model as shown in the following table. | 

The following section lists the supported models and regions for Amazon Bedrock Model Distillation. After you choose your teacher and student models, you prepare and optimize your training datasets for distillation. For more information, see [Prepare your training datasets for distillation](distillation-prepare-datasets.md).

### Supported models and Regions for Amazon Bedrock Model Distillation
<a name="model-distillation-supported"></a>

The following table shows which models and AWS Regions Amazon Bedrock Model Distillation supports for teacher and student models. If you use a cross-Region inference profile, only system-defined inference profiles are supported for model distillation. For more information, see [Increase throughput with cross-Region inference](cross-region-inference.md).


[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/bedrock/latest/userguide/prequisites-model-distillation.html)

**Note**  
For Claude and Llama models, the distillation job runs in US West (Oregon). You can either buy [provisioned throughput](https://docs.aws.amazon.com//bedrock/latest/userguide/prov-throughput.html) in US West (Oregon), or [copy the distilled model](https://docs.aws.amazon.com//bedrock/latest/userguide/copy-model.html) to another Region and then buy [provisioned throughput](https://docs.aws.amazon.com//bedrock/latest/userguide/prov-throughput.html) there.
For Nova models, you run the distillation job in US East (N. Virginia). For inference, you must buy [provisioned throughput](https://docs.aws.amazon.com//bedrock/latest/userguide/prov-throughput.html) in US East (N. Virginia). You can't copy Nova models to other Regions.

# Prepare your training datasets for distillation
<a name="distillation-prepare-datasets"></a>

Before you can begin a model customization job, you must at minimum prepare a training dataset. To prepare input datasets for your custom model, you create `.jsonl` files, where each line is a JSON object corresponding to a record. The files you create must conform to the format for model distillation and the model that you choose, and the records in them must conform to size requirements.

Provide the input data as prompts. Amazon Bedrock uses the input data to generate responses from the teacher model and uses the generated responses to fine-tune the student model. For more information about the inputs Amazon Bedrock uses, and to choose the option that works best for your use case, see [How Amazon Bedrock Model Distillation works](model-distillation.md#how-md-works). There are two options for preparing your input dataset.

**Note**  
Amazon Nova models have different requirements for distillation. For more information, see [Distilling Amazon Nova models](https://docs.aws.amazon.com/nova/latest/userguide/customize-distill.html).

## Supported modalities for distillation
<a name="distillation-supported-modalities"></a>

The models listed in [Supported models and Regions for Amazon Bedrock Model Distillation](prequisites-model-distillation.md#model-distillation-supported) support only the text-to-text modality.

## Optimize your input prompts for synthetic data generation
<a name="distillation-data-prep-prompt-optimization"></a>

During model distillation, Amazon Bedrock generates a synthetic dataset that it uses to fine-tune your student model for your specific use case. For more information, see [How Amazon Bedrock Model Distillation works](model-distillation.md#how-md-works).

You can optimize the synthetic data generation process by formatting your input prompts for the use case that you want. For example, if your distilled model's use case is retrieval augmented generation (RAG), you would format your prompts differently than if you want the model to focus on agent use cases.

The following are examples for how you can format your input prompts for RAG or agent use cases.

------
#### [ RAG prompt example ]

```
{
  "schemaVersion": "bedrock-conversation-2024",
  "system": [
    {
      "text": "You are a financial analyst charged with answering questions about 10K and 10Q SEC filings. Given the context below, answer the following question."
    }
  ],
  "messages": [
    {
      "role": "user",
      "content": [
        {
          "text": "<context>\nDocument 1: Multiple legal actions have been filed against us as a result of the October 29, 2018 accident of Lion Air Flight 610 and the March 10, 2019 accident of Ethiopian Airlines Flight 302.\n</context>\n\n<question>Has Boeing reported any materially important ongoing legal battles from FY2022?</question>"
        }
      ]
    }
  ]
}
```

------
#### [ Agent prompt example ]

```
{
    "schemaVersion": "bedrock-conversation-2024",
    "system": [
        {
            "text": "You are an expert in composing functions. You are given a question and a set of possible functions. Based on the question, you will need to make one or more function/tool calls to achieve the purpose.\nHere is a list of functions in JSON format that you can invoke.\n[\n    {\n        \"name\": \"lookup_weather\",\n        \"description\": \"Look up the weather for a specific location\",\n        \"parameters\": {\n            \"type\": \"dict\",\n            \"required\": [\"location\"],\n            \"properties\": {\n                \"location\": {\"type\": \"string\"},\n                \"date\": {\"type\": \"string\"}\n            }\n        }\n    }\n]"
        }
    ],
    "messages": [
        {
            "role": "user",
            "content": [
                {
                    "text": "What's the weather tomorrow?"
                }
            ]
        },
        {
            "role": "assistant",
            "content": [
                {
                    "text": "[lookup_weather(location=\"san francisco\", date=\"tomorrow\")]"
                }
            ]
        }
    ]
}
```

------

# Option 1: Provide your own prompts for data preparation
<a name="distillation-data-prep-option-1"></a>

Collect your prompts and store them in `.jsonl` file format. Each record in the JSONL file must use the following structure.
+ Include the `schemaVersion` field, which must have the value `bedrock-conversation-2024`.
+ [Optional] Include a system prompt that indicates the role assigned to the model.
+ In the `messages` field, include the user role containing the input prompt provided to the model.
+ [Optional] In the `messages` field, include the assistant role containing the desired response.

Anthropic and Meta Llama models support only single-turn conversation prompts, meaning you can only have one user prompt. The Amazon Nova models support multi-turn conversations, allowing you to provide multiple user and assistant exchanges within one record.

**Example format**

```
{
    "schemaVersion": "bedrock-conversation-2024",
    "system": [{
        "text": "A chat between a curious User and an artificial intelligence Bot. The Bot gives helpful, detailed, and polite answers to the User's questions."
    }],
    "messages": [{
            "role": "user",
            "content": [{
                "text": "why is the sky blue"
            }]
        },
        {
            "role": "assistant",
            "content": [{
                "text": "The sky is blue because molecules in the air scatter blue light from the Sun more than other colors."
            }]
        }
    ]
}
```
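
As a rough illustration of the structural rules above, the following sketches a minimal local check. This is a hypothetical helper for illustration only, distinct from the official validation script, and it covers only a few of the requirements:

```python
import json

def check_record(line, multi_turn_allowed=False):
    """Rough structural check for one JSONL record (not exhaustive)."""
    record = json.loads(line)
    assert record.get("schemaVersion") == "bedrock-conversation-2024"
    messages = record.get("messages", [])
    user_turns = [m for m in messages if m.get("role") == "user"]
    assert user_turns, "at least one user message is required"
    if not multi_turn_allowed:  # Anthropic and Meta Llama: single-turn only
        assert len(user_turns) == 1, "only one user prompt is allowed"
    for m in messages:
        assert m["role"] in ("user", "assistant")
        assert all("text" in c for c in m["content"])
    return True

line = json.dumps({
    "schemaVersion": "bedrock-conversation-2024",
    "messages": [{"role": "user", "content": [{"text": "why is the sky blue"}]}],
})
print(check_record(line))  # True
```

For Amazon Nova models, you would pass `multi_turn_allowed=True`, since they support multiple user and assistant exchanges per record.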

## Validate your dataset
<a name="distillation-data-prep-option-1-validate"></a>

Before you run your distillation job, you can validate your input dataset using a [Python script](https://github.com/aws-samples/amazon-bedrock-samples/blob/main/custom-models/model_distillation/dataset-validation/README.md).

# Option 2: Use invocation logs for data preparation
<a name="distillation-data-prep-option-2"></a>

When you create a distillation job, you can have Amazon Bedrock use existing teacher responses from CloudWatch Logs invocation logs as training data. For Amazon Bedrock, an invocation log is a detailed record of model invocations. 

To use invocation logs for model distillation, turn on model invocation logging, use one of the model invocation operations, and make sure that you've set up an Amazon S3 bucket as the destination for the logs. Before you can start the model distillation job, you must give Amazon Bedrock permission to access the logs. For more information about setting up the invocation logs, see [Monitor model invocation using Amazon CloudWatch Logs](https://docs.aws.amazon.com//bedrock/latest/userguide/model-invocation-logging.html).

With this option, you can specify whether you want Amazon Bedrock to use only the prompts, or to use prompt-response pairs from the invocation log. If you want Amazon Bedrock to use only prompts, then Amazon Bedrock might add proprietary data synthesis techniques to generate diverse and higher-quality responses from the teacher model. If you want Amazon Bedrock to use prompt-response pairs, then Amazon Bedrock won't regenerate responses from the teacher model; it directly uses the responses from the invocation log to fine-tune the student model.

**Important**  
You can provide a maximum of 15,000 prompts or prompt-response pairs to Amazon Bedrock for fine-tuning the student model. To ensure that the student model is fine-tuned to meet your specific requirements, we highly recommend the following:  
+ If you want Amazon Bedrock to use only prompts, make sure that there are at least 100 prompt-response pairs generated across all models.
+ If you want Amazon Bedrock to use responses from your invocation logs, make sure that you have at least 100 prompt-response pairs in your invocation logs generated from a model that exactly matches the teacher model you've chosen.

You can optionally [add request metadata](#add-metadata-to-prompts) to the prompt-response pairs in the invocation log using one of the model invocation operations and then later use it to filter the logs. Amazon Bedrock can use the filtered logs to fine-tune the student model.

To filter the logs using multiple request metadata entries, use a single Boolean operator: AND, OR, or NOT. You cannot combine operators. To filter on a single request metadata entry, use the Boolean operator NOT.

## Add request metadata to prompts and responses in your invocation logs for model distillation
<a name="add-metadata-to-prompts"></a>

Model invocation logging collects invocation logs, model input data (prompts), and model output data (responses) for all invocations used in Amazon Bedrock. If you've enabled logging, you can collect the logs whenever you interact with Amazon Bedrock foundation models through any `Invoke` or `Converse` API operations. If you want Amazon Bedrock to use the prompts and associated responses from the invocation log to fine-tune the student model, then you must give Amazon Bedrock access to these logs. Using responses that a model has already generated makes it quicker to fine-tune the student model. Using responses from the invocation logs also makes model distillation more cost-effective. However, Amazon Bedrock's proprietary data synthesis techniques aren't applied, which might otherwise result in a more performant distilled model.

With invocation logs, you can identify the prompt-response pairs that you want Amazon Bedrock to use for model distillation. These prompt-response pairs can be for specific use cases. To be able to identify the prompt-response pairs to use for distillation, you must add a request metadata string to the prompt-response pairs when you invoke a model or use the `Converse` API operation. The request metadata is provided in the format: `"priority" : "High"`. 
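
As an illustration, the following sketches the shape of a `Converse` request that carries request metadata. The model ID, message, and metadata values are placeholders, and the actual client call is shown only as a comment:

```python
# Sketch: structure of a Converse request that tags the resulting
# invocation log entry with request metadata. Values are placeholders.
request = {
    "modelId": "meta.llama3-1-405b-instruct-v1:0",
    "messages": [
        {"role": "user", "content": [{"text": "story of two dogs"}]}
    ],
    # Up to 16 string key-value pairs; used later to filter invocation logs.
    "requestMetadata": {
        "project": "CustomerService",
        "intent": "ComplaintResolution",
        "priority": "High",
    },
}
# The call itself, assuming a configured bedrock-runtime client:
# import boto3
# bedrock_runtime = boto3.client("bedrock-runtime")
# response = bedrock_runtime.converse(**request)
print(sorted(request["requestMetadata"]))  # ['intent', 'priority', 'project']
```

The metadata keys you choose here are the same keys you can later reference in the request metadata filters of your distillation job.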

As you continue to interact with the model using the `Invoke` or `Converse` API operations, the interactions and the request metadata are logged in the invocation logs. The following is an example of an invocation log from an `InvokeModel` call that shows the request metadata:

```
{
    "schemaType": "ModelInvocationLog",
    "schemaVersion": "1.0",
    "timestamp": "2024-09-11T17:16:18Z",
    "accountId": "XXXXX",
    "identity": {
        "arn": "arn:aws:sts::XXXXXXX:assumed-role/Admin/XXXXXX"
    },
    "region": "us-east-1",
    "requestId": "7ee514f3-9eff-467c-af75-8fd564ce58e5",
    "operation": "InvokeModel",
    "modelId": "meta.llama3-1-405b-instruct-v1:0",
    "input": {
        "inputContentType": "application/json",
        "inputBodyJson": {
            "prompt": "story of two dogs",
            "max_tokens_to_sample": 300
        },
        "inputTokenCount": 13
    },
    "output": {
        "outputContentType": "application/json",
        "outputBodyJson": {
            "type": "completion",
            "completion": " Here is a story about two dogs:\n\nRex and Charlie were best friends who lived next door to each other. Every morning, their owners would let them out into the backyard to play. Rex was a golden retriever with long, shaggy fur. He loved to run and chase sticks. Charlie was a corgi with short legs and a big smile. He was happy just sniffing the grass and rolling around. \n\nThe two dogs spent every day playing together. Rex would grab a stick and bring it over to Charlie, wagging his tail excitedly. Charlie would take the other end of the stick in his mouth and they'd have a friendly game of tug-of-war. After tiring themselves out, they'd flop down in the warm sunshine for a nap. \n\nAt lunchtime, their owners would call them in for food. Rex would gobble up his kibble hungrily while Charlie ate his at a more leisurely pace. After lunch, it was right back outside for more playtime. The afternoon sunbeams would light up their fur as they chased each other around the yard. \n\nWhen it started getting late, their owners called them in for the night. Rex and Charlie would head inside, tired but happy after another fun day of play. After slurping up fresh water from their bowls, they'd curl up on their beds, Rex's fluffy golden tail tucked over his nose and little",
            "stop_reason": "max_tokens",
            "stop": null
        },
        "outputTokenCount": 300
    },
    "requestMetadata": {
        "project": "CustomerService",
        "intent": "ComplaintResolution",
        "priority": "High"
    }
}
```

You can specify the invocation log as your input data source when you start a model distillation job. You can start a model distillation job in the Amazon Bedrock console, or by using the API, AWS CLI, or an AWS SDK.

**Requirements for providing request metadata**

The request metadata must meet the following requirements:
+ Provide it in the JSON `key:value` format.
+ Each key and each value must be a string of 256 characters maximum.
+ Provide a maximum of 16 key-value pairs.
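
A quick local check of these limits might look like the following sketch (a hypothetical helper, shown for illustration):

```python
def validate_request_metadata(metadata):
    """Check the documented limits: at most 16 pairs, and string keys
    and values of at most 256 characters each."""
    if len(metadata) > 16:
        return False
    for key, value in metadata.items():
        if not (isinstance(key, str) and isinstance(value, str)):
            return False
        if len(key) > 256 or len(value) > 256:
            return False
    return True

print(validate_request_metadata({"priority": "High"}))  # True
print(validate_request_metadata({"count": 3}))          # False (non-string value)
```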

### Using request metadata filters
<a name="request-metadata-filter"></a>

You can apply filters to the request metadata to selectively choose which prompt-response pairs to include in distillation for fine-tuning the student model. For example, you might want to include only those with `"project": "CustomerService"` and `"priority": "High"` request metadata.

To filter the logs using multiple request metadata entries, use a single Boolean operator: AND, OR, or NOT. You cannot combine operators. To filter on a single request metadata entry, use the Boolean operator NOT.
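
Conceptually, the filter semantics can be modeled as in the following sketch. This is only an illustration of the matching behavior; the real filtering is configured as part of the distillation job, not performed client-side:

```python
def matches(log_metadata, conditions, operator="AND"):
    """Conceptual model of request-metadata filtering: exact key-value
    matches combined with a single Boolean operator."""
    hits = [log_metadata.get(k) == v for k, v in conditions.items()]
    if operator == "AND":
        return all(hits)
    if operator == "OR":
        return any(hits)
    if operator == "NOT":
        return not any(hits)
    raise ValueError("use a single operator: AND, OR, or NOT")

log = {"project": "CustomerService", "priority": "High"}
print(matches(log, {"project": "CustomerService", "priority": "High"}, "AND"))  # True
print(matches(log, {"priority": "Low"}, "NOT"))  # True
```

Note that, as described above, you choose one operator for the whole filter; you can't mix AND with OR or NOT in a single configuration.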

You can specify the invocation log as your input data source, and the filters to use to select the prompt-response pairs, when you start a model distillation job. You can start a model distillation job in the Amazon Bedrock console, or by using the API, AWS CLI, or an AWS SDK. For more information, see [Submit a model distillation job in Amazon Bedrock](submit-model-distillation-job.md).

## Validate your dataset
<a name="distillation-data-prep-option-2-validate"></a>

Before you run your distillation job, you can validate your input dataset using a [Python script](https://github.com/aws-samples/amazon-bedrock-samples/blob/main/custom-models/model_distillation/dataset-validation/README.md).

# Submit a model distillation job in Amazon Bedrock
<a name="submit-model-distillation-job"></a>

You can perform model distillation through the Amazon Bedrock console or by sending a [CreateModelCustomizationJob](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_CreateModelCustomizationJob.html) request with an [Amazon Bedrock control plane endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#br-cp).

## Prerequisites
<a name="submit-model-distillation-job-prerequisites"></a>
+ Create an IAM service role with the required permissions. For comprehensive security and permissions information, see [Model customization access and security](custom-model-job-access-security.md).
+ (Optional) Encrypt input and output data, your customization job, or inference requests made to custom models. For more information, see [Encryption of custom models](encryption-custom-job.md).
+ (Optional) Create a Virtual Private Cloud (VPC) to protect your customization job. For more information, see [(Optional) Protect your model customization jobs using a VPC](custom-model-job-access-security.md#vpc-model-customization).

For more information on setting up on demand inference, see [Set up inference for a custom model](model-customization-use.md).

## Submit your job
<a name="submit-model-distillation-job-how-to"></a>

------
#### [ Console ]

1. Sign in to the AWS Management Console with an IAM identity that has permissions to use the Amazon Bedrock console. Then, open the Amazon Bedrock console at [https://console.aws.amazon.com/bedrock](https://console.aws.amazon.com/bedrock).

1. From the left navigation pane, choose **Custom models** under **Tune**.

1. Choose **Create distillation job**.

1. For **Distilled model details**, do the following:

   1. For **Distilled model name**, enter a name for your distilled model.

   1. (Optional) For **Model encryption**, select the checkbox if you want to provide a KMS key for encrypting your job and its related artifacts. 

      For more information, see [Encryption of custom models](encryption-custom-job.md).

   1. (Optional) Apply **Tags** to your distilled model.

1. For **Job configuration**, do the following:

   1. For **Job name**, enter a name for your distillation job.

   1. (Optional) For **Model encryption**, select the checkbox if you want to provide a KMS key for encrypting your job and its related artifacts. 

      For more information, see [Encryption of custom models](encryption-custom-job.md).

   1. (Optional) Apply **Tags** to your job.

1. For **Teacher model – Student model details**, choose the teacher and student models for creating your distilled model.

   For more information, see [Prerequisites for model distillation](prequisites-model-distillation.md).

1. For **Synthetic data generation**, do the following:

   1. For **Max response length**, specify the maximum length of the synthetic responses generated by the teacher model.

   1. For **Distillation input dataset**, choose one of the following options:
      + **Directly upload to S3 location** – Specify the S3 location where you're storing the input dataset (prompts) that'll be used for distillation. For more information, see [Option 1: Provide your own prompts for data preparation](distillation-data-prep-option-1.md).
      + **Provide access to invocation logs** – Specify the S3 location where you're storing the invocation logs with the input dataset (prompts) that'll be used for distillation. For more information, see [Option 2: Use invocation logs for data preparation](distillation-data-prep-option-2.md).
        + (Optional) For **Request Metadata Filters**, specify filters if you want Amazon Bedrock to only use certain prompts in your logs for distillation.
        + Choose **Read prompts** or **Read prompt-response pairs** depending on what you want Amazon Bedrock to access from your logs. Keep in mind that responses are read only if your teacher model matches the model in your logs.

1. For **Distillation output**, specify the S3 location where you want to upload the metrics and reports about your distillation job.

   For more information, see [Analyze model customization job results](model-customization-analyze.md).

1. For **VPC settings**, choose a VPC configuration for accessing the S3 bucket with your training data.

   For more information, see [(Optional) Protect your model customization jobs using a VPC](custom-model-job-access-security.md#vpc-model-customization).

1. For **Service access**, specify the IAM role for accessing the S3 bucket with your training data. Unless you use a Cross Region inference profile or VPC configurations, you can create the role in the Amazon Bedrock console with the correct permissions automatically configured. Or you can use an existing service role. 

    For a job that has Amazon VPC configurations or uses a Cross Region inference profile, you must create a new service role in IAM that has the required permissions. 

   For more information, see [Create an IAM service role for model customization](custom-model-job-access-security.md#custom-model-job-service-role).

1. Choose **Create distillation job** to start the distillation job. After you customize a model, you can set up inference for the model. For more information, see [Set up inference for a custom model](model-customization-use.md). 

------
#### [ API ]

At minimum, you must provide the following fields to submit your model distillation job when using the Amazon Bedrock API.



| Field | Description | 
| --- | --- | 
| baseModelIdentifier | The model identifier of the student model | 
| customModelName | The name of the new distilled model | 
| jobName | The name of the model distillation job | 
| roleArn | Role that gives Amazon Bedrock permissions to read training and validation files and write to the output path | 
| trainingDataConfig | The Amazon S3 path that has your training data | 
| outputDataConfig | The Amazon S3 path that contains your training and validation metrics | 
| distillationConfig | Inputs required for distillation job | 
| customModelKmsKeyId | To encrypt the custom model | 
| clientRequestToken | Token to prevent the request from completing more than once | 

The following fields are optional:



| Field | Description | 
| --- | --- | 
| customizationType | Set to DISTILLATION by default for distillation jobs | 
| validationDataConfig | List of validation data Amazon S3 paths | 
| jobTags | To associate tags with the job | 
| customModelTags | To associate tags with the resulting custom model | 
| vpcConfig | VPC to protect your training data and distillation job | 

To prevent the request from completing more than once, include a `clientRequestToken`.

You can include the following optional fields for extra configurations.
+ `jobTags` and `customModelTags` – Associate [tags](tagging.md) with the customization job or the resulting custom model, respectively.
+ `vpcConfig` – Include the configuration for a [virtual private cloud (VPC) to protect your training data and customization job](custom-model-job-access-security.md#vpc-model-customization).

The following is an example snippet from [CreateModelCustomizationJob](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_CreateModelCustomizationJob.html) API. This example uses the prompt-response pairs in the invocation log as the input data source and specifies the filter for selecting prompt-response pairs. 

```
"trainingDataConfig": {
    "invocationLogsConfig": {
        "usePromptResponse": true,
        "invocationLogSource": {
            "s3Uri": "string"
        },
        "requestMetadataFilters": {
            "equals": {
                "priority": "High"
            }
        }
    }
}
```

**Response**

The response returns a `jobArn` of the model distillation job.
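Putting the fields above together, the following is a minimal Python sketch of the request. The ARNs, bucket names, and model identifiers are placeholders, and the exact nesting of the distillation settings (shown here under a `customizationConfig` key) is an assumption to confirm in the [CreateModelCustomizationJob](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_CreateModelCustomizationJob.html) API reference.

```python
# Placeholder ARNs, bucket names, and model identifiers -- replace with your own.
request = {
    "jobName": "my-distillation-job",
    "customModelName": "my-distilled-model",
    "roleArn": "arn:aws:iam::123456789012:role/MyBedrockRole",
    "baseModelIdentifier": "arn:aws:bedrock:us-east-1::foundation-model/student-model-id",
    "customizationType": "DISTILLATION",
    "trainingDataConfig": {
        # Use prompt-response pairs from invocation logs as the data source.
        "invocationLogsConfig": {
            "usePromptResponse": True,
            "invocationLogSource": {"s3Uri": "s3://my-invocation-logs/"},
            "requestMetadataFilters": {"equals": {"priority": "High"}},
        }
    },
    "outputDataConfig": {"s3Uri": "s3://my-output-bucket/distillation/"},
    "customizationConfig": {  # assumed wrapper for the distillationConfig inputs
        "distillationConfig": {
            "teacherModelConfig": {
                "teacherModelIdentifier": "arn:aws:bedrock:us-east-1::foundation-model/teacher-model-id",
                "maxResponseLengthForInference": 1000,
            }
        }
    },
}

# With the AWS SDK for Python (boto3):
# import boto3
# bedrock = boto3.client("bedrock", region_name="us-east-1")
# response = bedrock.create_model_customization_job(**request)
# The response includes the jobArn of the distillation job.
```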

------

## Next steps
<a name="submit-model-distillation-next-steps"></a>
+ [Monitor your distillation job](model-customization-monitor.md). For more information about setting up on-demand inference, see [Set up inference for a custom model](model-customization-use.md).

# Clone a distillation job
<a name="clone-model-distillation-job"></a>

You can use the Amazon Bedrock console to clone a distillation job. Cloning lets you run multiple jobs with similar settings, or retry a job that you stopped or that encountered an issue. The new job inherits all settings except the **Service access** configuration, VPC settings, and tags.

1. Sign in to the AWS Management Console with an IAM identity that has permissions to use the Amazon Bedrock console. Then, open the Amazon Bedrock console at [https://console.aws.amazon.com/bedrock](https://console.aws.amazon.com/bedrock).

1. From the left navigation pane, choose **Custom models** under **Tune**.

1. Choose the distillation job that you want to clone, then choose **Clone job**.

1. If needed, adjust your job's settings. 

1. Choose **Create distillation job** to start the new job.

# Import a pre-trained model into Amazon Bedrock
<a name="import-pre-trained-model"></a>

 You can create a custom model in Amazon Bedrock by importing a model you customized in other environments, such as Amazon SageMaker AI. For example, you might have a model that you have created in Amazon SageMaker AI that has proprietary model weights. You can now import that model into Amazon Bedrock and then leverage Amazon Bedrock features to make inference calls to the model. 

Amazon Bedrock provides two methods for importing custom models. You can import open-source foundation models that you customized, or you can import Amazon Nova models that you fine-tuned in SageMaker AI.
+ **Import open-source models with custom model import jobs** — Choose this method to import customized open-source foundation models, such as Mistral AI or Llama models. This method supports on-demand inference and multiple customization patterns.
+ **Import a customized Amazon Nova model by creating a custom model in Amazon Bedrock** — Choose this method to import an Amazon Nova model that you fine-tuned in SageMaker AI. It requires specialized SageMaker AI training recipes.

**Topics**
+ [Use Custom model import to import a customized open-source model into Amazon Bedrock](model-customization-import-model.md)
+ [Import a SageMaker AI-trained Amazon Nova model](import-with-create-custom-model.md)

# Use Custom model import to import a customized open-source model into Amazon Bedrock
<a name="model-customization-import-model"></a>

You can create a custom model in Amazon Bedrock by using the Amazon Bedrock Custom Model Import feature to import Foundation Models that you have customized in other environments, such as Amazon SageMaker AI. For example, you might have a model that you have created in Amazon SageMaker AI that has proprietary model weights. You can now import that model into Amazon Bedrock and then leverage Amazon Bedrock features to make inference calls to the model. 

You can use a model that you import with on-demand throughput. Use the [InvokeModel](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModel.html) or [InvokeModelWithResponseStream](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModelWithResponseStream.html) operations to make inference calls to the model. For more information, see [Submit a single prompt with InvokeModel](inference-invoke.md).

Custom model import is supported in the following regions:
+ eu-central-1
+ us-east-1
+ us-east-2
+ us-west-2

**Note**  
Make sure that your import and use of the models in Amazon Bedrock complies with the terms or licenses applicable to the models.  
You can't use Custom Model Import with the following Amazon Bedrock features:  
+ Batch inference
+ CloudFormation

With Custom Model Import you can create a custom model that supports the following patterns.
+ **Fine-tuned model** — You can customize the model weights using proprietary data, but retain the configuration of the base model. 
+ **Adaptation** — You can customize the model to your domain for use cases where the base model doesn't generalize well. Domain adaptation modifies a model to generalize to a target domain and handle discrepancies across domains, such as a financial company wanting a model that generalizes well on pricing data. Another example is language adaptation, such as customizing a model to generate responses in Portuguese or Tamil. Most often, this involves changes to the vocabulary of the model that you are using.
+ **Pretrained from scratch** — In addition to customizing the weights and vocabulary of the model, you can also change model configuration parameters such as the number of attention heads, hidden layers, or context length.

For information regarding pricing for custom model import, select the Custom Model Import tab in the Model pricing details section of [Amazon Bedrock pricing](https://aws.amazon.com/bedrock/pricing).

**Topics**
+ [Supported architectures](#model-customization-import-model-architecture)
+ [Import a model source from Amazon S3](#model-customization-import-model-source)
+ [Prerequisites for importing custom model](custom-model-import-prereq.md)
+ [Submit a model import job](model-customization-import-model-job.md)
+ [Invoke your imported model](invoke-imported-model.md)
+ [Handling Custom Chat Templates and Tokenizers](custom-chat-templates-tokenizers.md)
+ [Calculate the cost of running a custom model](import-model-calculate-cost.md)
+ [Code samples for custom model import](custom-model-import-code-samples.md)

## Supported architectures
<a name="model-customization-import-model-architecture"></a>

The model you import must be in one of the following architectures.
+ **Mistral** — A decoder-only Transformer based architecture with Sliding Window Attention (SWA) and options for Grouped Query Attention (GQA). For more information, see [https://huggingface.co/docs/transformers/en/model_doc/mistral](https://huggingface.co/docs/transformers/en/model_doc/mistral) in the Hugging Face documentation. 
+ **Mixtral** — A decoder-only transformer model with sparse Mixture of Experts (MoE) models. For more information, see [Mixtral](https://huggingface.co/docs/transformers/en/model_doc/mixtral) in the Hugging Face documentation. 
+ **Flan** — An enhanced version of the T5 architecture, an encoder-decoder transformer model. For more information, see [https://huggingface.co/docs/transformers/model_doc/flan-t5](https://huggingface.co/docs/transformers/model_doc/flan-t5) in the Hugging Face documentation. 
+ **Llama 2, Llama 3, Llama 3.1, Llama 3.2, Llama 3.3, and Mllama** — An improved version of Llama with Grouped Query Attention (GQA). For more information, see [https://huggingface.co/blog/llama2](https://huggingface.co/blog/llama2), [https://huggingface.co/blog/llama3](https://huggingface.co/blog/llama3), [https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct#model-information](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct#model-information), [https://huggingface.co/meta-llama/Llama-3.2-11B-Vision-Instruct](https://huggingface.co/meta-llama/Llama-3.2-11B-Vision-Instruct), [https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct), and [https://huggingface.co/docs/transformers/main/en/model_doc/mllama](https://huggingface.co/docs/transformers/main/en/model_doc/mllama) in the Hugging Face documentation.
+ **GPTBigCode** — An optimized version of GPT-2 with Multi-Query Attention (MQA). For more information, see [GPTBigCode](https://huggingface.co/docs/transformers/en/model_doc/gpt_bigcode) in the Hugging Face documentation.
+ **Qwen2, Qwen2.5, Qwen2-VL, Qwen2.5-VL, and Qwen3** — An LLM family with comprehensive multimodal perception and high-speed vision encoding. Any model that uses the Qwen2, Qwen2-VL, or Qwen2.5-VL architecture can be imported. For the Qwen3 architecture, only `Qwen3ForCausalLM` and `Qwen3MoeForCausalLM` are supported, and Converse is not supported for Qwen3 models. For more information, see [Qwen2](https://huggingface.co/collections/Qwen/qwen2-6659360b33528ced941e557f), [Qwen2.5](https://huggingface.co/collections/Qwen/qwen25-66e81a666513e518adb90d9e), [Qwen2-VL](https://huggingface.co/collections/Qwen/qwen2-vl-66cee7455501d7126940800d), [Qwen2.5-VL](https://huggingface.co/collections/Qwen/qwen25-vl-6795ffac22b334a837c0f9a5), and [Qwen3](https://huggingface.co/docs/transformers/en/model_doc/qwen3) in the Hugging Face documentation.
+ **GPT-OSS** — The OpenAI-based GPT-OSS architecture. Amazon Bedrock supports custom models based on both the 20B and 120B variants. The GPT-OSS architecture is supported only in the US East (N. Virginia) Region.

**Note**  
The size of the imported model weights must be less than 100 GB for multimodal models and 200 GB for text models.  
The maximum positional embeddings or maximum context length supported by the model must be less than 128K.  
Amazon Bedrock supports Transformers library version 4.51.3. Ensure that you use Transformers version 4.51.3 when you fine-tune your model.  
Custom Model Import doesn't support embedding models.
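You can check the weight-size limits in the note above locally before you upload your model. The following is a minimal sketch; the helper names are hypothetical, and the limits are taken from the note.

```python
from pathlib import Path

TEXT_MODEL_LIMIT_GB = 200    # text models, from the note above
MULTIMODAL_LIMIT_GB = 100    # multimodal models, from the note above

def total_weights_gb(model_dir: str) -> float:
    """Sum the size of all .safetensors files under model_dir, in GB."""
    total_bytes = sum(p.stat().st_size for p in Path(model_dir).rglob("*.safetensors"))
    return total_bytes / 1024**3

def within_limit(model_dir: str, multimodal: bool = False) -> bool:
    """Return True if the model weights fall under the applicable size limit."""
    limit = MULTIMODAL_LIMIT_GB if multimodal else TEXT_MODEL_LIMIT_GB
    return total_weights_gb(model_dir) < limit
```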

## Import a model source from Amazon S3
<a name="model-customization-import-model-source"></a>

You import a model into Amazon Bedrock by creating a model import job in the Amazon Bedrock console or API. In the job, you specify the Amazon S3 URI for the source of the model files. During import, the job automatically detects your model's architecture.

You need to supply the model files in the Hugging Face weights format. You can create the files by using the Hugging Face Transformers library. To create model files for a Llama model, see [convert\_llama\_weights\_to\_hf.py](https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/convert_llama_weights_to_hf.py). To create the files for a Mistral AI model, see [convert\_mistral\_weights\_to\_hf.py](https://github.com/huggingface/transformers/blob/main/src/transformers/models/mistral/convert_mistral_weights_to_hf.py).

To import the model from Amazon S3, you minimally need the following files, which the Hugging Face Transformers library creates.
+ **.safetensors** — The model weights in *Safetensors* format. Safetensors is a format created by Hugging Face that stores model weights as tensors. You must store the tensors for your model in files with the extension `.safetensors`. For more information, see [Safetensors](https://huggingface.co/docs/safetensors/en/index). For information about converting model weights to Safetensors format, see [Convert weights to safetensors](https://huggingface.co/docs/safetensors/en/convert-weights).
+ **config.json** — For examples, see [LlamaConfig](https://huggingface.co/docs/transformers/model_doc/llama2#transformers.LlamaConfig) and [MistralConfig](https://huggingface.co/docs/transformers/model_doc/mistral#transformers.MistralConfig).
**Note**  
Amazon Bedrock overrides llama3 `rope_scaling` value with the following values:  
`original_max_position_embeddings=8192`
`high_freq_factor=4`
`low_freq_factor=1`
`factor=8`
+ **tokenizer\_config.json** — For an example, see [LlamaTokenizer](https://huggingface.co/docs/transformers/model_doc/llama2#transformers.LlamaTokenizer).
+ **tokenizer.json**
+ **tokenizer.model**
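Before you create the import job, you can verify that your model directory contains the minimal file set listed above. The following sketch is an assumption-level helper (the function name is hypothetical, and it treats `tokenizer.json` and `tokenizer.model` as interchangeable alternatives):

```python
from pathlib import Path

REQUIRED_FILES = ["config.json", "tokenizer_config.json"]

def check_model_dir(model_dir: str) -> list:
    """Return a list of problems found in a Hugging Face-format model directory."""
    root = Path(model_dir)
    problems = []
    if not any(root.glob("*.safetensors")):
        problems.append("no .safetensors weight files found")
    for name in REQUIRED_FILES:
        if not (root / name).exists():
            problems.append(f"missing {name}")
    # At least one tokenizer file is expected alongside tokenizer_config.json.
    if not any((root / name).exists() for name in ("tokenizer.json", "tokenizer.model")):
        problems.append("missing tokenizer.json or tokenizer.model")
    return problems
```

An empty return value means the directory has the minimal file set; otherwise each entry names a missing piece.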

### Supported tokenizers
<a name="tokenizers"></a>

Amazon Bedrock Custom Model Import supports the following tokenizers. You can use these tokenizers with any model.
+ T5Tokenizer
+ T5TokenizerFast
+ LlamaTokenizer
+ LlamaTokenizerFast
+ CodeLlamaTokenizer
+ CodeLlamaTokenizerFast
+ GPT2Tokenizer
+ GPT2TokenizerFast
+ GPTNeoXTokenizer
+ GPTNeoXTokenizerFast
+ PreTrainedTokenizer
+ PreTrainedTokenizerFast
+ Qwen2Tokenizer
+ Qwen2TokenizerFast
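You can check which tokenizer class a checkpoint uses by reading its `tokenizer_config.json`, which records the class under the `tokenizer_class` key (a Hugging Face convention). A minimal sketch, with the supported list taken from above:

```python
import json

SUPPORTED_TOKENIZERS = {
    "T5Tokenizer", "T5TokenizerFast",
    "LlamaTokenizer", "LlamaTokenizerFast",
    "CodeLlamaTokenizer", "CodeLlamaTokenizerFast",
    "GPT2Tokenizer", "GPT2TokenizerFast",
    "GPTNeoXTokenizer", "GPTNeoXTokenizerFast",
    "PreTrainedTokenizer", "PreTrainedTokenizerFast",
    "Qwen2Tokenizer", "Qwen2TokenizerFast",
}

def tokenizer_is_supported(tokenizer_config_path: str) -> bool:
    """Check the tokenizer_class recorded in tokenizer_config.json."""
    with open(tokenizer_config_path) as f:
        config = json.load(f)
    return config.get("tokenizer_class") in SUPPORTED_TOKENIZERS
```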

# Prerequisites for importing custom model
<a name="custom-model-import-prereq"></a>

Before you can start a custom model import job, you need to fulfill the following prerequisites:

1. If you are importing your model from an Amazon S3 bucket, prepare your model files in the Hugging Face weights format. For more information, see [Import a model source from Amazon S3](model-customization-import-model.md#model-customization-import-model-source).

1. If you are using cross-account Amazon S3 buckets or KMS keys, make sure to grant Amazon Bedrock access to the Amazon S3 bucket or the KMS key. For more information, see [Cross-account access to Amazon S3 bucket for custom model import jobs](cross-account-access-cmi.md).

1. (Optional) Create a custom AWS Identity and Access Management (IAM) [service role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_terms-and-concepts.html#iam-term-service-role) with the proper permissions by following the instructions at [Create a service role for importing pre-trained models](model-import-iam-role.md) to set up the role. You can skip this prerequisite if you plan to use the AWS Management Console to automatically create a service role for you.

1. (Optional) Set up extra security configurations.
   + You can encrypt input and output data, import jobs, or inference requests made to imported models. For more information see [Encryption of custom model import](https://docs.aws.amazon.com//bedrock/latest/userguide/encryption-import-model.html).
   + You can create a virtual private cloud (VPC) to protect your customization jobs. For more information, see [(Optional) Protect custom model import jobs using a VPC](vpc-custom-model-import.md).

# (Optional) Protect custom model import jobs using a VPC
<a name="vpc-custom-model-import"></a>

When you run a custom model import job, the job accesses your Amazon S3 bucket to download the input data and to upload job metrics. To control access to your data, we recommend that you use a virtual private cloud (VPC) with [Amazon VPC](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html). You can further protect your data by configuring your VPC so that your data isn't available over the internet and instead creating a VPC interface endpoint with [AWS PrivateLink](https://docs.aws.amazon.com/vpc/latest/privatelink/what-is-privatelink.html) to establish a private connection to your data. For more information about how Amazon VPC and AWS PrivateLink integrate with Amazon Bedrock, see [Protect your data using Amazon VPC and AWS PrivateLink](usingVPC.md).

Carry out the following steps to configure and use a VPC for importing your custom models.

**Topics**
+ [Set up a VPC](#create-vpc-cmi)
+ [Create an Amazon S3 VPC Endpoint](#train-vpc-s3-cmi)
+ [(Optional) Use IAM policies to restrict access to your S3 files](#train-vpc-policy-cmi)
+ [Attach VPC permissions to a custom model import role](#vpc-data-access-role-cmi)
+ [Add the VPC configuration when submitting a model import job](#vpc-config-cmi)

## Set up a VPC
<a name="create-vpc-cmi"></a>

You can use a [default VPC](https://docs.aws.amazon.com/vpc/latest/userguide/default-vpc.html) for your model import data or create a new VPC by following the guidance at [Get started with Amazon VPC](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-getting-started.html) and [Create a VPC](https://docs.aws.amazon.com/vpc/latest/userguide/create-vpc.html).

When you create your VPC, we recommend that you use the default DNS settings for your endpoint route table, so that standard Amazon S3 URLs (for example, `http://s3-aws-region.amazonaws.com/model-bucket`) resolve.

## Create an Amazon S3 VPC Endpoint
<a name="train-vpc-s3-cmi"></a>

If you configure your VPC with no internet access, you need to create an [Amazon S3 VPC endpoint](https://docs.aws.amazon.com/AmazonS3/latest/userguide/privatelink-interface-endpoints.html) to allow your model import jobs to access the S3 buckets that store your training and validation data and that will store the model artifacts.

Create the S3 VPC endpoint by following the steps at [Create a gateway endpoint for Amazon S3](https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-s3.html#create-gateway-endpoint-s3).

**Note**  
If you don't use the default DNS settings for your VPC, you need to ensure that the URLs for the locations of the data in your training jobs resolve by configuring the endpoint route tables. For information about VPC endpoint route tables, see [Routing for Gateway endpoints](https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpce-gateway.html#vpc-endpoints-routing).

## (Optional) Use IAM policies to restrict access to your S3 files
<a name="train-vpc-policy-cmi"></a>

You can use [resource-based policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_identity-vs-resource.html) to more tightly control access to your S3 files. You can use the following type of resource-based policy.
+ **Endpoint policies** – Endpoint policies restrict access through the VPC endpoint. The default endpoint policy allows full access to Amazon S3 for any user or service in your VPC. While creating or after you create the endpoint, you can optionally attach a resource-based policy to the endpoint to add restrictions, such as only allowing the endpoint to access a specific bucket or only allowing a specific IAM role to access the endpoint. For examples, see [Edit the VPC endpoint policy](https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-s3.html#edit-vpc-endpoint-policy-s3).

  The following is an example policy you can attach to your VPC endpoint to only allow it to access the bucket containing your model weights.

------
#### [ JSON ]


  ```
  {
      "Version": "2012-10-17",
      "Statement": [
          {
              "Sid": "RestrictAccessToModelWeightsBucket",
              "Effect": "Allow",
              "Principal": "*",
              "Action": [
                  "s3:GetObject",
                  "s3:ListBucket"
              ],
              "Resource": [
                  "arn:aws:s3:::model-weights-bucket",
                  "arn:aws:s3:::model-weights-bucket/*"
              ]
          }
      ]
  }
  ```

------

## Attach VPC permissions to a custom model import role
<a name="vpc-data-access-role-cmi"></a>

After you finish setting up your VPC and endpoint, you need to attach the following permissions to your [model import IAM role](model-import-iam-role.md). Modify this policy to allow access to only the VPC resources that your job needs. Replace the *subnet-ids* and *security-group-id* with the values from your VPC.

------
#### [ JSON ]


```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeNetworkInterfaces",
                "ec2:DescribeVpcs",
                "ec2:DescribeDhcpOptions",
                "ec2:DescribeSubnets",
                "ec2:DescribeSecurityGroups"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ec2:CreateNetworkInterface"
            ],
            "Resource": [
                "arn:aws:ec2:us-east-1:123456789012:network-interface/*"
            ],
            "Condition": {
                "StringEquals": {
                    "aws:RequestTag/BedrockManaged": [
                        "true"
                    ]
                },
                "ArnEquals": {
                    "aws:RequestTag/BedrockModelImportJobArn": [
                        "arn:aws:bedrock:us-east-1:123456789012:model-import-job/*"
                    ]
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": [
                "ec2:CreateNetworkInterface"
            ],
            "Resource": [
                "arn:aws:ec2:us-east-1:123456789012:subnet/subnet-id",
                "arn:aws:ec2:us-east-1:123456789012:subnet/subnet-id2",
                "arn:aws:ec2:us-east-1:123456789012:security-group/security-group-id"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "ec2:CreateNetworkInterfacePermission",
                "ec2:DeleteNetworkInterface",
                "ec2:DeleteNetworkInterfacePermission"
            ],
            "Resource": "*",
            "Condition": {
                "ArnEquals": {
                    "ec2:Subnet": [
                        "arn:aws:ec2:us-east-1:123456789012:subnet/subnet-id",
                        "arn:aws:ec2:us-east-1:123456789012:subnet/subnet-id2"
                    ],
                    "ec2:ResourceTag/BedrockModelImportJobArn": [
                        "arn:aws:bedrock:us-east-1:123456789012:model-import-job/*"
                    ]
                },
                "StringEquals": {
                    "ec2:ResourceTag/BedrockManaged": "true"
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": [
                "ec2:CreateTags"
            ],
            "Resource": "arn:aws:ec2:us-east-1:123456789012:network-interface/*",
            "Condition": {
                "StringEquals": {
                    "ec2:CreateAction": [
                        "CreateNetworkInterface"
                    ]
                },
                "ForAllValues:StringEquals": {
                    "aws:TagKeys": [
                        "BedrockManaged",
                        "BedrockModelImportJobArn"
                    ]
                }
            }
        }
    ]
}
```

------

## Add the VPC configuration when submitting a model import job
<a name="vpc-config-cmi"></a>

After you configure the VPC and the required roles and permissions as described in the previous sections, you can create a model import job that uses this VPC.

When you specify the VPC subnets and security groups for a job, Amazon Bedrock creates *elastic network interfaces* (ENIs) that are associated with your security groups in one of the subnets. ENIs allow the Amazon Bedrock job to connect to resources in your VPC. For information about ENIs, see [Elastic Network Interfaces](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_ElasticNetworkInterfaces.html) in the *Amazon VPC User Guide*. Amazon Bedrock tags ENIs that it creates with `BedrockManaged` and `BedrockModelImportJobArn` tags. 

We recommend that you provide at least one subnet in each Availability Zone.

You can use security groups to establish rules for controlling Amazon Bedrock access to your VPC resources.

You can configure the VPC either in the Amazon Bedrock console or through the API. Choose the tab for your preferred method, and then follow the steps:

------
#### [ Console ]

For the Amazon Bedrock console, you specify VPC subnets and security groups in the optional **VPC settings** section when you create the model import job. For more information about configuring model import jobs, see [Submit a model import job](model-customization-import-model-job.md).

------
#### [ API ]

When you submit a [CreateModelImportJob](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_CreateModelImportJob.html) request, you can include a `vpcConfig` request parameter to specify the VPC subnets and security groups to use, as in the following example.

```
"vpcConfig": {
    "securityGroupIds": [
        "sg-0123456789abcdef0"
    ],
    "subnetIds": [
        "subnet-0123456789abcdef0",
        "subnet-0123456789abcdef1",
        "subnet-0123456789abcdef2"
    ]
}
```
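The same configuration can be assembled programmatically. The following minimal Python sketch builds the VPC configuration as a dict; the subnet and security-group IDs are placeholders, and the field names mirror the `vpcConfig` parameter described in this section:

```python
# Placeholder subnet and security-group IDs -- replace with values from your VPC.
vpc_config = {
    "subnetIds": [
        "subnet-0123456789abcdef0",
        "subnet-0123456789abcdef1",
    ],
    "securityGroupIds": ["sg-0123456789abcdef0"],
}

# The dict is passed as the vpcConfig parameter of the import job request, e.g.:
# import boto3
# bedrock = boto3.client("bedrock")
# bedrock.create_model_import_job(..., vpcConfig=vpc_config)
```

Providing at least two subnets, as recommended above, keeps the job resilient to an Availability Zone issue.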

------

# Submit a model import job
<a name="model-customization-import-model-job"></a>

You import a model into Amazon Bedrock by submitting a model import job in the Amazon Bedrock console, or by using the API, the AWS CLI, or an AWS SDK. In the job, you specify the Amazon S3 URI for the source of the model files. Alternatively, if you created the model in Amazon SageMaker AI, you can specify the SageMaker AI model. During import, the job automatically detects your model's architecture and validates that it's compatible with Amazon Bedrock. The model import job can take several minutes. 

The following procedure shows you how to create a custom model by importing a model that you have already customized. Select the tab corresponding to your method of choice and follow the steps. 

------
#### [ Console ]

To submit a model import job in the console, complete the following steps.

1. If you are importing your model files from Amazon S3, convert the model to the Hugging Face format. 

   1. If your model is a Mistral AI model, use [convert\_mistral\_weights\_to\_hf.py](https://github.com/huggingface/transformers/blob/main/src/transformers/models/mistral/convert_mistral_weights_to_hf.py). 

   1. If your model is a Llama model, use [convert\_llama\_weights\_to\_hf.py](https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/convert_llama_weights_to_hf.py). 

   1. Upload the model files to an Amazon S3 bucket in your AWS account. For more information, see [Upload an object to your bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/uploading-an-object-bucket.html).

   1. If you are using cross-account Amazon S3 or KMS keys to import your custom model, give Amazon Bedrock access to your AWS account's Amazon S3 or KMS key. For more information, see [Cross-account access to Amazon S3 bucket for custom model import jobs](cross-account-access-cmi.md).

1. In the Amazon Bedrock console, choose **Imported models** under **Foundation models** from the left navigation pane.

1. Choose the **Models** tab, and then choose **Import model** to open the **Import model** page.

1. In the **Model details** section, do the following: 

   1. In **Model name** enter a name for the model.

   1. (Optional) To associate [tags](tagging.md) with the model, expand the **Tags** section and select **Add new tag**.

1. In the **Import job name** section, do the following: 

   1. In **Job name** enter a name for the model import job.

   1. (Optional) To associate [tags](tagging.md) with the custom model, expand the **Tags** section and select **Add new tag**.

1. In **Model import settings**, select the import options you want to use.
   + Select **Amazon S3 bucket** or **Amazon SageMaker AI model** to specify the import source.
   + If you are importing your model files from an Amazon S3 bucket, enter the Amazon S3 location in **S3 location**. Optionally, you can choose **Browse S3** to choose the file location. 
   + If you are importing your model from Amazon SageMaker AI, choose **Amazon SageMaker AI model** and then choose the SageMaker AI model that you want to import in **SageMaker AI models**. 

1. (Optional) In **VPC settings**, choose a VPC configuration to access your Amazon S3 data source located in your VPC. You can create and manage a VPC, subnets, and security groups in Amazon VPC. For more information, see [(Optional) Protect custom model import jobs using a VPC](vpc-custom-model-import.md).

1. In the **Encryption** section, your data is encrypted by default with a key that AWS owns and manages for you. To use a different key, select **Customize encryption settings (advanced)** and choose your key.

1. In the **Service access** section, select one of the following: 
   + **Create and use a new service role** – Enter a name for the service role.
   + **Use an existing service role** – Select a service role from the drop-down list. To see the permissions that your existing service role needs, choose **View permission details**.

     For more information on setting up a service role with the appropriate permissions, see [Create a service role for importing pre-trained models](model-import-iam-role.md).
**Note**  
If you are using cross-account Amazon S3 or KMS keys, edit the service role policy and replace the account ID specified for `aws:ResourceAccount` with the AWS account ID of the bucket owner. 

1. Choose **Import**.

1. On the **Custom models** page, choose **Imported**.

1. In the **Jobs** section, check the status of the import job. The model name you chose identifies the model import job. The job is complete if the value of **Status** for the model is **Complete**. 

1. Get the Amazon Resource Name (ARN) for your model by doing the following.

   1. On the **Imported models** page, choose the **Models** tab.

   1. Copy the ARN for the model that you want to use from the **ARN** column.

1. Use your model for inference calls. For more information, see [Submit a single prompt with InvokeModel](inference-invoke.md). You can use the model with on-demand throughput. 

   You can also use your model in the Amazon Bedrock text [playground](playgrounds.md).

------
#### [ API ]

**Request**

Send a [CreateModelImportJob](https://docs.aws.amazon.com//bedrock/latest/APIReference/API_CreateModelImportJob.html) (see link for request and response format and field details) request with an [Amazon Bedrock control plane endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#br-cp) to submit a custom model import job. Minimally, you must provide the following fields.
+ `roleArn` – The ARN of the service role with permissions to import models. Amazon Bedrock can automatically create a role with the appropriate permissions if you use the console, or you can create a custom role by following the steps at [Create a service role for importing pre-trained models](model-import-iam-role.md).
**Note**  
If you include a `vpcConfig` field, make sure that the role has the proper permissions to access the VPC. For an example, see [Attach VPC permissions to a custom model import role.](vpc-custom-model-import.md#vpc-data-access-role-cmi)
+ `importedModelName` – The name to give the newly imported model.
+ `jobName` – The name to give the import job.
+ `modelDataSource` – The data source for the imported model.

To prevent the request from completing more than once, include a `clientRequestToken`.

You can include the following optional fields for extra configurations.
+ `jobTags` and/or `importedModelTags` – Associate [tags](tagging.md) with the import job or the imported model.
+ `importedModelKmsKeyId` – Include a KMS key to encrypt your imported model. For more information, see [Encryption of custom model import](https://docs.aws.amazon.com//bedrock/latest/userguide/encryption-import-model.html).
+ `vpcConfig` – Include a VPC configuration to protect your import job. For more information, see [(Optional) Protect custom model import jobs using a VPC](vpc-custom-model-import.md).
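Putting the required fields together, a minimal request sketch might look like the following. The job name, model name, role ARN, and S3 URI are hypothetical placeholders; substitute your own values.

```python
import json
import uuid

# Hypothetical values for illustration; substitute your own role ARN,
# model name, and Amazon S3 location.
request = {
    "jobName": "my-import-job",
    "importedModelName": "my-imported-model",
    "roleArn": "arn:aws:iam::111122223333:role/MyModelImportRole",
    "modelDataSource": {
        "s3DataSource": {
            "s3Uri": "s3://amzn-s3-demo-bucket/model-files/"
        }
    },
    # An idempotency token prevents the request from completing more than once.
    "clientRequestToken": str(uuid.uuid4()),
}

# With a boto3 Bedrock client, you would then submit the job with:
# bedrock = boto3.client("bedrock", region_name="us-east-1")
# response = bedrock.create_model_import_job(**request)
print(json.dumps({k: request[k] for k in ("jobName", "importedModelName")}, indent=2))
```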

**Response**

The response returns a `jobArn` for the import job that you use to identify the import job in other operations.

The import job can take a while to complete. You can check the current status by calling the [GetModelImportJob](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_GetModelImportJob.html) operation and checking the `Status` field in the response. You can list the current import jobs with the [ListModelImportJobs](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_ListModelImportJobs.html) operation. 
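The polling described above can be sketched as a small helper. The `Completed` and `Failed` terminal status values and the commented-out boto3 wiring are assumptions to verify against the API reference:

```python
import time

# Assumed terminal job statuses; check the GetModelImportJob API reference.
TERMINAL_STATUSES = {"Completed", "Failed"}

def wait_for_import_job(get_status, poll_seconds=30, max_polls=120):
    """Poll a status callable until the job reaches a terminal state."""
    for _ in range(max_polls):
        status = get_status()
        if status in TERMINAL_STATUSES:
            return status
        time.sleep(poll_seconds)
    raise TimeoutError("Model import job did not finish in time")

# With boto3, you might wire it up roughly as:
# bedrock = boto3.client("bedrock")
# final = wait_for_import_job(
#     lambda: bedrock.get_model_import_job(jobIdentifier=job_arn)["status"])
```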

To get a list of models that you have imported, call [ListImportedModels](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_ListImportedModels.html). To get information about a specific imported model, call [GetImportedModel](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_GetImportedModel.html).

To delete an imported model, call [DeleteImportedModel](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_DeleteImportedModel.html).

------

# Invoke your imported model
<a name="invoke-imported-model"></a>

The model import job can take several minutes to import your model after you send a [CreateModelImportJob](https://docs.aws.amazon.com//bedrock/latest/APIReference/API_CreateModelImportJob.html) request. You can check the status of your import job in the console or by calling the [GetModelImportJob](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_GetModelImportJob.html) operation and checking the `Status` field in the response. The import job is complete if the **Status** for the model is **Complete**. 

After your imported model is available in Amazon Bedrock, you can use the model with on-demand throughput by sending [InvokeModel](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModel.html) or [InvokeModelWithResponseStream](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModelWithResponseStream.html) requests to make inference calls to the model. For more information, see [Submit a single prompt with InvokeModel](inference-invoke.md).

To interface with your imported model using the messages format, you can call the [Converse](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_Converse.html) or [ConverseStream](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_ConverseStream.html) operations. For more information, see [Using the Converse API](conversation-inference-call.md).

**Note**  
The Converse API is not supported for Qwen2.5, Qwen2-VL, Qwen2.5-VL, and GPT-OSS models.

## Enhanced API Support: Multiple API Formats
<a name="enhanced-api-support"></a>

Starting November 17, 2025, Amazon Bedrock Custom Model Import supports comprehensive OpenAI-compatible API formats, providing flexibility in how you integrate and deploy your custom models. All models imported after November 11, 2025, will automatically benefit from these enhanced capabilities with no additional configuration required.

Custom Model Import now supports three API formats:
+ **BedrockCompletion (Text)** - Compatible with current Bedrock workflows
+ **OpenAICompletion (Text)** - OpenAI Completions Schema compatibility
+ **OpenAIChatCompletion (Text and Images)** - Full conversational Schema compatibility

These enhanced capabilities include structured outputs for enforcing JSON schemas and patterns, enhanced vision support with multi-image processing, log probabilities for model confidence insights, and tool calling capabilities for GPT-OSS models.

For detailed API reference documentation, see the official OpenAI documentation:
+ Completion: [OpenAI Completions API](https://platform.openai.com/docs/api-reference/completions)
+ ChatCompletion: [OpenAI Chat API](https://platform.openai.com/docs/api-reference/chat)

### API Format Examples
<a name="api-format-examples"></a>

The following examples demonstrate how to use each of the three supported API formats with your imported models.

------
#### [ BedrockCompletion ]

**BedrockCompletion** format is compatible with current Bedrock workflows and supports text-based inference requests.

Example request:

```
import json
import boto3

client = boto3.client('bedrock-runtime', region_name='us-east-1')

payload = {
    "prompt": "How is the rainbow formed?",
    "max_gen_len": 100,
    "temperature": 0.5
}

response = client.invoke_model(
    modelId='your-model-arn',
    body=json.dumps(payload),
    accept='application/json',
    contentType='application/json'
)

response_body = json.loads(response['body'].read())
```

Example response:

```
{
    "generation": " – A scientific explanation\nA rainbow is a beautiful natural phenomenon that occurs when sunlight passes through water droplets in the air. It is formed through a process called refraction, which is the bending of light as it passes from one medium to another.\nHere's a step-by-step explanation of how a rainbow is formed:\n1. Sunlight enters the Earth's atmosphere: The first step in forming a rainbow is for sunlight to enter the Earth's atmosphere. This sunlight is made up of a spectrum of",
    "prompt_token_count": 7,
    "generation_token_count": 100,
    "stop_reason": "length",
    "logprobs": null
}
```

BedrockCompletion supports structured outputs using the `response_format` parameter with `json_object` and `json_schema` types.

------
#### [ OpenAICompletion ]

**OpenAICompletion** format provides OpenAI Completions Schema compatibility. To use this format, include the `max_tokens` parameter instead of `max_gen_len`.

Example request:

```
import json
import boto3

client = boto3.client('bedrock-runtime', region_name='us-east-1')

payload = {
    "prompt": "How is the rainbow formed?",
    "max_tokens": 100,
    "temperature": 0.5
}

response = client.invoke_model(
    modelId='your-model-arn',
    body=json.dumps(payload),
    accept='application/json',
    contentType='application/json'
)

response_body = json.loads(response['body'].read())
```

Example response:

```
{
    "id": "cmpl-b09d5810bd64428f8a853be71c31f912",
    "object": "text_completion",
    "created": 1763166682,
    "choices": [
        {
            "index": 0,
            "text": " The formation of a rainbow is a complex process that involves the interaction of sunlight with water droplets in the air. Here's a simplified explanation: 1. Sunlight enters the Earth's atmosphere and is refracted, or bent, as it passes through the air. 2. When sunlight encounters a water droplet, such as a cloud, mist, or fog, it is refracted again and split into its individual colors, a process known as dispersion. 3. The refracted and",
            "finish_reason": "length"
        }
    ],
    "usage": {
        "prompt_tokens": 7,
        "total_tokens": 107,
        "completion_tokens": 100
    }
}
```

OpenAICompletion supports full structured outputs capabilities including `json`, `regex`, `choice`, and `grammar` constraints using the `structured_outputs` parameter.

------
#### [ OpenAIChatCompletion ]

**OpenAIChatCompletion** format provides full conversational Schema compatibility and supports both text and image inputs.

Example request:

```
import json
import boto3

client = boto3.client('bedrock-runtime', region_name='us-east-1')

payload = {
    "messages": [
        {
            "role": "user",
            "content": "How is the rainbow formed?"
        }
    ],
    "max_tokens": 100,
    "temperature": 0.5
}

response = client.invoke_model(
    modelId='your-model-arn',
    body=json.dumps(payload),
    accept='application/json',
    contentType='application/json'
)

response_body = json.loads(response['body'].read())
```

Example response:

```
{
    "id": "chatcmpl-1d84ce1d3d61418e8c6d1973f87173db",
    "object": "chat.completion",
    "created": 1763166683,
    "choices": [
        {
            "index": 0,
            "message": {
                "role": "assistant",
                "content": "A rainbow is a beautiful natural phenomenon that occurs when sunlight passes through water droplets in the air. The process of forming a rainbow involves several steps:\n\n1. **Sunlight**: The first requirement for a rainbow is sunlight. The sun should be shining brightly, but not directly overhead.\n2. **Water droplets**: The second requirement is water droplets in the air..."
            },
            "finish_reason": "length"
        }
    ],
    "usage": {
        "prompt_tokens": 41,
        "completion_tokens": 100,
        "total_tokens": 141
    }
}
```

OpenAIChatCompletion supports structured outputs using both `response_format` and `structured_outputs` parameters. For vision capabilities, include images in the content array with base64-encoded image data.

**Note**  
To use ChatCompletion format, the chat template needs to be part of the `tokenizer_config.json`. Custom Model Import will not apply any default chat templates to the request.

------

# Advanced API features for imported models
<a name="custom-model-import-advanced-features"></a>

This page provides detailed examples of advanced features available for models imported after November 11, 2025. These capabilities include structured outputs for controlled generation, enhanced vision support for multi-image processing, log probabilities for confidence insights, and tool calling for GPT-OSS models.

## Structured Outputs
<a name="structured-outputs"></a>

Structured outputs enable controlled generation following specific formats, schemas, or patterns. This feature ensures that the model's response adheres to predefined constraints, making it ideal for applications requiring consistent data formats, API integrations, or automated processing pipelines.

Structured outputs on Custom Model Import are supported via two parameters:
+ `response_format` - Supports `json_object` and `json_schema` types
+ `structured_outputs` - Supports `json`, `regex`, `choice`, and `grammar` types

**Note**  
When using structured outputs on Custom Model Import, customers should expect performance trade-offs due to constraint validation during generation. Simple constraints like `choice` and `json_object` have minimal impact, while complex constraints like `json_schema` and `grammar` can significantly increase latency and reduce throughput. For optimal performance, use simpler constraint types when possible and keep schemas flat rather than deeply nested.

The following examples demonstrate structured outputs support across different API formats. The Pydantic model definition is:

```
from pydantic import BaseModel
from enum import Enum

class CarType(str, Enum):
    sedan = "sedan"
    suv = "SUV"
    truck = "Truck"
    coupe = "Coupe"

class CarDescription(BaseModel):
    brand: str
    model: str
    car_type: CarType
```

------
#### [ BedrockCompletion ]

BedrockCompletion supports structured outputs using the `response_format` parameter with `json_object` and `json_schema` types only.

**Example: JSON Schema**

```
payload = {
    "prompt": "Generate a JSON with the brand, model and car_type of the most iconic car from the 90's",
    "response_format": {
        "type": "json_schema",
        "json_schema": CarDescription.model_json_schema()
    }
}

response = client.invoke_model(
    modelId='your-model-arn',
    body=json.dumps(payload),
    accept='application/json',
    contentType='application/json'
)

response_body = json.loads(response['body'].read())
```

Example response:

```
{
    "generation": "{\n    \"brand\": \"Ferrari\",\n    \"model\": \"F40\",\n    \"car_type\": \"SUV\"\n  }",
    "prompt_token_count": 22,
    "generation_token_count": 30,
    "stop_reason": "stop",
    "logprobs": null
}
```

------
#### [ OpenAICompletion ]

OpenAICompletion supports both `response_format` (`json_object`, `json_schema`) and `structured_outputs` (`json`, `regex`, `choice`, `grammar`) parameters. Use `max_tokens` instead of `max_gen_len` to route requests to OpenAICompletion.

**Example: Structured Outputs - Choice**

```
payload = {
    "prompt": "Classify the sentiment of this sentence. Amazon Bedrock CMI is Amazing!",
    "max_tokens": 10,
    "structured_outputs": {
        "choice": ["positive", "negative"]
    }
}

response = client.invoke_model(
    modelId='your-model-arn',
    body=json.dumps(payload),
    accept='application/json',
    contentType='application/json'
)

response_body = json.loads(response['body'].read())
```

Example response:

```
{
    "id": "cmpl-01f94c4652d24870bbb4d5418a01c384",
    "object": "text_completion",
    "choices": [
        {
            "index": 0,
            "text": "positive",
            "finish_reason": "stop"
        }
    ],
    "usage": {
        "prompt_tokens": 17,
        "completion_tokens": 4
    }
}
```

------
#### [ OpenAIChatCompletion ]

OpenAIChatCompletion supports both `response_format` (`json_object`, `json_schema`) and `structured_outputs` (`json`, `regex`, `choice`, `grammar`) parameters.

**Example: Response Format - JSON Schema**

```
payload = {
    "messages": [
        {"role": "user", "content": "Generate a JSON with the brand, model and car_type of the most iconic car from the 90's"}
    ],
    "max_tokens": 100,
    "response_format": {
        "type": "json_schema",
        "json_schema": {
            "name": "car-description",
            "schema": CarDescription.model_json_schema()
        }
    }
}

response = client.invoke_model(
    modelId='your-model-arn',
    body=json.dumps(payload),
    accept='application/json',
    contentType='application/json'
)

response_body = json.loads(response['body'].read())
```

Example response:

```
{
    "id": "chatcmpl-cae5a43b0a924b8eb434510cbf978a19",
    "object": "chat.completion",
    "choices": [
        {
            "index": 0,
            "message": {
                "role": "assistant",
                "content": "{\"brand\": \"Dodge\", \"model\": \"Viper\", \"car_type\": \"Coupe\"}"
            },
            "finish_reason": "stop"
        }
    ],
    "usage": {
        "prompt_tokens": 56,
        "completion_tokens": 23
    }
}
```

------

## Vision Support
<a name="vision-support"></a>

Vision capabilities enable processing of images alongside text inputs, with enhanced multi-image support for complex visual analysis tasks. Custom Model Import now supports up to 3 images per request, up from the previous single-image limit.

**Supported API:** OpenAIChatCompletion only. All models imported after November 11, 2025 will default to this API for vision capabilities.

**Image Requirements:**
+ Base64 encoding required - Image URLs will cause request failures
+ Maximum 3 images per request
+ High resolution images significantly increase processing time and memory usage

**Warning**  
High resolution images significantly increase processing time, memory usage, and may cause timeout errors. Multiple high-resolution images compound performance impact exponentially. For optimal performance, resize images appropriately and use lower detail levels when possible.

------
#### [ OpenAIChatCompletion ]

**Example: Multi-Image Processing**

```
import json
import boto3
import base64

client = boto3.client('bedrock-runtime', region_name='us-east-1')

# Load and encode images
with open('/path/to/car_image_1.jpg', 'rb') as f:
    image_data_1 = base64.b64encode(f.read()).decode('utf-8')

with open('/path/to/car_image_2.jpg', 'rb') as f:
    image_data_2 = base64.b64encode(f.read()).decode('utf-8')

payload = {
    "messages": [
        {
            "role": "system",
            "content": "You are a helpful assistant that can analyze images."
        },
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "Spot the difference between the two images?"
                },
                {
                    "type": "image_url",
                    "image_url": {
                        "url": f"data:image/jpeg;base64,{image_data_1}"
                    }
                },
                {
                    "type": "image_url",
                    "image_url": {
                        "url": f"data:image/jpeg;base64,{image_data_2}"
                    }
                }
            ]
        }
    ],
    "max_tokens": 300,
    "temperature": 0.5
}

response = client.invoke_model(
    modelId='your-model-arn',
    body=json.dumps(payload),
    accept='application/json',
    contentType='application/json'
)

response_body = json.loads(response['body'].read())
```

Example response:

```
{
    "id": "chatcmpl-ccae8a67e62f4014a9ffcbedfff96f44",
    "object": "chat.completion",
    "created": 1763167018,
    "model": "667387627229-g6vkuhd609s4",
    "choices": [
        {
            "index": 0,
            "message": {
                "role": "assistant",
                "content": "There are no differences between the two images provided. They appear to be identical.",
                "refusal": null,
                "annotations": null,
                "audio": null,
                "function_call": null,
                "tool_calls": [],
                "reasoning_content": null
            },
            "logprobs": null,
            "finish_reason": "stop",
            "stop_reason": null,
            "token_ids": null
        }
    ],
    "service_tier": null,
    "system_fingerprint": null,
    "usage": {
        "prompt_tokens": 2795,
        "total_tokens": 2812,
        "completion_tokens": 17,
        "prompt_tokens_details": null
    },
    "prompt_logprobs": null,
    "prompt_token_ids": null,
    "kv_transfer_params": null
}
```

------

## Log Probabilities
<a name="log-probabilities"></a>

Log probabilities represent the likelihood of each token in a sequence, calculated as log(p) where p is the probability of a token at any position given its previous token in the context. Since log probs are additive, sequence probability equals the sum of individual token log probs, making them useful for ranking generations by average per-token scores. Custom Model Import will always return the raw logprob values for requested tokens.

Key applications include classification tasks where log probs enable custom confidence thresholds, retrieval Q&A systems that use confidence scores to reduce hallucinations, autocomplete suggestions based on token likelihood, and perplexity calculations for comparing model performance across prompts. Log probs also provide token-level analysis capabilities, allowing developers to examine alternative tokens the model considered.
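For example, the additive property and the derived metrics can be computed directly from per-token values. The numbers below are illustrative, not from a real model:

```python
import math

# Illustrative per-token log probabilities for two candidate generations
candidates = {
    "candidate_a": [-0.08, -0.20, -0.10],
    "candidate_b": [-0.50, -1.20, -0.90],
}

def sequence_logprob(token_logprobs):
    # log P(sequence) = sum of per-token log probs
    return sum(token_logprobs)

def perplexity(token_logprobs):
    # perplexity = exp(-average log prob per token); lower is more confident
    return math.exp(-sequence_logprob(token_logprobs) / len(token_logprobs))

# Rank candidates by average per-token log prob (highest first)
ranked = sorted(candidates,
                key=lambda c: sequence_logprob(candidates[c]) / len(candidates[c]),
                reverse=True)
```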

**Note**  
Logprobs are not cached. For a request requiring prompt logprobs, the system ignores the prefix cache and recomputes the prefill of the full prompt to generate the logprobs. This introduces a performance tradeoff when using logprobs.

Log probability support varies by API format:
+ BedrockCompletion - Output tokens only
+ OpenAICompletion - Prompt and output tokens
+ OpenAIChatCompletion - Prompt and output tokens

------
#### [ BedrockCompletion ]

BedrockCompletion supports only output token logprobs. It returns the top 1 logprob for each output token.

```
payload = {
    "prompt": "How is the rainbow formed?",
    "max_gen_len": 10,
    "temperature": 0.5,
    "return_logprobs": True
}

response = client.invoke_model(
    modelId='your-model-arn',
    body=json.dumps(payload),
    accept='application/json',
    contentType='application/json'
)

response_body = json.loads(response['body'].read())
```

Example response (truncated):

```
{
    "generation": " A rainbow is formed when sunlight passes through water dro",
    "prompt_token_count": 7,
    "generation_token_count": 10,
    "stop_reason": "length",
    "logprobs": [
        {
            "362": -2.1413702964782715
        },
        {
            "48713": -0.8180374503135681
        },
        {
            "374": -0.09657637774944305
        },
        ...
    ]
}
```

------
#### [ OpenAIChatCompletion ]

OpenAIChatCompletion supports both prompt and output token logprobs. You can set `top_logprobs=N` and `prompt_logprobs=N` where N is an integer representing log probabilities for the N most likely tokens at each position.

```
payload = {
    "messages": [
        {
            "role": "user",
            "content": "How is the rainbow formed?"
        }
    ],
    "max_tokens": 10,
    "temperature": 0.5,
    "logprobs": True,
    "top_logprobs": 1,
    "prompt_logprobs": 1
}

response = client.invoke_model(
    modelId='your-model-arn',
    body=json.dumps(payload),
    accept='application/json',
    contentType='application/json'
)

response_body = json.loads(response['body'].read())
```

Example response (truncated):

```
{
    "id": "chatcmpl-xxx",
    "object": "chat.completion",
    "choices": [
        {
            "index": 0,
            "message": {
                "role": "assistant",
                "content": "A rainbow is formed..."
            },
            "logprobs": {
                "content": [
                    {
                        "token": "A",
                        "logprob": -0.07903262227773666,
                        "bytes": [65],
                        "top_logprobs": [
                            {
                                "token": "A",
                                "logprob": -0.07903262227773666,
                                "bytes": [65]
                            }
                        ]
                    },
                    {
                        "token": " rainbow",
                        "logprob": -0.20187227427959442,
                        "bytes": [32, 114, 97, 105, 110, 98, 111, 119],
                        "top_logprobs": [...]
                    },
                    ...
                ]
            },
            "finish_reason": "length"
        }
    ],
    "usage": {
        "prompt_tokens": 41,
        "completion_tokens": 10,
        "total_tokens": 51
    }
}
```

------

You'll need the model ARN to make inference calls to your newly imported model. After the import job completes successfully and your imported model is active, you can get the model ARN in the console or by sending a [ListImportedModels](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_ListImportedModels.html) request. 

When you invoke your imported model using `InvokeModel` or `InvokeModelWithResponseStream`, your request is typically served within 5 minutes; otherwise you might get a `ModelNotReadyException`. To handle this exception, follow the steps in [Handling ModelNotReadyException](#handle-model-not-ready-exception). 

## Frequently Asked Questions
<a name="api-format-faq"></a>

**Q: What API format should I use?**

A: For maximum compatibility with various SDKs, we recommend using OpenAICompletion or OpenAIChatCompletion formats as they provide OpenAI-compatible schemas that are widely supported across different tools and libraries.

**Q: Does GPT-OSS on Amazon Bedrock Custom Model Import support the Converse API?**

A: No. GPT-OSS based custom model import models do not support the Converse API or ConverseStream API. You must use the [InvokeModel](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModel.html) API with OpenAI-compatible schemas when working with GPT-OSS based custom models.

**Q: What models support tool calling?**

A: GPT-OSS based custom models support tool calling capabilities. Tool calling enables function calling for complex workflows.

**Q: What about models imported before November 11, 2025?**

A: Models imported before November 11, 2025, continue to work as is with their existing API formats and capabilities.

**Q: What about `generation_config.json` for OpenAI-based models?**

A: It is critical that you include the correct `generation_config.json` file when importing OpenAI-based models such as GPT-OSS. You must use the updated configuration file (updated August 13, 2024) available at [https://huggingface.co/openai/gpt-oss-20b/blob/main/generation_config.json](https://huggingface.co/openai/gpt-oss-20b/blob/main/generation_config.json). The updated configuration includes three end-of-sequence token IDs (`[200002, 199999, 200012]`), whereas older versions only included two tokens (`[200002, 199999]`). Using an outdated `generation_config.json` file will cause runtime errors during model invocation. This file is essential for proper model behavior and must be included with your OpenAI-based model imports.

## Handling ModelNotReadyException
<a name="handle-model-not-ready-exception"></a>

Amazon Bedrock Custom Model Import optimizes hardware utilization by removing models that are not active. If you try to invoke a model that has been removed, you'll get a `ModelNotReadyException`. When you invoke the model for the first time after it has been removed, Custom Model Import starts to restore it. The restoration time depends on the on-demand fleet size and the model size.

If your `InvokeModel` or `InvokeModelWithResponseStream` request returns a `ModelNotReadyException`, follow these steps to handle the exception.

1. 

**Configure retries**

   By default, the request is automatically retried with exponential backoff. You can configure the maximum number of retries.

   The following example shows how to configure the retry. Replace *${region-name}*, *${model-arn}*, and *10* with your Region, model ARN, and maximum number of attempts.

   ```
   import json
   import boto3
   from botocore.config import Config
   
   
   REGION_NAME = '${region-name}'
   MODEL_ID = '${model-arn}'
   
   config = Config(
       retries={
           'total_max_attempts': 10,  # customizable
           'mode': 'standard'
       }
   )
   message = "Hello"
   
   
   session = boto3.session.Session()
   br_runtime = session.client(service_name='bedrock-runtime', 
                               region_name=REGION_NAME, 
                               config=config)
   
   try:
       invoke_response = br_runtime.invoke_model(modelId=MODEL_ID, 
                                                 body=json.dumps({'prompt': message}), 
                                                 accept="application/json", 
                                                 contentType="application/json")
       invoke_response["body"] = json.loads(invoke_response["body"].read().decode("utf-8"))
       print(json.dumps(invoke_response, indent=4))
   except Exception as e:
       print(e)
       print(repr(e))
   ```

1. 

**Monitor response codes during retry attempts**

   Each retry attempt starts the model restoration process. The restoration time depends on the availability of the on-demand fleet and the model size. Monitor the response codes while the restoration is in progress. 

   If the retries are consistently failing, continue with the next steps.

1. 

**Verify model was successfully imported**

   You can verify if the model was successfully imported by checking the status of your import job in the console or by calling the [GetModelImportJob](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_GetModelImportJob.html) operation. Check the `Status` field in the response. The import job is successful if the Status for the model is **Complete**. 

1. 

**Contact Support for further investigation**

   Open a ticket with Support. For more information, see [Creating support cases](https://docs.aws.amazon.com//awssupport/latest/user/case-management.html).

   Include relevant details such as model ID and timestamps in the support ticket.

# Handling Custom Chat Templates and Tokenizers
<a name="custom-chat-templates-tokenizers"></a>

Custom chat templates and tokenizers are essential components for properly formatting conversational inputs and managing token generation in your imported models. Custom Model Import supports importing models with custom chat templates that define how multi-turn conversations are structured and tokenized.

## Importing Models with Custom Chat Templates
<a name="import-chat-template"></a>

When importing a model with custom chat templates, ensure that you follow the best practices outlined in the main Custom Model Import documentation, including bringing in model weights in safe tensors format and providing all necessary configuration files.

### Chat Template Format Requirements
<a name="chat-template-format-requirements"></a>

If your model uses a custom chat template and you want to use that template with Amazon Bedrock, you must include the chat template file in one of the following formats:
+ **`chat_template.jinja`** — A Jinja2-based template file that defines how messages are formatted.
+ **`chat_template.json`** — A JSON file containing the chat template as a string value.
+ **`tokenizer_config.json` with embedded chat template** — Alternatively, you can include the chat template directly within your `tokenizer_config.json` file as a `chat_template` field. For an example, see the [Hermes-2-Pro tokenizer_config.json](https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B/blob/main/tokenizer_config.json#L2059) on Hugging Face.

Custom Model Import will automatically process these files and include them in the correct directories during the import process.

#### Chat Template Precedence
<a name="chat-template-precedence"></a>

If you provide multiple chat template sources, Amazon Bedrock applies the following precedence rules:

1. **Separate chat template files take precedence** — If you include both a separate chat template file (`chat_template.jinja` or `chat_template.json`) AND a `chat_template` field in your `tokenizer_config.json`, the separate file will be used and the embedded template in `tokenizer_config.json` will be ignored.

1. **Embedded template as fallback** — If you do not provide a separate chat template file, Amazon Bedrock will use the `chat_template` field from your `tokenizer_config.json` if present.
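
The precedence rules above amount to a simple local check that you can run against your model directory before you import it. The following snippet is an illustrative sketch of that check (the file and field names come from the rules above; this is not Amazon Bedrock's internal implementation):

```python
import json
from pathlib import Path

def resolve_chat_template(model_dir):
    """Report which chat template source would take precedence."""
    model_dir = Path(model_dir)
    # Rule 1: a separate template file wins over the embedded field
    for name in ("chat_template.jinja", "chat_template.json"):
        if (model_dir / name).exists():
            return name
    # Rule 2: fall back to the chat_template field in tokenizer_config.json
    config_path = model_dir / "tokenizer_config.json"
    if config_path.exists():
        config = json.loads(config_path.read_text())
        if "chat_template" in config:
            return "tokenizer_config.json (embedded chat_template)"
    return None
```

If this check reports a separate template file even though you also embedded a template in `tokenizer_config.json`, remember that the embedded template will be ignored.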

**Warning**  
**Choose one approach:** To avoid confusion and ensure predictable behavior, we strongly recommend using only ONE of these approaches:  
**Option 1:** Provide separate chat template files (`chat_template.jinja` or `chat_template.json`) without including a `chat_template` field in your `tokenizer_config.json`.
**Option 2:** Include the `chat_template` field directly in your `tokenizer_config.json` and do not provide separate template files.
If you have custom tool templates or complex chat template configurations, we recommend using Option 2 (embedding the template in `tokenizer_config.json`) as it allows you to define multiple named templates (such as "default" and "tool_use") within a single configuration file.

**Note**  
The chat template file must follow the Hugging Face format and naming conventions. Ensure that your template is compatible with the Transformers library.

#### Chat Template Format Examples
<a name="chat-template-examples"></a>

Here are examples of the two supported chat template formats:

------
#### [ Jinja Format (chat_template.jinja) ]

A simplified example of a Jinja2-based chat template:

```
{% for message in messages %}
{% if loop.first and message['role'] != 'system' %}
<|im_start|>system
You are a helpful assistant.<|im_end|>
{% endif %}
<|im_start|>{{ message['role'] }}
{{ message['content'] }}<|im_end|>
{% endfor %}
{% if add_generation_prompt %}
<|im_start|>assistant
{% endif %}
```

For a complete example, see the [GPT-OSS chat template](https://huggingface.co/openai/gpt-oss-20b/blob/main/chat_template.jinja) on Hugging Face.
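
Before packaging a template like the one above, you can render it locally to confirm the output. This sketch renders the simplified template with the `jinja2` package (assumed to be installed); a tokenizer-based check with the Transformers library is shown later on this page:

```python
from jinja2 import Template

# The simplified template from above, written as a compact string
template_str = (
    "{% for message in messages %}"
    "{% if loop.first and message['role'] != 'system' %}"
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "{% endif %}"
    "<|im_start|>{{ message['role'] }}\n"
    "{{ message['content'] }}<|im_end|>\n"
    "{% endfor %}"
    "{% if add_generation_prompt %}<|im_start|>assistant\n{% endif %}"
)

messages = [{"role": "user", "content": "Hello, how are you?"}]
formatted = Template(template_str).render(messages=messages,
                                          add_generation_prompt=True)
print(formatted)
```

Because the first message is not a system message, the rendered output injects the default system turn, then the user turn, and ends with the open assistant header ready for generation.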

------
#### [ JSON Format (chat_template.json) ]

A simplified example of a JSON-based chat template with vision support:

```
{
    "chat_template": "{% for message in messages %}{% if loop.first and message['role'] != 'system' %}<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n{% endif %}<|im_start|>{{ message['role'] }}\n{% if message['content'] is string %}{{ message['content'] }}<|im_end|>\n{% else %}{% for content in message['content'] %}{% if content['type'] == 'image' %}<|vision_start|><|image_pad|><|vision_end|>{% elif 'text' in content %}{{ content['text'] }}{% endif %}{% endfor %}<|im_end|>\n{% endif %}{% endfor %}{% if add_generation_prompt %}<|im_start|>assistant\n{% endif %}"
}
```

For a complete example with multimodal support, see the [Qwen2-VL chat template](https://huggingface.co/Qwen/Qwen2-VL-72B-Instruct/blob/main/chat_template.json) on Hugging Face.

------

**Important**  
Ensure that your chat template file follows the exact naming convention (`chat_template.jinja` or `chat_template.json`) and format shown in the examples above. Incorrectly formatted templates may cause import or inference failures.

## Invoking Models with Custom Chat Templates
<a name="invoke-custom-chat-templates"></a>

Once you have imported a model with a custom chat template, you have two options for invoking the model with properly formatted conversational inputs:

### Using OpenAI ChatCompletion API with Messages
<a name="chatcompletion-api-approach"></a>

If you are providing inputs in the messages schema format, you should use the **OpenAI ChatCompletion API**. When you use this API format with a model that has an uploaded chat template (`chat_template.jinja` or `chat_template.json`), Amazon Bedrock will automatically convert your input messages into the correct format using the chat template.

This is the recommended approach as it provides the most seamless integration and allows Amazon Bedrock to handle the chat template application automatically.

**Example: Using OpenAI ChatCompletion API with custom chat template**

```
import json
import boto3

# Initialize Bedrock Runtime client
client = boto3.client('bedrock-runtime', region_name='us-east-1')

# Define the model ARN for your imported model with custom chat template
model_id = 'arn:aws:bedrock:us-east-1:123456789012:imported-model/your-model-id'

# Prepare the request payload using messages format
payload = {
    "messages": [
        {
            "role": "system",
            "content": "You are a helpful assistant."
        },
        {
            "role": "user",
            "content": "Hello, how are you?"
        }
    ],
    "max_tokens": 150,
    "temperature": 0.7
}

# Invoke the model
response = client.invoke_model(
    modelId=model_id,
    body=json.dumps(payload),
    accept='application/json',
    contentType='application/json'
)

# Parse and display the response
response_body = json.loads(response['body'].read())
print(json.dumps(response_body, indent=2))
```

In this example, Amazon Bedrock will automatically apply your custom chat template to the messages array, converting it into the appropriate format for your model.

### Manual Tokenization Approach
<a name="manual-tokenization-approach"></a>

Alternatively, if you prefer to have full control over the chat template application and tokenization process, you can manually apply the chat template to your conversation and then use the **Completion API** (not ChatCompletion) with the pre-formatted text.

This approach is useful when you need to customize the template application logic or when working with specialized tokenization requirements.

#### Step 1: Apply Chat Template Locally
<a name="apply-chat-template-locally"></a>

Use the following code snippet to load your custom chat template and apply it to a conversation locally:

```
from transformers import AutoTokenizer

# Configuration paths - update these with your actual paths
TOKENIZER_PATH = ""  # Path to tokenizer directory

# Load tokenizer with updated chat template
tokenizer = AutoTokenizer.from_pretrained(TOKENIZER_PATH)

# Test chat template with sample conversation
chat_history = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello, how are you?"},
]

# Apply chat template and display formatted output to make sure chat template works as expected
formatted_chat = tokenizer.apply_chat_template(chat_history, tokenize=False)
print(formatted_chat)
```

This code demonstrates how to:

1. Load your tokenizer configuration from the model files

1. Add the custom chat template to the tokenizer configuration

1. Apply the chat template to a conversation history

1. Generate the formatted text that can be sent to the model

#### Step 2: Invoke with Completion API
<a name="invoke-with-completion-api"></a>

After applying the chat template locally, use the formatted text with the Completion API:

```
import json
import boto3

# Initialize Bedrock Runtime client
client = boto3.client('bedrock-runtime', region_name='us-east-1')

# Define the model ARN for your imported model
model_id = 'arn:aws:bedrock:us-east-1:123456789012:imported-model/your-model-id'

# Use the formatted_chat output from Step 1 as the prompt
payload = {
    "prompt": formatted_chat,
    "max_tokens": 150,
    "temperature": 0.7
}

# Invoke the model using Completion format (not ChatCompletion)
response = client.invoke_model(
    modelId=model_id,
    body=json.dumps(payload),
    accept='application/json',
    contentType='application/json'
)

# Parse and display the response
response_body = json.loads(response['body'].read())
print(json.dumps(response_body, indent=2))
```

**Warning**  
**Always use `max_tokens` parameter:** When using the Completion API with custom model imports, always use the `max_tokens` parameter to ensure OpenAI completion schema compatibility. This avoids any translation confusion and ensures consistent behavior across different SDK implementations. Do not use model-specific parameter names like `max_gen_len` or similar variants.

**Important**  
When using the manual tokenization approach, you must use the **Completion API** format (with `prompt` field), not the ChatCompletion API format (with `messages` field). The ChatCompletion API expects raw messages and will attempt to apply the chat template again, resulting in incorrect formatting.

### Best Practices
<a name="chat-template-best-practices"></a>
+ **Use ChatCompletion API when possible** — The OpenAI ChatCompletion API with messages format provides the most seamless experience and allows Amazon Bedrock to handle chat template application automatically.
+ **Validate your chat template** — Before importing your model, test your chat template locally using the Transformers library to ensure it produces the expected output format.
+ **Include all special tokens** — Ensure your chat template includes all necessary special tokens (such as beginning-of-sequence, end-of-sequence, and role markers) that your model expects.
+ **Test with multi-turn conversations** — Validate that your chat template correctly handles multi-turn conversations with alternating user and assistant messages.
+ **Consider vision support** — If your model supports multimodal inputs, ensure your chat template includes logic for handling image and video content markers.
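
As a starting point for the multi-turn check above, you can validate the structure of a conversation before applying any template. The following sketch checks only role ordering (an optional leading system message followed by alternating user and assistant turns); it is an illustration, not a Bedrock requirement beyond what your own template expects:

```python
def validate_conversation(messages):
    """Check that a conversation alternates user/assistant turns,
    with at most one optional leading system message."""
    roles = [m["role"] for m in messages]
    if roles and roles[0] == "system":
        roles = roles[1:]
    if not roles or roles[0] != "user":
        return False
    # After the optional system turn, roles must alternate user/assistant
    expected = ["user", "assistant"]
    return all(role == expected[i % 2] for i, role in enumerate(roles))
```

Running your test conversations through a check like this, and then through `apply_chat_template`, catches structural problems before they surface as malformed prompts at inference time.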

**Warning**  
Incorrectly formatted chat templates can result in poor model performance, unexpected outputs, or inference failures. Always test your chat template thoroughly before deploying to production.

# Calculate the cost of running a custom model
<a name="import-model-calculate-cost"></a>

You are charged for running inference with custom models that you import into Amazon Bedrock. Use the following information to estimate the cost of running a custom model.

A Custom Model Unit (CMU) is an abstract measure of the hardware that Amazon Bedrock needs to host a single copy of your custom model. A model copy represents a single instance of your imported model that is ready to serve inference requests. Amazon Bedrock determines the number of custom model units that a model copy needs when you import the custom model. As inference demand changes, Amazon Bedrock can increase or decrease the number of running model copies. 

You can use CMUs to estimate the cost of running your custom model by using the following formula.

*Total cost = Number of running model copies × Number of CMUs per copy × billing rate per CMU per min × ((Number of 5-min windows)/60)*
+ *Number of running model copies* – Get the value from the `ModelCopy` metric in Amazon CloudWatch.
+ *Number of CMUs per copy* – Get the value from the model details page in the Amazon Bedrock console. Alternatively, call the [GetImportedModel](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_GetImportedModel.html) operation and get the number of CMUs per copy from the `customModelUnitsPerModelCopy` field.
+ *Billing rate per CMU per min* – Get the custom model units version from the model details page in the Amazon Bedrock console. Alternatively, call the [GetImportedModel](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_GetImportedModel.html) operation and get the custom model units version from the `customModelUnitsVersion` field.

  Then, get the billing rate (**Price per Custom Model Unit per min**) for the model version (**Custom Model Unit version**) in the **Custom Model Import** section of the [Amazon Bedrock pricing](https://aws.amazon.com/bedrock/pricing/) page.

Billing is charged over 5-minute billing windows from the first successful inference call. 
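
The cost formula can be transcribed directly into code for quick estimates. All input values below are placeholders; substitute the numbers you retrieved from CloudWatch, the GetImportedModel operation, and the pricing page:

```python
def estimate_custom_model_cost(model_copies, cmus_per_copy,
                               rate_per_cmu_per_min, five_min_windows):
    """Direct transcription of the formula above:
    copies x CMUs per copy x rate per CMU per min x (windows / 60)."""
    return (model_copies * cmus_per_copy * rate_per_cmu_per_min
            * (five_min_windows / 60))

# Example with placeholder values: 2 copies, 4 CMUs per copy,
# a hypothetical rate, and one hour of usage (12 five-minute windows)
print(estimate_custom_model_cost(2, 4, 0.5, 12))
```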

For more information, see [Amazon Bedrock pricing](https://aws.amazon.com/bedrock/pricing/).

# Code samples for custom model import
<a name="custom-model-import-code-samples"></a>

The following code samples show how to set up permissions, create a custom model import job, view the details of your import jobs and imported models, and delete an imported model. These code samples apply to Mistral AI, Llama, Qwen, Flan, GPTBigCode, and Mixtral architectures.

## Important: Model Architecture Support
<a name="model-architecture-support"></a>

**GPT-OSS Model Limitations:**
+ **Converse API Not Supported:** GPT-OSS based custom model import models do NOT support the Converse API or ConverseStream API.
+ **Use InvokeModel API Instead:** Customers must use the [InvokeModel](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModel.html) API when working with GPT-OSS based custom models.
+ **API Schema Requirement:** GPT-OSS models require OpenAI-compatible API schemas:
  + Completion format for completion requests
  + ChatCompletion format for chat requests
  + Response format follows OpenAI API specifications
+ **Supported Models for Converse API:** The Converse API is supported for Mistral AI, Llama, Qwen, Flan, GPTBigCode, and Mixtral architectures only.

For GPT-OSS model usage examples with [InvokeModel](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModel.html) API, refer to the OpenAI API documentation for request/response schemas.
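
Because GPT-OSS based imports accept OpenAI-compatible schemas through [InvokeModel](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModel.html), the two request-body shapes can be sketched as plain JSON. The values below are placeholders, and only the core fields (`prompt`, `messages`, `max_tokens`) are shown; consult the OpenAI API documentation referenced above for the full schemas:

```python
import json

# Completion-format request body (single prompt string)
completion_payload = json.dumps({
    "prompt": "Hello, how are you?",
    "max_tokens": 150,
    "temperature": 0.7,
})

# ChatCompletion-format request body (messages array)
chat_payload = json.dumps({
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello, how are you?"},
    ],
    "max_tokens": 150,
})

# Either body is passed to InvokeModel as shown elsewhere on this page:
# client.invoke_model(modelId=model_arn, body=completion_payload,
#                     contentType='application/json')
```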

1. Prepare model files for import

   1. If you are importing from an Amazon S3 bucket, you need to supply the model files in the Hugging Face weights format. For more information, see [Import a model source from Amazon S3](model-customization-import-model.md#model-customization-import-model-source). 

   1. Create an Amazon S3 bucket for your model files (the names must be unique).

   1. Upload the model files into the bucket.

1. Create a policy to access your model files and attach it to an IAM role with an Amazon Bedrock trust relationship. Choose the tab for your preferred method, and then follow the steps:

------
#### [ Console ]

   1. Create Amazon S3 policy to access the Amazon S3 bucket that contains your model files

      1. Navigate to the IAM console at [https://console.aws.amazon.com/iam](https://console.aws.amazon.com/iam) and choose **Policies** from the left navigation pane.

      1. Select **Create policy** and then choose **JSON** to open the **Policy editor**.

      1. Paste the following policy, replacing *${model-file-bucket}* with your bucket name, and then select **Next**.

------
#### [ JSON ]

****  

         ```
         {
             "Version": "2012-10-17",
             "Statement": [
                 {
                     "Effect": "Allow",
                     "Action": [
                         "s3:GetObject",
                         "s3:ListBucket"
                     ],
                     "Resource": [
                         "arn:aws:s3:::${model-file-bucket}",
                         "arn:aws:s3:::${model-file-bucket}/*"
                     ]
                   }
                 ]
               }
         ```

------

      1. Name the policy *S3BucketPolicy* and select **Create policy**.

   1. Create an IAM role and attach the policy.

      1. From the left navigation pane, choose **Roles** and then select **Create role**.

      1. Select **Custom trust policy**, paste the following policy, and select **Next**.

------
#### [ JSON ]

****  

         ```
         {
             "Version": "2012-10-17",
             "Statement": [
                 {
                     "Effect": "Allow",
                     "Principal": {
                         "Service": "bedrock.amazonaws.com"
                     },
                     "Action": "sts:AssumeRole"
                 }
             ] 
         }
         ```

------

      1. Search for the *S3BucketPolicy* policy you created, select the checkbox, and choose **Next**.

      1. Name the role *MyImportModelRole* and select **Create role**.

------
#### [ CLI ]

   1. Create a file called *BedrockTrust.json* and paste the following policy into it.

------
#### [ JSON ]

****  

      ```
      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Effect": "Allow",
                  "Principal": {
                      "Service": "bedrock.amazonaws.com"
                  },
                  "Action": "sts:AssumeRole"
              }
          ] 
      }
      ```

------

   1. Create another file called *S3BucketPolicy.json* and paste the following policy into it, replacing *${model-file-bucket}* with your bucket name.

------
#### [ JSON ]

****  

      ```
      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Effect": "Allow",
                  "Action": [
                      "s3:GetObject",
                      "s3:ListBucket"
                  ],
                  "Resource": [
                      "arn:aws:s3:::${model-file-bucket}",
                      "arn:aws:s3:::${model-file-bucket}/*"
                  ]
              }
           ]
      }
      ```

------

   1. In a terminal, navigate to the folder containing the policies you created.

   1. Make a [CreateRole](https://docs.aws.amazon.com/IAM/latest/APIReference/API_CreateRole.html) request to create an IAM role called *MyImportModelRole* and attach the *BedrockTrust.json* trust policy that you created.

      ```
      aws iam create-role \
          --role-name MyImportModelRole \
          --assume-role-policy-document file://BedrockTrust.json
      ```

   1. Make a [CreatePolicy](https://docs.aws.amazon.com/IAM/latest/APIReference/API_CreatePolicy.html) request to create the S3 data access policy with the *S3BucketPolicy.json* file you created. The response returns an `Arn` for the policy.

      ```
      aws iam create-policy \
          --policy-name S3BucketPolicy \
          --policy-document file://S3BucketPolicy.json
      ```

   1. Make an [AttachRolePolicy](https://docs.aws.amazon.com/IAM/latest/APIReference/API_AttachRolePolicy.html) request to attach the S3 data access policy to your role, replacing the `policy-arn` with the ARN in the response from the previous step:

      ```
      aws iam attach-role-policy \
          --role-name MyImportModelRole \
          --policy-arn ${policy-arn}
      ```

------
#### [ Python ]

   1. Run the following code to make a [CreateRole](https://docs.aws.amazon.com/IAM/latest/APIReference/API_CreateRole.html) request to create an IAM role called *MyImportModelRole* and to make a [CreatePolicy](https://docs.aws.amazon.com/IAM/latest/APIReference/API_CreatePolicy.html) request to create an S3 data access policy called *S3BucketPolicy*. For the S3 data access policy, replace *${model-file-bucket}* with your S3 bucket name.

      ```
      import boto3
      import json
      
      iam = boto3.client("iam")
      
      iam.create_role(
          RoleName="MyImportModelRole",
          AssumeRolePolicyDocument=json.dumps({
              "Version": "2012-10-17",
              "Statement": [
                  {
                      "Effect": "Allow",
                      "Principal": {
                          "Service": "bedrock.amazonaws.com"
                      },
                      "Action": "sts:AssumeRole"
                  }
              ] 
          })
      )
      
      iam.create_policy(
          PolicyName="S3BucketPolicy",
          PolicyDocument=json.dumps({
              "Version": "2012-10-17",
              "Statement": [
                  {
                      "Effect": "Allow",
                      "Action": [
                          "s3:GetObject",
                          "s3:ListBucket"
                      ],
                      "Resource": [
                          "arn:aws:s3:::${model-file-bucket}",
                          "arn:aws:s3:::${model-file-bucket}/*"
                      ]
                  }
              ]
          })
      )
      ```

   1. An `Arn` is returned in the response. Run the following code snippet to make an [AttachRolePolicy](https://docs.aws.amazon.com/IAM/latest/APIReference/API_AttachRolePolicy.html) request, replacing *${policy-arn}* with the returned `Arn`.

      ```
      iam.attach_role_policy(
          RoleName="MyImportModelRole",
          PolicyArn="${policy-arn}"
      )
      ```

------

1. Select a language to see code samples to call the custom model import API operations.

------
#### [ CLI ]

To submit a custom model import job, run the following command in a terminal, replacing *${my-import-model-role-arn}* with the ARN of the model role that you set up and *${s3-bucket-path}* with the S3 bucket path of your model files.

```
aws bedrock create-model-import-job \
    --job-name MyImportedModelJobName \
    --imported-model-name MyImportedModelName \
    --role-arn ${my-import-model-role-arn} \
    --model-data-source '{"s3DataSource": {"s3Uri": "${s3-bucket-path}"}}'
```

The response returns a *jobArn*. The custom import job will take some time to complete.

The following fields are optional:
+ To add a VPC configuration, add the following argument to the above command to specify the security group and subnets:

  ```
   --vpc-config '{"securityGroupIds": ["sg-xx"], "subnetIds": ["subnet-yy", "subnet-zz"]}'
  ```
+ To encrypt your model with a KMS key, add the following argument to the above command, replacing the values to specify the key with which you want to encrypt your model.

  ```
   --imported-model-kms-key-id 'arn:aws:kms:region:account-id:key/key-id'
  ```
+ To add tags, add the following argument to the above command, replacing the keys and values with the tags you want to attach to the job and/or output model and making sure to separate key/value pairs with a space:

  ```
   --tags key=key1,value=value1 key=key2,value=value2
  ```

You can use the `jobArn` with the following command to check the status of the import job.

```
aws bedrock get-model-import-job \
    --job-identifier "jobArn"
```

The response looks similar to this:

```
{
    "jobArn": "${job-arn}",
    "jobName": "MyImportedModelJobName",
    "importedModelName": "MyImportedModelName",
    "roleArn": "${my-role-arn}",
    "modelDataSource": {
        "s3DataSource": {
            "s3Uri": "${S3Uri}"
        }
    },
    "status": "Complete",
    "creationTime": "2024-08-13T23:38:42.457Z",
    "lastModifiedTime": "2024-08-13T23:39:25.158Z"
}
```

When the `status` is `Complete`, the import job is complete.

To run inference on your newly imported model, you must provide the ARN of the imported model as the `model-id`. Get the ARN of the imported model with the following command.

```
aws bedrock list-imported-models
```

The response contains the model name and the model ARN. Use the model ARN to invoke the imported model. For more information, see [Submit a single prompt with InvokeModel](inference-invoke.md).

```
{
    "modelSummaries": [
        {
            "modelArn": "${model-arn}",
            "modelName": "MyImportedModelName",
            "modelArchitecture": "${model-architecture}",
            "instructSupported": true,
            "creationTime": "2024-08-13T19:20:14.058Z"
        }
    ]
}
```

To delete your imported model, run the following command in a terminal, using the model name or the model ARN of the imported model you want to delete.

```
aws bedrock delete-imported-model \
    --model-identifier MyImportedModelName
```

------
#### [ Python ]

Run the following code snippet to submit a custom model import job. Replace *my-region* with the Region where you imported the model, *${my-import-model-role-arn}* with the ARN of the *MyImportModelRole* role that you set up, and *${S3Uri}* with the S3 URI of the bucket that contains your model files.

```
import boto3
import json

REGION_NAME = 'my-region'
bedrock = boto3.client(service_name='bedrock',
                       region_name=REGION_NAME)

JOB_NAME = 'MyImportedModelJobName'
ROLE_ARN = '${my-import-model-role-arn}'
IMPORTED_MODEL_NAME = 'MyImportedModelName'
S3_URI = '${S3Uri}'

# createModelImportJob API
create_job_response = bedrock.create_model_import_job(
    jobName=JOB_NAME,
    importedModelName=IMPORTED_MODEL_NAME,
    roleArn=ROLE_ARN,
    modelDataSource={
        "s3DataSource": {
            "s3Uri": S3_URI
        }
    },
)
job_arn = create_job_response.get("jobArn")
```

The following fields are optional.
+ To add a VPC configuration, add the following argument to the above request to specify the security group and subnets:

  ```
     vpcConfig={'securityGroupIds': ['sg-xx'], 'subnetIds': ['subnet-yy', 'subnet-zz']}
  ```
+ To encrypt your model with a KMS key, add the following argument to the above request, replacing the values to specify the key with which you want to encrypt your model.

  ```
     importedModelKmsKeyId='arn:aws:kms:region:account-id:key/key-id'
  ```
+ To add tags, add the following argument to the above request, replacing the keys and values with the tags you want to attach to the job and/or output model:

  ```
     jobTags=[{'key': 'key1', 'value': 'value1'}, {'key': 'key2', 'value': 'value2'}]
  ```

The response returns a `jobArn`. The custom import job will take some time to complete. You can use the `jobArn` with the following command to check the status of the import job.

```
bedrock.get_model_import_job(jobIdentifier=job_arn)
```

When the `status` is `Completed`, the import job is complete.
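
Rather than calling `get_model_import_job` by hand, you can poll it until the job leaves its in-progress state. The sketch below takes the fetch call as a parameter so the loop is client-agnostic; the `'InProgress'` status value is an assumption about the non-terminal state, alongside the terminal statuses shown on this page:

```python
import time

def wait_for_import_job(fetch_status, poll_seconds=30, max_polls=120):
    """Poll a status-returning callable until the import job reaches a
    terminal state (anything other than the assumed 'InProgress' value)."""
    for _ in range(max_polls):
        status = fetch_status()
        if status != 'InProgress':
            return status
        time.sleep(poll_seconds)
    raise TimeoutError("import job did not finish within the polling window")

# Usage sketch (job_arn comes from the create_model_import_job response):
# final = wait_for_import_job(
#     lambda: bedrock.get_model_import_job(jobIdentifier=job_arn)['status'])
```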

To run inference on your newly imported model, you must provide the ARN of the imported model as the `model-id`. Get the ARN of the imported model with the following command.

```
from datetime import datetime

response_pt = bedrock.list_imported_models(
    creationTimeBefore=datetime(2015, 1, 1),
    creationTimeAfter=datetime(2015, 1, 1),
    nameContains='MyImportedModelName',
    maxResults=123,
    nextToken='none',
    sortBy='creationTime',
    sortOrder='Ascending'
)
```

The response returns the `modelArn` along with other details of the imported model. 

```
{
    'nextToken': '',
    'modelSummaries': [
        {
            'modelArn': 'your-model-arn',
            'modelName': 'MyImportedModelName',
            'modelArchitecture': 'model-architecture',
            'instructSupported': True,
            'creationTime': datetime(2015, 1, 1)
        },
    ]
}
```

Use the model ARN to invoke the imported model. For more information, see [Submit a single prompt with InvokeModel](inference-invoke.md).

To delete your imported model, use the following command, specifying the model name or the model ARN of the imported model you want to delete.

```
response = bedrock.delete_imported_model(
    modelIdentifier='MyImportedModelName'
)
```

------

# Converse API code samples for custom model import
<a name="custom-model-import-code-samples-converse"></a>

If you're importing a Mistral, Llama, or Qwen type instruct model and you want to use the [Converse](https://docs.aws.amazon.com//bedrock/latest/APIReference/API_runtime_Converse.html) or the [ConverseStream](https://docs.aws.amazon.com//bedrock/latest/APIReference/API_runtime_ConverseStream.html) API, make sure to include the `chat_template` for the model type you are importing in the `tokenizer_config.json`. Select the tab corresponding to the model type that you are importing.

------
#### [ Llama 3.2 Text ]

```
{
  "added_tokens_decoder": {
    "128000": {
      "content": "<|begin_of_text|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128001": {
      "content": "<|end_of_text|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128002": {
      "content": "<|reserved_special_token_0|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128003": {
      "content": "<|reserved_special_token_1|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128004": {
      "content": "<|finetune_right_pad_id|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128005": {
      "content": "<|reserved_special_token_2|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128006": {
      "content": "<|start_header_id|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128007": {
      "content": "<|end_header_id|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128008": {
      "content": "<|eom_id|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128009": {
      "content": "<|eot_id|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128010": {
      "content": "<|python_tag|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128011": {
      "content": "<|reserved_special_token_3|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128012": {
      "content": "<|reserved_special_token_4|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128013": {
      "content": "<|reserved_special_token_5|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128014": {
      "content": "<|reserved_special_token_6|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128015": {
      "content": "<|reserved_special_token_7|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128016": {
      "content": "<|reserved_special_token_8|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128017": {
      "content": "<|reserved_special_token_9|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128018": {
      "content": "<|reserved_special_token_10|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128019": {
      "content": "<|reserved_special_token_11|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128020": {
      "content": "<|reserved_special_token_12|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128021": {
      "content": "<|reserved_special_token_13|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128022": {
      "content": "<|reserved_special_token_14|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128023": {
      "content": "<|reserved_special_token_15|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128024": {
      "content": "<|reserved_special_token_16|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128025": {
      "content": "<|reserved_special_token_17|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128026": {
      "content": "<|reserved_special_token_18|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128027": {
      "content": "<|reserved_special_token_19|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128028": {
      "content": "<|reserved_special_token_20|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128029": {
      "content": "<|reserved_special_token_21|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128030": {
      "content": "<|reserved_special_token_22|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128031": {
      "content": "<|reserved_special_token_23|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128032": {
      "content": "<|reserved_special_token_24|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128033": {
      "content": "<|reserved_special_token_25|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128034": {
      "content": "<|reserved_special_token_26|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128035": {
      "content": "<|reserved_special_token_27|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128036": {
      "content": "<|reserved_special_token_28|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128037": {
      "content": "<|reserved_special_token_29|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128038": {
      "content": "<|reserved_special_token_30|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128039": {
      "content": "<|reserved_special_token_31|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128040": {
      "content": "<|reserved_special_token_32|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128041": {
      "content": "<|reserved_special_token_33|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128042": {
      "content": "<|reserved_special_token_34|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128043": {
      "content": "<|reserved_special_token_35|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128044": {
      "content": "<|reserved_special_token_36|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128045": {
      "content": "<|reserved_special_token_37|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128046": {
      "content": "<|reserved_special_token_38|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128047": {
      "content": "<|reserved_special_token_39|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128048": {
      "content": "<|reserved_special_token_40|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128049": {
      "content": "<|reserved_special_token_41|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128050": {
      "content": "<|reserved_special_token_42|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128051": {
      "content": "<|reserved_special_token_43|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128052": {
      "content": "<|reserved_special_token_44|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128053": {
      "content": "<|reserved_special_token_45|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128054": {
      "content": "<|reserved_special_token_46|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128055": {
      "content": "<|reserved_special_token_47|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128056": {
      "content": "<|reserved_special_token_48|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128057": {
      "content": "<|reserved_special_token_49|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128058": {
      "content": "<|reserved_special_token_50|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128059": {
      "content": "<|reserved_special_token_51|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128060": {
      "content": "<|reserved_special_token_52|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128061": {
      "content": "<|reserved_special_token_53|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128062": {
      "content": "<|reserved_special_token_54|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128063": {
      "content": "<|reserved_special_token_55|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128064": {
      "content": "<|reserved_special_token_56|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128065": {
      "content": "<|reserved_special_token_57|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128066": {
      "content": "<|reserved_special_token_58|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128067": {
      "content": "<|reserved_special_token_59|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128068": {
      "content": "<|reserved_special_token_60|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128069": {
      "content": "<|reserved_special_token_61|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128070": {
      "content": "<|reserved_special_token_62|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128071": {
      "content": "<|reserved_special_token_63|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128072": {
      "content": "<|reserved_special_token_64|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128073": {
      "content": "<|reserved_special_token_65|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128074": {
      "content": "<|reserved_special_token_66|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128075": {
      "content": "<|reserved_special_token_67|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128076": {
      "content": "<|reserved_special_token_68|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128077": {
      "content": "<|reserved_special_token_69|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128078": {
      "content": "<|reserved_special_token_70|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128079": {
      "content": "<|reserved_special_token_71|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128080": {
      "content": "<|reserved_special_token_72|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128081": {
      "content": "<|reserved_special_token_73|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128082": {
      "content": "<|reserved_special_token_74|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128083": {
      "content": "<|reserved_special_token_75|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128084": {
      "content": "<|reserved_special_token_76|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128085": {
      "content": "<|reserved_special_token_77|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128086": {
      "content": "<|reserved_special_token_78|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128087": {
      "content": "<|reserved_special_token_79|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128088": {
      "content": "<|reserved_special_token_80|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128089": {
      "content": "<|reserved_special_token_81|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128090": {
      "content": "<|reserved_special_token_82|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128091": {
      "content": "<|reserved_special_token_83|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128092": {
      "content": "<|reserved_special_token_84|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128093": {
      "content": "<|reserved_special_token_85|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128094": {
      "content": "<|reserved_special_token_86|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128095": {
      "content": "<|reserved_special_token_87|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128096": {
      "content": "<|reserved_special_token_88|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128097": {
      "content": "<|reserved_special_token_89|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128098": {
      "content": "<|reserved_special_token_90|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128099": {
      "content": "<|reserved_special_token_91|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128100": {
      "content": "<|reserved_special_token_92|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128101": {
      "content": "<|reserved_special_token_93|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128102": {
      "content": "<|reserved_special_token_94|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128103": {
      "content": "<|reserved_special_token_95|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128104": {
      "content": "<|reserved_special_token_96|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128105": {
      "content": "<|reserved_special_token_97|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128106": {
      "content": "<|reserved_special_token_98|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128107": {
      "content": "<|reserved_special_token_99|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128108": {
      "content": "<|reserved_special_token_100|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128109": {
      "content": "<|reserved_special_token_101|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128110": {
      "content": "<|reserved_special_token_102|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128111": {
      "content": "<|reserved_special_token_103|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128112": {
      "content": "<|reserved_special_token_104|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128113": {
      "content": "<|reserved_special_token_105|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128114": {
      "content": "<|reserved_special_token_106|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128115": {
      "content": "<|reserved_special_token_107|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128116": {
      "content": "<|reserved_special_token_108|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128117": {
      "content": "<|reserved_special_token_109|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128118": {
      "content": "<|reserved_special_token_110|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128119": {
      "content": "<|reserved_special_token_111|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128120": {
      "content": "<|reserved_special_token_112|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128121": {
      "content": "<|reserved_special_token_113|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128122": {
      "content": "<|reserved_special_token_114|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128123": {
      "content": "<|reserved_special_token_115|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128124": {
      "content": "<|reserved_special_token_116|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128125": {
      "content": "<|reserved_special_token_117|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128126": {
      "content": "<|reserved_special_token_118|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128127": {
      "content": "<|reserved_special_token_119|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128128": {
      "content": "<|reserved_special_token_120|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128129": {
      "content": "<|reserved_special_token_121|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128130": {
      "content": "<|reserved_special_token_122|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128131": {
      "content": "<|reserved_special_token_123|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128132": {
      "content": "<|reserved_special_token_124|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128133": {
      "content": "<|reserved_special_token_125|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128134": {
      "content": "<|reserved_special_token_126|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128135": {
      "content": "<|reserved_special_token_127|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128136": {
      "content": "<|reserved_special_token_128|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128137": {
      "content": "<|reserved_special_token_129|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128138": {
      "content": "<|reserved_special_token_130|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128139": {
      "content": "<|reserved_special_token_131|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128140": {
      "content": "<|reserved_special_token_132|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128141": {
      "content": "<|reserved_special_token_133|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128142": {
      "content": "<|reserved_special_token_134|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128143": {
      "content": "<|reserved_special_token_135|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128144": {
      "content": "<|reserved_special_token_136|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128145": {
      "content": "<|reserved_special_token_137|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128146": {
      "content": "<|reserved_special_token_138|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128147": {
      "content": "<|reserved_special_token_139|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128148": {
      "content": "<|reserved_special_token_140|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128149": {
      "content": "<|reserved_special_token_141|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128150": {
      "content": "<|reserved_special_token_142|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128151": {
      "content": "<|reserved_special_token_143|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128152": {
      "content": "<|reserved_special_token_144|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128153": {
      "content": "<|reserved_special_token_145|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128154": {
      "content": "<|reserved_special_token_146|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128155": {
      "content": "<|reserved_special_token_147|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128156": {
      "content": "<|reserved_special_token_148|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128157": {
      "content": "<|reserved_special_token_149|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128158": {
      "content": "<|reserved_special_token_150|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128159": {
      "content": "<|reserved_special_token_151|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128160": {
      "content": "<|reserved_special_token_152|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128161": {
      "content": "<|reserved_special_token_153|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128162": {
      "content": "<|reserved_special_token_154|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128163": {
      "content": "<|reserved_special_token_155|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128164": {
      "content": "<|reserved_special_token_156|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128165": {
      "content": "<|reserved_special_token_157|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128166": {
      "content": "<|reserved_special_token_158|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128167": {
      "content": "<|reserved_special_token_159|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128168": {
      "content": "<|reserved_special_token_160|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128169": {
      "content": "<|reserved_special_token_161|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128170": {
      "content": "<|reserved_special_token_162|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128171": {
      "content": "<|reserved_special_token_163|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128172": {
      "content": "<|reserved_special_token_164|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128173": {
      "content": "<|reserved_special_token_165|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128174": {
      "content": "<|reserved_special_token_166|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128175": {
      "content": "<|reserved_special_token_167|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128176": {
      "content": "<|reserved_special_token_168|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128177": {
      "content": "<|reserved_special_token_169|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128178": {
      "content": "<|reserved_special_token_170|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128179": {
      "content": "<|reserved_special_token_171|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128180": {
      "content": "<|reserved_special_token_172|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128181": {
      "content": "<|reserved_special_token_173|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128182": {
      "content": "<|reserved_special_token_174|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128183": {
      "content": "<|reserved_special_token_175|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128184": {
      "content": "<|reserved_special_token_176|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128185": {
      "content": "<|reserved_special_token_177|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128186": {
      "content": "<|reserved_special_token_178|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128187": {
      "content": "<|reserved_special_token_179|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128188": {
      "content": "<|reserved_special_token_180|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128189": {
      "content": "<|reserved_special_token_181|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128190": {
      "content": "<|reserved_special_token_182|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128191": {
      "content": "<|reserved_special_token_183|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128192": {
      "content": "<|reserved_special_token_184|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128193": {
      "content": "<|reserved_special_token_185|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128194": {
      "content": "<|reserved_special_token_186|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128195": {
      "content": "<|reserved_special_token_187|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128196": {
      "content": "<|reserved_special_token_188|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128197": {
      "content": "<|reserved_special_token_189|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128198": {
      "content": "<|reserved_special_token_190|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128199": {
      "content": "<|reserved_special_token_191|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128200": {
      "content": "<|reserved_special_token_192|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128201": {
      "content": "<|reserved_special_token_193|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128202": {
      "content": "<|reserved_special_token_194|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128203": {
      "content": "<|reserved_special_token_195|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128204": {
      "content": "<|reserved_special_token_196|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128205": {
      "content": "<|reserved_special_token_197|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128206": {
      "content": "<|reserved_special_token_198|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128207": {
      "content": "<|reserved_special_token_199|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128208": {
      "content": "<|reserved_special_token_200|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128209": {
      "content": "<|reserved_special_token_201|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128210": {
      "content": "<|reserved_special_token_202|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128211": {
      "content": "<|reserved_special_token_203|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128212": {
      "content": "<|reserved_special_token_204|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128213": {
      "content": "<|reserved_special_token_205|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128214": {
      "content": "<|reserved_special_token_206|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128215": {
      "content": "<|reserved_special_token_207|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128216": {
      "content": "<|reserved_special_token_208|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128217": {
      "content": "<|reserved_special_token_209|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128218": {
      "content": "<|reserved_special_token_210|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128219": {
      "content": "<|reserved_special_token_211|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128220": {
      "content": "<|reserved_special_token_212|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128221": {
      "content": "<|reserved_special_token_213|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128222": {
      "content": "<|reserved_special_token_214|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128223": {
      "content": "<|reserved_special_token_215|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128224": {
      "content": "<|reserved_special_token_216|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128225": {
      "content": "<|reserved_special_token_217|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128226": {
      "content": "<|reserved_special_token_218|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128227": {
      "content": "<|reserved_special_token_219|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128228": {
      "content": "<|reserved_special_token_220|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128229": {
      "content": "<|reserved_special_token_221|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128230": {
      "content": "<|reserved_special_token_222|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128231": {
      "content": "<|reserved_special_token_223|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128232": {
      "content": "<|reserved_special_token_224|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128233": {
      "content": "<|reserved_special_token_225|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128234": {
      "content": "<|reserved_special_token_226|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128235": {
      "content": "<|reserved_special_token_227|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128236": {
      "content": "<|reserved_special_token_228|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128237": {
      "content": "<|reserved_special_token_229|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128238": {
      "content": "<|reserved_special_token_230|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128239": {
      "content": "<|reserved_special_token_231|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128240": {
      "content": "<|reserved_special_token_232|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128241": {
      "content": "<|reserved_special_token_233|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128242": {
      "content": "<|reserved_special_token_234|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128243": {
      "content": "<|reserved_special_token_235|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128244": {
      "content": "<|reserved_special_token_236|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128245": {
      "content": "<|reserved_special_token_237|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128246": {
      "content": "<|reserved_special_token_238|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128247": {
      "content": "<|reserved_special_token_239|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128248": {
      "content": "<|reserved_special_token_240|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128249": {
      "content": "<|reserved_special_token_241|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128250": {
      "content": "<|reserved_special_token_242|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128251": {
      "content": "<|reserved_special_token_243|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128252": {
      "content": "<|reserved_special_token_244|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128253": {
      "content": "<|reserved_special_token_245|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128254": {
      "content": "<|reserved_special_token_246|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128255": {
      "content": "<|reserved_special_token_247|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "bos_token": "<|begin_of_text|>",
  "chat_template": "{{- bos_token }}\n{%- if custom_tools is defined %}\n    {%- set tools = custom_tools %}\n{%- endif %}\n{%- if not tools_in_user_message is defined %}\n    {%- set tools_in_user_message = true %}\n{%- endif %}\n{%- if not date_string is defined %}\n    {%- if strftime_now is defined %}\n        {%- set date_string = strftime_now(\"%d %b %Y\") %}\n    {%- else %}\n        {%- set date_string = \"26 Jul 2024\" %}\n    {%- endif %}\n{%- endif %}\n{%- if not tools is defined %}\n    {%- set tools = none %}\n{%- endif %}\n\n{#- This block extracts the system message, so we can slot it into the right place. #}\n{%- if messages[0]['role'] == 'system' %}\n    {%- set system_message = messages[0]['content']|trim %}\n    {%- set messages = messages[1:] %}\n{%- else %}\n    {%- set system_message = \"\" %}\n{%- endif %}\n\n{#- System message #}\n{{- \"|start_header_id|>system<|end_header_id|>\\n\\n\" }}\n{%- if tools is not none %}\n    {{- \"Environment: ipython\\n\" }}\n{%- endif %}\n{{- \"Cutting Knowledge Date: December 2023\\n\" }}\n{{- \"Today Date: \" + date_string + \"\\n\\n\" }}\n{%- if tools is not none and not tools_in_user_message %}\n    {{- \"You have access to the following functions. To call a function, please respond with JSON for a function call.\" }}\n    {{- 'Respond in the format {\"name\": function name, \"parameters\": dictionary of argument name and its value}.' 
}}\n    {{- \"Do not use variables.\\n\\n\" }}\n    {%- for t in tools %}\n        {{- t | tojson(indent=4) }}\n        {{- \"\\n\\n\" }}\n    {%- endfor %}\n{%- endif %}\n{{- system_message }}\n{{- \"<|eot_id|>\" }}\n\n{#- Custom tools are passed in a user message with some extra guidance #}\n{%- if tools_in_user_message and not tools is none %}\n    {#- Extract the first user message so we can plug it in here #}\n    {%- if messages | length != 0 %}\n        {%- set first_user_message = messages[0]['content']|trim %}\n        {%- set messages = messages[1:] %}\n    {%- else %}\n        {{- raise_exception(\"Cannot put tools in the first user message when there's no first user message!\") }}\n{%- endif %}\n    {{- '<|start_header_id|>user<|end_header_id|>\\n\\n' -}}\n    {{- \"Given the following functions, please respond with a JSON for a function call \" }}\n    {{- \"with its proper arguments that best answers the given prompt.\\n\\n\" }}\n    {{- 'Respond in the format {\"name\": function name, \"parameters\": dictionary of argument name and its value}.' 
}}\n    {{- \"Do not use variables.\\n\\n\" }}\n    {%- for t in tools %}\n        {{- t | tojson(indent=4) }}\n        {{- \"\\n\\n\" }}\n    {%- endfor %}\n    {{- first_user_message + \"<|eot_id|>\"}}\n{%- endif %}\n\n{%- for message in messages %}\n    {%- if not (message.role == 'ipython' or message.role == 'tool' or 'tool_calls' in message) %}\n        {{- '<|start_header_id|>' + message['role'] + '<|end_header_id|>\\n\\n'+ message['content'] | trim + '<|eot_id|>' }}\n    {%- elif 'tool_calls' in message %}\n        {%- if not message.tool_calls|length == 1 %}\n            {{- raise_exception(\"This model only supports single tool-calls at once!\") }}\n        {%- endif %}\n        {%- set tool_call = message.tool_calls[0].function %}\n        {{- '<|start_header_id|>assistant<|end_header_id|>\\n\\n' -}}\n        {{- '{\"name\": \"' + tool_call.name + '\", ' }}\n        {{- '\"parameters\": ' }}\n        {{- tool_call.arguments | tojson }}\n        {{- \"}\" }}\n        {{- \"<|eot_id|>\" }}\n    {%- elif message.role == \"tool\" or message.role == \"ipython\" %}\n        {{- \"<|start_header_id|>ipython<|end_header_id|>\\n\\n\" }}\n        {%- if message.content is mapping or message.content is iterable %}\n            {{- message.content | tojson }}\n        {%- else %}\n            {{- message.content }}\n        {%- endif %}\n        {{- \"<|eot_id|>\" }}\n    {%- endif %}\n{%- endfor %}\n{%- if add_generation_prompt %}\n    {{- '<|start_header_id|>assistant<|end_header_id|>\\n\\n' }}\n{%- endif %}\n",
  "clean_up_tokenization_spaces": true,
  "eos_token": "<|eot_id|>",
  "model_input_names": [
    "input_ids",
    "attention_mask"
  ],
  "model_max_length": 131072,
  "tokenizer_class": "PreTrainedTokenizerFast"
}
```
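The `chat_template` field in the configuration above is a Jinja template that libraries such as Hugging Face `transformers` render (via `tokenizer.apply_chat_template`) to turn a list of messages into a single prompt string. The basic turn layout it produces — each role wrapped in `<|start_header_id|>`/`<|end_header_id|>` and each turn terminated by `<|eot_id|>` — can be sketched in plain Python. This is a simplified illustration only: it omits the template's tool-calling branches and the knowledge-cutoff and date lines that the real template inserts into the system turn.

```python
def render_chat_prompt(messages, add_generation_prompt=True):
    """Sketch of the turn layout produced by the Llama 3 chat template.

    Simplified: omits tool-calling and the date/knowledge-cutoff header
    lines that the full Jinja template adds to the system turn.
    """
    parts = ["<|begin_of_text|>"]
    for message in messages:
        # Each turn: role header, blank line, trimmed content, end-of-turn token.
        parts.append(
            f"<|start_header_id|>{message['role']}<|end_header_id|>\n\n"
            f"{message['content'].strip()}<|eot_id|>"
        )
    if add_generation_prompt:
        # Open an assistant header so the model generates the next turn.
        parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = render_chat_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
])
```

In practice you would not hand-build this string; `tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)` renders the exact template shown in the configuration, including the tool-use and date handling that this sketch leaves out.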

------
#### [ Llama 3.1 Text ]

```
{
  "added_tokens_decoder": {
    "128000": {
      "content": "<|begin_of_text|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128001": {
      "content": "<|end_of_text|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128002": {
      "content": "<|reserved_special_token_0|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128003": {
      "content": "<|reserved_special_token_1|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128004": {
      "content": "<|finetune_right_pad_id|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128005": {
      "content": "<|reserved_special_token_2|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128006": {
      "content": "<|start_header_id|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128007": {
      "content": "<|end_header_id|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128008": {
      "content": "<|eom_id|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128009": {
      "content": "<|eot_id|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128010": {
      "content": "<|python_tag|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128011": {
      "content": "<|reserved_special_token_3|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128012": {
      "content": "<|reserved_special_token_4|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128013": {
      "content": "<|reserved_special_token_5|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128014": {
      "content": "<|reserved_special_token_6|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128015": {
      "content": "<|reserved_special_token_7|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128016": {
      "content": "<|reserved_special_token_8|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128017": {
      "content": "<|reserved_special_token_9|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128018": {
      "content": "<|reserved_special_token_10|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128019": {
      "content": "<|reserved_special_token_11|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128020": {
      "content": "<|reserved_special_token_12|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128021": {
      "content": "<|reserved_special_token_13|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128022": {
      "content": "<|reserved_special_token_14|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128023": {
      "content": "<|reserved_special_token_15|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128024": {
      "content": "<|reserved_special_token_16|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128025": {
      "content": "<|reserved_special_token_17|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128026": {
      "content": "<|reserved_special_token_18|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128027": {
      "content": "<|reserved_special_token_19|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128028": {
      "content": "<|reserved_special_token_20|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128029": {
      "content": "<|reserved_special_token_21|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128030": {
      "content": "<|reserved_special_token_22|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128031": {
      "content": "<|reserved_special_token_23|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128032": {
      "content": "<|reserved_special_token_24|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128033": {
      "content": "<|reserved_special_token_25|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128034": {
      "content": "<|reserved_special_token_26|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128035": {
      "content": "<|reserved_special_token_27|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128036": {
      "content": "<|reserved_special_token_28|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128037": {
      "content": "<|reserved_special_token_29|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128038": {
      "content": "<|reserved_special_token_30|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128039": {
      "content": "<|reserved_special_token_31|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128040": {
      "content": "<|reserved_special_token_32|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128041": {
      "content": "<|reserved_special_token_33|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128042": {
      "content": "<|reserved_special_token_34|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128043": {
      "content": "<|reserved_special_token_35|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128044": {
      "content": "<|reserved_special_token_36|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128045": {
      "content": "<|reserved_special_token_37|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128046": {
      "content": "<|reserved_special_token_38|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128047": {
      "content": "<|reserved_special_token_39|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128048": {
      "content": "<|reserved_special_token_40|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128049": {
      "content": "<|reserved_special_token_41|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128050": {
      "content": "<|reserved_special_token_42|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128051": {
      "content": "<|reserved_special_token_43|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128052": {
      "content": "<|reserved_special_token_44|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128053": {
      "content": "<|reserved_special_token_45|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128054": {
      "content": "<|reserved_special_token_46|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128055": {
      "content": "<|reserved_special_token_47|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128056": {
      "content": "<|reserved_special_token_48|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128057": {
      "content": "<|reserved_special_token_49|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128058": {
      "content": "<|reserved_special_token_50|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128059": {
      "content": "<|reserved_special_token_51|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128060": {
      "content": "<|reserved_special_token_52|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128061": {
      "content": "<|reserved_special_token_53|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128062": {
      "content": "<|reserved_special_token_54|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128063": {
      "content": "<|reserved_special_token_55|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128064": {
      "content": "<|reserved_special_token_56|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128065": {
      "content": "<|reserved_special_token_57|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128066": {
      "content": "<|reserved_special_token_58|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128067": {
      "content": "<|reserved_special_token_59|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128068": {
      "content": "<|reserved_special_token_60|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128069": {
      "content": "<|reserved_special_token_61|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128070": {
      "content": "<|reserved_special_token_62|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128071": {
      "content": "<|reserved_special_token_63|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128072": {
      "content": "<|reserved_special_token_64|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128073": {
      "content": "<|reserved_special_token_65|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128074": {
      "content": "<|reserved_special_token_66|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128075": {
      "content": "<|reserved_special_token_67|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128076": {
      "content": "<|reserved_special_token_68|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128077": {
      "content": "<|reserved_special_token_69|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128078": {
      "content": "<|reserved_special_token_70|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128079": {
      "content": "<|reserved_special_token_71|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128080": {
      "content": "<|reserved_special_token_72|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128081": {
      "content": "<|reserved_special_token_73|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128082": {
      "content": "<|reserved_special_token_74|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128083": {
      "content": "<|reserved_special_token_75|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128084": {
      "content": "<|reserved_special_token_76|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128085": {
      "content": "<|reserved_special_token_77|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128086": {
      "content": "<|reserved_special_token_78|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128087": {
      "content": "<|reserved_special_token_79|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128088": {
      "content": "<|reserved_special_token_80|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128089": {
      "content": "<|reserved_special_token_81|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128090": {
      "content": "<|reserved_special_token_82|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128091": {
      "content": "<|reserved_special_token_83|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128092": {
      "content": "<|reserved_special_token_84|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128093": {
      "content": "<|reserved_special_token_85|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128094": {
      "content": "<|reserved_special_token_86|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128095": {
      "content": "<|reserved_special_token_87|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128096": {
      "content": "<|reserved_special_token_88|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128097": {
      "content": "<|reserved_special_token_89|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128098": {
      "content": "<|reserved_special_token_90|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128099": {
      "content": "<|reserved_special_token_91|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128100": {
      "content": "<|reserved_special_token_92|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128101": {
      "content": "<|reserved_special_token_93|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128102": {
      "content": "<|reserved_special_token_94|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128103": {
      "content": "<|reserved_special_token_95|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128104": {
      "content": "<|reserved_special_token_96|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128105": {
      "content": "<|reserved_special_token_97|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128106": {
      "content": "<|reserved_special_token_98|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128107": {
      "content": "<|reserved_special_token_99|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128108": {
      "content": "<|reserved_special_token_100|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128109": {
      "content": "<|reserved_special_token_101|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128110": {
      "content": "<|reserved_special_token_102|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128111": {
      "content": "<|reserved_special_token_103|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128112": {
      "content": "<|reserved_special_token_104|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128113": {
      "content": "<|reserved_special_token_105|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128114": {
      "content": "<|reserved_special_token_106|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128115": {
      "content": "<|reserved_special_token_107|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128116": {
      "content": "<|reserved_special_token_108|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128117": {
      "content": "<|reserved_special_token_109|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128118": {
      "content": "<|reserved_special_token_110|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128119": {
      "content": "<|reserved_special_token_111|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128120": {
      "content": "<|reserved_special_token_112|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128121": {
      "content": "<|reserved_special_token_113|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128122": {
      "content": "<|reserved_special_token_114|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128123": {
      "content": "<|reserved_special_token_115|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128124": {
      "content": "<|reserved_special_token_116|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128125": {
      "content": "<|reserved_special_token_117|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128126": {
      "content": "<|reserved_special_token_118|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128127": {
      "content": "<|reserved_special_token_119|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128128": {
      "content": "<|reserved_special_token_120|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128129": {
      "content": "<|reserved_special_token_121|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128130": {
      "content": "<|reserved_special_token_122|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128131": {
      "content": "<|reserved_special_token_123|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128132": {
      "content": "<|reserved_special_token_124|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128133": {
      "content": "<|reserved_special_token_125|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128134": {
      "content": "<|reserved_special_token_126|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128135": {
      "content": "<|reserved_special_token_127|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128136": {
      "content": "<|reserved_special_token_128|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128137": {
      "content": "<|reserved_special_token_129|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128138": {
      "content": "<|reserved_special_token_130|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128139": {
      "content": "<|reserved_special_token_131|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128140": {
      "content": "<|reserved_special_token_132|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128141": {
      "content": "<|reserved_special_token_133|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128142": {
      "content": "<|reserved_special_token_134|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128143": {
      "content": "<|reserved_special_token_135|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128144": {
      "content": "<|reserved_special_token_136|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128145": {
      "content": "<|reserved_special_token_137|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128146": {
      "content": "<|reserved_special_token_138|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128147": {
      "content": "<|reserved_special_token_139|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128148": {
      "content": "<|reserved_special_token_140|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128149": {
      "content": "<|reserved_special_token_141|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128150": {
      "content": "<|reserved_special_token_142|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128151": {
      "content": "<|reserved_special_token_143|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128152": {
      "content": "<|reserved_special_token_144|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128153": {
      "content": "<|reserved_special_token_145|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128154": {
      "content": "<|reserved_special_token_146|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128155": {
      "content": "<|reserved_special_token_147|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128156": {
      "content": "<|reserved_special_token_148|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128157": {
      "content": "<|reserved_special_token_149|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128158": {
      "content": "<|reserved_special_token_150|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128159": {
      "content": "<|reserved_special_token_151|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128160": {
      "content": "<|reserved_special_token_152|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128161": {
      "content": "<|reserved_special_token_153|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128162": {
      "content": "<|reserved_special_token_154|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128163": {
      "content": "<|reserved_special_token_155|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128164": {
      "content": "<|reserved_special_token_156|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128165": {
      "content": "<|reserved_special_token_157|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128166": {
      "content": "<|reserved_special_token_158|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128167": {
      "content": "<|reserved_special_token_159|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128168": {
      "content": "<|reserved_special_token_160|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128169": {
      "content": "<|reserved_special_token_161|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128170": {
      "content": "<|reserved_special_token_162|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128171": {
      "content": "<|reserved_special_token_163|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128172": {
      "content": "<|reserved_special_token_164|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128173": {
      "content": "<|reserved_special_token_165|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128174": {
      "content": "<|reserved_special_token_166|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128175": {
      "content": "<|reserved_special_token_167|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128176": {
      "content": "<|reserved_special_token_168|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128177": {
      "content": "<|reserved_special_token_169|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128178": {
      "content": "<|reserved_special_token_170|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128179": {
      "content": "<|reserved_special_token_171|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128180": {
      "content": "<|reserved_special_token_172|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128181": {
      "content": "<|reserved_special_token_173|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128182": {
      "content": "<|reserved_special_token_174|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128183": {
      "content": "<|reserved_special_token_175|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128184": {
      "content": "<|reserved_special_token_176|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128185": {
      "content": "<|reserved_special_token_177|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128186": {
      "content": "<|reserved_special_token_178|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128187": {
      "content": "<|reserved_special_token_179|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128188": {
      "content": "<|reserved_special_token_180|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128189": {
      "content": "<|reserved_special_token_181|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128190": {
      "content": "<|reserved_special_token_182|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128191": {
      "content": "<|reserved_special_token_183|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128192": {
      "content": "<|reserved_special_token_184|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128193": {
      "content": "<|reserved_special_token_185|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128194": {
      "content": "<|reserved_special_token_186|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128195": {
      "content": "<|reserved_special_token_187|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128196": {
      "content": "<|reserved_special_token_188|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128197": {
      "content": "<|reserved_special_token_189|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128198": {
      "content": "<|reserved_special_token_190|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128199": {
      "content": "<|reserved_special_token_191|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128200": {
      "content": "<|reserved_special_token_192|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128201": {
      "content": "<|reserved_special_token_193|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128202": {
      "content": "<|reserved_special_token_194|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128203": {
      "content": "<|reserved_special_token_195|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128204": {
      "content": "<|reserved_special_token_196|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128205": {
      "content": "<|reserved_special_token_197|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128206": {
      "content": "<|reserved_special_token_198|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128207": {
      "content": "<|reserved_special_token_199|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128208": {
      "content": "<|reserved_special_token_200|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128209": {
      "content": "<|reserved_special_token_201|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128210": {
      "content": "<|reserved_special_token_202|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128211": {
      "content": "<|reserved_special_token_203|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128212": {
      "content": "<|reserved_special_token_204|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128213": {
      "content": "<|reserved_special_token_205|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128214": {
      "content": "<|reserved_special_token_206|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128215": {
      "content": "<|reserved_special_token_207|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128216": {
      "content": "<|reserved_special_token_208|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128217": {
      "content": "<|reserved_special_token_209|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128218": {
      "content": "<|reserved_special_token_210|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128219": {
      "content": "<|reserved_special_token_211|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128220": {
      "content": "<|reserved_special_token_212|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128221": {
      "content": "<|reserved_special_token_213|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128222": {
      "content": "<|reserved_special_token_214|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128223": {
      "content": "<|reserved_special_token_215|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128224": {
      "content": "<|reserved_special_token_216|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128225": {
      "content": "<|reserved_special_token_217|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128226": {
      "content": "<|reserved_special_token_218|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128227": {
      "content": "<|reserved_special_token_219|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128228": {
      "content": "<|reserved_special_token_220|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128229": {
      "content": "<|reserved_special_token_221|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128230": {
      "content": "<|reserved_special_token_222|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128231": {
      "content": "<|reserved_special_token_223|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128232": {
      "content": "<|reserved_special_token_224|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128233": {
      "content": "<|reserved_special_token_225|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128234": {
      "content": "<|reserved_special_token_226|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128235": {
      "content": "<|reserved_special_token_227|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128236": {
      "content": "<|reserved_special_token_228|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128237": {
      "content": "<|reserved_special_token_229|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128238": {
      "content": "<|reserved_special_token_230|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128239": {
      "content": "<|reserved_special_token_231|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128240": {
      "content": "<|reserved_special_token_232|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128241": {
      "content": "<|reserved_special_token_233|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128242": {
      "content": "<|reserved_special_token_234|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128243": {
      "content": "<|reserved_special_token_235|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128244": {
      "content": "<|reserved_special_token_236|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128245": {
      "content": "<|reserved_special_token_237|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128246": {
      "content": "<|reserved_special_token_238|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128247": {
      "content": "<|reserved_special_token_239|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128248": {
      "content": "<|reserved_special_token_240|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128249": {
      "content": "<|reserved_special_token_241|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128250": {
      "content": "<|reserved_special_token_242|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128251": {
      "content": "<|reserved_special_token_243|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128252": {
      "content": "<|reserved_special_token_244|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128253": {
      "content": "<|reserved_special_token_245|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128254": {
      "content": "<|reserved_special_token_246|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128255": {
      "content": "<|reserved_special_token_247|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "bos_token": "<|begin_of_text|>",
  "chat_template": "{{- bos_token }}\n{%- if custom_tools is defined %}\n    {%- set tools = custom_tools %}\n{%- endif %}\n{%- if not tools_in_user_message is defined %}\n    {%- set tools_in_user_message = true %}\n{%- endif %}\n{%- if not date_string is defined %}\n    {%- set date_string = \"26 Jul 2024\" %}\n{%- endif %}\n{%- if not tools is defined %}\n    {%- set tools = none %}\n{%- endif %}\n\n{#- This block extracts the system message, so we can slot it into the right place. #}\n{%- if messages[0]['role'] == 'system' %}\n    {%- set system_message = messages[0]['content']|trim %}\n    {%- set messages = messages[1:] %}\n{%- else %}\n    {%- set system_message = \"\" %}\n{%- endif %}\n\n{#- System message + builtin tools #}\n{{- \"<|start_header_id|>system<|end_header_id|>\\n\\n\" }}\n{%- if builtin_tools is defined or tools is not none %}\n    {{- \"Environment: ipython\\n\" }}\n{%- endif %}\n{%- if builtin_tools is defined %}\n    {{- \"Tools: \" + builtin_tools | reject('equalto', 'code_interpreter') | join(\", \") + \"\\n\\n\"}}\n{%- endif %}\n{{- \"Cutting Knowledge Date: December 2023\\n\" }}\n{{- \"Today Date: \" + date_string + \"\\n\\n\" }}\n{%- if tools is not none and not tools_in_user_message %}\n    {{- \"You have access to the following functions. To call a function, please respond with JSON for a function call.\" }}\n    {{- 'Respond in the format {\"name\": function name, \"parameters\": dictionary of argument name and its value}.' }}\n    {{- \"Do not use variables.\\n\\n\" }}\n    {%- for t in tools %}\n        {{- t | tojson(indent=4) }}\n        {{- \"\\n\\n\" }}\n    {%- endfor %}\n{%- endif %}\n{{- system_message }}\n{{- \"<|eot_id|>\" }}\n\n{#- Custom tools are passed in a user message with some extra guidance #}\n{%- if tools_in_user_message and not tools is none %}\n    {#- Extract the first user message so we can plug it in here #}\n    {%- if messages | length != 0 %}\n        {%- set first_user_message = messages[0]['content']|trim %}\n        {%- set messages = messages[1:] %}\n    {%- else %}\n        {{- raise_exception(\"Cannot put tools in the first user message when there's no first user message!\") }}\n{%- endif %}\n    {{- '<|start_header_id|>user<|end_header_id|>\\n\\n' -}}\n    {{- \"Given the following functions, please respond with a JSON for a function call \" }}\n    {{- \"with its proper arguments that best answers the given prompt.\\n\\n\" }}\n    {{- 'Respond in the format {\"name\": function name, \"parameters\": dictionary of argument name and its value}.' }}\n    {{- \"Do not use variables.\\n\\n\" }}\n    {%- for t in tools %}\n        {{- t | tojson(indent=4) }}\n        {{- \"\\n\\n\" }}\n    {%- endfor %}\n    {{- first_user_message + \"<|eot_id|>\"}}\n{%- endif %}\n\n{%- for message in messages %}\n    {%- if not (message.role == 'ipython' or message.role == 'tool' or 'tool_calls' in message) %}\n        {{- '<|start_header_id|>' + message['role'] + '<|end_header_id|>\\n\\n'+ message['content'] | trim + '<|eot_id|>' }}\n    {%- elif 'tool_calls' in message %}\n        {%- if not message.tool_calls|length == 1 %}\n            {{- raise_exception(\"This model only supports single tool-calls at once!\") }}\n        {%- endif %}\n        {%- set tool_call = message.tool_calls[0].function %}\n        {%- if builtin_tools is defined and tool_call.name in builtin_tools %}\n            {{- '<|start_header_id|>assistant<|end_header_id|>\\n\\n' -}}\n            {{- \"<|python_tag|>\" + tool_call.name + \".call(\" }}\n            {%- for arg_name, arg_val in tool_call.arguments | items %}\n                {{- arg_name + '=\"' + arg_val + '\"' }}\n                {%- if not loop.last %}\n                    {{- \", \" }}\n                {%- endif %}\n                {%- endfor %}\n            {{- \")\" }}\n        {%- else  %}\n            {{- '<|start_header_id|>assistant<|end_header_id|>\\n\\n' -}}\n            {{- '{\"name\": \"' + tool_call.name + '\", ' }}\n            {{- '\"parameters\": ' }}\n            {{- tool_call.arguments | tojson }}\n            {{- \"}\" }}\n        {%- endif %}\n        {%- if builtin_tools is defined %}\n            {#- This means we're in ipython mode #}\n            {{- \"<|eom_id|>\" }}\n        {%- else %}\n            {{- \"<|eot_id|>\" }}\n        {%- endif %}\n    {%- elif message.role == \"tool\" or message.role == \"ipython\" %}\n        {{- \"<|start_header_id|>ipython<|end_header_id|>\\n\\n\" }}\n        {%- if message.content is mapping or message.content is iterable %}\n            {{- message.content | tojson }}\n        {%- else %}\n            {{- message.content }}\n        {%- endif %}\n        {{- \"<|eot_id|>\" }}\n    {%- endif %}\n{%- endfor %}\n{%- if add_generation_prompt %}\n    {{- '<|start_header_id|>assistant<|end_header_id|>\\n\\n' }}\n{%- endif %}\n",
  "clean_up_tokenization_spaces": true,
  "eos_token": "<|eot_id|>",
  "model_input_names": [
    "input_ids",
    "attention_mask"
  ],
  "model_max_length": 131072,
  "tokenizer_class": "PreTrainedTokenizerFast"
}
```
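Before importing a model, it can be worth sanity-checking that a `tokenizer_config.json` like the one above is well-formed JSON and that its `bos_token` and `eos_token` are actually declared as special tokens in the `added_tokens_decoder` table. This is not part of the AWS tooling; it is a minimal sketch using only the Python standard library, and the inline excerpt below is an illustrative subset of the full file shown above.

```python
import json

# Illustrative excerpt of a Llama-style tokenizer_config.json; a real file
# contains the full added_tokens_decoder table shown in the documentation.
config_text = """
{
  "added_tokens_decoder": {
    "128000": {"content": "<|begin_of_text|>", "special": true},
    "128009": {"content": "<|eot_id|>", "special": true}
  },
  "bos_token": "<|begin_of_text|>",
  "eos_token": "<|eot_id|>",
  "model_max_length": 131072,
  "tokenizer_class": "PreTrainedTokenizerFast"
}
"""

# json.loads raises ValueError on malformed JSON, which catches truncated
# or hand-edited config files early.
config = json.loads(config_text)

# Collect every token marked "special": true in the added-tokens table.
special = {
    entry["content"]
    for entry in config["added_tokens_decoder"].values()
    if entry.get("special")
}

# The declared BOS/EOS tokens should be among the special tokens.
assert config["bos_token"] in special
assert config["eos_token"] in special
```

When the real file is on disk, the same checks apply after `config = json.load(open("tokenizer_config.json"))`.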

------
#### [ Llama 3.0 Text ]

```
{
  "added_tokens_decoder": {
    "128000": {
      "content": "<|begin_of_text|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128001": {
      "content": "<|end_of_text|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128002": {
      "content": "<|reserved_special_token_0|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128003": {
      "content": "<|reserved_special_token_1|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128004": {
      "content": "<|reserved_special_token_2|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128005": {
      "content": "<|reserved_special_token_3|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128006": {
      "content": "<|start_header_id|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128007": {
      "content": "<|end_header_id|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128008": {
      "content": "<|reserved_special_token_4|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128009": {
      "content": "<|eot_id|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128010": {
      "content": "<|reserved_special_token_5|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128011": {
      "content": "<|reserved_special_token_6|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128012": {
      "content": "<|reserved_special_token_7|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128013": {
      "content": "<|reserved_special_token_8|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128014": {
      "content": "<|reserved_special_token_9|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128015": {
      "content": "<|reserved_special_token_10|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128016": {
      "content": "<|reserved_special_token_11|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128017": {
      "content": "<|reserved_special_token_12|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128018": {
      "content": "<|reserved_special_token_13|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128019": {
      "content": "<|reserved_special_token_14|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128020": {
      "content": "<|reserved_special_token_15|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128021": {
      "content": "<|reserved_special_token_16|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128022": {
      "content": "<|reserved_special_token_17|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128023": {
      "content": "<|reserved_special_token_18|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128024": {
      "content": "<|reserved_special_token_19|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128025": {
      "content": "<|reserved_special_token_20|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128026": {
      "content": "<|reserved_special_token_21|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128027": {
      "content": "<|reserved_special_token_22|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128028": {
      "content": "<|reserved_special_token_23|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128029": {
      "content": "<|reserved_special_token_24|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128030": {
      "content": "<|reserved_special_token_25|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128031": {
      "content": "<|reserved_special_token_26|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128032": {
      "content": "<|reserved_special_token_27|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128033": {
      "content": "<|reserved_special_token_28|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128034": {
      "content": "<|reserved_special_token_29|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128035": {
      "content": "<|reserved_special_token_30|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128036": {
      "content": "<|reserved_special_token_31|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128037": {
      "content": "<|reserved_special_token_32|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128038": {
      "content": "<|reserved_special_token_33|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128039": {
      "content": "<|reserved_special_token_34|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128040": {
      "content": "<|reserved_special_token_35|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128041": {
      "content": "<|reserved_special_token_36|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128042": {
      "content": "<|reserved_special_token_37|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128043": {
      "content": "<|reserved_special_token_38|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128044": {
      "content": "<|reserved_special_token_39|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128045": {
      "content": "<|reserved_special_token_40|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128046": {
      "content": "<|reserved_special_token_41|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128047": {
      "content": "<|reserved_special_token_42|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128048": {
      "content": "<|reserved_special_token_43|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128049": {
      "content": "<|reserved_special_token_44|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128050": {
      "content": "<|reserved_special_token_45|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128051": {
      "content": "<|reserved_special_token_46|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128052": {
      "content": "<|reserved_special_token_47|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128053": {
      "content": "<|reserved_special_token_48|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128054": {
      "content": "<|reserved_special_token_49|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128055": {
      "content": "<|reserved_special_token_50|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128056": {
      "content": "<|reserved_special_token_51|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128057": {
      "content": "<|reserved_special_token_52|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128058": {
      "content": "<|reserved_special_token_53|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128059": {
      "content": "<|reserved_special_token_54|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128060": {
      "content": "<|reserved_special_token_55|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128061": {
      "content": "<|reserved_special_token_56|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128062": {
      "content": "<|reserved_special_token_57|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128063": {
      "content": "<|reserved_special_token_58|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128064": {
      "content": "<|reserved_special_token_59|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128065": {
      "content": "<|reserved_special_token_60|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128066": {
      "content": "<|reserved_special_token_61|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128067": {
      "content": "<|reserved_special_token_62|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128068": {
      "content": "<|reserved_special_token_63|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128069": {
      "content": "<|reserved_special_token_64|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128070": {
      "content": "<|reserved_special_token_65|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128071": {
      "content": "<|reserved_special_token_66|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128072": {
      "content": "<|reserved_special_token_67|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128073": {
      "content": "<|reserved_special_token_68|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128074": {
      "content": "<|reserved_special_token_69|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128075": {
      "content": "<|reserved_special_token_70|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128076": {
      "content": "<|reserved_special_token_71|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128077": {
      "content": "<|reserved_special_token_72|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128078": {
      "content": "<|reserved_special_token_73|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128079": {
      "content": "<|reserved_special_token_74|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128080": {
      "content": "<|reserved_special_token_75|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128081": {
      "content": "<|reserved_special_token_76|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128082": {
      "content": "<|reserved_special_token_77|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128083": {
      "content": "<|reserved_special_token_78|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128084": {
      "content": "<|reserved_special_token_79|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128085": {
      "content": "<|reserved_special_token_80|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128086": {
      "content": "<|reserved_special_token_81|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128087": {
      "content": "<|reserved_special_token_82|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128088": {
      "content": "<|reserved_special_token_83|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128089": {
      "content": "<|reserved_special_token_84|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128090": {
      "content": "<|reserved_special_token_85|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128091": {
      "content": "<|reserved_special_token_86|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128092": {
      "content": "<|reserved_special_token_87|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128093": {
      "content": "<|reserved_special_token_88|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128094": {
      "content": "<|reserved_special_token_89|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128095": {
      "content": "<|reserved_special_token_90|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128096": {
      "content": "<|reserved_special_token_91|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128097": {
      "content": "<|reserved_special_token_92|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128098": {
      "content": "<|reserved_special_token_93|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128099": {
      "content": "<|reserved_special_token_94|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128100": {
      "content": "<|reserved_special_token_95|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128101": {
      "content": "<|reserved_special_token_96|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128102": {
      "content": "<|reserved_special_token_97|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128103": {
      "content": "<|reserved_special_token_98|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128104": {
      "content": "<|reserved_special_token_99|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128105": {
      "content": "<|reserved_special_token_100|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128106": {
      "content": "<|reserved_special_token_101|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128107": {
      "content": "<|reserved_special_token_102|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128108": {
      "content": "<|reserved_special_token_103|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128109": {
      "content": "<|reserved_special_token_104|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128110": {
      "content": "<|reserved_special_token_105|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128111": {
      "content": "<|reserved_special_token_106|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128112": {
      "content": "<|reserved_special_token_107|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128113": {
      "content": "<|reserved_special_token_108|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128114": {
      "content": "<|reserved_special_token_109|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128115": {
      "content": "<|reserved_special_token_110|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128116": {
      "content": "<|reserved_special_token_111|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128117": {
      "content": "<|reserved_special_token_112|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128118": {
      "content": "<|reserved_special_token_113|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128119": {
      "content": "<|reserved_special_token_114|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128120": {
      "content": "<|reserved_special_token_115|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128121": {
      "content": "<|reserved_special_token_116|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128122": {
      "content": "<|reserved_special_token_117|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128123": {
      "content": "<|reserved_special_token_118|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128124": {
      "content": "<|reserved_special_token_119|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128125": {
      "content": "<|reserved_special_token_120|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128126": {
      "content": "<|reserved_special_token_121|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128127": {
      "content": "<|reserved_special_token_122|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128128": {
      "content": "<|reserved_special_token_123|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128129": {
      "content": "<|reserved_special_token_124|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128130": {
      "content": "<|reserved_special_token_125|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128131": {
      "content": "<|reserved_special_token_126|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128132": {
      "content": "<|reserved_special_token_127|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128133": {
      "content": "<|reserved_special_token_128|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128134": {
      "content": "<|reserved_special_token_129|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128135": {
      "content": "<|reserved_special_token_130|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128136": {
      "content": "<|reserved_special_token_131|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128137": {
      "content": "<|reserved_special_token_132|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128138": {
      "content": "<|reserved_special_token_133|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128139": {
      "content": "<|reserved_special_token_134|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128140": {
      "content": "<|reserved_special_token_135|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128141": {
      "content": "<|reserved_special_token_136|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128142": {
      "content": "<|reserved_special_token_137|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128143": {
      "content": "<|reserved_special_token_138|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128144": {
      "content": "<|reserved_special_token_139|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128145": {
      "content": "<|reserved_special_token_140|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128146": {
      "content": "<|reserved_special_token_141|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128147": {
      "content": "<|reserved_special_token_142|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128148": {
      "content": "<|reserved_special_token_143|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128149": {
      "content": "<|reserved_special_token_144|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128150": {
      "content": "<|reserved_special_token_145|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128151": {
      "content": "<|reserved_special_token_146|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128152": {
      "content": "<|reserved_special_token_147|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128153": {
      "content": "<|reserved_special_token_148|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128154": {
      "content": "<|reserved_special_token_149|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128155": {
      "content": "<|reserved_special_token_150|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128156": {
      "content": "<|reserved_special_token_151|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128157": {
      "content": "<|reserved_special_token_152|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128158": {
      "content": "<|reserved_special_token_153|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128159": {
      "content": "<|reserved_special_token_154|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128160": {
      "content": "<|reserved_special_token_155|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128161": {
      "content": "<|reserved_special_token_156|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128162": {
      "content": "<|reserved_special_token_157|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128163": {
      "content": "<|reserved_special_token_158|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128164": {
      "content": "<|reserved_special_token_159|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128165": {
      "content": "<|reserved_special_token_160|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128166": {
      "content": "<|reserved_special_token_161|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128167": {
      "content": "<|reserved_special_token_162|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128168": {
      "content": "<|reserved_special_token_163|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128169": {
      "content": "<|reserved_special_token_164|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128170": {
      "content": "<|reserved_special_token_165|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128171": {
      "content": "<|reserved_special_token_166|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128172": {
      "content": "<|reserved_special_token_167|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128173": {
      "content": "<|reserved_special_token_168|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128174": {
      "content": "<|reserved_special_token_169|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128175": {
      "content": "<|reserved_special_token_170|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128176": {
      "content": "<|reserved_special_token_171|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128177": {
      "content": "<|reserved_special_token_172|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128178": {
      "content": "<|reserved_special_token_173|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128179": {
      "content": "<|reserved_special_token_174|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128180": {
      "content": "<|reserved_special_token_175|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128181": {
      "content": "<|reserved_special_token_176|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128182": {
      "content": "<|reserved_special_token_177|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128183": {
      "content": "<|reserved_special_token_178|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128184": {
      "content": "<|reserved_special_token_179|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128185": {
      "content": "<|reserved_special_token_180|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128186": {
      "content": "<|reserved_special_token_181|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128187": {
      "content": "<|reserved_special_token_182|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128188": {
      "content": "<|reserved_special_token_183|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128189": {
      "content": "<|reserved_special_token_184|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128190": {
      "content": "<|reserved_special_token_185|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128191": {
      "content": "<|reserved_special_token_186|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128192": {
      "content": "<|reserved_special_token_187|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128193": {
      "content": "<|reserved_special_token_188|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128194": {
      "content": "<|reserved_special_token_189|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128195": {
      "content": "<|reserved_special_token_190|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128196": {
      "content": "<|reserved_special_token_191|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128197": {
      "content": "<|reserved_special_token_192|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128198": {
      "content": "<|reserved_special_token_193|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128199": {
      "content": "<|reserved_special_token_194|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128200": {
      "content": "<|reserved_special_token_195|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128201": {
      "content": "<|reserved_special_token_196|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128202": {
      "content": "<|reserved_special_token_197|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128203": {
      "content": "<|reserved_special_token_198|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128204": {
      "content": "<|reserved_special_token_199|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128205": {
      "content": "<|reserved_special_token_200|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128206": {
      "content": "<|reserved_special_token_201|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128207": {
      "content": "<|reserved_special_token_202|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128208": {
      "content": "<|reserved_special_token_203|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128209": {
      "content": "<|reserved_special_token_204|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128210": {
      "content": "<|reserved_special_token_205|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128211": {
      "content": "<|reserved_special_token_206|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128212": {
      "content": "<|reserved_special_token_207|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128213": {
      "content": "<|reserved_special_token_208|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128214": {
      "content": "<|reserved_special_token_209|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128215": {
      "content": "<|reserved_special_token_210|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128216": {
      "content": "<|reserved_special_token_211|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128217": {
      "content": "<|reserved_special_token_212|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128218": {
      "content": "<|reserved_special_token_213|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128219": {
      "content": "<|reserved_special_token_214|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128220": {
      "content": "<|reserved_special_token_215|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128221": {
      "content": "<|reserved_special_token_216|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128222": {
      "content": "<|reserved_special_token_217|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128223": {
      "content": "<|reserved_special_token_218|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128224": {
      "content": "<|reserved_special_token_219|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128225": {
      "content": "<|reserved_special_token_220|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128226": {
      "content": "<|reserved_special_token_221|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128227": {
      "content": "<|reserved_special_token_222|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128228": {
      "content": "<|reserved_special_token_223|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128229": {
      "content": "<|reserved_special_token_224|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128230": {
      "content": "<|reserved_special_token_225|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128231": {
      "content": "<|reserved_special_token_226|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128232": {
      "content": "<|reserved_special_token_227|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128233": {
      "content": "<|reserved_special_token_228|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128234": {
      "content": "<|reserved_special_token_229|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128235": {
      "content": "<|reserved_special_token_230|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128236": {
      "content": "<|reserved_special_token_231|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128237": {
      "content": "<|reserved_special_token_232|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128238": {
      "content": "<|reserved_special_token_233|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128239": {
      "content": "<|reserved_special_token_234|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128240": {
      "content": "<|reserved_special_token_235|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128241": {
      "content": "<|reserved_special_token_236|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128242": {
      "content": "<|reserved_special_token_237|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128243": {
      "content": "<|reserved_special_token_238|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128244": {
      "content": "<|reserved_special_token_239|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128245": {
      "content": "<|reserved_special_token_240|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128246": {
      "content": "<|reserved_special_token_241|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128247": {
      "content": "<|reserved_special_token_242|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128248": {
      "content": "<|reserved_special_token_243|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128249": {
      "content": "<|reserved_special_token_244|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128250": {
      "content": "<|reserved_special_token_245|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128251": {
      "content": "<|reserved_special_token_246|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128252": {
      "content": "<|reserved_special_token_247|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128253": {
      "content": "<|reserved_special_token_248|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128254": {
      "content": "<|reserved_special_token_249|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128255": {
      "content": "<|reserved_special_token_250|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "bos_token": "<|begin_of_text|>",
  "chat_template": "{% set loop_messages = messages %}{% for message in loop_messages %}{% set content = '<|start_header_id|>' + message['role'] + '<|end_header_id|>\n\n'+ message['content'] | trim + '<|eot_id|>' %}{% if loop.index0 == 0 %}{% set content = bos_token + content %}{% endif %}{{ content }}{% endfor %}{% if add_generation_prompt %}{{ '<|start_header_id|>assistant<|end_header_id|>\n\n' }}{% endif %}",
  "clean_up_tokenization_spaces": true,
  "eos_token": "<|eot_id|>",
  "model_input_names": [
    "input_ids",
    "attention_mask"
  ],
  "model_max_length": 1000000000000000019884624838656,
  "tokenizer_class": "PreTrainedTokenizerFast"
}
```
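The `chat_template` field above is a Jinja expression. As a sketch of what it produces, the following plain-Python function mirrors its logic (the function name and example messages are illustrative, not part of the config):

```python
def build_llama3_prompt(messages, add_generation_prompt=True):
    """Mirror the Llama 3 chat_template: each message is wrapped in
    header tokens and terminated by <|eot_id|>, and the first message
    is prefixed with the BOS token."""
    bos_token = "<|begin_of_text|>"
    prompt = ""
    for i, message in enumerate(messages):
        content = (
            "<|start_header_id|>" + message["role"] + "<|end_header_id|>\n\n"
            + message["content"].strip() + "<|eot_id|>"
        )
        if i == 0:
            content = bos_token + content
        prompt += content
    if add_generation_prompt:
        # Leave the prompt open at an assistant header so the model
        # generates the assistant turn next.
        prompt += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return prompt

print(build_llama3_prompt([{"role": "user", "content": "Hello"}]))
# <|begin_of_text|><|start_header_id|>user<|end_header_id|>
#
# Hello<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```

Note that `eos_token` (`<|eot_id|>`) closes every completed turn, which is why it is also the stop token for generation.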

------
#### [ Llama 2 Chat models ]

```
{
  "add_bos_token": true,
  "add_eos_token": false,
  "bos_token": {
    "__type": "AddedToken",
    "content": "<s>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "chat_template": "{% if messages[0]['role'] == 'system' %}{% set loop_messages = messages[1:] %}{% set system_message = messages[0]['content'] %}{% else %}{% set loop_messages = messages %}{% set system_message = false %}{% endif %}{% for message in loop_messages %}{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}{{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}{% endif %}{% if loop.index0 == 0 and system_message != false %}{% set content = '<<SYS>>\\n' + system_message + '\\n<</SYS>>\\n\\n' + message['content'] %}{% else %}{% set content = message['content'] %}{% endif %}{% if message['role'] == 'user' %}{{ bos_token + '[INST] ' + content.strip() + ' [/INST]' }}{% elif message['role'] == 'assistant' %}{{ ' '  + content.strip() + ' ' + eos_token }}{% endif %}{% endfor %}",
  "clean_up_tokenization_spaces": false,
  "eos_token": {
    "__type": "AddedToken",
    "content": "</s>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "legacy": false,
  "model_max_length": 1000000000000000019884624838656,
  "pad_token": null,
  "padding_side": "right",
  "sp_model_kwargs": {},
  "tokenizer_class": "LlamaTokenizer",
  "unk_token": {
    "__type": "AddedToken",
    "content": "<unk>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
```
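To make the Llama 2 `chat_template` concrete, this plain-Python sketch mirrors its logic: an optional leading system message is folded into the first user turn inside `<<SYS>>` tags, and each user/assistant exchange becomes `<s>[INST] ... [/INST] answer </s>` (the function name and example messages are illustrative):

```python
def build_llama2_prompt(messages):
    """Mirror the Llama 2 chat_template: wrap user turns in [INST]
    markers, fold the system message into the first user turn, and
    close assistant turns with the EOS token."""
    bos_token, eos_token = "<s>", "</s>"
    system_message = None
    if messages and messages[0]["role"] == "system":
        system_message = messages[0]["content"]
        messages = messages[1:]
    prompt = ""
    for i, message in enumerate(messages):
        # The template enforces strict user/assistant alternation.
        expected = "user" if i % 2 == 0 else "assistant"
        if message["role"] != expected:
            raise ValueError("Conversation roles must alternate user/assistant/...")
        content = message["content"]
        if i == 0 and system_message is not None:
            content = "<<SYS>>\n" + system_message + "\n<</SYS>>\n\n" + content
        if message["role"] == "user":
            prompt += bos_token + "[INST] " + content.strip() + " [/INST]"
        else:
            prompt += " " + content.strip() + " " + eos_token
    return prompt

print(build_llama2_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hi"},
]))
```

Because `add_bos_token` is `true` and the template also emits `<s>` before every `[INST]`, the BOS token recurs at each user turn in multi-turn conversations rather than appearing only once.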

------
#### [ Mistral 8x7b Text ]

```
{
  "add_bos_token": true,
  "add_eos_token": false,
  "add_prefix_space": null,
  "added_tokens_decoder": {
    "0": {
      "content": "<unk>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "1": {
      "content": "<s>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "2": {
      "content": "</s>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "additional_special_tokens": [],
  "bos_token": "<s>",
  "chat_template": "{%- if messages[0]['role'] == 'system' %}\n    {%- set system_message = messages[0]['content'] %}\n    {%- set loop_messages = messages[1:] %}\n{%- else %}\n    {%- set loop_messages = messages %}\n{%- endif %}\n\n{{- bos_token }}\n{%- for message in loop_messages %}\n    {%- if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}\n        {{- raise_exception('After the optional system message, conversation roles must alternate user/assistant/user/assistant/...') }}\n    {%- endif %}\n    {%- if message['role'] == 'user' %}\n        {%- if loop.first and system_message is defined %}\n            {{- ' [INST] ' + system_message + '\\n\\n' + message['content'] + ' [/INST]' }}\n        {%- else %}\n            {{- ' [INST] ' + message['content'] + ' [/INST]' }}\n        {%- endif %}\n    {%- elif message['role'] == 'assistant' %}\n        {{- ' ' + message['content'] + eos_token}}\n    {%- else %}\n        {{- raise_exception('Only user and assistant roles are supported, with the exception of an initial optional system message!') }}\n    {%- endif %}\n{%- endfor %}\n",
  "clean_up_tokenization_spaces": false,
  "eos_token": "</s>",
  "legacy": false,
  "model_max_length": 1000000000000000019884624838656,
  "pad_token": null,
  "sp_model_kwargs": {},
  "spaces_between_special_tokens": false,
  "tokenizer_class": "LlamaTokenizer",
  "unk_token": "<unk>",
  "use_default_system_prompt": false
}
```
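The `chat_template` field in the config above is a Jinja2 template. Runtimes that follow the Hugging Face convention render it with the conversation `messages`, `bos_token`, and `eos_token` in scope, plus a `raise_exception` helper for the error branches. As a sketch, the template can be rendered directly with the `jinja2` package; the helper definition and the sample messages below are our own assumptions, not part of the config:

```python
import jinja2

# The chat_template string from the tokenizer config above, with JSON
# escaping undone. The raw string keeps the \n\n literal for Jinja to expand.
CHAT_TEMPLATE = r"""{%- if messages[0]['role'] == 'system' %}
    {%- set system_message = messages[0]['content'] %}
    {%- set loop_messages = messages[1:] %}
{%- else %}
    {%- set loop_messages = messages %}
{%- endif %}

{{- bos_token }}
{%- for message in loop_messages %}
    {%- if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}
        {{- raise_exception('After the optional system message, conversation roles must alternate user/assistant/user/assistant/...') }}
    {%- endif %}
    {%- if message['role'] == 'user' %}
        {%- if loop.first and system_message is defined %}
            {{- ' [INST] ' + system_message + '\n\n' + message['content'] + ' [/INST]' }}
        {%- else %}
            {{- ' [INST] ' + message['content'] + ' [/INST]' }}
        {%- endif %}
    {%- elif message['role'] == 'assistant' %}
        {{- ' ' + message['content'] + eos_token}}
    {%- else %}
        {{- raise_exception('Only user and assistant roles are supported, with the exception of an initial optional system message!') }}
    {%- endif %}
{%- endfor %}
"""

def raise_exception(message):
    # Stand-in for the helper that chat-template runtimes inject.
    raise ValueError(message)

env = jinja2.Environment()
env.globals["raise_exception"] = raise_exception
template = env.from_string(CHAT_TEMPLATE)

rendered = template.render(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
        {"role": "assistant", "content": "Hi there."},
    ],
    bos_token="<s>",
    eos_token="</s>",
)
print(rendered)
```

Note how the template folds the optional system message into the first `[INST]` block rather than emitting it separately, and appends `eos_token` only after assistant turns; prompts built for fine-tuning should match this exact layout.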

------
#### [ Mistral 7b ]

```
{
  "add_bos_token": true,
  "add_eos_token": false,
  "add_prefix_space": true,
  "added_tokens_decoder": {
    "0": {
      "content": "<unk>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "1": {
      "content": "<s>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "2": {
      "content": "</s>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "3": {
      "content": "[INST]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "4": {
      "content": "[/INST]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "5": {
      "content": "[TOOL_CALLS]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "6": {
      "content": "[AVAILABLE_TOOLS]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "7": {
      "content": "[/AVAILABLE_TOOLS]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "8": {
      "content": "[TOOL_RESULTS]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "9": {
      "content": "[/TOOL_RESULTS]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "10": {
      "content": "[control_8]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "11": {
      "content": "[control_9]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "12": {
      "content": "[control_10]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "13": {
      "content": "[control_11]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "14": {
      "content": "[control_12]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "15": {
      "content": "[control_13]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "16": {
      "content": "[control_14]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "17": {
      "content": "[control_15]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "18": {
      "content": "[control_16]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "19": {
      "content": "[control_17]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "20": {
      "content": "[control_18]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "21": {
      "content": "[control_19]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "22": {
      "content": "[control_20]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "23": {
      "content": "[control_21]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "24": {
      "content": "[control_22]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "25": {
      "content": "[control_23]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "26": {
      "content": "[control_24]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "27": {
      "content": "[control_25]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "28": {
      "content": "[control_26]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "29": {
      "content": "[control_27]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "30": {
      "content": "[control_28]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "31": {
      "content": "[control_29]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "32": {
      "content": "[control_30]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "33": {
      "content": "[control_31]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "34": {
      "content": "[control_32]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "35": {
      "content": "[control_33]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "36": {
      "content": "[control_34]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "37": {
      "content": "[control_35]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "38": {
      "content": "[control_36]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "39": {
      "content": "[control_37]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "40": {
      "content": "[control_38]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "41": {
      "content": "[control_39]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "42": {
      "content": "[control_40]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "43": {
      "content": "[control_41]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "44": {
      "content": "[control_42]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "45": {
      "content": "[control_43]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "46": {
      "content": "[control_44]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "47": {
      "content": "[control_45]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "48": {
      "content": "[control_46]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "49": {
      "content": "[control_47]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "50": {
      "content": "[control_48]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "51": {
      "content": "[control_49]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "52": {
      "content": "[control_50]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "53": {
      "content": "[control_51]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "54": {
      "content": "[control_52]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "55": {
      "content": "[control_53]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "56": {
      "content": "[control_54]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "57": {
      "content": "[control_55]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "58": {
      "content": "[control_56]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "59": {
      "content": "[control_57]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "60": {
      "content": "[control_58]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "61": {
      "content": "[control_59]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "62": {
      "content": "[control_60]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "63": {
      "content": "[control_61]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "64": {
      "content": "[control_62]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "65": {
      "content": "[control_63]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "66": {
      "content": "[control_64]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "67": {
      "content": "[control_65]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "68": {
      "content": "[control_66]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "69": {
      "content": "[control_67]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "70": {
      "content": "[control_68]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "71": {
      "content": "[control_69]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "72": {
      "content": "[control_70]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "73": {
      "content": "[control_71]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "74": {
      "content": "[control_72]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "75": {
      "content": "[control_73]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "76": {
      "content": "[control_74]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "77": {
      "content": "[control_75]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "78": {
      "content": "[control_76]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "79": {
      "content": "[control_77]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "80": {
      "content": "[control_78]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "81": {
      "content": "[control_79]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "82": {
      "content": "[control_80]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "83": {
      "content": "[control_81]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "84": {
      "content": "[control_82]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "85": {
      "content": "[control_83]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "86": {
      "content": "[control_84]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "87": {
      "content": "[control_85]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "88": {
      "content": "[control_86]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "89": {
      "content": "[control_87]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "90": {
      "content": "[control_88]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "91": {
      "content": "[control_89]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "92": {
      "content": "[control_90]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "93": {
      "content": "[control_91]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "94": {
      "content": "[control_92]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "95": {
      "content": "[control_93]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "96": {
      "content": "[control_94]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "97": {
      "content": "[control_95]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "98": {
      "content": "[control_96]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "99": {
      "content": "[control_97]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "100": {
      "content": "[control_98]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "101": {
      "content": "[control_99]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "102": {
      "content": "[control_100]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "103": {
      "content": "[control_101]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "104": {
      "content": "[control_102]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "105": {
      "content": "[control_103]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "106": {
      "content": "[control_104]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "107": {
      "content": "[control_105]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "108": {
      "content": "[control_106]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "109": {
      "content": "[control_107]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "110": {
      "content": "[control_108]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "111": {
      "content": "[control_109]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "112": {
      "content": "[control_110]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "113": {
      "content": "[control_111]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "114": {
      "content": "[control_112]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "115": {
      "content": "[control_113]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "116": {
      "content": "[control_114]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "117": {
      "content": "[control_115]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "118": {
      "content": "[control_116]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "119": {
      "content": "[control_117]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "120": {
      "content": "[control_118]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "121": {
      "content": "[control_119]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "122": {
      "content": "[control_120]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "123": {
      "content": "[control_121]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "124": {
      "content": "[control_122]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "125": {
      "content": "[control_123]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "126": {
      "content": "[control_124]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "127": {
      "content": "[control_125]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128": {
      "content": "[control_126]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "129": {
      "content": "[control_127]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "130": {
      "content": "[control_128]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "131": {
      "content": "[control_129]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "132": {
      "content": "[control_130]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "133": {
      "content": "[control_131]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "134": {
      "content": "[control_132]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "135": {
      "content": "[control_133]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "136": {
      "content": "[control_134]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "137": {
      "content": "[control_135]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "138": {
      "content": "[control_136]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "139": {
      "content": "[control_137]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "140": {
      "content": "[control_138]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "141": {
      "content": "[control_139]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "142": {
      "content": "[control_140]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "143": {
      "content": "[control_141]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "144": {
      "content": "[control_142]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "145": {
      "content": "[control_143]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "146": {
      "content": "[control_144]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "147": {
      "content": "[control_145]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "148": {
      "content": "[control_146]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "149": {
      "content": "[control_147]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "150": {
      "content": "[control_148]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151": {
      "content": "[control_149]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "152": {
      "content": "[control_150]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "153": {
      "content": "[control_151]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "154": {
      "content": "[control_152]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "155": {
      "content": "[control_153]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "156": {
      "content": "[control_154]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "157": {
      "content": "[control_155]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "158": {
      "content": "[control_156]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "159": {
      "content": "[control_157]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "160": {
      "content": "[control_158]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "161": {
      "content": "[control_159]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "162": {
      "content": "[control_160]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "163": {
      "content": "[control_161]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "164": {
      "content": "[control_162]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "165": {
      "content": "[control_163]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "166": {
      "content": "[control_164]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "167": {
      "content": "[control_165]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "168": {
      "content": "[control_166]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "169": {
      "content": "[control_167]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "170": {
      "content": "[control_168]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "171": {
      "content": "[control_169]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "172": {
      "content": "[control_170]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "173": {
      "content": "[control_171]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "174": {
      "content": "[control_172]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "175": {
      "content": "[control_173]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "176": {
      "content": "[control_174]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "177": {
      "content": "[control_175]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "178": {
      "content": "[control_176]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "179": {
      "content": "[control_177]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "180": {
      "content": "[control_178]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "181": {
      "content": "[control_179]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "182": {
      "content": "[control_180]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "183": {
      "content": "[control_181]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "184": {
      "content": "[control_182]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "185": {
      "content": "[control_183]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "186": {
      "content": "[control_184]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "187": {
      "content": "[control_185]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "188": {
      "content": "[control_186]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "189": {
      "content": "[control_187]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "190": {
      "content": "[control_188]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "191": {
      "content": "[control_189]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "192": {
      "content": "[control_190]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "193": {
      "content": "[control_191]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "194": {
      "content": "[control_192]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "195": {
      "content": "[control_193]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "196": {
      "content": "[control_194]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "197": {
      "content": "[control_195]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "198": {
      "content": "[control_196]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "199": {
      "content": "[control_197]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "200": {
      "content": "[control_198]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "201": {
      "content": "[control_199]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "202": {
      "content": "[control_200]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "203": {
      "content": "[control_201]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "204": {
      "content": "[control_202]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "205": {
      "content": "[control_203]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "206": {
      "content": "[control_204]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "207": {
      "content": "[control_205]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "208": {
      "content": "[control_206]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "209": {
      "content": "[control_207]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "210": {
      "content": "[control_208]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "211": {
      "content": "[control_209]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "212": {
      "content": "[control_210]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "213": {
      "content": "[control_211]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "214": {
      "content": "[control_212]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "215": {
      "content": "[control_213]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "216": {
      "content": "[control_214]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "217": {
      "content": "[control_215]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "218": {
      "content": "[control_216]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "219": {
      "content": "[control_217]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "220": {
      "content": "[control_218]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "221": {
      "content": "[control_219]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "222": {
      "content": "[control_220]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "223": {
      "content": "[control_221]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "224": {
      "content": "[control_222]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "225": {
      "content": "[control_223]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "226": {
      "content": "[control_224]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "227": {
      "content": "[control_225]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "228": {
      "content": "[control_226]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "229": {
      "content": "[control_227]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "230": {
      "content": "[control_228]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "231": {
      "content": "[control_229]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "232": {
      "content": "[control_230]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "233": {
      "content": "[control_231]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "234": {
      "content": "[control_232]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "235": {
      "content": "[control_233]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "236": {
      "content": "[control_234]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "237": {
      "content": "[control_235]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "238": {
      "content": "[control_236]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "239": {
      "content": "[control_237]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "240": {
      "content": "[control_238]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "241": {
      "content": "[control_239]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "242": {
      "content": "[control_240]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "243": {
      "content": "[control_241]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "244": {
      "content": "[control_242]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "245": {
      "content": "[control_243]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "246": {
      "content": "[control_244]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "247": {
      "content": "[control_245]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "248": {
      "content": "[control_246]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "249": {
      "content": "[control_247]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "250": {
      "content": "[control_248]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "251": {
      "content": "[control_249]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "252": {
      "content": "[control_250]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "253": {
      "content": "[control_251]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "254": {
      "content": "[control_252]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "255": {
      "content": "[control_253]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "256": {
      "content": "[control_254]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "257": {
      "content": "[control_255]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "258": {
      "content": "[control_256]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "259": {
      "content": "[control_257]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "260": {
      "content": "[control_258]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "261": {
      "content": "[control_259]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "262": {
      "content": "[control_260]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "263": {
      "content": "[control_261]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "264": {
      "content": "[control_262]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "265": {
      "content": "[control_263]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "266": {
      "content": "[control_264]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "267": {
      "content": "[control_265]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "268": {
      "content": "[control_266]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "269": {
      "content": "[control_267]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "270": {
      "content": "[control_268]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "271": {
      "content": "[control_269]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "272": {
      "content": "[control_270]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "273": {
      "content": "[control_271]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "274": {
      "content": "[control_272]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "275": {
      "content": "[control_273]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "276": {
      "content": "[control_274]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "277": {
      "content": "[control_275]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "278": {
      "content": "[control_276]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "279": {
      "content": "[control_277]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "280": {
      "content": "[control_278]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "281": {
      "content": "[control_279]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "282": {
      "content": "[control_280]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "283": {
      "content": "[control_281]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "284": {
      "content": "[control_282]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "285": {
      "content": "[control_283]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "286": {
      "content": "[control_284]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "287": {
      "content": "[control_285]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "288": {
      "content": "[control_286]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "289": {
      "content": "[control_287]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "290": {
      "content": "[control_288]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "291": {
      "content": "[control_289]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "292": {
      "content": "[control_290]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "293": {
      "content": "[control_291]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "294": {
      "content": "[control_292]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "295": {
      "content": "[control_293]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "296": {
      "content": "[control_294]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "297": {
      "content": "[control_295]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "298": {
      "content": "[control_296]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "299": {
      "content": "[control_297]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "300": {
      "content": "[control_298]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "301": {
      "content": "[control_299]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "302": {
      "content": "[control_300]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "303": {
      "content": "[control_301]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "304": {
      "content": "[control_302]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "305": {
      "content": "[control_303]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "306": {
      "content": "[control_304]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "307": {
      "content": "[control_305]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "308": {
      "content": "[control_306]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "309": {
      "content": "[control_307]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "310": {
      "content": "[control_308]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "311": {
      "content": "[control_309]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "312": {
      "content": "[control_310]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "313": {
      "content": "[control_311]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "314": {
      "content": "[control_312]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "315": {
      "content": "[control_313]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "316": {
      "content": "[control_314]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "317": {
      "content": "[control_315]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "318": {
      "content": "[control_316]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "319": {
      "content": "[control_317]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "320": {
      "content": "[control_318]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "321": {
      "content": "[control_319]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "322": {
      "content": "[control_320]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "323": {
      "content": "[control_321]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "324": {
      "content": "[control_322]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "325": {
      "content": "[control_323]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "326": {
      "content": "[control_324]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "327": {
      "content": "[control_325]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "328": {
      "content": "[control_326]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "329": {
      "content": "[control_327]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "330": {
      "content": "[control_328]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "331": {
      "content": "[control_329]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "332": {
      "content": "[control_330]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "333": {
      "content": "[control_331]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "334": {
      "content": "[control_332]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "335": {
      "content": "[control_333]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "336": {
      "content": "[control_334]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "337": {
      "content": "[control_335]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "338": {
      "content": "[control_336]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "339": {
      "content": "[control_337]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "340": {
      "content": "[control_338]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "341": {
      "content": "[control_339]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "342": {
      "content": "[control_340]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "343": {
      "content": "[control_341]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "344": {
      "content": "[control_342]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "345": {
      "content": "[control_343]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "346": {
      "content": "[control_344]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "347": {
      "content": "[control_345]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "348": {
      "content": "[control_346]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "349": {
      "content": "[control_347]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "350": {
      "content": "[control_348]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "351": {
      "content": "[control_349]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "352": {
      "content": "[control_350]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "353": {
      "content": "[control_351]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "354": {
      "content": "[control_352]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "355": {
      "content": "[control_353]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "356": {
      "content": "[control_354]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "357": {
      "content": "[control_355]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "358": {
      "content": "[control_356]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "359": {
      "content": "[control_357]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "360": {
      "content": "[control_358]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "361": {
      "content": "[control_359]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "362": {
      "content": "[control_360]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "363": {
      "content": "[control_361]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "364": {
      "content": "[control_362]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "365": {
      "content": "[control_363]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "366": {
      "content": "[control_364]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "367": {
      "content": "[control_365]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "368": {
      "content": "[control_366]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "369": {
      "content": "[control_367]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "370": {
      "content": "[control_368]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "371": {
      "content": "[control_369]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "372": {
      "content": "[control_370]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "373": {
      "content": "[control_371]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "374": {
      "content": "[control_372]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "375": {
      "content": "[control_373]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "376": {
      "content": "[control_374]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "377": {
      "content": "[control_375]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "378": {
      "content": "[control_376]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "379": {
      "content": "[control_377]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "380": {
      "content": "[control_378]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "381": {
      "content": "[control_379]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "382": {
      "content": "[control_380]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "383": {
      "content": "[control_381]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "384": {
      "content": "[control_382]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "385": {
      "content": "[control_383]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "386": {
      "content": "[control_384]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "387": {
      "content": "[control_385]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "388": {
      "content": "[control_386]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "389": {
      "content": "[control_387]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "390": {
      "content": "[control_388]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "391": {
      "content": "[control_389]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "392": {
      "content": "[control_390]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "393": {
      "content": "[control_391]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "394": {
      "content": "[control_392]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "395": {
      "content": "[control_393]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "396": {
      "content": "[control_394]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "397": {
      "content": "[control_395]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "398": {
      "content": "[control_396]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "399": {
      "content": "[control_397]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "400": {
      "content": "[control_398]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "401": {
      "content": "[control_399]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "402": {
      "content": "[control_400]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "403": {
      "content": "[control_401]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "404": {
      "content": "[control_402]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "405": {
      "content": "[control_403]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "406": {
      "content": "[control_404]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "407": {
      "content": "[control_405]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "408": {
      "content": "[control_406]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "409": {
      "content": "[control_407]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "410": {
      "content": "[control_408]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "411": {
      "content": "[control_409]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "412": {
      "content": "[control_410]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "413": {
      "content": "[control_411]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "414": {
      "content": "[control_412]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "415": {
      "content": "[control_413]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "416": {
      "content": "[control_414]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "417": {
      "content": "[control_415]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "418": {
      "content": "[control_416]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "419": {
      "content": "[control_417]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "420": {
      "content": "[control_418]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "421": {
      "content": "[control_419]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "422": {
      "content": "[control_420]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "423": {
      "content": "[control_421]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "424": {
      "content": "[control_422]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "425": {
      "content": "[control_423]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "426": {
      "content": "[control_424]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "427": {
      "content": "[control_425]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "428": {
      "content": "[control_426]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "429": {
      "content": "[control_427]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "430": {
      "content": "[control_428]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "431": {
      "content": "[control_429]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "432": {
      "content": "[control_430]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "433": {
      "content": "[control_431]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "434": {
      "content": "[control_432]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "435": {
      "content": "[control_433]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "436": {
      "content": "[control_434]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "437": {
      "content": "[control_435]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "438": {
      "content": "[control_436]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "439": {
      "content": "[control_437]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "440": {
      "content": "[control_438]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "441": {
      "content": "[control_439]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "442": {
      "content": "[control_440]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "443": {
      "content": "[control_441]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "444": {
      "content": "[control_442]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "445": {
      "content": "[control_443]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "446": {
      "content": "[control_444]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "447": {
      "content": "[control_445]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "448": {
      "content": "[control_446]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "449": {
      "content": "[control_447]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "450": {
      "content": "[control_448]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "451": {
      "content": "[control_449]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "452": {
      "content": "[control_450]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "453": {
      "content": "[control_451]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "454": {
      "content": "[control_452]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "455": {
      "content": "[control_453]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "456": {
      "content": "[control_454]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "457": {
      "content": "[control_455]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "458": {
      "content": "[control_456]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "459": {
      "content": "[control_457]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "460": {
      "content": "[control_458]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "461": {
      "content": "[control_459]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "462": {
      "content": "[control_460]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "463": {
      "content": "[control_461]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "464": {
      "content": "[control_462]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "465": {
      "content": "[control_463]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "466": {
      "content": "[control_464]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "467": {
      "content": "[control_465]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "468": {
      "content": "[control_466]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "469": {
      "content": "[control_467]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "470": {
      "content": "[control_468]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "471": {
      "content": "[control_469]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "472": {
      "content": "[control_470]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "473": {
      "content": "[control_471]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "474": {
      "content": "[control_472]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "475": {
      "content": "[control_473]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "476": {
      "content": "[control_474]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "477": {
      "content": "[control_475]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "478": {
      "content": "[control_476]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "479": {
      "content": "[control_477]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "480": {
      "content": "[control_478]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "481": {
      "content": "[control_479]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "482": {
      "content": "[control_480]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "483": {
      "content": "[control_481]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "484": {
      "content": "[control_482]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "485": {
      "content": "[control_483]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "486": {
      "content": "[control_484]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "487": {
      "content": "[control_485]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "488": {
      "content": "[control_486]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "489": {
      "content": "[control_487]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "490": {
      "content": "[control_488]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "491": {
      "content": "[control_489]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "492": {
      "content": "[control_490]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "493": {
      "content": "[control_491]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "494": {
      "content": "[control_492]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "495": {
      "content": "[control_493]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "496": {
      "content": "[control_494]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "497": {
      "content": "[control_495]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "498": {
      "content": "[control_496]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "499": {
      "content": "[control_497]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "500": {
      "content": "[control_498]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "501": {
      "content": "[control_499]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "502": {
      "content": "[control_500]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "503": {
      "content": "[control_501]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "504": {
      "content": "[control_502]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "505": {
      "content": "[control_503]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "506": {
      "content": "[control_504]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "507": {
      "content": "[control_505]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "508": {
      "content": "[control_506]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "509": {
      "content": "[control_507]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "510": {
      "content": "[control_508]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "511": {
      "content": "[control_509]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "512": {
      "content": "[control_510]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "513": {
      "content": "[control_511]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "514": {
      "content": "[control_512]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "515": {
      "content": "[control_513]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "516": {
      "content": "[control_514]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "517": {
      "content": "[control_515]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "518": {
      "content": "[control_516]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "519": {
      "content": "[control_517]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "520": {
      "content": "[control_518]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "521": {
      "content": "[control_519]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "522": {
      "content": "[control_520]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "523": {
      "content": "[control_521]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "524": {
      "content": "[control_522]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "525": {
      "content": "[control_523]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "526": {
      "content": "[control_524]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "527": {
      "content": "[control_525]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "528": {
      "content": "[control_526]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "529": {
      "content": "[control_527]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "530": {
      "content": "[control_528]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "531": {
      "content": "[control_529]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "532": {
      "content": "[control_530]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "533": {
      "content": "[control_531]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "534": {
      "content": "[control_532]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "535": {
      "content": "[control_533]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "536": {
      "content": "[control_534]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "537": {
      "content": "[control_535]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "538": {
      "content": "[control_536]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "539": {
      "content": "[control_537]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "540": {
      "content": "[control_538]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "541": {
      "content": "[control_539]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "542": {
      "content": "[control_540]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "543": {
      "content": "[control_541]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "544": {
      "content": "[control_542]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "545": {
      "content": "[control_543]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "546": {
      "content": "[control_544]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "547": {
      "content": "[control_545]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "548": {
      "content": "[control_546]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "549": {
      "content": "[control_547]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "550": {
      "content": "[control_548]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "551": {
      "content": "[control_549]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "552": {
      "content": "[control_550]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "553": {
      "content": "[control_551]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "554": {
      "content": "[control_552]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "555": {
      "content": "[control_553]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "556": {
      "content": "[control_554]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "557": {
      "content": "[control_555]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "558": {
      "content": "[control_556]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "559": {
      "content": "[control_557]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "560": {
      "content": "[control_558]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "561": {
      "content": "[control_559]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "562": {
      "content": "[control_560]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "563": {
      "content": "[control_561]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "564": {
      "content": "[control_562]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "565": {
      "content": "[control_563]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "566": {
      "content": "[control_564]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "567": {
      "content": "[control_565]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "568": {
      "content": "[control_566]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "569": {
      "content": "[control_567]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "570": {
      "content": "[control_568]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "571": {
      "content": "[control_569]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "572": {
      "content": "[control_570]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "573": {
      "content": "[control_571]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "574": {
      "content": "[control_572]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "575": {
      "content": "[control_573]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "576": {
      "content": "[control_574]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "577": {
      "content": "[control_575]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "578": {
      "content": "[control_576]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "579": {
      "content": "[control_577]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "580": {
      "content": "[control_578]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "581": {
      "content": "[control_579]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "582": {
      "content": "[control_580]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "583": {
      "content": "[control_581]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "584": {
      "content": "[control_582]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "585": {
      "content": "[control_583]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "586": {
      "content": "[control_584]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "587": {
      "content": "[control_585]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "588": {
      "content": "[control_586]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "589": {
      "content": "[control_587]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "590": {
      "content": "[control_588]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "591": {
      "content": "[control_589]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "592": {
      "content": "[control_590]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "593": {
      "content": "[control_591]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "594": {
      "content": "[control_592]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "595": {
      "content": "[control_593]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "596": {
      "content": "[control_594]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "597": {
      "content": "[control_595]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "598": {
      "content": "[control_596]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "599": {
      "content": "[control_597]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "600": {
      "content": "[control_598]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "601": {
      "content": "[control_599]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "602": {
      "content": "[control_600]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "603": {
      "content": "[control_601]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "604": {
      "content": "[control_602]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "605": {
      "content": "[control_603]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "606": {
      "content": "[control_604]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "607": {
      "content": "[control_605]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "608": {
      "content": "[control_606]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "609": {
      "content": "[control_607]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "610": {
      "content": "[control_608]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "611": {
      "content": "[control_609]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "612": {
      "content": "[control_610]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "613": {
      "content": "[control_611]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "614": {
      "content": "[control_612]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "615": {
      "content": "[control_613]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "616": {
      "content": "[control_614]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "617": {
      "content": "[control_615]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "618": {
      "content": "[control_616]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "619": {
      "content": "[control_617]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "620": {
      "content": "[control_618]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "621": {
      "content": "[control_619]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "622": {
      "content": "[control_620]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "623": {
      "content": "[control_621]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "624": {
      "content": "[control_622]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "625": {
      "content": "[control_623]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "626": {
      "content": "[control_624]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "627": {
      "content": "[control_625]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "628": {
      "content": "[control_626]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "629": {
      "content": "[control_627]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "630": {
      "content": "[control_628]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "631": {
      "content": "[control_629]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "632": {
      "content": "[control_630]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "633": {
      "content": "[control_631]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "634": {
      "content": "[control_632]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "635": {
      "content": "[control_633]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "636": {
      "content": "[control_634]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "637": {
      "content": "[control_635]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "638": {
      "content": "[control_636]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "639": {
      "content": "[control_637]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "640": {
      "content": "[control_638]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "641": {
      "content": "[control_639]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "642": {
      "content": "[control_640]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "643": {
      "content": "[control_641]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "644": {
      "content": "[control_642]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "645": {
      "content": "[control_643]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "646": {
      "content": "[control_644]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "647": {
      "content": "[control_645]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "648": {
      "content": "[control_646]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "649": {
      "content": "[control_647]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "650": {
      "content": "[control_648]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "651": {
      "content": "[control_649]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "652": {
      "content": "[control_650]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "653": {
      "content": "[control_651]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "654": {
      "content": "[control_652]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "655": {
      "content": "[control_653]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "656": {
      "content": "[control_654]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "657": {
      "content": "[control_655]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "658": {
      "content": "[control_656]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "659": {
      "content": "[control_657]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "660": {
      "content": "[control_658]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "661": {
      "content": "[control_659]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "662": {
      "content": "[control_660]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "663": {
      "content": "[control_661]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "664": {
      "content": "[control_662]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "665": {
      "content": "[control_663]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "666": {
      "content": "[control_664]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "667": {
      "content": "[control_665]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "668": {
      "content": "[control_666]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "669": {
      "content": "[control_667]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "670": {
      "content": "[control_668]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "671": {
      "content": "[control_669]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "672": {
      "content": "[control_670]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "673": {
      "content": "[control_671]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "674": {
      "content": "[control_672]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "675": {
      "content": "[control_673]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "676": {
      "content": "[control_674]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "677": {
      "content": "[control_675]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "678": {
      "content": "[control_676]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "679": {
      "content": "[control_677]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "680": {
      "content": "[control_678]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "681": {
      "content": "[control_679]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "682": {
      "content": "[control_680]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "683": {
      "content": "[control_681]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "684": {
      "content": "[control_682]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "685": {
      "content": "[control_683]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "686": {
      "content": "[control_684]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "687": {
      "content": "[control_685]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "688": {
      "content": "[control_686]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "689": {
      "content": "[control_687]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "690": {
      "content": "[control_688]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "691": {
      "content": "[control_689]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "692": {
      "content": "[control_690]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "693": {
      "content": "[control_691]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "694": {
      "content": "[control_692]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "695": {
      "content": "[control_693]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "696": {
      "content": "[control_694]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "697": {
      "content": "[control_695]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "698": {
      "content": "[control_696]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "699": {
      "content": "[control_697]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "700": {
      "content": "[control_698]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "701": {
      "content": "[control_699]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "702": {
      "content": "[control_700]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "703": {
      "content": "[control_701]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "704": {
      "content": "[control_702]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "705": {
      "content": "[control_703]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "706": {
      "content": "[control_704]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "707": {
      "content": "[control_705]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "708": {
      "content": "[control_706]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "709": {
      "content": "[control_707]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "710": {
      "content": "[control_708]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "711": {
      "content": "[control_709]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "712": {
      "content": "[control_710]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "713": {
      "content": "[control_711]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "714": {
      "content": "[control_712]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "715": {
      "content": "[control_713]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "716": {
      "content": "[control_714]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "717": {
      "content": "[control_715]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "718": {
      "content": "[control_716]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "719": {
      "content": "[control_717]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "720": {
      "content": "[control_718]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "721": {
      "content": "[control_719]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "722": {
      "content": "[control_720]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "723": {
      "content": "[control_721]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "724": {
      "content": "[control_722]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "725": {
      "content": "[control_723]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "726": {
      "content": "[control_724]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "727": {
      "content": "[control_725]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "728": {
      "content": "[control_726]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "729": {
      "content": "[control_727]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "730": {
      "content": "[control_728]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "731": {
      "content": "[control_729]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "732": {
      "content": "[control_730]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "733": {
      "content": "[control_731]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "734": {
      "content": "[control_732]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "735": {
      "content": "[control_733]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "736": {
      "content": "[control_734]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "737": {
      "content": "[control_735]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "738": {
      "content": "[control_736]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "739": {
      "content": "[control_737]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "740": {
      "content": "[control_738]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "741": {
      "content": "[control_739]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "742": {
      "content": "[control_740]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "743": {
      "content": "[control_741]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "744": {
      "content": "[control_742]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "745": {
      "content": "[control_743]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "746": {
      "content": "[control_744]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "747": {
      "content": "[control_745]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "748": {
      "content": "[control_746]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "749": {
      "content": "[control_747]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "750": {
      "content": "[control_748]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "751": {
      "content": "[control_749]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "752": {
      "content": "[control_750]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "753": {
      "content": "[control_751]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "754": {
      "content": "[control_752]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "755": {
      "content": "[control_753]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "756": {
      "content": "[control_754]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "757": {
      "content": "[control_755]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "758": {
      "content": "[control_756]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "759": {
      "content": "[control_757]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "760": {
      "content": "[control_758]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "761": {
      "content": "[control_759]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "762": {
      "content": "[control_760]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "763": {
      "content": "[control_761]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "764": {
      "content": "[control_762]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "765": {
      "content": "[control_763]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "766": {
      "content": "[control_764]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "767": {
      "content": "[control_765]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "768": {
      "content": "[control_766]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "769": {
      "content": "[control_767]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "770": {
      "content": "[control_768]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "bos_token": "<s>",
  "chat_template": "{%- if messages[0][\"role\"] == \"system\" %}\n    {%- set system_message = messages[0][\"content\"] %}\n    {%- set loop_messages = messages[1:] %}\n{%- else %}\n    {%- set loop_messages = messages %}\n{%- endif %}\n{%- if not tools is defined %}\n    {%- set tools = none %}\n{%- endif %}\n{%- set user_messages = loop_messages | selectattr(\"role\", \"equalto\", \"user\") | list %}\n\n{#- This block checks for alternating user/assistant messages, skipping tool calling messages #}\n{%- set ns = namespace() %}\n{%- set ns.index = 0 %}\n{%- for message in loop_messages %}\n    {%- if not (message.role == \"tool\" or message.role == \"tool_results\" or (message.tool_calls is defined and message.tool_calls is not none)) %}\n        {%- if (message[\"role\"] == \"user\") != (ns.index % 2 == 0) %}\n            {{- raise_exception(\"After the optional system message, conversation roles must alternate user/assistant/user/assistant/...\") }}\n        {%- endif %}\n        {%- set ns.index = ns.index + 1 %}\n    {%- endif %}\n{%- endfor %}\n\n{{- bos_token }}\n{%- for message in loop_messages %}\n    {%- if message[\"role\"] == \"user\" %}\n        {%- if tools is not none and (message == user_messages[-1]) %}\n            {{- \"[AVAILABLE_TOOLS] [\" }}\n            {%- for tool in tools %}\n                {%- set tool = tool.function %}\n                {{- '{\"type\": \"function\", \"function\": {' }}\n                {%- for key, val in tool.items() if key != \"return\" %}\n                    {%- if val is string %}\n                        {{- '\"' + key + '\": \"' + val + '\"' }}\n                    {%- else %}\n                        {{- '\"' + key + '\": ' + val|tojson }}\n                    {%- endif %}\n                    {%- if not loop.last %}\n                        {{- \", \" }}\n                    {%- endif %}\n                {%- endfor %}\n                {{- \"}}\" }}\n                {%- if not loop.last %}\n                    {{- \", \" }}\n                {%- else %}\n                    {{- \"]\" }}\n                {%- endif %}\n            {%- endfor %}\n            {{- \"[/AVAILABLE_TOOLS]\" }}\n            {%- endif %}\n        {%- if loop.last and system_message is defined %}\n            {{- \"[INST] \" + system_message + \"\\n\\n\" + message[\"content\"] + \"[/INST]\" }}\n        {%- else %}\n            {{- \"[INST] \" + message[\"content\"] + \"[/INST]\" }}\n        {%- endif %}\n    {%- elif message.tool_calls is defined and message.tool_calls is not none %}\n        {{- \"[TOOL_CALLS] [\" }}\n        {%- for tool_call in message.tool_calls %}\n            {%- set out = tool_call.function|tojson %}\n            {{- out[:-1] }}\n            {%- if not tool_call.id is defined or tool_call.id|length != 9 %}\n                {{- raise_exception(\"Tool call IDs should be alphanumeric strings with length 9!\") }}\n            {%- endif %}\n            {{- ', \"id\": \"' + tool_call.id + '\"}' }}\n            {%- if not loop.last %}\n                {{- \", \" }}\n            {%- else %}\n                {{- \"]\" + eos_token }}\n            {%- endif %}\n        {%- endfor %}\n    {%- elif message[\"role\"] == \"assistant\" %}\n        {{- \" \" + message[\"content\"]|trim + eos_token}}\n    {%- elif message[\"role\"] == \"tool_results\" or message[\"role\"] == \"tool\" %}\n        {%- if message.content is defined and message.content.content is defined %}\n            {%- set content = message.content.content %}\n        {%- else %}\n            {%- set content = message.content %}\n        {%- endif %}\n        {{- '[TOOL_RESULTS] {\"content\": ' + content|string + \", \" }}\n        {%- if not message.tool_call_id is defined or message.tool_call_id|length != 9 %}\n            {{- raise_exception(\"Tool call IDs should be alphanumeric strings with length 9!\") }}\n        {%- endif %}\n        {{- '\"call_id\": \"' + message.tool_call_id + '\"}[/TOOL_RESULTS]' }}\n    {%- else %}\n        {{- raise_exception(\"Only user and assistant roles are supported, with the exception of an initial optional system message!\") }}\n    {%- endif %}\n{%- endfor %}\n",
  "clean_up_tokenization_spaces": false,
  "eos_token": "</s>",
  "legacy": false,
  "model_max_length": 1000000000000000019884624838656,
  "pad_token": null,
  "sp_model_kwargs": {},
  "spaces_between_special_tokens": false,
  "tokenizer_class": "LlamaTokenizer",
  "unk_token": "<unk>",
  "use_default_system_prompt": false
}
```

------
#### [ Qwen2 ]

```
{
  "add_prefix_space": false,
  "added_tokens_decoder": {
    "151643": {
      "content": "<|endoftext|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151644": {
      "content": "<|im_start|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151645": {
      "content": "<|im_end|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "additional_special_tokens": ["<|im_start|>", "<|im_end|>"],
  "bos_token": null,
  "chat_template": "{% for message in messages %}{% if loop.first and messages[0]['role'] != 'system' %}{{ '<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n' }}{% endif %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}",
  "clean_up_tokenization_spaces": false,
  "eos_token": "<|im_end|>",
  "errors": "replace",
  "model_max_length": 131072,
  "pad_token": "<|endoftext|>",
  "split_special_tokens": false,
  "tokenizer_class": "Qwen2Tokenizer",
  "unk_token": null
}
```
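
The `chat_template` above renders ChatML-style prompts. The following stdlib-only sketch mirrors that rendering logic for illustration; the `render_qwen2_prompt` function is hypothetical and not part of any library, and in practice you would use the tokenizer's own `apply_chat_template` method:

```python
def render_qwen2_prompt(messages, add_generation_prompt=True):
    """Mirrors the Qwen2 chat_template: injects a default system message
    when the first message is not a system turn, then wraps each turn in
    <|im_start|>/<|im_end|> markers."""
    parts = []
    if not messages or messages[0]["role"] != "system":
        parts.append("<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n")
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    if add_generation_prompt:
        # Open an assistant turn so the model generates the reply
        parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = render_qwen2_prompt([{"role": "user", "content": "Hello"}])
```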

------

# Import a SageMaker AI-trained Amazon Nova model
<a name="import-with-create-custom-model"></a>

**Note**  
To import open-source models into Amazon Bedrock, you use a model import job. For more information, see [Use Custom model import to import a customized open-source model into Amazon Bedrock](model-customization-import-model.md). 

 To import an Amazon Nova model that you customized with SageMaker AI, you create a new custom model in Amazon Bedrock for it. For example, if you used SageMaker AI to fine-tune an Amazon Nova Pro model to improve its performance for a specific use case, you can import the fine-tuned model into Amazon Bedrock as a custom model and use it to run inference. 

 Creating and using a SageMaker AI-trained Amazon Nova model in Amazon Bedrock works as follows: 

1. **Customize the model in SageMaker AI** – Customize an Amazon Nova model using pre-built base recipes with SageMaker AI training jobs. For more information, see [Customizing Amazon Nova models](https://docs.aws.amazon.com/sagemaker/latest/dg/nova-model.html) in the *Amazon SageMaker AI Developer Guide*.

1. **Create a custom model in Amazon Bedrock** – Create a custom model with the [CreateCustomModel](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_CreateCustomModel.html) API operation. When you create a custom model, Amazon Bedrock validates and then imports the model artifacts from an Amazon-managed Amazon S3 bucket. SageMaker AI creates this bucket when you run your first SageMaker AI training job. For a code sample, see [Create a custom model (AWS SDKs)](create-custom-model-sdks.md).

1.  **Set up inference for the custom model** – After the import process is complete, you can set up inference for the model. For more information, see [Set up inference for a custom model](model-customization-use.md). You manage the custom model just like models you create in Amazon Bedrock through model customization. For more information, see [Customize your model to improve its performance for your use case](custom-models.md).

You can create a custom model from an Amazon Nova model in the following Regions (for more information about Regions supported in Amazon Bedrock, see [Amazon Bedrock endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/bedrock.html)): 
+ US East (N. Virginia)

**Topics**
+ [Guidelines and requirements](#create-custom-model-considerations)
+ [Create a custom model (AWS SDKs)](create-custom-model-sdks.md)

## Guidelines and requirements
<a name="create-custom-model-considerations"></a>

Before you import an Amazon Nova model, note the following:
+ You must use the US East (N. Virginia) Region.
+ When you customize the model in SageMaker AI, you must use one of the following Amazon Nova models:
  + Amazon Nova Lite
  + Amazon Nova Micro
  + Amazon Nova Pro
+ When you customize the model in SageMaker AI, you must use the SageMaker AI Amazon Nova recipes. For more information, see [Amazon Nova recipes](https://docs.aws.amazon.com/sagemaker/latest/dg/nova-model-recipes.html) in the *Amazon SageMaker AI Developer Guide*.
+ Your SageMaker AI-trained Amazon Nova models must be stored in an Amazon-managed Amazon S3 bucket. SageMaker AI creates this bucket when you run your first SageMaker AI training job.
+ Your Amazon Bedrock service role must have permissions to access the Amazon-managed Amazon S3 bucket, and if specified, your AWS KMS key. For more information about creating the role, see [Create a service role for importing pre-trained models](model-import-iam-role.md). For information about granting the role permission to use your AWS KMS key, see [Encryption of imported custom models](encryption-import-model.md). 
+ You can only use the Amazon Bedrock APIs to create a custom model from an existing SageMaker AI-trained Amazon Nova model. You can't use the Amazon Bedrock console.

# Create a custom model (AWS SDKs)
<a name="create-custom-model-sdks"></a>

To create a custom model from a SageMaker AI-trained Amazon Nova model stored in Amazon S3, you use the [CreateCustomModel](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_CreateCustomModel.html) API operation. You can use the following code to create a custom model with the SDK for Python (Boto3). The code creates a custom model and then checks its status until the model is `Active` and ready to use.

To use the code, update the following parameters. The code sample also includes optional parameters such as `clientRequestToken` for idempotency and `modelTags` for resource tagging. 
+ **modelName** – Give the model a unique name.
+ **s3Uri** – Specify the path to the Amazon-managed Amazon S3 bucket that stores your model artifacts. SageMaker AI creates this bucket when you run your first SageMaker AI training job.
+ **roleArn** – Specify the Amazon Resource Name (ARN) of an IAM service role that Amazon Bedrock assumes to perform tasks on your behalf. For more information about creating this role, see [Create a service role for importing pre-trained models](model-import-iam-role.md).
+ **modelKmsKeyArn** (optional) – Specify an AWS KMS key to encrypt the model in Amazon Bedrock. If you don't provide an AWS KMS key, Amazon Bedrock uses an AWS-managed AWS KMS key to encrypt the model. For information about encryption, see [Encryption of imported custom models](encryption-import-model.md).

After you create the custom model, the model appears in the [ListCustomModels](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_ListCustomModels.html) response with a `customizationType` of `imported`. To track the status of the new model, you use the [GetCustomModel](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_GetCustomModel.html) API operation.

```
import boto3
import uuid
from botocore.exceptions import ClientError
import time

def create_custom_model(bedrock_client):
    """
    Creates a custom model in Amazon Bedrock from a SageMaker AI-trained Amazon Nova model stored in Amazon S3.
    Args:
        bedrock_client: The Amazon Bedrock client instance
    Returns:
        dict: Response from the CreateCustomModel API call
    """
    try:
        # Create a unique client request token for idempotency
        client_request_token = str(uuid.uuid4())

        # Define the model source configuration
        model_source_config = {
            's3DataSource': {
                's3Uri': 's3://amzn-s3-demo-bucket/folder/',
            }
        }

        # Create the custom model
        response = bedrock_client.create_custom_model(
            # Required parameters
            modelName='modelName',
            roleArn='serviceRoleArn',
            modelSourceConfig=model_source_config,

            # Optional parameters
            clientRequestToken=client_request_token,
            modelKmsKeyArn='keyArn',
            modelTags=[
                {
                    'key': 'Environment',
                    'value': 'Production'
                },
                {
                    'key': 'Project',
                    'value': 'AIInference'
                }
            ]
        )

        print(f"Custom model creation initiated. Model ARN: {response['modelArn']}")

        return response

    except ClientError as e:
        print(f"Error creating custom model: {e}")
        raise

def list_custom_models(bedrock_client):
    """
    Lists all custom models in Amazon Bedrock.

    Args:
        bedrock_client: An Amazon Bedrock client.

    Returns:
        dict: Response from the ListCustomModels API call

    """

    try:
        response = bedrock_client.list_custom_models()
        print(f"Total number of custom models: {len(response['modelSummaries'])}")

        for model in response['modelSummaries']:
            print("ARN: " + model['modelArn'])
            print("Name: " + model['modelName'])
            print("Status: " + model['modelStatus'])
            print("Customization type: " + model['customizationType'])
            print("------------------------------------------------------")

        return response

    except ClientError as e:
        print(f"Error listing custom models: {e}")
        raise

def check_model_status(bedrock_client, model_arn):
    """
    Checks the status of a custom model creation.

    Args:
        bedrock_client: An Amazon Bedrock client.
        model_arn (str): The ARN of the custom model

    Returns:
        dict: The last GetCustomModel response received, or None if no
        response was received before the check timed out

    """

    try:
        max_time = time.time() + 60 * 60  # 1 hour
        response = None

        while time.time() < max_time:
            response = bedrock_client.get_custom_model(modelIdentifier=model_arn)
            status = response.get('modelStatus')
            print(f"Job status: {status}")
            if status == 'Failed':
                print(f"Failure reason: {response.get('failureMessage')}")
                break
            if status == 'Active':
                print("Model is ready for use.")
                break
            time.sleep(60)

        return response

    except ClientError as e:
        print(f"Error checking model status: {e}")
        raise


def main():
    bedrock_client = boto3.client(service_name='bedrock', region_name='us-east-1')  # us-east-1 is required for SageMaker AI-trained Amazon Nova models
    
    # Create the custom model
    model_arn = create_custom_model(bedrock_client)["modelArn"]

    # Check the status of the model
    if model_arn:
        check_model_status(bedrock_client, model_arn)

    # View all custom models
    list_custom_models(bedrock_client)


if __name__ == "__main__":
    main()
```
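
The status-polling loop in `check_model_status` can also be factored into a small reusable helper. The sketch below is illustrative: the `wait_for_status` name and the injected `fetch_status` callable are not part of the Amazon Bedrock API, and in practice `fetch_status` would wrap a `get_custom_model` call:

```python
import time

def wait_for_status(fetch_status, done_states=("Active", "Failed"),
                    poll_seconds=60, timeout_seconds=3600, sleep=time.sleep):
    """Polls fetch_status() until it returns a terminal state or the
    timeout elapses. Returns the final status seen, or None on timeout."""
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        status = fetch_status()
        if status in done_states:
            return status
        sleep(poll_seconds)
    return None

# Stubbed status sequence in place of live get_custom_model calls:
statuses = iter(["Creating", "Creating", "Active"])
result = wait_for_status(lambda: next(statuses), sleep=lambda _: None)
print(result)  # Active
```

Injecting `sleep` keeps the helper testable without waiting out real polling intervals.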

# Manage customized models
<a name="manage-customized-models"></a>

After creating a customized model through fine-tuning, reinforcement fine-tuning, distillation, or import, you can manage the model throughout its lifecycle. This section covers security configuration, monitoring, deployment, sharing across accounts, and troubleshooting.

**Security and monitoring**
+ [Model customization access and security](custom-model-job-access-security.md) - Configure IAM roles, permissions, and encryption for model customization jobs.
+ [Monitor your model customization job](model-customization-monitor.md) - Track the progress and status of your model customization jobs.
+ [Analyze the results of a model customization job](model-customization-analyze.md) - Review metrics and results after a customization job completes.
+ [Stop a model customization job](model-customization-stop.md) - Stop a running model customization job if needed.

**Using and deploying models**
+ [View details about a custom model](model-customization-view.md) - View details, metadata, and configuration of your custom models.
+ [Set up inference for a custom model](model-customization-use.md) - Configure and deploy a custom model for inference.

**Sharing and distribution**
+ [Share a model for another account to use](share-model.md) - Share a custom model with another AWS account in your organization.
+ [Copy a customized or shared model to use in a Region](copy-model.md) - Copy a customized or shared model to use in a different Region.

**Maintenance**
+ [Delete a custom model](model-customization-delete.md) - Delete a custom model when it's no longer needed.
+ [Code samples for model customization](model-customization-code-samples.md) - Example code for common model customization tasks.
+ [Troubleshooting model customization issues](fine-tuning-troubleshooting.md) - Diagnose and resolve common issues with model customization.

# Model customization access and security
<a name="custom-model-job-access-security"></a>

This page provides comprehensive security and permissions information for all Amazon Bedrock model customization types, including fine-tuning, reinforcement fine-tuning, distillation, and model operations. Before you begin customizing a model, make sure that you understand what kind of access Amazon Bedrock needs, and consider the options for securing your customization jobs and artifacts.

The following sections contain the permissions required for different customization scenarios. Choose the appropriate permissions based on your specific use case:

## Basic model customization permissions
<a name="custom-model-job-basic-permissions"></a>

All model customization jobs require these basic permissions. These apply to fine-tuning, distillation, and other customization types.

### Create an IAM service role for model customization
<a name="custom-model-job-service-role"></a>

Amazon Bedrock needs an AWS Identity and Access Management (IAM) service role to access the S3 bucket where you store your model customization training and validation data. There are a couple of ways to do this:
+ Create the service role automatically by using the AWS Management Console.
+ Create the service role manually with the proper permissions to access your S3 bucket.

For the manual option, create an IAM role and attach the following permissions by following the steps at [Creating a role to delegate permissions to an AWS service](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-service.html).
+ Trust relationship
+ Permissions to access your training and validation data in S3 and to write your output data to S3
+ (Optional) If you encrypt any of the following resources with a KMS key, permissions to decrypt the key (see [Encryption of custom models](encryption-custom-job.md))
  + A model customization job or the resulting custom model
  + The training, validation, or output data for the model customization job

### Trust relationship
<a name="custom-model-job-service-role-trust-relationship"></a>

The following example trust policy allows Amazon Bedrock to assume this role and carry out the model customization job.

You can optionally restrict the scope of the permission for [cross-service confused deputy prevention](cross-service-confused-deputy-prevention.md) by using one or more global condition context keys with the `Condition` field. For more information, see [AWS global condition context keys](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html).
+ Set the `aws:SourceAccount` value to your account ID.
+ (Optional) Use the `ArnEquals` or `ArnLike` condition to restrict the scope to specific model customization jobs in your account.

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "bedrock.amazonaws.com"
            },
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringEquals": {
                    "aws:SourceAccount": "123456789012"
                },
                "ArnEquals": {
                    "aws:SourceArn": "arn:aws:bedrock:us-east-1:123456789012:model-customization-job/*"
                }
            }
        }
    ]
}
```

------

### Permissions to access training and validation files and to write output files in S3
<a name="custom-model-job-service-role-s3-permissions"></a>

Attach the following policy to allow the role to access your training and validation data and to write your output data to the output bucket. Replace the values in the `Resource` list with your actual bucket names.

To restrict access to a specific folder in a bucket, add an `s3:prefix` condition key with your folder path. You can follow the **User policy** example in [Example 2: Getting a list of objects in a bucket with a specific prefix](https://docs.aws.amazon.com/AmazonS3/latest/userguide/amazon-s3-policy-keys.html#condition-key-bucket-ops-2).

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::training-bucket",
                "arn:aws:s3:::training-bucket/*",
                "arn:aws:s3:::validation-bucket",
                "arn:aws:s3:::validation-bucket/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::output-bucket",
                "arn:aws:s3:::output-bucket/*"
            ]
        }
    ]
}
```

------
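
To illustrate the `s3:prefix` restriction mentioned above, a statement like the following limits `s3:ListBucket` to keys under a single folder. This is a single-statement fragment, and the bucket and folder names are placeholders:

```
{
    "Effect": "Allow",
    "Action": "s3:ListBucket",
    "Resource": "arn:aws:s3:::training-bucket",
    "Condition": {
        "StringLike": {
            "s3:prefix": ["training-folder/*"]
        }
    }
}
```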

## Model copying permissions
<a name="custom-model-job-model-copying-permissions"></a>

To copy a model to another Region, you need specific permissions depending on your role's current permissions and the model's configuration.

1. If your role doesn't have the [AmazonBedrockFullAccess](security-iam-awsmanpol.md#security-iam-awsmanpol-AmazonBedrockFullAccess) policy attached, attach the following identity-based policy to the role to grant the minimal permissions to copy models and to track copy jobs.

------
#### [ JSON ]

****  

   ```
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Sid": "CopyModels",
               "Effect": "Allow",
               "Action": [
                   "bedrock:CreateModelCopyJob",
                   "bedrock:GetModelCopyJob",
                   "bedrock:ListModelCopyJobs"
               ],
               "Resource": [
                   "arn:aws:bedrock:us-east-1::foundation-model/model-id"
               ],
               "Condition": {
                   "StringEquals": {
                       "aws:RequestedRegion": [
                           "us-east-1"
                       ]
                   }
               }
           }
       ]
   }
   ```

------

1. (Optional) If the model to be copied is encrypted with a KMS key, attach a [key policy to the KMS key that encrypted the model](encryption-custom-job.md#encryption-key-policy-decrypt) to allow a role to decrypt it.

1. (Optional) If you plan to encrypt the model copy with a KMS key, attach a [key policy to the KMS key that will be used to encrypt the model](encryption-custom-job.md#encryption-key-policy-encrypt) to allow a role to encrypt the model with the key.
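As a sketch, the following SDK for Python (Boto3) snippet starts a copy job that the minimal permissions above allow; the ARNs and names are placeholders, and the client is created by the caller.

```python
# Placeholder request parameters for CreateModelCopyJob. Replace the ARNs
# and names with your own values.
copy_job_params = {
    "sourceModelArn": "arn:aws:bedrock:us-east-1:123456789012:custom-model/source-model-id",
    "targetModelName": "my-model-copy",
}

def copy_model(bedrock_client, params):
    """Start the copy job and return its ARN for tracking with GetModelCopyJob."""
    response = bedrock_client.create_model_copy_job(**params)
    return response["jobArn"]

# Example call (requires an Amazon Bedrock client and valid ARNs):
# job_arn = copy_model(boto3.client("bedrock"), copy_job_params)
```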


## (Optional) Permissions to create a Distillation job with a cross-region inference profile
<a name="custom-models-cross-region-inference-profile-permissions"></a>

To use a cross-region inference profile for a teacher model in a distillation job, the service role must have permissions to invoke the inference profile in an AWS Region, in addition to the model in each Region in the inference profile.

For permissions to invoke with a cross-Region (system-defined) inference profile, use the following policy as a template for the permissions policy to attach to your service role:

------
#### [ JSON ]

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "CrossRegionInference",
            "Effect": "Allow",
            "Action": [
                "bedrock:InvokeModel"
            ],
            "Resource": [
                "arn:aws:bedrock:us-east-1:123456789012:inference-profile/${InferenceProfileId}",
                "arn:aws:bedrock:us-east-1::foundation-model/${ModelId}",
                "arn:aws:bedrock:us-west-2::foundation-model/${ModelId}"
            ]
        }
    ]
}
```

------
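Because the resource list must name the inference profile plus the foundation model in every Region the profile routes to, it can help to generate the ARNs programmatically. The following Python helper is a sketch; the profile ID, model ID, and Region list are assumptions you replace with your own values.

```python
def cross_region_invoke_resources(account_id, profile_id, model_id, regions, profile_region):
    """Build the Resource ARNs for an InvokeModel policy that covers a
    cross-region inference profile and its underlying foundation model."""
    resources = [
        f"arn:aws:bedrock:{profile_region}:{account_id}:inference-profile/{profile_id}"
    ]
    # Foundation-model ARNs have no account ID, and the policy needs one
    # entry per Region that the inference profile routes to.
    resources += [
        f"arn:aws:bedrock:{region}::foundation-model/{model_id}" for region in regions
    ]
    return resources

# Placeholder values for illustration only.
arns = cross_region_invoke_resources(
    "123456789012", "us.example-profile", "example-model-id",
    ["us-east-1", "us-west-2"], "us-east-1",
)
```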

## (Optional) Encrypt model customization jobs and artifacts
<a name="custom-model-job-encryption"></a>

Encrypt the input and output data, customization jobs, or inference requests made to custom models. For more information, see [Encryption of custom models](encryption-custom-job.md).

## (Optional) Protect your model customization jobs using a VPC
<a name="vpc-model-customization"></a>

When you run a model customization job, the job accesses your Amazon S3 bucket to download the input data and to upload job metrics. To control access to your data, we recommend that you use a virtual private cloud (VPC) with [Amazon VPC](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html). You can further protect your data by configuring your VPC so that your data isn't available over the internet and instead creating a VPC interface endpoint with [AWS PrivateLink](https://docs.aws.amazon.com/vpc/latest/privatelink/what-is-privatelink.html) to establish a private connection to your data. For more information about how Amazon VPC and AWS PrivateLink integrate with Amazon Bedrock, see [Protect your data using Amazon VPC and AWS PrivateLink](usingVPC.md).

Do the following steps to configure and use a VPC for the training, validation, and output data for your model customization jobs.

**Topics**
+ [Set up VPC to protect your data during model customization](#vpc-cm-setup)
+ [Attach VPC permissions to a model customization role](#vpc-data-access-role)
+ [Add the VPC configuration when submitting a model customization job](#vpc-config)

### Set up VPC to protect your data during model customization
<a name="vpc-cm-setup"></a>

To set up a VPC, follow the steps at [Set up a VPC](usingVPC.md#create-vpc). You can further secure your VPC by setting up an S3 VPC endpoint and using resource-based IAM policies to restrict access to the S3 bucket containing your model customization data by following the steps at [(Example) Restrict data access to your Amazon S3 data using VPC](vpc-s3.md).

### Attach VPC permissions to a model customization role
<a name="vpc-data-access-role"></a>

After you finish setting up your VPC, attach the following permissions to your [model customization service role](model-customization-iam-role.md) to allow it to access the VPC. Modify this policy to allow access to only the VPC resources that your job needs. Replace the *subnet-id*, *subnet-id2*, and *security-group-id* values with those from your VPC.

------
#### [ JSON ]

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeNetworkInterfaces",
                "ec2:DescribeVpcs",
                "ec2:DescribeDhcpOptions",
                "ec2:DescribeSubnets",
                "ec2:DescribeSecurityGroups"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ec2:CreateNetworkInterface"
            ],
            "Resource": [
                "arn:aws:ec2:us-east-1:123456789012:network-interface/*"
            ],
            "Condition": {
                "StringEquals": {
                    "aws:RequestTag/BedrockManaged": [
                        "true"
                    ]
                },
                "ArnEquals": {
                    "aws:RequestTag/BedrockModelCustomizationJobArn": [
                        "arn:aws:bedrock:us-east-1:123456789012:model-customization-job/*"
                    ]
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": [
                "ec2:CreateNetworkInterface"
            ],
            "Resource": [
                "arn:aws:ec2:us-east-1:123456789012:subnet/subnet-id",
                "arn:aws:ec2:us-east-1:123456789012:subnet/subnet-id2",
                "arn:aws:ec2:us-east-1:123456789012:security-group/security-group-id"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "ec2:CreateNetworkInterfacePermission",
                "ec2:DeleteNetworkInterface",
                "ec2:DeleteNetworkInterfacePermission"
            ],
            "Resource": "*",
            "Condition": {
                "ArnEquals": {
                    "ec2:Subnet": [
                        "arn:aws:ec2:us-east-1:123456789012:subnet/subnet-id",
                        "arn:aws:ec2:us-east-1:123456789012:subnet/subnet-id2"
                    ],
                    "ec2:ResourceTag/BedrockModelCustomizationJobArn": [
                        "arn:aws:bedrock:us-east-1:123456789012:model-customization-job/*"
                    ]
                },
                "StringEquals": {
                    "ec2:ResourceTag/BedrockManaged": "true"
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": [
                "ec2:CreateTags"
            ],
            "Resource": "arn:aws:ec2:us-east-1:123456789012:network-interface/*",
            "Condition": {
                "StringEquals": {
                    "ec2:CreateAction": [
                        "CreateNetworkInterface"
                    ]
                },
                "ForAllValues:StringEquals": {
                    "aws:TagKeys": [
                        "BedrockManaged",
                        "BedrockModelCustomizationJobArn"
                    ]
                }
            }
        }
    ]
}
```

------

### Add the VPC configuration when submitting a model customization job
<a name="vpc-config"></a>

After you configure the VPC and the required roles and permissions as described in the previous sections, you can create a model customization job that uses this VPC.

When you specify the VPC subnets and security groups for a job, Amazon Bedrock creates *elastic network interfaces* (ENIs) that are associated with your security groups in one of the subnets. ENIs allow the Amazon Bedrock job to connect to resources in your VPC. For information about ENIs, see [Elastic Network Interfaces](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_ElasticNetworkInterfaces.html) in the *Amazon VPC User Guide*. Amazon Bedrock tags ENIs that it creates with `BedrockManaged` and `BedrockModelCustomizationJobArn` tags.

We recommend that you provide at least one subnet in each Availability Zone.

You can use security groups to establish rules for controlling Amazon Bedrock access to your VPC resources.

You can configure the VPC either in the console or through the API. Choose the tab for your preferred method, and then follow the steps:

------
#### [ Console ]

For the Amazon Bedrock console, you specify VPC subnets and security groups in the optional **VPC settings** section when you create the model customization job. For more information about configuring jobs, see [Submit a model customization job for fine-tuning](model-customization-submit.md).

**Note**  
For a job that includes VPC configuration, the console can't automatically create a service role for you. Follow the guidance at [Create a service role for model customization](model-customization-iam-role.md) to create a custom role.

------
#### [ API ]

When you submit a [CreateModelCustomizationJob](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_CreateModelCustomizationJob.html) request, you can include a `VpcConfig` as a request parameter to specify the VPC subnets and security groups to use, as in the following example.

```
"vpcConfig": {
    "securityGroupIds": [
        "sg-0123456789abcdef0"
    ],
    "subnets": [
        "subnet-0123456789abcdef0",
        "subnet-0123456789abcdef1",
        "subnet-0123456789abcdef2"
    ]
}
```

------
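With the SDK for Python (Boto3), the same `vpcConfig` travels as a parameter of `create_model_customization_job`. The following is a sketch; every ID, name, and S3 URI is a placeholder.

```python
# Placeholder fine-tuning job request that carries the VPC configuration.
# Parameter names follow the CreateModelCustomizationJob API reference.
job_params = {
    "jobName": "my-finetune-job",
    "customModelName": "my-custom-model",
    "roleArn": "arn:aws:iam::123456789012:role/MyCustomizationRole",
    "baseModelIdentifier": "base-model-id",
    "trainingDataConfig": {"s3Uri": "s3://training-bucket/train.jsonl"},
    "outputDataConfig": {"s3Uri": "s3://output-bucket/"},
    "hyperParameters": {"epochCount": "2"},
    "vpcConfig": {
        "securityGroupIds": ["sg-0123456789abcdef0"],
        "subnets": ["subnet-0123456789abcdef0", "subnet-0123456789abcdef1"],
    },
}

def submit_job(bedrock_client, params):
    """Submit the customization job and return its ARN."""
    return bedrock_client.create_model_customization_job(**params)["jobArn"]
```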

# Monitor your model customization job
<a name="model-customization-monitor"></a>

You can track the progress of your model customization job. Each job consists of the following events:
+ **Validation**
+ **Data processing**
+ **Training**

If your job fails for some reason, you can see where in the process the failure occurred. Use this information to [troubleshoot](fine-tuning-troubleshooting.md) the issue.

------
#### [ Console ]

**To monitor the status of your fine-tuning jobs**

1. Sign in to the AWS Management Console with an IAM identity that has permissions to use the Amazon Bedrock console. Then, open the Amazon Bedrock console at [https://console.aws.amazon.com/bedrock](https://console.aws.amazon.com/bedrock).

1. From the left navigation pane, choose **Custom models** under **Tune**.

1. Choose the job from the **Jobs** table to see job-related details.

------
#### [ API ]

To list information about all your model customization jobs, send a [ListModelCustomizationJobs](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_ListModelCustomizationJobs.html) request with an [Amazon Bedrock control plane endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#br-cp). Refer to [ListModelCustomizationJobs](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_ListModelCustomizationJobs.html) for filters that you can use.

To monitor the status of a model customization job, send a [GetModelCustomizationJob](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_GetModelCustomizationJob.html) request with an [Amazon Bedrock control plane endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#br-cp) with the `jobArn` of the job.

To list all the tags for a model customization job, send a [ListTagsForResource](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_ListTagsForResource.html) request with an [Amazon Bedrock control plane endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#br-cp) and include the Amazon Resource Name (ARN) of the job.
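A minimal polling helper built on `GetModelCustomizationJob` might look like the following sketch; the status strings follow the API reference, and the client is created by the caller.

```python
import time

def wait_for_customization_job(bedrock_client, job_arn, poll_seconds=60):
    """Poll GetModelCustomizationJob until the job leaves the InProgress state,
    then return the terminal status (Completed, Failed, Stopping, or Stopped)."""
    while True:
        job = bedrock_client.get_model_customization_job(jobIdentifier=job_arn)
        status = job["status"]
        if status != "InProgress":
            return status
        time.sleep(poll_seconds)
```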

[See code examples](model-customization-code-samples.md)

------

You can also monitor model customization jobs with Amazon EventBridge. For more information, see [Monitor Amazon Bedrock job state changes using Amazon EventBridge](monitoring-eventbridge.md).

# Analyze the results of a model customization job
<a name="model-customization-analyze"></a>

When your model customization job completes, you can analyze the results of the customization process. The following artifacts are uploaded to the S3 bucket that you specify when creating your model customization job:
+ **Training and validation metrics** – Amazon Bedrock provides training metrics for all model customization jobs. Validation metrics are also included with some model customization jobs.
+ **Synthetic data (Model Distillation only)** – Sample prompts from the synthetic dataset that Amazon Bedrock generated from your teacher model and used to fine-tune your student model during the [distillation job](submit-model-distillation-job.md). This information can help you further understand and validate how your custom model was trained.
+ **Prompt insights (Model Distillation only)** – A report of input prompts that were accepted and rejected (along with a reason why) during distillation. This information can help you fix and refine your prompts if you need to run another distillation job.

 Amazon Bedrock stores your customized models in AWS managed storage scoped to your AWS account.

You can also evaluate your model by running a model evaluation job. For more information, see [Evaluate the performance of Amazon Bedrock resources](evaluation.md).

The following example shows where you can find training and validation metrics in an S3 bucket:

```
- model-customization-job-training-job-id/
    - training_artifacts/
        - step_wise_training_metrics.csv
    - validation_artifacts/
        - post_fine_tuning_validation/
            - validation_metrics.csv
```

Use the `step_wise_training_metrics.csv` and the `validation_metrics.csv` files to analyze the model customization job and to help you adjust the model as necessary.

The columns in the `step_wise_training_metrics.csv` file are as follows.
+ `step_number` – The step in the training process. Starts from 0.
+ `epoch_number` – The epoch in the training process.
+ `training_loss` – Indicates how well the model fits the training data. A lower value indicates a better fit.
+ `perplexity` – Indicates how well the model can predict a sequence of tokens. A lower value indicates better predictive ability.

The columns in the `validation_metrics.csv` file are the same as the training file, except that `validation_loss` (how well the model fits the validation data) appears in place of `training_loss`.
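Because the metrics files are plain CSV, you can inspect them with standard tooling. The following sketch parses sample rows (illustrative values, not real job output) and checks that perplexity tracks `exp(training_loss)`, which is how the two columns typically relate.

```python
import csv
import io
import math

# Illustrative sample of a step_wise_training_metrics.csv file; a real file
# comes from the output S3 bucket of your customization job.
sample = """step_number,epoch_number,training_loss,perplexity
0,0,2.30,9.97
1,0,1.95,7.03
2,1,1.60,4.95
"""

rows = list(csv.DictReader(io.StringIO(sample)))
final = rows[-1]
final_loss = float(final["training_loss"])

# Perplexity is typically exp(cross-entropy loss), so the two columns
# should move together as training progresses.
assert math.isclose(float(final["perplexity"]), math.exp(final_loss), rel_tol=0.01)
```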



You can find the output files by opening the Amazon S3 console at [https://console.aws.amazon.com/s3](https://console.aws.amazon.com/s3) directly or by following the link to the output folder within your model details. Choose the tab for your preferred method, and then follow the steps:

------
#### [ Console ]

1. Sign in to the AWS Management Console with an IAM identity that has permissions to use the Amazon Bedrock console. Then, open the Amazon Bedrock console at [https://console.aws.amazon.com/bedrock](https://console.aws.amazon.com/bedrock).

1. From the left navigation pane, choose **Custom models** under **Tune**.

1. In the **Models** tab, select a model to view its details. The **Job name** can be found in the **Model details** section.

1. To view the output S3 files, select the **S3 location** in the **Output data** section.

1. Find the training and validation metrics files in the folder whose name matches the **Job name** for the model.

------
#### [ API ]

To list information about all your custom models, send a [ListCustomModels](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_ListCustomModels.html) (see link for request and response formats and field details) request with an [Amazon Bedrock control plane endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#br-cp). Refer to [ListCustomModels](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_ListCustomModels.html) for filters that you can use.

To list all the tags for a custom model, send a [ListTagsForResource](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_ListTagsForResource.html) request with an [Amazon Bedrock control plane endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#br-cp) and include the Amazon Resource Name (ARN) of the custom model.

To monitor the status of a model customization job, send a [GetCustomModel](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_GetCustomModel.html) (see link for request and response formats and field details) request with an [Amazon Bedrock control plane endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#br-cp) with the `modelIdentifier`, which is either of the following.
+ The name that you gave the model.
+ The ARN of the model.

You can see `trainingMetrics` and `validationMetrics` for a model customization job in either the [GetModelCustomizationJob](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_GetModelCustomizationJob.html) or [GetCustomModel](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_GetCustomModel.html) response.

To download the training and validation metrics files, follow the steps at [Downloading objects](https://docs.aws.amazon.com/AmazonS3/latest/userguide/download-objects.html). Use the S3 URI you provided in the `outputDataConfig`.

[See code examples](model-customization-code-samples.md)

------

# Stop a model customization job
<a name="model-customization-stop"></a>

You can stop an Amazon Bedrock model customization job while it's in progress. Choose the tab for your preferred method, and then follow the steps:

**Warning**  
You can't resume a stopped job. Amazon Bedrock charges for the tokens that it used to train the model before you stopped the job. Amazon Bedrock doesn't create an intermediate custom model for a stopped job.

------
#### [ Console ]

**To stop a model customization job**

1. Sign in to the AWS Management Console with an IAM identity that has permissions to use the Amazon Bedrock console. Then, open the Amazon Bedrock console at [https://console.aws.amazon.com/bedrock](https://console.aws.amazon.com/bedrock).

1. From the left navigation pane, choose **Custom models** under **Tune**.

1. In the **Training Jobs** tab, choose the radio button next to the job to stop or select the job to stop to navigate to the details page.

1. Select the **Stop job** button. You can only stop a job if its status is `Training`.

1. A modal appears to warn you that you can't resume the training job if you stop it. Select **Stop job** to confirm.

------
#### [ API ]

To stop a model customization job, send a [StopModelCustomizationJob](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_StopModelCustomizationJob.html) (see link for request and response formats and field details) request with an [Amazon Bedrock control plane endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#br-cp), using the `jobArn` of the job.

You can only stop a job if its status is `IN_PROGRESS`. Check the `status` with a [GetModelCustomizationJob](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_GetModelCustomizationJob.html) request. The system marks the job for termination and sets the state to `STOPPING`. Once the job is stopped, the state becomes `STOPPED`.
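The status check and stop request can be combined in a small helper, sketched here with the SDK for Python (Boto3); the status strings follow the API reference, and the client is created by the caller.

```python
def stop_if_in_progress(bedrock_client, job_arn):
    """Stop a customization job only when its status allows it, and report
    the resulting state."""
    status = bedrock_client.get_model_customization_job(jobIdentifier=job_arn)["status"]
    if status == "InProgress":
        bedrock_client.stop_model_customization_job(jobIdentifier=job_arn)
        # The service marks the job for termination before it fully stops.
        return "Stopping"
    return status
```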

[See code examples](model-customization-code-samples.md)

------

# View details about a custom model
<a name="model-customization-view"></a>

To learn how to view details about your customized model, choose the tab for your preferred method, and then follow the steps:

------
#### [ Console ]

**To view information about a custom model**

1. Sign in to the AWS Management Console with an IAM identity that has permissions to use the Amazon Bedrock console. Then, open the Amazon Bedrock console at [https://console.aws.amazon.com/bedrock](https://console.aws.amazon.com/bedrock).

1. From the left navigation pane, choose **Custom models** under **Tune**.

1. From the **Models** section, select a model.

1. View the details for the custom model configuration and how it was customized.

------
#### [ API ]

To retrieve information about a specific custom model, send a [GetCustomModel](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_GetCustomModel.html) request with an [Amazon Bedrock control plane endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#br-cp). Specify either the name of the custom model or its ARN as the `modelIdentifier`.

To list information about all the custom models in an account, send a [ListCustomModels](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_ListCustomModels.html) request with an [Amazon Bedrock control plane endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#br-cp). To control the number of results that are returned, you can specify the following optional parameters:



| Field | Short description | 
| --- | --- | 
| maxResults | The maximum number of results to return in a response. | 
| nextToken | If there are more results than the number you specified in the maxResults field, the response returns a nextToken value. To see the next batch of results, send the nextToken value in another request. | 

For other optional parameters that you can specify to sort and filter the results, see [ListCustomModels](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_ListCustomModels.html).
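The `nextToken` loop can be wrapped in a small helper, sketched here with the SDK for Python (Boto3); the response field names follow the ListCustomModels API reference, and the client is created by the caller.

```python
def list_all_custom_models(bedrock_client, page_size=10):
    """Collect every custom model summary by following nextToken pages."""
    models, token = [], None
    while True:
        kwargs = {"maxResults": page_size}
        if token:
            kwargs["nextToken"] = token
        page = bedrock_client.list_custom_models(**kwargs)
        models.extend(page.get("modelSummaries", []))
        token = page.get("nextToken")
        if not token:
            return models
```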

To list all the tags for a custom model, send a [ListTagsForResource](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_ListTagsForResource.html) request with an [Amazon Bedrock control plane endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#br-cp) and include the Amazon Resource Name (ARN) of the custom model.

------

# Set up inference for a custom model
<a name="model-customization-use"></a>

After you create a custom model, you can set up inference using one of the following options:
+ **Purchase Provisioned Throughput** – Purchase Provisioned Throughput for your model to set up dedicated compute capacity with guaranteed throughput for consistent performance and lower latency. 

  For more information about Provisioned Throughput, see [Increase model invocation capacity with Provisioned Throughput in Amazon Bedrock](prov-throughput.md). For more information about using custom models with Provisioned Throughput, see [Purchase Provisioned Throughput for a custom model](custom-model-use-pt.md).
+ **Deploy custom model for on-demand inference** – To set up on-demand inference, you deploy the model with a custom model deployment. After you deploy the model, you invoke it using the ARN for the custom model deployment. With on-demand inference, you only pay for what you use and you don't need to set up provisioned compute resources.

  For more information about deploying custom models for on-demand inference, see [Deploy a custom model for on-demand inference](deploy-custom-model-on-demand.md).

**Topics**
+ [Purchase Provisioned Throughput for a custom model](custom-model-use-pt.md)
+ [Deploy a custom model for on-demand inference](deploy-custom-model-on-demand.md)

# Purchase Provisioned Throughput for a custom model
<a name="custom-model-use-pt"></a>

To use a custom model with dedicated compute capacity and guaranteed throughput, you can purchase Provisioned Throughput for it. You can then use the resulting provisioned model for inference. For more information about Provisioned Throughput, see [Increase model invocation capacity with Provisioned Throughput in Amazon Bedrock](prov-throughput.md).

------
#### [ Console ]

**To purchase Provisioned Throughput for a custom model**

1. Sign in to the AWS Management Console with an IAM identity that has permissions to use the Amazon Bedrock console. Then, open the Amazon Bedrock console at [https://console.aws.amazon.com/bedrock](https://console.aws.amazon.com/bedrock).

1. From the left navigation pane, choose **Custom models** under **Tune**.

1. In the **Models** tab, choose the radio button next to the model for which you want to buy Provisioned Throughput or select the model name to navigate to the details page.

1. Select **Purchase Provisioned Throughput**.

1. For more details, follow the steps in the provisioned throughput documentation.

1. After purchasing Provisioned Throughput for your custom model, follow the steps in the provisioned throughput usage documentation.

When you carry out any operation that supports usage of custom models, you will see your custom model as an option in the model selection menu.

------
#### [ API ]

To purchase Provisioned Throughput for a custom model, follow the steps in the provisioned throughput documentation to send a [CreateProvisionedModelThroughput](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_CreateProvisionedModelThroughput.html) (see link for request and response formats and field details) request with an [Amazon Bedrock control plane endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#br-cp). Use the name or ARN of your custom model as the `modelId`. The response returns a `provisionedModelArn` that you can use as the `modelId` when making an [InvokeModel](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModel.html) or [InvokeModelWithResponseStream](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModelWithResponseStream.html) request.
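As a sketch, the following SDK for Python (Boto3) snippet shows the request shape; the model ARN, name, and unit count are placeholders, and the client is created by the caller.

```python
# Placeholder request parameters for CreateProvisionedModelThroughput.
pt_params = {
    "provisionedModelName": "my-provisioned-custom-model",
    "modelId": "arn:aws:bedrock:us-east-1:123456789012:custom-model/my-custom-model",
    "modelUnits": 1,
    # Omit commitmentDuration for no commitment; otherwise specify a term
    # such as "OneMonth".
}

def purchase_throughput(bedrock_client, params):
    """Create the Provisioned Throughput and return its ARN for inference."""
    return bedrock_client.create_provisioned_model_throughput(**params)["provisionedModelArn"]
```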

[See code examples](model-customization-code-samples.md)

------

# Deploy a custom model for on-demand inference
<a name="deploy-custom-model-on-demand"></a>

 After you create a custom model with a model customization job or import a SageMaker AI-trained custom Amazon Nova model, you can set up on-demand inference for the model. With on-demand inference, you only pay for what you use and you don't need to set up provisioned compute resources. 

To set up on-demand inference for a custom model, you deploy it with a custom model deployment. After you deploy your custom model, you use the deployment's Amazon Resource Name (ARN) as the `modelId` parameter when you submit prompts and generate responses with model inference.

 For information about on-demand inference pricing, see [Amazon Bedrock pricing](https://aws.amazon.com/bedrock/pricing). You can deploy a custom model for on-demand inference in the following Regions (for more information about Regions supported in Amazon Bedrock, see [Amazon Bedrock endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/bedrock.html)): 
+ US East (N. Virginia)
+ US West (Oregon)

## Prerequisites for deploying a custom model for on-demand inference
<a name="custom-model-inference-prerequisites"></a>

Before you can deploy a custom model for on-demand inference, make sure you meet the following requirements:
+ You must use the US East (N. Virginia) or US West (Oregon) region.
+ You must customize the model on or after July 16, 2025. For supported models, see [Supported base models](#custom-model-inference-supported-models).
+ Your account must have permission to access the model that you are deploying. For more information about model customization access and security, see [Model customization access and security](custom-model-job-access-security.md).
+ If the model is encrypted with an AWS KMS key, you must have permission to use that key. For more information, see [Encryption of custom models](encryption-custom-job.md).

## Supported base models
<a name="custom-model-inference-supported-models"></a>

You can set up on-demand inference for the following base models:
+ Amazon Nova Lite
+ Amazon Nova Micro
+ Amazon Nova Pro
+ Meta Llama 3.3 70B Instruct

## Deploy a custom model
<a name="deploy-custom-model"></a>

You can deploy a custom model with the Amazon Bedrock console, AWS Command Line Interface, or AWS SDKs. For information about using the deployment for inference, see [Use a deployment for on-demand inference](#use-custom-model-on-demand).

------
#### [ Console ]

You deploy a custom model from the **Custom models** page as follows. You can also deploy a model from the **Custom model on-demand** page with the same fields. To find this page, under **Infer** in the navigation pane, choose **Custom model on-demand**.

**To deploy a custom model**

1. Sign in to the AWS Management Console with an IAM identity that has permissions to use the Amazon Bedrock console. Then, open the Amazon Bedrock console at [https://console.aws.amazon.com/bedrock](https://console.aws.amazon.com/bedrock).

1. From the left navigation pane, choose **Custom models** under **Tune**.

1. In the **Models** tab, choose the radio button for the model you want to deploy.

1. Choose **Set up inference** and choose **Deploy for on-demand**.

1. In **Deployment details**, provide the following information:
   + **Deployment Name** (required) – Enter a unique name for your deployment.
   + **Description** (optional) – Enter a description for your deployment.
   + **Tags** (optional) – Add tags for cost allocation and resource management.

1. Choose **Create**. When the deployment's status is `Active`, your custom model is ready for on-demand inference. For more information about using the custom model, see [Use a deployment for on-demand inference](#use-custom-model-on-demand).

------
#### [ CLI ]

To deploy a custom model for on-demand inference using the AWS Command Line Interface, use the `create-custom-model-deployment` command with your custom model's Amazon Resource Name (ARN). This command uses the [CreateCustomModelDeployment](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_CreateCustomModelDeployment.html) API operation. The response includes the deployment's ARN. When the deployment is active, you use this ARN as the `modelId` when making inference requests. For information about using the deployment for inference, see [Use a deployment for on-demand inference](#use-custom-model-on-demand).

```
aws bedrock create-custom-model-deployment \
--model-deployment-name "Unique name" \
--model-arn "Custom Model ARN" \
--description "Deployment description" \
--tags '[
    {
        "key": "Environment",
        "value": "Production"
    },
    {
        "key": "Team",
        "value": "ML-Engineering"
    },
    {
        "key": "Project",
        "value": "CustomerSupport"
    }
]' \
--client-request-token "unique-deployment-token" \
--region region
```

------
#### [ API ]

To deploy a custom model for on-demand inference, use the [CreateCustomModelDeployment](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_CreateCustomModelDeployment.html) API operation with your custom model's Amazon Resource Name (ARN). The response includes the deployment's ARN. When the deployment is active, you use this ARN as the `modelId` when making inference requests. For information about using the deployment for inference, see [Use a deployment for on-demand inference](#use-custom-model-on-demand).

The following code shows how to use the SDK for Python (Boto3) to deploy a custom model.

```
import uuid

def create_custom_model_deployment(bedrock_client):
    """Create a custom model deployment

    Args:
        bedrock_client: A boto3 Amazon Bedrock client for making API calls

    Returns:
        str: The ARN of the new custom model deployment

    Raises:
        Exception: If there is an error creating the deployment
    """

    try:
        response = bedrock_client.create_custom_model_deployment(
            modelDeploymentName="Unique deployment name",
            modelArn="Custom Model ARN",
            description="Deployment description",
            tags=[
                {'key': 'Environment', 'value': 'Production'},
                {'key': 'Team', 'value': 'ML-Engineering'},
                {'key': 'Project', 'value': 'CustomerSupport'}
            ],
            clientRequestToken=f"deployment-{uuid.uuid4()}"
        )

        deployment_arn = response['customModelDeploymentArn']
        print(f"Deployment created: {deployment_arn}")
        return deployment_arn

    except Exception as e:
        print(f"Error creating deployment: {str(e)}")
        raise
```

------

## Use a deployment for on-demand inference
<a name="use-custom-model-on-demand"></a>

After you deploy your custom model, you use the deployment's Amazon Resource Name (ARN) as the `modelId` parameter when you submit prompts and generate responses with model inference.

For information about making inference requests, see the following topics:
+ [Submit prompts and generate responses with model inference](inference.md)
+ [Prerequisites for running model inference](inference-prereq.md)
+ [Submit prompts and generate responses using the API](inference-api.md)
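As a sketch of how this fits together with the SDK for Python (Boto3), the following hypothetical helper passes a deployment ARN as the `modelId` in a Converse API request. The function name and placeholder values are illustrative; like the other examples in this guide, the function takes a client (here, an Amazon Bedrock Runtime client) as an argument.

```python
def invoke_custom_model_deployment(bedrock_runtime_client, deployment_arn, prompt):
    """Generate a response from a custom model deployment.

    Args:
        bedrock_runtime_client: A boto3 Amazon Bedrock Runtime client
        deployment_arn: The ARN of the custom model deployment
        prompt: The prompt to send to the model

    Returns:
        str: The text of the model's response
    """
    response = bedrock_runtime_client.converse(
        modelId=deployment_arn,  # the deployment ARN, not the custom model ARN
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    return response["output"]["message"]["content"][0]["text"]
```

You could call this helper with a runtime client created by `boto3.client("bedrock-runtime")` and the deployment ARN returned by `CreateCustomModelDeployment`.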

## Delete a custom model deployment
<a name="delete-custom-model-deployment"></a>

After you are finished using your model for on-demand inference, you can delete the deployment. After you delete a deployment, you can no longer use it for on-demand inference. Deleting a deployment doesn't delete the underlying custom model.

You can delete a custom model deployment with the Amazon Bedrock console, AWS Command Line Interface, or AWS SDKs.

**Important**  
Deleting a custom model deployment is irreversible. Make sure you no longer need the deployment before proceeding with the deletion. If you need to use the custom model for on-demand inference again, you must create a new deployment.

------
#### [ Console ]

**To delete a custom model deployment**

1. In the navigation pane, under **Infer**, choose **Custom model on-demand**.

1. Choose the custom model deployment you want to delete.

1. Choose **Delete**.

1. In the confirmation dialog, enter the deployment name to confirm the deletion.

1. Choose **Delete** to confirm deletion.

------
#### [ CLI ]

To delete a custom model deployment using the AWS Command Line Interface, use the `delete-custom-model-deployment` command with your deployment identifier. This command uses the [DeleteCustomModelDeployment](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_DeleteCustomModelDeployment.html) API operation. 

```
aws bedrock delete-custom-model-deployment \
--custom-model-deployment-identifier "deployment-arn-or-name" \
--region region
```

------
#### [ API ]

To delete a custom model deployment programmatically, use the [DeleteCustomModelDeployment](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_DeleteCustomModelDeployment.html) API operation with the deployment's Amazon Resource Name (ARN) or name. The following code shows how to use the SDK for Python (Boto3) to delete a custom model deployment.

```
def delete_custom_model_deployment(bedrock_client):
    """Delete a custom model deployment

    Args:
        bedrock_client: A boto3 Amazon Bedrock client for making API calls

    Returns:
        dict: The response from the delete operation

    Raises:
        Exception: If there is an error deleting the deployment
    """

    try:
        response = bedrock_client.delete_custom_model_deployment(
            customModelDeploymentIdentifier="Deployment identifier"
        )

        print("Deleting deployment...")
        return response

    except Exception as e:
        print(f"Error deleting deployment: {str(e)}")
        raise
```

------

# Share a model for another account to use
<a name="share-model"></a>

By default, models are only available in the Region and account in which they were created. Amazon Bedrock provides you the ability to share custom models with other accounts so that they can use them. The general process to share a model with another account is as follows:

1. Sign up for an AWS Organizations account, create an organization, and add the account that will share the model and the account that will receive the model to the organization.

1. Set up IAM permissions for the following:
   + The account that will share the model.
   + The model that will be shared.

1. Share the model with the help of AWS Resource Access Manager.

1. The recipient account copies the model to the Region in which they want to use it.

**Topics**
+ [Supported Regions and models for model sharing](share-model-support.md)
+ [Fulfill prerequisites to share models](share-model-prereq.md)
+ [Share a model with another account](share-model-share.md)
+ [View information about shared models](share-model-view.md)
+ [Update access to a shared model](share-model-edit.md)
+ [Revoke access to a shared model](share-model-revoke.md)

# Supported Regions and models for model sharing
<a name="share-model-support"></a>

The following list provides links to general information about Regional and model support in Amazon Bedrock:
+ For a list of Region codes and endpoints supported in Amazon Bedrock, see [Amazon Bedrock endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#bedrock_region).
+ For a list of Amazon Bedrock model IDs to use when calling Amazon Bedrock API operations, see [Supported foundation models in Amazon Bedrock](models-supported.md).

The following table shows the models that you can share and the Regions from which you can share:


| Provider | Model | Model ID | Single-region model support | 
| --- | --- | --- | --- | 
| Amazon | Titan Multimodal Embeddings G1 | amazon.titan-embed-image-v1 |  ap-south-1 ap-southeast-2 eu-west-1 eu-west-3 us-east-1 us-west-2  | 
| Anthropic | Claude 3 Haiku | anthropic.claude-3-haiku-20240307-v1:0 |  ap-south-1 ap-southeast-2 eu-west-1 eu-west-2 us-east-1 us-west-2  | 

**Note**  
Custom Amazon Titan Text Premier models aren't shareable because they can't be [copied to a Region](copy-model.md).

# Fulfill prerequisites to share models
<a name="share-model-prereq"></a>

Amazon Bedrock interfaces with the [AWS Resource Access Manager](https://docs.aws.amazon.com/ram/latest/userguide/) and [AWS Organizations](https://docs.aws.amazon.com/organizations/latest/userguide/) services to allow the sharing of models. Before you can share a model with another account, you must fulfill the following prerequisites:

## Create an organization with AWS Organizations and add the model sharer and recipient
<a name="share-model-prereq-orgs"></a>

For an account to share a model with another account, the two accounts must be part of the same organization in AWS Organizations and resource sharing in AWS RAM must be enabled for the organization. To set up an organization and invite accounts to it, do the following:

1. Create an organization in AWS Organizations by following the steps at [Creating an organization](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_org_create.html) in the AWS Organizations User Guide.

1. Enable resource sharing through AWS RAM in AWS Organizations by following the steps at [Enable resource sharing within AWS Organizations](https://docs.aws.amazon.com/ram/latest/userguide/getting-started-sharing.html#getting-started-sharing-orgs) in the AWS RAM User Guide.

1. Invite the account that you want to share the model with by following the steps at [Inviting an AWS account to join your organization](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_invites.html) in the AWS Organizations User Guide.

1. The administrator of the account you sent an invitation to must accept the invitation by following the steps at [Accepting or declining an invitation from an organization](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_invites.html#orgs_manage_accounts_accept-decline-invite).

## Add an identity-based policy to an IAM role to allow it to share a model
<a name="share-model-prereq-ibp"></a>

For a role to have permissions to share a model, it must have permissions to both Amazon Bedrock and AWS RAM actions. Attach the following policies to the role:

1. To provide permissions for a role to manage sharing of a model with another account through AWS Resource Access Manager, attach the following identity-based policy to the role to provide minimal permissions:

------
#### [ JSON ]

****  

   ```
   {
       "Version": "2012-10-17",
       "Statement": [
           {
              "Sid": "ShareResources",
              "Effect": "Allow",
              "Action": [
                  "ram:CreateResourceShare",
                  "ram:UpdateResourceShare",
                  "ram:DeleteResourceShare",
                  "ram:AssociateResourceShare",
                  "ram:DisassociateResourceShare",
                  "ram:GetResourceShares"
              ],
              "Resource": [
                  "model-arn"
              ]
           }
       ]
   }
   ```

------

   Replace *model-arn* with the Amazon Resource Name (ARN) of the model that you want to share. Add models to the `Resource` list as necessary. You can review the [Actions, resources, and condition keys for AWS Resource Access Manager](https://docs.aws.amazon.com/service-authorization/latest/reference/list_awsresourceaccessmanagerram.html) and modify the AWS RAM actions that the role can carry out as necessary.
**Note**  
You can also attach the more permissive [AWSResourceManagerFullAccess managed policy](https://docs.aws.amazon.com/ram/latest/userguide/security-iam-managed-policies.html#security-iam-managed-policies-AWSResourceAccessManagerFullAccess) to the role.

1. Check that the role has the [AmazonBedrockFullAccess policy](security-iam-awsmanpol.md#security-iam-awsmanpol-AmazonBedrockFullAccess) attached. If it doesn't, you must also attach the following policy to the role to allow it to share models (replacing *model-arn* as necessary):

------
#### [ JSON ]

****  

   ```
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Sid": "ShareCustomModels",
               "Effect": "Allow",
               "Action": [
                   "bedrock:GetCustomModel",
                   "bedrock:ListCustomModels",
                   "bedrock:PutResourcePolicy",
                   "bedrock:GetResourcePolicy",
                   "bedrock:DeleteResourcePolicy"
               ],
               "Resource": [
                   "model-arn"
               ]
           }
       ]
   }
   ```

------

## (Optional) Set up KMS key policies to encrypt a model and to allow it to be decrypted
<a name="share-model-prereq-kms"></a>

**Note**  
Skip this prerequisite if the model you're sharing is not encrypted with a customer managed key and you don't plan to encrypt it.

If you need to encrypt a model with a customer managed key before sharing it with another account, attach permissions to the KMS key that you'll use to encrypt the model by following the steps at [Set up key permissions for encrypting custom models](encryption-custom-job.md#encryption-cm).

If the model you share with another account is encrypted with a customer managed key, attach permissions to the KMS key that encrypted the model to allow the recipient account to decrypt it by following the steps at [Set up key permissions for copying custom models](encryption-custom-job.md#encryption-copy).

# Share a model with another account
<a name="share-model-share"></a>

After you [fulfill the prerequisites](share-model-prereq.md), you can share a model. Choose the tab for your preferred method, and then follow the steps:

------
#### [ Console ]

1. Sign in to the AWS Management Console with an IAM identity that has permissions to use the Amazon Bedrock console. Then, open the Amazon Bedrock console at [https://console.aws.amazon.com/bedrock](https://console.aws.amazon.com/bedrock).

1. From the left navigation pane, choose **Custom models** under **Tune**.

1. Select the button next to the model that you want to share. Then, choose the three dots (![\[Vertical ellipsis icon representing a menu or more options.\]](http://docs.aws.amazon.com/bedrock/latest/userguide/images/icons/vertical-ellipsis.png)) and select **Share**.

1. In the **Model sharing details** section, do the following:

   1. In the **Name for shared model** field, give the shared model a name.

   1. In the **Recipient account ID** field, specify the ID of the account that will receive the model.

   1. (Optional) To add tags, expand the **Tags** section. For more information, see [Tagging Amazon Bedrock resources](tagging.md).

1. Choose **Share model**. After the recipient accepts the model in [Resource Access Manager](https://docs.aws.amazon.com/ram/latest/userguide/getting-started-shared.html), the model appears in their list of custom models.

------
#### [ API ]

To share a model, send a [CreateResourceShare](https://docs.aws.amazon.com/ram/latest/APIReference/API_CreateResourceShare.html) request with an [AWS Resource Access Manager endpoint](https://docs.aws.amazon.com/general/latest/gr/ram.html). Minimally, provide the following fields:


****  

| Field | Use case | 
| --- | --- | 
| name | To provide a name for the resource share. | 
| resourceArns | To specify the ARNs of each model to share. | 
| principals | To specify the principals to share the model with. | 

The [CreateResourceShare](https://docs.aws.amazon.com/ram/latest/APIReference/API_CreateResourceShare.html) response returns a `resourceShareArn` that you can use to manage the resource share.
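As a sketch with the SDK for Python (Boto3), the following hypothetical helper creates a resource share for a single model using an AWS RAM client. The function name and placeholder values are illustrative.

```python
def share_custom_model(ram_client, share_name, model_arn, recipient_account_id):
    """Share a custom model with another account in your organization.

    Args:
        ram_client: A boto3 AWS RAM client
        share_name: A name for the resource share
        model_arn: The ARN of the custom model to share
        recipient_account_id: The AWS account ID to share the model with

    Returns:
        str: The ARN of the resource share
    """
    response = ram_client.create_resource_share(
        name=share_name,
        resourceArns=[model_arn],
        principals=[recipient_account_id],
        allowExternalPrincipals=False,  # restrict sharing to your organization
    )
    return response["resourceShare"]["resourceShareArn"]
```

You could call this helper with a client created by `boto3.client("ram")`. Setting `allowExternalPrincipals` to `False` keeps the share within your organization, which matches the AWS Organizations prerequisite for model sharing.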

The account receiving a model can check whether a model has been shared by sending a [ListCustomModels](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_ListCustomModels.html) request with an [Amazon Bedrock control plane endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#br-cp). Models that have been shared will show up with a `shared` status of `true`.

------

After sharing the model, the recipient of the model must copy it into a Region in order to use it. For more information, see [Copy a customized or shared model to use in a Region](copy-model.md).

# View information about shared models
<a name="share-model-view"></a>

To learn how to view information about models that you've shared with other accounts or models that have been shared with you, choose the tab for your preferred method, and then follow the steps:

------
#### [ Console ]

**To view models that you've shared with other accounts**

1. Sign in to the AWS Management Console and open the AWS RAM console at [https://console.aws.amazon.com/ram/home](https://console.aws.amazon.com/ram/home).

1. Follow the steps at [Viewing resource shares you created in AWS Resource Access Manager](https://docs.aws.amazon.com/ram/latest/userguide/working-with-sharing-view-rs.html).

**To view models shared with you by other accounts**

1. Sign in to the AWS Management Console with an IAM identity that has permissions to use the Amazon Bedrock console. Then, open the Amazon Bedrock console at [https://console.aws.amazon.com/bedrock](https://console.aws.amazon.com/bedrock).

1. From the left navigation pane, choose **Custom models** under **Tune**.

1. Models that have been shared with you by other accounts will be shown in the following ways, depending on whether you've [copied them to a Region](copy-model.md):

   1. Shared models that you haven't copied to a Region yet are listed in the **Models shared with you** section.

   1. Shared models that have been copied to the current Region are listed in the **Models** section with a **Share status** of `Shared`.

------
#### [ API ]

To view information about models that you've shared, send a [GetResourceShares](https://docs.aws.amazon.com/ram/latest/APIReference/API_GetResourceShares.html) request with an [AWS Resource Access Manager endpoint](https://docs.aws.amazon.com/general/latest/gr/ram.html) and specify `SELF` in the `resourceOwner` field. You can use the optional fields to filter for specific models or resource shares.

To view information about models that have been shared with you, send a [ListCustomModels](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_ListCustomModels.html) request with an [Amazon Bedrock control plane endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#br-cp) and specify `false` with the `isOwned` filter.
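As a sketch with the SDK for Python (Boto3), the following hypothetical helper uses the `isOwned` filter to list models shared with your account. The function name is illustrative.

```python
def list_models_shared_with_me(bedrock_client):
    """List custom models that other accounts have shared with you.

    Args:
        bedrock_client: A boto3 Amazon Bedrock client

    Returns:
        list: Summaries of the models shared with your account
    """
    response = bedrock_client.list_custom_models(isOwned=False)
    return response["modelSummaries"]
```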

------

# Update access to a shared model
<a name="share-model-edit"></a>

To learn how to update access to models that you've shared with other accounts, choose the tab for your preferred method, and then follow the steps:

------
#### [ Console ]

**To update access to a model that you've shared**

1. Sign in to the AWS Management Console with an IAM identity that has permissions to use the Amazon Bedrock console. Then, open the Amazon Bedrock console at [https://console.aws.amazon.com/bedrock](https://console.aws.amazon.com/bedrock).

1. From the left navigation pane, choose **Custom models** under **Tune**.

1. In the **Models** section, select a model that you want to update access to.

1. In the **Model sharing details** section, do one of the following:
   + To share the model with another account, choose **Share** and then do the following:

     1. In the **Model sharing details** section, do the following:

        1. In the **Name for shared model** field, give the shared model a name.

        1. In the **Recipient account ID** field, specify the ID of the account that will receive the model.

        1. (Optional) To add tags, expand the **Tags** section. For more information, see [Tagging Amazon Bedrock resources](tagging.md).

     1. Choose **Share model**. After the recipient accepts the model in [Resource Access Manager](https://docs.aws.amazon.com/ram/latest/userguide/getting-started-shared.html), the model appears in their list of custom models.
   + To delete a model share and revoke access from the accounts specified in that model share, do the following:

     1. Select a model share and choose **Revoke shared model**.

     1. Review the message, type **revoke** in the text box, and choose **Revoke shared model** to confirm revoking of access.

------
#### [ API ]

To share a model with more accounts, do one of the following:
+ Send an [AssociateResourceShare](https://docs.aws.amazon.com/ram/latest/APIReference/API_AssociateResourceShare.html) request with an [AWS Resource Access Manager endpoint](https://docs.aws.amazon.com/general/latest/gr/ram.html). Specify the Amazon Resource Name (ARN) of the resource share in the `resourceShareArn` field and append accounts that you want to share the model with in the list of `principals`.
**Note**  
You can also share more models with the same account or accounts by appending model ARNs to the list of `resourceArns`.
+ Create a new resource share by following the steps in the **API** tab at [Share a model with another account](share-model-share.md).
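As a sketch of the first option with the SDK for Python (Boto3), the following hypothetical helper appends one account to an existing resource share using an AWS RAM client. The function name is illustrative.

```python
def add_account_to_model_share(ram_client, resource_share_arn, account_id):
    """Grant an additional account access to an existing model share.

    Args:
        ram_client: A boto3 AWS RAM client
        resource_share_arn: The ARN of the existing resource share
        account_id: The AWS account ID to add to the share

    Returns:
        list: The resulting resource share associations
    """
    response = ram_client.associate_resource_share(
        resourceShareArn=resource_share_arn,
        principals=[account_id],
    )
    return response["resourceShareAssociations"]
```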

------

# Revoke access to a shared model
<a name="share-model-revoke"></a>

To learn how to revoke access to a model that you've shared, choose the tab for your preferred method, and then follow the steps:

------
#### [ Console ]

1. Sign in to the AWS Management Console with an IAM identity that has permissions to use the Amazon Bedrock console. Then, open the Amazon Bedrock console at [https://console.aws.amazon.com/bedrock](https://console.aws.amazon.com/bedrock).

1. From the left navigation pane, choose **Custom models** under **Tune**.

1. In the **Models** table, select the model that you want to revoke access to.

1. In the **Model sharing details** section, do the following to delete a model share and revoke access from the accounts specified in that model share:

   1. Select a model share and choose **Revoke shared model**.

   1. Review the message, type **revoke** in the text box, and choose **Revoke shared model** to confirm revoking of access.

------
#### [ API ]

To revoke access to a model from an account, send a [DisassociateResourceShare](https://docs.aws.amazon.com/ram/latest/APIReference/API_DisassociateResourceShare.html) request with an [AWS Resource Access Manager endpoint](https://docs.aws.amazon.com/general/latest/gr/ram.html). Specify the ARN of the share in the `resourceShareArn` field and the account whose access you want to revoke in the list of `principals`.

To completely delete a resource share, send a [DeleteResourceShare](https://docs.aws.amazon.com/ram/latest/APIReference/API_DeleteResourceShare.html) request with an [AWS Resource Access Manager endpoint](https://docs.aws.amazon.com/general/latest/gr/ram.html). Specify the ARN of the share in the `resourceShareArn` field.
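As a sketch of both operations with the SDK for Python (Boto3), the following hypothetical helpers revoke one account's access and delete a share entirely using an AWS RAM client. The function names are illustrative.

```python
def revoke_account_access(ram_client, resource_share_arn, account_id):
    """Remove one account's access to a shared model.

    Args:
        ram_client: A boto3 AWS RAM client
        resource_share_arn: The ARN of the resource share
        account_id: The AWS account ID whose access to revoke
    """
    ram_client.disassociate_resource_share(
        resourceShareArn=resource_share_arn,
        principals=[account_id],
    )

def delete_model_share(ram_client, resource_share_arn):
    """Delete a resource share entirely, revoking access for all accounts.

    Args:
        ram_client: A boto3 AWS RAM client
        resource_share_arn: The ARN of the resource share to delete
    """
    ram_client.delete_resource_share(resourceShareArn=resource_share_arn)
```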

------

# Copy a customized or shared model to use in a Region
<a name="copy-model"></a>

By default, models are only available in the Region and account in which they were created. Amazon Bedrock provides you the ability to copy some types of models to other Regions. You can copy the following types of models to other Regions:
+ [Custom models](custom-models.md)
+ [Shared models](share-model.md)

You can copy models to be used in supported Regions. If a model was shared with you from another account, you must first copy it to a Region to be able to use it. To learn about sharing models to and receiving models from other accounts, see [Share a model for another account to use](share-model.md).

**Topics**
+ [Supported Regions and models for model copy](copy-model-support.md)
+ [Fulfill prerequisites to copy models](copy-model-prereq.md)
+ [Copy a model to a Region](copy-model-copy.md)
+ [View information about model copy jobs](copy-model-job-view.md)

# Supported Regions and models for model copy
<a name="copy-model-support"></a>

The following list provides links to general information about Regional and model support in Amazon Bedrock:
+ For a list of Region codes and endpoints supported in Amazon Bedrock, see [Amazon Bedrock endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#bedrock_region).
+ For a list of Amazon Bedrock model IDs to use when calling Amazon Bedrock API operations, see [Supported foundation models in Amazon Bedrock](models-supported.md).

The following table shows the models whose customized version you can copy and the Regions to which you can copy them:


| Provider | Model | Model ID | Single-region model support | 
| --- | --- | --- | --- | 
| Amazon | Nova Canvas | amazon.nova-canvas-v1:0 |  ap-northeast-2 eu-west-1 us-east-1  | 
| Amazon | Nova Lite | amazon.nova-lite-v1:0 |  ap-northeast-1 ap-northeast-2 ap-south-1 ap-southeast-1 ap-southeast-2 eu-central-1 eu-north-1 eu-south-1 eu-south-2 eu-west-1 eu-west-3 us-east-1 us-east-2 us-gov-west-1 us-west-2  | 
| Amazon | Nova Micro | amazon.nova-micro-v1:0 |  ap-northeast-1 ap-northeast-2 ap-south-1 ap-southeast-1 ap-southeast-2 eu-central-1 eu-north-1 eu-south-1 eu-south-2 eu-west-1 eu-west-3 us-east-1 us-east-2 us-gov-west-1 us-west-2  | 
| Amazon | Nova Pro | amazon.nova-pro-v1:0 |  ap-northeast-1 ap-northeast-2 ap-south-1 ap-southeast-1 ap-southeast-2 eu-central-1 eu-north-1 eu-south-1 eu-south-2 eu-west-1 eu-west-3 us-east-1 us-east-2 us-gov-west-1 us-west-2  | 
| Amazon | Titan Multimodal Embeddings G1 | amazon.titan-embed-image-v1 |  ap-south-1 ap-southeast-2 ca-central-1 eu-west-1 eu-west-2 eu-west-3 sa-east-1 us-east-1 us-west-2  | 
| Anthropic | Claude 3 Haiku | anthropic.claude-3-haiku-20240307-v1:0 |  ap-south-1 ap-southeast-2 eu-west-1 eu-west-2 us-east-1 us-west-2  | 
| Meta | Llama 3.1 405B Instruct | meta.llama3-1-405b-instruct-v1:0 |  us-east-1 us-east-2 us-west-2  | 
| Meta | Llama 3.1 70B Instruct | meta.llama3-1-70b-instruct-v1:0 |  us-east-1 us-east-2 us-west-2  | 
| Meta | Llama 3.1 8B Instruct | meta.llama3-1-8b-instruct-v1:0 |  us-east-1 us-east-2 us-west-2  | 
| Meta | Llama 3.2 11B Instruct | meta.llama3-2-11b-instruct-v1:0 |  us-east-1 us-east-2 us-west-2  | 
| Meta | Llama 3.2 1B Instruct | meta.llama3-2-1b-instruct-v1:0 |  eu-central-1 eu-west-1 eu-west-3 us-east-1 us-east-2 us-west-2  | 
| Meta | Llama 3.2 3B Instruct | meta.llama3-2-3b-instruct-v1:0 |  us-east-1 us-east-2 us-west-2  | 
| Meta | Llama 3.2 90B Instruct | meta.llama3-2-90b-instruct-v1:0 |  us-east-1 us-east-2 us-west-2  | 

# Fulfill prerequisites to copy models
<a name="copy-model-prereq"></a>

To allow a role to copy a model, you might have to set up permissions, depending on the role's permissions and the model's configuration. Review the permissions in the following list and the circumstances in which you must configure them:

1. If your role doesn't have the [AmazonBedrockFullAccess](security-iam-awsmanpol.md#security-iam-awsmanpol-AmazonBedrockFullAccess) policy attached, attach the following identity-based policy to the role to allow the minimal permissions to copy models and to track copy jobs.

------
#### [ JSON ]

****  

   ```
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Sid": "CopyModels",
               "Effect": "Allow",
               "Action": [
                   "bedrock:CreateModelCopyJob",
                   "bedrock:GetModelCopyJob",
                   "bedrock:ListModelCopyJobs"
               ],
               "Resource": [
                   "model-arn"
               ],
               "Condition": {
                   "StringEquals": {
                       "aws:RequestedRegion": [
                           "us-east-1"
                       ]
                   }
               }
           }
       ]
   }
   ```

------

   Add ARNs of models to the `Resource` list. You can restrict the Regions that the model is copied to by adding Regions to the list in the [`aws:RequestedRegion` condition key.](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-requestedregion)

1. (Optional) If the model to be copied is encrypted with a KMS key, attach a [key policy to the KMS key that encrypted the model](encryption-custom-job.md#encryption-key-policy-decrypt) to allow a role to decrypt it. Specify the account that the model will be shared with in the `Principal` field.

1. (Optional) If you plan to encrypt the model copy with a KMS key, attach a [key policy to the KMS key that will be used to encrypt the model](encryption-custom-job.md#encryption-key-policy-encrypt) to allow a role to encrypt the model with the key. Specify the role in the `Principal` field.

# Copy a model to a Region
<a name="copy-model-copy"></a>

After you [fulfill the prerequisites](copy-model-prereq.md), you can copy a model. You can copy a model that you own into a different Region, or a model that has been shared with you into a Region so that you can use it. Choose the tab for your preferred method, and then follow the steps:

------
#### [ Console ]

1. Sign in to the AWS Management Console with an IAM identity that has permissions to use the Amazon Bedrock console. Then, open the Amazon Bedrock console at [https://console.aws.amazon.com/bedrock](https://console.aws.amazon.com/bedrock).

1. From the left navigation pane, choose **Custom models** under **Tune**.

1. Depending on your use case, do one of the following:
   + To copy a model that you own into a different Region, select the button next to the model that you want to copy in the **Models** section. Then, choose the three dots (![\[Vertical ellipsis icon representing a menu or more options.\]](http://docs.aws.amazon.com/bedrock/latest/userguide/images/icons/vertical-ellipsis.png)) and select **Copy**.
   + To copy a model that was shared with you into a Region, select the button next to the model in the **Models shared with you** section. Then, choose **Copy**.

1. In the **Copy details** section, do the following:

   1. In the **Model name** field, give the model copy a name.

   1. Select a Region from the dropdown menu in the **Destination Region** field.

   1. (Optional) To add tags, expand the **Tags** section. For more information, see [Tagging Amazon Bedrock resources](tagging.md).

1. In the **Copy job name** section, give the job a **Name**.

1. (Optional) To encrypt the model copy, select an AWS KMS key that you have access to. For more information, see [Permissions and key policies for custom and copied models](encryption-custom-job.md#encryption-cm-statements).

1. Choose **Copy model**.

1. The model copy job appears in the **Jobs** tab. When the job is complete, the model's status becomes **Complete** and it appears in the **Models** section in the **Models** tab in the Region that you copied the model to.

------
#### [ API ]

To copy a model to another Region, send a [CreateModelCopyJob](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_CreateModelCopyJob.html) request with an [Amazon Bedrock control plane endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#br-cp) in the Region in which you want to use the model.

The following fields are required:


****  

| Field | Brief description | 
| --- | --- | 
| sourceModelArn | The Amazon Resource Name (ARN) of the model to copy. | 
| targetModelName | A name for the model copy. | 

The following fields are optional:


| Field | Use-case | 
| --- | --- | 
| clientToken | To ensure the API request completes only once. For more information, see [Ensuring idempotency](https://docs.aws.amazon.com/ec2/latest/devguide/ec2-api-idempotency.html). | 
| modelKmsKeyId | To provide a KMS key to encrypt the model copy. For more information, see [Permissions and key policies for custom and copied models](encryption-custom-job.md#encryption-cm-statements). | 
| targetModelTags | To provide tags for the model copy. For more information, see [Tagging Amazon Bedrock resources](tagging.md). | 

The response includes a `jobArn` field, which is the ARN of the model copy job.
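As a minimal sketch with the AWS SDK for Python (Boto3): the `build_copy_request` helper and the example names below are illustrative, not part of the API. Create the client in the destination Region, as described above.

```
def build_copy_request(source_model_arn, target_model_name, kms_key_id=None):
    """Assemble the request parameters for CreateModelCopyJob."""
    params = {
        "sourceModelArn": source_model_arn,
        "targetModelName": target_model_name,
    }
    if kms_key_id is not None:
        # Optional: encrypt the model copy with your AWS KMS key.
        params["modelKmsKeyId"] = kms_key_id
    return params


def copy_model(source_model_arn, target_model_name, destination_region):
    # Requires credentials with bedrock:CreateModelCopyJob permissions.
    import boto3
    bedrock = boto3.client("bedrock", region_name=destination_region)
    response = bedrock.create_model_copy_job(
        **build_copy_request(source_model_arn, target_model_name)
    )
    return response["jobArn"]
```

For example, `copy_model("arn:aws:bedrock:us-east-1:111122223333:custom-model/my-model", "MyModelCopy", "us-west-2")` returns the ARN of the copy job.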

------

# View information about model copy jobs
<a name="copy-model-job-view"></a>

To learn how to view information about model copy jobs that you've submitted, choose the tab for your preferred method, and then follow the steps:

------
#### [ Console ]

1. Sign in to the AWS Management Console with an IAM identity that has permissions to use the Amazon Bedrock console. Then, open the Amazon Bedrock console at [https://console.aws.amazon.com/bedrock](https://console.aws.amazon.com/bedrock).

1. From the left navigation pane, choose **Custom models** under **Tune**.

1. Select the **Jobs** tab.

1. If a model is still being copied, the **Status** is **Copying**. If it's finished and ready for use, the **Status** is **Completed**.

1. When the job is complete, the model appears in the **Models** section in the **Models** tab in the Region that you copied the model to.

------
#### [ API ]

To get information about a model copy job, send a [GetModelCopyJob](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_GetModelCopyJob.html) request with an [Amazon Bedrock control plane endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#br-cp). Include the `jobArn` in the request.

To list the model copy jobs that you've submitted, send a [ListModelCopyJobs](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_ListModelCopyJobs.html) request with an [Amazon Bedrock control plane endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#br-cp). You can use the optional request parameters to filter which jobs are returned.

The response returns a list of job summaries, each of which contains information about a model copy job that you've submitted.

When the job is complete, you should be able to see the copied model by sending a [ListCustomModels](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_ListCustomModels.html) request with an [Amazon Bedrock control plane endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#br-cp), specifying the Region that you copied the model to.
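The steps above can be sketched with Boto3. The polling loop and grouping helper are illustrative, not part of the API; a copy job's status is `InProgress`, `Completed`, or `Failed`.

```
import time


def count_jobs_by_status(job_summaries):
    """Group the modelCopyJobSummaries from ListModelCopyJobs by status."""
    counts = {}
    for job in job_summaries:
        counts[job["status"]] = counts.get(job["status"], 0) + 1
    return counts


def wait_for_copy_job(job_arn, region, poll_seconds=30):
    """Poll GetModelCopyJob until the job finishes; return the final status."""
    # Requires credentials with bedrock:GetModelCopyJob permissions.
    import boto3
    bedrock = boto3.client("bedrock", region_name=region)
    while True:
        status = bedrock.get_model_copy_job(jobArn=job_arn)["status"]
        if status != "InProgress":
            return status
        time.sleep(poll_seconds)
```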

------

# Delete a custom model
<a name="model-customization-delete"></a>

To delete a custom model, choose the tab for your preferred method, and then follow the steps:

**Before you begin**  
If you're deleting a distilled model, you must first delete any Provisioned Throughput or custom model deployment associated with the model. 

------
#### [ Console ]

1. Sign in to the AWS Management Console with an IAM identity that has permissions to use the Amazon Bedrock console. Then, open the Amazon Bedrock console at [https://console.aws.amazon.com/bedrock](https://console.aws.amazon.com/bedrock).

1. Select **Custom models** from the left navigation pane.

1. From the **Models** section, select a custom model.

1. Choose the options icon (![\[Vertical ellipsis icon representing a menu or more options.\]](http://docs.aws.amazon.com/bedrock/latest/userguide/images/icons/vertical-ellipsis.png)) and select **Delete**.

1. Follow the instructions to confirm deletion. Your custom model is then deleted.

------
#### [ API ]

To delete a custom model, send a [DeleteCustomModel](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_DeleteCustomModel.html) request with an [Amazon Bedrock control plane endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#br-cp). Specify either the name of the custom model or its ARN as the `modelIdentifier`.
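As a minimal Boto3 sketch (the helper names are illustrative):

```
def is_model_arn(model_identifier):
    """DeleteCustomModel accepts either the custom model name or its ARN."""
    return model_identifier.startswith("arn:aws:bedrock:")


def delete_custom_model(model_identifier, region):
    # Requires credentials with bedrock:DeleteCustomModel permissions.
    import boto3
    bedrock = boto3.client("bedrock", region_name=region)
    # Works with a name such as "MyCustomModel" or a full model ARN.
    bedrock.delete_custom_model(modelIdentifier=model_identifier)
```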

------

# Code samples for model customization
<a name="model-customization-code-samples"></a>

The following code samples show how to prepare a basic dataset, set up permissions, create a custom model, view the output files, purchase throughput for the model, and run inference on the model. You can modify these code snippets for your specific use case.

1. Prepare the training dataset.

   1. Create a training dataset file that contains only the following line and name it *train.jsonl*.

      ```
      {"prompt": "what is AWS", "completion": "it's Amazon Web Services"}
      ```

   1. Create an S3 bucket for your training data and another one for your output data (the names must be unique).

   1. Upload *train.jsonl* into the training data bucket.

1. Create a policy that grants access to your training and output buckets, and attach it to an IAM role with an Amazon Bedrock trust relationship. Choose the tab for your preferred method, and then follow the steps:

------
#### [ Console ]

   1. Create the S3 policy.

      1. Navigate to the IAM console at [https://console.aws.amazon.com/iam](https://console.aws.amazon.com/iam) and choose **Policies** from the left navigation pane.

      1. Select **Create policy** and then choose **JSON** to open the **Policy editor**.

      1. Paste the following policy, replacing *${training-bucket}* and *${output-bucket}* with your bucket names, and then select **Next**.

------
#### [ JSON ]

         ```
         {
             "Version": "2012-10-17",
             "Statement": [
                 {
                     "Effect": "Allow",
                     "Action": [
                         "s3:GetObject",
                         "s3:ListBucket"
                     ],
                     "Resource": [
                         "arn:aws:s3:::${training-bucket}",
                         "arn:aws:s3:::${training-bucket}/*"
                     ]
                 },
                 {
                     "Effect": "Allow",
                     "Action": [
                         "s3:GetObject",
                         "s3:PutObject",
                         "s3:ListBucket"
                     ],
                     "Resource": [
                         "arn:aws:s3:::${output-bucket}",
                         "arn:aws:s3:::${output-bucket}/*"
                     ]
                 }
             ]
         }
         ```

------

      1. Name the policy *MyFineTuningDataAccess* and select **Create policy**.

   1. Create an IAM role and attach the policy.

      1. From the left navigation pane, choose **Roles** and then select **Create role**.

      1. Select **Custom trust policy**, paste the following policy, and select **Next**.

------
#### [ JSON ]

         ```
         {
             "Version": "2012-10-17",
             "Statement": [
                 {
                     "Effect": "Allow",
                     "Principal": {
                         "Service": "bedrock.amazonaws.com"
                     },
                     "Action": "sts:AssumeRole"
                 }
             ]
         }
         ```

------

      1. Search for the *MyFineTuningDataAccess* policy you created, select the checkbox, and choose **Next**.

      1. Name the role *MyCustomizationRole* and select **Create role**.

------
#### [ CLI ]

   1. Create a file called *BedrockTrust.json* and paste the following policy into it.

------
#### [ JSON ]

      ```
      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Effect": "Allow",
                  "Principal": {
                      "Service": "bedrock.amazonaws.com"
                  },
                  "Action": "sts:AssumeRole"
              }
          ]
      }
      ```

------

   1. Create another file called *MyFineTuningDataAccess.json* and paste the following policy into it, replacing *${training-bucket}* and *${output-bucket}* with your bucket names.

------
#### [ JSON ]

      ```
      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Effect": "Allow",
                  "Action": [
                      "s3:GetObject",
                      "s3:ListBucket"
                  ],
                  "Resource": [
                      "arn:aws:s3:::${training-bucket}",
                      "arn:aws:s3:::${training-bucket}/*"
                  ]
              },
              {
                  "Effect": "Allow",
                  "Action": [
                      "s3:GetObject",
                      "s3:PutObject",
                      "s3:ListBucket"
                  ],
                  "Resource": [
                      "arn:aws:s3:::${output-bucket}",
                      "arn:aws:s3:::${output-bucket}/*"
                  ]
              }
          ]
      }
      ```

------

   1. In a terminal, navigate to the folder containing the policies you created.

   1. Make a [CreateRole](https://docs.aws.amazon.com/IAM/latest/APIReference/API_CreateRole.html) request to create an IAM role called *MyCustomizationRole* and attach the *BedrockTrust.json* trust policy that you created.

      ```
      aws iam create-role \
          --role-name MyCustomizationRole \
          --assume-role-policy-document file://BedrockTrust.json
      ```

   1. Make a [CreatePolicy](https://docs.aws.amazon.com/IAM/latest/APIReference/API_CreatePolicy.html) request to create the S3 data access policy with the *MyFineTuningDataAccess.json* file you created. The response returns an `Arn` for the policy.

      ```
      aws iam create-policy \
          --policy-name MyFineTuningDataAccess \
          --policy-document file://MyFineTuningDataAccess.json
      ```

   1. Make an [AttachRolePolicy](https://docs.aws.amazon.com/IAM/latest/APIReference/API_AttachRolePolicy.html) request to attach the S3 data access policy to your role, replacing the `policy-arn` with the ARN in the response from the previous step:

      ```
      aws iam attach-role-policy \
          --role-name MyCustomizationRole \
          --policy-arn ${policy-arn}
      ```

------
#### [ Python ]

   1. Run the following code to make a [CreateRole](https://docs.aws.amazon.com/IAM/latest/APIReference/API_CreateRole.html) request to create an IAM role called *MyCustomizationRole* and to make a [CreatePolicy](https://docs.aws.amazon.com/IAM/latest/APIReference/API_CreatePolicy.html) request to create an S3 data access policy called *MyFineTuningDataAccess*. For the S3 data access policy, replace *${training-bucket}* and *${output-bucket}* with your S3 bucket names.

      ```
      import boto3
      import json
      
      iam = boto3.client("iam")
      
      iam.create_role(
          RoleName="MyCustomizationRole",
          AssumeRolePolicyDocument=json.dumps({
              "Version": "2012-10-17",
              "Statement": [
                  {
                      "Effect": "Allow",
                      "Principal": {
                          "Service": "bedrock.amazonaws.com"
                      },
                      "Action": "sts:AssumeRole"
                  }
              ]
          })
      )
      
      iam.create_policy(
          PolicyName="MyFineTuningDataAccess",
          PolicyDocument=json.dumps({
              "Version": "2012-10-17",
              "Statement": [
                  {
                      "Effect": "Allow",
                      "Action": [
                          "s3:GetObject",
                          "s3:ListBucket"
                      ],
                      "Resource": [
                          "arn:aws:s3:::${training-bucket}",
                          "arn:aws:s3:::${training-bucket}/*"
                      ]
                  },
                  {
                      "Effect": "Allow",
                      "Action": [
                          "s3:GetObject",
                          "s3:PutObject",
                          "s3:ListBucket"
                      ],
                      "Resource": [
                          "arn:aws:s3:::${output-bucket}",
                          "arn:aws:s3:::${output-bucket}/*"
                      ]
                  }
              ]
          })
      )
      ```

   1. An `Arn` is returned in the response. Run the following code snippet to make an [AttachRolePolicy](https://docs.aws.amazon.com/IAM/latest/APIReference/API_AttachRolePolicy.html) request, replacing *${policy-arn}* with the returned `Arn`.

      ```
      iam.attach_role_policy(
          RoleName="MyCustomizationRole",
          PolicyArn="${policy-arn}"
      )
      ```

------

1. Select a language to see code samples to call the model customization API operations.

------
#### [ CLI ]

First, create a text file named *FineTuningData.json*. Copy the JSON code below into the text file, replacing *${training-bucket}* and *${output-bucket}* with your S3 bucket names.

```
{
    "trainingDataConfig": {
        "s3Uri": "s3://${training-bucket}/train.jsonl"
    },
    "outputDataConfig": {
        "s3Uri": "s3://${output-bucket}"
    }
}
```

To submit a model customization job, navigate to the folder containing *FineTuningData.json* in a terminal and run the following command in the command line, replacing *${your-customization-role-arn}* with the ARN of the model customization role that you set up.

```
aws bedrock create-model-customization-job \
    --customization-type FINE_TUNING \
    --base-model-identifier arn:aws:bedrock:us-east-1::foundation-model/amazon.titan-text-express-v1 \
    --role-arn ${your-customization-role-arn} \
    --job-name MyFineTuningJob \
    --custom-model-name MyCustomModel \
    --hyper-parameters epochCount=1,batchSize=1,learningRate=.0001,learningRateWarmupSteps=0 \
    --cli-input-json file://FineTuningData.json
```

The response returns a *jobArn*. Allow the job some time to complete. You can check its status with the following command.

```
aws bedrock get-model-customization-job \
    --job-identifier ${jobArn}
```

When the `status` is `COMPLETE`, you can see the `trainingMetrics` in the response. You can download the artifacts to the current folder by running the following command, replacing *${output-bucket}* with your output bucket name and *jobId* with the ID of the customization job (the sequence following the last slash in the `jobArn`).

```
aws s3 cp s3://${output-bucket}/model-customization-job-jobId . --recursive
```

Purchase a no-commitment Provisioned Throughput for your custom model with the following command.

**Note**  
You will be charged hourly for this purchase. Use the console to see price estimates for different options.

```
aws bedrock create-provisioned-model-throughput \
    --model-id MyCustomModel \
    --provisioned-model-name MyProvisionedCustomModel \
    --model-units 1
```

The response returns a `provisionedModelArn`. Allow the Provisioned Throughput some time to be created. To check its status, provide the name or ARN of the provisioned model as the `provisioned-model-id` in the following command.

```
aws bedrock get-provisioned-model-throughput \
    --provisioned-model-id ${provisioned-model-arn}
```

When the `status` is `InService`, you can run inference with your custom model with the following command. You must provide the ARN of the provisioned model as the `model-id`. The output is written to a file named *output.txt* in your current folder.

```
aws bedrock-runtime invoke-model \
    --model-id ${provisioned-model-arn} \
    --body '{"inputText": "What is AWS?", "textGenerationConfig": {"temperature": 0.5}}' \
    --cli-binary-format raw-in-base64-out \
    output.txt
```

------
#### [ Python ]

Run the following code snippet to submit a fine-tuning job. Replace *${your-customization-role-arn}* with the ARN of the *MyCustomizationRole* that you set up and replace *${training-bucket}* and *${output-bucket}* with your S3 bucket names.

```
import boto3

bedrock = boto3.client(service_name='bedrock')
    
# Set parameters
customizationType = "FINE_TUNING"
baseModelIdentifier = "arn:aws:bedrock:us-east-1::foundation-model/amazon.titan-text-express-v1"
roleArn = "${your-customization-role-arn}"
jobName = "MyFineTuningJob"
customModelName = "MyCustomModel"
hyperParameters = {
        "epochCount": "1",
        "batchSize": "1",
        "learningRate": ".0001",
        "learningRateWarmupSteps": "0"
    }
trainingDataConfig = {"s3Uri": "s3://${training-bucket}/train.jsonl"}
outputDataConfig = {"s3Uri": "s3://${output-bucket}/myOutputData"}

# Create job
response_ft = bedrock.create_model_customization_job(
    jobName=jobName, 
    customModelName=customModelName,
    roleArn=roleArn,
    baseModelIdentifier=baseModelIdentifier,
    hyperParameters=hyperParameters,
    trainingDataConfig=trainingDataConfig,
    outputDataConfig=outputDataConfig
)

jobArn = response_ft.get('jobArn')
```

The response returns a *jobArn*. Allow the job some time to complete. You can check its status with the following command.

```
bedrock.get_model_customization_job(jobIdentifier=jobArn).get('status')
```

When the `status` is `COMPLETE`, you can see the `trainingMetrics` in the [GetModelCustomizationJob](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_GetModelCustomizationJob.html) response. You can also follow the steps at [Downloading objects](https://docs.aws.amazon.com/AmazonS3/latest/userguide/download-objects.html) to download the metrics.

Purchase a no-commitment Provisioned Throughput for your custom model with the following command.

```
response_pt = bedrock.create_provisioned_model_throughput(
    modelId="MyCustomModel",
    provisionedModelName="MyProvisionedCustomModel",
    modelUnits=1
)

provisionedModelArn = response_pt.get('provisionedModelArn')
```

The response returns a `provisionedModelArn`. Allow the Provisioned Throughput some time to be created. To check its status, provide the name or ARN of the provisioned model as the `provisionedModelId` in the following command.

```
bedrock.get_provisioned_model_throughput(provisionedModelId=provisionedModelArn)
```

When the `status` is `InService`, you can run inference with your custom model with the following command. You must provide the ARN of the provisioned model as the `modelId`. 

```
import json
import logging
import boto3

from botocore.exceptions import ClientError


class ImageError(Exception):
    "Custom exception for errors returned by the model"

    def __init__(self, message):
        self.message = message


logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def generate_text(model_id, body):
    """
    Generate text using your provisioned custom model.
    Args:
        model_id (str): The model ID to use.
        body (str) : The request body to use.
    Returns:
        response (json): The response from the model.
    """

    logger.info(
        "Generating text with your provisioned custom model %s", model_id)

    brt = boto3.client(service_name='bedrock-runtime')

    accept = "application/json"
    content_type = "application/json"

    response = brt.invoke_model(
        body=body, modelId=model_id, accept=accept, contentType=content_type
    )
    response_body = json.loads(response.get("body").read())

    finish_reason = response_body.get("error")

    if finish_reason is not None:
        raise ImageError(f"Text generation error. Error is {finish_reason}")

    logger.info(
        "Successfully generated text with provisioned custom model %s", model_id)

    return response_body


def main():
    """
    Entrypoint for example.
    """
    try:
        logging.basicConfig(level=logging.INFO,
                            format="%(levelname)s: %(message)s")

        model_id = provisionedModelArn

        body = json.dumps({
            "inputText": "what is AWS?"
        })

        response_body = generate_text(model_id, body)
        print(f"Input token count: {response_body['inputTextTokenCount']}")

        for result in response_body['results']:
            print(f"Token count: {result['tokenCount']}")
            print(f"Output text: {result['outputText']}")
            print(f"Completion reason: {result['completionReason']}")

    except ClientError as err:
        message = err.response["Error"]["Message"]
        logger.error("A client error occurred: %s", message)
        print("A client error occurred: " +
              format(message))
    except ImageError as err:
        logger.error(err.message)
        print(err.message)

    else:
        print(
            f"Finished generating text with your provisioned custom model {model_id}.")


if __name__ == "__main__":
    main()
```

------

# Troubleshooting model customization issues
<a name="fine-tuning-troubleshooting"></a>

This section summarizes errors that you might encounter and what to check if you do.

## Permissions issues
<a name="w2aac15c31c44b5b1"></a>

If you encounter an issue with permissions to access an Amazon S3 bucket, check that the following are true:

1. If the Amazon S3 bucket uses a customer managed AWS Key Management Service (AWS KMS) key for server-side encryption, ensure that the IAM role passed to Amazon Bedrock has `kms:Decrypt` permissions for the KMS key. For example, see [Allow a user to encrypt and decrypt with any AWS KMS key in a specific AWS account](https://docs.aws.amazon.com/kms/latest/developerguide/customer-managed-policies.html#iam-policy-example-encrypt-decrypt-one-account).

1. The Amazon S3 bucket is in the same Region as the Amazon Bedrock model customization job.

1. The IAM role trust policy includes the Amazon Bedrock service principal (`bedrock.amazonaws.com`).

The following messages indicate issues with permissions to access training or validation data in an Amazon S3 bucket:

```
Could not validate GetObject permissions to access Amazon S3 bucket: training-data-bucket at key train.jsonl
Could not validate GetObject permissions to access Amazon S3 bucket: validation-data-bucket at key validation.jsonl
```

If you encounter one of the above errors, check that the IAM role passed to the service has `s3:GetObject` and `s3:ListBucket` permissions for the training and validation dataset Amazon S3 URIs.
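To reproduce the check locally, you can attempt the same `GetObject` call yourself with credentials equivalent to the role's. This is an illustrative sketch; the helper names are not part of any AWS API.

```
def parse_s3_uri(uri):
    """Split an s3://bucket/key URI into its bucket and key parts."""
    if not uri.startswith("s3://"):
        raise ValueError("not an S3 URI: " + uri)
    bucket, _, key = uri[len("s3://"):].partition("/")
    return bucket, key


def check_get_object(uri, region):
    """Return "ok" if GetObject succeeds, or the S3 error code (for example, "AccessDenied")."""
    import boto3
    from botocore.exceptions import ClientError
    bucket, key = parse_s3_uri(uri)
    try:
        boto3.client("s3", region_name=region).get_object(Bucket=bucket, Key=key)
        return "ok"
    except ClientError as err:
        return err.response["Error"]["Code"]
```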

The following message indicates issues with permissions to write the output data in an Amazon S3 bucket:

```
Amazon S3 perms missing (PutObject): Could not validate PutObject permissions to access S3 bucket: bedrock-output-bucket at key output/.write_access_check_file.tmp
```

If you encounter the above error, check that the IAM role passed to the service has `s3:PutObject` permissions for the output data Amazon S3 URI.

## Data issues
<a name="w2aac15c31c44b5b3"></a>

The following errors are related to issues with the training, validation, or output data files:

**Invalid file format**

```
Unable to parse Amazon S3 file: fileName.jsonl. Data files must conform to JSONL format.
```

If you encounter the above error, check that the following are true:

1. Each line is a valid JSON object.

1. Each JSON object has two keys, an *input* and an *output*, and each value is a string. For example:

   ```
   {
       "input": "this is my input",
       "output": "this is my output"
   }
   ```

1. There are no additional new lines or empty lines.
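You can script these checks before submitting a job. The following sketch validates one line at a time; the required key names are a parameter because they depend on your dataset format (for example, the earlier sample dataset uses *prompt* and *completion*).

```
import json


def validate_jsonl_line(line, required_keys=("input", "output")):
    """Return a list of problems found in one line of a JSONL dataset."""
    if not line.strip():
        return ["empty line"]
    try:
        record = json.loads(line)
    except json.JSONDecodeError:
        return ["not valid JSON"]
    problems = []
    for key in required_keys:
        if key not in record:
            problems.append("missing key: " + key)
        elif not isinstance(record[key], str):
            problems.append("value for " + key + " is not a string")
    return problems
```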

**Character quota exceeded**

```
Input size exceeded in file fileName.jsonl for record starting with...
```

If you encounter an error beginning with the text above, ensure that the number of characters conforms to the character quota in [Prepare data for fine-tuning your models](model-customization-prepare.md).

**Token count exceeded**

```
Maximum input token count 4097 exceeds limit of 4096
Maximum output token count 4097 exceeds limit of 4096
Max sum of input and output token length 4097 exceeds total limit of 4096
```

If you encounter an error similar to the preceding examples, make sure that the number of tokens conforms to the token quota in [Prepare data for fine-tuning your models](model-customization-prepare.md).

## Third-party license terms and policy issues
<a name="w2aac15c31c44b5b5"></a>

The following errors are related to a third party's license terms and policies:

**Fine-tuning materials inconsistent with third-party's license terms**

```
Automated tests flagged this fine-tuning job as including materials that are potentially inconsistent with Anthropic's third-party license terms. Please contact support.
```

If you encounter the above error, ensure your training dataset does not contain content that is inconsistent with Anthropic's usage policies. If the issue persists, contact Support.

## Internal error
<a name="w2aac15c31c44b5b7"></a>

```
Encountered an unexpected error when processing the request, please try again
```

If you encounter the above error, there might be an issue with the service. Try the job again. If the issue persists, contact Support.