
Modify a guardrail


You can edit a guardrail from the AWS Management Console or through the API by following these steps:

Console
To edit a guardrail
  1. Sign in to the AWS Management Console using an IAM role with Amazon Bedrock permissions, and open the Amazon Bedrock console at https://console.aws.amazon.com/bedrock/.

  2. Choose Guardrails from the left navigation pane. Then, select a guardrail in the Guardrails section.

  3. To edit the name, description, tags, or model encryption settings for the guardrail, select Edit in the Guardrail overview section.

  4. To edit specific configurations for the guardrail, select Working draft in the Working draft section.

  5. Select Edit for the sections containing the settings that you want to change. Make any edits that are needed to the messaging for denied prompts or responses.

  6. To edit filters for harmful categories, select Configure harmful categories filter. Select Text and/or Image to filter text or image content in prompts or responses to the model. For each category, select None, Low, Medium, or High as the level of filtration to apply; you can apply different filter levels to prompts and to responses. The harmful categories also include a filter for prompt attacks, for which you configure how strictly to screen the prompts that users provide to the model.

  7. Select Edit for any sections containing the settings that you want to change.

  8. Select Save and exit to implement the edits on your guardrail.
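The filter strengths chosen in step 6 correspond to the `filtersConfig` values used by the UpdateGuardrail API (shown in the API tab below). As a rough sketch, here is how console choices might be translated into that structure; the helper name and the example categories are illustrative, not part of the console or the API:

```python
# Console filter labels mapped to the API strength values.
STRENGTHS = {"None": "NONE", "Low": "LOW", "Medium": "MEDIUM", "High": "HIGH"}

def build_content_policy(choices):
    """Hypothetical helper: build a contentPolicyConfig from console-style choices.

    choices: {category: (prompt_strength, response_strength)} using console labels.
    """
    return {
        "filtersConfig": [
            {
                "type": category,
                "inputStrength": STRENGTHS[prompt],      # applied to user prompts
                "outputStrength": STRENGTHS[response],   # applied to model responses
            }
            for category, (prompt, response) in choices.items()
        ]
    }

# Example: different filter levels for prompts and responses, per category.
policy = build_content_policy({"HATE": ("High", "Medium"), "INSULTS": ("Low", "Low")})
```

Note that, as in the console, each category carries two independent strengths: one for what users send to the model and one for what the model returns.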

API

To edit a guardrail, send an UpdateGuardrail request. Include both the fields that you want to update and the fields that you want to keep unchanged.

The following is the request format:

PUT /guardrails/guardrailIdentifier HTTP/1.1
Content-type: application/json

{
  "blockedInputMessaging": "string",
  "blockedOutputsMessaging": "string",
  "contentPolicyConfig": {
    "filtersConfig": [
      {
        "inputStrength": "NONE | LOW | MEDIUM | HIGH",
        "outputStrength": "NONE | LOW | MEDIUM | HIGH",
        "type": "SEXUAL | VIOLENCE | HATE | INSULTS"
      }
    ]
  },
  "description": "string",
  "kmsKeyId": "string",
  "name": "string",
  "tags": [
    {
      "key": "string",
      "value": "string"
    }
  ],
  "topicPolicyConfig": {
    "topicsConfig": [
      {
        "definition": "string",
        "examples": [ "string" ],
        "name": "string",
        "type": "DENY"
      }
    ]
  }
}

The following is the response format:

HTTP/1.1 202
Content-type: application/json

{
  "guardrailArn": "string",
  "guardrailId": "string",
  "updatedAt": "string",
  "version": "string"
}
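Because the request must carry the fields you want to keep as well as the ones you change, a common pattern is to read the current configuration first, overlay only the changed fields, and send everything back. The following is a minimal sketch, assuming the boto3 `bedrock` client's `get_guardrail` and `update_guardrail` operations; the field selection is illustrative, and policy objects are omitted because their key names differ between the get response and the update request:

```python
# Top-level fields that carry over with the same name between GetGuardrail's
# response and UpdateGuardrail's request (an assumption for this sketch).
CARRY_OVER = ("name", "description", "blockedInputMessaging", "blockedOutputsMessaging")

def build_update_request(current, changes):
    """Start from the current configuration and overlay only the changed fields."""
    request = {field: current[field] for field in CARRY_OVER if field in current}
    request.update(changes)
    return request

def update_guardrail(guardrail_id, changes):
    # boto3 is imported here so the merge helper stays usable offline;
    # the call requires AWS credentials with Amazon Bedrock permissions.
    import boto3
    bedrock = boto3.client("bedrock")
    current = bedrock.get_guardrail(guardrailIdentifier=guardrail_id)
    request = build_update_request(current, changes)
    return bedrock.update_guardrail(guardrailIdentifier=guardrail_id, **request)

# Offline example of the merge step: change only the description.
current = {"name": "my-guardrail", "description": "old text", "blockedInputMessaging": "Blocked."}
request = build_update_request(current, {"description": "new text"})
```

The merge step is what keeps unchanged fields intact: any field you leave out of the update request is not preserved for you, so the request is rebuilt from the current state each time.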

© 2025, Amazon Web Services, Inc. or its affiliates. All rights reserved.