Create AI guardrails for Amazon Q in Connect
Important
- You can create up to three custom guardrails.
- Amazon Q in Connect guardrails support English only. Evaluating text content in other languages can produce unreliable results.
An AI guardrail is a resource that enables you to implement safeguards based on your use cases and responsible AI policies.
Amazon Connect uses Amazon Bedrock guardrails. You can create and edit these guardrails in the Amazon Connect admin website.
Following is an overview of the policies that you can create and edit in the Amazon Connect admin website:
- Content filters: Adjust filter strengths to help block input prompts or model responses that contain harmful content. Filtering is based on detection of the following predefined harmful content categories: Hate, Insults, Sexual, Violence, Misconduct, and Prompt Attack.
- Denied topics: Define a set of topics that are undesirable in the context of your application. The filter will help block them if detected in user queries or model responses. You can add up to 30 denied topics.
- Word filters: Configure filters to help block undesirable words, phrases, and profanity (exact match). Such words can include offensive terms, competitor names, and so on.
- Sensitive information filters: Configure filters to help block or mask sensitive information, such as personally identifiable information (PII) or custom regex patterns, in user inputs and model responses. Blocking or masking is based on probabilistic detection of sensitive information in standard formats, in entities such as Social Security numbers (SSNs), dates of birth, and addresses. You can also configure regular-expression-based detection of identifier patterns.
- Contextual grounding check: Help detect and filter hallucinations in model responses based on grounding in a source and relevance to the user query.
- Blocked messaging: Customize the default message that's displayed to the user if your guardrail blocks the input or the model response.
Amazon Connect does not support image content filters, which help detect and filter inappropriate or toxic image content.
Important
When configuring or editing a guardrail, we strongly recommend that you experiment and benchmark with different configurations. Some combinations can have unintended consequences. Test the guardrail to ensure that the results meet your use-case requirements.
The following section explains how to access the AI guardrail builder and editor in the Amazon Connect admin website, using the example of changing the blocked message that is displayed to users.
Change the default blocked message
The following image shows an example of the default blocked message that is displayed to a user. The default message is "Blocked input text by guardrail."

To change the default blocked message
- Log in to the Amazon Connect admin website at https://instance name.my.connect.aws/. Use an admin account, or an account with the Amazon Q - AI guardrails - Create permission in its security profile.
- On the navigation menu, choose Amazon Q, AI guardrails.
- On the AI Guardrails page, choose Create AI Guardrail. A dialog is displayed for you to assign a name and description.
- In the Create AI Guardrail dialog box, enter a name and description, and then choose Create. If your business already has three guardrails, you'll get an error message, as shown in the following image.
If you receive this message, instead of creating another guardrail, consider editing an existing guardrail to meet your needs, or delete one so you can create another.
- To change the default message that's displayed when the guardrail blocks the model response, scroll to the Blocked messaging section.
- Enter the blocked message text that you want to display, choose Save, and then choose Publish.
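You can also change the blocked messaging programmatically. The following is a minimal sketch, not a verbatim reference: it assumes the aws qconnect update-ai-guardrail CLI command accepts these parameters, and the IDs and messages are placeholders that you must replace with your own values.
# Update the messages shown when a guardrail blocks input or output (IDs are placeholders)
aws qconnect update-ai-guardrail \
    --assistant-id a0a81ecf-6df1-4f91-9513-3bdcb9497e32 \
    --ai-guardrail-id your-guardrail-id \
    --visibility-status PUBLISHED \
    --blocked-input-messaging "Sorry, I can't help with that request." \
    --blocked-outputs-messaging "Sorry, I can't provide that response."
For creating a guardrail from scratch on the command line, see the samples in the next section.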
Sample CLI commands to configure AI guardrail policies
Following are examples of how to configure the AI guardrail policies by using the AWS CLI.
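Each sample below is a JSON request body rather than a complete command line. One way to apply a sample, assuming the aws qconnect create-ai-guardrail command accepts the body through the AWS CLI's global --cli-input-json option, is to save the JSON to a file (the file name here is illustrative) and run:
# Create the guardrail from a JSON request body saved in guardrail.json
aws qconnect create-ai-guardrail --cli-input-json file://guardrail.json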
Block undesirable topics
Use the following sample AWS CLI command to block undesirable topics.
{
    "assistantId": "a0a81ecf-6df1-4f91-9513-3bdcb9497e32",
    "name": "test-ai-guardrail-2",
    "description": "This is a test ai-guardrail",
    "blockedInputMessaging": "Blocked input text by guardrail",
    "blockedOutputsMessaging": "Blocked output text by guardrail",
    "visibilityStatus": "PUBLISHED",
    "topicPolicyConfig": {
        "topicsConfig": [
            {
                "name": "Financial Advice",
                "definition": "Investment advice refers to financial inquiries, guidance, or recommendations with the goal of generating returns or achieving specific financial objectives.",
                "examples": [
                    "Is investment in stocks better than index funds?",
                    "Which stocks should I invest in?",
                    "Can you manage my personal finances?"
                ],
                "type": "DENY"
            }
        ]
    }
}
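In this sample, the topic's natural-language definition and example phrases give the filter context for recognizing the topic, and the DENY type blocks it when it's detected in user queries or model responses.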
Filter harmful and inappropriate content
Use the following sample AWS CLI command to filter harmful and inappropriate content.
{
    "assistantId": "a0a81ecf-6df1-4f91-9513-3bdcb9497e32",
    "name": "test-ai-guardrail-2",
    "description": "This is a test ai-guardrail",
    "blockedInputMessaging": "Blocked input text by guardrail",
    "blockedOutputsMessaging": "Blocked output text by guardrail",
    "visibilityStatus": "PUBLISHED",
    "contentPolicyConfig": {
        "filtersConfig": [
            {
                "inputStrength": "HIGH",
                "outputStrength": "HIGH",
                "type": "INSULTS"
            }
        ]
    }
}
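The inputStrength and outputStrength values control how aggressively the category is filtered on the input and output sides. In Amazon Bedrock guardrails, the supported strengths are NONE, LOW, MEDIUM, and HIGH; a higher strength filters content more aggressively.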
Filter harmful and inappropriate words
Use the following sample AWS CLI command to filter harmful and inappropriate words.
{
    "assistantId": "a0a81ecf-6df1-4f91-9513-3bdcb9497e32",
    "name": "test-ai-guardrail-2",
    "description": "This is a test ai-guardrail",
    "blockedInputMessaging": "Blocked input text by guardrail",
    "blockedOutputsMessaging": "Blocked output text by guardrail",
    "visibilityStatus": "PUBLISHED",
    "wordPolicyConfig": {
        "wordsConfig": [
            {
                "text": "Nvidia"
            }
        ]
    }
}
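Word filters can also use a managed profanity list instead of custom words. The following is a sketch that assumes the managedWordListsConfig field from Amazon Bedrock guardrails is available in this API as well; only the wordPolicyConfig section differs from the sample above.
{
    "assistantId": "a0a81ecf-6df1-4f91-9513-3bdcb9497e32",
    "name": "test-ai-guardrail-2",
    "description": "This is a test ai-guardrail",
    "blockedInputMessaging": "Blocked input text by guardrail",
    "blockedOutputsMessaging": "Blocked output text by guardrail",
    "visibilityStatus": "PUBLISHED",
    "wordPolicyConfig": {
        "managedWordListsConfig": [
            {
                "type": "PROFANITY"
            }
        ]
    }
}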
Detect hallucinations in the model response
Use the following sample AWS CLI command to detect hallucinations in the model response.
{
    "assistantId": "a0a81ecf-6df1-4f91-9513-3bdcb9497e32",
    "name": "test-ai-guardrail-2",
    "description": "This is a test ai-guardrail",
    "blockedInputMessaging": "Blocked input text by guardrail",
    "blockedOutputsMessaging": "Blocked output text by guardrail",
    "visibilityStatus": "PUBLISHED",
    "contextualGroundingPolicyConfig": {
        "filtersConfig": [
            {
                "type": "RELEVANCE",
                "threshold": 0.50
            }
        ]
    }
}
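The sample above checks only relevance. In Amazon Bedrock guardrails, contextual grounding supports two filter types, GROUNDING and RELEVANCE, and a response that scores below a filter's threshold is blocked. Assuming the same field shape carries over, a sketch that applies both checks looks like the following; the thresholds are illustrative.
{
    "assistantId": "a0a81ecf-6df1-4f91-9513-3bdcb9497e32",
    "name": "test-ai-guardrail-2",
    "description": "This is a test ai-guardrail",
    "blockedInputMessaging": "Blocked input text by guardrail",
    "blockedOutputsMessaging": "Blocked output text by guardrail",
    "visibilityStatus": "PUBLISHED",
    "contextualGroundingPolicyConfig": {
        "filtersConfig": [
            {
                "type": "GROUNDING",
                "threshold": 0.75
            },
            {
                "type": "RELEVANCE",
                "threshold": 0.50
            }
        ]
    }
}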
Redact sensitive information
Use the following sample AWS CLI command to redact sensitive information such as personally identifiable information (PII).
{
    "assistantId": "a0a81ecf-6df1-4f91-9513-3bdcb9497e32",
    "name": "test-ai-guardrail-2",
    "description": "This is a test ai-guardrail",
    "blockedInputMessaging": "Blocked input text by guardrail",
    "blockedOutputsMessaging": "Blocked output text by guardrail",
    "visibilityStatus": "PUBLISHED",
    "sensitiveInformationPolicyConfig": {
        "piiEntitiesConfig": [
            {
                "type": "CREDIT_DEBIT_CARD_NUMBER",
                "action": "BLOCK"
            }
        ]
    }
}
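In addition to the predefined PII entity types, sensitive information filters support custom regular expressions, as described earlier in this topic. The following sketch assumes the regexesConfig field from Amazon Bedrock guardrails is available here as well; the name, description, and pattern are hypothetical examples, and the ANONYMIZE action masks matches instead of blocking them.
{
    "assistantId": "a0a81ecf-6df1-4f91-9513-3bdcb9497e32",
    "name": "test-ai-guardrail-2",
    "description": "This is a test ai-guardrail",
    "blockedInputMessaging": "Blocked input text by guardrail",
    "blockedOutputsMessaging": "Blocked output text by guardrail",
    "visibilityStatus": "PUBLISHED",
    "sensitiveInformationPolicyConfig": {
        "regexesConfig": [
            {
                "name": "booking-id",
                "description": "Matches internal booking IDs",
                "pattern": "BK-[0-9]{8}",
                "action": "ANONYMIZE"
            }
        ]
    }
}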