
Use guardrails for your use case

After you create a guardrail, you can use it with the following features:

  • Model inference – Apply a guardrail to submitted prompts and generated responses when running inference on a model (see the sketch after this list).

  • Agents – Associate a guardrail with an agent to apply it to prompts sent to the agent and responses returned from it.

  • Knowledge base – Apply a guardrail when querying a knowledge base and generating responses from it.

  • Flow – Add a guardrail to a prompt node or knowledge base node in a flow to apply it to inputs and outputs of these nodes.
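As a minimal sketch of the first case, the following Python (boto3) snippet applies a guardrail during model inference with the Converse operation. The guardrail ID, version, and model ID shown are placeholder values; substitute your own.

```python
import boto3

# Amazon Bedrock Runtime client; region and credentials come from your AWS configuration.
bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Placeholder identifiers -- replace with your own guardrail ID/version and a model you have access to.
GUARDRAIL_ID = "gr-1234567890ab"      # hypothetical guardrail identifier
GUARDRAIL_VERSION = "1"               # or "DRAFT" to use the working draft
MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"

response = bedrock_runtime.converse(
    modelId=MODEL_ID,
    messages=[
        {"role": "user", "content": [{"text": "Tell me about your refund policy."}]}
    ],
    # The guardrail is passed in the guardrailConfig field of the request body.
    guardrailConfig={
        "guardrailIdentifier": GUARDRAIL_ID,
        "guardrailVersion": GUARDRAIL_VERSION,
        "trace": "enabled",           # optional: include the guardrail trace in the response
    },
)

# stopReason is "guardrail_intervened" when the guardrail blocked or modified content.
print(response["stopReason"])
print(response["output"]["message"]["content"][0]["text"])
```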

The following list describes how to include a guardrail for each of these features, using the AWS Management Console or the Amazon Bedrock API.

Model inference
  • Console – Select the guardrail when using a playground.
  • API – Specify the guardrail in the header of an InvokeModel or InvokeModelWithResponseStream request, or include it in the guardrailConfig field in the body of a Converse or ConverseStream request (see the sketch after this list).

Associate with an agent
  • Console – When you create or update the agent, specify the guardrail in the Guardrail details section of the Agent builder.
  • API – Include a guardrailConfiguration field in the body of a CreateAgent or UpdateAgent request.

Use when querying a knowledge base
  • Console – Follow the steps in the Guardrails section of the query configurations and add a guardrail when you set Configurations.
  • API – Include a guardrailConfiguration field in the body of a RetrieveAndGenerate request.

Include in a prompt node in a flow
  • Console – When you create or update a flow, select the prompt node and specify the guardrail in the Configure section.
  • API – When you define the prompt node in the nodes field of a CreateFlow or UpdateFlow request, include a guardrailConfiguration field in the PromptFlowNodeConfiguration.

Include in a knowledge base node in a flow
  • Console – When you create or update a flow, select the knowledge base node and specify the guardrail in the Configure section.
  • API – When you define the knowledge base node in the nodes field of a CreateFlow or UpdateFlow request, include a guardrailConfiguration field in the KnowledgeBaseFlowNodeConfiguration.
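For the header-based approach, the guardrail identifier and version are passed as request parameters to InvokeModel, which boto3 sends as the guardrail request headers. The model ID, request body, and guardrail values below are placeholders, and the body format must match the model you invoke.

```python
import json

import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Placeholder values -- substitute your own guardrail and a model you have access to.
GUARDRAIL_ID = "gr-1234567890ab"
GUARDRAIL_VERSION = "1"
MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"

# The body format is model-specific; this one follows the Anthropic Claude Messages API.
body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "Summarize our refund policy."}],
}

response = bedrock_runtime.invoke_model(
    modelId=MODEL_ID,
    body=json.dumps(body),
    # These parameters are sent as the guardrail request headers.
    guardrailIdentifier=GUARDRAIL_ID,
    guardrailVersion=GUARDRAIL_VERSION,
    trace="ENABLED",                    # optional: return the guardrail trace
)

result = json.loads(response["body"].read())
print(json.dumps(result, indent=2))
```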

This section covers using a guardrail with model inference through the Amazon Bedrock API. You can use the base inference operations (InvokeModel and InvokeModelWithResponseStream) or the Converse API (Converse and ConverseStream). Both sets of operations support guardrails with synchronous and streaming model inference. You can also selectively evaluate user input and configure how the guardrail processes streaming responses, as in the sketch that follows.
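As a sketch of those last two options, the following snippet tags only part of the user message for guardrail evaluation with a guardContent block and sets the guardrail's stream processing mode in a ConverseStream request. The guardrail ID, version, and model ID are again placeholders.

```python
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Placeholder values.
GUARDRAIL_ID = "gr-1234567890ab"
GUARDRAIL_VERSION = "1"
MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"

response = bedrock_runtime.converse_stream(
    modelId=MODEL_ID,
    messages=[
        {
            "role": "user",
            "content": [
                # When a guardContent block is present, plain text blocks are passed to
                # the model but are not evaluated by the guardrail...
                {"text": "Answer based only on the customer question below."},
                # ...while content wrapped in guardContent is evaluated by the guardrail.
                {"guardContent": {"text": {"text": "How do I cancel my subscription?"}}},
            ],
        }
    ],
    guardrailConfig={
        "guardrailIdentifier": GUARDRAIL_ID,
        "guardrailVersion": GUARDRAIL_VERSION,
        # "async" returns chunks without waiting for the guardrail; "sync" buffers
        # the response until each chunk has been evaluated.
        "streamProcessingMode": "async",
    },
)

# Print the streamed text as it arrives.
for event in response["stream"]:
    if "contentBlockDelta" in event:
        print(event["contentBlockDelta"]["delta"].get("text", ""), end="")
```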