

# Build an end-to-end generative AI workflow with Amazon Bedrock Flows
<a name="flows"></a>

Amazon Bedrock Flows lets you build end-to-end generative AI workflows by linking prompts, supported foundation models (FMs), and other AWS services.

With flows, you can quickly build complex generative AI workflows in a visual builder, transfer data between Amazon Bedrock offerings such as FMs and knowledge bases and other AWS services such as AWS Lambda, and deploy immutable workflows to move from testing to production in a few clicks.

Refer to the following resources for more information about Amazon Bedrock Flows:
+ Pricing for Amazon Bedrock Flows depends on the resources that you use. For example, if you invoke a flow with a prompt node that uses an Amazon Titan model, you're charged for invoking that model. For more information, see [Amazon Bedrock pricing](https://aws.amazon.com/bedrock/pricing/).
+ For quotas for flows, see [Amazon Bedrock endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/bedrock.html) in the AWS General Reference.

The following are some example tasks that you can build a flow for in Amazon Bedrock:
+ **Create and send an email invite** – Create a flow connecting a prompt node, knowledge base node, and Lambda function node. Provide the following prompt to generate an email body: **Send invite to John Smith’s extended team for in-person documentation read for an hour at 2PM EST next Tuesday**. After processing the prompt, the flow queries a knowledge base to look up the email addresses of John Smith's extended team, and then sends the input to a Lambda function to send the invite to all the team members in the list.
+ **Troubleshoot using the error message and the ID of the resource that is causing the error** – The flow looks up the possible causes of the error from a documentation knowledge base, pulls system logs and other relevant information about the resource, and updates the faulty configurations and values for the resource.
+ **Generate reports** – Build a flow to generate metrics for top products. The flow looks for the sales metrics in a database, aggregates the metrics, generates a summary report for top product purchases, and publishes the report on the specified portal.
+ **Ingest data from a specified dataset** – Provide a prompt such as the following: **Start ingesting new datasets added after 3/31 and report failures**. The flow starts preparing data for ingestion and keeps reporting on the status. After the data preparation is complete, the flow starts the ingestion process filtering the failed data. After data ingestion is complete, the flow summarizes the failures and publishes a failure report.

Amazon Bedrock Flows makes it easy for you to link foundation models (FMs), prompts, and other AWS services so that you can quickly create, test, and run your flows. You can manage flows using the visual builder in the Amazon Bedrock console or through the APIs.

The general steps for creating, testing, and deploying a flow are as follows:

**Create the flow:**

1. Specify a flow name, description, and appropriate IAM permissions.

1. Design your flow by deciding which nodes you want to use.

1. Create or define all the resources you require for each node. For example, if you are planning to use an AWS Lambda function, define the functions you need for the node to complete its task.

1. Add nodes to your flow, configure them, and create connections between the nodes by linking the output of a node to the input of another node in the flow.
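
As a rough sketch of these steps in the AWS SDK for Python (Boto3), the following builds a minimal two-node definition and a helper that submits it with `CreateFlow`. The node names, description, and role ARN are hypothetical placeholders:

```python
# Minimal two-node definition: a flow input node wired straight to a
# flow output node. All names here are hypothetical placeholders.
input_node = {
    "name": "FlowInput",
    "type": "Input",
    "outputs": [{"name": "document", "type": "String"}],
}
output_node = {
    "name": "FlowOutput",
    "type": "Output",
    "inputs": [{"name": "document", "type": "String", "expression": "$.data"}],
}
connection = {
    "name": "FlowInput_to_FlowOutput",
    "source": "FlowInput",
    "target": "FlowOutput",
    "type": "Data",
    # Links the source node's output to the target node's input.
    "configuration": {"data": {"sourceOutput": "document", "targetInput": "document"}},
}

definition = {"nodes": [input_node, output_node], "connections": [connection]}

def create_flow(name, role_arn, definition):
    """Submit the definition with CreateFlow (requires AWS credentials)."""
    import boto3  # imported lazily; assumes the AWS SDK for Python is installed
    client = boto3.client("bedrock-agent")
    return client.create_flow(
        name=name,
        executionRoleArn=role_arn,  # the IAM role from step 1
        definition=definition,
    )
```

If the call succeeds, the response includes the flow's ID, which you use when preparing, testing, and versioning the flow.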

**Test the flow:**

1. Prepare the flow so that the latest changes apply to the *working draft* of the flow, a version of the flow that you can use to iteratively test and update your flow.

1. Test the flow by invoking it with sample inputs to see the outputs it yields.

1. When you're satisfied with a flow's configuration, create an immutable snapshot of it by publishing a *version*. A version preserves the flow definition as it exists at the time of creation.

**Deploy the flow:**

1. Create an alias that points to the version of your flow that you want to use in your application.

1. Set up your application to make `InvokeFlow` requests to the alias. If you need to revert to an older version or upgrade to a newer one, you can change the routing configuration of the alias.
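
For example, an application might call `InvokeFlow` through the runtime client in the AWS SDK for Python (Boto3) along these lines. The flow ID and alias ID are hypothetical, and the input node name assumes the console default:

```python
FLOW_ID = "FLOWID1234"       # hypothetical flow ID
FLOW_ALIAS_ID = "ALIASID12"  # hypothetical alias ID

def build_flow_inputs(document):
    """Build the inputs list for an InvokeFlow request. "FlowInputNode"
    is the default name that the console gives the flow input node."""
    return [
        {
            "content": {"document": document},
            "nodeName": "FlowInputNode",
            "nodeOutputName": "document",
        }
    ]

def invoke_flow(document):
    """Invoke the flow alias and collect the flow output events
    (requires AWS credentials)."""
    import boto3  # imported lazily; assumes the AWS SDK for Python is installed
    client = boto3.client("bedrock-agent-runtime")
    response = client.invoke_flow(
        flowIdentifier=FLOW_ID,
        flowAliasIdentifier=FLOW_ALIAS_ID,
        inputs=build_flow_inputs(document),
    )
    outputs = []
    for event in response["responseStream"]:  # the response is an event stream
        if "flowOutputEvent" in event:
            outputs.append(event["flowOutputEvent"]["content"]["document"])
    return outputs
```

Because the alias controls routing, redeploying your application isn't necessary when you point the alias at a different version.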

**Topics**
+ [How Amazon Bedrock Flows works](flows-how-it-works.md)
+ [Supported Regions and models for flows](flows-supported.md)
+ [Prerequisites for Amazon Bedrock Flows](flows-prereq.md)
+ [Create and design a flow in Amazon Bedrock](flows-create.md)
+ [View information about flows in Amazon Bedrock](flows-view.md)
+ [Modify a flow in Amazon Bedrock](flows-modify.md)
+ [Include guardrails in your flow in Amazon Bedrock](flows-guardrails.md)
+ [Test a flow in Amazon Bedrock](flows-test.md)
+ [Run Amazon Bedrock flows asynchronously with flow executions](flows-create-async.md)
+ [Deploy a flow to your application using versions and aliases](flows-deploy.md)
+ [Invoke an AWS Lambda function from an Amazon Bedrock flow in a different AWS account](flow-cross-account-lambda.md)
+ [Converse with an Amazon Bedrock flow](flows-multi-turn-invocation.md)
+ [Run Amazon Bedrock Flows code samples](flows-code-ex.md)
+ [Delete a flow in Amazon Bedrock](flows-delete.md)

# How Amazon Bedrock Flows works
<a name="flows-how-it-works"></a>

Amazon Bedrock Flows lets you build generative AI workflows by connecting nodes, each of which corresponds to a step in the flow that invokes an Amazon Bedrock or related resource. To define a node's inputs, you use expressions that specify how the incoming data is interpreted. To better understand these concepts, review the following topics:

**Topics**
+ [Key definitions for Amazon Bedrock Flows](key-definitions-flow.md)
+ [Use expressions to define inputs by extracting the relevant part of a whole input in Amazon Bedrock Flows](flows-expressions.md)
+ [Node types for your flow](flows-nodes.md)

# Key definitions for Amazon Bedrock Flows
<a name="key-definitions-flow"></a>

The following list introduces you to the basic concepts of Amazon Bedrock Flows.
+ **Flow** – A flow is a construct consisting of a name, description, permissions, a collection of nodes, and connections between nodes. When a flow is invoked, the input in the invocation is sent through each node of the flow until an output node is reached. The response of the invocation returns the final output.
+ **Node** – A node is a step inside a flow. For each node, you configure its name, description, input, output, and any additional configurations. The configuration of a node differs based on its type. To learn more about different node types, see [Node types for your flow](flows-nodes.md).
+ **Connection** – There are two types of connections used in Amazon Bedrock Flows:
  + A **data connection** is drawn between the output of one node (the *source node*) and the input of another node (the *target node*) and sends data from an upstream node to a downstream node. In the Amazon Bedrock console, data connections are solid gray lines.
  + A **conditional connection** is drawn between a condition in a condition node and a downstream node and sends data from the node that precedes the condition node to a downstream node if the condition is fulfilled. In the Amazon Bedrock console, conditional connections are dotted purple lines.
+ **Expression** – An expression defines how to extract an individual input from the whole input entering a node. To learn how to write expressions, see [Use expressions to define inputs by extracting the relevant part of a whole input in Amazon Bedrock Flows](flows-expressions.md).
+ **Flow builder** – The flow builder is a tool in the Amazon Bedrock console for building and editing flows through a visual interface. You drag and drop nodes onto the canvas and configure their inputs and outputs to define your flow.
+ The following sections also use these terms:
  + **Whole input** – The entire input that is sent from the previous node to the current node.
  + **Upstream** – Refers to nodes that occur earlier in the flow.
  + **Downstream** – Refers to nodes that occur later in the flow.
  + **Input** – A node can have multiple inputs. You use expressions to extract the relevant parts of the whole input to use for each individual input. In the Amazon Bedrock console flow builder, an input appears as a circle on the left edge of a node. Connect each input to an output of an upstream node.
  + **Output** – A node can have multiple outputs. In the Amazon Bedrock console flow builder, an output appears as a circle on the right edge of a node. Connect each output to at least one input in a downstream node.
  + **Branch** – If an output from a node is sent to more than one node, or if a condition node is included, the path of a flow will split into multiple branches. Each branch can potentially yield another output in the flow invocation response.

# Use expressions to define inputs by extracting the relevant part of a whole input in Amazon Bedrock Flows
<a name="flows-expressions"></a>

When you configure the inputs for a node, you define each one in relation to the whole input that will enter the node. The whole input can be a string, number, boolean, array, or object. To define an input in relation to the whole input, you use a subset of supported expressions based on [JsonPath](https://github.com/json-path/JsonPath). Every expression must begin with `$.data`, which refers to the whole input. Note the following about using expressions:
+ If the whole input is a string, number, or boolean, the only expression that you can use to define an individual input is `$.data`.
+ If the whole input is an array or object, you can extract a part of it to define an individual input.

As an example of how to use expressions, suppose that the whole input is the following JSON object:

```
{
    "animals": {
        "mammals": ["cat", "dog"],
        "reptiles": ["snake", "turtle", "iguana"]
    },
    "organisms": {
        "mammals": ["rabbit", "horse", "mouse"],
        "flowers": ["lily", "daisy"]
    },
    "numbers": [1, 2, 3, 5, 8]
}
```

You can use the following expressions to extract a part of the input (the examples refer to what would be returned from the preceding JSON object):


| Expression | Meaning | Example | Example result | 
| --- | --- | --- | --- | 
| $.data | The entire input. | $.data | The entire object | 
| .name | The value for a field called name in a JSON object. | $.data.numbers | [1, 2, 3, 5, 8] | 
| [int] | The member at the index specified by int in an array. | $.data.animals.reptiles[2] | iguana | 
| [int1, int2, ...] | The members at the indices specified by each int in an array. | $.data.numbers[0, 3] | [1, 5] | 
| [int1:int2] | An array consisting of the items at the indices between int1 (inclusive) and int2 (exclusive) in an array. Omitting int1 or int2 marks the beginning or end of the array, respectively. | $.data.organisms.mammals[1:] | ["horse", "mouse"] | 
| \* | A wildcard that can be used in place of a name or int. If there are multiple results, they are returned in an array. | $.data.\*.mammals | [["cat", "dog"], ["rabbit", "horse", "mouse"]] | 
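
To illustrate, the table's example results can be reproduced on the sample object with plain Python indexing. This only emulates the JsonPath semantics; it isn't how flows evaluate expressions internally:

```python
# The sample whole input from above.
data = {
    "animals": {
        "mammals": ["cat", "dog"],
        "reptiles": ["snake", "turtle", "iguana"],
    },
    "organisms": {
        "mammals": ["rabbit", "horse", "mouse"],
        "flowers": ["lily", "daisy"],
    },
    "numbers": [1, 2, 3, 5, 8],
}

# $.data.numbers -> the value of the "numbers" field
assert data["numbers"] == [1, 2, 3, 5, 8]

# $.data.animals.reptiles[2] -> the member at index 2
assert data["animals"]["reptiles"][2] == "iguana"

# $.data.numbers[0, 3] -> the members at indices 0 and 3
assert [data["numbers"][i] for i in (0, 3)] == [1, 5]

# $.data.organisms.mammals[1:] -> items from index 1 to the end
assert data["organisms"]["mammals"][1:] == ["horse", "mouse"]

# $.data.*.mammals -> wildcard over top-level fields; multiple matches
# come back as an array
matches = [value["mammals"] for value in data.values()
           if isinstance(value, dict) and "mammals" in value]
assert matches == [["cat", "dog"], ["rabbit", "horse", "mouse"]]
```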

# Node types for your flow
<a name="flows-nodes"></a>

Amazon Bedrock Flows provides the following node types to build your flow. When you configure a node, you provide the following fields:
+ Name – Enter a name for the node.
+ Type – In the console, you drag and drop the type of node to use. In the API, use the `type` field and the corresponding [FlowNodeConfiguration](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_FlowNodeConfiguration.html) in the `configuration` field.
+ Inputs – Provide the following information for each input:
  + Name – A name for the input. Some nodes have pre-defined names or types that you must use. To learn which ones have pre-defined names, see [Logic node types](#flows-nodes-logic-table).
  + Expression – Define the part of the whole input to use as the individual input. For more information, see [Use expressions to define inputs by extracting the relevant part of a whole input in Amazon Bedrock Flows](flows-expressions.md).
  + Type – The data type for the input. When this node is reached at runtime, Amazon Bedrock applies the expression to the whole input and validates that the result matches the data type.
+ Outputs – Provide the following information for each output:
  + Name – A name for the output. Some nodes have pre-defined names or types that you must use. To learn which ones have pre-defined names, see [Logic node types](#flows-nodes-logic-table).
  + Type – The data type for the output. When this node is reached at runtime, Amazon Bedrock validates that the node output matches the data type.
+ Configuration – In the console, you define node-specific fields at the top of the node. In the API, use the appropriate [FlowNodeConfiguration](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_FlowNodeConfiguration.html) and fill in its fields.

Each node type is described below, along with its general structure in the API.

## Nodes for controlling flow logic
<a name="flows-nodes-logic"></a>

Use the following node types to control the logic of your flow.

### Flow input node
<a name="flows-nodes-input"></a>

Every flow must begin with a flow input node, and each flow contains exactly one. The flow input node takes the `content` from the `InvokeFlow` request, validates the data type, and sends it to the following node.

The following shows the general structure of an input [FlowNode](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_FlowNode.html) object in the API:

```
{
    "name": "string",
    "type": "Input",
    "outputs": [
        {
            "name": "document",
            "type": "String | Number | Boolean | Object | Array",
        }
    ],
    "configuration": {
        "input": CONTEXT-DEPENDENT
    }
}
```

### Flow output node
<a name="flows-nodes-output"></a>

A flow output node extracts the input data from the previous node, based on the defined expression, and returns it. In the console, the output is the response returned after choosing **Run** in the test window. In the API, the output is returned in the `content` field of the `flowOutputEvent` in the `InvokeFlow` response. A flow can have multiple flow output nodes if there are multiple branches in the flow.

The following shows the general structure of an output [FlowNode](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_FlowNode.html) object:

```
{
    "name": "string",
    "type": "Output",
    "inputs": [
        {
            "name": "document",
            "type": "String | Number | Boolean | Object | Array",
            "expression": "string"
        }
    ],
    "configuration": {
        "output": CONTEXT-DEPENDENT
    }
}
```

### Condition node
<a name="flows-nodes-condition"></a>

A condition node sends data from the previous node to different nodes, depending on the conditions that are defined. A condition node can take multiple inputs.

For an example, see [Create a flow with a condition node](flows-ex-condition.md).

**To define a condition node**

1. Add as many inputs as you need to evaluate the conditions you plan to add.

1. Enter a name for each input, specify the type to expect, and write an expression to extract the relevant part from the whole input.

1. Connect each input to the relevant output from an upstream node.

1. Add as many conditions as you need.

1. For each condition:

   1. Enter a name for the condition.

   1. Use relational and logical operators to define a condition that compares inputs to other inputs or to a constant.
**Note**  
Conditions are evaluated in order. If more than one condition is satisfied, the earlier condition takes precedence.

   1. Connect each condition to the downstream node to which you want to send the data if that condition is fulfilled.

#### Condition expressions
<a name="flows-nodes-condition-expr"></a>

To define a condition, you refer to an input by its name and compare it to a value using any of the following relational operators:


| Operator | Meaning | Supported data types | Example usage | Example meaning | 
| --- | --- | --- | --- | --- | 
| == | Equal to (the data type must also be equal) | String, Number, Boolean | A == B | If A is equal to B | 
| != | Not equal to | String, Number, Boolean | A != B | If A isn't equal to B | 
| > | Greater than | Number | A > B | If A is greater than B | 
| >= | Greater than or equal to | Number | A >= B | If A is greater than or equal to B | 
| < | Less than | Number | A < B | If A is less than B | 
| <= | Less than or equal to | Number | A <= B | If A is less than or equal to B | 

You can compare inputs to other inputs or to a constant in a conditional expression. For example, if you have a numerical input called `profit` and another one called `expenses`, both **profit > expenses** and **profit <= 1000** are valid expressions.

You can use the following logical operators to combine expressions for more complex conditions. We recommend that you use parentheses to resolve ambiguities in grouping of expressions:


| Operator | Meaning | Example usage | Example meaning | 
| --- | --- | --- | --- | 
| and | Both expressions are true | (A < B) and (C == 1) | If A is less than B and C is equal to 1 | 
| or | At least one expression is true | (A != 2) or (B > C) | If A isn't equal to 2 or B is greater than C | 
| not | The expression isn't true | not (A > B) | If A isn't greater than B (equivalent to A <= B) | 

In the API, you define the following in the `definition` field when you send a [CreateFlow](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_CreateFlow.html) or [UpdateFlow](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_UpdateFlow.html) request:

1. A condition [FlowNode](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_FlowNode.html) object in the `nodes` array. The general format is as follows (note that condition nodes don't have `outputs`):

   ```
   {
       "name": "string",
       "type": "Condition",
       "inputs": [
           {
               "name": "string",
               "type": "String | Number | Boolean | Object | Array",
               "expression": "string"
           }
       ],
       "configuration": {
           "condition": {
               "conditions": [
                   {
                       "name": "string",
                       "expression": "string"
                   },
                   ...
               ]
           }
       }
   }
   ```

1. For each input into the condition node, a [FlowConnection](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_FlowConnection.html) object in the `connections` array. Include a [FlowDataConnectionConfiguration](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_FlowDataConnectionConfiguration.html) object in the `configuration` field of the `FlowConnection` object. The general format of the `FlowConnection` object is as follows:

   ```
   {
       "name": "string",
       "source": "string",
       "target": "string",
       "type": "Data",
       "configuration": {
           "data": {
               "sourceOutput": "string",
               "expression": "string"
           }
       }
   }
   ```

1. For each condition (including the default condition) in the condition node, a [FlowConnection](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_FlowConnection.html) object in the `connections` array. Include a [FlowConditionalConnectionConfiguration](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_FlowConditionalConnectionConfiguration.html) object in the `configuration` field of the `FlowConnection` object. The general format of the [FlowConnection](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_FlowConnection.html) object is as follows:

   ```
   {
       "name": "string",
       "source": "string",
       "target": "string",
       "type": "Conditional",
       "configuration": {
           "conditional": {
               "condition": "string"
           }
       }
   }
   ```

   Use relational and logical operators to define the `condition` that connects this condition `source` node to a `target` node downstream. For the default condition, specify the condition as **default**.
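
Putting these pieces together, a condition node and its conditional connections can be assembled as plain dictionaries before being passed in a `CreateFlow` definition. The node and branch names below are hypothetical, and the default condition is modeled as a condition named `default` with no expression (an assumption based on the guidance above):

```python
# Hypothetical condition node that routes on a numeric score.
condition_node = {
    "name": "CheckScore",
    "type": "Condition",
    "inputs": [
        {"name": "score", "type": "Number", "expression": "$.data"}
    ],
    "configuration": {
        "condition": {
            "conditions": [
                {"name": "highScore", "expression": "score >= 80"},
                {"name": "default"},  # fallback when no condition matches
            ]
        }
    },
}

def conditional_connection(source, target, condition_name):
    """Connection that is taken when the named condition is fulfilled."""
    return {
        "name": f"{source}_{condition_name}",
        "source": source,
        "target": target,
        "type": "Conditional",
        "configuration": {"conditional": {"condition": condition_name}},
    }

connections = [
    conditional_connection("CheckScore", "PassBranch", "highScore"),
    conditional_connection("CheckScore", "FailBranch", "default"),
]
```

Because conditions are evaluated in order, placing `highScore` before `default` ensures that the fallback branch is only taken when the score check fails.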

### Iterator node
<a name="flows-nodes-iterator"></a>

An iterator node takes an array and iteratively returns its items as output to the downstream node. The items are processed one at a time, not in parallel. The flow output node returns the final result for each item in a separate response. You can also use a collector node downstream from the iterator node to collect the iterated responses and return them as an array, together with the size of the array.

The following shows the general structure of an iterator [FlowNode](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_FlowNode.html) object:

```
{
    "name": "string",
    "type": "Iterator",
    "inputs": [
        {
            "name": "array",
            "type": "Array",
            "expression": "string"
        }
    ],
    "outputs": [
        {
            "name": "arrayItem",
            "type": "String | Number | Boolean | Object | Array",
        },
        {
            "name": "arraySize",
            "type": "Number"
        }
    ],
    "configuration": {
        "iterator": CONTEXT-DEPENDENT
    }
}
```

### Collector node
<a name="flows-nodes-collector"></a>

A collector node takes iterated items, together with the expected size of the final array, and returns them as an array. You can use a collector node downstream from an iterator node to gather the iterated items after they pass through intermediate nodes.

The following shows the general structure of a collector [FlowNode](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_FlowNode.html) object:

```
{
    "name": "string",
    "type": "Collector",
    "inputs": [
        {
            "name": "arrayItem",
            "type": "String | Number | Boolean | Object | Array",
            "expression": "string"
        },
        {
            "name": "arraySize",
            "type": "Number"
        }
    ],
    "outputs": [
        {
            "name": "collectedArray",
            "type": "Array"
        }
    ],
    "configuration": {
        "collector": CONTEXT-DEPENDENT
    }
}
```
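
The iterator and collector semantics can be sketched in plain Python: items flow through an intermediate step one at a time, and the collector uses the expected array size to reassemble the results. This is only an emulation of the behavior described above, not the service implementation:

```python
def iterate_and_collect(array, step):
    """Emulate an iterator -> step -> collector pipeline."""
    array_size = len(array)    # the iterator also emits arraySize
    collected = []
    for array_item in array:   # items are processed one by one, not in parallel
        collected.append(step(array_item))
    # The collector returns the items as an array once all have arrived.
    assert len(collected) == array_size
    return collected

result = iterate_and_collect(["cat", "dog"], str.upper)
# result is ["CAT", "DOG"]
```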

### DoWhile loop node
<a name="flows-nodes-dowhile"></a>

A DoWhile loop node executes a sequence of nodes repeatedly while a specified condition remains true. The loop executes at least once before evaluating the condition, making it ideal for scenarios where you need to perform an action and then check if it should be repeated based on the result.

The DoWhile loop node takes input data and passes it through the loop body. After each iteration, the condition is evaluated to determine whether to continue looping or exit. The loop continues as long as the condition evaluates to true and the number of iterations hasn't reached `maxIterations`.

The following shows the general structure of a DoWhile loop [FlowNode](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_FlowNode.html) object:

```
{
    "name": "string",
    "type": "DoWhile",
    "inputs": [
        {
            "name": "loopInput",
            "type": "String | Number | Boolean | Object | Array",
            "expression": "string"
        }
    ],
    "outputs": [
        {
            "name": "loopOutput",
            "type": "String | Number | Boolean | Object | Array"
        },
        {
            "name": "iterationCount",
            "type": "Number"
        }
    ],
    "configuration": {
        "doWhile": {
            "condition": "string",
            "maxIterations": "number"
        }
    }
}
```

In the configuration:
+ `condition` – A boolean expression that determines whether to continue looping. Use the same relational and logical operators as condition nodes. The condition is evaluated after each iteration.
+ `maxIterations` – The maximum number of iterations, specified as a positive number. The default is 10. This parameter helps you avoid infinite loops.

**Note**  
The loop exits when either the condition becomes false or the maximum number of iterations is reached.
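
The loop semantics can be emulated in plain Python: the body runs at least once, and the loop exits when the condition turns false or the iteration cap is hit. This is only an illustration of the behavior, not the service implementation:

```python
def do_while(body, condition, state, max_iterations=10):
    """Run body at least once; continue while condition(state) is true
    and the iteration cap hasn't been reached."""
    iteration_count = 0
    while True:
        state = body(state)  # the loop body always runs before the check
        iteration_count += 1
        if not condition(state) or iteration_count >= max_iterations:
            return state, iteration_count

# Example: keep doubling the value while it stays below 100.
value, iterations = do_while(lambda x: x * 2, lambda x: x < 100, 7)
# 7 -> 14 -> 28 -> 56 -> 112, so the loop ran 4 times
```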

## Nodes for handling data in the flow
<a name="flows-nodes-data"></a>

Use the following node types to handle data in your flow:

### Prompt node
<a name="flows-nodes-prompt"></a>

A prompt node defines a prompt to use in the flow. You can use a prompt from Prompt management or define one inline in the node. For more information, see [Construct and store reusable prompts with Prompt management in Amazon Bedrock](prompt-management.md).

For an example, see [Try example flows](flows-ex.md).

The inputs to the prompt node are values to fill in the variables. The output is the generated response from the model.

The following shows the general structure of a prompt [FlowNode](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_FlowNode.html) object:

```
{
    "name": "string",
    "type": "prompt",
    "inputs": [
        {
            "name": "content",
            "type": "String | Number | Boolean | Object | Array",
            "expression": "string"
        },
        ...
    ],
    "outputs": [
        {
            "name": "modelCompletion",
            "type": "String"
        }
    ],
    "configuration": {
        "prompt": {
            "sourceConfiguration": [PromptFlowNodeSourceConfiguration](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_PromptFlowNodeSourceConfiguration.html) object (see below),
            "guardrailConfiguration": {
                "guardrailIdentifier": "string",
                "guardrailVersion": "string"
            }
        }
    }
}
```

The [PromptFlowNodeSourceConfiguration](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_PromptFlowNodeSourceConfiguration.html) object depends on whether you use a prompt from Prompt management or define one inline:
+ If you use a prompt from Prompt management, the object should be in the following general structure:

  ```
  {
      "resource": {
          "promptArn": "string"
      }
  }
  ```
+ If you define a prompt inline, follow the guidance for defining a variant in the API tab of [Create a prompt using Prompt management](prompt-management-create.md) (note that this object has no `name` field). The object you use should have the following general structure:

  ```
  {
      "inline": {
          "modelId": "string",
          "templateType": "TEXT",
          "templateConfiguration": {
              "text": {
                  "text": "string",
                  "inputVariables": [
                      {
                          "name": "string"
                      },
                      ...
                  ]
              }
          },
          "inferenceConfiguration": {
              "text": {
                  "maxTokens": int,
                  "stopSequences": ["string", ...],
                  "temperature": float,
                  "topP": float
              }
          },
          "additionalModelRequestFields": {
              "key": "value",
              ...
          }
      }
  }
  ```

To apply a guardrail from Amazon Bedrock Guardrails to your prompt or the response generated from it, include the `guardrailConfiguration` field and specify the ID or ARN of the guardrail in the `guardrailIdentifier` field and the version of the guardrail in the `guardrailVersion` field.

### Agent node
<a name="flows-nodes-agent"></a>

An agent node lets you send a prompt to an agent, which orchestrates between FMs and associated resources to identify and carry out actions for an end-user. For more information, see [Automate tasks in your application using AI agents](agents.md).

In the configuration, specify the Amazon Resource Name (ARN) of the alias of the agent to use. The inputs into the node are the prompt for the agent and any associated [prompt or session attributes](agents-session-state.md). The node returns the agent's response as an output.

An agent node supports multi-turn invocations, enabling interactive conversations between users and the agent during flow execution. When an agent node requires additional information or clarification, it can pause the flow execution and request specific input from the user. After the user provides the requested information, the agent node continues processing with the new input. This continues until the agent node has all the information it needs to complete its execution.

The following shows the general structure of an agent [FlowNode](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_FlowNode.html) object:

```
{
    "name": "string",
    "type": "Agent",
    "inputs": [
       {
            "name": "agentInputText"
            "type": "String | Number | Boolean | Object | Array",
            "expression": "string"
        },
        {
            "name": "promptAttributes"
            "type": "Object",
            "expression": "string"
        },
        {
            "name": "sessionAttributes"
            "type": "Object",
            "expression": "string"
        }
    ],
    "outputs": [
        {
            "name": "agentResponse",
            "type": "String"
        }
    ],
    "configuration": {
        "agent": {
            "agentAliasArn": "string"
        }
    }
}
```

### Knowledge base node
<a name="flows-nodes-kb"></a>

A knowledge base node lets you send a query to a knowledge base from Amazon Bedrock Knowledge Bases. For more information, see [Retrieve data and generate AI responses with Amazon Bedrock Knowledge Bases](knowledge-base.md).

In the configuration, you must provide at least the `knowledgeBaseId`. You can optionally include the following fields, depending on your use case:
+ `modelId` – Include a [model ID](models-supported.md) to use if you want to generate a response based on the retrieved results. To return the retrieved results as an array, omit the model ID.
+ `guardrailConfiguration` – Include the ID or ARN of a guardrail, defined in Amazon Bedrock Guardrails, in the `guardrailIdentifier` field and the version of the guardrail in the `guardrailVersion` field.
**Note**  
Guardrails can only be applied when using `RetrieveAndGenerate` in a knowledge base node.

The input into the node is the query to the knowledge base. The output is either the model response, as a string, or an array of the retrieved results.

The following shows the general structure of a knowledge base [FlowNode](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_FlowNode.html) object:

```
{
    "name": "string",
    "type": "KnowledgeBase",
    "inputs": [
        {
            "name": "retrievalQuery",
            "type": "String",
            "expression": "string"
        }
    ],
    "outputs": [
        {
            "name": "retrievalResults" | "outputText",
            "type": "Array | String"
        }
    ],
    "configuration": {
        "knowledgeBase": {
            "knowledgeBaseId": "string",
            "modelId": "string",
            "guardrailConfiguration": {
                "guardrailIdentifier": "string",
                "guardrailVersion": "string"
            }
        }
    }
}
```
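
As a sketch of how these fields interact, the following helper builds a knowledge base node definition and switches the output between the two modes described above. The node and input names, and the default expression, are illustrative:

```python
def knowledge_base_node(name, knowledge_base_id, model_id=None, input_expression="$.data"):
    """Build a knowledge base FlowNode definition.

    With a model_id, the node generates a response (outputText, String);
    without one, it returns the raw retrieved results (retrievalResults, Array).
    """
    config = {"knowledgeBaseId": knowledge_base_id}
    output = {"name": "retrievalResults", "type": "Array"}
    if model_id is not None:
        config["modelId"] = model_id
        output = {"name": "outputText", "type": "String"}
    return {
        "name": name,
        "type": "KnowledgeBase",
        "inputs": [
            {"name": "retrievalQuery", "type": "String", "expression": input_expression}
        ],
        "outputs": [output],
        "configuration": {"knowledgeBase": config},
    }
```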

### S3 storage node
<a name="flows-nodes-storage"></a>

An S3 storage node lets you store data in the flow to an Amazon S3 bucket. In the configuration, you specify the S3 bucket to use for data storage. The inputs into the node are the content to store and the [object key](https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-keys.html). The node returns the URI of the S3 location as its output.

The following shows the general structure of an S3 storage [FlowNode](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_FlowNode.html) object:

```
{
    "name": "string",
    "type": "Storage",
    "inputs": [
        {
            "name": "content",
            "type": "String | Number | Boolean | Object | Array",
            "expression": "string"
        },
        {
            "name": "objectKey",
            "type": "String",
            "expression": "string"
        }
    ],
    "outputs": [
        {
            "name": "s3Uri",
            "type": "String"
        }
    ],
    "configuration": {
        "storage": {
            "serviceConfiguration": {
                "s3": {
                    "bucketName": "string"
                }
            }
        }
    }
}
```

### S3 retrieval node
<a name="flows-nodes-retrieval"></a>

An S3 retrieval node lets you retrieve data from an Amazon S3 location to introduce to the flow. In the configuration, you specify the S3 bucket from which to retrieve data. The input into the node is the [object key](https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-keys.html). The node returns the content in the S3 location as the output.

**Note**  
Currently, the data in the S3 location must be a UTF-8 encoded string.

The following shows the general structure of an S3 retrieval [FlowNode](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_FlowNode.html) object:

```
{
    "name": "string",
    "type": "Retrieval",
    "inputs": [
        {
            "name": "objectKey",
            "type": "String",
            "expression": "string"
        }
    ],
    "outputs": [
        {
            "name": "s3Content",
            "type": "String"
        }
    ],
    "configuration": {
        "retrieval": {
            "serviceConfiguration": {
                "s3": {
                    "bucketName": "string"
                }
            }
        }
    }
}
```
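
The two S3 node types mirror each other, which the following sketch illustrates by building one definition of each; the bucket name, node names, and expressions are illustrative:

```python
def s3_storage_node(name, bucket, content_expr="$.data", key_expr="$.data"):
    """Build an S3 storage FlowNode that writes `content` under `objectKey`."""
    return {
        "name": name,
        "type": "Storage",
        "inputs": [
            {"name": "content", "type": "Object", "expression": content_expr},
            {"name": "objectKey", "type": "String", "expression": key_expr},
        ],
        "outputs": [{"name": "s3Uri", "type": "String"}],
        "configuration": {
            "storage": {"serviceConfiguration": {"s3": {"bucketName": bucket}}}
        },
    }

def s3_retrieval_node(name, bucket, key_expr="$.data"):
    """Build an S3 retrieval FlowNode that reads the object named by `objectKey`."""
    return {
        "name": name,
        "type": "Retrieval",
        "inputs": [{"name": "objectKey", "type": "String", "expression": key_expr}],
        "outputs": [{"name": "s3Content", "type": "String"}],
        "configuration": {
            "retrieval": {"serviceConfiguration": {"s3": {"bucketName": bucket}}}
        },
    }
```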

### Lambda function node
<a name="flows-nodes-lambda"></a>

A Lambda function node lets you call a Lambda function in which you can define code to carry out business logic. When you include a Lambda node in a flow, Amazon Bedrock sends an input event to the Lambda function that you specify.

In the configuration, specify the Amazon Resource Name (ARN) of the Lambda function. Define inputs to send in the Lambda input event. You can write code based on these inputs and define what the function returns. The function response is returned in the output.

The following shows the general structure of a Lambda function [FlowNode](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_FlowNode.html) object:

```
{
    "name": "string",
    "type": "LambdaFunction",
    "inputs": [
        {
            "name": "codeHookInput",
            "type": "String | Number | Boolean | Object | Array",
            "expression": "string"
        },
        ...
    ],
    "outputs": [
        {
            "name": "functionResponse",
            "type": "String | Number | Boolean | Object | Array"
        }
    ],
    "configuration": {
        "lambdaFunction": {
            "lambdaArn": "string"
        }
    }
}
```

#### Lambda input event for a flow
<a name="flows-nodes-lambda-input"></a>

The input event sent to a Lambda function in a Lambda node is of the following format:

```
{
   "messageVersion": "1.0",
   "flow": {
        "flowArn": "string",
        "flowAliasArn": "string"
   },
   "node": {
        "name": "string",
        "inputs": [
            {
               "name": "string",
               "type": "String | Number | Boolean | Object | Array",
               "expression": "string",
               "value": ...
            },
            ...
        ]
   }
}
```

The fields for each input match the fields that you specify when defining the Lambda node, while the `value` field is populated with the result of resolving the input's expression against the whole input into the node. For example, if the whole input into the node is `{"data": [1, 2, 3]}` and the expression is `$.data[1]`, the value sent in the input event to the Lambda function would be `2`.

For more information about events in Lambda, see [Lambda concepts](https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-concepts.html#gettingstarted-concepts-event) in the [AWS Lambda Developer Guide](https://docs.aws.amazon.com/lambda/latest/dg/).

#### Lambda response for a flow
<a name="flows-nodes-lambda-response"></a>

When you write a Lambda function, you define the response returned by it. This response is returned to your flow as the output of the Lambda node.
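
Putting the input event and response together, a minimal handler might look like the following sketch. The input name `codeHookInput` matches the node structure shown earlier, but the business logic and threshold are illustrative assumptions:

```python
def lambda_handler(event, context):
    """Hypothetical handler for a flow Lambda node.

    The flow delivers each node input in event["node"]["inputs"], with
    "value" already resolved from the input's expression. Whatever this
    function returns becomes the Lambda node's output in the flow.
    """
    inputs = {item["name"]: item["value"] for item in event["node"]["inputs"]}
    order_total = inputs.get("codeHookInput", 0)
    # Illustrative business logic: approve orders under a hypothetical limit.
    return {"approved": order_total < 100, "total": order_total}
```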

### Inline code node
<a name="flows-nodes-inline-code"></a>

An inline code node lets you write and execute code directly in your flow, enabling data transformations, custom logic, and integrations without using an external Lambda function. When you include an inline code node in your flow, Amazon Bedrock executes your code in an isolated, AWS managed environment that isn't shared with anyone and doesn't have internet access.

**Note**  
The inline code node is in preview release for Amazon Bedrock and is subject to change.

In the node configuration, specify the code to execute along with the programming language (`Python_3` is currently the only option). Define inputs that your code can access as variables. The result of the last executed line in your code is returned as the node output.

The following example shows the general structure of an inline code [FlowNode](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_FlowNode.html) object:

```
{
    "name": "string",
    "type": "InlineCode",
    "inputs": [{
            "name": "string",
            "type": "String | Number | Boolean | Object | Array",
            "expression": "string"
        },
        ...
    ],
    "outputs": [{
        "name": "response",
        "type": "String | Number | Boolean | Object | Array"
    }],
    "configuration": {
        "inlineCode": {
            "code": "string",
            "language": "Python_3"
        }
    }
}
```

#### Considerations when using inline code nodes
<a name="flows-nodes-inline-code-usage"></a>

When using inline code nodes in your flow, consider the following:

**Important**  
We recommend that you test your code before adding it to an inline code node.
+ Inline code nodes aren't supported in [asynchronous flow execution](flows-create-async.md).
+ Currently, the only programming language supported by inline code nodes is Python 3.12 (`Python_3`).
+ Inline code acts like an interactive Python session. Only the result of the last executed line is captured and returned as the node output.
+ Python console output (such as output from the `print` function) isn't captured.
+ Inputs for your inline code node are available as Python variables in your code. Use the exact name of the node input to reference them.
+ Configure the input and output types correctly to avoid runtime errors. You can configure up to five node inputs.
+ You can have up to five inline code nodes per flow.
+ You can have a maximum of 25 running inline code nodes per AWS account.
+ Your code can't exceed 5 MB.

#### Inline code node inputs
<a name="flows-nodes-inline-code-input"></a>

The inputs you define for an inline code node are available as Python variables in your code. For example, if you define an input named `userData`, you can access it directly in your code as `userData`.

The value of each input is populated based on the expression that you define. For example, if the input to the node is `{"name": "John", "age": 30}` and the expression is `$.name`, the value of the input variable would be `"John"`.

#### Inline code node output
<a name="flows-nodes-inline-code-output"></a>

The result of the last executed line in your code is returned as the output of the inline code node. This output is available to subsequent nodes in your flow.

For example, the following code returns a dictionary as the node output:

```
# Process input data
result = {"processed": True, "data": userData}

# The last line's result is returned as the node output
result
```

### Lex node
<a name="flows-nodes-lex"></a>

**Note**  
The Lex node relies on the Amazon Lex service, which might store and use customer content for the development and continuous improvement of other AWS services. As an AWS customer, you can opt out of having your content stored or used for service improvements. To learn how to implement an opt-out policy for Amazon Lex, see [AI services opt-out policies](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_ai-opt-out.html).

A Lex node lets you call an Amazon Lex bot to process an utterance with natural language processing and to identify an intent, based on the bot definition. For more information, see the [Amazon Lex Developer Guide](https://docs.aws.amazon.com/lex/latest/dg/).

In the configuration, specify the Amazon Resource Name (ARN) of the alias of the bot to use and the locale to use. The inputs into the node are the utterance and any accompanying [request attributes](https://docs.aws.amazon.com/lexv2/latest/dg/context-mgmt-request-attribs.html) or [session attributes](https://docs.aws.amazon.com/lexv2/latest/dg/context-mgmt-session-attribs.html). The node returns the identified intent as the output.

**Note**  
Currently, the Lex node doesn't support multi-turn conversations. One Lex node can only process one utterance.

The following shows the general structure of a Lex [FlowNode](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_FlowNode.html) object:

```
{
    "name": "string",
    "type": "Lex",
    "inputs": [
        {
            "name": "inputText",
            "type": "String | Number | Boolean | Object | Array",
            "expression": "string"
        },
        {
            "name": "requestAttributes",
            "type": "Object",
            "expression": "string"
        },
        {
            "name": "sessionAttributes",
            "type": "Object",
            "expression": "string"
        }
    ],
    "outputs": [
        {
            "name": "predictedIntent",
            "type": "String"
        }
    ],
    "configuration": {
        "lex": {
            "botAliasArn": "string",
            "localeId": "string"
        }
    }
}
```

## Summary tables for node types
<a name="flows-nodes-summary-table"></a>

The following tables summarize the inputs and outputs that are allowed for each node type. Note the following:
+ If a name is marked as **Any**, you can provide any string as the name. Otherwise, you must use the value specified in the table.
+ If a type is marked as **Any**, you can specify any of the following data types: String, Number, Boolean, Object, Array. Otherwise, you must use the type specified in the table.
+ You can define multiple inputs for the **Condition**, **Prompt**, **Lambda function**, and **Inline code** nodes.


**Logic node types**  
<a name="flows-nodes-logic-table"></a>[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/bedrock/latest/userguide/flows-nodes.html)


**Data processing node types**  
<a name="flows-nodes-data-table"></a>[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/bedrock/latest/userguide/flows-nodes.html)

# Supported Regions and models for flows
<a name="flows-supported"></a>

Amazon Bedrock Flows is supported in the following AWS Regions:
+ ap-northeast-1
+ ap-northeast-2
+ ap-northeast-3
+ ap-south-1
+ ap-south-2
+ ap-southeast-1
+ ap-southeast-2
+ ca-central-1
+ eu-central-1
+ eu-central-2
+ eu-north-1
+ eu-south-1
+ eu-south-2
+ eu-west-1
+ eu-west-2
+ eu-west-3
+ sa-east-1
+ us-east-1
+ us-east-2
+ us-gov-east-1
+ us-gov-west-1
+ us-west-2

The models that are supported in Amazon Bedrock Flows depend on the nodes that you use in the flow:
+ Prompt node – You can use Prompt management with any text model supported for the [Converse](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_Converse.html) API. For a list of supported models, see [Supported models and model features](conversation-inference-supported-models-features.md).
+ Agent node – For a list of supported models, see [Supported Regions for Amazon Bedrock Agents](agents-supported.md).
+ Knowledge base node – For a list of supported models, see [Supported models and Regions for Amazon Bedrock knowledge bases](knowledge-base-supported.md).

For a table of which models are supported in which Regions, see [Supported foundation models in Amazon Bedrock](models-supported.md).

# Prerequisites for Amazon Bedrock Flows
<a name="flows-prereq"></a>

Before creating a flow, review the following prerequisites and determine which ones you need to fulfill:

1. Define or create resources for one or more nodes you plan to add to your flow: 
   + For a prompt node – Create a prompt by using Prompt management. For more information, see [Construct and store reusable prompts with Prompt management in Amazon Bedrock](prompt-management.md). If you plan to define prompts inline when creating the node in the flow, you don't have to create a prompt in Prompt management.
   + For a knowledge base node – Create a knowledge base that you plan to use in the flow. For more information, see [Retrieve data and generate AI responses with Amazon Bedrock Knowledge Bases](knowledge-base.md).
   + For an agent node – Create an agent that you plan to use in the flow. For more information, see [Automate tasks in your application using AI agents](agents.md).
   + For an S3 storage node – Create an S3 bucket to store an output from a node in the flow.
   + For an S3 retrieval node – Create an S3 object in a bucket from which to retrieve data for the flow. The S3 object must be a UTF-8 encoded string.
   + For a Lambda node – Define an AWS Lambda function for the business logic that you plan to implement in the flow. For more information, see the [AWS Lambda Developer Guide](https://docs.aws.amazon.com/lambda/latest/dg/).
   + For an Amazon Lex node – Create an Amazon Lex bot to identify intents. For more information, see the [Amazon Lex Developer Guide](https://docs.aws.amazon.com/lex/latest/dg/).

1. To use flows, you must have two different roles:

   1. **User role** – The IAM role that you use to log into the AWS Management Console or to make API calls must have permissions to carry out flows-related actions.

      If your role has the [AmazonBedrockFullAccess](security-iam-awsmanpol.md#security-iam-awsmanpol-AmazonBedrockFullAccess) policy attached, you don't need to configure additional permissions for this role. To restrict a role's permissions to only actions that are used for flows, attach the following identity-based policy to the IAM role:

------
#### [ JSON ]


      ```
      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Sid": "FlowPermissions",
                  "Effect": "Allow",
                  "Action": [  
                      "bedrock:CreateFlow",
                      "bedrock:UpdateFlow",
                      "bedrock:GetFlow",
                      "bedrock:ListFlows", 
                      "bedrock:DeleteFlow",
                      "bedrock:ValidateFlowDefinition", 
                      "bedrock:CreateFlowVersion",
                      "bedrock:GetFlowVersion",
                      "bedrock:ListFlowVersions",
                      "bedrock:DeleteFlowVersion",
                      "bedrock:CreateFlowAlias",
                      "bedrock:UpdateFlowAlias",
                      "bedrock:GetFlowAlias",
                      "bedrock:ListFlowAliases",
                      "bedrock:DeleteFlowAlias",
                      "bedrock:InvokeFlow",
                      "bedrock:TagResource",
                      "bedrock:UntagResource", 
                      "bedrock:ListTagsForResource"
                  ],
                  "Resource": "*"
              }
          ]   
      }
      ```

------

      You can further restrict permissions by omitting [actions](security_iam_service-with-iam.md#security_iam_service-with-iam-id-based-policies-actions) or specifying [resources](security_iam_service-with-iam.md#security_iam_service-with-iam-id-based-policies-resources) and [condition keys](security_iam_service-with-iam.md#security_iam_service-with-iam-id-based-policies-conditionkeys). An IAM identity can call API operations on specific resources. If you specify an API operation that can't be used on the resource specified in the policy, Amazon Bedrock returns an error.
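
      For example, the following identity-based policy allows invoking only the aliases of a single flow. The Region, account ID, and flow ID shown are hypothetical placeholders:

      ```
      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Sid": "InvokeOneFlow",
                  "Effect": "Allow",
                  "Action": "bedrock:InvokeFlow",
                  "Resource": "arn:aws:bedrock:us-east-1:111122223333:flow/FLOW12345ID/alias/*"
              }
          ]
      }
      ```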

   1. **Service role** – A role that allows Amazon Bedrock to perform actions on your behalf. You must specify this role when creating or updating a flow. You can create a custom AWS Identity and Access Management (IAM) [service role](flows-permissions.md) to use with flows. For the definition of a service role, see [Roles terms and concepts](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_terms-and-concepts.html#iam-term-service-role) in the IAM User Guide.
**Note**  
If you plan to use the Amazon Bedrock console to automatically create a role when you create a flow, you don't need to manually set up this role.

# Create and design a flow in Amazon Bedrock
<a name="flows-create"></a>

In this section, you learn how to create and design flows with the Amazon Bedrock console. To help you get started, flows that you create with the console are configured to run with a single prompt node. This section also includes further examples and templates for creating different types of flows.

If you want to use the AWS SDK to create a flow, see [Run Amazon Bedrock Flows code samples](flows-code-ex.md).

**Topics**
+ [Create your first flow in Amazon Bedrock](flows-get-started.md)
+ [Design a flow in Amazon Bedrock](flows-design.md)
+ [Try example flows](flows-ex.md)
+ [Use a template to create an Amazon Bedrock flow](flows-templates.md)

# Create your first flow in Amazon Bedrock
<a name="flows-get-started"></a>

Whenever you create a flow, the Amazon Bedrock console creates a getting started flow for you. The flow includes a **Flow input** node, a **Prompt** node, and a **Flow output** node. When you run the flow, you enter a topic, which the prompt node summarizes. Before you can run the flow, you need to set the model for the prompt.

To create a flow, you provide a name and description for the flow. By default, Amazon Bedrock creates a service role with the proper permissions. Optionally, you can specify an existing service role.

Amazon Bedrock encrypts your data at rest. By default, Amazon Bedrock encrypts this data using an AWS managed key. Optionally, you can encrypt the flow execution data using a customer managed key. For more information, see [Encryption of Amazon Bedrock Flows resources](encryption-flows.md).

After you finish with the getting started flow, or if you don't need it, you can continue building your flow. We recommend that you read [How Amazon Bedrock Flows works](flows-how-it-works.md) to familiarize yourself with concepts and terms in Amazon Bedrock Flows and to learn about the types of nodes that are available to you. For more information, see [Design a flow in Amazon Bedrock](flows-design.md).

**To create your first flow**

1. Sign in to the AWS Management Console with an IAM identity that has permissions to use the Amazon Bedrock console. Then, open the Amazon Bedrock console at [https://console.aws.amazon.com/bedrock](https://console.aws.amazon.com/bedrock).

1. Select **Amazon Bedrock Flows** from the left navigation pane.

1. In the **Amazon Bedrock Flows** section, choose **Create flow**.

1. Enter a **Name** for the flow and an optional **Description**.

1. For the **Service role name**, choose one of the following options:
   + **Create and use a new service role** – Let Amazon Bedrock create a service role for you to use.
   + **Use an existing service role** – Select a custom service role that you set up previously. For more information, see [Create a service role for Amazon Bedrock Flows in Amazon Bedrock](flows-permissions.md).

1. (Optional) Encrypt your flow with a customer managed key by doing the following: 

   1. Select **Additional configurations**.

   1. In **KMS key selection**, select **Customize encryption settings (advanced)**. Then do one of the following in **Choose an AWS KMS key**:
      + To use an existing key, enter the ARN or find the key that you want to use. 
      + To create a new key, choose **Create an AWS KMS key** to open the AWS Key Management Service console and [create the key](https://docs.aws.amazon.com/kms/latest/developerguide/create-keys.html). When you create the key, note the ARN of the key. Back in the Amazon Bedrock console, enter the ARN of the key in **Choose an AWS KMS key**.

   For more information, see [Encryption of Amazon Bedrock Flows resources](encryption-flows.md).

1. Choose **Create**. Amazon Bedrock creates the getting started flow and takes you to the **flow builder**.

1. In the **flow builder** section, note that the center pane (canvas) displays a **Flow input** node, a **Prompt** node, and a **Flow output** node. The nodes are already connected together.

1. In the canvas, select the **Prompt** node.

1. In the flow builder pane, select the **Configurations** section.

1. Under **Node name**, make sure that **Define in node** is selected.

1. In **Select a model**, select a model to use.

1. Choose **Save** to save your flow.

1. In the **Test flow** pane on the right, enter a topic for the flow to summarize.

1. Choose **Run** to run the flow. The flow displays the summarized topic.

# Design a flow in Amazon Bedrock
<a name="flows-design"></a>

In this section you design an Amazon Bedrock flow. Before designing a flow, we recommend that you read [How Amazon Bedrock Flows works](flows-how-it-works.md) to familiarize yourself with concepts and terms in Amazon Bedrock Flows and to learn about the types of nodes that are available to you. For example flows that you can try, see [Try example flows](flows-ex.md).

**To build your flow**

1. If you're not already in the **flow builder**, do the following:

   1. Sign in to the AWS Management Console with an IAM identity that has permissions to use the Amazon Bedrock console. Then, open the Amazon Bedrock console at [https://console.aws.amazon.com/bedrock](https://console.aws.amazon.com/bedrock).

   1. Select **Amazon Bedrock Flows** from the left navigation pane. Then, choose a flow in the **Amazon Bedrock Flows** section.

   1. Choose **Edit in flow builder**.

1. In the **flow builder** section, the center pane displays a **Flow input** node and a **Flow output** node. These are the input and output nodes for your flow.

1. Do the following to add and configure nodes:

   1. In the **Flow builder** pane, select **Nodes**.

   1. Drag a node you want to use for the first step of your flow and drop it in the center pane.

   1. The circles on the nodes are connection points. To connect your flow input node to the second node, drag a line from the circle on the **Flow input** node to the circle in the **Input** section of the node you just added.

   1. Select the node you just added.

   1. In the **Configure** section of the **Flow builder** pane, provide the configurations for the selected node and define names, data types, and expressions for the inputs and outputs of the node.

   1. In the **Flow builder** pane, select **Nodes**.

   1. Repeat these steps to add and configure the remaining nodes in your flow.
**Note**  
If you use a service role that Amazon Bedrock automatically created for you, the role updates with the proper permissions as you add nodes. If you use a custom service role, however, you must add the proper permissions to the policy attached to your service role by referring to [Create a service role for Amazon Bedrock Flows in Amazon Bedrock](flows-permissions.md).

1. Connect the **Output** of the last node in your flow with the **Input** of the **Flow output** node. You can have multiple **Flow output** nodes. To add additional flow output nodes, drag the **Flow output** node and drop it next to the node where you want the flow to stop. Make sure to draw connections between the two nodes.

1. Continue to the next procedure to [Test a flow in Amazon Bedrock](flows-test.md) or come back later. To continue to the next step, choose **Save**. To come back later, choose **Save and exit**.

**Delete a node or a connection**

During the process of building your flow, you might need to delete a node or remove node connections.

**To delete a node**

1. Select a node you want to delete.

1. In the **Flow builder** pane, choose the delete icon (![\[Trash can icon.\]](http://docs.aws.amazon.com/bedrock/latest/userguide/images/icons/trash.png)).
**Note**  
If you use a service role that Amazon Bedrock automatically created for you, the role will update with the proper permissions as you add nodes. If you delete nodes, however, the relevant permissions won't be deleted. We recommend that you delete the permissions that you no longer need by following the steps at [Modifying a role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_manage_modify.html).

**To remove a connection**
+ In the **Flow builder** page, hover over the connection you want to remove until you see the expand icon and then drag the connection away from the node.

The following requirements apply to building a flow:
+ Your flow must have only one flow input node and at least one flow output node.
+ You can't include inputs for a flow input node.
+ You can't include outputs for a flow output node.
+ Every output in a node must be connected to an input in a downstream node (in the API, this is done through a [FlowConnection](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_FlowConnection.html) with a [FlowDataConnectionConfiguration](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_FlowDataConnectionConfiguration.html)).
+ Every condition (including the default one) in a condition node must be connected to a downstream node (in the API, this is done through a [FlowConnection](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_FlowConnection.html) with a [FlowConditionalConnectionConfiguration](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_FlowConditionalConnectionConfiguration.html)).

The following pointers apply to building a flow:
+ Begin by setting the data type for the output of the flow input node. This data type should match what you expect to send as the input when you invoke the flow.
+ When you define the inputs for a flow using expressions, check that the result matches the data type that you choose for the input.
+ If you include an iterator node, include a collector node downstream after you've sent the output through the nodes that you need. The collector node will return the outputs in an array.

# Try example flows
<a name="flows-ex"></a>

This topic provides some example flows that you can try out to get started with using Amazon Bedrock Flows. You can also use templates to create an initial flow. For more information, see [Use a template to create an Amazon Bedrock flow](flows-templates.md).

Expand an example to see how to build it in the Amazon Bedrock console:

**Topics**
+ [Create a flow with a single prompt](flows-ex-prompt.md)
+ [Create a flow with a condition node](flows-ex-condition.md)

# Create a flow with a single prompt
<a name="flows-ex-prompt"></a>

The following image shows a flow consisting of a single prompt, defined inline in the node. The prompt generates a playlist of songs from a JSON object input that includes the genre and the number of songs to include in the playlist. 

![\[Example of using a prompt node with two variables.\]](http://docs.aws.amazon.com/bedrock/latest/userguide/images/flows/flows-prompt.png)


**To build and test this flow in the console**

1. Create a flow by following the instructions at [Create your first flow in Amazon Bedrock](flows-get-started.md).

1. Set up the prompt node by doing the following:

   1. Select the **Prompt** node in the center pane.

   1. Select the **Configure** tab in the **Flow builder** pane.

   1. Enter **MakePlaylist** as the **Node name**.

   1. Choose **Define in node**.

   1. Set up the following configurations for the prompt:

      1. Under **Select model**, select a model to run inference on the prompt.

      1. In the **Message** text box, enter **Make me a {{genre}} playlist consisting of the following number of songs: {{number}}.** This creates two variables that will appear as inputs into the node.

      1. (Optional) Modify the **Inference configurations**. 

      1. (Optional) If supported by the model, you can configure prompt **Caching** for the prompt message. For more information, see [Create and design a flow in Amazon Bedrock](flows-create.md).

   1. Expand the **Inputs** section. The names for the inputs are prefilled by the variables in the prompt message. Configure the inputs as follows:  
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/bedrock/latest/userguide/flows-ex-prompt.html)

      This configuration means that the prompt node expects a JSON object containing a field called `genre` that will be mapped to the `genre` input and a field called `number` that will be mapped to the `number` input.

   1. You can't modify the **Output**. It will be the response from the model, returned as a string.

1. Choose the **Flow input** node and select the **Configure** tab. Select **Object** as the **Type**. This means that flow invocation will expect to receive a JSON object.

1. Connect your nodes to complete the flow by doing the following:

   1. Drag a connection from the output of the **Flow input** node to the **genre** input in the **MakePlaylist** prompt node.

   1. Drag a connection from the output of the **Flow input** node to the **number** input in the **MakePlaylist** prompt node.

   1. Drag a connection from the **modelCompletion** output in the **MakePlaylist** prompt node to the **document** input in the **Flow output** node.

1. Choose **Save** to save your flow. Your flow should now be prepared for testing.

1. Test your flow by entering the following JSON object in the **Test flow** pane on the right. Choose **Run**, and the flow should return a model response.

   ```
   {
       "genre": "pop",
       "number": 3
   }
   ```
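Outside the console, the same test can be run programmatically. The following Python sketch builds the `inputs` payload that the `InvokeFlow` API expects for this flow; the node name and the flow and alias IDs in the commented `boto3` call are placeholders, and the call itself requires AWS credentials, so it is shown only as a sketch:

```python
import json

def build_flow_inputs(genre: str, number: int) -> list:
    """Build the inputs payload for an InvokeFlow request.

    The flow input node emits a JSON object containing the two
    fields that the MakePlaylist prompt node maps to its variables.
    """
    return [{
        "nodeName": "FlowInputNode",   # assumed name of the flow input node
        "nodeOutputName": "document",  # the input node's single output
        "content": {
            "document": {"genre": genre, "number": number}
        },
    }]

inputs = build_flow_inputs("pop", 3)
print(json.dumps(inputs[0]["content"]["document"]))
# {"genre": "pop", "number": 3}

# With real flow and alias IDs, the payload could be sent like this
# (requires AWS credentials; the IDs below are placeholders):
#
# import boto3
# client = boto3.client("bedrock-agent-runtime")
# response = client.invoke_flow(
#     flowIdentifier="FLOWID",
#     flowAliasIdentifier="ALIASID",
#     inputs=inputs,
# )
```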

# Create a flow with a condition node
<a name="flows-ex-condition"></a>

The following image shows a flow with one condition node that returns one of three possible values, depending on which condition is fulfilled:

![\[Example of using a condition node with two conditions.\]](http://docs.aws.amazon.com/bedrock/latest/userguide/images/flows/flows-condition.png)


**To build and test this flow in the console:**

1. Create a flow by following the instructions at [Create your first flow in Amazon Bedrock](flows-get-started.md).

1. Delete the **Prompt** node in the center pane.

1. Set up the condition node by doing the following:

   1. From the **Flow builder** left pane, select the **Nodes** tab.

   1. Drag a **Condition** node into your flow in the center pane.

   1. Select the **Configure** tab in the **Flow builder** pane.

   1. Expand the **Inputs** section. Configure the inputs as follows:  
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/bedrock/latest/userguide/flows-ex-condition.html)

      This configuration means that the condition node expects a JSON object that contains the fields `retailPrice`, `marketPrice`, and `type`.

   1. Configure the conditions by doing the following:

      1. In the **Conditions** section, optionally change the name of the condition. Then add the following condition in the **Condition** text box: **(retailPrice > 10) and (type == "produce")**.

      1. Add a second condition by choosing **Add condition**. Optionally change the name of the second condition. Then add the following condition in the **Condition** text box: **(retailPrice < marketPrice)**.

1. Choose the **Flow input** node and select the **Configure** tab. Select **Object** as the **Type**. This means that flow invocation will expect to receive a JSON object.

1. Add flow output nodes so that you have three in total. Configure them as follows in the **Configure** tab of the **Flow builder** pane of each flow output node:

   1. Set the input type of the first flow output node as **String** and the expression as **$.data.action[0]** to return the first value in the array in the `action` field of the incoming object.

   1. Set the input type of the second flow output node as **String** and the expression as **$.data.action[1]** to return the second value in the array in the `action` field of the incoming object.

   1. Set the input type of the third flow output node as **String** and the expression as **$.data.action[2]** to return the third value in the array in the `action` field of the incoming object.

1. Connect the first condition to the first flow output node, the second condition to the second flow output node, and the default condition to the third flow output node.

1. Connect the inputs and outputs in all the nodes to complete the flow by doing the following:

   1. Drag a connection from the output node of the **Flow input** node to the **retailPrice** input in the condition node.

   1. Drag a connection from the output node of the **Flow input** node to the **marketPrice** input in the condition node.

   1. Drag a connection from the output node of the **Flow input** node to the **type** input in the condition node.

   1. Drag a connection from the output of the **Flow input** node to the **document** input in each of the three output nodes.

1. Choose **Save** to save your flow. Your flow should now be prepared for testing.

1. Test your flow by entering the following JSON objects in the **Test flow** pane on the right. Choose **Run** for each input:

   1. The following object fulfills the first condition (the `retailPrice` is more than 10 and the `type` is "produce") and returns the first value in `action` ("don't buy"):

      ```
      {
          "retailPrice": 11, 
          "marketPrice": 12, 
          "type": "produce", 
          "action": ["don't buy", "buy", "undecided"]
      }
      ```
**Note**  
Even though both the first and second conditions are fulfilled, the first condition takes precedence since it comes first.

   1. The following object fulfills the second condition (the `retailPrice` is less than the `marketPrice`) and returns the second value in `action` ("buy"):

      ```
      {
          "retailPrice": 11, 
          "marketPrice": 12, 
          "type": "meat", 
          "action": ["don't buy", "buy", "undecided"]
      }
      ```

   1. The following object fulfills neither the first condition (the `retailPrice` is more than 10, but the `type` is not "produce") nor the second condition (the `retailPrice` isn't less than the `marketPrice`), so the third value in `action` ("undecided") is returned:

      ```
      {
          "retailPrice": 11, 
          "marketPrice": 11, 
          "type": "meat", 
          "action": ["don't buy", "buy", "undecided"]
      }
      ```
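The branch ordering above can be reproduced locally. The following Python sketch is only an illustration of the precedence rule, not the service's actual condition evaluator: it checks the two conditions in their defined order and falls back to the default branch.

```python
def route(item: dict) -> str:
    """Return the action chosen by the condition node's branch order."""
    conditions = [
        lambda d: d["retailPrice"] > 10 and d["type"] == "produce",  # condition 1
        lambda d: d["retailPrice"] < d["marketPrice"],               # condition 2
    ]
    for index, condition in enumerate(conditions):
        if condition(item):            # the first satisfied condition wins
            return item["action"][index]
    return item["action"][len(conditions)]  # default branch

item = {"retailPrice": 11, "marketPrice": 12, "type": "produce",
        "action": ["don't buy", "buy", "undecided"]}
print(route(item))  # both conditions hold, but the first takes precedence: don't buy
```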

# Use a template to create an Amazon Bedrock flow
<a name="flows-templates"></a>

To help you get started with defining and orchestrating Amazon Bedrock Flows, you can use templates to create flows for a variety of flow configurations. For example, you can use a template to see a flow that includes a knowledge base or a flow that uses conditions to direct flow logic. 

You access the templates from the [Amazon Bedrock Flows Samples](https://github.com/aws-samples/amazon-bedrock-flows-samples?tab=readme-ov-file) GitHub repository. The Amazon Bedrock console also provides a link to the repository from the canvas page for a flow. 

The flow templates are provided as [JSON templates](https://github.com/aws-samples/amazon-bedrock-flows-samples/tree/main/templates) for each supported flow definition and a Python script that you use to create and run the flow. You can also access the flow from the Amazon Bedrock console.

The repository provides the following templates:
+  [Knowledge base flow](https://github.com/aws-samples/amazon-bedrock-flows-samples?tab=readme-ov-file#1-knowledgebase-flow-1) – Shows how to integrate and query a knowledge base, including Retrieval Augmented Generation (RAG) and knowledge base search and retrieval.
+  [Multi-turn Conversation agent flow](https://github.com/aws-samples/amazon-bedrock-flows-samples?tab=readme-ov-file#2-multi-turn-conversation-agent-flow-1) – Shows how to perform interactive, stateful conversations with a flow. For more information, see [Converse with an Amazon Bedrock flow](flows-multi-turn-invocation.md).
+  [Conditions Flow](https://github.com/aws-samples/amazon-bedrock-flows-samples?tab=readme-ov-file#3-conditions-flow-1) – Shows how to perform conditional logic and branching within a flow. 
+ [ Prompt Node with Guardrail Flow](https://github.com/aws-samples/amazon-bedrock-flows-samples?tab=readme-ov-file#4-prompt-node-with-guardrail-flow-1) – Shows how to safeguard a prompt node with a guardrail.
+  [Iterator and Collector Flow](https://github.com/aws-samples/amazon-bedrock-flows-samples?tab=readme-ov-file#5-iterator--collector-flow-1) – Shows how to process multiple inputs and aggregate responses.
+  [Multi-agent flow](https://github.com/aws-samples/amazon-bedrock-flows-samples?tab=readme-ov-file#5-iterator--collector-flow-1) – Shows various agent-based workflows, including multi-agent collaboration and task delegation.

Before you can run the script, you need to create the Amazon Bedrock resources, such as a knowledge base or agent, that the flow uses. It is your responsibility to delete these resources when you no longer need them. 

To create and run a flow from a template, you run the script (`flow_manager.py`). The script prompts for any additional information that it needs, such as the flow template you want to use and identifiers for resources that the template needs. You can include a test prompt to try with the flow.

Optionally, you can set the AWS Region that you want the flow to be created in. The script creates the necessary resources with a default set of [IAM role permissions](https://github.com/aws-samples/amazon-bedrock-flows-samples?tab=readme-ov-file#iam-role-permissions). You can also choose to use an IAM role that you create.

If you want to use the flow in the Amazon Bedrock console, don't use the `--cleanup` parameter, as this deletes the flow after the script runs it. If you don't use `--cleanup`, you will have to delete the flow when you no longer need it. 

For more information, see [https://github.com/aws-samples/amazon-bedrock-flows-samples?tab=readme-ov-file#how-to-use](https://github.com/aws-samples/amazon-bedrock-flows-samples?tab=readme-ov-file#how-to-use).



# View information about flows in Amazon Bedrock
<a name="flows-view"></a>

To learn how to view information about a flow, choose the tab for your preferred method, and then follow the steps:

------
#### [ Console ]

**To view the details of a flow**

1. Sign in to the AWS Management Console with an IAM identity that has permissions to use the Amazon Bedrock console. Then, open the Amazon Bedrock console at [https://console.aws.amazon.com/bedrock](https://console.aws.amazon.com/bedrock).

1. Select **Amazon Bedrock Flows** from the left navigation pane. Then, in the **Amazon Bedrock Flows** section, select a flow.

1. View the details of the flow in the **Flow details** pane.

------
#### [ API ]

To get information about a flow, send a [GetFlow](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_GetFlow.html) request with an [Agents for Amazon Bedrock build-time endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#bra-bt) and specify the ARN or ID of the flow as the `flowIdentifier`.

To list information about your flows, send a [ListFlows](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_ListFlows.html) request with an [Agents for Amazon Bedrock build-time endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#bra-bt). You can specify the following optional parameters:



| Field | Short description | 
| --- | --- | 
| maxResults | The maximum number of results to return in a response. | 
| nextToken | If there are more results than the number you specified in the maxResults field, the response returns a nextToken value. To see the next batch of results, send the nextToken value in another request. | 
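The two parameters combine into the standard pagination loop. The following Python sketch demonstrates the `maxResults`/`nextToken` contract with a stubbed fetch function standing in for a real `ListFlows` call; the stub and its sample data are illustrative only.

```python
def list_all(fetch, max_results: int = 2) -> list:
    """Collect every flow summary by following nextToken, as with ListFlows.

    `fetch` stands in for the ListFlows call; it accepts maxResults and an
    optional nextToken and returns a dict shaped like the API response.
    """
    flows, token = [], None
    while True:
        page = fetch(maxResults=max_results, nextToken=token)
        flows.extend(page["flowSummaries"])
        token = page.get("nextToken")
        if not token:              # no token means there are no more pages
            return flows

# A stub standing in for the service (data is illustrative only):
DATA = [{"name": f"flow-{i}"} for i in range(5)]

def fake_fetch(maxResults, nextToken=None):
    start = int(nextToken or 0)
    end = start + maxResults
    page = {"flowSummaries": DATA[start:end]}
    if end < len(DATA):            # more results remain, so return a token
        page["nextToken"] = str(end)
    return page

print(len(list_all(fake_fetch)))  # 5
```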

------

# Modify a flow in Amazon Bedrock
<a name="flows-modify"></a>

To learn how to modify a flow, choose the tab for your preferred method, and then follow the steps:

------
#### [ Console ]

**To modify the details of a flow**

1. Sign in to the AWS Management Console with an IAM identity that has permissions to use the Amazon Bedrock console. Then, open the Amazon Bedrock console at [https://console.aws.amazon.com/bedrock](https://console.aws.amazon.com/bedrock).

1. Select **Amazon Bedrock Flows** from the left navigation pane. Then, in the **Amazon Bedrock Flows** section, select a flow.

1. In the **Flow details** section, choose **Edit**. 

1. You can edit the name and description, and you can associate a different service role with the flow.

1. Select **Save changes**.

**To modify a flow**

1. If you're not already in the **flow builder**, do the following:

   1. Sign in to the AWS Management Console with an IAM identity that has permissions to use the Amazon Bedrock console. Then, open the Amazon Bedrock console at [https://console.aws.amazon.com/bedrock](https://console.aws.amazon.com/bedrock).

   1. Select **Amazon Bedrock Flows** from the left navigation pane. Then, choose a flow in the **Amazon Bedrock Flows** section.

   1. Choose **Edit in flow builder**.

1. Add, remove, and modify nodes and connections as necessary. For more information, refer to [Create and design a flow in Amazon Bedrock](flows-create.md) and [Node types for your flow](flows-nodes.md).

1. When you're done modifying your flow, choose **Save** or **Save and exit**.

------
#### [ API ]

To edit a flow, send an [UpdateFlow](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_UpdateFlow.html) request with an [Agents for Amazon Bedrock build-time endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#bra-bt). Include both fields that you want to maintain and fields that you want to change. For considerations on the fields in the request, see [Create and design a flow in Amazon Bedrock](flows-create.md).

------

# Include guardrails in your flow in Amazon Bedrock
<a name="flows-guardrails"></a>

Amazon Bedrock Flows integrates with Amazon Bedrock Guardrails to let you identify and block or filter unwanted content in your flow. To learn how to apply guardrails to supported node types in a flow, see the following table:



| Node type | Console | API | 
| --- | --- | --- | 
| Prompt node | When you [create](flows-create.md) or [update](flows-modify.md) a flow, select the prompt node and specify the guardrail in the Configure section. | When you define the prompt node in the nodes field in a [CreateFlow](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_CreateFlow.html) or [UpdateFlow](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_UpdateFlow.html) request, include a guardrailConfiguration field in the [PromptFlowNodeConfiguration](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_PromptFlowNodeConfiguration.html). | 
| Knowledge base node | When you [create](flows-create.md) or [update](flows-modify.md) a flow, select the knowledge base node and specify the guardrail in the Configure section. You can only include a guardrail when generating responses based on retrieved results. | When you define the knowledge base node in the nodes field in a [CreateFlow](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_CreateFlow.html) or [UpdateFlow](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_UpdateFlow.html) request, include a guardrailConfiguration field in the [KnowledgeBaseFlowNodeConfiguration](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_KnowledgeBaseFlowNodeConfiguration.html). You can only include a guardrail when using RetrieveAndGenerate so you must include a modelId. | 

For more information about guardrails, see [Detect and filter harmful content by using Amazon Bedrock Guardrails](guardrails.md).

For more information about node types, see [Node types for your flow](flows-nodes.md).

# Test a flow in Amazon Bedrock
<a name="flows-test"></a>

After you’ve created a flow, you will have a *working draft*. The working draft is a version of the flow that you can iteratively build and test. Each time you make changes to your flow, the working draft is updated.

When you test your flow, Amazon Bedrock first verifies the following and throws an exception if the verification fails:
+ Connectivity between all flow nodes.
+ At least one flow output node is configured.
+ Input and output variable types are matched as required.
+ Condition expressions are valid and a default outcome is provided.

If the verification fails, you'll need to fix the errors before you can test and validate the performance of your flow. To test your flow, choose the tab for your preferred method, and then follow the steps:

------
#### [ Console ]

**To test your flow**

1. If you're not already in the **Flow builder**, do the following:

   1. Sign in to the AWS Management Console with an IAM identity that has permissions to use the Amazon Bedrock console. Then, open the Amazon Bedrock console at [https://console.aws.amazon.com/bedrock](https://console.aws.amazon.com/bedrock).

   1. Select **Amazon Bedrock Flows** from the left navigation pane. Then, in the **Amazon Bedrock Flows** section, select a flow you want to test.

   1. Choose **Edit in flow builder**.

1. On the **Flow builder** page, in the right pane, enter an input to invoke your flow. Check that the input data type matches the output data type that you configured for the flow input node.

1. Choose **Run**.

1. Nodes or connections in the flow configuration that trigger errors are highlighted in red, and ones that trigger warnings are highlighted in yellow. Read the error messages and warnings, fix the identified issues, save the flow, and run your test again.
**Note**  
You must save the flow for the changes you made to be applied when you test the flow.

1. (Optional) To view the inputs, outputs, and execution duration for each node, choose **Show trace** in the response. For more information, see [Track each step in your flow by viewing its trace in Amazon Bedrock](flows-trace.md). To return to the visual builder, choose **Hide trace** or select the collapse icon.

1. After you are satisfied with your flow performance, choose **Save and exit**.

1. You can continue to iterate on building your flow. When you're satisfied with it and are ready to deploy it to production, create a version of the flow and an alias to point to the version. For more information, see [Deploy a flow to your application using versions and aliases](flows-deploy.md).

------
#### [ API ]

To test your flow, send an [InvokeFlow](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent-runtime_InvokeFlow.html) request with an [Agents for Amazon Bedrock runtime endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#bra-rt). Include the ARN or ID of the flow in the `flowIdentifier` field and the ARN or ID of the alias to use in the `flowAliasIdentifier` field.

To view the inputs and outputs for each node, set the `enableTrace` field to `TRUE`. For more information, see [Track each step in your flow by viewing its trace in Amazon Bedrock](flows-trace.md).

The request body specifies the input for the flow and is of the following format:

```
{
   "inputs": [ 
      { 
         "content": { 
            "document": "JSON-formatted string"
         },
         "nodeName": "string",
         "nodeOutputName": "string"
      }
   ],
   "enableTrace": TRUE | FALSE
}
```

Provide the input in the `document` field, the name of the flow input node in the `nodeName` field, and the name of that node's output in the `nodeOutputName` field.

The response is returned in a stream. Each event returned contains output from a node in the `document` field, the node that was processed in the `nodeName` field, and the type of node in the `nodeType` field. These events are of the following format:

```
{
    "flowOutputEvent": { 
        "content": { 
            "document": "JSON-formatted string"
        },
        "nodeName": "string",
        "nodeType": "string"
    }
}
```

If the flow finishes, a `flowCompletionEvent` field with the `completionReason` is also returned. If there's an error, the corresponding error field is returned.
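As a sketch of consuming that stream, the following Python function collects the output documents and the completion reason from a sequence of such events. The sample events here are hand-written in the documented shape, not actual service output:

```python
def collect_outputs(events):
    """Gather node outputs and the completion reason from InvokeFlow events."""
    outputs, completion = [], None
    for event in events:
        if "flowOutputEvent" in event:
            payload = event["flowOutputEvent"]
            outputs.append((payload["nodeName"], payload["content"]["document"]))
        elif "flowCompletionEvent" in event:
            completion = event["flowCompletionEvent"]["completionReason"]
    return outputs, completion

# Hand-written sample events in the documented shape:
stream = [
    {"flowOutputEvent": {"content": {"document": "a pop playlist"},
                         "nodeName": "FlowOutputNode",
                         "nodeType": "FlowOutputNode"}},
    {"flowCompletionEvent": {"completionReason": "SUCCESS"}},
]
outputs, reason = collect_outputs(stream)
print(outputs, reason)
```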

------

# Track each step in your flow by viewing its trace in Amazon Bedrock
<a name="flows-trace"></a>

When you invoke a flow, you can view the *trace* to see the inputs to and outputs from each node. The trace helps you track the path from the input to the response that it ultimately returns. You can use the trace to troubleshoot errors that occur, to identify steps that lead to an unexpected outcome or performance bottleneck, and to consider ways in which you can improve the flow.

To view the trace, do the following:
+ In the console, follow the steps in the **Console** tab at [Test a flow in Amazon Bedrock](flows-test.md) and choose **Show trace** in the response from flow invocation.
+ In the API, set the `enableTrace` field to `true` in an [InvokeFlow](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent-runtime_InvokeFlow.html) request. Each `flowOutputEvent` in the response is returned alongside a `flowTraceEvent`.

Each trace event includes the name of the node that either received an input or yielded an output, and the date and time at which the input or output was processed. Select a tab to learn more about a type of trace event:

------
#### [ FlowTraceConditionNodeResultEvent ]

This type of trace identifies which conditions are satisfied for a condition node and helps you identify the branch or branches of the flow that are activated during the invocation. The following JSON object shows what a [FlowTraceEvent](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_FlowTraceEvent.html) looks like for the result of a condition node:

```
{
    "trace": {
        "conditionNodeOutputTrace": {
            "nodeName": "string",
            "satisfiedConditions": [
                {
                    "conditionName": "string"
                },
                ...
            ],
            "timestamp": timestamp
        }
    }
}
```

------
#### [ FlowTraceNodeInputEvent ]

This type of trace displays the input that was sent to a node. If the event is downstream from an iterator node but upstream from a collector node, the `iterationIndex` field indicates the index of the item in the array that the input is from. The following JSON object shows what a [FlowTraceEvent](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_FlowTraceEvent.html) looks like for the input into a node.

```
{
    "trace": {
        "nodeInputTrace": {
            "fields": [
                {
                    "content": {
                        "document": JSON object
                    },
                    "nodeInputName": "string"
                },
                ...
            ],
            "nodeName": "string",
            "timestamp": timestamp,
            "iterationIndex": int
        }
    }
}
```

------
#### [ FlowTraceNodeOutputEvent ]

This type of trace displays the output that was produced by a node. If the event is downstream from an iterator node but upstream from a collector node, the `iterationIndex` field indicates the index of the item in the array that the output is from. The following JSON object shows what a [FlowTraceEvent](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_FlowTraceEvent.html) looks like for the output from a node.

```
{
    "trace": {
        "nodeOutputTrace": {
            "fields": [
                {
                    "content": {
                        "document": JSON object
                    },
                    "nodeOutputName": "string"
                },
                ...
            ],
            "nodeName": "string",
            "timestamp": timestamp,
            "iterationIndex": int
        }
    }
}
```

------

# Run Amazon Bedrock flows asynchronously with flow executions
<a name="flows-create-async"></a>

With flow executions, you can run Amazon Bedrock flows asynchronously. This lets your flows run for longer durations and also yield control so that your application can perform other tasks.

When you run a flow by using the Amazon Bedrock console or with the [InvokeFlow](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent-runtime_InvokeFlow.html) operation, the flow runs until it finishes or times out at one hour (whichever is first). When you run a flow execution, your flow can run much longer: Individual nodes can run up to five minutes, and your entire flow can run for up to 24 hours.

**Note**  
The flow executions feature is in preview release for Amazon Bedrock and is subject to change.

## Required permissions for running flow executions
<a name="flows-create-async-permissions"></a>
+ Make sure that your Amazon Bedrock Flows service role has all necessary permissions. For more information, see [Create a service role for Amazon Bedrock Flows in Amazon Bedrock](flows-permissions.md).
+ (Optional) Encrypt your flow execution data with a customer managed AWS KMS key. For more information, see [Encryption of Amazon Bedrock Flows resources](encryption-flows.md).

## Create and manage a flow execution
<a name="flows-create-async-how-to"></a>

You can create a flow execution in the console or by using the [StartFlowExecution](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent-runtime_StartFlowExecution.html) operation.

------
#### [ Console ]

1. Create a flow by following the instructions at [Create and design a flow in Amazon Bedrock](flows-create.md).

1. Create an alias for the flow by following the instructions at [Create an alias of a flow in Amazon Bedrock](flows-alias-create.md).

1. If you're not already in the **Flow builder**, do the following:

   1. Sign in to the AWS Management Console with an IAM identity that has permissions to use the Amazon Bedrock console. Then, open the Amazon Bedrock console at [https://console.aws.amazon.com/bedrock](https://console.aws.amazon.com/bedrock).

   1. Select **Amazon Bedrock Flows** from the left navigation pane, and then choose your flow.

1. Choose the **Executions** tab, then choose **Create execution**. 

1. In the **Create execution** dialog, enter the following:

   1. For **Name**, enter a name for the flow execution. 

   1. For **Select alias**, choose the alias of the flow that you want to use.

   1. For **Prompt input**, enter the prompt that you want to start the flow with.

   1. Choose **Create** to create the flow execution and start running it.

1. On the flow details page, choose the **Executions** tab and take note of the flow execution's status in **Execution status**.

1. (Optional) Choose an execution to open the flow and see the execution summary.

   In **Execution output**, you see the output from the flow.

1. (Optional) To stop a flow execution, select the execution and choose **Stop**.

------
#### [ API ]

**Start a flow execution**  
To run a flow execution, send a [StartFlowExecution](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent-runtime_StartFlowExecution.html) request with an [Agents for Amazon Bedrock runtime endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#bra-rt). In the request, specify the flow ID and flow alias ID of the flow that you want to run. You can also specify the following:
+ **inputs** – An array containing the [input](flows-nodes.md#flows-nodes-input) node that you want the flow to start running from. Specify the content to send to the flow input node in the `content` field.
+ **name** – A name for the flow execution.

```
{
    "inputs": [{
        "nodeName": "FlowInputNode",
        "nodeOutputName": "document",
        "content": {
            "document": "Test"
        }
    }],
    "name": "MyExecution"
}
```

The response is the Amazon Resource Name (ARN) of the flow execution. You can use the `executionArn` to poll for the current state of the flow, such as when the flow execution finishes or a condition node evaluates its conditions.

```
{
      "executionArn": "arn:aws:bedrock:us-west-2:111122223333:flow/FLOWID/alias/TSTALIASID/execution/MyExecution"
}
```

**Track the progress of a flow execution**  
Use the [GetFlowExecution](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent-runtime_GetFlowExecution.html) operation to get the current status of a flow that you identify by its execution ARN. A flow status is either `Running`, `Succeeded`, `Failed`, `TimedOut`, or `Aborted`.

```
{
      "endedAt": null,
      "errors": null,
      "executionArn": "arn:aws:bedrock:us-west-2:111122223333:flow/FLOWID/alias/TSTALIASID/execution/MyExecution",
      "flowAliasIdentifier": "TSTALIASID",
      "flowIdentifier": "FLOWID",
      "flowVersion": "DRAFT",
      "startedAt": "2025-03-20T23:32:28.899221162Z",
      "status": "Running"
}
```

Errors (such as a Lambda node that times out) are returned in the `errors` array like the following example:

```
"errors": [{
    "nodeName": "LambdaNode1",
    "errorType": "ExecutionTimedOut",
    "message": "Call to lambda function timed out"
}],
```
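Callers typically poll this status until the execution leaves the `Running` state. The following Python sketch shows such a loop with a stub standing in for a real `GetFlowExecution` call (no AWS API is invoked):

```python
import time

# Terminal statuses from the GetFlowExecution documentation
TERMINAL = {"Succeeded", "Failed", "TimedOut", "Aborted"}

def wait_for_execution(get_status, interval: float = 0.0) -> str:
    """Poll until the execution reaches a terminal status and return it."""
    while True:
        status = get_status()
        if status in TERMINAL:
            return status
        time.sleep(interval)   # back off between polls

# Stub that reports Running twice, then Succeeded:
statuses = iter(["Running", "Running", "Succeeded"])
print(wait_for_execution(lambda: next(statuses)))  # Succeeded
```

In a real application, `get_status` would call `GetFlowExecution` with the execution ARN and return the `status` field, and `interval` would be a few seconds rather than zero.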

**Get the results of a flow execution**  
Amazon Bedrock writes the outputs of a flow to the flow's [output](flows-nodes.md#flows-nodes-output) nodes. You can get the outputs once the flow completes or while the flow is running (depending on your use case).

If you want the flow to complete first, make a call to `GetFlowExecution` and make sure that the value of the `status` field in the response is `Succeeded`.

To get a list of output events from the flow execution, make a call to [ListFlowExecutionEvents](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent-runtime_ListFlowExecutionEvents.html). In the response, check for `flowOutputEvent` objects in `flowExecutionEvents`. For example, you can get a flow's output in the `content` field:

```
{
      "flowOutputEvent": {
        "content": {
          "document": "The model response."
        },
        "nodeName": "FlowOutputNode"
      }
}
```

You can limit the output from `ListFlowExecutionEvents` to just events from flow input and output nodes by setting the `eventType` query parameter to `Flow`.

**View events**  
To help debug your flow execution, you can use the [ListFlowExecutionEvents](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent-runtime_ListFlowExecutionEvents.html) operation to view events that nodes generate while the flow is running. Set the `eventType` query parameter to `Node` to see the inputs and outputs of all nodes (including intermediate nodes) in the response that's similar to the following example:

```
{
    "flowExecutionEvents": [{
            "nodeOutputEvent": {
                "fields": [{
                    "content": {
                        "document": "History book"
                    },
                    "name": "document"
                }],
                "nodeName": "FlowInputNode",
                "timestamp": "2025-05-05T18:38:56.637867516Z"
            }
        },
        {
            "nodeInputEvent": {
                "fields": [{
                    "content": {
                        "document": "History book"
                    },
                    "name": "book"
                }],
                "nodeName": "Prompt_1",
                "timestamp": "2025-05-05T18:38:57.434600163Z"
            }
        },
        {
            "nodeOutputEvent": {
                "fields": [{
                    "content": {
                        "document": "Here's a summary of the history book."
                    },
                    "name": "modelCompletion"
                }],
                "nodeName": "Prompt_1",
                "timestamp": "2025-05-05T18:39:06.034157077Z"
            }
        },
        {
            "nodeInputEvent": {
                "fields": [{
                    "content": {
                        "document": "Here's a summary of the history book."
                    },
                    "name": "document"
                }],
                "nodeName": "FlowOutputNode",
                "timestamp": "2025-05-05T18:39:06.453128251Z"
            }
        }
    ]
}
```
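
When processing these events in code, it can help to flatten the node input and output events into a simple list. The following is a minimal sketch; the commented API call assumes a recent version of boto3 that includes the flow execution operations on the `bedrock-agent-runtime` client, and the identifiers are placeholders.

```python
def summarize_node_events(events):
    """Flatten node input/output events into (node, direction, field, value) tuples."""
    rows = []
    for event in events:
        for direction in ("nodeInputEvent", "nodeOutputEvent"):
            if direction in event:
                node_event = event[direction]
                for field in node_event.get("fields", []):
                    rows.append((node_event["nodeName"], direction,
                                 field["name"], field["content"]["document"]))
    return rows

# Hedged sketch of the call itself (identifiers are placeholders):
# client = boto3.client("bedrock-agent-runtime")
# response = client.list_flow_execution_events(
#     flowIdentifier="FLOWID",
#     flowAliasIdentifier="TSTALIASID",
#     executionIdentifier="MyExecution",
#     eventType="Node")
# for row in summarize_node_events(response["flowExecutionEvents"]):
#     print(row)
```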

**Get a snapshot of your flow execution**  
Amazon Bedrock automatically takes a snapshot of a flow definition and metadata when a flow execution starts. This is helpful because a flow can be updated while an execution is running asynchronously. To retrieve this snapshot, call the [GetExecutionFlowSnapshot](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent-runtime_GetExecutionFlowSnapshot.html) operation. The response includes the following flow fields:
+ **customerEncryptionKeyArn** – The ARN of the AWS KMS key that encrypts the flow.
+ **definition** – The [definition](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_FlowDefinition.html) of the flow.
+ **executionRoleArn** – The ARN of the IAM service role that's used by the flow execution.
+ **flowAliasIdentifier** – The flow's alias ID.
+ **flowIdentifier** – The flow's ID.
+ **flowVersion** – The flow's version.

```
{
      "customerEncryptionKeyArn": null,
      "definition": "{flow-definition}",
      "executionRoleArn": "arn:aws:iam::111122223333:role/name",
      "flowAliasIdentifier": "TSTALIASID",
      "flowIdentifier": "FLOWID",
      "flowVersion": "DRAFT"
}
```
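
Because the `definition` field is returned as a JSON string, parse it before inspecting the flow. The following sketch pulls out the fields that are most useful for debugging; the commented API call assumes a recent boto3 `bedrock-agent-runtime` client, and the identifiers are placeholders.

```python
import json

def snapshot_summary(snapshot):
    """Extract debugging-relevant fields from a GetExecutionFlowSnapshot response."""
    definition = json.loads(snapshot["definition"])
    return {
        "flowVersion": snapshot["flowVersion"],
        "executionRoleArn": snapshot["executionRoleArn"],
        "nodeNames": [node["name"] for node in definition.get("nodes", [])],
    }

# client = boto3.client("bedrock-agent-runtime")
# snapshot = client.get_execution_flow_snapshot(
#     flowIdentifier="FLOWID",
#     flowAliasIdentifier="TSTALIASID",
#     executionIdentifier="MyExecution")
# print(snapshot_summary(snapshot))
```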

**List your flow executions**  
You can get a list of your flow executions by calling the [ListFlowExecutions](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent-runtime_ListFlowExecutions.html) operation. The response includes a `flowExecutionSummaries` array with information about each of your flow executions in the current AWS Region for a flow or flow alias. Each element includes information such as the execution ARN, the start time, and the current status of the flow.

```
{
    "flowExecutionSummaries": [{
        "createdAt": "2025-03-11T23:21:02.875598966Z",
        "endedAt": null,
        "executionArn": "arn:aws:bedrock:us-west-2:111122223333:flow/FLOWID/alias/TSTALIASID/execution/MyExecution",
        "flowAliasIdentifier": "TSTALIASID",
        "flowIdentifier": "FLOWID",
        "flowVersion": "DRAFT",
        "status": "Running"
    }]
}
```
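
For example, you might want to find the executions that are still in progress. A minimal sketch (the commented API call assumes a recent boto3 `bedrock-agent-runtime` client; the flow ID is a placeholder):

```python
def running_executions(summaries):
    """Return the ARNs of flow executions that haven't reached a terminal state."""
    return [s["executionArn"] for s in summaries if s["status"] == "Running"]

# client = boto3.client("bedrock-agent-runtime")
# response = client.list_flow_executions(flowIdentifier="FLOWID")
# print(running_executions(response["flowExecutionSummaries"]))
```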

**Stop a running flow execution**  
If you need to stop a running flow execution, call the [StopFlowExecution](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent-runtime_StopFlowExecution.html) operation and pass the flow ID, flow alias ID, and the flow execution ID for the execution that you want to stop. 
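
The following sketch stops an execution only if it is still running. It assumes the `GetFlowExecution` and `StopFlowExecution` operations on a recent boto3 `bedrock-agent-runtime` client; verify the parameter names against your SDK version.

```python
def stop_if_running(client, flow_id, flow_alias_id, execution_id):
    """Stop a flow execution if it hasn't already reached a terminal state."""
    ids = {
        "flowIdentifier": flow_id,
        "flowAliasIdentifier": flow_alias_id,
        "executionIdentifier": execution_id,
    }
    if client.get_flow_execution(**ids)["status"] == "Running":
        client.stop_flow_execution(**ids)
        return True   # A stop was requested.
    return False      # Already finished; nothing to stop.
```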

------

## Flow execution statuses
<a name="flows-async-statuses"></a>

A flow execution can have one of the following statuses:
+ **Running** – The flow execution is in progress.
+ **Succeeded** – The flow execution completed successfully.
+ **Failed** – The flow execution failed due to an error.
+ **TimedOut** – The flow execution exceeded the maximum runtime of 24 hours.
+ **Aborted** – The flow execution was manually stopped using the [StopFlowExecution](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent-runtime_StopFlowExecution.html) operation.

Flow executions that are no longer running are automatically deleted after 90 days.

## Best practices for flow executions
<a name="flows-async-best-practices"></a>

Consider the following when using flow executions:
+ Regularly poll your flow execution's status using [GetFlowExecution](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent-runtime_GetFlowExecution.html) until your flow reaches a terminal state (which is anything other than `Running`).
+ When your flow execution reaches a terminal state, use [ListFlowExecutionEvents](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent-runtime_ListFlowExecutionEvents.html) to get the results of your flow. For example, you might use these results to build some logic around your flow.
+ Get a snapshot of your flow execution using [GetExecutionFlowSnapshot](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent-runtime_GetExecutionFlowSnapshot.html) to help with debugging if issues come up with the execution.
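
The polling pattern above can be sketched as follows. This is a minimal example rather than a production implementation; it assumes the `GetFlowExecution` operation on a recent boto3 `bedrock-agent-runtime` client.

```python
import time

# Any status other than Running is terminal.
TERMINAL_STATUSES = {"Succeeded", "Failed", "TimedOut", "Aborted"}

def wait_for_execution(client, flow_id, flow_alias_id, execution_id,
                       poll_seconds=10):
    """Poll GetFlowExecution until the execution leaves the Running state."""
    while True:
        status = client.get_flow_execution(
            flowIdentifier=flow_id,
            flowAliasIdentifier=flow_alias_id,
            executionIdentifier=execution_id,
        )["status"]
        if status in TERMINAL_STATUSES:
            return status
        time.sleep(poll_seconds)
```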

# Deploy a flow to your application using versions and aliases
<a name="flows-deploy"></a>

When you first create a flow, a working draft version (`DRAFT`) and a test alias (`TSTALIASID`) that points to the working draft version are created. When you make changes to your flow, the changes apply to the working draft, and so it is the latest version of your flow. You iterate on your working draft until you're satisfied with the behavior of your flow. Then, you can set up your flow for deployment by creating *versions* of your flow.

A *version* is a snapshot that preserves the resource as it exists at the time it was created. You can continue to modify the working draft and create versions of your flow as necessary. Amazon Bedrock creates versions in numerical order, starting from 1. Versions are immutable because they act as a snapshot of your flow at the time you created it. To make updates to a flow that you've deployed to production, you must create a new version from the working draft and make calls to the alias that points to that version.

To deploy your flow, you must create an *alias* that points to a version of your flow. Then, you make `InvokeFlow` requests to that alias. With aliases, you can switch efficiently between different versions of your flow without keeping track of the version. For example, you can change an alias to point to a previous version of your flow if there are changes that you need to revert quickly.

The following topics describe how to create versions and aliases of your flow.

**Topics**
+ [Create a version of a flow in Amazon Bedrock](flows-version-create.md)
+ [View information about versions of flows in Amazon Bedrock](flows-version-view.md)
+ [Delete a version of a flow in Amazon Bedrock](flows-version-delete.md)
+ [Create an alias of a flow in Amazon Bedrock](flows-alias-create.md)
+ [View information about aliases of flows in Amazon Bedrock](flows-alias-view.md)
+ [Modify an alias of a flow in Amazon Bedrock](flows-alias-modify.md)
+ [Delete an alias of a flow in Amazon Bedrock](flows-alias-delete.md)

# Create a version of a flow in Amazon Bedrock
<a name="flows-version-create"></a>

When you're satisfied with the configuration of your flow, create an immutable version of the flow that you can point to with an alias. To learn how to create a version of your flow, choose the tab for your preferred method, and then follow the steps:

------
#### [ Console ]

**To create a version of your Amazon Bedrock flow**

1. Sign in to the AWS Management Console with an IAM identity that has permissions to use the Amazon Bedrock console. Then, open the Amazon Bedrock console at [https://console.aws.amazon.com/bedrock](https://console.aws.amazon.com/bedrock).

1. Select **Amazon Bedrock Flows** from the left navigation pane. Then, choose a flow in the **Amazon Bedrock Flows** section.

1. In the **Versions** section, choose **Publish version**.

1. After the version is published, a success banner appears at the top.

------
#### [ API ]

To create a version of your flow, send a [CreateFlowVersion](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_CreateFlowVersion.html) request with an [Agents for Amazon Bedrock build-time endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#bra-bt) and specify the ARN or ID of the flow as the `flowIdentifier`.

The response returns an ID and ARN for the version. Versions are created incrementally, starting from 1.
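
In Python, the request might look like the following sketch, using the `bedrock-agent` (build-time) client from boto3; the flow ID is a placeholder.

```python
def publish_version(client, flow_id):
    """Create an immutable version from the flow's current working draft."""
    response = client.create_flow_version(flowIdentifier=flow_id)
    return response["version"], response["arn"]

# client = boto3.client("bedrock-agent")
# version, arn = publish_version(client, "FLOWID")
```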

------

# View information about versions of flows in Amazon Bedrock
<a name="flows-version-view"></a>

To learn how to view information about the versions of a flow, choose the tab for your preferred method, and then follow the steps:

------
#### [ Console ]

**To view information about a version of a flow**

1. Open the [AWS Management Console](https://console.aws.amazon.com) and sign in to your account. Navigate to Amazon Bedrock.

1. Select **Flows** from the left navigation pane. Then, in the **Flows** section, select a flow you want to view.

1. Choose the version to view from the **Versions** section.

1. To view details about the nodes and configurations attached to a version of the flow, select the node and view the details in the **Flow builder** pane. To make modifications to the flow, use the working draft and create a new version.

------
#### [ API ]

To get information about a version of your flow, send a [GetFlowVersion](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_GetFlowVersion.html) request with an [Agents for Amazon Bedrock build-time endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#bra-bt) and specify the ARN or ID of the flow as the `flowIdentifier`. In the `flowVersion` field, specify the version number.

To list information for all versions of a flow, send a [ListFlowVersions](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_ListFlowVersions.html) request with an [Agents for Amazon Bedrock build-time endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#bra-bt) and specify the ARN or ID of the flow as the `flowIdentifier`. You can specify the following optional parameters:



| Field | Short description | 
| --- | --- | 
| maxResults | The maximum number of results to return in a response. | 
| nextToken | If there are more results than the number you specified in the maxResults field, the response returns a nextToken value. To see the next batch of results, send the nextToken value in another request. | 
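
To retrieve every version, follow `nextToken` until the response no longer returns one. A sketch (the response field names follow the API reference; verify them against your SDK version):

```python
def list_all_versions(client, flow_id, page_size=10):
    """Page through ListFlowVersions, following nextToken until exhausted."""
    versions, token = [], None
    while True:
        kwargs = {"flowIdentifier": flow_id, "maxResults": page_size}
        if token:
            kwargs["nextToken"] = token
        page = client.list_flow_versions(**kwargs)
        versions.extend(page["flowVersionSummaries"])
        token = page.get("nextToken")
        if not token:
            return versions
```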

------

# Delete a version of a flow in Amazon Bedrock
<a name="flows-version-delete"></a>

To learn how to delete a version of a flow, choose the tab for your preferred method, and then follow the steps:

------
#### [ Console ]

**To delete a version of a flow**

1. Open the [AWS Management Console](https://console.aws.amazon.com) and sign in to your account. Navigate to Amazon Bedrock.

1. Select **Flows** from the left navigation pane. Then, in the **Flows** section, select a flow.

1. Choose **Delete**.

1. A dialog box appears warning you about the consequences of deletion. To confirm that you want to delete the version, enter **delete** in the input field and choose **Delete**.

1. A banner appears to inform you that the version is being deleted. When deletion is complete, a success banner appears.

------
#### [ API ]

To delete a version of a flow, send a [DeleteFlowVersion](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_DeleteFlowVersion.html) request with an [Agents for Amazon Bedrock build-time endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#bra-bt). Specify the ARN or ID of the flow in the `flowIdentifier` field and the version to delete in the `flowVersion` field.

------

# Create an alias of a flow in Amazon Bedrock
<a name="flows-alias-create"></a>

To invoke a flow, you must first create an alias that points to a version of the flow. To learn how to create an alias, choose the tab for your preferred method, and then follow the steps:

------
#### [ Console ]

**To create an alias for your Amazon Bedrock flow**

1. Sign in to the AWS Management Console with an IAM identity that has permissions to use the Amazon Bedrock console. Then, open the Amazon Bedrock console at [https://console.aws.amazon.com/bedrock](https://console.aws.amazon.com/bedrock).

1. Select **Amazon Bedrock Flows** from the left navigation pane. Then, choose a flow in the **Flows** section.

1. In the **Aliases** section, choose **Create alias**.

1. Enter a unique name for the alias and provide an optional description.

1. Choose one of the following options:
   + To create a new version, choose **Create a new version and associate it to this alias**.
   + To use an existing version, choose **Use an existing version to associate this alias**. From the dropdown menu, choose the version that you want to associate the alias to.

1. Select **Create alias**. A success banner appears at the top.

------
#### [ API ]

To create an alias to point to a version of your flow, send a [CreateFlowAlias](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_CreateFlowAlias.html) request with an [Agents for Amazon Bedrock build-time endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#bra-bt).

The following fields are required:



| Field | Basic description | 
| --- | --- | 
| flowIdentifier | The ARN or ID of the flow for which to create an alias. | 
| name | A name for the alias. | 
| routingConfiguration | Specify the version to map the alias to in the flowVersion field. | 

The following fields are optional:



| Field | Use-case | 
| --- | --- | 
| description | To provide a description for the alias. | 
| clientToken | To prevent duplication of the request. | 
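
Putting the required fields together, a boto3 sketch might look like the following. It assumes the `bedrock-agent` (build-time) client, and the flow ID, alias name, and version are placeholders.

```python
def create_alias(client, flow_id, name, version, description=None):
    """Create an alias that routes traffic to an existing flow version."""
    kwargs = {
        "flowIdentifier": flow_id,
        "name": name,
        "routingConfiguration": [{"flowVersion": version}],
    }
    if description is not None:
        kwargs["description"] = description
    response = client.create_flow_alias(**kwargs)
    return response["id"], response["arn"]

# client = boto3.client("bedrock-agent")
# alias_id, alias_arn = create_alias(client, "FLOWID", "prod", "1")
```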

------

Creation of an alias produces a resource with an identifier and an Amazon Resource Name (ARN) that you can specify when you invoke a flow from your application. To learn how to invoke a flow, see [Test a flow in Amazon Bedrock](flows-test.md).

# View information about aliases of flows in Amazon Bedrock
<a name="flows-alias-view"></a>

To learn how to view information about the aliases of a flow, choose the tab for your preferred method, and then follow the steps:

------
#### [ Console ]

**To view the details of an alias**

1. Open the [AWS Management Console](https://console.aws.amazon.com) and sign in to your account. Navigate to Amazon Bedrock.

1. Select **Flows** from the left navigation pane. Then, in the **Flows** section, select a flow.

1. Choose the alias to view from the **Aliases** section.

1. You can view the name and description of the alias and tags that are associated with the alias.

------
#### [ API ]

To get information about an alias of your flow, send a [GetFlowAlias](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_GetFlowAlias.html) request with an [Agents for Amazon Bedrock build-time endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#bra-bt) and specify the ARN or ID of the flow as the `flowIdentifier`. In the `aliasIdentifier` field, specify the ID or ARN of the alias.

To list information for all aliases of a flow, send a [ListFlowAliases](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_ListFlowAliases.html) request with an [Agents for Amazon Bedrock build-time endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#bra-bt) and specify the ARN or ID of the flow as the `flowIdentifier`. You can specify the following optional parameters:



| Field | Short description | 
| --- | --- | 
| maxResults | The maximum number of results to return in a response. | 
| nextToken | If there are more results than the number you specified in the maxResults field, the response returns a nextToken value. To see the next batch of results, send the nextToken value in another request. | 

------

# Modify an alias of a flow in Amazon Bedrock
<a name="flows-alias-modify"></a>

To learn how to modify an alias of a flow, choose the tab for your preferred method, and then follow the steps:

------
#### [ Console ]

**To modify an alias**

1. Open the [AWS Management Console](https://console.aws.amazon.com) and sign in to your account. Navigate to Amazon Bedrock.

1. Select **Flows** from the left navigation pane. Then, in the **Flows** section, select a flow.

1. In the **Aliases** section, choose the option button next to the alias that you want to edit.

1. You can edit the name and description of the alias. Additionally, you can perform one of the following actions:
   + To create a new version and associate this alias with that version, choose **Create a new version and associate it to this alias**.
   + To associate this alias with a different existing version, choose **Use an existing version and associate this alias**.

1. Select **Save**.

------
#### [ API ]

To update an alias, send an [UpdateFlowAlias](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_UpdateFlowAlias.html) request with an [Agents for Amazon Bedrock build-time endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#bra-bt). Include both fields you want to maintain and fields that you want to change in the request.
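
Because the update replaces the alias configuration, a common pattern is to read the current alias first and resend the fields that you want to keep. The following sketch assumes the `GetFlowAlias` and `UpdateFlowAlias` operations on the boto3 `bedrock-agent` client; the identifiers are placeholders.

```python
def repoint_alias(client, flow_id, alias_id, new_version):
    """Point an alias at a different version while keeping its name and description."""
    current = client.get_flow_alias(flowIdentifier=flow_id,
                                    aliasIdentifier=alias_id)
    kwargs = {
        "flowIdentifier": flow_id,
        "aliasIdentifier": alias_id,
        "name": current["name"],                               # keep existing name
        "routingConfiguration": [{"flowVersion": new_version}],  # change version
    }
    if "description" in current:
        kwargs["description"] = current["description"]         # keep description
    return client.update_flow_alias(**kwargs)
```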

------

# Delete an alias of a flow in Amazon Bedrock
<a name="flows-alias-delete"></a>

To learn how to delete an alias of a flow, choose the tab for your preferred method, and then follow the steps:

------
#### [ Console ]

**To delete an alias**

1. Open the [AWS Management Console](https://console.aws.amazon.com) and sign in to your account. Navigate to Amazon Bedrock.

1. Select **Flows** from the left navigation pane. Then, in the **Flows** section, select a flow.

1. To choose the alias for deletion, in the **Aliases** section, choose the option button next to the alias that you want to delete.

1. Choose **Delete**.

1. A dialog box appears warning you about the consequences of deletion. To confirm that you want to delete the alias, enter **delete** in the input field and choose **Delete**.

1. A banner appears to inform you that the alias is being deleted. When deletion is complete, a success banner appears.

------
#### [ API ]

To delete a flow alias, send a [DeleteFlowAlias](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_DeleteFlowAlias.html) request with an [Agents for Amazon Bedrock build-time endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#bra-bt). Specify the ARN or ID of the flow in the `flowIdentifier` field and the ARN or ID of the alias to delete in the `aliasIdentifier` field.

------

# Invoke an AWS Lambda function from an Amazon Bedrock flow in a different AWS account
<a name="flow-cross-account-lambda"></a>

An Amazon Bedrock flow can invoke an AWS Lambda function that is in a different AWS account from the flow. Use the following procedure to configure the Lambda function (*Account A*) and the flow (*Account B*). 

**To configure a flow to call a Lambda function in a different AWS account**

1. In Account A (Lambda function), add a resource-based policy to the Lambda function, using the Flow Execution Role from Account B as the principal. For more information, see [Granting Lambda function access to other accounts](https://docs.aws.amazon.com/lambda/latest/dg/permissions-function-cross-account.html) in the *AWS Lambda* documentation.

1. In Account B (Amazon Bedrock flow), add permission for the [invoke](https://docs.aws.amazon.com/lambda/latest/api/API_Invoke.html) operation to the flow execution role for the Lambda function ARN that you are using. For more information, see [Update permissions for a role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_update-role-permissions.html) in the *AWS Identity and Access Management* documentation.
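
For step 2, the identity-based policy statement that you add to the flow execution role in Account B might look like the following. The Region, account ID, and function name are placeholders; replace them with the ARN of the Lambda function in Account A.

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "lambda:InvokeFunction",
      "Resource": "arn:aws:lambda:us-east-1:111122223333:function:MyCrossAccountFunction"
    }
  ]
}
```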

# Converse with an Amazon Bedrock flow
<a name="flows-multi-turn-invocation"></a>

**Note**  
Amazon Bedrock Flows multi-turn conversation is in preview release for Amazon Bedrock and is subject to change.

Amazon Bedrock Flows multi-turn conversation enables dynamic, back-and-forth conversations between users and flows, similar to a natural dialogue. When an agent node requires clarification or additional context, it can intelligently pause the flow's execution and prompt the user for specific information. This creates a more interactive and context-aware experience, as the node can adapt its behavior based on user responses. For example, if an initial user query is ambiguous or incomplete, the node can ask follow-up questions to gather the necessary details. Once the user provides the requested information, the flow seamlessly resumes execution with the enriched input, ensuring more accurate and relevant results. This capability is particularly valuable for complex scenarios where a single interaction may not be sufficient to fully understand and address the user's needs.

**Topics**
+ [How to process a multi-turn conversation in a flow](#flows-multi-turn-invocation-how)
+ [Creating and running an example flow](#flows-multi-turn-invocation-example-flow)

## How to process a multi-turn conversation in a flow
<a name="flows-multi-turn-invocation-how"></a>

To use a multi-turn conversation in a flow, you need an [agent node](flows-nodes.md#flows-nodes-agent) connected to an Amazon Bedrock agent. When you run the flow, a multi-turn conversation happens when the agent needs further information from the user before it can continue. This section describes a flow that uses an agent with the following instructions:

```
You are a playlist creator for a radio station. 
When asked to create a playlist, ask for the number of songs,
the genre of music, and a theme for the playlist.
```

For information about creating an agent see [Automate tasks in your application using AI agents](agents.md). 

### Step 1: Start the flow
<a name="flows-multi-turn-invocation-start-flow"></a>

You start a flow by calling the [InvokeFlow](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent-runtime_InvokeFlow.html) operation. You include the initial content that you want to send to the flow. In the following example, the `document` field contains a request to *Create a playlist*. Each conversation has a unique identifier (*execution ID*) that identifies the conversation within the flow. To get the execution ID, don't send the `executionId` field in your first call to `InvokeFlow`. The response from `InvokeFlow` includes the execution ID. In your code, use the identifier to track multiple conversations and to identify a conversation in further calls to the `InvokeFlow` operation. 

The following is example JSON for a request to `InvokeFlow`.

```
{
  "flowIdentifier": "XXXXXXXXXX",
  "flowAliasIdentifier": "YYYYYYYYYY",
  "inputs": [
    {
      "content": {
        "document": "Create a playlist."
      },
      "nodeName": "FlowInputNode",
      "nodeOutputName": "document"
    }
  ]
}
```

### Step 2: Retrieve agent requests
<a name="flows-multi-turn-invocation-retrieve-requests"></a>

If the agent node in the flow decides that it needs more information from the user, the response stream (`responseStream`) from `InvokeFlow` includes a `FlowMultiTurnInputRequestEvent` event object. The event has the requested information in the `content` (`FlowMultiTurnInputContent`) field. In the following example, the request in the `document` field is for information about the number of songs, the genre of music, and the theme for the playlist. In your code, you then need to get that information from the user.

The following is an example `FlowMultiTurnInputRequestEvent` JSON object.

```
{
    "nodeName": "AgentsNode_1",
    "nodeType": "AgentNode",
    "content": {
        "document": "Certainly! I'd be happy to create a playlist for you. To make sure it's tailored to your preferences, could you please provide me with the following information:
        1. How many songs would you like in the playlist?
        2. What genre of music do you prefer? 
        3. Is there a specific theme or mood you'd like for the playlist? Once you provide these details, I'll be able to create a customized playlist just for you."
    }
}
```

Because the flow can't continue until it receives more input, the flow also emits a `FlowCompletionEvent` event. A flow always emits the `FlowMultiTurnInputRequestEvent` before the `FlowCompletionEvent`. If the value of `completionReason` in the `FlowCompletionEvent` event is `INPUT_REQUIRED`, the flow needs more information before it can continue. 

The following is an example `FlowCompletionEvent` JSON object.

```
{
    "completionReason": "INPUT_REQUIRED"
}
```

### Step 3: Send the user response to the flow
<a name="flows-multi-turn-invocation-continue"></a>

Send the user response back to the flow by calling the `InvokeFlow` operation again. Be sure to include the `executionId` for the conversation.

The following is example JSON for the request to `InvokeFlow`. The `document` field contains the response from the user.

```
{
  "flowIdentifier": "AUS7BMHXBE",
  "flowAliasIdentifier": "4KUDB8VBEF",
  "executionId": "b6450554-f8cc-4934-bf46-f66ed89b60a0",
  "inputs": [
    {
      "content": {
        "document": "1. 5 songs 2. Welsh rock music 3. Castles"
      },
      "nodeName": "AgentsNode_1",
      "nodeInputName": "agentInputText"
    }
  ]
}
```

If the flow needs more information, the flow creates further `FlowMultiTurnInputRequestEvent` events.

### Step 4: End the flow
<a name="flows-multi-turn-invocation-end"></a>

When no more information is needed, the flow emits a `FlowOutputEvent` event which contains the final response.

The following is an example `FlowOutputEvent` JSON object.

```
{
    "nodeName": "FlowOutputNode",
    "content": {
        "document": "Great news! I've created a 5-song Welsh rock playlist centered around the theme of castles. 
        Here's the playlist I've put together for you: Playlist Name: Welsh Rock Castle Anthems 
        Description: A 5-song Welsh rock playlist featuring songs about castles 
        Songs: 
        1. Castell y Bere - Super Furry Animals 
        2. The Castle - Manic Street Preachers 
        3. Caerdydd (Cardiff Castle) - Stereophonics 
        4. Castell Coch - Catatonia 
        5. Chepstow Castle - Feeder 
        This playlist combines Welsh rock bands with songs that reference castles or specific Welsh castles. 
        Enjoy your castle-themed Welsh rock music experience!"
     }
}
```

The flow also emits a `FlowCompletionEvent` event. The value of `completionReason` is `SUCCESS`. 

The following is an example `FlowCompletionEvent` JSON object.

```
{
    "completionReason": "SUCCESS"
}
```

The following sequence diagram shows the steps in a multi-turn flow.

![\[Flow steps for a multi-turn conversation.\]](http://docs.aws.amazon.com/bedrock/latest/userguide/images/flows/flows-multi-turn-steps.png)


## Creating and running an example flow
<a name="flows-multi-turn-invocation-example-flow"></a>

In this example, you create a flow that uses an agent to create playlists for a radio station. The agent asks clarifying questions to determine the number of songs, the genre of music, and the theme for the playlist.

**To create the flow**

1. Create an agent in the Amazon Bedrock console by following the instructions at [Create and configure agent manually](agents-create.md). 
   + For step *2.d*, enter **You are a playlist creator for a radio station. When asked to create a playlist, ask for the number of songs, the genre of music, and a theme for the playlist.**.
   + For step *2.e*, in **User input**, choose **Enabled**. Doing this lets the agent request more information, as needed.

1. Create the flow by following the instructions at [Create and design a flow in Amazon Bedrock](flows-create.md). Make sure the flow has an input node, an agents node, and an output node. 

1. Link the agent node to the agent that you created in step 1. The flow should look like the following image.  
![\[Flow multi-turn conversation\]](http://docs.aws.amazon.com/bedrock/latest/userguide/images/flows/flows-multi-turn.png)

1. Run the flow in the Amazon Bedrock console. For testing you can trace the steps that the flow makes. For more information, see [Test a flow in Amazon Bedrock](flows-test.md).

The following Python code example shows how to use the flow. 

To run the code, specify the following:
+ `region_name` – The AWS Region in which you are running the flow.
+ `FLOW_ID` – The ID of the flow.
+ `FLOW_ALIAS_ID` – The alias ID of the flow.

For information about getting the IDs, see [View information about flows in Amazon Bedrock](flows-view.md). The code prompts for an initial request to send to the flow and requests more input as the flow needs it. The code doesn't manage other requests from the agent, such as requests to call AWS Lambda functions. For more information, see [How Amazon Bedrock Agents works](agents-how.md). While running, the code generates `FlowTraceEvent` objects that you can use to track the path from the input to the response that the flow returns. For more information, see [Track each step in your flow by viewing its trace](flows-trace.md).

```
"""
Runs an Amazon Bedrock flow and handles muli-turn interaction for a single conversation.

"""
import logging
import boto3
import botocore



import botocore.exceptions

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)


def invoke_flow(client, flow_id, flow_alias_id, input_data, execution_id):
    """
    Invoke an Amazon Bedrock flow and handle the response stream.

    Args:
        client: Boto3 client for Amazon Bedrock agent runtime
        flow_id: The ID of the flow to invoke
        flow_alias_id: The alias ID of the flow
        input_data: Input data for the flow
        execution_id: Execution ID for continuing a flow. Use the value None on first run.

    Returns:
        Dict containing flow_complete status, input_required info, and execution_id
    """

    response = None
    request_params = None

    if execution_id is None:
        # Don't pass execution ID for first run.
        request_params = {
            "flowIdentifier": flow_id,
            "flowAliasIdentifier": flow_alias_id,
            "inputs": [input_data],
            "enableTrace": True
        }
    else:
        request_params = {
            "flowIdentifier": flow_id,
            "flowAliasIdentifier": flow_alias_id,
            "executionId": execution_id,
            "inputs": [input_data],
            "enableTrace": True
        }

    response = client.invoke_flow(**request_params)
    if "executionId" not in request_params:
        execution_id = response['executionId']

    input_required = None
    flow_status = ""

    # Process the streaming response
    for event in response['responseStream']:
        # Check if flow is complete.
        if 'flowCompletionEvent' in event:
            flow_status = event['flowCompletionEvent']['completionReason']

        # Check if more input is needed from the user.
        elif 'flowMultiTurnInputRequestEvent' in event:
            input_required = event

        # Print the model output.
        elif 'flowOutputEvent' in event:
            print(event['flowOutputEvent']['content']['document'])

        elif 'flowTraceEvent' in event:
            logger.info("Flow trace:  %s", event['flowTraceEvent'])

    return {
        "flow_status": flow_status,
        "input_required": input_required,
        "execution_id": execution_id
    }


if __name__ == "__main__":

    session = boto3.Session(profile_name='default', region_name='YOUR_FLOW_REGION')
    bedrock_agent_client = session.client('bedrock-agent-runtime')
    
    # Replace these with your actual flow ID and alias ID
    FLOW_ID = 'YOUR_FLOW_ID'
    FLOW_ALIAS_ID = 'YOUR_FLOW_ALIAS_ID'


    flow_execution_id = None
    finished = False

    # Get the initial prompt from the user.
    user_input = input("Enter input: ")

    flow_input_data = {
        "content": {
            "document": user_input
        },
        "nodeName": "FlowInputNode",
        "nodeOutputName": "document"
    }

    logger.info("Starting flow %s", FLOW_ID)

    try:
        while not finished:
            # Invoke the flow until successfully finished.

            result = invoke_flow(
                bedrock_agent_client, FLOW_ID, FLOW_ALIAS_ID, flow_input_data, flow_execution_id)
            status = result['flow_status']
            flow_execution_id = result['execution_id']
            more_input = result['input_required']
            if status == "INPUT_REQUIRED":
                # The flow needs more information from the user.
                logger.info("The flow %s requires more input", FLOW_ID)
                user_input = input(
                    more_input['flowMultiTurnInputRequestEvent']['content']['document'] + ": ")
                flow_input_data = {
                    "content": {
                        "document": user_input
                    },
                    "nodeName": more_input['flowMultiTurnInputRequestEvent']['nodeName'],
                    "nodeInputName": "agentInputText"
                }
            elif status == "SUCCESS":
                # The flow completed successfully.
                finished = True
                logger.info("The flow %s successfully completed.", FLOW_ID)

    except botocore.exceptions.ClientError as e:
        print(f"Client error: {str(e)}")
        logger.error("Client error: %s", str(e))

    except Exception as e:
        print(f"An error occurred: {str(e)}")
        logger.error("An error occurred: %s", str(e))
        logger.error("Error type: %s", type(e))
```
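
The event loop in `invoke_flow` prints output documents as they stream. If you'd rather collect the outputs and return them, the loop can be factored into a small helper. The following is a minimal sketch (the helper name is illustrative; the event shapes match those yielded by `responseStream`):

```python
def parse_flow_events(events):
    """Split an InvokeFlow response stream into output documents and a completion reason.

    Each event is a dict keyed by its event type, as yielded by
    response['responseStream'].
    """
    outputs = []
    completion_reason = None
    for event in events:
        if 'flowOutputEvent' in event:
            outputs.append(event['flowOutputEvent']['content']['document'])
        elif 'flowCompletionEvent' in event:
            completion_reason = event['flowCompletionEvent']['completionReason']
    return outputs, completion_reason
```

Collecting outputs this way also preserves every `flowOutputEvent` when a flow emits more than one, instead of keeping only the last.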

# Run Amazon Bedrock Flows code samples
<a name="flows-code-ex"></a>

The following code samples assume that you've fulfilled the following prerequisites:

1. Set up a role with permissions for Amazon Bedrock actions. If you haven't, refer to [Quickstart](getting-started.md).

1. Set up your credentials to use the AWS API. If you haven't, refer to [Get started with the API](getting-started-api.md).

1. Create a service role to carry out flow-related actions on your behalf. If you haven't, refer to [Create a service role for Amazon Bedrock Flows in Amazon Bedrock](flows-permissions.md).

To create a flow, send a [CreateFlow](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_CreateFlow.html) request with an [Agents for Amazon Bedrock build-time endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#bra-bt). For example code, see [Run Amazon Bedrock Flows code samples](#flows-code-ex).

The following fields are required:



| Field | Basic description | 
| --- | --- | 
| name | A name for the flow. | 
| executionRoleArn | The ARN of the [service role with permissions to create and manage flows](flows-permissions.md). | 

The following fields are optional:



| Field | Use case | 
| --- | --- | 
| definition | Contains the nodes and connections that make up the flow. | 
| description | To describe the flow. | 
| tags | To associate tags with the flow. For more information, see [Tagging Amazon Bedrock resources](tagging.md). | 
| customerEncryptionKeyArn | To encrypt the resource with a KMS key. For more information, see [Encryption of Amazon Bedrock Flows resources](encryption-flows.md). | 
| clientToken | To ensure the API request completes only once. For more information, see [Ensuring idempotency](https://docs.aws.amazon.com/ec2/latest/devguide/ec2-api-idempotency.html). | 

Although the `definition` field is optional, a flow needs a definition to be functional. You can create a flow without a definition first and add one later by updating the flow.
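
For example, the following hedged sketch adds a definition to an existing flow by sending an [UpdateFlow](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_UpdateFlow.html) request. The flow ID, role ARN, and helper names are placeholders; note that `UpdateFlow` also requires the flow's `name` and `executionRoleArn`:

```python
# A minimal sketch, not a production implementation: attach a definition to an
# existing flow by calling UpdateFlow on a boto3 'bedrock-agent' client.

def build_passthrough_definition():
    """Build a minimal definition: an input node wired straight to an output node."""
    return {
        "nodes": [
            {
                "type": "Input",
                "name": "FlowInput",
                "outputs": [{"name": "document", "type": "String"}],
            },
            {
                "type": "Output",
                "name": "FlowOutput",
                "inputs": [
                    {"name": "document", "type": "String", "expression": "$.data"}
                ],
            },
        ],
        "connections": [
            {
                "name": "FlowInput_FlowOutput",
                "source": "FlowInput",
                "target": "FlowOutput",
                "type": "Data",
                "configuration": {
                    "data": {"sourceOutput": "document", "targetInput": "document"}
                },
            }
        ],
    }


def attach_definition(client, flow_id, role_arn, flow_name):
    """Send an UpdateFlow request that adds the definition to the working draft."""
    return client.update_flow(
        flowIdentifier=flow_id,
        name=flow_name,
        executionRoleArn=role_arn,
        definition=build_passthrough_definition(),
    )
```

You would call `attach_definition(boto3.client('bedrock-agent'), flow_id, role_arn, "MyFlow")`; the node and connection shapes follow the same structure as the larger sample later in this topic.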

For each node in your `nodes` list, you specify the type of node in the `type` field and provide the corresponding configuration of the node in the `configuration` field. For details about the API structure of different types of nodes, see [Node types for your flow](flows-nodes.md).

To try out some code samples for Amazon Bedrock Flows, choose the tab for your preferred method, and then follow the steps:

------
#### [ Python ]

1. Create a flow using a [CreateFlow](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_CreateFlow.html) request with an [Agents for Amazon Bedrock build-time endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#bra-bt) with the following nodes:
   + An input node.
   + A prompt node with a prompt defined inline that creates a music playlist using two variables (`genre` and `number`).
   + An output node that returns the model completion.

   Run the following code snippet to load the AWS SDK for Python (Boto3), create an Amazon Bedrock Agents client, and create a flow with the nodes (replace the value of the `executionRoleArn` field with the ARN of the service role that you created for flows):

   ```
   # Import Python SDK and create client
   import boto3
   
   client = boto3.client(service_name='bedrock-agent')
   
   # Replace with the service role that you created. For more information, see https://docs.aws.amazon.com/bedrock/latest/userguide/flows-permissions.html
   FLOWS_SERVICE_ROLE = "arn:aws:iam::123456789012:role/MyFlowsRole"
   
   # Define each node
   
   # The input node validates that the content of the InvokeFlow request is a JSON object.
   input_node = {
       "type": "Input",
       "name": "FlowInput",
       "outputs": [
           {
               "name": "document",
               "type": "Object"
           }
       ]
   }
   
   # This prompt node defines an inline prompt that creates a music playlist using two variables.
   # 1. {{genre}} - The genre of music to create a playlist for
   # 2. {{number}} - The number of songs to include in the playlist
   # It validates that the input is a JSON object that minimally contains the fields "genre" and "number", which it will map to the prompt variables.
   # The output must be named "modelCompletion" and be of the type "String".
   prompt_node = {
       "type": "Prompt",
       "name": "MakePlaylist",
       "configuration": {
           "prompt": {
               "sourceConfiguration": {
                   "inline": {
                       "modelId": "amazon.nova-lite-v1:0",
                       "templateType": "TEXT",
                       "inferenceConfiguration": {
                           "text": {
                               "temperature": 0.8
                           }
                       },
                       "templateConfiguration": { 
                           "text": {
                               "text": "Make me a {{genre}} playlist consisting of the following number of songs: {{number}}."
                           }
                       }
                   }
               }
           }
       },
       "inputs": [
           {
               "name": "genre",
               "type": "String",
               "expression": "$.data.genre"
           },
           {
               "name": "number",
               "type": "Number",
               "expression": "$.data.number"
           }
       ],
       "outputs": [
           {
               "name": "modelCompletion",
               "type": "String"
           }
       ]
   }
   
   # The output node validates that the output from the last node is a string and returns it as is. The name must be "document".
   output_node = {
       "type": "Output",
       "name": "FlowOutput",
       "inputs": [
           {
               "name": "document",
               "type": "String",
               "expression": "$.data"
           }
       ]
   }
   
   # Create connections between the nodes
   connections = []
   
   #   First, create connections between the output of the flow input node and each input of the prompt node
   for input in prompt_node["inputs"]:
       connections.append(
           {
               "name": "_".join([input_node["name"], prompt_node["name"], input["name"]]),
               "source": input_node["name"],
               "target": prompt_node["name"],
               "type": "Data",
               "configuration": {
                   "data": {
                       "sourceOutput": input_node["outputs"][0]["name"],
                       "targetInput": input["name"]
                   }
               }
           }
       )
   
   # Then, create a connection between the output of the prompt node and the input of the flow output node
   connections.append(
       {
           "name": "_".join([prompt_node["name"], output_node["name"]]),
           "source": prompt_node["name"],
           "target": output_node["name"],
           "type": "Data",
           "configuration": {
               "data": {
                   "sourceOutput": prompt_node["outputs"][0]["name"],
                   "targetInput": output_node["inputs"][0]["name"]
               }
           }
       }
   )
   
   # Create the flow from the nodes and connections
   response = client.create_flow(
       name="FlowCreatePlaylist",
       description="A flow that creates a playlist given a genre and number of songs to include in the playlist.",
       executionRoleArn=FLOWS_SERVICE_ROLE,
       definition={
           "nodes": [input_node, prompt_node, output_node],
           "connections": connections
       }
   )
   
   flow_id = response.get("id")
   ```

1. List the flows in your account, including the one you just created, by running the following code snippet to make a [ListFlows](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_ListFlows.html) request with an [Agents for Amazon Bedrock build-time endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#bra-bt):

   ```
   client.list_flows()
   ```

1. Get information about the flow that you just created by running the following code snippet to make a [GetFlow](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_GetFlow.html) request with an [Agents for Amazon Bedrock build-time endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#bra-bt):

   ```
   client.get_flow(flowIdentifier=flow_id)
   ```

1. Prepare your flow so that the latest changes from the working draft are applied and so that it's ready to version. Run the following code snippet to make a [PrepareFlow](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_PrepareFlow.html) request with an [Agents for Amazon Bedrock build-time endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#bra-bt):

   ```
   client.prepare_flow(flowIdentifier=flow_id)
   ```

1. Version the working draft of your flow to create a static snapshot of your flow and then retrieve information about it with the following actions:

   1. Create a version by running the following code snippet to make a [CreateFlowVersion](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_CreateFlowVersion.html) request with an [Agents for Amazon Bedrock build-time endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#bra-bt):

      ```
      response = client.create_flow_version(flowIdentifier=flow_id)
                                      
      flow_version = response.get("version")
      ```

   1. List all versions of your flow by running the following code snippet to make a [ListFlowVersions](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_ListFlowVersions.html) request with an [Agents for Amazon Bedrock build-time endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#bra-bt):

      ```
      client.list_flow_versions(flowIdentifier=flow_id)
      ```

   1. Get information about the version by running the following code snippet to make a [GetFlowVersion](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_GetFlowVersion.html) request with an [Agents for Amazon Bedrock build-time endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#bra-bt):

      ```
      client.get_flow_version(flowIdentifier=flow_id, flowVersion=flow_version)
      ```

1. Create an alias to point to the version of your flow that you created and then retrieve information about it with the following actions:

   1. Create an alias and point it to the version you just created by running the following code snippet to make a [CreateFlowAlias](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_CreateFlowAlias.html) request with an [Agents for Amazon Bedrock build-time endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#bra-bt):

      ```
      response = client.create_flow_alias(
          flowIdentifier=flow_id,
          name="latest",
          description="Alias pointing to the latest version of the flow.",
          routingConfiguration=[
              {
                  "flowVersion": flow_version
              }
          ]
      )
      
      flow_alias_id = response.get("id")
      ```

   1. List all aliases of your flow by running the following code snippet to make a [ListFlowAliases](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_ListFlowAliases.html) request with an [Agents for Amazon Bedrock build-time endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#bra-bt):

      ```
      client.list_flow_aliases(flowIdentifier=flow_id)
      ```

   1. Get information about the alias that you just created by running the following code snippet to make a [GetFlowAlias](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_GetFlowAlias.html) request with an [Agents for Amazon Bedrock build-time endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#bra-bt):

      ```
      client.get_flow_alias(flowIdentifier=flow_id, aliasIdentifier=flow_alias_id)
      ```

1. Run the following code snippet to create an Amazon Bedrock Agents Runtime client and make an [InvokeFlow](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent-runtime_InvokeFlow.html) request with an [Agents for Amazon Bedrock runtime endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#bra-rt). The request fills in the variables in the prompt in your flow and returns the response from the model:

   ```
   client_runtime = boto3.client('bedrock-agent-runtime')
   
   response = client_runtime.invoke_flow(
       flowIdentifier=flow_id,
       flowAliasIdentifier=flow_alias_id,
       inputs=[
           {
               "content": {
                   "document": {
                       "genre": "pop",
                       "number": 3
                   }
               },
               "nodeName": "FlowInput",
               "nodeOutputName": "document"
           }
       ]
   )
   
   result = {}
   
   for event in response.get("responseStream"):
       result.update(event)
   
   if result['flowCompletionEvent']['completionReason'] == 'SUCCESS':
       print("Flow invocation was successful! The output of the flow is as follows:\n")
       print(result['flowOutputEvent']['content']['document'])
   
   else:
       print("The flow invocation completed because of the following reason:", result['flowCompletionEvent']['completionReason'])
   ```

   The response should return a playlist of pop music with three songs.

1. Delete the alias, version, and flow that you created with the following actions:

   1. Delete the alias by running the following code snippet to make a [DeleteFlowAlias](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_DeleteFlowAlias.html) request with an [Agents for Amazon Bedrock build-time endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#bra-bt):

      ```
      client.delete_flow_alias(flowIdentifier=flow_id, aliasIdentifier=flow_alias_id)
      ```

   1. Delete the version by running the following code snippet to make a [DeleteFlowVersion](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_DeleteFlowVersion.html) request with an [Agents for Amazon Bedrock build-time endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#bra-bt):

      ```
      client.delete_flow_version(flowIdentifier=flow_id, flowVersion=flow_version)
      ```

   1. Delete the flow by running the following code snippet to make a [DeleteFlow](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_DeleteFlow.html) request with an [Agents for Amazon Bedrock build-time endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#bra-bt):

      ```
      client.delete_flow(flowIdentifier=flow_id)
      ```
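
Note that `ListFlows` (like the other `List*` operations above) is paginated: when your account has more flows than fit in one page, the response includes a `nextToken` to pass into the next request. The following is a hedged sketch of collecting every flow summary; any object exposing a compatible `list_flows` method works:

```python
def list_all_flows(client):
    """Collect flow summaries across every ListFlows page by following nextToken."""
    summaries = []
    kwargs = {}
    while True:
        page = client.list_flows(**kwargs)
        summaries.extend(page.get("flowSummaries", []))
        token = page.get("nextToken")
        if not token:
            return summaries
        kwargs = {"nextToken": token}
```

Boto3 typically also generates a paginator for paginated operations (`client.get_paginator('list_flows')`), which can be the more idiomatic route when available.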
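
When you publish a newer version of a flow later, you don't have to create a new alias: an [UpdateFlowAlias](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_UpdateFlowAlias.html) request can repoint the existing alias. A hedged sketch (the IDs are placeholders; the request also requires the alias `name` and the full `routingConfiguration`):

```python
def repoint_alias(client, flow_id, alias_id, new_version, alias_name="latest"):
    """Send an UpdateFlowAlias request that points the alias at a different version."""
    return client.update_flow_alias(
        flowIdentifier=flow_id,
        aliasIdentifier=alias_id,
        name=alias_name,
        routingConfiguration=[{"flowVersion": new_version}],
    )
```

Because callers invoke the flow through the alias, repointing it promotes the new version without any change on the client side.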

------

# Delete a flow in Amazon Bedrock
<a name="flows-delete"></a>

If you no longer need a flow, you can delete it. Deleted flows are retained on AWS servers for up to fourteen days. To learn how to delete a flow, choose the tab for your preferred method, and then follow the steps:

------
#### [ Console ]

**To delete a flow**

1. Sign in to the AWS Management Console with an IAM identity that has permissions to use the Amazon Bedrock console. Then, open the Amazon Bedrock console at [https://console.aws.amazon.com/bedrock](https://console.aws.amazon.com/bedrock).

1. Select **Amazon Bedrock Flows** from the left navigation pane. Then, in the **Amazon Bedrock Flows** section, select a flow to delete.

1. Choose **Delete**.

1. A dialog box appears warning you about the consequences of deletion. To confirm that you want to delete the flow, enter **delete** in the input field and choose **Delete**.

1. A banner appears to inform you that the flow is being deleted. When deletion is complete, a success banner appears.

------
#### [ API ]

To delete a flow, send a [DeleteFlow](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_DeleteFlow.html) request with an [Agents for Amazon Bedrock build-time endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#bra-bt) and specify the ARN or ID of the flow as the `flowIdentifier`.
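
As a sketch with the AWS SDK for Python (Boto3), the call might look like the following (the flow ID and wrapper name are placeholders; the optional `skipResourceInUseCheck` flag bypasses the check that the flow isn't in use):

```python
def delete_flow_by_id(client, flow_identifier, skip_in_use_check=False):
    """Send a DeleteFlow request, specifying the flow's ID or ARN as flowIdentifier."""
    return client.delete_flow(
        flowIdentifier=flow_identifier,
        skipResourceInUseCheck=skip_in_use_check,
    )
```

You would call this as `delete_flow_by_id(boto3.client('bedrock-agent'), 'FLOW12345')`.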

------