

# Flow components and features
<a name="flow-components-and-features"></a>

These topics provide detailed information about individual step types and their capabilities. For configuration instructions, see [Editing flows](editing-flows.md).

# Terminology and key concepts
<a name="terminology-and-key-concepts"></a>

Understanding the core terminology and concepts helps you effectively create, run, share, and maintain flows within your organization.

## Steps and @ references
<a name="steps-and-references"></a>

A flow is made up of steps that each perform a specific function, such as calling an action, querying your data, or searching the web. Most steps are controlled through a natural language prompt.

Steps pass data to each other using @ references. When you write a prompt inside a step, type @ to see a menu of previous steps. Select one to include that step's output as context in your prompt. For example, if Step 1 collects a customer's issue and Step 3 needs to classify it, Step 3's prompt might say: "Classify the issue in @Customer Issue by severity (low, medium, high)."
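Conceptually, an @ reference substitutes a prior step's recorded output into the current prompt before the step runs. The following sketch illustrates that substitution; the function and step names are illustrative only and are not part of the Quick Flows API:

```python
def resolve_references(prompt: str, step_outputs: dict) -> str:
    """Replace each @Step Name token with that step's recorded output."""
    for name, output in step_outputs.items():
        prompt = prompt.replace(f"@{name}", output)
    return prompt

# Hypothetical output recorded from Step 1, "Customer Issue"
outputs = {"Customer Issue": "Checkout page times out on payment submit"}
prompt = "Classify the issue in @Customer Issue by severity (low, medium, high)."
resolved = resolve_references(prompt, outputs)
# resolved now contains the Step 1 text in place of the @ reference
```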

The available step types are organized into the following groups:

AI responses  
+ **Chat agent** — Gets a response from a custom agent and can take actions in connected applications.
+ **Research** — Invokes Amazon Quick Research to generate research reports as part of your workflow.
+ **Web Search** — Generates responses using internet search results.
+ **General knowledge** — Gets a response directly from Amazon Bedrock models, with configurable response preferences for speed or versatility.
+ **UI Agent** — Navigates public websites that do not require a login and performs tasks like scrolling to find information or filling forms.
+ **Create Image** — Generates AI images from text inputs.

Flow logic  
+ **Reasoning Group** — Groups related steps together with natural language instructions that define conditions, loops, validation, and execution order.

Data insights  
+ **Quick data** — Retrieves responses from spaces and knowledge bases.
+ **Dashboards and topics** — Gets insights from Amazon Quick Sight dashboards and topics.

Actions  
+ **Application actions** — Performs read or write operations in connected third-party applications through pre-built connectors.

User input  
+ **Text** — Collects free-form text input from users.
+ **Files** — Accepts file uploads from users for document processing.

## Editor and Run mode
<a name="editor-and-run-mode"></a>

Editor mode is where you build your flow. You see all the steps laid out and can select each one to change its configuration. Run mode is where you test and execute your flow, with a chat panel where you can ask follow-up questions or refine the output.

# AI response steps
<a name="ai-response-steps"></a>

AI response steps generate content using AI models. Amazon Quick Flows provides the following AI response step types.

## Chat agent
<a name="chat-agent-step"></a>

Amazon Quick Flows lets you use your chat agents within a workflow step to generate outputs from their configured spaces or take action through their configured action integrations.

Chat agents contain domain-specific knowledge, custom instructions, and connected tools. When you integrate a chat agent into a flow, you can automatically apply this specialized knowledge across multiple workflows without recreating it. For example, if you built a sales assistant chat agent that understands product details and follows brand guidelines, you can embed it in your outreach flow to ensure consistent communication at scale.

For configuration instructions, see [Editing flows](editing-flows.md).

**Note**  
The chat agent step is a single-turn interaction. The agent responds to the task that you give it, but does not support a back-and-forth conversation within the same step.

## Research
<a name="research-step"></a>

The research step invokes Amazon Quick Research to generate research reports within your flow. This lets you embed research directly into multi-step workflows — for example, creating account plans, conducting policy reviews, researching patent prior art, or generating industry reports.

For full details about Quick Research capabilities and limitations, see [Using Amazon Quick Research](using-amazon-quick-research.md). For configuration instructions, see [Editing flows](editing-flows.md).

You can reference the research output in later steps — for example, to send a summary over email to your team.

## Web search
<a name="web-search-step"></a>

The web search step lets your flows retrieve current information from the internet. This is useful when you need to access real-time data, verify facts, or gather information from public sources beyond your organization's internal knowledge base.

Write a prompt describing what to search for. The search results can be referenced by later steps in your flow using @ references.

For configuration instructions, see [Editing flows](editing-flows.md).

**Note**  
Search results may vary over time as internet content changes. Some content may not be accessible through web search.

## General knowledge
<a name="general-knowledge-step"></a>

The General knowledge step generates text responses using Amazon Bedrock models. Instead of selecting a specific model, you choose a response preference, and Amazon Quick Flows automatically selects the most appropriate model based on your preference and the requirements of your flow.

Choose from:
+ **Fast responses** — Optimized for speed across image, video, and text inputs.
+ **Versatility and performance** — Balanced capabilities for diverse tasks.

Optionally adjust the creativity slider to control the randomness of the response.

If you do not see response preferences, verify that your administrator has enabled "Enable bedrock model usage in General knowledge step for output refinement" on the Custom Permissions page.

For configuration instructions, see [Editing flows](editing-flows.md).

## UI agent
<a name="ui-agent-step"></a>

The UI agent step (Preview) lets your flows interact with public websites that do not require a login. The agent can autonomously navigate websites, click, type, read data, and produce structured outputs — all described in natural language.

**Writing effective instructions**
+ Be clear and specific about the task you want performed.
+ Use single, complete URLs (for example, "Go to https://example.com/reports").
+ Add constraints to narrow the scope (for example, "only look at the pricing section").
+ Specify when the agent should stop (for example, "stop after finding the first matching result").
+ Define the output format if needed (for example, "return the data as a bulleted list").

For configuration instructions, see [Editing flows](editing-flows.md).

**Note**  
UI agent is currently in Preview. Some websites implement anti-automation measures such as CAPTCHA challenges that may limit UI agent capabilities. Websites that require login are not currently supported.

## Create Image
<a name="create-image-step"></a>

The Create Image step generates AI images from text prompts. In the advanced settings, you can configure the creativity level, excluded terms, and the image seed.

For configuration instructions, see [Editing flows](editing-flows.md).

# Flow logic steps
<a name="flow-logic-steps"></a>

Flow logic steps control how your flow runs.

## Reasoning Group
<a name="reasoning-group-step"></a>

Reasoning groups give you control over how parts of your flow run using natural language instructions. A reasoning group contains its own set of steps — like an isolated workflow within your larger workflow — that runs based on conditions you define. You can add most step types to a reasoning group, except reasoning groups and research steps. Templates are available to help you get started.

### Loops
<a name="reasoning-group-loops"></a>

You can repeat the steps in a group for each value in a list from a previous step's output. Reference the previous step in your instructions, and the Flows runtime handles the iteration for you. For example, if a previous step returns a list of customer emails, a reasoning group can process each email in turn.
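The iteration that a reasoning group performs is equivalent to a for-each loop over the referenced list. As an illustration only (these functions are stand-ins, not product code), processing each customer email from a previous step looks like:

```python
def previous_step_output() -> list[str]:
    # Stand-in for a previous step that returns a list of customer emails
    return ["ana@example.com", "li@example.com", "sam@example.com"]

def run_group_steps(email: str) -> str:
    # Stand-in for the steps inside the reasoning group
    return f"processed {email}"

# The Flows runtime performs the equivalent of this loop for you
results = [run_group_steps(email) for email in previous_step_output()]
```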

### Conditions
<a name="reasoning-group-conditions"></a>

You can run the steps in a group based on natural language conditions that evaluate a previous step's output. For example, "Run if @Customer Priority is HIGH PRIORITY" routes only urgent items through the group's steps.
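A condition like this acts as a gate in front of the group's steps. The following sketch shows the equivalent logic; the step name and values are illustrative only:

```python
def step_output(name: str) -> str:
    # Stand-in for a previous step's recorded output
    outputs = {"Customer Priority": "HIGH PRIORITY"}
    return outputs[name]

def should_run_group() -> bool:
    # Stand-in for the condition "Run if @Customer Priority is HIGH PRIORITY"
    return step_output("Customer Priority") == "HIGH PRIORITY"

group_ran = False
if should_run_group():
    group_ran = True  # the group's steps would run here
```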

### Validation
<a name="reasoning-group-validation"></a>

You can check inputs or outputs before proceeding. For example, a reasoning group can verify that a required field is present before passing data to an action step.
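The validation behaves like a presence check that runs before the action step. A minimal sketch, assuming a record with hypothetical `customer_id` and `email` fields:

```python
def has_required_fields(record: dict) -> bool:
    # Verify that required fields are present and non-empty
    # before the data is passed to an action step
    required = ("customer_id", "email")
    return all(record.get(field) for field in required)
```

A record such as `{"customer_id": "C-100"}` would fail this check because `email` is missing, so the group would stop before the action runs.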

For configuration instructions, see [Editing flows](editing-flows.md). For reasoning group limits, see [Quick Flows limits](quick-flows-limits.md).

# Data insight steps
<a name="data-insight-steps"></a>

Data insight steps retrieve information from your Quick data sources.

## Quick data
<a name="quick-suite-data-step"></a>

The Quick data step retrieves responses from your spaces and knowledge bases. Write a prompt describing what content to retrieve, and optionally link specific resources. By default, responses are generated from all knowledge sources the user has access to.

The system searches across indexed documents in your spaces, including knowledge bases from connected sources. For more information about setting up spaces and knowledge bases, see [Working with integrations](working-with-integrations.md).

For configuration instructions, see [Editing flows](editing-flows.md).

## Dashboards and topics
<a name="dashboards-and-topics-step"></a>

The Dashboards and topics step generates insights from your existing Amazon Quick Sight dashboards and topics. Responses can include charts, graphs, tables, and other visualizations.

Select a Quick Sight source (Dashboard or Topic) and write a prompt describing the insights you want. You can specify filters, date ranges, and other criteria in natural language.

For configuration instructions, see [Editing flows](editing-flows.md).

# Action steps
<a name="action-steps-in-flows"></a>

Action steps let your flows perform read or write operations in connected applications. The available operations depend on the action integrations that you have configured or that have been shared with you. For prerequisites, authentication methods, and available integrations, see [Working with integrations](working-with-integrations.md).

## Action parameters
<a name="action-parameters"></a>

Some actions require or accept parameters such as filters or specific input values. To determine which parameters an action accepts, you can ask Quick — for example, "What parameters does list emails accept when filtering emails?"

Parameters can be:
+ Supplied directly in the prompt (for example, "List emails with subject containing 'quarterly report'")
+ Referenced from a previous step using @ references (for example, a user input step that collects a search term)
+ Acquired dynamically during flow execution based on the flow context
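However a value is sourced, the action ultimately receives a set of named parameters. The following sketch illustrates the first two sources; the parameter names are hypothetical, not the schema of any real connector:

```python
def build_parameters(prompt_value: str, referenced_value: str) -> dict:
    # Each parameter can come from a different source
    return {
        "subject_contains": prompt_value,   # supplied directly in the prompt
        "from_address": referenced_value,   # @ reference to a user input step
    }

params = build_parameters("quarterly report", "finance@example.com")
# A "list emails" style action would then be invoked with these parameters
```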

## Working with list results
<a name="working-with-list-results"></a>

When you use an action that returns a list of items (for example, listing emails or tickets), the results may not include every item or every detail. This is because many applications return results in batches rather than all at once, and Quick retrieves only the first batch to keep your flow fast and responsive.

If you need the full details for each item in a list, you can use a reasoning group to go through the results one at a time and retrieve the complete information for each. For example, you might list your open support tickets in one step, then use a reasoning group to get the full details of each ticket.

When doing this, be mindful of how many items you are processing. A large number of items means more steps to run, which increases the time your flow takes to complete and the amount of data in your results. Where possible, use filters in your initial list action to narrow down the results before processing them.
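The pattern described above (retrieve the first batch of a list, then fetch full details item by item) can be sketched as follows. The ticket functions are illustrative stand-ins, not a Quick Flows API:

```python
def list_open_tickets(limit: int = 25) -> list[dict]:
    # Stand-in for a list action: the application returns results in
    # batches, and only the first batch is retrieved
    all_tickets = [{"id": i, "status": "open"} for i in range(100)]
    return all_tickets[:limit]

def get_ticket_details(ticket_id: int) -> dict:
    # Stand-in for a per-item detail action, as a reasoning group would call it
    return {"id": ticket_id, "priority": "high", "assignee": "on-call"}

first_batch = list_open_tickets(limit=25)               # only the first batch
details = [get_ticket_details(t["id"]) for t in first_batch]
# Filtering in the initial list action keeps this per-item loop small
```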

For configuration instructions, see [Editing flows](editing-flows.md).

# User input steps
<a name="user-input-steps"></a>

User input steps collect information from users when they run a flow.

## Text
<a name="text-input-step"></a>

The text input step collects free-form text from users. You can set placeholder text and a default value, and allow users to override the default at runtime.

Use placeholder text to guide users on what to enter. For example, you can present a set of options like "Enter 1 for Sales, 2 for Marketing, 3 for Support" to help users provide structured input that your flow can act on.
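A prompt like the one above turns free-form text into a value that later steps can branch on. A sketch of the mapping, with an assumed fallback of Support for unrecognized input:

```python
DEPARTMENTS = {"1": "Sales", "2": "Marketing", "3": "Support"}

def parse_choice(user_text: str, default: str = "Support") -> str:
    # Map the user's "1"/"2"/"3" entry to a department name,
    # falling back to the default for anything else
    return DEPARTMENTS.get(user_text.strip(), default)
```

For example, an entry of `" 2 "` maps to Marketing, while unexpected input falls back to the default.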

For configuration instructions, see [Editing flows](editing-flows.md).

## Files
<a name="file-upload-step"></a>

The file upload step accepts a document, image, or video from users. You can upload a default file and allow users to override it at runtime. You can upload one file per step.

File uploads are subject to the same size and format restrictions as uploading files in chat. If your content exceeds these limits, consider using a space or knowledge base to process the request instead.

For configuration instructions, see [Editing flows](editing-flows.md).