

# AWS Transfer Family managed workflows
<a name="transfer-workflows"></a>

AWS Transfer Family supports managed workflows for file processing. With managed workflows, you can kick off a workflow after a file has been transferred over SFTP, FTPS, or FTP. Using this feature, you can securely and cost-effectively meet your compliance requirements for business-to-business (B2B) file exchanges by coordinating all of the steps required for file processing. In addition, you benefit from end-to-end auditing and visibility.

![\[Flow diagram showing how managed workflows assist with file processing.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/workflows-diagram.png)


By orchestrating file-processing tasks, managed workflows help you preprocess data before it is consumed by your downstream applications. Such file-processing tasks might include:
+ Moving files to user-specific folders.
+ Decrypting files as part of a workflow.
+ Tagging files.
+ Performing custom processing by creating and attaching an AWS Lambda function to a workflow.
+ Sending notifications when a file has been successfully transferred. (For a blog post that details this use case, see [Customize file delivery notifications using AWS Transfer Family managed workflows](https://aws.amazon.com/blogs/storage/customize-file-delivery-notifications-using-aws-transfer-family-managed-workflows/).)

To quickly replicate and standardize common post-upload file processing tasks spanning multiple business units in your organization, you can deploy workflows by using infrastructure as code (IaC). You can specify a managed workflow to be initiated on files that are uploaded in full. You can also specify a different managed workflow to be initiated on files that are only partially uploaded because of a premature session disconnect. Built-in exception handling helps you quickly react to file-processing outcomes, while offering you control over how to handle failures. In addition, each workflow step produces detailed logs, which you can audit to trace the data lineage.

To get started, perform the following tasks:

1. Set up your workflow to contain preprocessing actions, such as copying, tagging, and other steps based on your requirements. See [Create a workflow](create-workflow.md) for details.

1. Configure an execution role, which Transfer Family uses to run the workflow. See [IAM policies for workflows](workflow-execution-role.md) for details.

1. Map the workflow to a server, so that on file arrival, the actions specified in this workflow are evaluated and initiated in real time. See [Configure and run a workflow](create-workflow.md#configure-workflow) for details.
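The three setup tasks above can also be sketched with the AWS SDK for Python (boto3). The `create_workflow` and `update_server` operations are part of the Transfer Family API; the step names, bucket, role ARN, and server ID below are hypothetical placeholders.

```python
def build_workflow_request(description, copy_bucket, copy_prefix):
    """Build the kwargs for transfer.create_workflow: one nominal copy step,
    plus an exception handler that tags files whose processing failed."""
    return {
        "Description": description,
        "Steps": [
            {
                "Type": "COPY",
                "CopyStepDetails": {
                    "Name": "CopyToArchive",  # hypothetical step name
                    "DestinationFileLocation": {
                        "S3FileLocation": {"Bucket": copy_bucket, "Key": copy_prefix}
                    },
                },
            }
        ],
        "OnExceptionSteps": [
            {
                "Type": "TAG",
                "TagStepDetails": {
                    "Name": "TagFailure",  # hypothetical step name
                    "Tags": [{"Key": "status", "Value": "failed"}],
                },
            }
        ],
    }

def attach_workflow(server_id, workflow_id, execution_role_arn):
    """Associate the workflow with a server so it runs on fully uploaded files."""
    import boto3  # imported here so the builder above stays dependency-free
    transfer = boto3.client("transfer")
    transfer.update_server(
        ServerId=server_id,
        WorkflowDetails={
            "OnUpload": [
                {"WorkflowId": workflow_id, "ExecutionRole": execution_role_arn}
            ]
        },
    )
```

With credentials and real resource IDs, you might then call `transfer.create_workflow(**build_workflow_request(...))` and pass the returned `WorkflowId` to `attach_workflow`.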

**Related information**
+ To monitor your workflow executions, see [Using CloudWatch metrics for Transfer Family servers](metrics.md).
+ For detailed execution logs and troubleshooting information, see [Troubleshoot workflow-related errors using Amazon CloudWatch](workflow-issues.md#workflows-cloudwatch-errors).
+ Transfer Family provides a blog post and a workshop that walk you through building a file transfer solution. This solution uses AWS Transfer Family for managed SFTP/FTPS endpoints, and Amazon Cognito and DynamoDB for user management.

  The blog post is available at [Using Amazon Cognito as an identity provider with AWS Transfer Family and Amazon S3](https://aws.amazon.com/blogs/storage/using-amazon-cognito-as-an-identity-provider-with-aws-transfer-family-and-amazon-s3/). You can view the details in the [Transfer Family SFTP workshop](https://catalog.workshops.aws/transfer-family-sftp/en-US).
+ The following video provides a brief introduction to Transfer Family managed workflows.  
[![AWS Videos](http://img.youtube.com/vi/t-iNqCRospw/0.jpg)](https://www.youtube.com/watch?v=t-iNqCRospw)
+ The following workshop provides hands-on labs to build fully automated and event-driven workflows that transfer files between external SFTP servers and Amazon S3, with common pre- and post-processing of those files: [Event-driven MFT workshop](https://catalog.us-east-1.prod.workshops.aws/workshops/e55c90e0-bbb0-47e1-be83-6bafa3a59a8a/en-US).

  This video provides a walkthrough of the workshop.  
[![AWS Videos](http://img.youtube.com/vi/oojopisG4lA/0.jpg)](https://www.youtube.com/watch?v=oojopisG4lA)

**Topics**
+ [Create a workflow](create-workflow.md)
+ [Use predefined steps](nominal-steps-workflow.md)
+ [Use custom file-processing steps](custom-step-details.md)
+ [IAM policies for workflows](workflow-execution-role.md)
+ [Exception handling for a workflow](#exception-workflow)
+ [Monitor workflow execution](cloudwatch-workflow.md)
+ [Create a workflow from a template](workflow-template.md)
+ [Remove a workflow from a Transfer Family server](#remove-workflow-association)
+ [Managed workflows restrictions and limitations](#limitations-workflow)

For more help getting started with managed workflows, see the following resources: 
+ [AWS Transfer Family managed workflows](https://www.youtube.com/watch?v=t-iNqCRospw) demo video
+ [Building a cloud-native file transfer platform using AWS Transfer Family workflows](https://aws.amazon.com/blogs/architecture/building-a-cloud-native-file-transfer-platform-using-aws-transfer-family-workflows/) blog post

# Create a workflow
<a name="create-workflow"></a>

You can create a managed workflow by using the AWS Management Console, as described in this topic. To make the workflow creation process as easy as possible, contextual help panels are available for most of the sections in the console.

A workflow has two kinds of steps:
+ **Nominal steps** – Nominal steps are file-processing steps that you want to apply to incoming files. If you select more than one nominal step, each step is processed in a linear sequence.
+ **Exception-handling steps** – Exception handlers are file-processing steps that AWS Transfer Family executes in case any nominal steps fail or result in validation errors.

**Create a workflow**

1. Open the AWS Transfer Family console at [https://console.aws.amazon.com/transfer/](https://console.aws.amazon.com/transfer/).

1. In the left navigation pane, choose **Workflows**.

1. On the **Workflows** page, choose **Create workflow**.

1. On the **Create workflow** page, enter a description. This description appears on the **Workflows** page.

1. In the **Nominal steps** section, choose **Add step**. Add one or more steps.

   1. Choose a step type from the available options. For more information about the various step types, see [Use predefined steps](nominal-steps-workflow.md).

   1. Choose **Next**, then configure parameters for the step. 

   1. Choose **Next**, then review the details for the step. 

   1. Choose **Create step** to add the step and continue.

   1. Continue adding steps as needed. The maximum number of steps in a workflow is 8.

   1. After you have added all of the necessary nominal steps, scroll down to the **Exception handlers – *optional*** section, and choose **Add step**. 
**Note**  
So that you are informed of failures in real time, we recommend that you set up exception handlers and steps to execute when your workflow fails.

1. To configure exception handlers, add steps in the same manner as described previously. If a file causes any step to throw an exception, your exception handlers are invoked one by one. 

1. (Optional) Scroll down to the **Tags** section, and add tags for your workflow.

1. Review the configuration, and choose **Create workflow**. 
**Important**  
After you've created a workflow, you can't edit it, so make sure to review the configuration carefully.

## Configure and run a workflow
<a name="configure-workflow"></a>

Before you can run a workflow, you need to associate it with a Transfer Family server.

**To configure Transfer Family to run a workflow on uploaded files**

1. Open the AWS Transfer Family console at [https://console.aws.amazon.com/transfer/](https://console.aws.amazon.com/transfer/).

1. In the left navigation pane, choose **Servers**. 
   + To add the workflow to an existing server, choose the server that you want to use for your workflow.
   + Alternatively, create a new server and add the workflow to it. For more information, see [Configuring an SFTP, FTPS, or FTP server endpoint](tf-server-endpoint.md).

1. On the details page for the server, scroll down to the **Additional details** section, and then choose **Edit**. 
**Note**  
 By default, servers do not have any associated workflows. You use the **Additional details** section to associate a workflow with the selected server. 

1. On the **Edit additional details** page, in the **Managed workflows** section, select a workflow to be run on all uploads.
**Note**  
If you do not already have a workflow, choose **Create a new Workflow** to create one.

   1. Choose the workflow ID to use. 

   1. Choose an execution role. This is the role that Transfer Family assumes when executing the workflow's steps. For more information, see [IAM policies for workflows](workflow-execution-role.md). Choose **Save**.  
![\[The Managed workflows screen, showing values for workflow and execution role.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/workflows-addtoserver.png)

**Note**  
If you no longer want a workflow to be associated with the server, you can remove the association. For details, see [Remove a workflow from a Transfer Family server](transfer-workflows.md#remove-workflow-association).

**To execute a workflow**

To execute a workflow, you upload a file to a Transfer Family server that you configured with an associated workflow.

**Note**  
Anytime you remove a workflow from a server and replace it with a new one, or update server configuration (which impacts a workflow's execution role), you must wait approximately 10 minutes before executing the new workflow. The Transfer Family server caches the workflow details, and it takes 10 minutes for the server to refresh its cache.  
Additionally, you must log out of any active SFTP sessions, and then log back in after the 10-minute waiting period to see the changes.

**Example**  

```
# Execute a workflow
> sftp bob@s-1234567890abcdef0.server.transfer.us-east-1.amazonaws.com

Connected to s-1234567890abcdef0.server.transfer.us-east-1.amazonaws.com.
sftp> put doc1.pdf
Uploading doc1.pdf to /amzn-s3-demo-bucket/home/users/bob/doc1.pdf
doc1.pdf                                                                    100% 5013KB 601.0KB/s   00:08    
sftp> exit
>
```

After your file has been uploaded, the action defined is performed on your file. For example, if your workflow contains a copy step, the file is copied to the location that you defined in that step. You can use Amazon CloudWatch Logs to track the steps that executed and their execution status.

## View workflow details
<a name="view-details-workflow"></a>

You can view details about previously created workflows and about workflow executions. To view these details, you can use the console or the AWS Command Line Interface (AWS CLI). 

------
#### [ Console ]

**View workflow details**

1. Open the AWS Transfer Family console at [https://console.aws.amazon.com/transfer/](https://console.aws.amazon.com/transfer/).

1. In the left navigation pane, choose **Workflows**. 

1. On the **Workflows** page, choose a workflow. 

   The workflow details page opens.   
![\[The Workflows detail screen for a Transfer Family workflow, showing the description, steps, exception handlers, and in-flight executions.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/workflows-overview.png)

------
#### [ CLI ]

To view the workflow details, use the `describe-workflow` CLI command, as shown in the following example. Replace the workflow ID `w-1234567890abcdef0` with your own value. For more information, see [describe-workflow](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/transfer/describe-workflow.html) in the *AWS CLI Command Reference*.

```
# View Workflow details
> aws transfer describe-workflow --workflow-id w-1234567890abcdef0
{
    "Workflow": {
        "Arn": "arn:aws:transfer:us-east-1:111122223333:workflow/w-1234567890abcdef0",
        "WorkflowId": "w-1234567890abcdef0",
        "Name": "Copy file to shared_files",
        "Steps": [
            {
                "Type": "COPY",
                "CopyStepDetails": {
                    "Name": "Copy to shared",
                    "DestinationFileLocation": {
                        "S3FileLocation": {
                            "Bucket": "amzn-s3-demo-bucket",
                            "Key": "home/shared_files/"
                        }
                    }
                }
            }
        ],
        "OnException": {}
    }
}
```

------

If your workflow was created as part of an AWS CloudFormation stack, you can manage the workflow using the CloudFormation console ([https://console.aws.amazon.com/cloudformation/](https://console.aws.amazon.com/cloudformation/)).

![\[The Workflows details screen for a workflow that is part of an AWS CloudFormation stack, showing the message that you manage this workflow in CloudFormation.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/workflows-cloudformation-link.png)


# Use predefined steps
<a name="nominal-steps-workflow"></a>

When you're creating a workflow, you can add one of the predefined steps discussed in this topic. You can also add your own custom file-processing steps. For more information, see [Use custom file-processing steps](custom-step-details.md).

**Topics**
+ [Copy file](#copy-step-details)
+ [Decrypt file](#decrypt-step-details)
+ [Tag file](#tag-step-details)
+ [Delete file](#delete-step-details)
+ [Named variables for workflows](#workflow-named-variables)
+ [Example tag and delete workflow](#sourcefile-workflow)

## Copy file
<a name="copy-step-details"></a>

A copy file step creates a copy of the uploaded file in a new Amazon S3 location. Currently, you can use a copy file step only with Amazon S3.

The following copy file step copies files into the `test` folder in *amzn-s3-demo-destination-bucket*. 

If the copy file step is not the first step of your workflow, you can specify the **File location**. By specifying the file location, you can copy either the file that was used in the previous step or the original file that was uploaded. You can use this feature to make multiple copies of the original file while keeping the source file intact for file archival and records retention. For an example, see [Example tag and delete workflow](#sourcefile-workflow).

![\[Workflow screen with Copy the file created from previous step... button selected.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/workflows-step-copy.png)


### Provide the bucket and key details
<a name="copy-provide-bucket"></a>

You must provide the bucket name and a key for the destination of the copy file step. The key can be either a path name or a file name. Whether the key is treated as a path name or a file name is determined by whether you end the key with the forward slash (`/`) character.

If the final character is `/`, your file is copied to the folder, and its name does not change. If the final character is alphanumeric, your uploaded file is renamed to the key value. In this case, if a file with that name already exists, the behavior depends on the setting for the **Overwrite existing** field.
+ If **Overwrite existing** is selected, the existing file is replaced with the file being processed.
+ If **Overwrite existing** is not selected, nothing happens, and the workflow processing stops.
**Tip**  
Concurrent writes to the same file path can result in unexpected behavior when files are overwritten.

For example, if your key value is `test/`, your uploaded files are copied to the `test` folder. If your key value is `test/today` (and **Overwrite existing** is selected), every file that you upload is copied to a file named `today` in the `test` folder, and each succeeding file overwrites the previous one.

**Note**  
Amazon S3 supports buckets and objects, and there is no hierarchy. However, you can use prefixes and delimiters in object key names to imply a hierarchy and organize your data in a way similar to folders.
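The key-naming rules above can be sketched as a small function. This is an illustration of the described behavior, not Transfer Family's actual implementation, and `resolve_copy_destination` is a hypothetical name.

```python
def resolve_copy_destination(key, source_name, overwrite_existing, existing_objects):
    """Return the destination object key for a copy step, or None if the
    workflow would stop because the destination exists and overwriting is off."""
    if key.endswith("/"):
        # Key is treated as a path: the file keeps its name inside that "folder".
        destination = key + source_name
    else:
        # Key is treated as a file name: the uploaded file is renamed to the key.
        destination = key
    if destination in existing_objects and not overwrite_existing:
        return None  # nothing happens, and workflow processing stops
    return destination

# A key of "test/" copies doc1.pdf to test/doc1.pdf; a key of "test/today"
# renames every upload to test/today.
print(resolve_copy_destination("test/", "doc1.pdf", False, set()))   # test/doc1.pdf
print(resolve_copy_destination("test/today", "doc1.pdf", True, {"test/today"}))  # test/today
```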

### Use a named variable in a copy file step
<a name="named-variable-copy"></a>

In a copy file step, you can use a variable to dynamically copy your files into user-specific folders. Currently, you can use `${transfer:UserName}` or `${transfer:UploadDate}` as a variable to copy files to a destination location for the given user who's uploading files, or based on the current date.

In the following example, if the user `richard-roe` uploads a file, it gets copied into the `amzn-s3-demo-destination-bucket/richard-roe/processed/` folder. If the user `mary-major` uploads a file, it gets copied into the `amzn-s3-demo-destination-bucket/mary-major/processed/` folder.

![\[Parameter screen for a copy step, showing the bucket and the key, parameterized using UserName.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/workflows-step-copy-dynamic.png)


Similarly, you can use `${transfer:UploadDate}` as a variable to copy files to a destination location named for the current date. In the following example, if you set the destination to `${transfer:UploadDate}/processed` on February 1, 2022, files uploaded are copied into the `amzn-s3-demo-destination-bucket/2022-02-01/processed/` folder.

![\[Parameter screen for a copy step, showing the bucket and the key, parameterized using UploadDate.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/workflows-step-copy-dynamic-date.png)


You can also use both of these variables together, combining their functionality. For example, you could set the **Destination key prefix** to **folder/${transfer:UserName}/${transfer:UploadDate}/**, which would create nested folders, for example `folder/marymajor/2023-01-05/`.
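As an illustration, the substitution behaves like the following sketch. `resolve_destination_prefix` is a hypothetical helper; Transfer Family performs this expansion server-side, and `${transfer:UploadDate}` resolves in `YYYY-MM-DD` format.

```python
from datetime import date

def resolve_destination_prefix(prefix, username, upload_date):
    """Expand the two supported named variables in a destination key prefix."""
    return (prefix
            .replace("${transfer:UserName}", username)
            .replace("${transfer:UploadDate}", upload_date.isoformat()))

print(resolve_destination_prefix(
    "folder/${transfer:UserName}/${transfer:UploadDate}/",
    "marymajor", date(2023, 1, 5)))
# folder/marymajor/2023-01-05/
```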

### IAM permissions for copy step
<a name="copy-step-iam"></a>

To allow a copy step to succeed, make sure the execution role for your workflow contains the following permissions.

```
{
    "Sid": "ListBucket",
    "Effect": "Allow",
    "Action": "s3:ListBucket",
    "Resource": [
        "arn:aws:s3:::amzn-s3-demo-destination-bucket"
    ]
}, {
    "Sid": "HomeDirObjectAccess",
    "Effect": "Allow",
    "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObjectVersion",
        "s3:DeleteObject",
        "s3:GetObjectVersion"
    ],
    "Resource": "arn:aws:s3:::amzn-s3-demo-destination-bucket/*"
}
```

**Note**  
The `s3:ListBucket` permission is only necessary if you do not select **Overwrite existing**. This permission checks your bucket to see if a file with the same name already exists. If you have selected **Overwrite existing**, the workflow doesn't need to check for the file, and can just write it.  
If your Amazon S3 files have tags, you need to add one or two permissions to your IAM policy.  
Add `s3:GetObjectTagging` for an Amazon S3 file that isn't versioned.
Add `s3:GetObjectVersionTagging` for an Amazon S3 file that is versioned.

## Decrypt file
<a name="decrypt-step-details"></a>

The AWS Storage Blog has a post that describes how to decrypt files without writing any code by using Transfer Family managed workflows: [Encrypt and decrypt files with PGP and AWS Transfer Family](https://aws.amazon.com/blogs/storage/encrypt-and-decrypt-files-with-pgp-and-aws-transfer-family/).

### Supported symmetric encryption algorithms
<a name="symmetric-algorithms"></a>

For PGP decryption, Transfer Family supports symmetric encryption algorithms that are used to encrypt the actual file data within PGP files.
+ For detailed information about supported symmetric encryption algorithms, see [PGP symmetric encryption algorithms](key-management.md#pgp-symmetric-algorithms).
+ For information about PGP key pair algorithms used with these symmetric algorithms, see [PGP key pair algorithms](key-management.md#pgp-key-algorithms).

### Use PGP decryption in your workflow
<a name="configure-decryption"></a>

Transfer Family has built-in support for Pretty Good Privacy (PGP) decryption. You can use PGP decryption on files that are uploaded over SFTP, FTPS, or FTP to Amazon Simple Storage Service (Amazon S3) or Amazon Elastic File System (Amazon EFS). 

To use PGP decryption, you must create and store the PGP private keys that will be used for decryption of your files. Your users can then encrypt files by using corresponding PGP encryption keys before uploading the files to your Transfer Family server. After you receive the encrypted files, you can decrypt those files in your workflow. For a detailed tutorial, see [Setting up a managed workflow for decrypting a file](workflow-decrypt-tutorial.md).

For information about supported PGP algorithms and recommendations, see [PGP encryption and decryption algorithms](key-management.md#pgp-encryption-algorithms).

**To use PGP decryption in your workflow**

1. Identify a Transfer Family server to host your workflow, or create a new one. You need to have the server ID before you can store your PGP keys in AWS Secrets Manager with the correct secret name.

1. Store your PGP key in AWS Secrets Manager under the required secret name. For details, see [Manage PGP keys](manage-pgp-keys.md). Workflows can automatically locate the correct PGP key to be used for decryption based on the secret name in Secrets Manager.
**Note**  
When you store secrets in Secrets Manager, your AWS account incurs charges. For information about pricing, see [AWS Secrets Manager Pricing](https://aws.amazon.com/secrets-manager/pricing/).

1. Encrypt a file by using your PGP key pair. (For a list of supported clients, see [Supported PGP clients](pgp-key-clients.md).) If you are using the command line, run the following command. To use this command, replace `username@example.com` with the email address that you used to create the PGP key pair. Replace `testfile.txt` with the name of the file that you want to encrypt. 

   ```
   gpg -e -r username@example.com testfile.txt
   ```
**Important**  
When encrypting files for use with AWS Transfer Family workflows, always specify a non-anonymous recipient by using the `-r` parameter. Anonymous encryption (without a specified recipient) can cause decryption failures in the workflow, because the system can't identify which key to use for decryption. For debugging information, see [Troubleshoot anonymous recipient encryption issues](workflow-issues.md#workflows-decrypt-anonymous). 

1. Upload the encrypted file to your Transfer Family server.

1. Configure a decryption step in your workflow. For more information, see [Add a decryption step](#decrypt-step-procedure).
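The secret name that step 2 refers to follows the naming convention described in [Manage PGP keys](manage-pgp-keys.md). As a sketch under that assumption (verify the exact names against that topic), a helper might build the name like this:

```python
def pgp_secret_name(server_id, username=None):
    """Build the Secrets Manager secret name that a workflow looks up for PGP
    decryption. Assumed convention: a per-server default secret, or a
    per-user secret when a username is given."""
    if username is None:
        return f"aws/transfer/{server_id}/@pgp-default"
    return f"aws/transfer/{server_id}/{username}"

print(pgp_secret_name("s-1234567890abcdef0"))
# aws/transfer/s-1234567890abcdef0/@pgp-default
```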

### Add a decryption step
<a name="decrypt-step-procedure"></a>

A decryption step decrypts an encrypted file that was uploaded to Amazon S3 or Amazon EFS as part of your workflow. For details about configuring decryption, see [Use PGP decryption in your workflow](#configure-decryption).

When you create your decryption step for a workflow, you must specify the destination for the decrypted files. You must also select whether to overwrite existing files if a file already exists at the destination location. You can monitor the decryption workflow results and get audit logs for each file in real time by using Amazon CloudWatch Logs.

After you choose the **Decrypt file** type for your step, the **Configure parameters** page appears. Fill in the values for the **Configure PGP decryption parameters** section.

The available options are as follows:
+ **Step name** – Enter a descriptive name for the step.
+ **File location** – By specifying the file location, you can decrypt either the file that was used in the previous step or the original file that was uploaded. 
**Note**  
This parameter is not available if this step is the first step of the workflow.
+ **Destination for decrypted files** – Choose an Amazon S3 bucket or an Amazon EFS file system as the destination for the decrypted file.
  + If you choose Amazon S3, you must provide a destination bucket name and a destination key prefix. To parameterize the destination key prefix by username, enter **${transfer:UserName}** for **Destination key prefix**. Similarly, to parameterize it by upload date, enter **${transfer:UploadDate}** for **Destination key prefix**.
  + If you choose Amazon EFS, you must provide a destination file system and path.
**Note**  
The storage option that you choose here must match the storage system that's used by the Transfer Family server with which this workflow is associated. Otherwise, you will receive an error when you attempt to run this workflow.
+ **Overwrite existing** – If you upload a file, and a file with the same filename already exists at the destination, the behavior depends on the setting for this parameter:
  + If **Overwrite existing** is selected, the existing file is replaced with the file being processed.
  + If **Overwrite existing** is not selected, nothing happens, and the workflow processing stops.
**Tip**  
Concurrent writes to the same file path can result in unexpected behavior when files are overwritten.

The following screenshot shows an example of the options that you might choose for your decrypt file step. 

![\[The AWS Transfer Family console, showing the Configure PGP decryption parameters section with sample values.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/workflows-step-decrypt-details.png)


### IAM permissions for decrypt step
<a name="decrypt-step-iam"></a>

To allow a decrypt step to succeed, make sure the execution role for your workflow contains the following permissions.

```
{
    "Sid": "ListBucket",
    "Effect": "Allow",
    "Action": "s3:ListBucket",
    "Resource": [
        "arn:aws:s3:::amzn-s3-demo-destination-bucket"
    ]
}, {
    "Sid": "HomeDirObjectAccess",
    "Effect": "Allow",
    "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObjectVersion",
        "s3:DeleteObject",
        "s3:GetObjectVersion"
    ],
    "Resource": "arn:aws:s3:::amzn-s3-demo-destination-bucket/*"
}, {
    "Sid": "Decrypt",
    "Effect": "Allow",
    "Action": [
        "secretsmanager:GetSecretValue"
    ],
    "Resource": "arn:aws:secretsmanager:region:account-id:secret:aws/transfer/*"
}
```

**Note**  
The `s3:ListBucket` permission is only necessary if you do not select **Overwrite existing**. This permission checks your bucket to see if a file with the same name already exists. If you have selected **Overwrite existing**, the workflow doesn't need to check for the file, and can just write it.  
If your Amazon S3 files have tags, you need to add one or two permissions to your IAM policy.  
Add `s3:GetObjectTagging` for an Amazon S3 file that isn't versioned.
Add `s3:GetObjectVersionTagging` for an Amazon S3 file that is versioned.
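Putting the notes above together, the statements for a copy or decrypt step can be assembled conditionally, as in this sketch. It's a review aid with hypothetical helper names, not a complete execution-role policy; a decrypt step additionally needs the `secretsmanager:GetSecretValue` statement shown earlier.

```python
def copy_or_decrypt_statements(bucket, overwrite_existing,
                               files_are_tagged, bucket_is_versioned):
    """Assemble the S3 statements described in the notes: ListBucket only when
    Overwrite existing is off, and a tagging read only when source files
    carry tags (versioned vs. unversioned variant)."""
    object_actions = ["s3:PutObject", "s3:GetObject",
                      "s3:DeleteObjectVersion", "s3:DeleteObject",
                      "s3:GetObjectVersion"]
    if files_are_tagged:
        object_actions.append("s3:GetObjectVersionTagging" if bucket_is_versioned
                              else "s3:GetObjectTagging")
    statements = [{
        "Sid": "HomeDirObjectAccess",
        "Effect": "Allow",
        "Action": object_actions,
        "Resource": f"arn:aws:s3:::{bucket}/*",
    }]
    if not overwrite_existing:
        # Needed only to check whether a same-named file already exists.
        statements.insert(0, {
            "Sid": "ListBucket",
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": [f"arn:aws:s3:::{bucket}"],
        })
    return statements
```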

## Tag file
<a name="tag-step-details"></a>

To tag incoming files for further downstream processing, use a tag step. Enter the value of the tag that you would like to assign to the incoming files. Currently, the tag operation is supported only if you are using Amazon S3 for your Transfer Family server storage.

The following example tag step assigns `scan_outcome` and `clean` as the tag key and value, respectively.

![\[Workflows screen showing the details for a tagging step.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/workflows-step-tag.png)


To allow a tag step to succeed, make sure the execution role for your workflow contains the following permissions.

```
{
    "Sid": "Tag",
    "Effect": "Allow",
    "Action": [
        "s3:PutObjectTagging",
        "s3:PutObjectVersionTagging"
    ],
    "Resource": [
        "arn:aws:s3:::amzn-s3-demo-bucket/*"
    ]
}
```

**Note**  
If your workflow contains a tag step that runs before either a copy or decrypt step, you need to add one or two permissions to your IAM policy.  
Add `s3:GetObjectTagging` for an Amazon S3 file that isn't versioned.
Add `s3:GetObjectVersionTagging` for an Amazon S3 file that is versioned.

## Delete file
<a name="delete-step-details"></a>

To delete a processed file from a previous workflow step or to delete the originally uploaded file, use a delete file step.

![\[Workflows screen showing the details for a delete step.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/workflows-step-delete.png)


To allow a delete step to succeed, make sure the execution role for your workflow contains the following permissions.

```
{
    "Sid": "Delete",
    "Effect": "Allow",
    "Action": [
        "s3:DeleteObjectVersion",
        "s3:DeleteObject"
    ],
    "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*"
}
```

## Named variables for workflows
<a name="workflow-named-variables"></a>

For copy and decrypt steps, you can use a variable to dynamically perform actions. Currently, AWS Transfer Family supports the following named variables.
+ Use `${transfer:UserName}` to copy or decrypt files to a destination based on the user who's uploading the files.
+ Use `${transfer:UploadDate}` to copy or decrypt files to a destination location based on the current date.

## Example tag and delete workflow
<a name="sourcefile-workflow"></a>

The following example illustrates a workflow that tags incoming files that need to be processed by a downstream application, such as a data analytics platform. After tagging the incoming file, the workflow then deletes the originally uploaded file to save on storage costs.

------
#### [ Console ]

**Example tag and delete workflow**

1. Open the AWS Transfer Family console at [https://console.aws.amazon.com/transfer/](https://console.aws.amazon.com/transfer/).

1. In the left navigation pane, choose **Workflows**.

1. On the **Workflows** page, choose **Create workflow**.

1. On the **Create workflow** page, enter a description. This description appears on the **Workflows** page.

1. Add the first step (copy).

   1. In the **Nominal steps** section, choose **Add step**.

   1. Choose **Copy file**, then choose **Next**.

   1. Enter a step name, then select a destination bucket and a key prefix.  
![\[Workflows screen showing the details for a copy step, showing destination bucket and key prefix.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/workflows-step-copy-first-step.png)

   1. Choose **Next**, then review the details for the step. 

   1. Choose **Create step** to add the step and continue.

1. Add the second step (tag).

   1. In the **Nominal steps** section, choose **Add step**.

   1. Choose **Tag file**, then choose **Next**.

   1. Enter a step name.

   1. For **File location**, select **Tag the file created from previous step**.

   1. Enter a **Key** and **Value**.  
![\[The Configuration screen for a tagging workflow step, with the Tag the file created from previous step radio button selected.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/workflows-step-tag.png)

   1. Choose **Next**, then review the details for the step. 

   1. Choose **Create step** to add the step and continue.

1. Add the third step (delete).

   1. In the **Nominal steps** section, choose **Add step**.

   1. Choose **Delete file**, then choose **Next**.  
![\[The Configuration screen for a delete workflow step, with the Delete the original source file radio button selected.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/workflows-step-delete.png)

   1. Enter a step name.

   1. For **File location**, select **Delete the original source file**.

   1. Choose **Next**, then review the details for the step. 

   1. Choose **Create step** to add the step and continue.

1. Review the workflow configuration, and then choose **Create workflow**. 

------
#### [ CLI ]

**Example tag and delete workflow**

1. Save the following code into a file; for example, `tagAndMoveWorkflow.json`. Replace each `user input placeholder` with your own information. 

   ```
   [
      {
          "Type": "COPY",
          "CopyStepDetails": {
             "Name": "CopyStep",
             "DestinationFileLocation": {
                "S3FileLocation": {
                   "Bucket": "amzn-s3-demo-bucket",
                   "Key": "test/"
                }
             }
          }
      },
      {
          "Type": "TAG",
          "TagStepDetails": {
             "Name": "TagStep",
             "Tags": [
                {
                   "Key": "name",
                   "Value": "demo"
                }
             ],
             "SourceFileLocation": "${previous.file}"
          }
      },
      {
         "Type": "DELETE",
         "DeleteStepDetails":{
            "Name":"DeleteStep",
            "SourceFileLocation": "${original.file}"
         }
     }
   ]
   ```

   The first step copies the uploaded file to a new Amazon S3 location. The second step adds a tag (key-value pair) to the file (`previous.file`) that was copied to the new location. And, finally, the third step deletes the original file (`original.file`).

1. Create a workflow from the saved file. Replace each `user input placeholder` with your own information.

   ```
   aws transfer create-workflow --description "short-description" --steps file://path-to-file --region region-ID
   ```

   For example: 

   ```
   aws transfer create-workflow --description "copy-tag-delete workflow" --steps file://tagAndMoveWorkflow.json --region us-east-1
   ```
**Note**  
For more details about using files to load parameters, see [ How to load parameters from a file](https://docs.aws.amazon.com//cli/latest/userguide/cli-usage-parameters-file.html).

1. Update an existing server.
**Note**  
This step assumes you already have a Transfer Family server and you want to associate a workflow with it. If not, see [Configuring an SFTP, FTPS, or FTP server endpoint](tf-server-endpoint.md). Replace each `user input placeholder` with your own information.

   ```
   aws transfer update-server --server-id server-ID --region region-ID 
     --workflow-details '{"OnUpload":[{ "WorkflowId": "workflow-ID","ExecutionRole": "execution-role-ARN"}]}'
   ```

   For example:

   ```
   aws transfer update-server --server-id s-1234567890abcdef0 --region us-east-2 
     --workflow-details '{"OnUpload":[{ "WorkflowId": "w-abcdef01234567890","ExecutionRole": "arn:aws:iam::111111111111:role/nikki-wolf-execution-role"}]}'
   ```

------
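The `workflow-details` value that you pass to `update-server` is JSON with a fixed shape. As a minimal sketch (the workflow ID and execution role ARN below are placeholders), you can build and serialize the same payload programmatically before passing it to the AWS CLI or an SDK:

```python
import json

def build_workflow_details(workflow_id, execution_role_arn):
    """Build the value passed to --workflow-details (or WorkflowDetails in the SDK)."""
    # OnUpload runs for fully uploaded files; OnPartialUpload takes the same shape.
    return {
        "OnUpload": [
            {
                "WorkflowId": workflow_id,
                "ExecutionRole": execution_role_arn,
            }
        ]
    }

payload = json.dumps(build_workflow_details(
    "w-abcdef01234567890",
    "arn:aws:iam::111111111111:role/example-execution-role",
))
```

The `OnPartialUpload` key takes the same shape if you also want a workflow for partially uploaded files.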

# Use custom file-processing steps
<a name="custom-step-details"></a>

By using a custom file-processing step, you can bring your own file-processing logic by using AWS Lambda. Upon file arrival, a Transfer Family server invokes a Lambda function that contains custom file-processing logic, such as encrypting files, scanning for malware, or checking for incorrect file types. In the following example, the target AWS Lambda function processes the output file from the previous step.

![\[The custom step screen, with the Apply custom processing to the file created from previous step radio button selected, and a Lambda function displayed in the Target field.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/workflows-step-custom.png)


**Note**  
For an example Lambda function, see [Example Lambda function for a custom workflow step](#example-workflow-lambda). For example events (including the location for files passed into the Lambda), see [Example events sent to AWS Lambda upon file upload](#example-workflow-lambdas).

With a custom workflow step, you must configure the Lambda function to call the [SendWorkflowStepState](https://docs.aws.amazon.com/transfer/latest/APIReference/API_SendWorkflowStepState.html) API operation. `SendWorkflowStepState` notifies the workflow execution that the step completed with either a success or a failure status. Based on the status that the Lambda function reports, the workflow either continues with the next nominal step in the linear sequence or invokes an exception-handling step. 

If the Lambda function fails or times out, the step fails, and you see `StepErrored` in your CloudWatch logs. If the Lambda function is part of the nominal step and the function responds to `SendWorkflowStepState` with `Status="FAILURE"` or times out, the flow continues with the exception handler steps. In this case, the workflow does not continue to execute the remaining (if any) nominal steps. For more details, see [Exception handling for a workflow](transfer-workflows.md#exception-workflow).

When you call the `SendWorkflowStepState` API operation, you must send the following parameters:

```
{
    "ExecutionId": "string",
    "Status": "string",
    "Token": "string",
    "WorkflowId": "string"
}
```

You can extract the `ExecutionId`, `Token`, and `WorkflowId` from the input event that is passed when the Lambda function executes (examples are shown in the following sections). The `Status` value can be either `SUCCESS` or `FAILURE`. 
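As a sketch, the mapping from the Lambda input event to these parameters can be written as a small helper. The event fragment below mirrors the example events shown later in this topic; `workflow_step_state_params` is an illustrative name, not part of any AWS SDK:

```python
def workflow_step_state_params(event, status):
    """Extract SendWorkflowStepState parameters from a custom-step input event."""
    execution = event["serviceMetadata"]["executionDetails"]
    return {
        "WorkflowId": execution["workflowId"],
        "ExecutionId": execution["executionId"],
        "Token": event["token"],
        "Status": status,  # must be either "SUCCESS" or "FAILURE"
    }

# Trimmed example event, matching the shape that Transfer Family sends.
event = {
    "token": "MzI0Nzc4ZDktMGRmMi00MjFhLTgxMjUtYWZmZmRmODNkYjc0",
    "serviceMetadata": {
        "executionDetails": {
            "workflowId": "w-1234567890example",
            "executionId": "abcd1234-aa11-bb22-cc33-abcdef123456",
        }
    },
}
params = workflow_step_state_params(event, "SUCCESS")
```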

To be able to call the `SendWorkflowStepState` API operation from your Lambda function, you must use a version of the AWS SDK that was published after [Managed Workflows were introduced](doc-history.md#workflows-introduced).

## Using multiple Lambda functions consecutively
<a name="multiple-lambdas"></a>

When you use multiple custom steps one after the other, the **File location** option works differently than if you use only a single custom step. Transfer Family doesn't support passing the Lambda-processed file back to use as the next step's input. So, if you have multiple custom steps all configured to use the `previous.file` option, they all use the same file location (the input file location for the first custom step).

**Note**  
The `previous.file` setting also works differently if you have a predefined step (tag, copy, decrypt, or delete) after a custom step. If the predefined step is configured to use the `previous.file` setting, the predefined step uses the same input file that's used by the custom step. The processed file from the custom step is not passed to the predefined step. 

## Accessing a file after custom processing
<a name="process-uploaded-file"></a>

If you're using Amazon S3 as your storage, and if your workflow includes a custom step that performs actions on the originally uploaded file, subsequent steps cannot access that processed file. That is, any step after the custom step cannot reference the updated file from the custom step output. 

For example, suppose that you have the following three steps in your workflow. 
+ **Step 1** – Upload a file named `example-file.txt`.
+ **Step 2** – Invoke a Lambda function that changes `example-file.txt` in some way.
+ **Step 3** – Attempt to perform further processing on the updated version of `example-file.txt`.

If you configure the `sourceFileLocation` for Step 3 to be `${original.file}`, Step 3 uses the original file location from when the server uploaded the file to storage in Step 1. If you're using `${previous.file}` for Step 3, Step 3 reuses the file location that Step 2 used as input.

Therefore, Step 3 causes an error. For example, if Step 3 attempts to copy the updated `example-file.txt`, you receive the following error:

```
{
    "type": "StepErrored",
    "details": {
        "errorType": "NOT_FOUND",
        "errorMessage": "ETag constraint not met (Service: null; Status Code: 412; Error Code: null; Request ID: null; S3 Extended Request ID: null; Proxy: null)",
        "stepType": "COPY",
        "stepName": "CopyFile"
    }
}
```

This error occurs because the custom step modifies the entity tag (ETag) for `example-file.txt` so that it doesn't match the original file.

**Note**  
This behavior doesn't occur if you're using Amazon EFS because Amazon EFS doesn't use entity tags to identify files.

## Example events sent to AWS Lambda upon file upload
<a name="example-workflow-lambdas"></a>

The following examples show the events that are sent to AWS Lambda when a file upload is complete. One example uses a Transfer Family server where the domain is configured with Amazon S3. The other example uses a Transfer Family server where the domain uses Amazon EFS. 

------
#### [ Custom step that uses an Amazon S3 domain ]

```
{
    "token": "MzI0Nzc4ZDktMGRmMi00MjFhLTgxMjUtYWZmZmRmODNkYjc0",
    "serviceMetadata": {
        "executionDetails": {
            "workflowId": "w-1234567890example",
            "executionId": "abcd1234-aa11-bb22-cc33-abcdef123456"
        },
        "transferDetails": {
            "sessionId": "36688ff5d2deda8c",
            "userName": "myuser",
            "serverId": "s-example1234567890"
        }
    },
    "fileLocation": {
        "domain": "S3",
        "bucket": "amzn-s3-demo-bucket",
        "key": "path/to/mykey",
        "eTag": "d8e8fca2dc0f896fd7cb4cb0031ba249",
        "versionId": null
    }
}
```

------
#### [ Custom step that uses an Amazon EFS domain ]

```
{
    "token": "MTg0N2Y3N2UtNWI5Ny00ZmZlLTk5YTgtZTU3YzViYjllNmZm",
    "serviceMetadata": {
        "executionDetails": {
            "workflowId": "w-1234567890example",
            "executionId": "abcd1234-aa11-bb22-cc33-abcdef123456"
        },
        "transferDetails": {
            "sessionId": "36688ff5d2deda8c",
            "userName": "myuser",
            "serverId": "s-example1234567890"
        }
    },
    "fileLocation": {
        "domain": "EFS",
        "fileSystemId": "fs-1234567",
        "path": "/path/to/myfile"
    }
}
```

------
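Because the `fileLocation` shape differs between the two domains, a Lambda function that must run behind either one can branch on the `domain` field. The following is a minimal sketch (the URI-style return values are just one possible convention):

```python
def resolve_file_location(event):
    """Return a (domain, location) pair for the file referenced by a workflow event."""
    loc = event["fileLocation"]
    if loc["domain"] == "S3":
        return "S3", f"s3://{loc['bucket']}/{loc['key']}"
    if loc["domain"] == "EFS":
        return "EFS", f"{loc['fileSystemId']}:{loc['path']}"
    raise ValueError(f"unexpected domain: {loc['domain']}")

# Trimmed events matching the two examples above.
s3_event = {"fileLocation": {"domain": "S3", "bucket": "amzn-s3-demo-bucket",
                             "key": "path/to/mykey", "versionId": None}}
efs_event = {"fileLocation": {"domain": "EFS", "fileSystemId": "fs-1234567",
                              "path": "/path/to/myfile"}}
```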

## Example Lambda function for a custom workflow step
<a name="example-workflow-lambda"></a>

The following Lambda function extracts the information regarding the execution status, and then calls the [SendWorkflowStepState](https://docs.aws.amazon.com/transfer/latest/APIReference/API_SendWorkflowStepState.html) API operation to return the status to the workflow for the step—either `SUCCESS` or `FAILURE`. Before your function calls the `SendWorkflowStepState` API operation, you can configure Lambda to take an action based on your workflow logic. 

```
import json
import boto3

transfer = boto3.client('transfer')

def lambda_handler(event, context):
    print(json.dumps(event))

    # call the SendWorkflowStepState API to notify the workflow about the step's SUCCESS or FAILURE status
    response = transfer.send_workflow_step_state(
        WorkflowId=event['serviceMetadata']['executionDetails']['workflowId'],
        ExecutionId=event['serviceMetadata']['executionDetails']['executionId'],
        Token=event['token'],
        Status='SUCCESS|FAILURE'    # replace with 'SUCCESS' or 'FAILURE' based on your logic
    )

    print(json.dumps(response))

    return {
      'statusCode': 200,
      'body': json.dumps(response)
    }
```
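If your processing logic can raise an exception, it's easy to exit the handler without ever calling `SendWorkflowStepState`, which leaves the step to time out. One way to guard against that is to wrap the logic so that every exception is reported as a `FAILURE`. This is a sketch, not the only pattern: `make_handler` and `process_file` are illustrative names, and the client is injected so that the handler can be unit tested with a stub.

```python
import json

def make_handler(transfer_client, process_file):
    """Create a Lambda handler that reports FAILURE if processing raises.

    process_file(event) holds your file-processing logic; transfer_client
    is injected (for example, boto3.client('transfer')) so the handler can
    be unit tested with a stub.
    """
    def handler(event, context):
        execution = event["serviceMetadata"]["executionDetails"]
        try:
            process_file(event)
            status = "SUCCESS"
        except Exception as err:
            print(f"Custom step failed: {err}")
            status = "FAILURE"
        # Always report a terminal status so that the step doesn't time out.
        transfer_client.send_workflow_step_state(
            WorkflowId=execution["workflowId"],
            ExecutionId=execution["executionId"],
            Token=event["token"],
            Status=status,
        )
        return {"statusCode": 200, "body": json.dumps({"status": status})}

    return handler
```

In a deployment, you might create the handler as `make_handler(boto3.client('transfer'), your_processing_function)`.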

## IAM permissions for a custom step
<a name="custom-step-iam"></a>

To allow a step that invokes a Lambda function to succeed, make sure that the execution role for your workflow contains the following permissions.

```
{
    "Sid": "Custom",
    "Effect": "Allow",
    "Action": [
        "lambda:InvokeFunction"
    ],
    "Resource": [
        "arn:aws:lambda:region:account-id:function:function-name"
    ]
}
```

# IAM policies for workflows
<a name="workflow-execution-role"></a>

When you add a workflow to a server, you must select an execution role. The server uses this role when it executes the workflow. If the role does not have the proper permissions, AWS Transfer Family cannot run the workflow. 

This section describes one possible set of AWS Identity and Access Management (IAM) permissions that you can use to execute a workflow. Other examples are described later in this topic. 

**Note**  
If your Amazon S3 files have tags, you need to add one or two permissions to your IAM policy.  
Add `s3:GetObjectTagging` for an Amazon S3 file that isn't versioned.
Add `s3:GetObjectVersionTagging` for an Amazon S3 file that is versioned.

**To create an execution role for your workflow**

1. Create a new IAM role, and add the AWS managed policy `AWSTransferFullAccess` to the role. For more information about creating a new IAM role, see [Create an IAM role and policy](requirements-roles.md).

1. Create another policy with the following permissions, and attach it to your role. Replace each `user input placeholder` with your own information.  
****  

   ```
   {
       "Version":"2012-10-17",		 	 	 
       "Statement": [
           {
               "Sid": "ConsoleAccess",
               "Effect": "Allow",
               "Action": "s3:GetBucketLocation",
               "Resource": "*"
           },
           {
               "Sid": "ListObjectsInBucket",
               "Effect": "Allow",
               "Action": "s3:ListBucket",
               "Resource": [
                   "arn:aws:s3:::amzn-s3-demo-bucket"
               ]
           },
           {
               "Sid": "AllObjectActions",
               "Effect": "Allow",
               "Action": "s3:*Object",
               "Resource": [
                   "arn:aws:s3:::amzn-s3-demo-bucket/*"
               ]
           },
           {
               "Sid": "GetObjectVersion",
               "Effect": "Allow",
               "Action": "s3:GetObjectVersion",
               "Resource": [
                   "arn:aws:s3:::amzn-s3-demo-bucket/*"
               ]
           },
           {
               "Sid": "Custom",
               "Effect": "Allow",
               "Action": [
                   "lambda:InvokeFunction"
               ],
               "Resource": [
                   "arn:aws:lambda:us-east-1:123456789012:function:function-name"
               ]
           },
           {
               "Sid": "Tag",
               "Effect": "Allow",
               "Action": [
                   "s3:PutObjectTagging",
                   "s3:PutObjectVersionTagging"
               ],
               "Resource": [
                   "arn:aws:s3:::amzn-s3-demo-bucket/*"
               ]
           }
       ]
   }
   ```

1. Save this role and specify it as the execution role when you add a workflow to a server.
**Note**  
When you're constructing IAM roles, AWS recommends that you restrict access to your resources as much as possible for your workflow.

## Workflow trust relationships
<a name="workflows-trust"></a>

Workflow execution roles also require a trust relationship with `transfer.amazonaws.com`. To establish a trust relationship for AWS Transfer Family, see [To establish a trust relationship](requirements-roles.md#establish-trust-transfer).

While you're establishing your trust relationship, you can also take steps to avoid the *confused deputy* problem. For a description of this problem, as well as examples of how to avoid it, see [Cross-service confused deputy prevention](confused-deputy.md).
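For reference, a minimal trust policy that allows Transfer Family to assume the execution role looks like the following sketch. Before using it in production, consider adding the `aws:SourceAccount` and `aws:SourceArn` condition keys described in the confused deputy topic.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "transfer.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
```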

## Example execution role: Decrypt, copy, and tag
<a name="example-workflow-role-copy-tag"></a>

If you have workflows that include tagging, copying, and decrypt steps, you can use the following IAM policy. Replace each `user input placeholder` with your own information. 

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Sid": "CopyRead",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:GetObjectTagging",
                "s3:GetObjectVersionTagging"
            ],
            "Resource": "arn:aws:s3:::amzn-s3-demo-source-bucket/*"
        },
        {
            "Sid": "CopyWrite",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:PutObjectTagging"
            ],
            "Resource": "arn:aws:s3:::amzn-s3-demo-destination-bucket/*"
        },
        {
            "Sid": "CopyList",
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": [
                "arn:aws:s3:::amzn-s3-demo-source-bucket",
                "arn:aws:s3:::amzn-s3-demo-destination-bucket"
            ]
        },
        {
            "Sid": "Tag",
            "Effect": "Allow",
            "Action": [
                "s3:PutObjectTagging",
                "s3:PutObjectVersionTagging"
            ],
            "Resource": "arn:aws:s3:::amzn-s3-demo-destination-bucket/*",
            "Condition": {
                "StringEquals": {
                    "s3:RequestObjectTag/Archive": "yes"
                }
            }
        },
        {
            "Sid": "ListBucket",
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": [
                "arn:aws:s3:::amzn-s3-demo-destination-bucket"
            ]
        },
        {
            "Sid": "HomeDirObjectAccess",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObjectVersion",
                "s3:DeleteObject",
                "s3:GetObjectVersion"
            ],
            "Resource": "arn:aws:s3:::amzn-s3-demo-destination-bucket/*"
        },
        {
            "Sid": "Decrypt",
            "Effect": "Allow",
            "Action": [
                "secretsmanager:GetSecretValue"
            ],
            "Resource": "arn:aws:secretsmanager:us-east-1:123456789012:secret:aws/transfer/*"
        }
    ]
}
```

## Example execution role: Run function and delete
<a name="example-workflow-role-custom-delete"></a>

In this example, you have a workflow that invokes an AWS Lambda function. If the workflow deletes the uploaded file and has an exception handler step to act upon a failed workflow execution in the previous step, use the following IAM policy. Replace each `user input placeholder` with your own information. 

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Sid": "Delete",
            "Effect": "Allow",
            "Action": [
                "s3:DeleteObject",
                "s3:DeleteObjectVersion"
            ],
            "Resource": "arn:aws:s3:::bucket-name"
        },
        {
            "Sid": "Custom",
            "Effect": "Allow",
            "Action": [
                "lambda:InvokeFunction"
            ],
            "Resource": [
                "arn:aws:lambda:us-east-1:123456789012:function:function-name"
            ]
        }
    ]
}
```

## Exception handling for a workflow
<a name="exception-workflow"></a>

If any errors occur during a workflow's execution, the exception-handling steps that you specified are executed. You specify the error-handling steps for a workflow in the same manner as you specify the nominal steps for the workflow. For example, suppose that you've configured custom processing in nominal steps to validate incoming files. If the file validation fails, an exception-handling step can send an email to the administrator.

The following example workflow contains two steps: 
+ One nominal step that checks whether the uploaded file is in CSV format
+ An exception-handling step that sends an email if the uploaded file is not in CSV format and the nominal step therefore fails

To initiate the exception-handling step, the AWS Lambda function in the nominal step must respond with `Status="FAILURE"`. For more information about error handling in workflows, see [Use custom file-processing steps](custom-step-details.md).

![\[\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/workflow-exception-sample.png)
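A sketch of the validation function for the nominal step might look like the following. It checks only the file extension (real validation would inspect the file contents), the Transfer Family client is injected so that the function can be tested with a stub, and `make_csv_check_handler` is an illustrative name:

```python
import os

def make_csv_check_handler(transfer_client):
    """Create a nominal-step handler that fails unless the file is a .csv.

    Reporting FAILURE makes the workflow run its exception-handling steps
    (for example, a step that emails the administrator).
    """
    def handler(event, context):
        execution = event["serviceMetadata"]["executionDetails"]
        location = event["fileLocation"]
        # S3 events carry "key"; EFS events carry "path".
        name = location.get("key") or location.get("path", "")
        status = "SUCCESS" if os.path.splitext(name)[1].lower() == ".csv" else "FAILURE"
        transfer_client.send_workflow_step_state(
            WorkflowId=execution["workflowId"],
            ExecutionId=execution["executionId"],
            Token=event["token"],
            Status=status,
        )
        return {"status": status}

    return handler
```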


# Monitor workflow execution
<a name="cloudwatch-workflow"></a>

Amazon CloudWatch monitors your AWS resources and the applications that you run in the AWS Cloud in real time. You can use Amazon CloudWatch to collect and track metrics, which are variables that you can measure for your workflows. You can view workflow metrics and consolidated logs by using Amazon CloudWatch.

## CloudWatch logging for a workflow
<a name="cloudwatch-workflow-logs"></a>

CloudWatch provides consolidated auditing and logging for workflow progress and results.

**View Amazon CloudWatch logs for workflows**

1. Open the Amazon CloudWatch console at [https://console.aws.amazon.com/cloudwatch/](https://console.aws.amazon.com/cloudwatch/).

1. In the left navigation pane, choose **Logs**, then choose **Log groups**.

1. On the **Log groups** page, on the navigation bar, choose the correct Region for your AWS Transfer Family server.

1. Choose the log group that corresponds to your server.

   For example, if your server ID is `s-1234567890abcdef0`, your log group is `/aws/transfer/s-1234567890abcdef0`.

1. On the log group details page for your server, the most recent log streams are displayed. There are two log streams for the user that you are exploring: 
   + One for each Secure Shell (SSH) File Transfer Protocol (SFTP) session.
   + One for the workflow that is being executed for your server. The format for the log stream for the workflow is `username.workflowID.uniqueStreamSuffix`.

   For example, if your user is `mary-major`, you have the following log streams:

   ```
   mary-major-usa-east.1234567890abcdef0
   mary.w-abcdef01234567890.021345abcdef6789
   ```
**Note**  
 The alphanumeric identifiers listed in this example are fictitious. The values that you see in Amazon CloudWatch are different. 

The **Log events** page for `mary-major-usa-east.1234567890abcdef0` displays the details for each user session, and the `mary.w-abcdef01234567890.021345abcdef6789` log stream contains the details for the workflow. 

 The following is a sample log stream for `mary.w-abcdef01234567890.021345abcdef6789`, based on a workflow (`w-abcdef01234567890`) that contains a copy step. 

```
{
    "type": "ExecutionStarted",
    "details": {
        "input": {
            "initialFileLocation": {
                "bucket": "amzn-s3-demo-bucket",
                "key": "mary/workflowSteps2.json",
                "versionId": "version-id",
                "etag": "etag-id"
            }
        }
    },
    "workflowId":"w-abcdef01234567890",
    "executionId":"execution-id",
    "transferDetails": {
        "serverId":"s-server-id",
        "username":"mary",
        "sessionId":"session-id"
    }
},
{
    "type":"StepStarted",
    "details": {
        "input": {
            "fileLocation": {
                "backingStore":"S3",
                "bucket":"amzn-s3-demo-bucket",
                "key":"mary/workflowSteps2.json",
                "versionId":"version-id",
                "etag":"etag-id"
            }
        },
        "stepType":"COPY",
        "stepName":"copyToShared"
    },
    "workflowId":"w-abcdef01234567890",
    "executionId":"execution-id",
    "transferDetails": {
        "serverId":"s-server-id",
        "username":"mary",
        "sessionId":"session-id"
    }
},
{
    "type":"StepCompleted",
    "details":{
        "output":{},
        "stepType":"COPY",
        "stepName":"copyToShared"
    },
    "workflowId":"w-abcdef01234567890",
    "executionId":"execution-id",
    "transferDetails":{
        "serverId":"server-id",
        "username":"mary",
        "sessionId":"session-id"
    }
},
{
    "type":"ExecutionCompleted",
    "details": {},
    "workflowId":"w-abcdef01234567890",
    "executionId":"execution-id",
    "transferDetails":{
        "serverId":"s-server-id",
        "username":"mary",
        "sessionId":"session-id"
    }
}
```
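Because every log event carries a `type` field, you can post-process a workflow log stream mechanically. For example, this sketch collects the completed steps and checks whether the execution finished (the event list is trimmed to the fields that the function reads):

```python
def summarize_execution(log_events):
    """Collect completed step names and whether the workflow execution finished."""
    summary = {"steps": [], "completed": False}
    for entry in log_events:
        if entry["type"] == "StepCompleted":
            summary["steps"].append(entry["details"]["stepName"])
        elif entry["type"] == "ExecutionCompleted":
            summary["completed"] = True
    return summary

# Trimmed version of the log stream shown above.
events = [
    {"type": "ExecutionStarted", "details": {}},
    {"type": "StepStarted", "details": {"stepType": "COPY", "stepName": "copyToShared"}},
    {"type": "StepCompleted", "details": {"stepType": "COPY", "stepName": "copyToShared"}},
    {"type": "ExecutionCompleted", "details": {}},
]
```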

## CloudWatch metrics for workflows
<a name="cloudwatch-workflows-metrics"></a>

AWS Transfer Family provides several metrics for workflows. You can view metrics for how many workflow executions started, completed successfully, and failed in the previous minute. All of the CloudWatch metrics for Transfer Family are described in [Using CloudWatch metrics for Transfer Family servers](metrics.md).

# Create a workflow from a template
<a name="workflow-template"></a>

You can deploy a CloudFormation stack that creates a workflow and a server from a template. This procedure contains an example that you can use to quickly deploy a workflow.

**To create a CloudFormation stack that creates an AWS Transfer Family workflow and server**

1. Open the CloudFormation console at [https://console.aws.amazon.com/cloudformation](https://console.aws.amazon.com/cloudformation/).

1. Save the following code to a file.

------
#### [ YAML ]

   ```
   AWSTemplateFormatVersion: 2010-09-09
   Resources:
     SFTPServer:
       Type: 'AWS::Transfer::Server'
       Properties:
         WorkflowDetails:
           OnUpload:
             - ExecutionRole: workflow-execution-role-arn
               WorkflowId: !GetAtt
                 - TransferWorkflow
                 - WorkflowId
     TransferWorkflow:
       Type: AWS::Transfer::Workflow
       Properties:
         Description: Transfer Family Workflows Blog
         Steps:
           - Type: COPY
             CopyStepDetails:
               Name: copyToUserKey
               DestinationFileLocation:
                 S3FileLocation:
                   Bucket: archived-records
                   Key: ${transfer:UserName}/
               OverwriteExisting: 'TRUE'
           - Type: TAG
             TagStepDetails:
               Name: tagFileForArchive
               Tags:
                 - Key: Archive
                   Value: 'yes'
           - Type: CUSTOM
             CustomStepDetails:
               Name: transferExtract
               Target: arn:aws:lambda:region:account-id:function:function-name
               TimeoutSeconds: 60
           - Type: DELETE
             DeleteStepDetails:
               Name: DeleteInputFile
               SourceFileLocation: '${original.file}'
         Tags:
           - Key: Name
             Value: TransferFamilyWorkflows
   ```

------
#### [ JSON ]

   ```
   {
       "AWSTemplateFormatVersion": "2010-09-09",
       "Resources": {
           "SFTPServer": {
               "Type": "AWS::Transfer::Server",
               "Properties": {
                   "WorkflowDetails": {
                       "OnUpload": [
                           {
                               "ExecutionRole": "workflow-execution-role-arn",
                               "WorkflowId": {
                                   "Fn::GetAtt": [
                                       "TransferWorkflow",
                                       "WorkflowId"
                                   ]
                               }
                           }
                       ]
                   }
               }
           },
           "TransferWorkflow": {
               "Type": "AWS::Transfer::Workflow",
               "Properties": {
                   "Description": "Transfer Family Workflows Blog",
                   "Steps": [
                       {
                           "Type": "COPY",
                           "CopyStepDetails": {
                               "Name": "copyToUserKey",
                               "DestinationFileLocation": {
                                   "S3FileLocation": {
                                       "Bucket": "archived-records",
                                       "Key": "${transfer:UserName}/"
                                   }
                               },
                               "OverwriteExisting": "TRUE"
                           }
                       },
                       {
                           "Type": "TAG",
                           "TagStepDetails": {
                               "Name": "tagFileForArchive",
                               "Tags": [
                                   {
                                       "Key": "Archive",
                                       "Value": "yes"
                                   }
                               ]
                           }
                       },
                       {
                           "Type": "CUSTOM",
                           "CustomStepDetails": {
                               "Name": "transferExtract",
                               "Target": "arn:aws:lambda:region:account-id:function:function-name",
                               "TimeoutSeconds": 60
                           }
                       },
                       {
                           "Type": "DELETE",
                           "DeleteStepDetails": {
                               "Name": "DeleteInputFile",
                               "SourceFileLocation": "${original.file}"
                           }
                       }
                   ],
                   "Tags": [
                       {
                           "Key": "Name",
                           "Value": "TransferFamilyWorkflows"
                       }
                   ]
               }
           }
       }
   }
   ```

------

1. Replace the following items with your actual values.
   + Replace *`workflow-execution-role-arn`* with the ARN for an actual workflow execution role. For example, `arn:aws:iam::111122223333:role/workflow-execution-role`.
   + Replace `arn:aws:lambda:region:account-id:function:function-name` with the ARN for your Lambda function. For example, `arn:aws:lambda:us-east-2:123456789012:function:example-lambda-idp`.

1. Follow the instructions for deploying a CloudFormation stack from an existing template in [Selecting a stack template](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-using-console-create-stack-template.html) in the *AWS CloudFormation User Guide*.

After the stack has been deployed, you can view details about it on the **Outputs** tab in the CloudFormation console. The template creates a new AWS Transfer Family SFTP server that uses service-managed users, creates a new workflow, and associates the workflow with the new server.

## Remove a workflow from a Transfer Family server
<a name="remove-workflow-association"></a>

If you have associated a workflow with a Transfer Family server, and you now want to remove that association, you can do so by using the console or programmatically.

------
#### [ Console ]

**To remove a workflow from a Transfer Family server**

1. Open the AWS Transfer Family console at [https://console.aws.amazon.com/transfer/](https://console.aws.amazon.com/transfer/).

1. In the left navigation pane, choose **Servers**.

1. Choose the identifier for the server in the **Server ID** column.

1. On the details page for the server, scroll down to the **Additional details** section, and then choose **Edit**. 

1. On the **Edit additional details** page, in the **Managed workflows** section, clear all of the settings:
   + For **Workflow for complete file uploads**, select the dash (-) from the list of workflows.
   + For **Workflow for partial file uploads**, if it's not already cleared, select the dash (-) from the list of workflows.
   + For **Managed workflows execution role**, select the dash (-) from the list of roles.

   If you don't see the dash, scroll up in the list: the dash is the first value in each menu.

   The screen should look like the following.  
![\[The Managed workflows pane, showing all parameters cleared.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/workflows-remove-from-server.png)

1. Scroll down and choose **Save** to save your changes.

------
#### [ CLI ]

Use the `update-server` AWS CLI command (or the `UpdateServer` API operation), and provide empty arguments for the `OnUpload` and `OnPartialUpload` parameters.

From the AWS CLI, run the following command:

```
aws transfer update-server --server-id your-server-id --workflow-details '{"OnPartialUpload":[],"OnUpload":[]}'
```

Replace `your-server-id` with the ID for your server. For example, if your server ID is `s-01234567890abcdef`, the command is as follows:

```
aws transfer update-server --server-id s-01234567890abcdef --workflow-details '{"OnPartialUpload":[],"OnUpload":[]}'
```
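To confirm that the association has been removed, you can describe the server and check its workflow details. An empty `OnUpload` and `OnPartialUpload` configuration indicates that no workflow is attached (the server ID is a placeholder):

```
# Show only the workflow configuration for the server.
aws transfer describe-server \
    --server-id s-01234567890abcdef \
    --query 'Server.WorkflowDetails'
```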

------

## Managed workflows restrictions and limitations
<a name="limitations-workflow"></a>

**Restrictions**

The following restrictions currently apply to post-upload processing workflows for AWS Transfer Family. 
+ Cross-account and cross-Region AWS Lambda functions are not supported. You can, however, copy files across accounts, provided that your AWS Identity and Access Management (IAM) policies are correctly configured.
+ For all workflow steps, any Amazon S3 buckets accessed by the workflow must be in the same Region as the workflow itself.
+ For a decryption step, the decryption destination must match the source for Region and backing store (for example, if the file to be decrypted is stored in Amazon S3, then the specified destination must also be in Amazon S3).
+ Only asynchronous custom steps are supported.
+ Custom step timeouts are approximate. That is, a step might take slightly longer to time out than specified. Additionally, the workflow depends on the Lambda function: if the function is delayed during execution, the workflow is not aware of the delay.
+ If you exceed your throttling limit, Transfer Family doesn't add workflow operations to the queue.
+ Workflows are not initiated for files that have a size of 0. Files with a size greater than 0 do initiate the associated workflow.
+ You can attach a file-processing workflow to a Transfer Family server that uses the AS2 protocol; however, AS2 messages don't initiate workflows attached to the server.
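
Because only asynchronous custom steps are supported, a custom-step Lambda function must explicitly report its outcome back to Transfer Family by calling the `SendWorkflowStepState` API; otherwise the step eventually times out. A minimal sketch from the AWS CLI follows. The workflow ID, execution ID, and token values shown are placeholders; in practice, your Lambda function receives them in its invocation event.

```
# Report a successful outcome for a custom workflow step (placeholder IDs).
aws transfer send-workflow-step-state \
    --workflow-id w-1234567890abcdef0 \
    --execution-id 1234abcd-12ab-34cd-56ef-1234567890ab \
    --token example-token \
    --status SUCCESS
```

To signal that the step failed, pass `--status FAILURE` instead, which routes the execution to the workflow's exception-handling steps.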

**Limitations**

 Additionally, the following functional limits apply to workflows for Transfer Family: 
+ The number of workflows per Region, per account, is limited to 10.
+ The maximum timeout for custom steps is 30 minutes.
+ The maximum number of steps in a workflow is 8.
+ The maximum number of tags per workflow is 50.
+ The maximum number of concurrent executions that contain a decrypt step is 250 per workflow.
+ You can store a maximum of 3 PGP private keys, per Transfer Family server, per user.
+ The maximum size for a decrypted file is 10 GB.
+ We throttle the new execution rate using a [token bucket](https://en.wikipedia.org/wiki/Token_bucket) system with a burst capacity of 100 and a refill rate of 1.
+ Whenever you remove a workflow from a server and replace it with a new one, or update the server configuration in a way that affects a workflow's execution role, you must wait approximately 10 minutes before running the new workflow. The Transfer Family server caches the workflow details, and it takes 10 minutes for the server to refresh its cache.

  Additionally, you must log out of any active SFTP sessions, and then log back in after the 10-minute waiting period to see the changes.