

AWS Snowball Edge is no longer available to new customers. New customers should explore [AWS DataSync](https://aws.amazon.com/datasync/) for online transfers, [AWS Data Transfer Terminal](https://aws.amazon.com/data-transfer-terminal/) for secure physical transfers, or AWS Partner solutions. For edge computing, explore [AWS Outposts](https://aws.amazon.com/outposts/). 

# Transferring files using the Amazon S3 adapter for data migration to or from Snowball Edge
<a name="using-adapter"></a>

Following is an overview of the Amazon S3 adapter, which you can use to transfer data programmatically to and from S3 buckets already on the AWS Snowball Edge device using Amazon S3 REST API actions. This Amazon S3 REST API support is limited to a subset of actions. You can use this subset of actions with one of the AWS SDKs, or with the supported AWS Command Line Interface (AWS CLI) commands for Amazon S3, to transfer data programmatically.

If your solution uses the AWS SDK for Java version 1.11.0 or newer, you must use the following `S3ClientOptions`:
+ `disableChunkedEncoding()` – Indicates that chunked encoding is not supported with the interface.
+ `setPathStyleAccess(true)` – Configures the interface to use path-style access for all requests.

For more information, see [Class S3ClientOptions.Builder](https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/S3ClientOptions.Builder.html) in the *AWS SDK for Java API Reference*.
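
The `setPathStyleAccess(true)` setting matters because the device is addressed by IP. As a minimal illustration of the difference between the two URL styles (plain Python; the endpoint, bucket, and key here are hypothetical examples):

```python
# Sketch: path-style vs. virtual-hosted-style request addressing.
# The endpoint, bucket, and key names are hypothetical examples.

def path_style_url(endpoint: str, bucket: str, key: str) -> str:
    """Path-style: the bucket name is part of the URL path.
    This is the style required when talking to the device."""
    return f"{endpoint}/{bucket}/{key}"

def virtual_hosted_url(bucket: str, host: str, key: str) -> str:
    """Virtual-hosted-style: the bucket name is part of the hostname.
    This style doesn't work for a device addressed by IP."""
    return f"https://{bucket}.{host}/{key}"

url = path_style_url("https://192.0.2.0:8443", "my-bucket", "logs/april.tar")
```

Because the device's hostname is a bare IP address, only the path-style form is routable; the SDK option forces that behavior for every request.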

**Important**  
We recommend that you use only one method at a time to read and write data to a local bucket on an AWS Snowball Edge device. Using both the NFS interface and the Amazon S3 adapter on the same bucket at the same time can result in read/write conflicts.  
For details about limits, see [AWS Snowball Edge quotas](limits.md).  
For AWS services to work properly on a Snowball Edge, you must allow the ports for the services. For details, see [Port requirements for AWS services on a Snowball Edge](port-requirements.md).

**Topics**
+ [Downloading and installing the AWS CLI version 1.16.14 for use with the Amazon S3 adapter](#aws-cli-version)
+ [Using the AWS CLI and API operations on Snowball Edge devices](#using-adapter-cli-specify-region)
+ [Getting and using local Amazon S3 credentials on Snowball Edge](#adapter-credentials)
+ [Unsupported Amazon S3 features for the Amazon S3 adapter on Snowball Edge](#snowball-edge-s3-unsupported-features)
+ [Batching small files to improve data transfer performance to Snowball Edge](batching-small-files.md)
+ [Supported AWS CLI commands for data transfer to or from Snowball Edge](using-adapter-cli.md)
+ [Supported Amazon S3 REST API actions on Snowball Edge for data transfer](using-adapter-supported-api.md)

## Downloading and installing the AWS CLI version 1.16.14 for use with the Amazon S3 adapter
<a name="aws-cli-version"></a>

Currently, Snowball Edge devices support only version 1.16.14 and earlier of the AWS CLI for use with the Amazon S3 adapter. Newer versions of the AWS CLI are not compatible with the Amazon S3 adapter because they do not support all of the functionality of the S3 adapter.

**Note**  
If you are using Amazon S3 compatible storage on Snowball Edge, you can use the latest version of the AWS CLI. To download and use the latest version, see [AWS Command Line Interface User Guide](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html).

### Install the AWS CLI on Linux operating systems
<a name="install-cli-linux"></a>

Run the following commands to download, unpack, install, and verify the AWS CLI:

```
curl "https://s3.amazonaws.com/aws-cli/awscli-bundle-1.16.14.zip" -o "awscli-bundle.zip"
unzip awscli-bundle.zip
sudo ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws
/usr/local/bin/aws --version
```

### Install the AWS CLI on Windows operating systems
<a name="install-cli-windows"></a>

Download and run the installer file for your operating system:
+ [32‐bit installer bundled with Python 2](https://s3.amazonaws.com/aws-cli/AWSCLI32-1.16.14.msi)
+ [32‐bit installer bundled with Python 3](https://s3.amazonaws.com/aws-cli/AWSCLI32PY3-1.16.14.msi)
+ [64‐bit installer bundled with Python 2](https://s3.amazonaws.com/aws-cli/AWSCLI64-1.16.14.msi)
+ [64‐bit installer bundled with Python 3](https://s3.amazonaws.com/aws-cli/AWSCLI64PY3-1.16.14.msi)
+ [Setup file including 32‐ and 64‐bit installers that will automatically install the correct version](https://s3.amazonaws.com/aws-cli/AWSCLISetup-1.16.14.exe)

## Using the AWS CLI and API operations on Snowball Edge devices
<a name="using-adapter-cli-specify-region"></a>

When using the AWS CLI or API operations to issue IAM, Amazon S3, and Amazon EC2 commands on Snowball Edge, you must specify the Region as `snow`. You can do this with `aws configure` or within the command itself, as in the following examples.

```
aws configure --profile abc
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: snow
Default output format [None]: json
```

Or

```
aws s3 ls  --endpoint http://192.0.2.0:8080 --region snow --profile snowballEdge
```

### Authorization with the Amazon S3 API interface for AWS Snowball Edge
<a name="auth-adapter"></a>

When you use the Amazon S3 adapter, every interaction is signed with the AWS Signature Version 4 algorithm by default. This authorization is used only to verify the data traveling from its source to the interface. All encryption and decryption happens on the device. Unencrypted data is never stored on the device.

When using the interface, keep the following in mind:
+ To get the local Amazon S3 credentials to sign your requests to the AWS Snowball Edge device, run the `snowballEdge list-access-keys` and `snowballEdge get-secret-access-keys` Snowball Edge client commands. For more information, see [Configuring and using the Snowball Edge Client](using-client-commands.md). These local Amazon S3 credentials include a pair of keys: an access key and a secret key. These keys are only valid for the devices associated with your job. They can't be used in the AWS Cloud because they have no AWS Identity and Access Management (IAM) counterpart.
+ The encryption key is not affected by which AWS credentials you use. Signing with the Signature Version 4 algorithm is used only to verify the data traveling from its source to the interface. Thus, this signing never factors into the encryption keys used to encrypt your data on the Snowball Edge.
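
The Signature Version 4 key derivation that the SDKs perform on your behalf can be sketched with the standard library as follows. This is not a full signing implementation, only the signing-key step; the secret key is the documentation example key and the date is illustrative:

```python
import hashlib
import hmac

def _hmac_sha256(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

def derive_signing_key(secret_key: str, date_stamp: str, region: str, service: str) -> bytes:
    """Signature Version 4 signing-key derivation: a chain of four HMAC-SHA256 steps."""
    k_date = _hmac_sha256(("AWS4" + secret_key).encode("utf-8"), date_stamp)
    k_region = _hmac_sha256(k_date, region)
    k_service = _hmac_sha256(k_region, service)
    return _hmac_sha256(k_service, "aws4_request")

# On a Snowball Edge the Region is "snow"; the date stamp is illustrative.
signing_key = derive_signing_key(
    "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY", "20240101", "snow", "s3"
)
```

The derived key is then used to sign each request's string to sign. The AWS CLI and the SDKs handle all of this for you, so in practice you only supply the access key and secret key.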

## Getting and using local Amazon S3 credentials on Snowball Edge
<a name="adapter-credentials"></a>

Every interaction with a Snowball Edge is signed with the AWS Signature Version 4 algorithm. For more information about the algorithm, see [Signature Version 4 Signing Process](https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html) in the *AWS General Reference*.

You can get the local Amazon S3 credentials to sign your requests to the Snowball Edge device by running the `snowballEdge list-access-keys` and `snowballEdge get-secret-access-key` Snowball Edge client commands. For more information, see [Getting credentials for a Snowball Edge](using-client-commands.md#client-credentials). These local Amazon S3 credentials include a pair of keys: an access key ID and a secret key. These credentials are only valid for the devices that are associated with your job. They can't be used in the AWS Cloud because they have no IAM counterpart.

You can add these credentials to the AWS credentials file on your server. The default credential profiles file is typically located at `~/.aws/credentials`, but the location can vary per platform. This file is shared by many of the AWS SDKs and by the AWS CLI. You can save local credentials with a profile name as in the following example.

```
[snowballEdge]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```
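
Because the credentials file uses INI syntax, a script that runs outside the SDKs can read a profile with the standard library. The following is a sketch; the profile name and keys match the example above, and in practice you would read the real file path instead of an inline string:

```python
import configparser

# Inline sample matching the profile shown above; in practice, use
# config.read(os.path.expanduser("~/.aws/credentials")) instead.
SAMPLE = """\
[snowballEdge]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
"""

config = configparser.ConfigParser()
config.read_string(SAMPLE)

access_key = config["snowballEdge"]["aws_access_key_id"]
secret_key = config["snowballEdge"]["aws_secret_access_key"]
```

The AWS CLI and the SDKs read this same file automatically when you pass `--profile snowballEdge`, so parsing it yourself is only needed for custom tooling.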

### Configuring the AWS CLI to use the S3 adapter on a Snowball Edge as the endpoint
<a name="using-adapter-cli-endpoint"></a>

When you use the AWS CLI to issue a command to the AWS Snowball Edge device, you specify that the endpoint is the Amazon S3 adapter. You have the choice of using the HTTPS endpoint, or an unsecured HTTP endpoint, as shown following.

**HTTPS secured endpoint**

```
aws s3 ls --endpoint https://192.0.2.0:8443 --ca-bundle path/to/certificate --profile snowballEdge
```

**HTTP unsecured endpoint**

```
aws s3 ls --endpoint http://192.0.2.0:8080 --profile snowballEdge
```

If you use the HTTPS endpoint on port `8443`, your data is securely transferred from your server to the Snowball Edge. The encryption is ensured with a certificate that the Snowball Edge generates whenever it gets a new IP address. After you have your certificate, you can save it to a local `ca-bundle.pem` file. Then you can configure your AWS CLI profile to include the path to your certificate, as described following.

**To associate your certificate with the interface endpoint**

1. Connect the Snowball Edge to power and the network, and turn it on.

1. After the device has finished booting up, make a note of its IP address on your local network.

1. From a terminal on your network, make sure you can ping the Snowball Edge.

1. Run the `snowballEdge get-certificate` command in your terminal. For more information on this command, see [Managing public key certificates on Snowball Edge](snowball-edge-certificates-cli.md).

1. Save the output of the `snowballEdge get-certificate` command to a file, for example `ca-bundle.pem`.

1. Run the following command from your terminal.

   ```
   aws configure set profile.snowballEdge.ca_bundle /path/to/ca-bundle.pem
   ```

After you complete the procedure, you can run CLI commands with these local credentials, your certificate, and your specified endpoint, as in the following example.

```
aws s3 ls --endpoint https://192.0.2.0:8443 --profile snowballEdge
```

## Unsupported Amazon S3 features for the Amazon S3 adapter on Snowball Edge
<a name="snowball-edge-s3-unsupported-features"></a>

Using the Amazon S3 adapter, you can programmatically transfer data to and from a Snowball Edge with Amazon S3 API actions. However, not all Amazon S3 transfer features and API actions are supported for use with a Snowball Edge device when using the Amazon S3 adapter. For example, the following features and actions are not supported for use with Snowball Edge: 
+ [TransferManager](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/examples-s3-transfermanager.html) – This utility transfers files from a local environment to Amazon S3 with the SDK for Java. Consider using the supported API actions or AWS CLI commands with the interface instead.
+ [GET Bucket (List Objects) Version 2](https://docs.aws.amazon.com/AmazonS3/latest/API/v2-RESTBucketGET.html) – This implementation of the GET action returns some or all (up to 1,000) of the objects in a bucket. Consider using the [GET Bucket (List Objects) Version 1](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGET.html) action or the [ls](https://docs.aws.amazon.com/cli/latest/reference/s3/ls.html) AWS CLI command.
+ [ListBuckets](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListBuckets.html) – `ListBuckets` with the object endpoint is not supported. The following command does not work with Amazon S3 compatible storage on Snowball Edge:

  ```
  aws s3 ls --endpoint https://192.0.2.0 --profile profile                    
  ```

# Batching small files to improve data transfer performance to Snowball Edge
<a name="batching-small-files"></a>

Each copy operation has some overhead because of encryption. To speed up the process of transferring small files to your AWS Snowball Edge device, you can batch them together in a single archive. When you batch files together, they can be auto-extracted when they are imported into Amazon S3, if they were batched in one of the supported archive formats.

Typically, files that are 1 MB or smaller should be included in batches. There's no hard limit on the number of files you can have in a batch, though we recommend that you limit your batches to about 10,000 files. Having more than 100,000 files in a batch can affect how quickly those files import into Amazon S3 after you return the device. We recommend that the total size of each batch be no larger than 100 GB.

Batching files is a manual process, which you manage. After you batch your files, transfer them to a Snowball Edge device using the AWS CLI `cp` command with the `--metadata snowball-auto-extract=true` option. Specifying `snowball-auto-extract=true` automatically extracts the contents of the archived files when the data is imported into Amazon S3, so long as the size of the batched file is no larger than 100 GB.

**Note**  
Any batches larger than 100 GB are not extracted when they are imported into Amazon S3.
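
The sizing guidance above can be turned into a simple planning step before you archive anything. The following is a hypothetical helper, not part of any AWS tooling; it greedily groups `(name, size_in_bytes)` pairs into batches that stay under the recommended caps:

```python
# Sketch: greedily group files into batches that respect the recommended
# caps of about 10,000 files and no more than 100 GB per archive.
MAX_FILES = 10_000
MAX_BYTES = 100 * 1024**3  # archives over 100 GB are not auto-extracted

def plan_batches(files):
    """files: iterable of (name, size_in_bytes). Returns lists of names,
    one list per archive to create."""
    batches, current, current_bytes = [], [], 0
    for name, size in files:
        if current and (len(current) >= MAX_FILES or current_bytes + size > MAX_BYTES):
            batches.append(current)
            current, current_bytes = [], 0
        current.append(name)
        current_bytes += size
    if current:
        batches.append(current)
    return batches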

**To batch small files**

1. Decide on the format in which you want to batch your small files. The auto-extract feature supports the `.tar`, `.zip`, and `.tar.gz` formats.

1. Identify which small files you want to batch together, including their size and the total number of files that you want to batch together.

1. Batch your files on the command line as shown in the following examples.
   + For Linux, you can batch the files in the same command line used to transfer your files to the device. 

     ```
     tar -cf - /Logs/April | aws s3 cp - s3://amzn-s3-demo-bucket/batch01.tar --metadata snowball-auto-extract=true --endpoint http://192.0.2.0:8080
     ```
**Note**  
Alternatively, you can use the archive utility of your choice to batch the files into one or more large archives. However, this approach requires extra local storage to save the archives before you transfer them to the Snowball Edge.
   + For Windows, use the following example command to batch the files when all files are in the same directory from which the command is run:

     ```
     7z a -tzip -so "test" | aws s3 cp - s3://amzn-s3-demo-bucket/batch01.zip --metadata snowball-auto-extract=true --endpoint http://192.0.2.0:8080
     ```

     To batch files from a different directory from which the command is run, use the following example command:

     ```
     7z a -tzip -so "test" "c:\temp" | aws s3 cp - s3://amzn-s3-demo-bucket/batch01.zip --metadata snowball-auto-extract=true --endpoint http://192.0.2.0:8080
     ```
**Note**  
For Microsoft Windows Server 2016, `tar` is not available, but you can download it from the *Tar for Windows* website.  
You can download 7-Zip from the *7-Zip* website.

1. Repeat until you've archived all the small files that you want to transfer to Amazon S3 using a Snowball Edge.

1. Transfer the archived files to the Snowball. If you want the data to be auto-extracted, and you used one of the supported archive formats mentioned previously in step 1, use the AWS CLI `cp` command with the `--metadata snowball-auto-extract=true` option.
**Note**  
Don't use this option when the files that you're transferring aren't archives.

When Snowball Edge extracts an archive file, it maintains the archive's directory structure. If you create an archive file that contains files and folders, Snowball Edge recreates that structure when it imports the data into Amazon S3.

The archive file is extracted in the same directory that it is stored in, and the folder structure is built out under that directory. Keep in mind that when copying archive files, it's important to set the `--metadata snowball-auto-extract=true` flag. Otherwise, Snowball Edge doesn't extract the data when it's imported into Amazon S3.

For example, suppose that the archive from step 3 contains the folder structure `/Logs/April/` with the files `a.txt`, `b.txt`, and `c.txt`. If this archive file is placed in the root of `/amzn-s3-demo-bucket/`, the data looks like the following after extraction:

```
/amzn-s3-demo-bucket/Logs/April/a.txt
/amzn-s3-demo-bucket/Logs/April/b.txt
/amzn-s3-demo-bucket/Logs/April/c.txt
```

If the archive file is placed into `/amzn-s3-demo-bucket/Test/`, the extraction looks like the following:

```
/amzn-s3-demo-bucket/Test/Logs/April/a.txt
/amzn-s3-demo-bucket/Test/Logs/April/b.txt
/amzn-s3-demo-bucket/Test/Logs/April/c.txt
```
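
The prefix-plus-member-path behavior in the examples above can be checked locally by listing an archive's member names. The following standard-library sketch builds a small in-memory tar matching the example and shows the keys that would result if the archive were stored under the `Test/` prefix:

```python
import io
import tarfile

# Build a small in-memory tar with the folder structure from the example.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    for name in ("Logs/April/a.txt", "Logs/April/b.txt", "Logs/April/c.txt"):
        data = b"example"
        info = tarfile.TarInfo(name=name)
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))

buf.seek(0)
with tarfile.open(fileobj=buf, mode="r") as tar:
    members = tar.getnames()

# If this archive lands in /amzn-s3-demo-bucket/Test/, auto-extract yields
# one key per member: the storage prefix plus the member's relative path.
keys = ["Test/" + m for m in members]
```

Listing member names this way before you transfer is a cheap check that the archive contains the relative paths you expect to see reproduced in Amazon S3.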

# Supported AWS CLI commands for data transfer to or from Snowball Edge
<a name="using-adapter-cli"></a>

Following, you can find information about how to specify the Amazon S3 adapter or Amazon S3 compatible storage on Snowball Edge as the endpoint for applicable AWS Command Line Interface (AWS CLI) commands. You can also find the list of AWS CLI commands for Amazon S3 that are supported for transferring data to the AWS Snowball Edge device with the adapter or Amazon S3 compatible storage on Snowball Edge.

**Note**  
For information about installing and setting up the AWS CLI, including specifying what Regions you want to make AWS CLI calls against, see [AWS Command Line Interface User Guide](https://docs.aws.amazon.com/cli/latest/userguide/).

Currently, Snowball Edge devices support only version 1.16.14 and earlier of the AWS CLI when using the Amazon S3 adapter. For more information, see [Downloading and installing the AWS CLI version 1.16.14 for use with the Amazon S3 adapter](using-adapter.md#aws-cli-version). If you are using Amazon S3 compatible storage on Snowball Edge, you can use the latest version of the AWS CLI. To download and use the latest version, see the [AWS Command Line Interface User Guide](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html).

**Note**  
Be sure to install version 2.6.5 or later of Python 2, or version 3.4 or later of Python 3, before you install version 1.16.14 of the AWS CLI.

## Supported AWS CLI commands for data transfer with Amazon S3 and Snowball Edge
<a name="using-adapter-cli-commands"></a>

Following is a description of the subset of AWS CLI commands and options for Amazon S3 that the AWS Snowball Edge device supports. If a command or option isn't listed, it's not supported. You can declare some unsupported options, like `--sse` or `--storage-class`, along with a command. However, these are ignored and have no impact on how data is imported.
+ [cp](https://docs.aws.amazon.com/cli/latest/reference/s3/cp.html) – Copies a file or object to or from the AWS Snowball Edge device. The following are options for this command:
  + `--dryrun` (Boolean) – The operations that would be performed using the specified command are displayed without being run.
  + `--quiet` (Boolean) – Operations performed by the specified command are not displayed.
  + `--include` (string) – Don't exclude files or objects in the command that match the specified pattern. For details, see [Use of Exclude and Include Filters](https://docs.aws.amazon.com/cli/latest/reference/s3/index.html#use-of-exclude-and-include-filters) in the *AWS CLI Command Reference*.
  + `--exclude` (string) – Exclude all files or objects from the command that match the specified pattern.
  + `--follow-symlinks | --no-follow-symlinks` (Boolean) – Symbolic links (symlinks) are followed only when uploading to Amazon S3 from the local file system. Amazon S3 doesn't support symbolic links, so the contents of the link target are uploaded under the name of the link. When neither option is specified, the default is to follow symlinks.
  + `--only-show-errors` (Boolean) – Only errors and warnings are displayed. All other output is suppressed.
  + `--recursive` (Boolean) – The command is performed on all files or objects under the specified directory or prefix.
  + `--page-size` (integer) – The number of results to return in each response to a list operation. The default value is 1000 (the maximum allowed). Using a lower value might help if an operation times out.
  + `--metadata` (map) – A map of metadata to store with the objects in Amazon S3. This map is applied to every object that is part of this request. In a sync, this functionality means that files that haven't changed don't receive the new metadata. When copying between two Amazon S3 locations, the `metadata-directive` argument defaults to `REPLACE` unless otherwise specified.
+ [ls](https://docs.aws.amazon.com/cli/latest/reference/s3/ls.html) – Lists objects on the AWS Snowball Edge device. The following are options for this command:
  + `--human-readable` (Boolean) – File sizes are displayed in human-readable format.
  + `--summarize` (Boolean) – Summary information is displayed. This information is the number of objects and their total size.
  + `--recursive` (Boolean) – The command is performed on all files or objects under the specified directory or prefix.
  + `--page-size` (integer) – The number of results to return in each response to a list operation. The default value is 1000 (the maximum allowed). Using a lower value might help if an operation times out.
+ [rm](https://docs.aws.amazon.com/cli/latest/reference/s3/rm.html) – Deletes an object on the AWS Snowball Edge device. The following are options for this command:
  + `--dryrun` (Boolean) – The operations that would be performed using the specified command are displayed without being run.
  + `--include` (string) – Don't exclude files or objects in the command that match the specified pattern. For details, see [Use of Exclude and Include Filters](https://docs.aws.amazon.com/cli/latest/reference/s3/index.html#use-of-exclude-and-include-filters) in the *AWS CLI Command Reference*.
  + `--exclude` (string) – Exclude all files or objects from the command that match the specified pattern.
  + `--recursive` (Boolean) – The command is performed on all files or objects under the specified directory or prefix.
  + `--page-size` (integer) – The number of results to return in each response to a list operation. The default value is 1000 (the maximum allowed). Using a lower value might help if an operation times out.
  + `--only-show-errors` (Boolean) – Only errors and warnings are displayed. All other output is suppressed.
  + `--quiet` (Boolean) – Operations performed by the specified command are not displayed.
+ [sync](https://docs.aws.amazon.com/cli/latest/reference/s3/sync.html) – Syncs directories and prefixes. This command copies new and updated files from the source directory to the destination. This command only creates directories in the destination if they contain one or more files.
**Important**  
Syncing from one directory to another directory on the same Snowball Edge isn't supported.   
Syncing from one AWS Snowball Edge device to another AWS Snowball Edge device isn't supported.   
You can only use this option to sync the contents between your on-premises data storage and a Snowball Edge.
  + `--dryrun` (Boolean) – The operations that would be performed using the specified command are displayed without being run.
  + `--quiet` (Boolean) – Operations performed by the specified command are not displayed.
  + `--include` (string) – Don't exclude files or objects in the command that match the specified pattern. For details, see [Use of Exclude and Include Filters](https://docs.aws.amazon.com/cli/latest/reference/s3/index.html#use-of-exclude-and-include-filters) in the *AWS CLI Command Reference*.
  + `--exclude` (string) – Exclude all files or objects from the command that match the specified pattern.
  + `--follow-symlinks` or `--no-follow-symlinks` (Boolean) – Symbolic links (symlinks) are followed only when uploading to Amazon S3 from the local file system. Amazon S3 doesn't support symbolic links, so the contents of the link target are uploaded under the name of the link. When neither option is specified, the default is to follow symlinks.
  + `--only-show-errors` (Boolean) – Only errors and warnings are displayed. All other output is suppressed.
  + `--no-progress` (Boolean) – File transfer progress is not displayed. This option is only applied when the `--quiet` and `--only-show-errors` options are not provided.
  + `--page-size` (integer) – The number of results to return in each response to a list operation. The default value is 1000 (the maximum allowed). Using a lower value might help if an operation times out.
  + `--metadata` (map) – A map of metadata to store with the objects in Amazon S3. This map is applied to every object that is part of this request. In a sync, this functionality means that files that haven't changed don't receive the new metadata. When copying between two Amazon S3 locations, the `metadata-directive` argument defaults to `REPLACE` unless otherwise specified.
  + `--size-only` (Boolean) – With this option, the size of each key is the only criterion used to decide whether to sync from source to destination.
  + `--exact-timestamps` (Boolean) – When syncing from Amazon S3 to local storage, same-sized items are ignored only when the timestamps match exactly. The default behavior is to ignore same-sized items unless the local version is newer than the Amazon S3 version.
  + `--delete` (Boolean) – Files that exist in the destination but not in the source are deleted during sync.

You can work with files or folders with spaces in their names, such as `my photo.jpg` or `My Documents`. However, make sure that you handle the spaces properly in the AWS CLI commands. For more information, see [Specifying parameter values for the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-using-param.html) in the *AWS Command Line Interface User Guide*.
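
The `--include` and `--exclude` filters are applied in the order they appear on the command line, and the last filter that matches a path wins; everything is included by default. A rough standard-library model of that rule (a sketch, not the AWS CLI's actual implementation):

```python
import fnmatch

def is_included(path, filters):
    """filters is an ordered list of ("include" | "exclude", pattern) pairs.
    Everything is included by default; the last matching filter wins."""
    decision = True
    for kind, pattern in filters:
        if fnmatch.fnmatch(path, pattern):
            decision = (kind == "include")
    return decision

# Example: exclude everything, then re-include only .txt files, mirroring
# a command line like: ... --exclude "*" --include "*.txt"
filters = [("exclude", "*"), ("include", "*.txt")]
```

This ordering is why `--exclude "*" --include "*.txt"` copies only text files, while reversing the two options copies everything.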

# Supported Amazon S3 REST API actions on Snowball Edge for data transfer
<a name="using-adapter-supported-api"></a>

Following, you can find the list of Amazon S3 REST API actions that are supported for using the Amazon S3 adapter. The list includes links to information about how the API actions work with Amazon S3. The list also covers any differences in behavior between the Amazon S3 API action and the AWS Snowball Edge device counterpart. All responses coming back from an AWS Snowball Edge device declare `Server` as `AWSSnowball`, as in the following example.

```
HTTP/1.1 200 OK
x-amz-id-2: JuKZqmXuiwFeDQxhD7M8KtsKobSzWA1QEjLbTMTagkKdBX2z7Il/jGhDeJ3j6s80
x-amz-request-id: 32FE2CEB32F5EE25
Date: Fri, 08 2016 21:34:56 GMT
Server: AWSSnowball
```

Amazon S3 REST API calls require SigV4 signing. If you're using the AWS CLI or an AWS SDK to make these API calls, the SigV4 signing is handled for you. Otherwise, you need to implement your own SigV4 signing solution. For more information, see [Authenticating requests (AWS Signature Version 4)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/sig-v4-authenticating-requests.html) in the Amazon Simple Storage Service User Guide.
+ [GET Bucket (List Objects) version 1](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGET.html) – Supported. However, in this implementation of the GET operation, the following are not supported:
  + Pagination
  + Markers
  + Delimiters
  + Sorting – the returned list is not sorted

  Only version 1 is supported. GET Bucket (List Objects) version 2 is not supported.
+ [GET Service](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTServiceGET.html) 
+ [HEAD Bucket](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketHEAD.html) 
+ [HEAD Object](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectHEAD.html) 
+ [GET Object](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectGET.html) – Downloads an object from the Amazon S3 bucket on the Snowball Edge device.
+ [PUT Object](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPUT.html) – When an object is uploaded to an AWS Snowball Edge device using `PUT Object`, an ETag is generated.

  The ETag is a hash of the object. The ETag reflects changes only to the contents of an object, not its metadata. The ETag might or might not be an MD5 digest of the object data. For more information about ETags, see [Common Response Headers](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTCommonResponseHeaders.html) in the *Amazon Simple Storage Service API Reference.*
+ [DELETE Object](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectDELETE.html) 
+ [Initiate Multipart Upload](https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadInitiate.html) – In this implementation, initiating a multipart upload request for an object already on the AWS Snowball Edge device first deletes that object. It then copies it in parts to the AWS Snowball Edge device. 
+ [List Multipart Uploads](https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadListMPUpload.html) 
+ [Upload Part](https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPart.html) 
+ [Complete Multipart Upload](https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadComplete.html) 
+ [Abort Multipart Upload](https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadAbort.html) 

**Note**  
Any Amazon S3 adapter REST API actions not listed here are not supported. Using any unsupported REST API actions with your Snowball Edge returns an error message saying that the action is not supported.
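
As noted for `PUT Object` above, the ETag might or might not be an MD5 digest of the object data. For a simple single-part upload it commonly is, which can be sketched with the standard library. This illustrates only the common case, not a guarantee:

```python
import hashlib

def simple_etag(data: bytes) -> str:
    """Common single-part case: the ETag is the hex MD5 of the object body.
    Multipart uploads and some server-side settings produce different ETags,
    so treat this as a plausibility check only, never as an integrity proof."""
    return hashlib.md5(data).hexdigest()

etag = simple_etag(b"hello world")
```

Comparing a locally computed digest like this against the returned ETag can be a quick sanity check after `PUT Object`, but only when you know the upload was a single part.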