

# Making requests to Amazon S3 over IPv6
<a name="ipv6-access"></a>

Amazon Simple Storage Service (Amazon S3) supports the ability to access S3 buckets using the Internet Protocol version 6 (IPv6), in addition to the IPv4 protocol. Amazon S3 dual-stack endpoints support requests to S3 buckets over IPv6 and IPv4. There are no additional charges for accessing Amazon S3 over IPv6. For more information about pricing, see [Amazon S3 Pricing](https://aws.amazon.com/s3/pricing/).

**Note**  
The following information is for public Amazon S3 IPv6 endpoints. For information about AWS Virtual Private Cloud (VPC) endpoints, refer to [AWS PrivateLink for Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/privatelink-interface-endpoints.html). 

**Topics**
+ [Getting started making requests over IPv6](#ipv6-access-getting-started)
+ [Using IPv6 addresses in IAM policies](#ipv6-access-iam)
+ [Testing IP address compatibility](#ipv6-access-test-compatabilty)
+ [Using Amazon S3 dual-stack endpoints](dual-stack-endpoints.md)

## Getting started making requests over IPv6
<a name="ipv6-access-getting-started"></a>

To make a request to an S3 bucket over IPv6, you need to use a dual-stack endpoint. The next section describes how to make requests over IPv6 by using dual-stack endpoints. 

The following are some things you should know before trying to access a bucket over IPv6: 
+ The client and the network accessing the bucket must be enabled to use IPv6. 
+ Both virtual hosted-style and path-style requests are supported for IPv6 access. For more information, see [Amazon S3 dual-stack endpoints](dual-stack-endpoints.md#dual-stack-endpoints-description).
+ If you use source IP address filtering in your AWS Identity and Access Management (IAM) user or bucket policies, you need to update the policies to include IPv6 address ranges. For more information, see [Using IPv6 addresses in IAM policies](#ipv6-access-iam).
+ When using IPv6, server access log files output IP addresses in an IPv6 format. You need to update existing tools, scripts, and software that you use to parse Amazon S3 log files so that they can parse the IPv6 formatted `Remote IP` addresses. For more information, see [Logging requests with server access logging ](https://docs.aws.amazon.com//AmazonS3/latest/userguide/ServerLogs.html). 
**Note**  
If you experience issues related to the presence of IPv6 addresses in log files, contact [AWS Support](https://aws.amazon.com/premiumsupport/).
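
When updating log-parsing tools, Python's standard `ipaddress` module handles both formats uniformly. The following sketch extracts and classifies the `Remote IP` field from a simplified, illustrative log prefix (real server access log records contain many more fields, and the owner, bucket, and trailing values here are made up):

```python
import ipaddress
import re

# First four fields of a server access log record: bucket owner,
# bucket name, bracketed timestamp, then the Remote IP.
LOG_PREFIX = re.compile(r'^(\S+) (\S+) \[([^\]]+)\] (\S+)')

def remote_ip(log_line):
    """Extract the Remote IP field and parse it as an IPv4 or IPv6 address."""
    match = LOG_PREFIX.match(log_line)
    if match is None:
        raise ValueError("unrecognized log record")
    return ipaddress.ip_address(match.group(4))

# Illustrative records; everything except the Remote IP field is invented.
v4_line = "owner examplebucket [06/Feb/2019:00:00:38 +0000] 192.0.2.3 ..."
v6_line = "owner examplebucket [06/Feb/2019:00:00:38 +0000] 2001:db8::42 ..."

print(remote_ip(v4_line).version)  # 4
print(remote_ip(v6_line).version)  # 6
```

A parser written this way accepts IPv6 `Remote IP` values without any special-casing, which is the behavior existing IPv4-only tools need to gain.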



### Making requests over IPv6 by using dual-stack endpoints
<a name="ipv6-access-api"></a>

You make requests with Amazon S3 API calls over IPv6 by using dual-stack endpoints. The Amazon S3 API operations work the same way whether you're accessing Amazon S3 over IPv6 or over IPv4. Performance should be the same too.

When using the REST API, you access a dual-stack endpoint directly. For more information, see [Dual-stack endpoints](dual-stack-endpoints.md#dual-stack-endpoints-description).

When using the AWS Command Line Interface (AWS CLI) and AWS SDKs, you can use a parameter or flag to change to a dual-stack endpoint. You can also specify the dual-stack endpoint directly as an override of the Amazon S3 endpoint in the config file.

You can use a dual-stack endpoint to access a bucket over IPv6 from any of the following:
+ The AWS CLI, see [Using dual-stack endpoints from the AWS CLI](dual-stack-endpoints.md#dual-stack-endpoints-cli).
+ The AWS SDKs, see [Using dual-stack endpoints from the AWS SDKs](dual-stack-endpoints.md#dual-stack-endpoints-sdks).
+ The REST API, see [Making requests to dual-stack endpoints by using the REST API](RESTAPI.md#rest-api-dual-stack).

### Features not available over IPv6
<a name="ipv6-not-supported"></a>

The following feature is currently not supported when accessing an S3 bucket over IPv6: Static website hosting from an S3 bucket.

## Using IPv6 addresses in IAM policies
<a name="ipv6-access-iam"></a>

Before trying to access a bucket using IPv6, you must ensure that any IAM user or S3 bucket policies that are used for IP address filtering are updated to include IPv6 address ranges. IP address filtering policies that are not updated to handle IPv6 addresses might result in clients incorrectly losing or gaining access to the bucket when they start using IPv6. For more information about managing access permissions with IAM, see [Identity and Access Management for Amazon S3](https://docs.aws.amazon.com//AmazonS3/latest/userguide/security-iam.html).

IAM policies that filter IP addresses use [IP Address Condition Operators](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements.html#Conditions_IPAddress). The following bucket policy uses IP address condition operators to identify the `54.240.143.0/24` range of allowed IPv4 addresses. Any IP address outside of this range is denied access to the bucket (`examplebucket`). Because all IPv6 addresses are outside of the allowed range, this policy prevents IPv6 addresses from accessing `examplebucket`.


```
{
  "Version":"2012-10-17",		 	 	 
  "Statement": [
    {
      "Sid": "IPAllow",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::examplebucket/*",
      "Condition": {
         "IpAddress": {"aws:SourceIp": "54.240.143.0/24"}
      } 
    } 
  ]
}
```


You can modify the bucket policy's `Condition` element to allow both IPv4 (`54.240.143.0/24`) and IPv6 (`2001:DB8:1234:5678::/64`) address ranges as shown in the following example. You can use the same type of `Condition` block shown in the example to update both your IAM user and bucket policies.

```
      "Condition": {
         "IpAddress": {
            "aws:SourceIp": [
               "54.240.143.0/24",
               "2001:DB8:1234:5678::/64"
            ]
         }
      }
```
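
The behavior of this combined `Condition` can be sketched with Python's standard `ipaddress` module (the helper name is illustrative). Because an address only matches a network of the same IP version, an IPv4-only filter silently denies every IPv6 client:

```python
import ipaddress

# CIDR ranges from the example policy's Condition element.
ALLOWED = [
    ipaddress.ip_network("54.240.143.0/24"),
    ipaddress.ip_network("2001:DB8:1234:5678::/64"),
]

def source_ip_allowed(source_ip):
    """Mimic the IpAddress condition: allow if the address is in any range."""
    addr = ipaddress.ip_address(source_ip)
    # Membership across IP versions is always False, so an IPv4-only
    # allow list would reject every IPv6 caller.
    return any(addr in network for network in ALLOWED)

print(source_ip_allowed("54.240.143.17"))          # True
print(source_ip_allowed("2001:db8:1234:5678::1"))  # True
print(source_ip_allowed("2001:db8:ffff::1"))       # False
```

Removing the IPv6 range from `ALLOWED` makes the second call return `False`, which models how clients lose access when they switch to IPv6 against an un-updated policy.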

Before using IPv6, you must update all relevant IAM user and bucket policies that use IP address filtering. We do not recommend using IP address filtering in bucket policies.

You can review your IAM user policies using the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/). For more information about IAM, see the [IAM User Guide](https://docs.aws.amazon.com/IAM/latest/UserGuide/). For information about editing S3 bucket policies, see [Adding a bucket policy](https://docs.aws.amazon.com//AmazonS3/latest/userguide/add-bucket-policy.html). 

## Testing IP address compatibility
<a name="ipv6-access-test-compatabilty"></a>

If you are using Linux, Unix, or macOS, you can test whether you can access a dual-stack endpoint over IPv6 by using the `curl` command, as shown in the following example:

**Example**  

```
curl -v http://s3.dualstack.us-west-2.amazonaws.com/
```
You get back information similar to the following example. If you are connected over IPv6, the connected IP address is an IPv6 address.

```
* About to connect() to s3.dualstack.us-west-2.amazonaws.com port 80 (#0)
*   Trying IPv6 address... connected
* Connected to s3.dualstack.us-west-2.amazonaws.com (IPv6 address) port 80 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.18.1 (x86_64-unknown-linux-gnu) libcurl/7.18.1 OpenSSL/1.0.1t zlib/1.2.3
> Host: s3.dualstack.us-west-2.amazonaws.com
```

If you are using Microsoft Windows 7 or Windows 10, you can test whether you can access a dual-stack endpoint over IPv6 or IPv4 by using the `ping` command as shown in the following example.

```
ping ipv6.s3.dualstack.us-west-2.amazonaws.com
```
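
Resolution can also be checked programmatically. The following Python sketch (helper name is illustrative) reports which address families a host resolves to. IP literals are used in the runnable lines because they resolve without a DNS lookup; on a dual-stack network, resolving a dual-stack endpoint should yield both families.

```python
import socket

def address_families(host, port=443):
    """Return the set of address families ('IPv4', 'IPv6') that host resolves to."""
    names = {socket.AF_INET: "IPv4", socket.AF_INET6: "IPv6"}
    infos = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
    return {names[family] for family, *_ in infos}

# IP literals resolve locally, with no DNS lookup needed:
print(address_families("198.51.100.10"))  # {'IPv4'}
print(address_families("2001:db8::1"))    # {'IPv6'}

# On a dual-stack network, a dual-stack endpoint should yield both families:
# address_families("s3.dualstack.us-west-2.amazonaws.com")
```

If the last call (uncommented, on a network with DNS access) returns only `{'IPv4'}`, your client or network is not IPv6-enabled, which is the first prerequisite listed earlier in this topic.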

# Using Amazon S3 dual-stack endpoints
<a name="dual-stack-endpoints"></a>

Amazon S3 dual-stack endpoints support requests to S3 buckets over IPv6 and IPv4. This section describes how to use dual-stack endpoints.

**Topics**
+ [Amazon S3 dual-stack endpoints](#dual-stack-endpoints-description)
+ [Using dual-stack endpoints from the AWS CLI](#dual-stack-endpoints-cli)
+ [Using dual-stack endpoints from the AWS SDKs](#dual-stack-endpoints-sdks)
+ [Using dual-stack endpoints from the REST API](#dual-stack-endpoints-examples-rest-api)

## Amazon S3 dual-stack endpoints
<a name="dual-stack-endpoints-description"></a>

When you make a request to a dual-stack endpoint, the bucket URL resolves to an IPv6 or an IPv4 address. For more information about accessing a bucket over IPv6, see [Making requests to Amazon S3 over IPv6](ipv6-access.md).

When using the REST API, you directly access an Amazon S3 endpoint by using the endpoint name (URI). You can access an S3 bucket through a dual-stack endpoint by using a virtual hosted-style or a path-style endpoint name. Amazon S3 supports only regional dual-stack endpoint names, which means that you must specify the region as part of the name. 

Use the following naming conventions for the dual-stack virtual hosted-style and path-style endpoint names:
+ Virtual hosted-style dual-stack endpoint:

  *bucketname*.s3.dualstack.*aws-region*.amazonaws.com
+ Path-style dual-stack endpoint:

  s3.dualstack.*aws-region*.amazonaws.com/*bucketname*
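
Following these conventions, endpoint URLs can be built mechanically. The following Python sketch (function name is illustrative) constructs both styles:

```python
def dual_stack_url(bucket, region, style="virtual"):
    """Build a dual-stack endpoint URL using the naming conventions above."""
    if style == "virtual":
        return f"https://{bucket}.s3.dualstack.{region}.amazonaws.com"
    if style == "path":
        return f"https://s3.dualstack.{region}.amazonaws.com/{bucket}"
    raise ValueError("style must be 'virtual' or 'path'")

print(dual_stack_url("examplebucket", "us-west-2"))
# https://examplebucket.s3.dualstack.us-west-2.amazonaws.com
print(dual_stack_url("examplebucket", "us-west-2", style="path"))
# https://s3.dualstack.us-west-2.amazonaws.com/examplebucket
```

Note that because the endpoint names are regional, the `region` value is required in both styles.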

For more information about endpoint name styles, see [Accessing and listing an Amazon S3 bucket](https://docs.aws.amazon.com//AmazonS3/latest/userguide/access-bucket-intro.html). For a list of Amazon S3 endpoints, see [Regions and Endpoints](https://docs.aws.amazon.com/general/latest/gr/s3.html) in the *AWS General Reference*.

**Important**  
You can use transfer acceleration with dual-stack endpoints. For more information, see [Getting started with Amazon S3 Transfer Acceleration ](https://docs.aws.amazon.com//AmazonS3/latest/userguide/transfer-acceleration-getting-started.html).

**Note**  
The two types of Virtual Private Cloud (VPC) endpoints that access Amazon S3 (*Interface VPC endpoints* and *Gateway VPC endpoints*) now have dual-stack support. For more information about VPC endpoints for Amazon S3, see [AWS PrivateLink for Amazon S3 ](https://docs.aws.amazon.com//AmazonS3/latest/userguide/privatelink-interface-endpoints.html).

When using the AWS Command Line Interface (AWS CLI) and AWS SDKs, you can use a parameter or flag to change to a dual-stack endpoint. You can also specify the dual-stack endpoint directly as an override of the Amazon S3 endpoint in the config file. The following sections describe how to use dual-stack endpoints from the AWS CLI and the AWS SDKs.

## Using dual-stack endpoints from the AWS CLI
<a name="dual-stack-endpoints-cli"></a>

This section provides examples of AWS CLI commands used to make requests to a dual-stack endpoint. For instructions on setting up the AWS CLI, see [Developing with Amazon S3 using the AWS CLI](setup-aws-cli.md).

To direct all Amazon S3 requests made by the `s3` and `s3api` AWS CLI commands to the dual-stack endpoint for the specified region, set the configuration value `use_dualstack_endpoint` to `true` in a profile in your AWS config file. You specify the region in the config file or in a command by using the `--region` option.

When using dual-stack endpoints with the AWS CLI, both `path` and `virtual` addressing styles are supported. The addressing style, set in the config file, controls whether the bucket name appears in the hostname or as part of the URL path. By default, the CLI attempts to use the virtual style where possible, but falls back to the path style when necessary. For more information, see [AWS CLI Amazon S3 Configuration](https://docs.aws.amazon.com/cli/latest/topic/s3-config.html).

You can also make configuration changes by using a command, as shown in the following example, which sets `use_dualstack_endpoint` to `true` and `addressing_style` to `virtual` in the default profile.

```
$ aws configure set default.s3.use_dualstack_endpoint true
$ aws configure set default.s3.addressing_style virtual
```

If you want to use a dual-stack endpoint for specified AWS CLI commands only (not all commands), you can use either of the following methods: 
+ You can use the dual-stack endpoint per command by setting the `--endpoint-url` parameter to `https://s3.dualstack.aws-region.amazonaws.com` or `http://s3.dualstack.aws-region.amazonaws.com` for any `s3` or `s3api` command.

  ```
  $ aws s3api list-objects --bucket bucketname --endpoint-url https://s3.dualstack.aws-region.amazonaws.com
  ```
+ You can set up separate profiles in your AWS config file. For example, create one profile that sets `use_dualstack_endpoint` to `true` and a profile that does not set it. When you run a command, specify which profile to use, depending on whether you want to use the dual-stack endpoint.
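
For example, a two-profile setup in `~/.aws/config` might look like the following (the profile name `dualstack` is illustrative):

```
[default]
region = us-west-2

[profile dualstack]
region = us-west-2
s3 =
    use_dualstack_endpoint = true
    addressing_style = virtual
```

You could then opt in per command, for example with `aws s3 ls s3://bucketname --profile dualstack`, while commands run under the default profile continue to use the standard endpoint.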

**Note**  
When using the AWS CLI, you currently cannot use transfer acceleration with dual-stack endpoints. However, support for the AWS CLI is coming soon. For more information, see [Enabling and using S3 Transfer Acceleration](https://docs.aws.amazon.com//AmazonS3/latest/userguide/transfer-acceleration.html#transfer-acceleration-requirements).

## Using dual-stack endpoints from the AWS SDKs
<a name="dual-stack-endpoints-sdks"></a>

This section provides examples of how to access a dual-stack endpoint by using the AWS SDKs. 

### AWS SDK for Java dual-stack endpoint example
<a name="dual-stack-endpoints-examples-java"></a>

The following example shows how to enable dual-stack endpoints when creating an Amazon S3 client using the AWS SDK for Java.

For instructions on creating and testing a working Java sample, see [Getting Started](https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/getting-started.html) in the *AWS SDK for Java Developer Guide*.

```
import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class DualStackEndpoints {

    public static void main(String[] args) {
        Regions clientRegion = Regions.DEFAULT_REGION;
        String bucketName = "*** Bucket name ***";

        try {
            // Create an Amazon S3 client with dual-stack endpoints enabled.
            AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                    .withCredentials(new ProfileCredentialsProvider())
                    .withRegion(clientRegion)
                    .withDualstackEnabled(true)
                    .build();

            s3Client.listObjects(bucketName);
        } catch (AmazonServiceException e) {
            // The call was transmitted successfully, but Amazon S3 couldn't process
            // it, so it returned an error response.
            e.printStackTrace();
        } catch (SdkClientException e) {
            // Amazon S3 couldn't be contacted for a response, or the client
            // couldn't parse the response from Amazon S3.
            e.printStackTrace();
        }
    }
}
```

If you are using the AWS SDK for Java on Windows, you might have to set the following Java virtual machine (JVM) property: 

```
java.net.preferIPv6Addresses=true
```

### AWS SDK for .NET dual-stack endpoint example
<a name="dual-stack-endpoints-examples-dotnet"></a>

When using the AWS SDK for .NET, you use the `AmazonS3Config` class to enable the use of a dual-stack endpoint, as shown in the following example.

```
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
    class DualStackEndpointTest
    {
        private const string bucketName = "*** bucket name ***";
        // Specify your bucket region (an example region is shown).
        private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
        private static IAmazonS3 client;

        public static void Main()
        {
            var config = new AmazonS3Config
            {
                UseDualstackEndpoint = true,
                RegionEndpoint = bucketRegion
            };
            client = new AmazonS3Client(config);
            Console.WriteLine("Listing objects stored in a bucket");
            ListingObjectsAsync().Wait();
        }

        private static async Task ListingObjectsAsync()
        {
            try
            {
                var request = new ListObjectsV2Request
                {
                    BucketName = bucketName,
                    MaxKeys = 10
                };
                ListObjectsV2Response response;
                do
                {
                    response = await client.ListObjectsV2Async(request);

                    // Process the response.
                    foreach (S3Object entry in response.S3Objects)
                    {
                        Console.WriteLine("key = {0} size = {1}",
                            entry.Key, entry.Size);
                    }
                    Console.WriteLine("Next Continuation Token: {0}", response.NextContinuationToken);
                    request.ContinuationToken = response.NextContinuationToken;
                } while (response.IsTruncated == true);
            }
            catch (AmazonS3Exception amazonS3Exception)
            {
                Console.WriteLine("An AmazonS3Exception was thrown. Exception: " + amazonS3Exception.ToString());
            }
            catch (Exception e)
            {
                Console.WriteLine("Exception: " + e.ToString());
            }
        }
    }
}
```

For information about setting up and running the code examples, see [Getting Started with the AWS SDK for .NET](https://docs.aws.amazon.com/sdk-for-net/latest/developer-guide/net-dg-setup.html) in the *AWS SDK for .NET Developer Guide*. 

## Using dual-stack endpoints from the REST API
<a name="dual-stack-endpoints-examples-rest-api"></a>

For information about making requests to dual-stack endpoints by using the REST API, see [Making requests to dual-stack endpoints by using the REST API](RESTAPI.md#rest-api-dual-stack).