
Creating a bucket

To upload your data to Amazon S3, you must first create an Amazon S3 bucket in one of the AWS Regions. The AWS account that creates the bucket owns it. When you create a bucket, you must choose a bucket name and Region. You can optionally choose other storage management options for the bucket. After you create a bucket, you cannot change the bucket name or Region. For information about naming buckets, see General purpose bucket naming rules.

By default, you can create up to 10,000 general purpose buckets per AWS account. To request a quota increase for general purpose buckets, visit the Service Quotas console.

You can store any number of objects in a bucket. For a list of restrictions and limitations related to Amazon S3 buckets, see Bucket quotas, limitations, and restrictions.

S3 Object Ownership is an Amazon S3 bucket-level setting that you can use both to control ownership of objects that are uploaded to your bucket and to disable or enable access control lists (ACLs). By default, Object Ownership is set to the Bucket owner enforced setting, and all ACLs are disabled. With ACLs disabled, the bucket owner owns every object in the bucket and manages access to data exclusively by using policies.

For more information, see Controlling ownership of objects and disabling ACLs for your bucket.

Server-side encryption with Amazon S3 managed keys (SSE-S3) is the base level of encryption configuration for every bucket in Amazon S3. All new objects uploaded to an S3 bucket are automatically encrypted with SSE-S3 as the base level of encryption setting. If you want to use a different type of default encryption, you can also specify server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS) or customer-provided keys (SSE-C) to encrypt your data. For more information, see Setting default server-side encryption behavior for Amazon S3 buckets.

You can use the Amazon S3 console, Amazon S3 APIs, AWS CLI, or AWS SDKs to create a bucket. For more information about the permissions required to create a bucket, see CreateBucket in the Amazon Simple Storage Service API Reference.

  1. Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/.

  2. In the navigation bar on the top of the page, choose the name of the currently displayed AWS Region. Next, choose the Region in which you want to create a bucket.

    Note

    To minimize latency and costs and address regulatory requirements, choose a Region close to you. Objects stored in a Region never leave that Region unless you explicitly transfer them to another Region. For a list of Amazon S3 AWS Regions, see AWS service endpoints in the Amazon Web Services General Reference.

  3. In the left navigation pane, choose Buckets.

  4. Choose Create bucket.

    The Create bucket page opens.

  5. Under General configuration, view the AWS Region where your bucket will be created.

  6. Under Bucket type, choose General purpose.

  7. For Bucket name, enter a name for your bucket.

    The bucket name must:

    • Be unique within a partition. A partition is a grouping of Regions. AWS currently has three partitions: aws (Standard Regions), aws-cn (China Regions), and aws-us-gov (AWS GovCloud (US) Regions).

    • Be between 3 and 63 characters long.

    • Consist only of lowercase letters, numbers, dots (.), and hyphens (-). For best compatibility, we recommend that you avoid using dots (.) in bucket names, except for buckets that are used only for static website hosting.

    • Begin and end with a letter or number.

    After you create the bucket, you cannot change its name. The AWS account that creates the bucket owns it. For more information about naming buckets, see General purpose bucket naming rules.

    Important

    Avoid including sensitive information, such as account numbers, in the bucket name. The bucket name is visible in the URLs that point to the objects in the bucket.
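
    If you want to pre-check a candidate name against these rules before creating the bucket, a rough shell sketch such as the following can help. The regular expression and the example name are illustrative only; the General purpose bucket naming rules remain authoritative.

    # Rough pre-check: 3 to 63 characters, only lowercase letters, numbers, dots, and hyphens,
    # beginning and ending with a letter or number. (Illustrative only.)
    name="amzn-s3-demo-bucket"
    if printf '%s' "$name" | grep -Eq '^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$'; then
        echo "Name looks valid: $name"
    else
        echo "Name violates the general purpose bucket naming rules: $name"
    fi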

  8. The AWS Management Console allows you to copy an existing bucket's settings to your new bucket. If you do not want to copy the settings of an existing bucket, skip to the next step.

    Note

    This option:

    • Is available only in the console and is not available in the AWS CLI

    • Is not available for directory buckets

    • Does not copy the bucket policy from the existing bucket to the new bucket

    To copy an existing bucket's settings, under Copy settings from existing bucket, select Choose bucket. The Choose bucket window opens. Find the bucket with the settings that you would like to copy, and select Choose bucket. The Choose bucket window closes, and the Create bucket window re-opens.

    Under Copy settings from existing bucket, you will now see the name of the bucket you selected. You will also see a Restore defaults option that you can use to remove the copied bucket settings. Review the remaining bucket settings on the Create bucket page. You will see that they now match the settings of the bucket that you selected. You can skip to the final step.

  9. Under Object Ownership, to disable or enable ACLs and control ownership of objects uploaded in your bucket, choose one of the following settings:

    ACLs disabled
    • Bucket owner enforced (default) – ACLs are disabled, and the bucket owner automatically owns and has full control over every object in the bucket. ACLs no longer affect access permissions to data in the S3 bucket. The bucket uses policies exclusively to define access control.

      By default, ACLs are disabled. A majority of modern use cases in Amazon S3 no longer require the use of ACLs. We recommend that you keep ACLs disabled, except in unusual circumstances where you must control access for each object individually. For more information, see Controlling ownership of objects and disabling ACLs for your bucket.

    ACLs enabled
    • Bucket owner preferred – The bucket owner owns and has full control over new objects that other accounts write to the bucket with the bucket-owner-full-control canned ACL.

      If you apply the Bucket owner preferred setting and want to require that all Amazon S3 uploads include the bucket-owner-full-control canned ACL, you can add a bucket policy that allows only object uploads that use this ACL.

    • Object writer – The AWS account that uploads an object owns the object, has full control over it, and can grant other users access to it through ACLs.

    Note

    The default setting is Bucket owner enforced. To apply the default setting and keep ACLs disabled, only the s3:CreateBucket permission is needed. To enable ACLs, you must have the s3:PutBucketOwnershipControls permission.
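
    The console applies this setting when the bucket is created. Outside the console, the same control is available through the PutBucketOwnershipControls API. The following AWS CLI sketch keeps ACLs disabled on an existing bucket; the bucket name is a placeholder.

    # Enforce bucket owner ownership, which keeps ACLs disabled (the default for new buckets).
    aws s3api put-bucket-ownership-controls \
        --bucket amzn-s3-demo-bucket \
        --ownership-controls 'Rules=[{ObjectOwnership=BucketOwnerEnforced}]'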

  10. Under Block Public Access settings for this bucket, choose the Block Public Access settings that you want to apply to the bucket.

    By default, all four Block Public Access settings are enabled. We recommend that you keep all settings enabled, unless you know that you need to turn off one or more of them for your specific use case. For more information about blocking public access, see Blocking public access to your Amazon S3 storage.

    Note

    To enable all Block Public Access settings, only the s3:CreateBucket permission is required. To turn off any Block Public Access settings, you must have the s3:PutBucketPublicAccessBlock permission.
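
    If you manage these settings outside the console, the PutPublicAccessBlock API applies the same configuration. A minimal AWS CLI sketch with a placeholder bucket name:

    # Enable all four Block Public Access settings (the default for new buckets).
    aws s3api put-public-access-block \
        --bucket amzn-s3-demo-bucket \
        --public-access-block-configuration \
            BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true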

  11. (Optional) Under Bucket Versioning, you can choose whether to keep multiple variants of objects in your bucket. For more information about versioning, see Retaining multiple versions of objects with S3 Versioning.

    To disable or enable versioning on your bucket, choose either Disable or Enable.
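
    Versioning can also be turned on after the bucket exists by using the PutBucketVersioning API. A minimal AWS CLI sketch with a placeholder bucket name:

    # Enable S3 Versioning on an existing bucket.
    aws s3api put-bucket-versioning \
        --bucket amzn-s3-demo-bucket \
        --versioning-configuration Status=Enabled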

  12. (Optional) Under Tags, you can choose to add tags to your bucket. Tags are key-value pairs used to categorize storage.

    To add a bucket tag, enter a Key and, optionally, a Value, and then choose Add Tag.
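
    Tags can also be applied with the PutBucketTagging API, which replaces the bucket's entire tag set. In the following AWS CLI sketch, the bucket name, key, and value are placeholders.

    # Replace the bucket's tag set with a single example key-value pair.
    aws s3api put-bucket-tagging \
        --bucket amzn-s3-demo-bucket \
        --tagging 'TagSet=[{Key=project,Value=demo}]'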

  13. Under Default encryption, choose Edit.

  14. To configure default encryption, under Encryption type, choose one of the following:

    • Amazon S3 managed key (SSE-S3)

    • AWS Key Management Service key (SSE-KMS)

      Important

      If you use the SSE-KMS option for your default encryption configuration, you are subject to the requests per second (RPS) quota of AWS KMS. For more information about AWS KMS quotas and how to request a quota increase, see Quotas in the AWS Key Management Service Developer Guide.

    By default, buckets and new objects are encrypted with server-side encryption using Amazon S3 managed keys (SSE-S3) as the base level of encryption configuration. For more information about default encryption, see Setting default server-side encryption behavior for Amazon S3 buckets.

    For more information about using Amazon S3 server-side encryption to encrypt your data, see Using server-side encryption with Amazon S3 managed keys (SSE-S3).
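
    If you configure default encryption programmatically, the PutBucketEncryption API sets the same behavior. A minimal AWS CLI sketch that explicitly selects SSE-S3 for a placeholder bucket:

    # Explicitly set SSE-S3 (AES256) as the bucket's default encryption.
    aws s3api put-bucket-encryption \
        --bucket amzn-s3-demo-bucket \
        --server-side-encryption-configuration \
            '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'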

  15. If you chose AWS Key Management Service key (SSE-KMS), do the following:

    1. Under AWS KMS key, specify your KMS key in one of the following ways:

      • To choose from a list of available KMS keys, choose Choose from your AWS KMS keys, and choose your KMS key from the list of available keys.

        Both the AWS managed key (aws/s3) and your customer managed keys appear in this list. For more information about customer managed keys, see Customer keys and AWS keys in the AWS Key Management Service Developer Guide.

      • To enter the KMS key ARN, choose Enter AWS KMS key ARN, and enter your KMS key ARN in the field that appears.

      • To create a new customer managed key in the AWS KMS console, choose Create a KMS key.

        For more information about creating an AWS KMS key, see Creating keys in the AWS Key Management Service Developer Guide.

      Important

      You can use only KMS keys that are available in the same AWS Region as the bucket. The Amazon S3 console lists only the first 100 KMS keys in the same Region as the bucket. To use a KMS key that is not listed, you must enter your KMS key ARN. If you want to use a KMS key that is owned by a different account, you must first have permission to use the key and then you must enter the KMS key ARN. For more information about cross-account permissions for KMS keys, see Creating KMS keys that other accounts can use in the AWS Key Management Service Developer Guide. For more information about SSE-KMS, see Specifying server-side encryption with AWS KMS (SSE-KMS).

      When you use an AWS KMS key for server-side encryption in Amazon S3, you must choose a symmetric encryption KMS key. Amazon S3 supports only symmetric encryption KMS keys and not asymmetric KMS keys. For more information, see Identifying symmetric and asymmetric KMS keys in the AWS Key Management Service Developer Guide.

      For more information about creating an AWS KMS key, see Creating keys in the AWS Key Management Service Developer Guide. For more information about using AWS KMS with Amazon S3, see Using server-side encryption with AWS KMS keys (SSE-KMS).

    2. When you configure your bucket to use default encryption with SSE-KMS, you can also enable S3 Bucket Keys. S3 Bucket Keys lower the cost of encryption by decreasing request traffic from Amazon S3 to AWS KMS. For more information, see Reducing the cost of SSE-KMS with Amazon S3 Bucket Keys.

      To use S3 Bucket Keys, under Bucket Key, choose Enable.
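
      The same combination (SSE-KMS plus an S3 Bucket Key) can be set through the PutBucketEncryption API. In the following AWS CLI sketch, the bucket name, account ID, and KMS key ARN are placeholders.

      # Set SSE-KMS with a customer managed key as the default encryption and enable S3 Bucket Keys.
      aws s3api put-bucket-encryption \
          --bucket amzn-s3-demo-bucket \
          --server-side-encryption-configuration '{
              "Rules": [{
                  "ApplyServerSideEncryptionByDefault": {
                      "SSEAlgorithm": "aws:kms",
                      "KMSMasterKeyID": "arn:aws:kms:us-west-2:111122223333:key/EXAMPLE-KEY-ID"
                  },
                  "BucketKeyEnabled": true
              }]
          }'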

  16. (Optional) If you want to enable S3 Object Lock, do the following:

    1. Choose Advanced settings.

      Important

      Enabling Object Lock also enables versioning for the bucket. After you enable Object Lock, you must configure the Object Lock default retention and legal hold settings to protect new objects from being deleted or overwritten.

    2. If you want to enable Object Lock, choose Enable, read the warning that appears, and acknowledge it.

    For more information, see Locking objects with Object Lock.

    Note

    To create an Object Lock enabled bucket, you must have the following permissions: s3:CreateBucket, s3:PutBucketVersioning, and s3:PutBucketObjectLockConfiguration.
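
    With the AWS CLI, you can enable Object Lock at creation time by adding the --object-lock-enabled-for-bucket flag to create-bucket. A minimal sketch with a placeholder bucket name and Region:

    # Create a bucket with Object Lock enabled; this also enables versioning on the bucket.
    aws s3api create-bucket \
        --bucket amzn-s3-demo-bucket \
        --region us-west-1 \
        --create-bucket-configuration LocationConstraint=us-west-1 \
        --object-lock-enabled-for-bucket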

  17. Choose Create bucket.

When you use the AWS SDKs to create a bucket, you must create a client and then use the client to send a request to create a bucket. As a best practice, you should create your client and bucket in the same AWS Region. If you don't specify a Region when you create a client or a bucket, Amazon S3 uses the default Region, US East (N. Virginia). If you want to constrain the bucket creation to a specific AWS Region, use the LocationConstraint condition key.

To create a client to access a dual-stack endpoint, you must specify an AWS Region. For more information, see Using Amazon S3 dual-stack endpoints in the Amazon S3 API Reference. For a list of available AWS Regions, see Regions and endpoints in the AWS General Reference.

When you create a client, the Region maps to the Region-specific endpoint. The client uses this endpoint to communicate with Amazon S3: s3.region.amazonaws.com. If your Region launched after March 20, 2019, your client and bucket must be in the same Region. However, you can use a client in the US East (N. Virginia) Region to create a bucket in any Region that launched before March 20, 2019. For more information, see Legacy endpoints.

These AWS SDK code examples perform the following tasks:

  • Create a client by explicitly specifying an AWS Region – In the example, the client uses the s3.us-west-2.amazonaws.com endpoint to communicate with Amazon S3. You can specify any AWS Region. For a list of AWS Regions, see Regions and endpoints in the AWS General Reference.

  • Send a create bucket request by specifying only a bucket name – The client sends a request to Amazon S3 to create the bucket in the Region where you created a client.

  • Retrieve information about the location of the bucket – Amazon S3 stores bucket location information in the location subresource that is associated with the bucket.

Java
Example Create a bucket that uses a globally unique identifier (GUID) in the bucket name

The following example shows you how to create a bucket with a GUID at the end of the bucket name in the US East (N. Virginia) Region (us-east-1) by using the AWS SDK for Java. For information about other AWS SDKs, see Tools to Build on AWS.

import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.CreateBucketRequest;

import java.util.UUID;

public class CreateBucketWithUUID {
    public static void main(String[] args) {
        // Build a client for the US East (N. Virginia) Region.
        final AmazonS3 s3 = AmazonS3ClientBuilder.standard()
                .withRegion(Regions.US_EAST_1)
                .build();

        // Append a GUID (with the hyphens removed) to keep the bucket name unique.
        String bucketName = "amzn-s3-demo-bucket" + UUID.randomUUID().toString().replace("-", "");
        CreateBucketRequest createRequest = new CreateBucketRequest(bucketName);

        System.out.println(bucketName);
        s3.createBucket(createRequest);
    }
}
Example Create a bucket

This example shows you how to create an Amazon S3 bucket using the AWS SDK for Java. For instructions on creating and testing a working sample, see Getting Started in the AWS SDK for Java Developer Guide.

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.CreateBucketRequest;
import com.amazonaws.services.s3.model.GetBucketLocationRequest;

import java.io.IOException;

public class CreateBucket2 {
    public static void main(String[] args) throws IOException {
        Regions clientRegion = Regions.DEFAULT_REGION;
        String bucketName = "*** Bucket name ***";

        try {
            AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                    .withCredentials(new ProfileCredentialsProvider())
                    .withRegion(clientRegion)
                    .build();

            if (!s3Client.doesBucketExistV2(bucketName)) {
                // Because the CreateBucketRequest object doesn't specify a region, the
                // bucket is created in the region specified in the client.
                s3Client.createBucket(new CreateBucketRequest(bucketName));

                // Verify that the bucket was created by retrieving it and checking its
                // location.
                String bucketLocation = s3Client.getBucketLocation(new GetBucketLocationRequest(bucketName));
                System.out.println("Bucket location: " + bucketLocation);
            }
        } catch (AmazonServiceException e) {
            // The call was transmitted successfully, but Amazon S3 couldn't process
            // it and returned an error response.
            e.printStackTrace();
        } catch (SdkClientException e) {
            // Amazon S3 couldn't be contacted for a response, or the client
            // couldn't parse the response from Amazon S3.
            e.printStackTrace();
        }
    }
}
.NET

For information about how to create and test a working sample, see AWS SDK for .NET Version 3 API Reference.

using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using Amazon.S3.Util;
using System;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
    class CreateBucketTest
    {
        private const string bucketName = "*** bucket name ***";
        // Specify your bucket region (an example region is shown).
        private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
        private static IAmazonS3 s3Client;

        public static void Main()
        {
            s3Client = new AmazonS3Client(bucketRegion);
            CreateBucketAsync().Wait();
        }

        static async Task CreateBucketAsync()
        {
            try
            {
                if (!(await AmazonS3Util.DoesS3BucketExistAsync(s3Client, bucketName)))
                {
                    var putBucketRequest = new PutBucketRequest
                    {
                        BucketName = bucketName,
                        UseClientRegion = true
                    };

                    PutBucketResponse putBucketResponse = await s3Client.PutBucketAsync(putBucketRequest);
                }
                // Retrieve the bucket location.
                string bucketLocation = await FindBucketLocationAsync(s3Client);
            }
            catch (AmazonS3Exception e)
            {
                Console.WriteLine("Error encountered on server. Message:'{0}' when writing an object", e.Message);
            }
            catch (Exception e)
            {
                Console.WriteLine("Unknown encountered on server. Message:'{0}' when writing an object", e.Message);
            }
        }

        static async Task<string> FindBucketLocationAsync(IAmazonS3 client)
        {
            string bucketLocation;
            var request = new GetBucketLocationRequest()
            {
                BucketName = bucketName
            };
            GetBucketLocationResponse response = await client.GetBucketLocationAsync(request);
            bucketLocation = response.Location.ToString();
            return bucketLocation;
        }
    }
}
Ruby

For information about how to create and test a working sample, see AWS SDK for Ruby - Version 3.

require 'aws-sdk-s3'

# Wraps Amazon S3 bucket actions.
class BucketCreateWrapper
  attr_reader :bucket

  # @param bucket [Aws::S3::Bucket] An Amazon S3 bucket initialized with a name. This is a client-side object until
  #                                 create is called.
  def initialize(bucket)
    @bucket = bucket
  end

  # Creates an Amazon S3 bucket in the specified AWS Region.
  #
  # @param region [String] The Region where the bucket is created.
  # @return [Boolean] True when the bucket is created; otherwise, false.
  def create?(region)
    @bucket.create(create_bucket_configuration: { location_constraint: region })
    true
  rescue Aws::Errors::ServiceError => e
    puts "Couldn't create bucket. Here's why: #{e.message}"
    false
  end

  # Gets the Region where the bucket is located.
  #
  # @return [String] The location of the bucket.
  def location
    if @bucket.nil?
      'None. You must create a bucket before you can get its location!'
    else
      @bucket.client.get_bucket_location(bucket: @bucket.name).location_constraint
    end
  rescue Aws::Errors::ServiceError => e
    "Couldn't get the location of #{@bucket.name}. Here's why: #{e.message}"
  end
end

# Example usage:
def run_demo
  region = 'us-west-2'
  wrapper = BucketCreateWrapper.new(Aws::S3::Bucket.new("amzn-s3-demo-bucket-#{Random.uuid}"))
  return unless wrapper.create?(region)

  puts "Created bucket #{wrapper.bucket.name}."
  puts "Your bucket's region is: #{wrapper.location}"
end

run_demo if $PROGRAM_NAME == __FILE__

The following AWS CLI example creates a bucket in the US West (N. California) Region (us-west-1) with an example bucket name that uses a globally unique identifier (GUID).

aws s3api create-bucket \
    --bucket amzn-s3-demo-bucket1$(uuidgen | tr -d - | tr '[:upper:]' '[:lower:]') \
    --region us-west-1 \
    --create-bucket-configuration LocationConstraint=us-west-1

For more information, see create-bucket in the AWS CLI Command Reference.
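
To confirm where a bucket was created, you can read its location subresource afterward, as described in the task list above. A minimal AWS CLI sketch with a placeholder bucket name:

# Returns the bucket's LocationConstraint; buckets in US East (N. Virginia) return null.
aws s3api get-bucket-location \
    --bucket amzn-s3-demo-bucket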