

# Identity and Access Management for Amazon S3
<a name="security-iam"></a>

AWS Identity and Access Management (IAM) is an AWS service that helps an administrator securely control access to AWS resources. IAM administrators control who can be *authenticated* (signed in) and *authorized* (have permissions) to use Amazon S3 resources. IAM is an AWS service that you can use with no additional charge.

For more information about the permissions to S3 API operations by S3 resource types, see [Required permissions for Amazon S3 API operations](using-with-s3-policy-actions.md).

**Note**  
For more information about using the Amazon S3 Express One Zone storage class with directory buckets, see [S3 Express One Zone](directory-bucket-high-performance.md#s3-express-one-zone) and [Working with directory buckets](directory-buckets-overview.md).

**Topics**
+ [Audience](#security_iam_audience)
+ [Authenticating with identities](#security_iam_authentication)
+ [Managing access using policies](#security_iam_access-manage)
+ [How Amazon S3 works with IAM](security_iam_service-with-iam.md)
+ [How Amazon S3 authorizes a request](how-s3-evaluates-access-control.md)
+ [Required permissions for Amazon S3 API operations](using-with-s3-policy-actions.md)
+ [Policies and permissions in Amazon S3](access-policy-language-overview.md)
+ [Bucket policies for Amazon S3](bucket-policies.md)
+ [Identity-based policies for Amazon S3](security_iam_id-based-policy-examples.md)
+ [Walkthroughs that use policies to manage access to your Amazon S3 resources](example-walkthroughs-managing-access.md)
+ [Using service-linked roles for Amazon S3 Storage Lens](using-service-linked-roles.md)
+ [Troubleshooting Amazon S3 identity and access](security_iam_troubleshoot.md)
+ [AWS managed policies for Amazon S3](security-iam-awsmanpol.md)

## Audience
<a name="security_iam_audience"></a>

How you use AWS Identity and Access Management (IAM) differs based on your role:
+ **Service user** - request permissions from your administrator if you cannot access features (see [Troubleshooting Amazon S3 identity and access](security_iam_troubleshoot.md))
+ **Service administrator** - determine user access and submit permission requests (see [How Amazon S3 works with IAM](security_iam_service-with-iam.md))
+ **IAM administrator** - write policies to manage access (see [Identity-based policies for Amazon S3](security_iam_id-based-policy-examples.md))

## Authenticating with identities
<a name="security_iam_authentication"></a>

Authentication is how you sign in to AWS using your identity credentials. You must be authenticated as the AWS account root user, an IAM user, or by assuming an IAM role.

You can sign in as a federated identity using credentials from an identity source like AWS IAM Identity Center (IAM Identity Center), single sign-on authentication, or Google/Facebook credentials. For more information about signing in, see [How to sign in to your AWS account](https://docs.aws.amazon.com/signin/latest/userguide/how-to-sign-in.html) in the *AWS Sign-In User Guide*.

For programmatic access, AWS provides an SDK and CLI to cryptographically sign requests. For more information, see [AWS Signature Version 4 for API requests](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_sigv.html) in the *IAM User Guide*.
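Signature Version 4 derives a per-request signing key from your secret access key by chaining HMAC-SHA256 over the date, Region, and service of the credential scope. The SDKs and CLI do this for you; the following Python sketch shows only the key-derivation step, using the placeholder secret key that appears in AWS documentation examples (not a real credential):

```python
import hashlib
import hmac

def derive_signing_key(secret_key: str, date: str, region: str, service: str) -> bytes:
    """Derive a SigV4 signing key by chaining HMAC-SHA256 over the credential scope."""
    k_date = hmac.new(("AWS4" + secret_key).encode(), date.encode(), hashlib.sha256).digest()
    k_region = hmac.new(k_date, region.encode(), hashlib.sha256).digest()
    k_service = hmac.new(k_region, service.encode(), hashlib.sha256).digest()
    return hmac.new(k_service, b"aws4_request", hashlib.sha256).digest()

# Documentation placeholder secret key; the result signs the string-to-sign
# for one date/Region/service scope.
key = derive_signing_key("wJalrXUtnFEMI/K7MDENG+bPxRfiCYEXAMPLEKEY", "20130806", "us-east-1", "s3")
```

Because the derived key is scoped to a single date, Region, and service, a leaked signature cannot be replayed against other services or Regions.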

### AWS account root user
<a name="security_iam_authentication-rootuser"></a>

 When you create an AWS account, you begin with one sign-in identity called the AWS account *root user* that has complete access to all AWS services and resources. We strongly recommend that you don't use the root user for everyday tasks. For tasks that require root user credentials, see [Tasks that require root user credentials](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_root-user.html#root-user-tasks) in the *IAM User Guide*. 

### Federated identity
<a name="security_iam_authentication-federated"></a>

As a best practice, require human users to use federation with an identity provider to access AWS services using temporary credentials.

A *federated identity* is a user from your enterprise directory, web identity provider, or Directory Service that accesses AWS services using credentials from an identity source. Federated identities assume roles that provide temporary credentials.

For centralized access management, we recommend AWS IAM Identity Center. For more information, see [What is IAM Identity Center?](https://docs.aws.amazon.com/singlesignon/latest/userguide/what-is.html) in the *AWS IAM Identity Center User Guide*.

### IAM users and groups
<a name="security_iam_authentication-iamuser"></a>

An *[IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users.html)* is an identity with specific permissions for a single person or application. We recommend using temporary credentials instead of IAM users with long-term credentials. For more information, see [Require human users to use federation with an identity provider to access AWS using temporary credentials](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#bp-users-federation-idp) in the *IAM User Guide*.

An *[IAM group](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_groups.html)* specifies a collection of IAM users and makes permissions easier to manage for large sets of users. For more information, see [Use cases for IAM users](https://docs.aws.amazon.com/IAM/latest/UserGuide/gs-identities-iam-users.html) in the *IAM User Guide*.

### IAM roles
<a name="security_iam_authentication-iamrole"></a>

An *[IAM role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html)* is an identity with specific permissions that provides temporary credentials. You can assume a role by [switching from a user to an IAM role (console)](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-console.html) or by calling an AWS CLI or AWS API operation. For more information, see [Methods to assume a role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_manage-assume.html) in the *IAM User Guide*.

IAM roles are useful for federated user access, temporary IAM user permissions, cross-account access, cross-service access, and applications running on Amazon EC2. For more information, see [Cross account resource access in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies-cross-account-resource-access.html) in the *IAM User Guide*.

## Managing access using policies
<a name="security_iam_access-manage"></a>

You control access in AWS by creating policies and attaching them to AWS identities or resources. A policy defines permissions when associated with an identity or resource. AWS evaluates these policies when a principal makes a request. Most policies are stored in AWS as JSON documents. For more information about JSON policy documents, see [Overview of JSON policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#access_policies-json) in the *IAM User Guide*.

Using policies, administrators specify who has access to what by defining which **principal** can perform **actions** on what **resources**, and under what **conditions**.

By default, users and roles have no permissions. An IAM administrator creates IAM policies and adds them to roles, which users can then assume. IAM policies define permissions regardless of the method used to perform the operation.

### Identity-based policies
<a name="security_iam_access-manage-id-based-policies"></a>

Identity-based policies are JSON permissions policy documents that you attach to an identity (user, group, or role). These policies control what actions identities can perform, on which resources, and under what conditions. To learn how to create an identity-based policy, see [Define custom IAM permissions with customer managed policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create.html) in the *IAM User Guide*.

Identity-based policies can be *inline policies* (embedded directly into a single identity) or *managed policies* (standalone policies attached to multiple identities). To learn how to choose between managed and inline policies, see [Choose between managed policies and inline policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies-choosing-managed-or-inline.html) in the *IAM User Guide*.

### Resource-based policies
<a name="security_iam_access-manage-resource-based-policies"></a>

Resource-based policies are JSON policy documents that you attach to a resource. Examples include IAM *role trust policies* and Amazon S3 *bucket policies*. In services that support resource-based policies, service administrators can use them to control access to a specific resource. You must [specify a principal](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_principal.html) in a resource-based policy.

Resource-based policies are inline policies that are located in that service. You can't use AWS managed policies from IAM in a resource-based policy.

### Other policy types
<a name="security_iam_access-manage-other-policies"></a>

AWS supports additional policy types that can set the maximum permissions granted by more common policy types:
+ **Permissions boundaries** – Set the maximum permissions that an identity-based policy can grant to an IAM entity. For more information, see [Permissions boundaries for IAM entities](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_boundaries.html) in the *IAM User Guide*.
+ **Service control policies (SCPs)** – Specify the maximum permissions for an organization or organizational unit in AWS Organizations. For more information, see [Service control policies](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html) in the *AWS Organizations User Guide*.
+ **Resource control policies (RCPs)** – Set the maximum available permissions for resources in your accounts. For more information, see [Resource control policies (RCPs)](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_rcps.html) in the *AWS Organizations User Guide*.
+ **Session policies** – Advanced policies passed as a parameter when creating a temporary session for a role or federated user. For more information, see [Session policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#policies_session) in the *IAM User Guide*.

### Multiple policy types
<a name="security_iam_access-manage-multiple-policies"></a>

When multiple types of policies apply to a request, the resulting permissions are more complicated to understand. To learn how AWS determines whether to allow a request when multiple policy types are involved, see [Policy evaluation logic](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_evaluation-logic.html) in the *IAM User Guide*.
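At a high level, the precedence is: an explicit `Deny` in any applicable policy overrides everything, otherwise at least one `Allow` is required, and the default is an implicit deny. The following is a simplified sketch of that precedence only, not the full AWS evaluator (which also intersects permissions boundaries, SCPs, RCPs, and session policies):

```python
def evaluate(effects: list[str]) -> str:
    """Combine the Effect values of all statements that match a request."""
    if "Deny" in effects:       # an explicit deny always wins
        return "Deny"
    if "Allow" in effects:      # otherwise at least one allow is required
        return "Allow"
    return "ImplicitDeny"       # default when no statement matches
```

For example, a bucket policy `Allow` combined with an identity-policy `Deny` still results in `Deny`.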

# How Amazon S3 works with IAM
<a name="security_iam_service-with-iam"></a>

Before you use IAM to manage access to Amazon S3, learn what IAM features are available to use with Amazon S3.






**IAM features you can use with Amazon S3**  

| IAM feature | Amazon S3 support | 
| --- | --- | 
|  [Identity-based policies](#security_iam_service-with-iam-id-based-policies)  |   Yes  | 
|  [Resource-based policies](#security_iam_service-with-iam-resource-based-policies)  |   Yes  | 
|  [Policy actions](#security_iam_service-with-iam-id-based-policies-actions)  |   Yes  | 
|  [Policy resources](#security_iam_service-with-iam-id-based-policies-resources)  |   Yes  | 
|  [Policy condition keys (service-specific)](#security_iam_service-with-iam-id-based-policies-conditionkeys)  |   Yes  | 
|  [ACLs](#security_iam_service-with-iam-acls)  |   Yes  | 
|  [ABAC (tags in policies)](#security_iam_service-with-iam-tags)  |   Partial  | 
|  [Temporary credentials](#security_iam_service-with-iam-roles-tempcreds)  |   Yes  | 
|  [Forward access sessions (FAS)](#security_iam_service-with-iam-principal-permissions)  |   Yes  | 
|  [Service roles](#security_iam_service-with-iam-roles-service)  |   Yes  | 
|  [Service-linked roles](#security_iam_service-with-iam-roles-service-linked)  |   Partial  | 

To get a high-level view of how Amazon S3 and other AWS services work with most IAM features, see [AWS services that work with IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_aws-services-that-work-with-iam.html) in the *IAM User Guide*.

For more information about the permissions to S3 API operations by S3 resource types, see [Required permissions for Amazon S3 API operations](using-with-s3-policy-actions.md).

## Identity-based policies for Amazon S3
<a name="security_iam_service-with-iam-id-based-policies"></a>

**Supports identity-based policies:** Yes

Identity-based policies are JSON permissions policy documents that you can attach to an identity, such as an IAM user, group of users, or role. These policies control what actions users and roles can perform, on which resources, and under what conditions. To learn how to create an identity-based policy, see [Define custom IAM permissions with customer managed policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create.html) in the *IAM User Guide*.

With IAM identity-based policies, you can specify allowed or denied actions and resources as well as the conditions under which actions are allowed or denied. To learn about all of the elements that you can use in a JSON policy, see [IAM JSON policy elements reference](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements.html) in the *IAM User Guide*.

### Identity-based policy examples for Amazon S3
<a name="security_iam_service-with-iam-id-based-policies-examples"></a>

To view examples of Amazon S3 identity-based policies, see [Identity-based policies for Amazon S3](security_iam_id-based-policy-examples.md).

## Resource-based policies within Amazon S3
<a name="security_iam_service-with-iam-resource-based-policies"></a>

**Supports resource-based policies:** Yes

Resource-based policies are JSON policy documents that you attach to a resource. Examples of resource-based policies are IAM *role trust policies* and Amazon S3 *bucket policies*. In services that support resource-based policies, service administrators can use them to control access to a specific resource. For the resource where the policy is attached, the policy defines what actions a specified principal can perform on that resource and under what conditions. You must [specify a principal](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_principal.html) in a resource-based policy. Principals can include accounts, users, roles, federated users, or AWS services.

To enable cross-account access, you can specify an entire account or IAM entities in another account as the principal in a resource-based policy. For more information, see [Cross account resource access in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies-cross-account-resource-access.html) in the *IAM User Guide*.

The Amazon S3 service supports *bucket policies*, *access point policies*, and *access grants*:
+ Bucket policies are resource-based policies that are attached to an Amazon S3 bucket. A bucket policy defines which principals can perform actions on the bucket.
+ Access point policies are resource-based policies that are evaluated in conjunction with the underlying bucket policy.
+ Access grants are a simplified model for defining access permissions to data in Amazon S3 by prefix, bucket, or object. For information about S3 Access Grants, see [Managing access with S3 Access Grants](access-grants.md).

### Principals for bucket policies
<a name="s3-bucket-user-policy-specifying-principal-intro"></a>

The `Principal` element specifies the user, account, service, or other entity that is either allowed or denied access to a resource. The following are examples of specifying `Principal`. For more information, see [Principal](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_principal.html) in the *IAM User Guide*.

#### Grant permissions to an AWS account
<a name="s3-aws-account-permissions"></a>

To grant permissions to an AWS account, identify the account using the following format.

```
"AWS":"account-ARN"
```

The following are examples.

```
"Principal":{"AWS":"arn:aws:iam::AccountIDWithoutHyphens:root"}
```

```
"Principal":{"AWS":["arn:aws:iam::AccountID1WithoutHyphens:root","arn:aws:iam::AccountID2WithoutHyphens:root"]}
```

**Note**  
These examples specify the account's root user ARN, which delegates the permissions to the account as a whole. Identity-based IAM policies are still required to grant access to the specific roles and users in that account.
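If you generate policies in code rather than editing JSON by hand, you can assemble the document from its parts. A minimal sketch in Python; the function name, bucket, and account ID are illustrative placeholders:

```python
import json

def account_principal_policy(bucket: str, account_id: str, actions: list[str]) -> str:
    """Build a bucket policy granting the listed actions to an entire AWS account."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "GrantAccountAccess",
            "Effect": "Allow",
            # The account-ARN principal format shown above
            "Principal": {"AWS": f"arn:aws:iam::{account_id}:root"},
            "Action": actions,
            "Resource": f"arn:aws:s3:::{bucket}",
        }],
    }
    return json.dumps(policy, indent=2)

doc = account_principal_policy("amzn-s3-demo-bucket", "111122223333", ["s3:ListBucket"])
```

The generated string can then be passed to whatever mechanism you use to set the bucket policy.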

#### Grant permissions to an IAM user
<a name="s3-aws-user-permissions"></a>

To grant permission to an IAM user within your account, you must provide an `"AWS":"user-ARN"` name-value pair.

```
"Principal":{"AWS":"arn:aws:iam::account-number-without-hyphens:user/username"}
```

For detailed examples that provide step-by-step instructions, see [Example 1: Bucket owner granting its users bucket permissions](example-walkthroughs-managing-access-example1.md) and [Example 3: Bucket owner granting permissions to objects it does not own](example-walkthroughs-managing-access-example3.md).

**Note**  
If an IAM identity is deleted after you update your bucket policy, the bucket policy will show a unique identifier in the principal element instead of an ARN. These unique IDs are never reused, so you can safely remove principals with unique identifiers from all of your policy statements. For more information about unique identifiers, see [IAM identifiers](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.html#identifiers-unique-ids) in the *IAM User Guide*.

#### Grant anonymous permissions
<a name="s3-anonymous-permissions"></a>

**Warning**  
Use caution when granting anonymous access to your Amazon S3 bucket. When you grant anonymous access, anyone in the world can access your bucket. We highly recommend that you never grant any kind of anonymous write access to your S3 bucket.

To grant permission to everyone, also referred to as anonymous access, you set the wildcard (`"*"`) as the `Principal` value. For example, if you configure your bucket as a website, you typically want all the objects in the bucket to be publicly accessible.

```
"Principal":"*"
```

```
"Principal":{"AWS":"*"}
```

Using `"Principal": "*"` with an `Allow` effect in a resource-based policy allows anyone, even if they’re not signed in to AWS, to access your resource. 

Using `"Principal" : { "AWS" : "*" }` with an `Allow` effect in a resource-based policy allows any root user, IAM user, assumed-role session, or federated user in any account in the same partition to access your resource.

For anonymous users, these two methods are equivalent. For more information, see [All principals](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_principal.html#principal-anonymous) in the *IAM User Guide*.

You cannot use a wildcard to match part of a principal name or ARN.

**Important**  
In AWS access control policies, the principals `"*"` and `{"AWS": "*"}` behave identically.

#### Restrict resource permissions
<a name="s3-restrict-permissions"></a>

You can also use a resource-based policy to restrict access to resources that would otherwise be available to IAM principals. Use a `Deny` statement to prevent access.

The following example blocks access if a secure transport protocol isn’t used:

------
#### [ JSON ]

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyBucketAccessIfSTPNotUsed",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::amzn-s3-demo-bucket",
      "Condition": {
        "Bool": {
          "aws:SecureTransport": "false"
        }
      }
    }
  ]
}
```

------

As a best practice for this policy, use `"Principal": "*"` so that the restriction applies to everyone, rather than attempting to deny access only to specific accounts or principals.
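Conceptually, the `Bool` condition compares the `aws:SecureTransport` request context key (set by S3 for each request) against the policy value, and an explicit `Deny` overrides any `Allow`. The following is a toy illustration of that one check, not the real AWS evaluator:

```python
def denies_insecure_request(request_context: dict) -> bool:
    """Simplified mirror of the Deny statement above: deny when the
    aws:SecureTransport context key is "false" (request not sent over TLS)."""
    return request_context.get("aws:SecureTransport") == "false"
```

With this check, a plain-HTTP request (`{"aws:SecureTransport": "false"}`) is denied, while an HTTPS request falls through to the other statements in the policy.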

#### Require access through CloudFront URLs
<a name="require-cloudfront-urls"></a>

You can require that your users access your Amazon S3 content only by using CloudFront URLs instead of Amazon S3 URLs. To do this, create a CloudFront origin access control (OAC). Then, change the permissions on your S3 data. In your bucket policy, you can set CloudFront as the Principal as follows:

```
"Principal":{"Service":"cloudfront.amazonaws.com"}
```

Use a `Condition` element in the policy to allow CloudFront to access the bucket only when the request is on behalf of the CloudFront distribution that contains the S3 origin.

```
"Condition": {
    "StringEquals": {
        "AWS:SourceArn": "arn:aws:cloudfront::111122223333:distribution/CloudFront-distribution-ID"
    }
}
```

For more information about requiring S3 access through CloudFront URLs, see [Restricting access to an Amazon Simple Storage Service origin](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html) in the *Amazon CloudFront Developer Guide*. For more information about the security and privacy benefits of using Amazon CloudFront, see [Configuring secure access and restricting access to content](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/SecurityAndPrivateContent.html). 

### Resource-based policy examples for Amazon S3
<a name="security_iam_service-with-iam-resource-based-policies-examples"></a>
+ To view policy examples for Amazon S3 buckets, see [Bucket policies for Amazon S3](bucket-policies.md).
+ To view policy examples for access points, see [Configuring IAM policies for using access points](access-points-policies.md).

## Policy actions for Amazon S3
<a name="security_iam_service-with-iam-id-based-policies-actions"></a>

**Supports policy actions:** Yes

Administrators can use AWS JSON policies to specify who has access to what. That is, which **principal** can perform **actions** on what **resources**, and under what **conditions**.

The `Action` element of a JSON policy describes the actions that you can use to allow or deny access in a policy. Include actions in a policy to grant permissions to perform the associated operation.

The following are the different types of mapping relationships between S3 API operations and the required policy actions:
+ One-to-one mapping with the same name. For example, to use the `PutBucketPolicy` API operation, the `s3:PutBucketPolicy` policy action is required.
+ One-to-one mapping with different names. For example, to use the `ListObjectsV2` API operation, the `s3:ListBucket` policy action is required.
+ One-to-many mapping. For example, to use the `HeadObject` API operation, the `s3:GetObject` permission is required. In addition, when you use S3 Object Lock and want to get an object's legal hold status or retention settings, the corresponding `s3:GetObjectLegalHold` or `s3:GetObjectRetention` policy action is also required before you can use the `HeadObject` API operation.
+ Many-to-one mapping. For example, to use the `ListObjectsV2` or `HeadBucket` API operations, the `s3:ListBucket` policy action is required.
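The mappings above can be captured in a small lookup table. This sketch is a hypothetical helper (not part of any AWS SDK) that returns the baseline policy actions for the example operations; `HeadObject` may additionally require `s3:GetObjectLegalHold` or `s3:GetObjectRetention` when Object Lock metadata is involved:

```python
# Baseline policy actions required by a few example S3 API operations.
REQUIRED_ACTIONS = {
    "PutBucketPolicy": ["s3:PutBucketPolicy"],  # one-to-one, same name
    "ListObjectsV2": ["s3:ListBucket"],         # one-to-one, different name
    "HeadBucket": ["s3:ListBucket"],            # many-to-one
    "HeadObject": ["s3:GetObject"],             # one-to-many (baseline action only)
}

def required_actions(operation: str) -> list[str]:
    """Look up the baseline policy actions for an S3 API operation."""
    return REQUIRED_ACTIONS[operation]
```

Note how `ListObjectsV2` and `HeadBucket` both resolve to `s3:ListBucket`, the many-to-one case.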



To see a list of Amazon S3 actions for use in policies, see [Actions defined by Amazon S3](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazons3.html#amazons3-actions-as-permissions) in the *Service Authorization Reference*. For a complete list of Amazon S3 API operations, see [Amazon S3 API Actions](https://docs.aws.amazon.com//AmazonS3/latest/API/API_Operations.html) in the *Amazon Simple Storage Service API Reference*.

For more information about the permissions to S3 API operations by S3 resource types, see [Required permissions for Amazon S3 API operations](using-with-s3-policy-actions.md).

Policy actions in Amazon S3 use the following prefix before the action:

```
s3
```

To specify multiple actions in a single statement, separate them with commas.

```
"Action": [
    "s3:action1",
    "s3:action2"
]
```





### Bucket operations
<a name="using-with-s3-actions-related-to-buckets"></a>

Bucket operations are S3 API operations that operate on the bucket resource type, such as `CreateBucket`, `ListObjectsV2`, and `PutBucketPolicy`. S3 policy actions for bucket operations require the `Resource` element in bucket policies or IAM identity-based policies to be the S3 bucket type Amazon Resource Name (ARN) identifier in the following example format.

```
"Resource": "arn:aws:s3:::amzn-s3-demo-bucket"
```

The following bucket policy grants the user `Akua` in account `111122223333` the `s3:ListBucket` permission to perform the [ListObjectsV2](https://docs.aws.amazon.com//AmazonS3/latest/API/API_ListObjectsV2.html) API operation and list objects in an S3 bucket.

------
#### [ JSON ]

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAkuaToListObjectsInBucket",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:user/Akua"
      },
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": "arn:aws:s3:::amzn-s3-demo-bucket"
    }
  ]
}
```

------
<a name="bucket-operations-ap"></a>
**Bucket operations in policies for access points for general purpose buckets**  
Permissions granted in an access point policy for general purpose buckets are effective only if the underlying bucket allows the same permissions. When you use S3 Access Points, you must delegate access control from the bucket to the access point or add the same permissions in the access point policies to the underlying bucket's policy. For more information, see [Configuring IAM policies for using access points](access-points-policies.md). In access point policies, S3 policy actions for bucket operations require you to use the access point ARN for the `Resource` element in the following format.

```
"Resource": "arn:aws:s3:us-west-2:123456789012:accesspoint/example-access-point"
```

The following access point policy grants the user `Akua` in account `111122223333` the `s3:ListBucket` permission to perform the [ListObjectsV2](https://docs.aws.amazon.com//AmazonS3/latest/API/API_ListObjectsV2.html) API operation through the S3 access point named `example-access-point`. This permission allows `Akua` to list the objects in the bucket that's associated with `example-access-point`.

------
#### [ JSON ]

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAkuaToListObjectsInBucketThroughAccessPoint",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:user/Akua"
      },
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": "arn:aws:s3:us-west-2:111122223333:accesspoint/example-access-point"
    }
  ]
}
```

------

**Note**  
Not all bucket operations are supported by access points for general purpose buckets. For more information, see [Access points compatibility with S3 operations](access-points-service-api-support.md#access-points-operations-support).
<a name="bucket-operations-ap-directory-buckets"></a>
**Bucket operations in policies for access points for directory buckets**  
Permissions granted in an access point policy for directory buckets are effective only if the underlying bucket allows the same permissions. When you use S3 Access Points, you must delegate access control from the bucket to the access point or add the same permissions in the access point policies to the underlying bucket's policy. For more information, see [Configuring IAM policies for using access points for directory buckets](access-points-directory-buckets-policies.md). In access point policies, S3 policy actions for bucket operations require you to use the access point ARN for the `Resource` element in the following format.

```
"Resource": "arn:aws:s3express:us-west-2:123456789012:accesspoint/example-access-point--usw2-az1--xa-s3"
```

The following access point policy grants the user `Akua` in account `111122223333` the `s3:ListBucket` permission to perform the [ListObjectsV2](https://docs.aws.amazon.com//AmazonS3/latest/API/API_ListObjectsV2.html) API operation through the access point named `example-access-point--usw2-az1--xa-s3`. This permission allows `Akua` to list the objects in the bucket that's associated with `example-access-point--usw2-az1--xa-s3`.

------
#### [ JSON ]

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAkuaToListObjectsInTheBucketThroughAccessPoint",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:user/Akua"
      },
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": "arn:aws:s3express:us-west-2:111122223333:accesspoint/example-access-point--usw2-az1--xa-s3"
    }
  ]
}
```

------

**Note**  
Not all bucket operations are supported by access points for directory buckets. For more information, see [Object operations for access points for directory buckets](access-points-directory-buckets-service-api-support.md).

### Object operations
<a name="using-with-s3-actions-related-to-objects"></a>

Object operations are S3 API operations that act upon the object resource type, such as `GetObject`, `PutObject`, and `DeleteObject`. S3 policy actions for object operations require the `Resource` element in policies to be the S3 object ARN in the following example formats.

```
"Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*"
```

```
"Resource": "arn:aws:s3:::amzn-s3-demo-bucket/prefix/*"
```

**Note**  
The object ARN must contain a forward slash after the bucket name, as seen in the previous examples.
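Because the object ARN must contain that slash after the bucket name, it can help to construct these strings programmatically. A minimal sketch; the function name is a hypothetical helper, and bucket and prefix values are placeholders:

```python
def object_arn(bucket: str, prefix: str = "") -> str:
    """Build an S3 object ARN; the slash after the bucket name is always present."""
    suffix = f"{prefix.strip('/')}/*" if prefix else "*"
    return f"arn:aws:s3:::{bucket}/{suffix}"
```

For example, `object_arn("amzn-s3-demo-bucket", "prefix")` yields the prefixed form shown above, and omitting the prefix yields the bucket-wide `/*` form.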

The following bucket policy grants the user `Akua` in account `111122223333` the `s3:PutObject` permission. This permission allows `Akua` to use the [PutObject](https://docs.aws.amazon.com//AmazonS3/latest/API/API_PutObject.html) API operation to upload objects to the S3 bucket named `amzn-s3-demo-bucket`.

------
#### [ JSON ]

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAkuaToUploadObjects",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:user/Akua"
      },
      "Action": [
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*"
    }
  ]
}
```

------
<a name="object-operations-ap"></a>
**Object operations in access point policies**  
When you use S3 Access Points to control access to object operations, you can use access point policies. When you use access point policies, S3 policy actions for object operations require you to use the access point ARN for the `Resource` element in the following format: `arn:aws:s3:region:account-id:accesspoint/access-point-name/object/resource`. For object operations that use access points, you must include the `/object/` value after the whole access point ARN in the `Resource` element. Here are some examples.

```
"Resource": "arn:aws:s3:us-west-2:123456789012:accesspoint/example-access-point/object/*"
```

```
"Resource": "arn:aws:s3:us-west-2:123456789012:accesspoint/example-access-point/object/prefix/*"
```
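The access point object ARN format can also be composed programmatically. This is a minimal sketch, assuming a hypothetical helper name; the Region, account ID, and access point name are the placeholder values from the examples above.

```python
# Hypothetical sketch: build the "Resource" ARN for object operations in an
# access point policy. The "/object/" segment is required after the access
# point ARN.

def access_point_object_arn(region: str, account: str, ap_name: str,
                            key_pattern: str = "*") -> str:
    return f"arn:aws:s3:{region}:{account}:accesspoint/{ap_name}/object/{key_pattern}"

print(access_point_object_arn("us-west-2", "123456789012", "example-access-point"))
# arn:aws:s3:us-west-2:123456789012:accesspoint/example-access-point/object/*
```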

The following access point policy grants the user `Akua` in account `111122223333` the `s3:GetObject` permission. This permission allows `Akua` to perform the [GetObject](https://docs.aws.amazon.com//AmazonS3/latest/API/API_GetObject.html) API operation through the access point named `example-access-point` on all objects in the bucket that's associated with the access point. 

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Allow Akua to get objects through access point",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:user/Akua"
            },
            "Action": [
                "s3:GetObject"
            ],
            "Resource": "arn:aws:s3:us-east-1:111122223333:accesspoint/example-access-point/object/*"
        }
    ]
}
```

------

**Note**  
Not all object operations are supported by access points. For more information, see [Access points compatibility with S3 operations](access-points-service-api-support.md#access-points-operations-support).
<a name="object-operations-ap-directory-buckets"></a>
**Object operations in policies for access points for directory buckets**  
When you use access points for directory buckets to control access to object operations, you can use access point policies. When you use access point policies, S3 policy actions for object operations require you to use the access point ARN for the `Resource` element in the following format: `arn:aws:s3express:region:account-id:accesspoint/access-point-name/object/resource`. For object operations that use access points, you must include the `/object/` value after the whole access point ARN in the `Resource` element. Here are some examples.

```
"Resource": "arn:aws:s3express:us-west-2:123456789012:accesspoint/example-access-point--usw2-az1--xa-s3/object/*"
```

```
"Resource": "arn:aws:s3express:us-west-2:123456789012:accesspoint/example-access-point--usw2-az1--xa-s3/object/prefix/*"
```

The following access point policy grants the user `Akua` in account `111122223333` the `s3:GetObject` permission. This permission allows `Akua` to perform the [GetObject](https://docs.aws.amazon.com//AmazonS3/latest/API/API_GetObject.html) API operation through the access point named `example-access-point--usw2-az1--xa-s3` on all objects in the bucket that's associated with the access point. 

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Allow Akua to get objects through access point",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:user/Akua"
            },
            "Action": [
                "s3express:CreateSession",
                "s3:GetObject"
            ],
            "Resource": "arn:aws:s3express:us-west-2:111122223333:accesspoint/example-access-point--usw2-az1--xa-s3/object/*"
        }
    ]
}
```

**Note**  
Not all object operations are supported by access points for directory buckets. For more information, see [Object operations for access points for directory buckets](access-points-directory-buckets-service-api-support.md).

### Access point for general purpose bucket operations
<a name="using-with-s3-actions-related-to-accesspoint"></a>

Access point operations are S3 API operations that operate on the `accesspoint` resource type. For example, `CreateAccessPoint`, `DeleteAccessPoint`, and `GetAccessPointPolicy`. S3 policy actions for access point operations can only be used in IAM identity-based policies, not in bucket policies or access point policies. Access point operations require the `Resource` element to be the access point ARN in the following example format. 

```
"Resource": "arn:aws:s3:us-west-2:123456789012:accesspoint/example-access-point"
```

The following IAM identity-based policy grants the `s3:GetAccessPointPolicy` permission to perform the [GetAccessPointPolicy](https://docs.aws.amazon.com//AmazonS3/latest/API/API_control_GetAccessPointPolicy.html) API operation on the S3 access point named `example-access-point`.

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Sid": "GrantPermissionToRetrieveTheAccessPointPolicyOfAccessPointExampleAccessPoint",
            "Effect": "Allow",
            "Action": [
            "s3:GetAccessPointPolicy"
            ],
            "Resource": "arn:aws:s3:*:123456789012:accesspoint/example-access-point"
        }
    ]
}
```

------

When you use access points to control access to bucket operations, see [Bucket operations in policies for access points for general purpose buckets](#bucket-operations-ap). To control access to object operations, see [Object operations in access point policies](#object-operations-ap). For more information about how to configure access point policies, see [Configuring IAM policies for using access points](access-points-policies.md).

### Access point for directory buckets operations
<a name="using-with-s3-actions-related-to-accesspoint-directory-buckets"></a>

Access point operations for directory buckets are S3 API operations that operate on the `accesspoint` resource type. For example, `CreateAccessPoint`, `DeleteAccessPoint`, and `GetAccessPointPolicy`. S3 policy actions for access point operations can only be used in IAM identity-based policies, not in bucket policies or access point policies. Access point operations for directory buckets require the `Resource` element to be the access point ARN in the following example format. 

```
"Resource": "arn:aws:s3:us-west-2:123456789012:accesspoint/example-access-point--usw2-az1--xa-s3"
```

The following IAM identity-based policy grants the `s3express:GetAccessPointPolicy` permission to perform the [GetAccessPointPolicy](https://docs.aws.amazon.com//AmazonS3/latest/API/API_control_GetAccessPointPolicy.html) API operation on the access point named `example-access-point--usw2-az1--xa-s3`.

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "GrantPermissionToRetrieveTheAccessPointPolicyOfAccessPointExampleAccessPointUsw2Az1XaS3",
            "Effect": "Allow",
            "Action": [
                "s3express:CreateSession",
                "s3express:GetAccessPointPolicy"
            ],
            "Resource": "arn:aws:s3express:*:111122223333:accesspoint/example-access-point--usw2-az1--xa-s3"
        }
    ]
}
```

------

The following IAM identity-based policy grants the `s3express:CreateAccessPoint` permission to create an access point for directory buckets.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "GrantCreateAccessPoint",
            "Effect": "Allow",
            "Action": [
                "s3express:CreateSession",
                "s3express:CreateAccessPoint"
            ],
            "Resource": "*"
        }
    ]
}
```

The following IAM identity-based policy grants the `s3express:PutAccessPointScope` permission to configure the access point scope for access points for directory buckets.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "GrantPutAccessPointScope",
            "Effect": "Allow",
            "Action": [
                "s3express:CreateSession",
                "s3express:CreateAccessPoint",
                "s3express:PutAccessPointScope"
            ],
            "Resource": "*"
        }
    ]
}
```

When you use access points for directory buckets to control access to bucket operations, see [Bucket operations in policies for access points for directory buckets](#bucket-operations-ap-directory-buckets); to control access to object operations, see [Object operations in policies for access points for directory buckets](#object-operations-ap-directory-buckets). For more information about how to configure access points for directory buckets policies, see [Configuring IAM policies for using access points for directory buckets](access-points-directory-buckets-policies.md).

### Object Lambda Access Point operations
<a name="using-with-s3-actions-related-to-olap"></a>

With Amazon S3 Object Lambda, you can add your own code to Amazon S3 `GET`, `LIST`, and `HEAD` requests to modify and process data as it is returned to an application. You can make requests through an Object Lambda Access Point, which works the same as making requests through other access points. For more information, see [Transforming objects with S3 Object Lambda](transforming-objects.md).

For more information about how to configure policies for Object Lambda Access Point operations, see [Configuring IAM policies for Object Lambda Access Points](olap-policies.md).

### Multi-Region Access Point operations
<a name="using-with-s3-actions-related-to-mrap"></a>

A Multi-Region Access Point provides a global endpoint that applications can use to fulfill requests from S3 buckets that are located in multiple AWS Regions. You can use a Multi-Region Access Point to build multi-Region applications with the same architecture that's used in a single Region, and then run those applications anywhere in the world. For more information, see [Managing multi-Region traffic with Multi-Region Access Points](MultiRegionAccessPoints.md).

For more information about how to configure policies for Multi-Region Access Point operations, see [Multi-Region Access Point policy examples](MultiRegionAccessPointPermissions.md#MultiRegionAccessPointPolicyExamples).

### Batch job operations
<a name="using-with-s3-actions-related-to-batchops"></a>

S3 Batch Operations job operations are S3 API operations that operate on the `job` resource type. For example, `DescribeJob` and `CreateJob`. S3 policy actions for job operations can only be used in IAM identity-based policies, not in bucket policies. Also, job operations require the `Resource` element in IAM identity-based policies to be the `job` ARN in the following example format. 

```
"Resource": "arn:aws:s3:*:123456789012:job/*"
```

The following IAM identity-based policy grants the `s3:DescribeJob` permission to perform the [DescribeJob](https://docs.aws.amazon.com//AmazonS3/latest/API/API_DescribeJob.html) API operation on the S3 Batch Operations job named `example-job`.

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Sid": "AllowDescribingBatchOperationJob",
            "Effect": "Allow",
            "Action": [
            "s3:DescribeJob"
            ],
            "Resource": "arn:aws:s3:*:111122223333:job/example-job"
        }
    ]
}
```

------

### S3 Storage Lens configuration operations
<a name="using-with-s3-actions-related-to-lens"></a>

For more information about how to configure S3 Storage Lens configuration operations, see [Setting Amazon S3 Storage Lens permissions](storage_lens_iam_permissions.md).

### Account operations
<a name="using-with-s3-actions-related-to-accounts"></a>

Account operations are S3 API operations that operate at the account level. For example, `GetPublicAccessBlock` for an AWS account. Account isn't a resource type defined by Amazon S3. S3 policy actions for account operations can only be used in IAM identity-based policies, not in bucket policies. Also, account operations require the `Resource` element in IAM identity-based policies to be `"*"`. 

The following IAM identity-based policy grants the `s3:GetAccountPublicAccessBlock` permission to perform the account-level [GetPublicAccessBlock](https://docs.aws.amazon.com//AmazonS3/latest/API/API_control_GetPublicAccessBlock.html) API operation and retrieve the account-level block public access settings.

------
#### [ JSON ]

****  

```
{
   "Version":"2012-10-17",		 	 	 
   "Statement":[
      {
         "Sid":"AllowRetrievingTheAccountLevelPublicAccessBlockSettings",
         "Effect":"Allow",
         "Action":[
            "s3:GetAccountPublicAccessBlock" 
         ],
         "Resource":[
            "*"
         ]
       }
    ]
}
```

------

### Policy examples for Amazon S3
<a name="security_iam_service-with-policies-examples-actions"></a>
+ To view examples of Amazon S3 identity-based policies, see [Identity-based policies for Amazon S3](security_iam_id-based-policy-examples.md).
+ To view examples of Amazon S3 resource-based policies, see [Bucket policies for Amazon S3](bucket-policies.md) and [Configuring IAM policies for using access points](access-points-policies.md).

## Policy resources for Amazon S3
<a name="security_iam_service-with-iam-id-based-policies-resources"></a>

**Supports policy resources:** Yes

Administrators can use AWS JSON policies to specify who has access to what. That is, which **principal** can perform **actions** on what **resources**, and under what **conditions**.

The `Resource` JSON policy element specifies the object or objects to which the action applies. As a best practice, specify a resource using its [Amazon Resource Name (ARN)](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference-arns.html). For actions that don't support resource-level permissions, use a wildcard (`*`) to indicate that the statement applies to all resources.

```
"Resource": "*"
```

Some Amazon S3 policy statements apply to multiple resources. For example, a statement that allows `s3:GetObject` on both `example-resource-1` and `example-resource-2` requires the principal to have permissions for both resources. To specify multiple resources in a single statement, separate the ARNs with commas, as shown in the following example. 

```
"Resource": [
      "example-resource-1",
      "example-resource-2"
```
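Because a policy is an ordinary JSON document, a quick way to check that a multi-resource statement is well formed is to build it programmatically and round-trip it through a JSON parser. This is an illustrative sketch; the ARNs are placeholders.

```python
import json

# Build a statement whose "Resource" element lists multiple ARNs (placeholders).
statement = {
    "Effect": "Allow",
    "Action": "s3:GetObject",
    "Resource": [
        "arn:aws:s3:::amzn-s3-demo-bucket1/*",
        "arn:aws:s3:::amzn-s3-demo-bucket2/*",
    ],
}

# Serialize the full policy document and parse it back to confirm valid JSON.
doc = json.dumps({"Version": "2012-10-17", "Statement": [statement]}, indent=4)
parsed = json.loads(doc)
print(len(parsed["Statement"][0]["Resource"]))  # 2
```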

Resources in Amazon S3 are buckets, objects, access points, or jobs. In a policy, use the Amazon Resource Name (ARN) of the bucket, object, access point, or job to identify the resource.

To see a complete list of Amazon S3 resource types and their ARNs, see [Resources defined by Amazon S3](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazons3.html#amazons3-resources-for-iam-policies) in the *Service Authorization Reference*. To learn with which actions you can specify the ARN of each resource, see [Actions defined by Amazon S3](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazons3.html#amazons3-actions-as-permissions).

For more information about the permissions to S3 API operations by S3 resource types, see [Required permissions for Amazon S3 API operations](using-with-s3-policy-actions.md).

### Wildcard characters in resource ARNs
<a name="s3-arn-wildcards"></a>

You can use wildcard characters as part of the resource ARN. You can use the wildcard characters (`*` and `?`) within any ARN segment (the parts separated by colons). An asterisk (`*`) represents any combination of zero or more characters, and a question mark (`?`) represents any single character. You can use multiple `*` or `?` characters in each segment. However, a wildcard character can't span segments. 
+ The following ARN uses the `*` wildcard character in the `relative-ID` part of the ARN to identify all objects in the `amzn-s3-demo-bucket` bucket.

  ```
  arn:aws:s3:::amzn-s3-demo-bucket/*
  ```
+ The following ARN uses `*` to indicate all S3 buckets and objects.

  ```
  arn:aws:s3:::*
  ```
+ The following ARN uses both of the wildcard characters, `*` and `?`, in the `relative-ID` part. This ARN identifies all objects in buckets such as `amzn-s3-demo-example1bucket`, `amzn-s3-demo-example2bucket`, `amzn-s3-demo-example3bucket`, and so on.

  ```
  arn:aws:s3:::amzn-s3-demo-example?bucket/*
  ```
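Within a single ARN segment, `*` and `?` behave much like ordinary glob wildcards. The following sketch uses Python's `fnmatch` to illustrate the matching semantics against the placeholder bucket names above; note that `fnmatch` is only an approximation, because an IAM wildcard can't span the colon-separated ARN segments.

```python
from fnmatch import fnmatchcase

pattern = "arn:aws:s3:::amzn-s3-demo-example?bucket/*"

# "?" matches exactly one character, "*" matches zero or more characters.
print(fnmatchcase("arn:aws:s3:::amzn-s3-demo-example1bucket/photos/cat.jpg", pattern))  # True
print(fnmatchcase("arn:aws:s3:::amzn-s3-demo-example2bucket/readme.txt", pattern))      # True

# Two characters where "?" expects exactly one, so no match.
print(fnmatchcase("arn:aws:s3:::amzn-s3-demo-example12bucket/readme.txt", pattern))     # False
```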

### Policy variables for resource ARNs
<a name="s3-policy-variables"></a>

You can use policy variables in Amazon S3 ARNs. At policy-evaluation time, these predefined variables are replaced by their corresponding values. Suppose that you organize your bucket as a collection of folders, with one folder for each of your users. The folder name is the same as the username. To grant users permission to their folders, you can specify a policy variable in the resource ARN:

```
arn:aws:s3:::bucket_name/developers/${aws:username}/
```

At runtime, when the policy is evaluated, the variable `${aws:username}` in the resource ARN is substituted with the username of the person who is making the request. 
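The substitution step can be modeled as simple string replacement. This is an illustrative sketch only, not the actual IAM evaluation engine; the function name and the `Akua` username are hypothetical.

```python
# Illustrative model of policy-variable substitution: at evaluation time,
# "${aws:username}" in the resource ARN is replaced with the requester's
# username from the request context.

def resolve_resource(arn_template: str, request_context: dict) -> str:
    resolved = arn_template
    for key, value in request_context.items():
        resolved = resolved.replace("${" + key + "}", value)
    return resolved

template = "arn:aws:s3:::bucket_name/developers/${aws:username}/"
print(resolve_resource(template, {"aws:username": "Akua"}))
# arn:aws:s3:::bucket_name/developers/Akua/
```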

### Policy examples for Amazon S3
<a name="security_iam_service-with-policies-examples-resources"></a>
+ To view examples of Amazon S3 identity-based policies, see [Identity-based policies for Amazon S3](security_iam_id-based-policy-examples.md).
+ To view examples of Amazon S3 resource-based policies, see [Bucket policies for Amazon S3](bucket-policies.md) and [Configuring IAM policies for using access points](access-points-policies.md).

## Policy condition keys for Amazon S3
<a name="security_iam_service-with-iam-id-based-policies-conditionkeys"></a>

**Supports service-specific policy condition keys:** Yes

Administrators can use AWS JSON policies to specify who has access to what. That is, which **principal** can perform **actions** on what **resources**, and under what **conditions**.

The `Condition` element specifies when statements execute based on defined criteria. You can create conditional expressions that use [condition operators](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_condition_operators.html), such as equals or less than, to match the condition in the policy with values in the request. To see all AWS global condition keys, see [AWS global condition context keys](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html) in the *IAM User Guide*.

Each Amazon S3 condition key maps to the request header of the same name that's allowed by the API on which the condition can be set. Amazon S3 specific condition keys dictate the behavior of these same-name request headers. For example, the condition key `s3:VersionId`, which you can use to grant conditional permission for the `s3:GetObjectVersion` permission, defines the behavior of the `versionId` query parameter that you set in a `GET Object` request.

To see a list of Amazon S3 condition keys, see [Condition keys for Amazon S3](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazons3.html#amazons3-policy-keys) in the *Service Authorization Reference*. To learn with which actions and resources you can use a condition key, see [Actions defined by Amazon S3](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazons3.html#amazons3-actions-as-permissions).

### Example: Restricting object uploads to objects with a specific storage class
<a name="example-storage-class-condition-key"></a>

Suppose that Account A, represented by account ID `123456789012`, owns a bucket. The Account A administrator wants to restrict *`Dave`*, a user in Account A, so that *`Dave`* can upload objects to the bucket only if the object is stored in the `STANDARD_IA` storage class. To restrict object uploads to a specific storage class, the Account A administrator can use the `s3:x-amz-storage-class` condition key, as shown in the following example bucket policy. 

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "statement1",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::123456789012:user/Dave"
            },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket1/*",
            "Condition": {
                "StringEquals": {
                    "s3:x-amz-storage-class": [
                        "STANDARD_IA"
                    ]
                }
            }
        }
    ]
}
```

------

In the example, the `Condition` block specifies the `StringEquals` condition that is applied to the specified key-value pair, `"s3:x-amz-storage-class":["STANDARD_IA"]`. There is a set of predefined keys that you can use in expressing a condition. The example uses the `s3:x-amz-storage-class` condition key. This condition requires the user to include the `x-amz-storage-class` header with the value `STANDARD_IA` in every `PutObject` request.
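A `StringEquals` check can be modeled as requiring every condition key to match one of its allowed values in the request. This is a simplified sketch, not the real IAM evaluation engine; the function name is hypothetical.

```python
# Simplified model of a StringEquals condition check: each condition key must
# be present in the request with one of the allowed values.

def string_equals_matches(condition: dict, request_values: dict) -> bool:
    return all(
        request_values.get(key) in allowed
        for key, allowed in condition.items()
    )

condition = {"s3:x-amz-storage-class": ["STANDARD_IA"]}

print(string_equals_matches(condition, {"s3:x-amz-storage-class": "STANDARD_IA"}))  # True
print(string_equals_matches(condition, {"s3:x-amz-storage-class": "STANDARD"}))     # False
```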

### Policy examples for Amazon S3
<a name="security_iam_service-with-policies-examples-conditions"></a>
+ To view examples of Amazon S3 identity-based policies, see [Identity-based policies for Amazon S3](security_iam_id-based-policy-examples.md).
+ To view examples of Amazon S3 resource-based policies, see [Bucket policies for Amazon S3](bucket-policies.md) and [Configuring IAM policies for using access points](access-points-policies.md).

## ACLs in Amazon S3
<a name="security_iam_service-with-iam-acls"></a>

**Supports ACLs:** Yes

In Amazon S3, access control lists (ACLs) control which AWS accounts have permissions to access a resource. ACLs are similar to resource-based policies, although they do not use the JSON policy document format.

**Important**  
A majority of modern use cases in Amazon S3 no longer require the use of ACLs. 

For information about using ACLs to control access in Amazon S3, see [Managing access with ACLs](acls.md).

## ABAC with Amazon S3
<a name="security_iam_service-with-iam-tags"></a>

**Supports ABAC (tags in policies):** Partial

Attribute-based access control (ABAC) is an authorization strategy that defines permissions based on attributes called tags. You can attach tags to IAM entities and AWS resources, then design ABAC policies to allow operations when the principal's tag matches the tag on the resource.

To control access based on tags, you provide tag information in the [condition element](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_condition.html) of a policy using the `aws:ResourceTag/key-name`, `aws:RequestTag/key-name`, or `aws:TagKeys` condition keys.

If a service supports all three condition keys for every resource type, then the value is **Yes** for the service. If a service supports all three condition keys for only some resource types, then the value is **Partial**.

For more information about ABAC, see [Define permissions with ABAC authorization](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction_attribute-based-access-control.html) in the *IAM User Guide*. To view a tutorial with steps for setting up ABAC, see [Use attribute-based access control (ABAC)](https://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_attribute-based-access-control.html) in the *IAM User Guide*.

For information about resources that support ABAC in Amazon S3, see [Using tags for attribute-based access control (ABAC)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/tagging.html#using-tags-for-abac).

To view example identity-based policies for limiting access to S3 Batch Operations jobs based on tags, see [Controlling permissions for Batch Operations using job tags](batch-ops-job-tags-examples.md).

### ABAC and object tags
<a name="s3-object-tags"></a>

In ABAC policies, objects use `s3:` tags instead of `aws:` tags. To control access to objects based on object tags, you provide tag information in the [Condition element](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_condition.html) of a policy using the following tags:
+ `s3:ExistingObjectTag/tag-key`
+ `s3:RequestObjectTagKeys`
+ `s3:RequestObjectTag/tag-key`

For information about using object tags to control access, including example permission policies, see [Tagging and access control policies](tagging-and-policies.md).

## Using temporary credentials with Amazon S3
<a name="security_iam_service-with-iam-roles-tempcreds"></a>

**Supports temporary credentials:** Yes

Temporary credentials provide short-term access to AWS resources and are automatically created when you use federation or switch roles. AWS recommends that you dynamically generate temporary credentials instead of using long-term access keys. For more information, see [Temporary security credentials in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html) and [AWS services that work with IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_aws-services-that-work-with-iam.html) in the *IAM User Guide*.

## Forward access sessions for Amazon S3
<a name="security_iam_service-with-iam-principal-permissions"></a>

**Supports forward access sessions (FAS):** Yes

 Forward access sessions (FAS) use the permissions of the principal calling an AWS service, combined with the requesting AWS service to make requests to downstream services. For policy details when making FAS requests, see [Forward access sessions](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_forward_access_sessions.html). 
+ FAS is used by Amazon S3 to make calls to AWS KMS to decrypt an object when SSE-KMS was used to encrypt it. For more information, see [Using server-side encryption with AWS KMS keys (SSE-KMS)](UsingKMSEncryption.md). 
+ S3 Access Grants also uses FAS. After you create an access grant to your S3 data for a particular identity, the grantee requests a temporary credential from S3 Access Grants. S3 Access Grants obtains a temporary credential for the requester from AWS STS and vends the credential to the requester. For more information, see [Request access to Amazon S3 data through S3 Access Grants](access-grants-credentials.md).

## Service roles for Amazon S3
<a name="security_iam_service-with-iam-roles-service"></a>

**Supports service roles:** Yes

 A service role is an [IAM role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html) that a service assumes to perform actions on your behalf. An IAM administrator can create, modify, and delete a service role from within IAM. For more information, see [Create a role to delegate permissions to an AWS service](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-service.html) in the *IAM User Guide*. 

**Warning**  
Changing the permissions for a service role might break Amazon S3 functionality. Edit service roles only when Amazon S3 provides guidance to do so.

## Service-linked roles for Amazon S3
<a name="security_iam_service-with-iam-roles-service-linked"></a>

**Supports service-linked roles:** Partial

 A service-linked role is a type of service role that is linked to an AWS service. The service can assume the role to perform an action on your behalf. Service-linked roles appear in your AWS account and are owned by the service. An IAM administrator can view, but not edit the permissions for service-linked roles. 

Amazon S3 supports service-linked roles for Amazon S3 Storage Lens. For details about creating or managing Amazon S3 service-linked roles, see [Using service-linked roles for Amazon S3 Storage Lens](using-service-linked-roles.md).

**Amazon S3 Service as a Principal**


| Service name in the policy | S3 feature | More information | 
| --- | --- | --- | 
|  `s3.amazonaws.com`  |  S3 Replication  |  [Setting up live replication overview](replication-how-setup.md)  | 
|  `s3.amazonaws.com`  |  S3 event notifications  |  [Amazon S3 Event Notifications](EventNotifications.md)  | 
|  `s3.amazonaws.com`  |  S3 Inventory  |  [Cataloging and analyzing your data with S3 Inventory](storage-inventory.md)  | 
|  `access-grants.s3.amazonaws.com`  |  S3 Access Grants  |  [Register a location](access-grants-location-register.md)  | 
|  `batchoperations.s3.amazonaws.com`  |  S3 Batch Operations  |  [Granting permissions for Batch Operations](batch-ops-iam-role-policies.md)  | 
|  `logging.s3.amazonaws.com`  |  S3 Server Access Logging  |  [Enabling Amazon S3 server access logging](enable-server-access-logging.md)  | 
|  `storage-lens.s3.amazonaws.com`  |  S3 Storage Lens  |  [Viewing Amazon S3 Storage Lens metrics using a data export](storage_lens_view_metrics_export.md)  | 

# How Amazon S3 authorizes a request
<a name="how-s3-evaluates-access-control"></a>

When Amazon S3 receives a request—for example, a bucket or an object operation—it first verifies that the requester has the necessary permissions. Amazon S3 evaluates all the relevant access policies, user policies, and resource-based policies (bucket policy, bucket access control list (ACL), and object ACL) in deciding whether to authorize the request. 

**Note**  
If the Amazon S3 permission check fails to find valid permissions, an Access Denied (403 Forbidden) error is returned. For more information, see [Troubleshoot Access Denied (403 Forbidden) errors in Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/troubleshoot-403-errors.html).

To determine whether the requester has permission to perform the specific operation, Amazon S3 does the following, in order, when it receives a request:

1. Converts all the relevant access policies (user policy, bucket policy, and ACLs) at run time into a set of policies for evaluation.

1. Evaluates the resulting set of policies in the following steps. In each step, Amazon S3 evaluates a subset of policies in a specific context, based on the context authority. 

   1. **User context** – In the user context, the parent account to which the user belongs is the context authority.

      Amazon S3 evaluates a subset of policies owned by the parent account. This subset includes the user policy that the parent attaches to the user. If the parent also owns the resource in the request (bucket or object), Amazon S3 also evaluates the corresponding resource policies (bucket policy, bucket ACL, and object ACL) at the same time. 

      A user must have permission from the parent account to perform the operation.

      This step applies only if the request is made by a user in an AWS account. If the request is made by using the root user credentials of an AWS account, Amazon S3 skips this step.

   1. **Bucket context** – In the bucket context, Amazon S3 evaluates policies owned by the AWS account that owns the bucket. 

      If the request is for a bucket operation, the requester must have permission from the bucket owner. If the request is for an object, Amazon S3 evaluates all the policies owned by the bucket owner to check if the bucket owner has not explicitly denied access to the object. If there is an explicit deny set, Amazon S3 does not authorize the request. 

   1. **Object context** – If the request is for an object, Amazon S3 evaluates the subset of policies owned by the object owner. 
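The evaluation order in these steps can be sketched as a small function. The following Python sketch is an illustrative model only, not the actual Amazon S3 evaluation engine: an explicit deny in any context blocks the request, and otherwise the requester needs a grant from the applicable context authorities.

```python
def authorize(user_context, bucket_context, object_context=None):
    """Illustrative model of the context-based evaluation steps above.

    Each argument is the outcome of one context: "allow", "deny", or
    "none" (no applicable statement). Pass object_context=None for a
    bucket operation. A request made with the root user credentials of
    the requester's account can be modeled as user_context="allow",
    because the user context step is skipped for the root user.
    This is a teaching sketch, not the actual Amazon S3 engine.
    """
    contexts = [user_context, bucket_context]
    if object_context is not None:
        contexts.append(object_context)
    # An explicit deny in any context blocks the request outright.
    if "deny" in contexts:
        return False
    if object_context is None:
        # Bucket operation: the bucket owner must grant the operation.
        return user_context == "allow" and bucket_context == "allow"
    # Object operation: the object owner must grant the operation; the
    # bucket context was only checked for an explicit deny above.
    return user_context == "allow" and object_context == "allow"
```

For example, an object request in which the bucket owner's policies are silent (`bucket_context="none"`) can still be authorized by the object owner, but an explicit deny from the bucket owner always blocks it.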

Following are some example scenarios that illustrate how Amazon S3 authorizes a request.

**Example – Requester is an IAM principal**  
If the requester is an IAM principal, Amazon S3 must determine if the parent AWS account to which the principal belongs has granted the principal the necessary permission to perform the operation. In addition, if the request is for a bucket operation, such as a request to list the bucket content, Amazon S3 must verify that the bucket owner has granted permission for the requester to perform the operation. To perform a specific operation on a resource, an IAM principal needs permission from both the parent AWS account to which it belongs and the AWS account that owns the resource.

 

**Example – Requester is an IAM principal – If the request is for an operation on an object that the bucket owner doesn't own**  
If the request is for an operation on an object that the bucket owner doesn't own, in addition to making sure that the requester has permissions from the object owner, Amazon S3 must also check the bucket policy to ensure that the bucket owner has not set an explicit deny on the object. A bucket owner (who pays the bill) can explicitly deny access to objects in the bucket regardless of who owns them. The bucket owner can also delete any object in the bucket.  
By default, when another AWS account uploads an object to your S3 general purpose bucket, that account (the object writer) owns the object, has access to it, and can grant other users access to it through access control lists (ACLs). You can use Object Ownership to change this default behavior so that ACLs are disabled and you, as the bucket owner, automatically own every object in your general purpose bucket. As a result, access control for your data is based on policies, such as IAM user policies, S3 bucket policies, virtual private cloud (VPC) endpoint policies, and AWS Organizations service control policies (SCPs). For more information, see [Controlling ownership of objects and disabling ACLs for your bucket](about-object-ownership.md).
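As a sketch, the bucket owner enforced setting for Object Ownership can be applied with the `PutBucketOwnershipControls` API. The bucket name below is a hypothetical placeholder, and the call is shown only in outline because it requires AWS credentials to run.

```python
# Request parameters for disabling ACLs on a general purpose bucket by
# applying the BucketOwnerEnforced setting (bucket name is a placeholder).
ownership_controls = {
    "Rules": [{"ObjectOwnership": "BucketOwnerEnforced"}]
}

# With credentials configured, the boto3 call would be (not executed here):
# import boto3
# s3 = boto3.client("s3")
# s3.put_bucket_ownership_controls(
#     Bucket="amzn-s3-demo-bucket",
#     OwnershipControls=ownership_controls,
# )
```

After this setting is applied, access control for objects in the bucket is based entirely on policies, as described above.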

For more information about how Amazon S3 evaluates access policies to authorize or deny requests for bucket operations and object operations, see the following topics:

**Topics**
+ [

# How Amazon S3 authorizes a request for a bucket operation
](access-control-auth-workflow-bucket-operation.md)
+ [

# How Amazon S3 authorizes a request for an object operation
](access-control-auth-workflow-object-operation.md)

# How Amazon S3 authorizes a request for a bucket operation
<a name="access-control-auth-workflow-bucket-operation"></a>

When Amazon S3 receives a request for a bucket operation, Amazon S3 converts all the relevant permissions into a set of policies to evaluate at run time. Relevant permissions include resource-based permissions (for example, bucket policies and bucket access control lists) and user policies if the request is from an IAM principal. Amazon S3 then evaluates the resulting set of policies in a series of steps according to a specific context—user context or bucket context: 

1. **User context** – If the requester is an IAM principal, the principal must have permission from the parent AWS account to which it belongs. In this step, Amazon S3 evaluates a subset of policies owned by the parent account (also referred to as the context authority). This subset of policies includes the user policy that the parent account attaches to the principal. If the parent also owns the resource in the request (in this case, the bucket), Amazon S3 also evaluates the corresponding resource policies (bucket policy and bucket ACL) at the same time. Whenever a request for a bucket operation is made, the server access logs record the canonical ID of the requester. For more information, see [Logging requests with server access logging](ServerLogs.md).

1. **Bucket context** – The requester must have permissions from the bucket owner to perform a specific bucket operation. In this step, Amazon S3 evaluates a subset of policies owned by the AWS account that owns the bucket. 

   The bucket owner can grant permission by using a bucket policy or bucket ACL. If the AWS account that owns the bucket is also the parent account of an IAM principal, then it can configure bucket permissions in a user policy. 
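For illustration, a bucket policy in which the bucket owner grants an IAM principal permission to perform a bucket operation might look like the following sketch. The account ID, user name, and bucket name are hypothetical placeholders.

```python
import json

# Illustrative bucket policy granting an IAM user permission to list
# the bucket. Because ListBucket is a bucket operation, the Resource
# element is the bucket ARN (placeholders throughout).
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowListBucket",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:user/ExampleUser"},
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket",
        }
    ],
}
print(json.dumps(bucket_policy, indent=2))
```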

The following is a graphical illustration of the context-based evaluation for a bucket operation.

![\[Illustration that shows the context-based evaluation for bucket operation.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/AccessControlAuthorizationFlowBucketResource.png)


The following examples illustrate the evaluation logic. 

## Example 1: Bucket operation requested by bucket owner
<a name="example1-policy-eval-logic"></a>

 In this example, the bucket owner sends a request for a bucket operation by using the root credentials of the AWS account. 

![\[Illustration that shows a bucket operation requested by bucket owner.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/example10-policy-eval-logic.png)


 Amazon S3 performs the context evaluation as follows:

1.  Because the request is made by using the root user credentials of an AWS account, the user context is not evaluated.

1.  In the bucket context, Amazon S3 reviews the bucket policy to determine if the requester has permission to perform the operation. Amazon S3 authorizes the request. 

## Example 2: Bucket operation requested by an AWS account that is not the bucket owner
<a name="example2-policy-eval-logic"></a>

In this example, a request is made by using the root user credentials of AWS account 1111-1111-1111 for a bucket operation owned by AWS account 2222-2222-2222. No IAM users are involved in this request.

![\[Illustration that shows a bucket operation requested by an AWS account that is not the bucket owner.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/example20-policy-eval-logic.png)


In this example, Amazon S3 evaluates the context as follows:

1. Because the request is made by using the root user credentials of an AWS account, the user context is not evaluated.

1. In the bucket context, Amazon S3 examines the bucket policy. If the bucket owner (AWS account 2222-2222-2222) has not authorized AWS account 1111-1111-1111 to perform the requested operation, Amazon S3 denies the request. Otherwise, Amazon S3 grants the request and performs the operation.

## Example 3: Bucket operation requested by an IAM principal whose parent AWS account is also the bucket owner
<a name="example3-policy-eval-logic"></a>

In this example, the request is sent by Jill, an IAM user in AWS account 1111-1111-1111, which also owns the bucket. 

![\[Illustration that shows a bucket operation requested by an IAM principal and bucket owner.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/example30-policy-eval-logic.png)


 Amazon S3 performs the following context evaluation:

1.  Because the request is from an IAM principal, in the user context, Amazon S3 evaluates all policies that belong to the parent AWS account to determine if Jill has permission to perform the operation. 

    In this example, parent AWS account 1111-1111-1111, to which the principal belongs, is also the bucket owner. As a result, in addition to the user policy, Amazon S3 also evaluates the bucket policy and bucket ACL in the same context because they belong to the same account.

1. Because Amazon S3 evaluated the bucket policy and bucket ACL as part of the user context, it does not evaluate the bucket context.

## Example 4: Bucket operation requested by an IAM principal whose parent AWS account is not the bucket owner
<a name="example4-policy-eval-logic"></a>

In this example, the request is sent by Jill, an IAM user whose parent AWS account is 1111-1111-1111, but the bucket is owned by another AWS account, 2222-2222-2222. 

![\[Illustration that shows a bucket operation requested by an IAM principal that is not the bucket owner.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/example40-policy-eval-logic.png)


Jill will need permissions from both the parent AWS account and the bucket owner. Amazon S3 evaluates the context as follows:

1. Because the request is from an IAM principal, Amazon S3 evaluates the user context by reviewing the policies authored by her parent account to verify that Jill has the necessary permissions. If Jill has permission, then Amazon S3 moves on to evaluate the bucket context. If Jill doesn't have permission, Amazon S3 denies the request.

1.  In the bucket context, Amazon S3 verifies that bucket owner 2222-2222-2222 has granted Jill (or her parent AWS account) permission to perform the requested operation. If she has that permission, Amazon S3 grants the request and performs the operation. Otherwise, Amazon S3 denies the request. 
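Both grants in this example can be sketched as policy documents: an identity-based policy that Jill's parent account attaches to her, and a bucket policy in which the bucket owner grants her the same operation. The bucket name is a hypothetical placeholder, and account IDs are written without dashes as they would appear in ARNs.

```python
import json

# Identity-based policy attached to Jill by her parent account
# (1111-1111-1111). No Principal element is needed in an identity-based
# policy.
user_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "s3:ListBucket",
        "Resource": "arn:aws:s3:::amzn-s3-demo-bucket",
    }],
}

# Bucket policy in which the bucket owner (2222-2222-2222) grants
# Jill's user the same operation.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111111111111:user/Jill"},
        "Action": "s3:ListBucket",
        "Resource": "arn:aws:s3:::amzn-s3-demo-bucket",
    }],
}

# Jill's request succeeds only if both documents grant the action and
# no applicable policy contains an explicit deny.
for doc in (user_policy, bucket_policy):
    print(json.dumps(doc, indent=2))
```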

# How Amazon S3 authorizes a request for an object operation
<a name="access-control-auth-workflow-object-operation"></a>

When Amazon S3 receives a request for an object operation, it converts all the relevant permissions—resource-based permissions (object access control list (ACL), bucket policy, bucket ACL) and IAM user policies—into a set of policies to be evaluated at run time. It then evaluates the resulting set of policies in a series of steps. In each step, it evaluates a subset of policies in three specific contexts—user context, bucket context, and object context:

1. **User context** – If the requester is an IAM principal, the principal must have permission from the parent AWS account to which it belongs. In this step, Amazon S3 evaluates a subset of policies owned by the parent account (also referred to as the context authority). This subset of policies includes the user policy that the parent attaches to the principal. If the parent also owns the resource in the request (bucket or object), Amazon S3 evaluates the corresponding resource policies (bucket policy, bucket ACL, and object ACL) at the same time. 
**Note**  
If the parent AWS account owns the resource (bucket or object), it can grant resource permissions to its IAM principal by using either the user policy or the resource policy. 

1. **Bucket context** – In this context, Amazon S3 evaluates policies owned by the AWS account that owns the bucket.

   If the AWS account that owns the object in the request is not the same as the bucket owner, Amazon S3 checks the bucket owner's policies to determine whether the bucket owner has explicitly denied access to the object. If there is an explicit deny set on the object, Amazon S3 does not authorize the request. 

1. **Object context** – The requester must have permissions from the object owner to perform a specific object operation. In this step, Amazon S3 evaluates the object ACL. 
**Note**  
If bucket and object owners are the same, access to the object can be granted in the bucket policy, which is evaluated at the bucket context. If the owners are different, the object owners must use an object ACL to grant permissions. If the AWS account that owns the object is also the parent account to which the IAM principal belongs, it can configure object permissions in a user policy, which is evaluated at the user context. For more information about using these access policy alternatives, see [Walkthroughs that use policies to manage access to your Amazon S3 resources](example-walkthroughs-managing-access.md).  
If you, as the bucket owner, want to own all the objects in your bucket and use bucket policies or IAM-based policies to manage access to these objects, you can apply the bucket owner enforced setting for Object Ownership. With this setting, you, as the bucket owner, automatically own and have full control over every object in your bucket. Bucket and object ACLs can’t be edited and are no longer considered for access. For more information, see [Controlling ownership of objects and disabling ACLs for your bucket](about-object-ownership.md).
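When the bucket and object owners differ, the object owner might grant read access through an object ACL along the following lines. The bucket name, key, and canonical user ID are hypothetical placeholders, and the request shape follows the S3 `PutObjectAcl` API; the call itself is shown in outline because it requires AWS credentials.

```python
# Request parameters for granting another AWS account READ access to a
# single object through its ACL (placeholders throughout). The grantee
# is identified by the canonical user ID of that account.
put_object_acl_params = {
    "Bucket": "amzn-s3-demo-bucket",
    "Key": "example-object.txt",
    "GrantRead": 'id="example-canonical-user-id"',
}

# With boto3 (not executed here):
# import boto3
# boto3.client("s3").put_object_acl(**put_object_acl_params)
```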

 The following is an illustration of the context-based evaluation for an object operation.

![\[Illustration that shows the context-based evaluation for an object operation.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/AccessControlAuthorizationFlowObjectResource.png)


## Example of an object operation request
<a name="access-control-auth-workflow-object-operation-example1"></a>

In this example, IAM user Jill, whose parent AWS account is 1111-1111-1111, sends an object operation request (for example, `GetObject`) for an object owned by AWS account 3333-3333-3333 in a bucket owned by AWS account 2222-2222-2222. 

![\[Illustration that shows an object operation request.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/example50-policy-eval-logic.png)


Jill will need permission from the parent AWS account, the bucket owner, and the object owner. Amazon S3 evaluates the context as follows:

1. Because the request is from an IAM principal, Amazon S3 evaluates the user context to verify that the parent AWS account 1111-1111-1111 has given Jill permission to perform the requested operation. If she has that permission, Amazon S3 evaluates the bucket context. Otherwise, Amazon S3 denies the request.

1. In the bucket context, the bucket owner, AWS account 2222-2222-2222, is the context authority. Amazon S3 evaluates the bucket policy to determine if the bucket owner has explicitly denied Jill access to the object. 

1. In the object context, the context authority is AWS account 3333-3333-3333, the object owner. Amazon S3 evaluates the object ACL to determine if Jill has permission to access the object. If she does, Amazon S3 authorizes the request. 

# Required permissions for Amazon S3 API operations
<a name="using-with-s3-policy-actions"></a>

**Note**  
This page is about Amazon S3 policy actions for general purpose buckets. To learn more about Amazon S3 policy actions for directory buckets, see [Actions for directory buckets](s3-express-security-iam.md#s3-express-security-iam-actions).

To perform an S3 API operation, you must have the right permissions. This page maps S3 API operations to the required permissions. To grant permissions to perform an S3 API operation, you must compose a valid policy (such as an S3 bucket policy or IAM identity-based policy), and specify corresponding actions in the `Action` element of the policy. These actions are called policy actions. Not every S3 API operation is represented by a single permission (a single policy action), and some permissions (some policy actions) are required for many different API operations. 

When you compose policies, you must specify the `Resource` element based on the correct resource type required by the corresponding Amazon S3 policy actions. This page categorizes the permissions for S3 API operations by resource type. For more information about the resource types, see [Resource types defined by Amazon S3](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazons3.html#amazons3-resources-for-iam-policies) in the *Service Authorization Reference*. For a full list of Amazon S3 policy actions, resources, and condition keys for use in policies, see [Actions, resources, and condition keys for Amazon S3](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazons3.html) in the *Service Authorization Reference*. For a complete list of Amazon S3 API operations, see [Amazon S3 API Actions](https://docs.aws.amazon.com//AmazonS3/latest/API/API_Operations.html) in the *Amazon Simple Storage Service API Reference*.

For more information on how to address the HTTP `403 Forbidden` errors in S3, see [Troubleshoot access denied (403 Forbidden) errors in Amazon S3](troubleshoot-403-errors.md). For more information on the IAM features to use with S3, see [How Amazon S3 works with IAM](security_iam_service-with-iam.md). For more information on S3 security best practices, see [Security best practices for Amazon S3](security-best-practices.md). 
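As the tables on this page show, the policy action is not always named after the API operation it authorizes. For example, deleting a bucket's Lifecycle configuration (`DeleteBucketLifecycle`) requires `s3:PutLifecycleConfiguration`, so an identity-based policy granting it might look like the following sketch (the bucket name is a placeholder):

```python
import json

# Illustrative identity-based policy. Note that the DeleteBucketLifecycle
# API operation is authorized by the s3:PutLifecycleConfiguration action,
# not by a Delete* action.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "s3:PutLifecycleConfiguration",
        "Resource": "arn:aws:s3:::amzn-s3-demo-bucket",
    }],
}
print(json.dumps(policy, indent=2))
```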

**Topics**
+ [

## Bucket operations and permissions
](#using-with-s3-policy-actions-related-to-buckets)
+ [

## Object operations and permissions
](#using-with-s3-policy-actions-related-to-objects)
+ [

## Access point for general purpose buckets operations and permissions
](#using-with-s3-policy-actions-related-to-accesspoint)
+ [

## Object Lambda Access Point operations and permissions
](#using-with-s3-policy-actions-related-to-olap)
+ [

## Multi-Region Access Point operations and permissions
](#using-with-s3-policy-actions-related-to-mrap)
+ [

## Batch job operations and permissions
](#using-with-s3-policy-actions-related-to-batchops)
+ [

## S3 Storage Lens configuration operations and permissions
](#using-with-s3-policy-actions-related-to-lens)
+ [

## S3 Storage Lens groups operations and permissions
](#using-with-s3-policy-actions-related-to-lens-groups)
+ [

## S3 Access Grants instance operations and permissions
](#using-with-s3-policy-actions-related-to-s3ag-instances)
+ [

## S3 Access Grants location operations and permissions
](#using-with-s3-policy-actions-related-to-s3ag-locations)
+ [

## S3 Access Grants grant operations and permissions
](#using-with-s3-policy-actions-related-to-s3ag-grants)
+ [

## Account operations and permissions
](#using-with-s3-policy-actions-related-to-accounts)

## Bucket operations and permissions
<a name="using-with-s3-policy-actions-related-to-buckets"></a>

Bucket operations are S3 API operations that operate on the bucket resource type. You must specify S3 policy actions for bucket operations in bucket policies or IAM identity-based policies.

In the policies, the `Resource` element must be the bucket Amazon Resource Name (ARN). For more information about the `Resource` element format and example policies, see [Bucket operations](security_iam_service-with-iam.md#using-with-s3-actions-related-to-buckets).
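For instance, a policy statement that grants the bucket operation `s3:GetBucketLocation` would use the bucket ARN in `Resource`, not an object ARN ending in `/*`. The bucket name below is a hypothetical placeholder.

```python
import json

# Illustrative statement for a bucket operation. Bucket operations
# take the bucket ARN as the Resource, with no trailing "/*".
statement = {
    "Effect": "Allow",
    "Action": "s3:GetBucketLocation",
    "Resource": "arn:aws:s3:::amzn-s3-demo-bucket",
}
print(json.dumps({"Version": "2012-10-17", "Statement": [statement]}, indent=2))
```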

**Note**  
To grant permissions to bucket operations in access point policies, note the following:  
Permissions granted for bucket operations in an access point policy are effective only if the underlying bucket allows the same permissions. When you use an access point, you must delegate access control from the bucket to the access point or add the same permissions in the access point policy to the underlying bucket's policy.
In access point policies that grant permissions to bucket operations, the `Resource` element must be the `accesspoint` ARN. For more information about the `Resource` element format and example policies, see [Bucket operations in policies for access points for general purpose buckets](security_iam_service-with-iam.md#bucket-operations-ap). For more information about access point policies, see [Configuring IAM policies for using access points](access-points-policies.md). 
Not all bucket operations are supported by access points. For more information, see [Access points compatibility with S3 operations](access-points-service-api-support.md#access-points-operations-support).

The following is the mapping of bucket operations and required policy actions. 


| API operations | Policy actions | Description of policy actions | 
| --- | --- | --- | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateBucket.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateBucket.html)  |  (Required) `s3:CreateBucket`  |  Required to create a new S3 bucket.  | 
|    |  (Conditionally required) `s3:PutBucketAcl`  |  Required if you want to use access control list (ACL) to specify permissions on a bucket when you make a `CreateBucket` request.  | 
|    |  (Conditionally required) `s3:PutBucketObjectLockConfiguration`, `s3:PutBucketVersioning`  |  Required if you want to enable Object Lock when you create a bucket.  | 
|    |  (Conditionally required) `s3:PutBucketOwnershipControls`  |  Required if you want to specify S3 Object Ownership when you create a bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateBucketMetadataConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateBucketMetadataConfiguration.html) (V2 API operation. The IAM policy action name is the same for the V1 and V2 API operations.)  |  (Required) `s3:CreateBucketMetadataTableConfiguration`, `s3tables:CreateTableBucket`, `s3tables:CreateNamespace`, `s3tables:CreateTable`, `s3tables:GetTable`, `s3tables:PutTablePolicy`, `s3tables:PutTableEncryption`, `kms:DescribeKey`  |  Required to create a metadata table configuration on a general purpose bucket.  To create your AWS managed table bucket and the metadata tables that are specified in your metadata table configuration, you must have the specified `s3tables` permissions. If you want to encrypt your metadata tables with server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS), you need additional permissions in your KMS key policy. For more information, see [Setting up permissions for configuring metadata tables](metadata-tables-permissions.md). If you also want to integrate your AWS managed table bucket with AWS analytics services so that you can query your metadata table, you need additional permissions. For more information, see [Integrating Amazon S3 Tables with AWS analytics services](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-tables-integrating-aws.html).  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateBucketMetadataTableConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateBucketMetadataTableConfiguration.html) (V1 API operation)  |  (Required) `s3:CreateBucketMetadataTableConfiguration`, `s3tables:CreateNamespace`, `s3tables:CreateTable`, `s3tables:GetTable`, `s3tables:PutTablePolicy`  |  Required to create a metadata table configuration on a general purpose bucket.  To create the metadata table in the table bucket that's specified in your metadata table configuration, you must have the specified `s3tables` permissions. If you want to encrypt your metadata tables with server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS), you need additional permissions. For more information, see [Setting up permissions for configuring metadata tables](metadata-tables-permissions.md). If you also want to integrate your table bucket with AWS analytics services so that you can query your metadata table, you need additional permissions. For more information, see [Integrating Amazon S3 Tables with AWS analytics services](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-tables-integrating-aws.html).  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucket.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucket.html)  |  (Required) `s3:DeleteBucket`  |  Required to delete an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketAnalyticsConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketAnalyticsConfiguration.html)  |  (Required) `s3:PutAnalyticsConfiguration`  |  Required to delete an S3 analytics configuration from an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketCors.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketCors.html)  |  (Required) `s3:PutBucketCORS`  |  Required to delete the cross-origin resource sharing (CORS) configuration for an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketEncryption.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketEncryption.html)  |  (Required) `s3:PutEncryptionConfiguration`  |  Required to reset the default encryption configuration for an S3 bucket as server-side encryption with Amazon S3 managed keys (SSE-S3).  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketIntelligentTieringConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketIntelligentTieringConfiguration.html)  |  (Required) `s3:PutIntelligentTieringConfiguration`  |  Required to delete the existing S3 Intelligent-Tiering configuration from an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketInventoryConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketInventoryConfiguration.html)  |  (Required) `s3:PutInventoryConfiguration`  |  Required to delete an S3 Inventory configuration from an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketLifecycle.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketLifecycle.html)  |  (Required) `s3:PutLifecycleConfiguration`  |  Required to delete the S3 Lifecycle configuration for an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketMetadataTableConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketMetadataTableConfiguration.html) (V2 API operation. The IAM policy action name is the same for the V1 and V2 API operations.)  |  (Required) `s3:DeleteBucketMetadataTableConfiguration`  |  Required to delete a metadata table configuration from a general purpose bucket.   | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketMetadataTableConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketMetadataTableConfiguration.html) (V1 API operation)  |  (Required) `s3:DeleteBucketMetadataTableConfiguration`  |  Required to delete a metadata table configuration from a general purpose bucket.   | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketMetricsConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketMetricsConfiguration.html)  |  (Required) `s3:PutMetricsConfiguration`  |  Required to delete a metrics configuration for the Amazon CloudWatch request metrics from an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketOwnershipControls.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketOwnershipControls.html)   |  (Required) `s3:PutBucketOwnershipControls`  |  Required to remove the Object Ownership setting for an S3 bucket. After removal, the Object Ownership setting becomes `Object writer`.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketPolicy.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketPolicy.html)  |  (Required) `s3:DeleteBucketPolicy`  |  Required to delete the policy of an S3 bucket.   | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketReplication.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketReplication.html)  |  (Required) `s3:PutReplicationConfiguration`  |  Required to delete the replication configuration of an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketTagging.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketTagging.html)  |  (Required) `s3:PutBucketTagging`  |  Required to delete tags from an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketWebsite.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketWebsite.html)  |  (Required) `s3:DeleteBucketWebsite`  |  Required to remove the website configuration for an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeletePublicAccessBlock.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeletePublicAccessBlock.html) (Bucket-level)  |  (Required) `s3:PutBucketPublicAccessBlock`  |  Required to remove the block public access configuration for an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketAccelerateConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketAccelerateConfiguration.html)  |  (Required) `s3:GetAccelerateConfiguration`  |  Required to use the accelerate subresource to return the Amazon S3 Transfer Acceleration state of a bucket, which is either Enabled or Suspended.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketAcl.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketAcl.html)  |  (Required) `s3:GetBucketAcl`  |  Required to return the access control list (ACL) of an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketAnalyticsConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketAnalyticsConfiguration.html)  |  (Required) `s3:GetAnalyticsConfiguration`  |  Required to return an analytics configuration that's identified by the analytics configuration ID from an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketCors.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketCors.html)  |  (Required) `s3:GetBucketCORS`  |  Required to return the cross-origin resource sharing (CORS) configuration for an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketEncryption.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketEncryption.html)  |  (Required) `s3:GetEncryptionConfiguration`  |  Required to return the default encryption configuration for an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketIntelligentTieringConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketIntelligentTieringConfiguration.html)  |  (Required) `s3:GetIntelligentTieringConfiguration`  |  Required to get the S3 Intelligent-Tiering configuration of an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketInventoryConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketInventoryConfiguration.html)  |  (Required) `s3:GetInventoryConfiguration`  |  Required to return an inventory configuration that's identified by the inventory configuration ID from the bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketLifecycle.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketLifecycle.html)  |  (Required) `s3:GetLifecycleConfiguration`  |  Required to return the S3 Lifecycle configuration of the bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketLocation.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketLocation.html)  |  (Required) `s3:GetBucketLocation`  |  Required to return the AWS Region that an S3 bucket resides in.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketLogging.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketLogging.html)  |  (Required) `s3:GetBucketLogging`  |  Required to return the logging status of an S3 bucket and the permissions that users have to view and modify that status.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketMetadataConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketMetadataConfiguration.html) (V2 API operation. The IAM policy action name is the same for the V1 and V2 API operations.)  |  (Required) `s3:GetBucketMetadataTableConfiguration`  |  Required to retrieve the metadata configuration for a general purpose bucket.   | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketMetadataTableConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketMetadataTableConfiguration.html) (V1 API operation)  |  (Required) `s3:GetBucketMetadataTableConfiguration`  |  Required to retrieve a metadata table configuration for a general purpose bucket.   | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketMetricsConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketMetricsConfiguration.html)  |  (Required) `s3:GetMetricsConfiguration`  |  Required to get a metrics configuration that's specified by the metrics configuration ID from the bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketNotificationConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketNotificationConfiguration.html)  |  (Required) `s3:GetBucketNotification`  |  Required to return the notification configuration of an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketOwnershipControls.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketOwnershipControls.html)  |  (Required) `s3:GetBucketOwnershipControls`  |  Required to retrieve the Object Ownership setting for an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketPolicy.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketPolicy.html)  |  (Required) `s3:GetBucketPolicy`  |  Required to return the policy of an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketPolicyStatus.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketPolicyStatus.html)  |  (Required) `s3:GetBucketPolicyStatus`  |  Required to retrieve the policy status for an S3 bucket, indicating whether the bucket is public.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketReplication.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketReplication.html)  |  (Required) `s3:GetReplicationConfiguration`  |  Required to return the replication configuration of an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketRequestPayment.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketRequestPayment.html)  |  (Required) `s3:GetBucketRequestPayment`  |  Required to return the request payment configuration for an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketVersioning.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketVersioning.html)  |  (Required) `s3:GetBucketVersioning`  |  Required to return the versioning state of an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketTagging.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketTagging.html)  |  (Required) `s3:GetBucketTagging`  |  Required to return the tag set that's associated with an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketWebsite.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketWebsite.html)  |  (Required) `s3:GetBucketWebsite`  |  Required to return the website configuration for an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectLockConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectLockConfiguration.html)  |  (Required) `s3:GetBucketObjectLockConfiguration`  |  Required to get the Object Lock configuration for an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetPublicAccessBlock.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetPublicAccessBlock.html) (Bucket-level)  |  (Required) `s3:GetBucketPublicAccessBlock`  |  Required to retrieve the block public access configuration for an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadBucket.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadBucket.html)  |  (Required) `s3:ListBucket`  |  Required to determine if a bucket exists and if you have permission to access it.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListBucketAnalyticsConfigurations.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListBucketAnalyticsConfigurations.html)  |  (Required) `s3:GetAnalyticsConfiguration`  |  Required to list the analytics configurations for an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListBucketIntelligentTieringConfigurations.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListBucketIntelligentTieringConfigurations.html)  |  (Required) `s3:GetIntelligentTieringConfiguration`  |  Required to list the S3 Intelligent-Tiering configurations of an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListBucketInventoryConfigurations.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListBucketInventoryConfigurations.html)  |  (Required) `s3:GetInventoryConfiguration`  |  Required to return a list of inventory configurations for an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListBucketMetricsConfigurations.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListBucketMetricsConfigurations.html)  |  (Required) `s3:GetMetricsConfiguration`  |  Required to list the metrics configurations for an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjects.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjects.html)  |  (Required) `s3:ListBucket`  |  Required to list some or all (up to 1,000) of the objects in an S3 bucket.  | 
|    |  (Conditionally required) `s3:GetObjectAcl`  |  Required if you want to display object owner information.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html)  |  (Required) `s3:ListBucket`  |  Required to list some or all (up to 1,000) of the objects in an S3 bucket.  | 
|    |  (Conditionally required) `s3:GetObjectAcl`  |  Required if you want to display object owner information.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectVersions.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectVersions.html)  |  (Required) `s3:ListBucketVersions`  |  Required to get metadata about all the versions of objects in an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketAccelerateConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketAccelerateConfiguration.html)  |  (Required) `s3:PutAccelerateConfiguration`  |  Required to set the accelerate configuration of an existing bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketAcl.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketAcl.html)  |  (Required) `s3:PutBucketAcl`  |  Required to use access control lists (ACLs) to set the permissions on an existing bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketAnalyticsConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketAnalyticsConfiguration.html)  |  (Required) `s3:PutAnalyticsConfiguration`  |  Required to set an analytics configuration for an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketCors.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketCors.html)  |  (Required) `s3:PutBucketCORS`  |  Required to set the cross-origin resource sharing (CORS) configuration for an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketEncryption.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketEncryption.html)  |  (Required) `s3:PutEncryptionConfiguration`  |  Required to configure the default encryption for an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketIntelligentTieringConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketIntelligentTieringConfiguration.html)  |  (Required) `s3:PutIntelligentTieringConfiguration`  |  Required to add an S3 Intelligent-Tiering configuration to an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketInventoryConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketInventoryConfiguration.html)  |  (Required) `s3:PutInventoryConfiguration`  |  Required to add an inventory configuration to an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketLifecycle.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketLifecycle.html)  |  (Required) `s3:PutLifecycleConfiguration`  |  Required to create a new S3 Lifecycle configuration or replace an existing lifecycle configuration for an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketLogging.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketLogging.html)  |  (Required) `s3:PutBucketLogging`  |  Required to set the logging parameters for an S3 bucket and specify permissions for who can view and modify the logging parameters.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketMetricsConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketMetricsConfiguration.html)  |  (Required) `s3:PutMetricsConfiguration`  |  Required to set or update a metrics configuration for the Amazon CloudWatch request metrics of an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketNotificationConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketNotificationConfiguration.html)  |  (Required) `s3:PutBucketNotification`  |  Required to enable notifications of specified events for an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketOwnershipControls.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketOwnershipControls.html)  |  (Required) `s3:PutBucketOwnershipControls`  |  Required to create or modify the Object Ownership setting for an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketPolicy.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketPolicy.html)  |  (Required) `s3:PutBucketPolicy`  |  Required to apply an S3 bucket policy to a bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketReplication.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketReplication.html)  |  (Required) `s3:PutReplicationConfiguration`  |  Required to create a new replication configuration or replace an existing one for an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketRequestPayment.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketRequestPayment.html)  |  (Required) `s3:PutBucketRequestPayment`  |  Required to set the request payment configuration for a bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketTagging.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketTagging.html)  |  (Required) `s3:PutBucketTagging`  |  Required to add a set of tags to an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketVersioning.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketVersioning.html)  |  (Required) `s3:PutBucketVersioning`  |  Required to set the versioning state of an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketWebsite.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketWebsite.html)  |  (Required) `s3:PutBucketWebsite`  |  Required to configure a bucket as a website and set the configuration of the website.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectLockConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectLockConfiguration.html)  |  (Required) `s3:PutBucketObjectLockConfiguration`  |  Required to put an Object Lock configuration on an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutPublicAccessBlock.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutPublicAccessBlock.html) (Bucket-level)  |  (Required) `s3:PutBucketPublicAccessBlock`  |  Required to create or modify the block public access configuration for an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_UpdateBucketMetadataInventoryTableConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UpdateBucketMetadataInventoryTableConfiguration.html)  |  (Required) `s3:UpdateBucketMetadataInventoryTableConfiguration`, `s3tables:CreateTableBucket`, `s3tables:CreateNamespace`, `s3tables:CreateTable`, `s3tables:GetTable`, `s3tables:PutTablePolicy`, `s3tables:PutTableEncryption`, `kms:DescribeKey`  |  Required to enable or disable an inventory table for a metadata table configuration on a general purpose bucket. If you want to encrypt your inventory table with server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS), you need additional permissions in your KMS key policy. For more information, see [Setting up permissions for configuring metadata tables](metadata-tables-permissions.md).  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_UpdateBucketMetadataJournalTableConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UpdateBucketMetadataJournalTableConfiguration.html)  |  (Required) `s3:UpdateBucketMetadataJournalTableConfiguration`  |  Required to enable or disable journal table record expiration for a metadata table configuration on a general purpose bucket.  | 
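
To illustrate how several of the bucket-level actions in the preceding table combine, the following is a minimal sketch of an identity-based policy that grants read-only access to a few bucket configurations. The bucket name `amzn-s3-demo-bucket` is a placeholder; these bucket operations use the bucket ARN (no `/*` suffix) in the `Resource` element.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadBucketConfigurations",
      "Effect": "Allow",
      "Action": [
        "s3:GetBucketLocation",
        "s3:GetBucketVersioning",
        "s3:GetEncryptionConfiguration",
        "s3:GetLifecycleConfiguration"
      ],
      "Resource": "arn:aws:s3:::amzn-s3-demo-bucket"
    }
  ]
}
```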

## Object operations and permissions
<a name="using-with-s3-policy-actions-related-to-objects"></a>

Object operations are S3 API operations that operate on the object resource type. You must specify S3 policy actions for object operations in resource-based policies (such as bucket policies, access point policies, Multi-Region Access Point policies, and VPC endpoint policies) or in IAM identity-based policies.

In the policies, the `Resource` element must be the object ARN. For more information about the `Resource` element format and example policies, see [Object operations](security_iam_service-with-iam.md#using-with-s3-actions-related-to-objects). 
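
For example, a bucket policy statement that grants an IAM role read access to all objects in a bucket uses the object ARN (the bucket ARN followed by `/*` or a key prefix), not the bucket ARN. The account ID, role name, and bucket name below are placeholders.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:role/example-role"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*"
    }
  ]
}
```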

**Note**  
AWS KMS policy actions (`kms:GenerateDataKey` and `kms:Decrypt`) apply only to the AWS KMS resource type and must be specified in IAM identity-based policies and AWS KMS resource-based policies (AWS KMS key policies). You can't specify AWS KMS policy actions in S3 resource-based policies, such as S3 bucket policies.  
When you use access points to control access to object operations, you can use access point policies. To grant permissions to object operations in access point policies, note the following:  
+ In access point policies that grant permissions to object operations, the `Resource` element must be the ARNs for objects accessed through an access point. For more information about the `Resource` element format and example policies, see [Object operations in access point policies](security_iam_service-with-iam.md#object-operations-ap).
+ Not all object operations are supported by access points. For more information, see [Access points compatibility with S3 operations](access-points-service-api-support.md#access-points-operations-support).
+ Not all object operations are supported by Multi-Region Access Points. For more information, see [Multi-Region Access Point compatibility with S3 operations](MrapOperations.md#mrap-operations-support).
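
As an illustration of the access point ARN format for objects, the following access point policy statement grants `s3:GetObject` on all objects accessed through a hypothetical access point named `example-ap`. The Region, account ID, role name, and access point name are placeholders.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:role/example-role"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:us-west-2:111122223333:accesspoint/example-ap/object/*"
    }
  ]
}
```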

The following table maps each object operation to its required policy actions. 


| API operations | Policy actions | Description of policy actions | 
| --- | --- | --- | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_AbortMultipartUpload.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_AbortMultipartUpload.html)  |  (Required) `s3:AbortMultipartUpload`  |  Required to abort a multipart upload.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_CompleteMultipartUpload.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CompleteMultipartUpload.html)  |  (Required) `s3:PutObject`  |  Required to complete a multipart upload.  | 
|    |  (Conditionally required) `kms:Decrypt`  |  Required if you want to complete a multipart upload for an AWS KMS customer managed key encrypted object.   | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html)  |  For source object:  |  For source object:  | 
|    |  (Required) Either `s3:GetObject` or `s3:GetObjectVersion`  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/using-with-s3-policy-actions.html)  | 
|    |  (Conditionally required) `kms:Decrypt`  |  Required if you want to copy an AWS KMS customer managed key encrypted object from the source bucket.   | 
|    |  For destination object:  |  For destination object:  | 
|    |  (Required) `s3:PutObject`  |  Required to put the copied object in the destination bucket.  | 
|    |  (Conditionally required) `s3:PutObjectAcl`  |  Required if you want to put the copied object with the object access control list (ACL) to the destination bucket when you make a `CopyObject` request.  | 
|    |  (Conditionally required) `s3:PutObjectTagging`  |  Required if you want to put the copied object with object tagging to the destination bucket when you make a `CopyObject` request.  | 
|    |  (Conditionally required) `kms:GenerateDataKey`  |  Required if you want to encrypt the copied object with an AWS KMS customer managed key and put it to the destination bucket.   | 
|    |  (Conditionally required) `s3:PutObjectRetention`  |  Required if you want to set an Object Lock retention configuration for the new object.  | 
|    |  (Conditionally required) `s3:PutObjectLegalHold`  |  Required if you want to place an Object Lock legal hold on the new object.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateMultipartUpload.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateMultipartUpload.html)  |  (Required) `s3:PutObject`  |  Required to initiate a multipart upload.  | 
|    |  (Conditionally required) `s3:PutObjectAcl`  |  Required if you want to set the object access control list (ACL) permissions for the uploaded object.  | 
|    |  (Conditionally required) `s3:PutObjectTagging`  |  Required if you want to add object tags to the uploaded object.  | 
|    |  (Conditionally required) `kms:GenerateDataKey`  |  Required if you want to use an AWS KMS customer managed key to encrypt an object when you initiate a multipart upload.   | 
|    |  (Conditionally required) `s3:PutObjectRetention`  |  Required if you want to set an Object Lock retention configuration for the uploaded object.  | 
|    |  (Conditionally required) `s3:PutObjectLegalHold`  |  Required if you want to apply an Object Lock legal hold to the uploaded object.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObject.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObject.html)  |  (Required) Either `s3:DeleteObject` or `s3:DeleteObjectVersion`  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/using-with-s3-policy-actions.html)  | 
|    |  (Conditionally required) `s3:BypassGovernanceRetention`  |  Required if you want to delete an object that's protected by governance mode for Object Lock retention.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObjects.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObjects.html)  |  (Required) Either `s3:DeleteObject` or `s3:DeleteObjectVersion`  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/using-with-s3-policy-actions.html)  | 
|    |  (Conditionally required) `s3:BypassGovernanceRetention`  |  Required if you want to delete objects that are protected by governance mode for Object Lock retention.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObjectTagging.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObjectTagging.html)  |  (Required) Either `s3:DeleteObjectTagging` or `s3:DeleteObjectVersionTagging`  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/using-with-s3-policy-actions.html)  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html)  |  (Required) Either `s3:GetObject` or `s3:GetObjectVersion`  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/using-with-s3-policy-actions.html)  | 
|    |  (Conditionally required) `kms:Decrypt`  |  Required if you want to get and decrypt an AWS KMS customer managed key encrypted object.   | 
|    |  (Conditionally required) `s3:GetObjectTagging`  |  Required if you want to get the tag-set of an object when you make a `GetObject` request.  | 
|    |  (Conditionally required) `s3:GetObjectLegalHold`  |  Required if you want to get an object's current Object Lock legal hold status.  | 
|    |  (Conditionally required) `s3:GetObjectRetention`  |  Required if you want to retrieve the Object Lock retention settings for an object.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectAcl.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectAcl.html)  |  (Required) Either `s3:GetObjectAcl` or `s3:GetObjectVersionAcl`  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/using-with-s3-policy-actions.html)  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectAttributes.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectAttributes.html)  |  (Required) Either `s3:GetObject` or `s3:GetObjectVersion`  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/using-with-s3-policy-actions.html)  | 
|    |  (Conditionally required) `kms:Decrypt`  |  Required if you want to retrieve attributes related to an AWS KMS customer managed key encrypted object.   | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectLegalHold.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectLegalHold.html)  |  (Required) `s3:GetObjectLegalHold`  |  Required to get an object's current Object Lock legal hold status.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectRetention.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectRetention.html)  |  (Required) `s3:GetObjectRetention`  |  Required to retrieve the Object Lock retention settings for an object.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectTagging.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectTagging.html)  |  (Required) Either `s3:GetObjectTagging` or `s3:GetObjectVersionTagging`  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/using-with-s3-policy-actions.html)  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectTorrent.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectTorrent.html)  |  (Required) `s3:GetObject`  |  Required to return torrent files of an object.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadObject.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadObject.html)  |  (Required) `s3:GetObject`  |  Required to retrieve metadata from an object without returning the object itself.  | 
|    |  (Conditionally required) `s3:GetObjectLegalHold`  |  Required if you want to get an object's current Object Lock legal hold status.  | 
|    |  (Conditionally required) `s3:GetObjectRetention`  |  Required if you want to retrieve the Object Lock retention settings for an object.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListMultipartUploads.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListMultipartUploads.html)  |  (Required) `s3:ListBucketMultipartUploads`  |  Required to list in-progress multipart uploads in a bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListParts.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListParts.html)  |  (Required) `s3:ListMultipartUploadParts`  |  Required to list the parts that have been uploaded for a specific multipart upload.  | 
|    |  (Conditionally required) `kms:Decrypt`  |  Required if you want to list parts of an AWS KMS customer managed key encrypted multipart upload.   | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html)  |  (Required) `s3:PutObject`  |  Required to put an object.  | 
|    |  (Conditionally required) `s3:PutObjectAcl`  |  Required if you want to put the object access control list (ACL) when you make a `PutObject` request.  | 
|    |  (Conditionally required) `s3:PutObjectTagging`  |  Required if you want to put object tagging when you make a `PutObject` request.  | 
|    |  (Conditionally required) `kms:GenerateDataKey`  |  Required if you want to encrypt an object with an AWS KMS customer managed key.   | 
|    |  (Conditionally required) `s3:PutObjectRetention`  |  Required if you want to set an Object Lock retention configuration on an object.  | 
|    |  (Conditionally required) `s3:PutObjectLegalHold`  |  Required if you want to apply an Object Lock legal hold configuration to a specified object.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectAcl.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectAcl.html)  |  (Required) Either `s3:PutObjectAcl` or `s3:PutObjectVersionAcl`  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/using-with-s3-policy-actions.html)  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectLegalHold.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectLegalHold.html)  |  (Required) `s3:PutObjectLegalHold`  |  Required to apply an Object Lock legal hold configuration to an object.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectRetention.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectRetention.html)  |  (Required) `s3:PutObjectRetention`  |  Required to apply an Object Lock retention configuration to an object.  | 
|    |  (Conditionally required) `s3:BypassGovernanceRetention`  |  Required if you want to bypass the governance mode of an Object Lock retention configuration.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectTagging.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectTagging.html)  |  (Required) Either `s3:PutObjectTagging` or `s3:PutObjectVersionTagging`  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/using-with-s3-policy-actions.html)  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_RestoreObject.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_RestoreObject.html)  |  (Required) `s3:RestoreObject`  |  Required to restore a copy of an archived object.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_SelectObjectContent.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_SelectObjectContent.html)  |  (Required) `s3:GetObject`  |  Required to filter the contents of an S3 object based on a simple structured query language (SQL) statement.  | 
|    |  (Conditionally required) `kms:Decrypt`  |  Required if you want to filter the contents of an S3 object that's encrypted with an AWS KMS customer managed key.  | 
| [https://docs.aws.amazon.com/AmazonS3/latest/API/API_UpdateObjectEncryption.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UpdateObjectEncryption.html) | (Required) `s3:UpdateObjectEncryption`, `s3:PutObject`, `kms:Encrypt`, `kms:Decrypt`, `kms:GenerateDataKey`, `kms:ReEncrypt*`  | Required if you want to change encrypted objects between server-side encryption with Amazon S3 managed keys (SSE-S3) and server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS). You can also use the `UpdateObjectEncryption` operation to apply S3 Bucket Keys to reduce AWS KMS request costs, or to change the customer managed KMS key that's used to encrypt your data so that you can comply with custom key-rotation standards. | 
|    | (Conditionally required) `organizations:DescribeAccount` | If you're using AWS Organizations, to use the `UpdateObjectEncryption` operation with customer-managed KMS keys from other AWS accounts within your organization, you must have the `organizations:DescribeAccount` permission.   You must also request the ability to use AWS KMS keys owned by other member accounts within your organization by contacting AWS Support.   | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPart.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPart.html)  |  (Required) `s3:PutObject`  |  Required to upload a part in a multipart upload.  | 
|    |  (Conditionally required) `kms:GenerateDataKey`  |  Required if you want to put an upload part and encrypt it with an AWS KMS customer managed key.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPartCopy.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPartCopy.html)  |  For source object:  |  For source object:  | 
|    |  (Required) Either `s3:GetObject` or `s3:GetObjectVersion`  |  Required to read the source object. To copy a specific version of the source object, `s3:GetObjectVersion` is required.  | 
|    |  (Conditionally required) `kms:Decrypt`  |  Required if you want to copy an AWS KMS customer managed key encrypted object from the source bucket.   | 
|    |  For destination part:  |  For destination part:  | 
|    |  (Required) `s3:PutObject`  |  Required to upload a multipart upload part to the destination bucket.  | 
|    |  (Conditionally required) `kms:GenerateDataKey`  |  Required if you want to encrypt a part with an AWS KMS customer managed key when you upload the part to the destination bucket.   | 

## Access point operations and permissions for general purpose buckets
<a name="using-with-s3-policy-actions-related-to-accesspoint"></a>

Access point operations are S3 API operations that operate on the `accesspoint` resource type. You must specify S3 policy actions for access point operations in IAM identity-based policies, not in bucket policies or access point policies.

In the policies, the `Resource` element must be the `accesspoint` ARN. For more information about the `Resource` element format and example policies, see [Access point for general purpose bucket operations](security_iam_service-with-iam.md#using-with-s3-actions-related-to-accesspoint).
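As an illustrative sketch, an identity-based policy for access point operations might look like the following. The Region, account ID, and access point name in the `Resource` ARN are placeholder values:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ManageExampleAccessPointPolicy",
      "Effect": "Allow",
      "Action": [
        "s3:GetAccessPointPolicy",
        "s3:PutAccessPointPolicy",
        "s3:DeleteAccessPointPolicy"
      ],
      "Resource": "arn:aws:s3:us-west-2:111122223333:accesspoint/example-access-point"
    }
  ]
}
```

Attach a policy like this to the IAM identity that manages the access point. Scoping the `Resource` element to a specific access point ARN keeps the grant narrower than a wildcard would.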

**Note**  
If you want to use access points to control access to bucket or object operations, note the following:  
To use access points to control access to bucket operations, see [Bucket operations in policies for access points for general purpose buckets](security_iam_service-with-iam.md#bucket-operations-ap).
To use access points to control access to object operations, see [Object operations in access point policies](security_iam_service-with-iam.md#object-operations-ap).
For more information about how to configure access point policies, see [Configuring IAM policies for using access points](access-points-policies.md).

The following table maps access point operations to their required policy actions.


| API operations | Policy actions | Description of policy actions | 
| --- | --- | --- | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_CreateAccessPoint.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_CreateAccessPoint.html)  |  (Required) `s3:CreateAccessPoint`  |  Required to create an access point that's associated with an S3 bucket.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteAccessPoint.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteAccessPoint.html)  |  (Required) `s3:DeleteAccessPoint`  |  Required to delete an access point.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteAccessPointPolicy.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteAccessPointPolicy.html)  |  (Required) `s3:DeleteAccessPointPolicy`  |  Required to delete an access point policy.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetAccessPointPolicy.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetAccessPointPolicy.html)  |  (Required) `s3:GetAccessPointPolicy`  |  Required to retrieve an access point policy.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetAccessPointPolicyStatus.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetAccessPointPolicyStatus.html)  |  (Required) `s3:GetAccessPointPolicyStatus`  |  Required to retrieve information about whether the specified access point currently has a policy that allows public access.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_PutAccessPointPolicy.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_PutAccessPointPolicy.html)  |  (Required) `s3:PutAccessPointPolicy`  |  Required to put an access point policy.  | 

## Object Lambda Access Point operations and permissions
<a name="using-with-s3-policy-actions-related-to-olap"></a>

Object Lambda Access Point operations are S3 API operations that operate on the `objectlambdaaccesspoint` resource type. For more information about how to configure policies for Object Lambda Access Point operations, see [Configuring IAM policies for Object Lambda Access Points](olap-policies.md).
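As an illustrative sketch, an identity-based policy for managing a single Object Lambda Access Point might look like the following. The Region, account ID, and access point name are placeholders; note that Object Lambda Access Point ARNs use the `s3-object-lambda` service prefix rather than `s3`:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ManageExampleObjectLambdaAccessPoint",
      "Effect": "Allow",
      "Action": [
        "s3:GetAccessPointConfigurationForObjectLambda",
        "s3:PutAccessPointConfigurationForObjectLambda"
      ],
      "Resource": "arn:aws:s3-object-lambda:us-west-2:111122223333:accesspoint/example-olap"
    }
  ]
}
```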

The following table maps Object Lambda Access Point operations to their required policy actions.


| API operations | Policy actions | Description of policy actions | 
| --- | --- | --- | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_CreateAccessPointForObjectLambda.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_CreateAccessPointForObjectLambda.html)  |  (Required) `s3:CreateAccessPointForObjectLambda`  |  Required to create an Object Lambda Access Point.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteAccessPointForObjectLambda.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteAccessPointForObjectLambda.html)  |  (Required) `s3:DeleteAccessPointForObjectLambda`  |  Required to delete a specified Object Lambda Access Point.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteAccessPointPolicyForObjectLambda.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteAccessPointPolicyForObjectLambda.html)  |  (Required) `s3:DeleteAccessPointPolicyForObjectLambda`  |  Required to delete the policy on a specified Object Lambda Access Point.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetAccessPointConfigurationForObjectLambda.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetAccessPointConfigurationForObjectLambda.html)  |  (Required) `s3:GetAccessPointConfigurationForObjectLambda`  |  Required to retrieve the configuration of the Object Lambda Access Point.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetAccessPointForObjectLambda.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetAccessPointForObjectLambda.html)  |  (Required) `s3:GetAccessPointForObjectLambda`  |  Required to retrieve information about the Object Lambda Access Point.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetAccessPointPolicyForObjectLambda.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetAccessPointPolicyForObjectLambda.html)  |  (Required) `s3:GetAccessPointPolicyForObjectLambda`  |  Required to return the access point policy that's associated with the specified Object Lambda Access Point.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetAccessPointPolicyStatusForObjectLambda.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetAccessPointPolicyStatusForObjectLambda.html)  |  (Required) `s3:GetAccessPointPolicyStatusForObjectLambda`  |  Required to return the policy status for a specific Object Lambda Access Point policy.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutAccessPointConfigurationForObjectLambda.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutAccessPointConfigurationForObjectLambda.html)  |  (Required) `s3:PutAccessPointConfigurationForObjectLambda`  |  Required to set the configuration of the Object Lambda Access Point.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutAccessPointPolicyForObjectLambda.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutAccessPointPolicyForObjectLambda.html)  |  (Required) `s3:PutAccessPointPolicyForObjectLambda`  |  Required to associate an access policy with a specified Object Lambda Access Point.  | 

## Multi-Region Access Point operations and permissions
<a name="using-with-s3-policy-actions-related-to-mrap"></a>

Multi-Region Access Point operations are S3 API operations that operate on the `multiregionaccesspoint` resource type. For more information about how to configure policies for Multi-Region Access Point operations, see [Multi-Region Access Point policy examples](MultiRegionAccessPointPermissions.md#MultiRegionAccessPointPolicyExamples).
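As an illustrative sketch, an identity-based policy that allows read-only Multi-Region Access Point operations might look like the following. The account ID and Multi-Region Access Point alias are placeholders; Multi-Region Access Point ARNs identify the access point by its alias and omit the Region component:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadExampleMultiRegionAccessPoint",
      "Effect": "Allow",
      "Action": [
        "s3:GetMultiRegionAccessPoint",
        "s3:GetMultiRegionAccessPointPolicy"
      ],
      "Resource": "arn:aws:s3::111122223333:accesspoint/example-alias.mrap"
    }
  ]
}
```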

The following table maps Multi-Region Access Point operations to their required policy actions.


| API operations | Policy actions | Description of policy actions | 
| --- | --- | --- | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_CreateMultiRegionAccessPoint.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_CreateMultiRegionAccessPoint.html)  |  (Required) `s3:CreateMultiRegionAccessPoint`  |  Required to create a Multi-Region Access Point and associate it with S3 buckets.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteMultiRegionAccessPoint.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteMultiRegionAccessPoint.html)  |  (Required) `s3:DeleteMultiRegionAccessPoint`  |  Required to delete a Multi-Region Access Point.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DescribeMultiRegionAccessPointOperation.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DescribeMultiRegionAccessPointOperation.html)  |  (Required) `s3:DescribeMultiRegionAccessPointOperation`  |  Required to retrieve the status of an asynchronous request to manage a Multi-Region Access Point.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetMultiRegionAccessPoint.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetMultiRegionAccessPoint.html)  |  (Required) `s3:GetMultiRegionAccessPoint`  |  Required to return configuration information about the specified Multi-Region Access Point.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetMultiRegionAccessPointPolicy.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetMultiRegionAccessPointPolicy.html)  |  (Required) `s3:GetMultiRegionAccessPointPolicy`  |  Required to return the access control policy of the specified Multi-Region Access Point.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetMultiRegionAccessPointPolicyStatus.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetMultiRegionAccessPointPolicyStatus.html)  |  (Required) `s3:GetMultiRegionAccessPointPolicyStatus`  |  Required to return the policy status of a specific Multi-Region Access Point, indicating whether it has an access control policy that allows public access.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetMultiRegionAccessPointRoutes.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetMultiRegionAccessPointRoutes.html)  |  (Required) `s3:GetMultiRegionAccessPointRoutes`  |  Required to return the routing configuration for a Multi-Region Access Point.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_PutMultiRegionAccessPointPolicy.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_PutMultiRegionAccessPointPolicy.html)  |  (Required) `s3:PutMultiRegionAccessPointPolicy`  |  Required to update the access control policy of the specified Multi-Region Access Point.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_SubmitMultiRegionAccessPointRoutes.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_SubmitMultiRegionAccessPointRoutes.html)  |  (Required) `s3:SubmitMultiRegionAccessPointRoutes`  |  Required to submit an updated route configuration for a Multi-Region Access Point.  | 

## Batch job operations and permissions
<a name="using-with-s3-policy-actions-related-to-batchops"></a>

Batch Operations job operations are S3 API operations that operate on the `job` resource type. You must specify S3 policy actions for job operations in IAM identity-based policies, not in bucket policies.

In the policies, the `Resource` element must be the `job` ARN. For more information about the `Resource` element format and example policies, see [Batch job operations](security_iam_service-with-iam.md#using-with-s3-actions-related-to-batchops).
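As an illustrative sketch, an identity-based policy for Batch Operations job operations might look like the following. The Region and account ID are placeholders; the wildcard covers all job IDs in the account, and you can substitute a specific job ID to narrow the grant:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ManageExampleBatchJobs",
      "Effect": "Allow",
      "Action": [
        "s3:DescribeJob",
        "s3:UpdateJobPriority",
        "s3:UpdateJobStatus"
      ],
      "Resource": "arn:aws:s3:us-west-2:111122223333:job/*"
    }
  ]
}
```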

The following table maps Batch Operations job operations to their required policy actions.


| API operations | Policy actions | Description of policy actions | 
| --- | --- | --- | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteJobTagging.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteJobTagging.html)  |  (Required) `s3:DeleteJobTagging`  |  Required to remove tags from an existing S3 Batch Operations job.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DescribeJob.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DescribeJob.html)  |  (Required) `s3:DescribeJob`  |  Required to retrieve the configuration parameters and status for a Batch Operations job.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetJobTagging.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetJobTagging.html)  |  (Required) `s3:GetJobTagging`  |  Required to return the tag set of an existing S3 Batch Operations job.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_PutJobTagging.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_PutJobTagging.html)  |  (Required) `s3:PutJobTagging`  |  Required to put or replace tags on an existing S3 Batch Operations job.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_UpdateJobPriority.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_UpdateJobPriority.html)  |  (Required) `s3:UpdateJobPriority`  |  Required to update the priority of an existing job.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_UpdateJobStatus.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_UpdateJobStatus.html)  |  (Required) `s3:UpdateJobStatus`  |  Required to update the status for the specified job.  | 

## S3 Storage Lens configuration operations and permissions
<a name="using-with-s3-policy-actions-related-to-lens"></a>

S3 Storage Lens configuration operations are S3 API operations that operate on the `storagelensconfiguration` resource type. For more information about how to configure S3 Storage Lens configuration operations, see [Setting Amazon S3 Storage Lens permissions](storage_lens_iam_permissions.md).
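As an illustrative sketch, an identity-based policy that allows reading an S3 Storage Lens configuration might look like the following. The Region, account ID, and configuration ID are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadExampleStorageLensConfiguration",
      "Effect": "Allow",
      "Action": [
        "s3:GetStorageLensConfiguration",
        "s3:GetStorageLensConfigurationTagging"
      ],
      "Resource": "arn:aws:s3:us-east-1:111122223333:storage-lens/example-config-id"
    }
  ]
}
```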

The following table maps S3 Storage Lens configuration operations to their required policy actions.


| API operations | Policy actions | Description of policy actions | 
| --- | --- | --- | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteStorageLensConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteStorageLensConfiguration.html)  |  (Required) `s3:DeleteStorageLensConfiguration`  |  Required to delete the S3 Storage Lens configuration.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteStorageLensConfigurationTagging.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteStorageLensConfigurationTagging.html)  |  (Required) `s3:DeleteStorageLensConfigurationTagging`  |  Required to delete the S3 Storage Lens configuration tags.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetStorageLensConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetStorageLensConfiguration.html)  |  (Required) `s3:GetStorageLensConfiguration`  |  Required to get the S3 Storage Lens configuration.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetStorageLensConfigurationTagging.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetStorageLensConfigurationTagging.html)  |  (Required) `s3:GetStorageLensConfigurationTagging`  |  Required to get the tags of an S3 Storage Lens configuration.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_PutStorageLensConfigurationTagging.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_PutStorageLensConfigurationTagging.html)  |  (Required) `s3:PutStorageLensConfigurationTagging`  |  Required to put or replace tags on an existing S3 Storage Lens configuration.  | 

## S3 Storage Lens groups operations and permissions
<a name="using-with-s3-policy-actions-related-to-lens-groups"></a>

S3 Storage Lens groups operations are S3 API operations that operate on the `storagelensgroup` resource type. For more information about how to configure S3 Storage Lens groups permissions, see [Storage Lens groups permissions](storage-lens-groups.md#storage-lens-group-permissions).

The following table maps S3 Storage Lens groups operations to their required policy actions.


| API operations | Policy actions | Description of policy actions | 
| --- | --- | --- | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteStorageLensGroup.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteStorageLensGroup.html)  |  (Required) `s3:DeleteStorageLensGroup`  |  Required to delete an existing S3 Storage Lens group.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetStorageLensGroup.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetStorageLensGroup.html)  |  (Required) `s3:GetStorageLensGroup`  |  Required to retrieve the S3 Storage Lens group configuration details.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_UpdateStorageLensGroup.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_UpdateStorageLensGroup.html)  |  (Required) `s3:UpdateStorageLensGroup`  |  Required to update an existing S3 Storage Lens group.  | 
| [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_CreateStorageLensGroup.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_CreateStorageLensGroup.html) | (Required) `s3:CreateStorageLensGroup` | Required to create a new Storage Lens group. | 
| [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_CreateStorageLensGroup.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_CreateStorageLensGroup.html), [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_TagResource.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_TagResource.html) | (Required) `s3:CreateStorageLensGroup`, `s3:TagResource` | Required to create a new Storage Lens group with tags. | 
| [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListStorageLensGroups.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListStorageLensGroups.html) | (Required) `s3:ListStorageLensGroups` | Required to list all Storage Lens groups in your home Region. | 
| [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListTagsForResource.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListTagsForResource.html) | (Required) `s3:ListTagsForResource` | Required to list the tags that were added to your Storage Lens group. | 
| [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_TagResource.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_TagResource.html) | (Required) `s3:TagResource` | Required to add or update a Storage Lens group tag for an existing Storage Lens group. | 
| [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_UntagResource.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_UntagResource.html) | (Required) `s3:UntagResource` | Required to delete a tag from a Storage Lens group. | 

## S3 Access Grants instance operations and permissions
<a name="using-with-s3-policy-actions-related-to-s3ag-instances"></a>

S3 Access Grants instance operations are S3 API operations that operate on the `accessgrantsinstance` resource type. An S3 Access Grants instance is a logical container for your access grants. For more information on working with S3 Access Grants instances, see [Working with S3 Access Grants instances](access-grants-instance.md).

The following table maps S3 Access Grants instance operations to their required policy actions.


| API operations | Policy actions | Description of policy actions | 
| --- | --- | --- | 
| [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_AssociateAccessGrantsIdentityCenter.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_AssociateAccessGrantsIdentityCenter.html)  |  (Required) `s3:AssociateAccessGrantsIdentityCenter`  |  Required to associate an AWS IAM Identity Center instance with your S3 Access Grants instance, thus enabling you to create access grants for users and groups in your corporate identity directory. You must also have the following permissions:  `sso:CreateApplication`, `sso:PutApplicationGrant`, and `sso:PutApplicationAuthenticationMethod`.  | 
| [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_CreateAccessGrantsInstance.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_CreateAccessGrantsInstance.html)  |  (Required) `s3:CreateAccessGrantsInstance`  |  Required to create an S3 Access Grants instance (`accessgrantsinstance` resource), which is a container for your individual access grants. To associate an AWS IAM Identity Center instance with your S3 Access Grants instance, you must also have the `sso:DescribeInstance`, `sso:CreateApplication`, `sso:PutApplicationGrant`, and `sso:PutApplicationAuthenticationMethod` permissions.  | 
| [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteAccessGrantsInstance.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteAccessGrantsInstance.html)  |  (Required) `s3:DeleteAccessGrantsInstance`  |  Required to delete an S3 Access Grants instance (`accessgrantsinstance` resource) from an AWS Region in your account.   | 
| [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteAccessGrantsInstanceResourcePolicy.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteAccessGrantsInstanceResourcePolicy.html)  |  (Required) `s3:DeleteAccessGrantsInstanceResourcePolicy`  |  Required to delete a resource policy for your S3 Access Grants instance.   | 
| [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DissociateAccessGrantsIdentityCenter.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DissociateAccessGrantsIdentityCenter.html)  |  (Required) `s3:DissociateAccessGrantsIdentityCenter`  |  Required to disassociate an AWS IAM Identity Center instance from your S3 Access Grants instance. You must also have the `sso:DeleteApplication` permission.  | 
| [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetAccessGrantsInstance.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetAccessGrantsInstance.html)  |  (Required) `s3:GetAccessGrantsInstance`  |  Required to retrieve the S3 Access Grants instance for an AWS Region in your account.  | 
| [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetAccessGrantsInstanceForPrefix.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetAccessGrantsInstanceForPrefix.html)  |  (Required) `s3:GetAccessGrantsInstanceForPrefix`  |  Required to retrieve the S3 Access Grants instance that contains a particular prefix.  | 
| [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetAccessGrantsInstanceResourcePolicy.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetAccessGrantsInstanceResourcePolicy.html)  |  (Required) `s3:GetAccessGrantsInstanceResourcePolicy`  |  Required to return the resource policy of your S3 Access Grants instance.  | 
| [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListAccessGrantsInstances.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListAccessGrantsInstances.html)  |  (Required) `s3:ListAccessGrantsInstances`  |  Required to return a list of the S3 Access Grants instances in your account.  | 
| [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_PutAccessGrantsInstanceResourcePolicy.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_PutAccessGrantsInstanceResourcePolicy.html)  |  (Required) `s3:PutAccessGrantsInstanceResourcePolicy`  |  Required to update the resource policy of the S3 Access Grants instance.  | 

## S3 Access Grants location operations and permissions
<a name="using-with-s3-policy-actions-related-to-s3ag-locations"></a>

S3 Access Grants location operations are S3 API operations that operate on the `accessgrantslocation` resource type. For more information on working with S3 Access Grants locations, see [Working with S3 Access Grants locations](access-grants-location.md).

The following table maps S3 Access Grants location operations to their required policy actions.


| API operations | Policy actions | Description of policy actions | 
| --- | --- | --- | 
| [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_CreateAccessGrantsLocation.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_CreateAccessGrantsLocation.html)  |  (Required) `s3:CreateAccessGrantsLocation`  |  Required to register a location in your S3 Access Grants instance (create an `accessgrantslocation` resource). You must also have the following permission for the specified IAM role:  `iam:PassRole`  | 
| [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteAccessGrantsLocation.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteAccessGrantsLocation.html)  |  (Required) `s3:DeleteAccessGrantsLocation`  |  Required to remove a registered location from your S3 Access Grants instance.   | 
| [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetAccessGrantsLocation.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetAccessGrantsLocation.html)  |  (Required) `s3:GetAccessGrantsLocation`  |  Required to retrieve the details of a particular location registered in your S3 Access Grants instance.  | 
| [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListAccessGrantsLocations.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListAccessGrantsLocations.html)  |  (Required) `s3:ListAccessGrantsLocations`  |  Required to return a list of the locations registered in your S3 Access Grants instance.  | 
| [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_UpdateAccessGrantsLocation.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_UpdateAccessGrantsLocation.html)  |  (Required) `s3:UpdateAccessGrantsLocation`  |  Required to update the IAM role of a registered location in your S3 Access Grants instance.  | 

## S3 Access Grants grant operations and permissions
<a name="using-with-s3-policy-actions-related-to-s3ag-grants"></a>

S3 Access Grants grant operations are S3 API operations that operate on the `accessgrant` resource type. For more information on working with individual grants using S3 Access Grants, see [Working with grants in S3 Access Grants](access-grants-grant.md).

The following table maps S3 Access Grants grant operations to their required policy actions.


| API operations | Policy actions | Description of policy actions | 
| --- | --- | --- | 
| [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_CreateAccessGrant.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_CreateAccessGrant.html)  |  (Required) `s3:CreateAccessGrant`  |  Required to create an individual grant (`accessgrant` resource) for a user or group in your S3 Access Grants instance. You must also have the following permissions: for any directory identity, `sso:DescribeInstance` and `sso:DescribeApplication`; for directory users, `identitystore:DescribeUser`.  | 
| [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteAccessGrant.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteAccessGrant.html)  |  (Required) `s3:DeleteAccessGrant`  |  Required to delete an individual access grant (`accessgrant` resource) from your S3 Access Grants instance.   | 
| [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetAccessGrant.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetAccessGrant.html)  |  (Required) `s3:GetAccessGrant`  |  Required to get the details about an individual access grant in your S3 Access Grants instance.   | 
| [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListAccessGrants.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListAccessGrants.html)  |  (Required) `s3:ListAccessGrants`  |  Required to return a list of individual access grants in your S3 Access Grants instance.   | 
| [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListCallerAccessGrants.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListCallerAccessGrants.html)  |  (Required) `s3:ListCallerAccessGrants`  |  Required to list the access grants that grant the caller access to Amazon S3 data through S3 Access Grants.   | 
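
As the `CreateAccessGrant` row above notes, creating grants for directory identities requires IAM Identity Center and Identity Store permissions alongside `s3:CreateAccessGrant`. The following is a minimal sketch of such an identity-based policy as a Python dictionary; the statement ID is a placeholder, and scoping `Resource` to specific ARNs instead of `"*"` is recommended where your setup allows it:

```python
import json

# Hypothetical identity-based policy for creating grants for directory users.
# "Resource": "*" is used here for brevity; scope it down in real policies.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "CreateGrantsForDirectoryUsers",
            "Effect": "Allow",
            "Action": [
                "s3:CreateAccessGrant",
                "sso:DescribeInstance",
                "sso:DescribeApplication",
                "identitystore:DescribeUser",
            ],
            "Resource": "*",
        }
    ],
}

print(json.dumps(policy, indent=4))
```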

## Account operations and permissions
<a name="using-with-s3-policy-actions-related-to-accounts"></a>

Account operations are S3 API operations that operate on the account level. Account isn't a resource type defined by Amazon S3. You must specify S3 policy actions for account operations in IAM identity-based policies, not in bucket policies.

In the policies, the `Resource` element must be `"*"`. For more information about example policies, see [Account operations](security_iam_service-with-iam.md#using-with-s3-actions-related-to-accounts).
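
Because account operations aren't tied to an S3 resource type, the `Resource` element of the identity-based policy is simply `"*"`. A minimal sketch (the statement ID is a placeholder) allowing a user to list buckets and manage the account-level block public access configuration:

```python
import json

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AccountLevelS3Actions",
            "Effect": "Allow",
            "Action": [
                "s3:ListAllMyBuckets",
                "s3:PutAccountPublicAccessBlock",
            ],
            # Account operations have no bucket or object ARN to scope to,
            # so the Resource element must be "*".
            "Resource": "*",
        }
    ],
}

print(json.dumps(policy, indent=4))
```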

The following is the mapping of account operations and required policy actions. 


| API operations | Policy actions | Description of policy actions | 
| --- | --- | --- | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_CreateJob.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_CreateJob.html)  |  (Required) `s3:CreateJob`  |  Required to create a new S3 Batch Operations job.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_CreateStorageLensGroup.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_CreateStorageLensGroup.html)  |  (Required) `s3:CreateStorageLensGroup`  |  Required to create a new S3 Storage Lens group and associate it with the specified AWS account ID.  | 
|    |  (Conditionally required) `s3:TagResource`  |  Required if you want to create an S3 Storage Lens group with AWS resource tags.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeletePublicAccessBlock.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeletePublicAccessBlock.html) (Account-level)  |  (Required) `s3:PutAccountPublicAccessBlock`  |  Required to remove the block public access configuration from an AWS account.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetAccessPoint.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetAccessPoint.html)  |  (Required) `s3:GetAccessPoint`  |  Required to retrieve configuration information about the specified access point.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetPublicAccessBlock.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetPublicAccessBlock.html) (Account-level)  |  (Required) `s3:GetAccountPublicAccessBlock`  |  Required to retrieve the block public access configuration for an AWS account.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListAccessPoints.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListAccessPoints.html)  |  (Required) `s3:ListAccessPoints`  |  Required to list access points of an S3 bucket that are owned by an AWS account.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListAccessPointsForObjectLambda.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListAccessPointsForObjectLambda.html)  |  (Required) `s3:ListAccessPointsForObjectLambda`  |  Required to list the Object Lambda Access Points.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListBuckets.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListBuckets.html)  |  (Required) `s3:ListAllMyBuckets`  |  Required to return a list of all buckets that are owned by the authenticated sender of the request.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListJobs.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListJobs.html)  |  (Required) `s3:ListJobs`  |  Required to list current jobs and jobs that have ended recently.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListMultiRegionAccessPoints.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListMultiRegionAccessPoints.html)  |  (Required) `s3:ListMultiRegionAccessPoints`  |  Required to return a list of the Multi-Region Access Points that are currently associated with the specified AWS account.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListStorageLensConfigurations.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListStorageLensConfigurations.html)  |  (Required) `s3:ListStorageLensConfigurations`  |  Required to get a list of S3 Storage Lens configurations for an AWS account.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListStorageLensGroups.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListStorageLensGroups.html)  |  (Required) `s3:ListStorageLensGroups`  |  Required to list all the S3 Storage Lens groups in the specified home AWS Region.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutPublicAccessBlock.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutPublicAccessBlock.html) (Account-level)  |  (Required) `s3:PutAccountPublicAccessBlock`  |  Required to create or modify the block public access configuration for an AWS account.  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_PutStorageLensConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_PutStorageLensConfiguration.html)  |  (Required) `s3:PutStorageLensConfiguration`  |  Required to put an S3 Storage Lens configuration.  | 

# Policies and permissions in Amazon S3
<a name="access-policy-language-overview"></a>

This page provides an overview of bucket and user policies in Amazon S3 and describes the basic elements of an AWS Identity and Access Management (IAM) policy. Each listed element links to more details about that element and examples of how to use it. 

For a complete list of Amazon S3 actions, resources, and conditions, see [ Actions, resources, and condition keys for Amazon S3](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazons3.html) in the *Service Authorization Reference*.

For more information about the permissions to S3 API operations by S3 resource types, see [Required permissions for Amazon S3 API operations](using-with-s3-policy-actions.md).

In its most basic sense, a policy contains the following elements:
+ [Resource](security_iam_service-with-iam.md#security_iam_service-with-iam-id-based-policies-resources) – The Amazon S3 bucket, object, access point, or job that the policy applies to. Use the Amazon Resource Name (ARN) of the bucket, object, access point, or job to identify the resource. 

  An example for bucket-level operations:

  `"Resource": "arn:aws:s3:::bucket_name"`

  Examples for object-level operations: 
  + `"Resource": "arn:aws:s3:::bucket_name/*"` for all objects in the bucket.
  + `"Resource": "arn:aws:s3:::bucket_name/prefix/*"` for objects under a certain prefix in the bucket.

  For more information, see [Policy resources for Amazon S3](security_iam_service-with-iam.md#security_iam_service-with-iam-id-based-policies-resources).
+ [Actions](security_iam_service-with-iam.md#security_iam_service-with-iam-id-based-policies-actions) – For each resource, Amazon S3 supports a set of operations. You identify resource operations that you will allow (or deny) by using action keywords. 

  For example, the `s3:ListBucket` permission allows the user to use the Amazon S3 [https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html) operation. (The `s3:ListBucket` permission is a case where the action name doesn't map directly to the operation name.) For more information about using Amazon S3 actions, see [Policy actions for Amazon S3](security_iam_service-with-iam.md#security_iam_service-with-iam-id-based-policies-actions). For a complete list of Amazon S3 actions, see [Actions](https://docs.aws.amazon.com/AmazonS3/latest/API/API_Operations.html) in the *Amazon Simple Storage Service API Reference*.
+ [Effect](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_effect.html) – What the effect will be when the user requests the specific action—this can be either `Allow` or `Deny`. 

  If you don't explicitly grant access to (allow) a resource, access is implicitly denied. You can also explicitly deny access to a resource. You might do this to make sure that a user can't access the resource, even if a different policy grants access. For more information, see [IAM JSON Policy Elements: Effect](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_effect.html) in the *IAM User Guide*.
+ [Principal](security_iam_service-with-iam.md#s3-bucket-user-policy-specifying-principal-intro) – The account or user who is allowed access to the actions and resources in the statement. In a bucket policy, the principal is the user, account, service, or other entity that is the recipient of this permission. For more information, see [Principals for bucket policies](security_iam_service-with-iam.md#s3-bucket-user-policy-specifying-principal-intro).
+ [Condition](amazon-s3-policy-keys.md) – Conditions for when a policy is in effect. You can use AWS-wide keys and Amazon S3-specific keys to specify conditions in an Amazon S3 access policy. For more information, see [Bucket policy examples using condition keys](amazon-s3-policy-keys.md).

The following example bucket policy shows the `Effect`, `Principal`, `Action`, and `Resource` elements. This policy allows `Akua`, a user in account `123456789012`, `s3:GetObject`, `s3:GetBucketLocation`, and `s3:ListBucket` Amazon S3 permissions on the `amzn-s3-demo-bucket1` bucket.

------
#### [ JSON ]


```
{
    "Version":"2012-10-17",
    "Id": "ExamplePolicy01",
    "Statement": [
        {
            "Sid": "ExampleStatement01",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::123456789012:user/Akua"
            },
            "Action": [
                "s3:GetObject",
                "s3:GetBucketLocation",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::amzn-s3-demo-bucket1/*",
                "arn:aws:s3:::amzn-s3-demo-bucket1"
            ]
        }
    ]
}
```

------

For complete policy language information, see [Policies and permissions in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html) and [IAM JSON policy reference](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies.html) in the *IAM User Guide*.

## Permission delegation
<a name="permission-delegation"></a>

If an AWS account owns a resource, it can grant permissions on that resource to another AWS account. That account can then delegate those permissions, or a subset of them, to users in the account. This is referred to as *permission delegation*. However, an account that receives permissions from another account can't delegate those permissions cross-account to a third AWS account. 

## Amazon S3 bucket and object ownership
<a name="about-resource-owner"></a>

Buckets and objects are Amazon S3 resources. By default, only the resource owner can access these resources. The resource owner refers to the AWS account that creates the resource. For example: 
+ The AWS account that you use to create buckets and upload objects owns those resources. 
+  If you upload an object using AWS Identity and Access Management (IAM) user or role credentials, the AWS account that the user or role belongs to owns the object. 
+ A bucket owner can grant cross-account permissions to another AWS account (or users in another account) to upload objects. In this case, the AWS account that uploads objects owns those objects. The bucket owner doesn't have permissions on the objects that other accounts own, with the following exceptions:
  + The bucket owner pays the bills. The bucket owner can deny access to any objects, or delete any objects in the bucket, regardless of who owns them. 
  + The bucket owner can archive any objects or restore archived objects regardless of who owns them. Archival refers to the storage class used to store the objects. For more information, see [Managing the lifecycle of objects](object-lifecycle-mgmt.md).

### Ownership and request authentication
<a name="about-resource-owner-requests"></a>

All requests to a bucket are either authenticated or unauthenticated. Authenticated requests must include a signature value that authenticates the request sender, and unauthenticated requests do not. For more information about request authentication, see [Making requests ](https://docs.aws.amazon.com/AmazonS3/latest/API/MakingRequests.html) in the *Amazon S3 API Reference*.

A bucket owner can allow unauthenticated requests. For example, unauthenticated [PutObject](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPUT.html) requests are allowed when a bucket has a public bucket policy, or when a bucket ACL grants `WRITE` or `FULL_CONTROL` access to the `All Users` group or the anonymous user specifically. For more information about public bucket policies and public access control lists (ACLs), see [The meaning of "public"](access-control-block-public-access.md#access-control-block-public-access-policy-status).

All unauthenticated requests are made by the anonymous user. This user is represented in ACLs by the specific canonical user ID `65a011a29cdf8ec533ec3d1ccaae921c`. If an object is uploaded to a bucket through an unauthenticated request, the anonymous user owns the object. The default object ACL grants `FULL_CONTROL` to the anonymous user as the object's owner. Therefore, Amazon S3 allows unauthenticated requests to retrieve the object or modify its ACL. 

To prevent objects from being modified by the anonymous user, we recommend that you do not implement bucket policies that allow anonymous public writes to your bucket or use ACLs that allow the anonymous user write access to your bucket. You can enforce this recommended behavior by using Amazon S3 Block Public Access. 
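
The block public access feature mentioned above is a set of four boolean settings; enforcing the recommendation typically means enabling all of them. The following sketch shows the configuration payload as a Python dictionary (the field names follow the `PutPublicAccessBlock` API):

```python
import json

# The four Block Public Access settings. Enabling all of them blocks new
# public ACLs, ignores existing public ACLs, blocks public bucket policies,
# and restricts access to buckets with public policies.
public_access_block = {
    "BlockPublicAcls": True,
    "IgnorePublicAcls": True,
    "BlockPublicPolicy": True,
    "RestrictPublicBuckets": True,
}

print(json.dumps(public_access_block, indent=4))
```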

For more information about blocking public access, see [Blocking public access to your Amazon S3 storage](access-control-block-public-access.md). For more information about ACLs, see [Access control list (ACL) overview](acl-overview.md).

**Important**  
We recommend that you don't use the AWS account root user credentials to make authenticated requests. Instead, create an IAM role and grant that role full access. We refer to users with this role as *administrator users*. You can use credentials assigned to the administrator role, instead of AWS account root user credentials, to interact with AWS and perform tasks, such as creating a bucket, creating users, and granting permissions. For more information, see [AWS security credentials](https://docs.aws.amazon.com/general/latest/gr/root-vs-iam.html) and [Security best practices in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html) in the *IAM User Guide*.







# Bucket policies for Amazon S3
<a name="bucket-policies"></a>

A bucket policy is a resource-based policy that you can use to grant access permissions to your Amazon S3 bucket and the objects in it. Only the bucket owner can associate a policy with a bucket. The permissions attached to the bucket apply to all of the objects in the bucket that are owned by the bucket owner. These permissions don't apply to objects that are owned by other AWS accounts.

S3 Object Ownership is an Amazon S3 bucket-level setting that you can use to control ownership of objects uploaded to your bucket and to disable or enable access control lists (ACLs). By default, Object Ownership is set to the Bucket owner enforced setting and all ACLs are disabled. The bucket owner owns all the objects in the bucket and manages access to data exclusively using policies.

Bucket policies use JSON-based AWS Identity and Access Management (IAM) policy language. You can use bucket policies to add or deny permissions for the objects in a bucket. Bucket policies can allow or deny requests based on the elements in the policy. These elements include the requester, S3 actions, resources, and aspects or conditions of the request (such as the IP address that's used to make the request). 

For example, you can create a bucket policy that does the following: 
+ Grants other accounts cross-account permissions to upload objects to your S3 bucket
+ Makes sure that you, the bucket owner, have full control of the uploaded objects
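
On buckets where ACLs are still enabled, the second point is traditionally enforced by denying `s3:PutObject` unless the uploader sets the `bucket-owner-full-control` canned ACL. The following is a hedged sketch of that pattern; the account ID and bucket name are placeholders, and with the default Bucket owner enforced setting this condition isn't needed because the bucket owner automatically owns all objects:

```python
import json

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCrossAccountPut",
            "Effect": "Allow",
            # Placeholder account ID for the uploading account.
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*",
        },
        {
            "Sid": "RequireOwnerFullControl",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*",
            # Deny uploads that don't hand full control to the bucket owner.
            "Condition": {
                "StringNotEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}
            },
        },
    ],
}

print(json.dumps(policy, indent=4))
```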

For more information, see [Examples of Amazon S3 bucket policies](example-bucket-policies.md).

**Important**  
You can't use a bucket policy to prevent deletions or transitions by an [S3 Lifecycle](object-lifecycle-mgmt.md) rule. For example, even if your bucket policy denies all actions for all principals, your S3 Lifecycle configuration still functions as normal.

The topics in this section provide examples and show you how to add a bucket policy in the S3 console. For information about identity-based policies, see [Identity-based policies for Amazon S3](security_iam_id-based-policy-examples.md). For information about bucket policy language, see [Policies and permissions in Amazon S3](access-policy-language-overview.md).

For more information about the permissions to S3 API operations by S3 resource types, see [Required permissions for Amazon S3 API operations](using-with-s3-policy-actions.md).

**Topics**
+ [

# Adding a bucket policy by using the Amazon S3 console
](add-bucket-policy.md)
+ [

# Controlling access from VPC endpoints with bucket policies
](example-bucket-policies-vpc-endpoint.md)
+ [

# Examples of Amazon S3 bucket policies
](example-bucket-policies.md)
+ [

# Bucket policy examples using condition keys
](amazon-s3-policy-keys.md)

# Adding a bucket policy by using the Amazon S3 console
<a name="add-bucket-policy"></a>

You can use the [AWS Policy Generator](https://aws.amazon.com/blogs/aws/aws-policy-generator/) and the Amazon S3 console to add a new bucket policy or edit an existing bucket policy. A bucket policy is a resource-based AWS Identity and Access Management (IAM) policy. You add a bucket policy to a bucket to grant other AWS accounts or IAM users access permissions for the bucket and the objects in it. Object permissions apply only to the objects that the bucket owner creates. For more information about bucket policies, see [Identity and Access Management for Amazon S3](security-iam.md).

Make sure to resolve security warnings, errors, general warnings, and suggestions from AWS Identity and Access Management Access Analyzer before you save your policy. IAM Access Analyzer runs policy checks to validate your policy against IAM [policy grammar](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_grammar.html) and [best practices](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html). These checks generate findings and provide actionable recommendations to help you author policies that are functional and conform to security best practices. To learn more about validating policies by using IAM Access Analyzer, see [IAM Access Analyzer policy validation](https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-policy-validation.html) in the *IAM User Guide*. To view a list of the warnings, errors, and suggestions that are returned by IAM Access Analyzer, see [IAM Access Analyzer policy check reference](https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-reference-policy-checks.html).

For guidance on troubleshooting errors with a policy, see [Troubleshoot access denied (403 Forbidden) errors in Amazon S3](troubleshoot-403-errors.md).

**To create or edit a bucket policy**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **General purpose buckets** or **Directory buckets**.

1. In the list of buckets, choose the name of the bucket that you want to create a bucket policy for or whose bucket policy you want to edit.

1. Choose the **Permissions** tab.

1. Under **Bucket policy**, choose **Edit**. The **Edit bucket policy** page appears.

1. On the **Edit bucket policy** page, do one of the following: 
   + To see examples of bucket policies, choose **Policy examples**. Or see [Examples of Amazon S3 bucket policies](example-bucket-policies.md) in the *Amazon S3 User Guide*.
   + To generate a policy automatically, or edit the JSON in the **Policy** section, choose **Policy generator**.

   If you choose **Policy generator**, the AWS Policy Generator opens in a new window.

   1. On the **AWS Policy Generator** page, for **Select Type of Policy**, choose **S3 Bucket Policy**.

   1. Add a statement by entering the information in the provided fields, and then choose **Add Statement**. Repeat this step for as many statements as you would like to add. For more information about these fields, see the [IAM JSON policy elements reference](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements.html) in the *IAM User Guide*. 
**Note**  
For your convenience, the **Edit bucket policy** page displays the **Bucket ARN** (Amazon Resource Name) of the current bucket above the **Policy** text field. You can copy this ARN for use in the statements on the **AWS Policy Generator** page. 

   1. After you finish adding statements, choose **Generate Policy**.

   1. Copy the generated policy text, choose **Close**, and return to the **Edit bucket policy** page in the Amazon S3 console.

1. In the **Policy** box, edit the existing policy or paste the bucket policy from the AWS Policy Generator. Make sure to resolve security warnings, errors, general warnings, and suggestions before you save your policy.
**Note**  
Bucket policies are limited to 20 KB in size.

1. (Optional) Choose **Preview external access** in the lower-right corner to preview how your new policy affects public and cross-account access to your resource. Before you save your policy, you can check whether it introduces new IAM Access Analyzer findings or resolves existing findings. If you don’t see an active analyzer, choose **Go to Access Analyzer** to [ create an account analyzer](https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-getting-started.html#access-analyzer-enabling) in IAM Access Analyzer. For more information, see [Preview access](https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-access-preview.html) in the *IAM User Guide*. 

1. Choose **Save changes**, which returns you to the **Permissions** tab. 
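
As the note in the procedure above states, bucket policies are limited to 20 KB. Before saving, you can sanity-check the serialized size of a policy yourself; a minimal sketch (the policy shown is an example placeholder, and the compact serialization is what is measured here):

```python
import json

MAX_POLICY_BYTES = 20 * 1024  # the 20 KB bucket policy limit

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ExampleStatement01",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::123456789012:user/Akua"},
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::amzn-s3-demo-bucket1",
                "arn:aws:s3:::amzn-s3-demo-bucket1/*",
            ],
        }
    ],
}

# Serialize compactly before measuring, so whitespace doesn't inflate the count.
serialized = json.dumps(policy, separators=(",", ":"))
size = len(serialized.encode("utf-8"))
assert size <= MAX_POLICY_BYTES, f"policy is {size} bytes, over the 20 KB limit"
print(f"{size} bytes: within the 20 KB limit")
```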

# Controlling access from VPC endpoints with bucket policies
<a name="example-bucket-policies-vpc-endpoint"></a>

You can use Amazon S3 bucket policies to control access to buckets from specific virtual private cloud (VPC) endpoints or specific VPCs. This section contains example bucket policies that you can use to control Amazon S3 bucket access from VPC endpoints. To learn how to set up VPC endpoints, see [VPC Endpoints](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints.html) in the *VPC User Guide*. 

A VPC enables you to launch AWS resources into a virtual network that you define. A VPC endpoint enables you to create a private connection between your VPC and another AWS service. This private connection doesn't require access over the internet, through a virtual private network (VPN) connection, through a NAT instance, or through Direct Connect. 

A VPC endpoint for Amazon S3 is a logical entity within a VPC that allows connectivity only to Amazon S3. The VPC endpoint routes requests to Amazon S3 and routes responses back to the VPC. VPC endpoints change only how requests are routed. Amazon S3 public endpoints and DNS names will continue to work with VPC endpoints. For important information about using VPC endpoints with Amazon S3, see [Gateway endpoints](https://docs.aws.amazon.com/vpc/latest/userguide/vpce-gateway.html) and [Gateway endpoints for Amazon S3](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints-s3.html) in the *VPC User Guide*. 

VPC endpoints for Amazon S3 provide two ways to control access to your Amazon S3 data: 
+ You can control the requests, users, or groups that are allowed through a specific VPC endpoint. For information about this type of access control, see [Controlling access to VPC endpoints using endpoint policies](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints-access.html) in the *VPC User Guide*.
+ You can control which VPCs or VPC endpoints have access to your buckets by using Amazon S3 bucket policies. For examples of this type of bucket policy access control, see the following topics on restricting access.

**Topics**
+ [

## Restricting access to a specific VPC endpoint
](#example-bucket-policies-restrict-accesss-vpc-endpoint)
+ [

## Restricting access to a specific VPC
](#example-bucket-policies-restrict-access-vpc)
+ [

## Restricting access to an IPv6 VPC endpoint
](#example-bucket-policies-ipv6-vpc-endpoint)

**Important**  
When applying the Amazon S3 bucket policies for VPC endpoints described in this section, you might block your access to the bucket unintentionally. Bucket permissions that are intended to specifically limit bucket access to connections originating from your VPC endpoint can block all connections to the bucket. For information about how to fix this issue, see [How do I fix my bucket policy when it has the wrong VPC or VPC endpoint ID?](https://aws.amazon.com/premiumsupport/knowledge-center/s3-regain-access/) in the *AWS Support Knowledge Center*.

## Restricting access to a specific VPC endpoint
<a name="example-bucket-policies-restrict-accesss-vpc-endpoint"></a>

The following is an example of an Amazon S3 bucket policy that restricts access to a specific bucket, `amzn-s3-demo-bucket`, to only the VPC endpoint with the ID `vpce-0abcdef1234567890`. If the specified endpoint isn't used, the policy denies all access to the bucket. The `aws:SourceVpce` condition specifies the endpoint and doesn't require an Amazon Resource Name (ARN) for the VPC endpoint resource, only the VPC endpoint ID. For more information about using conditions in a policy, see [Bucket policy examples using condition keys](amazon-s3-policy-keys.md).

**Important**  
Before using the following example policy, replace the VPC endpoint ID with an appropriate value for your use case. Otherwise, you won't be able to access your bucket.
This policy disables console access to the specified bucket because console requests don't originate from the specified VPC endpoint.

------
#### [ JSON ]


```
{
   "Version":"2012-10-17",
   "Id": "Policy1415115909152",
   "Statement": [
     {
       "Sid": "Access-to-specific-VPCE-only",
       "Principal": "*",
       "Action": "s3:*",
       "Effect": "Deny",
       "Resource": ["arn:aws:s3:::amzn-s3-demo-bucket",
                    "arn:aws:s3:::amzn-s3-demo-bucket/*"],
       "Condition": {
         "StringNotEquals": {
           "aws:SourceVpce": "vpce-0abcdef1234567890"
         }
       }
     }
   ]
}
```

------
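
To see how the `Deny` plus `StringNotEquals` pairing in the policy above behaves, here's a toy evaluator. It models only this one condition, not the real IAM policy engine: a request is denied whenever its `aws:SourceVpce` value differs from the allowed endpoint ID, including when the key is absent entirely (which is why console access is blocked):

```python
ALLOWED_VPCE = "vpce-0abcdef1234567890"

def denied_by_vpce_policy(request_context: dict) -> bool:
    """Toy model of the Deny statement's StringNotEquals on aws:SourceVpce.

    A request with no aws:SourceVpce key (for example, one made over the
    public endpoint or from the console) has no matching value, so the
    StringNotEquals condition is met and the Deny applies.
    """
    source_vpce = request_context.get("aws:SourceVpce")
    return source_vpce != ALLOWED_VPCE

# From the allowed endpoint: the Deny does not apply.
assert not denied_by_vpce_policy({"aws:SourceVpce": "vpce-0abcdef1234567890"})
# From a different endpoint, or from outside any endpoint: denied.
assert denied_by_vpce_policy({"aws:SourceVpce": "vpce-9a8b7c6d5e4f3a2b1"})
assert denied_by_vpce_policy({})
```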

## Restricting access to a specific VPC
<a name="example-bucket-policies-restrict-access-vpc"></a>

You can create a bucket policy that restricts access to a specific VPC by using the `aws:SourceVpc` condition. This approach is useful if you have multiple VPC endpoints configured in the same VPC and you want to manage access to your Amazon S3 buckets for all of your endpoints. The following is an example of a policy that denies access to `amzn-s3-demo-bucket` and its objects from anyone outside the VPC `vpc-1a2b3c4d`. If the specified VPC isn't used, the policy denies all access to the bucket. This statement doesn't grant access to the bucket. To grant access, you must add a separate `Allow` statement. The `aws:SourceVpc` condition key doesn't require an ARN for the VPC resource, only the VPC ID.

**Important**  
Before using the following example policy, replace the VPC ID with an appropriate value for your use case. Otherwise, you won't be able to access your bucket.
This policy disables console access to the specified bucket because console requests don't originate from the specified VPC.

------
#### [ JSON ]


```
{
   "Version":"2012-10-17",
   "Id": "Policy1415115909153",
   "Statement": [
     {
       "Sid": "Access-to-specific-VPC-only",
       "Principal": "*",
       "Action": "s3:*",
       "Effect": "Deny",
       "Resource": ["arn:aws:s3:::amzn-s3-demo-bucket",
                    "arn:aws:s3:::amzn-s3-demo-bucket/*"],
       "Condition": {
         "StringNotEquals": {
           "aws:SourceVpc": "vpc-1a2b3c4d"
         }
       }
     }
   ]
}
```

------

## Restricting access to an IPv6 VPC endpoint
<a name="example-bucket-policies-ipv6-vpc-endpoint"></a>

The following example policy denies all Amazon S3 (`s3:`) actions on the *amzn-s3-demo-bucket* bucket and its objects unless the request originates from the specified VPC (`vpc-0a1b2c3d4e5f6g4h2`) and the source IP address falls within the specified IPv6 CIDR block (`2001:db8::/32`). The `aws:SourceVpc` and `aws:VpcSourceIp` condition keys specify the VPC and the address range.

```
{
   "Version": "2012-10-17",
   "Id": "Policy1415115909154",
   "Statement": [
     {
       "Sid": "AccessSpecificIPv6VPCEOnly",
       "Action": "s3:*",
       "Effect": "Deny",
       "Resource": ["arn:aws:s3:::amzn-s3-demo-bucket",
                    "arn:aws:s3:::amzn-s3-demo-bucket/*"],
       "Condition": {
         "StringNotEquals": {
           "aws:SourceVpc": "vpc-0a1b2c3d4e5f6g4h2"
         },
         "NotIpAddress": {
           "aws:VpcSourceIp": "2001:db8::/32"
         }
       }
     }
   ]
}
```
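
The `NotIpAddress` condition in the policy above is a CIDR-membership test: the Deny applies when the source address falls outside `2001:db8::/32`. A standalone illustration of that test using Python's `ipaddress` module (the addresses are documentation-range placeholders, and this models only the one condition, not full policy evaluation):

```python
import ipaddress

ALLOWED_BLOCK = ipaddress.ip_network("2001:db8::/32")

def outside_allowed_block(source_ip: str) -> bool:
    """True when the NotIpAddress condition would match, that is, when the
    source address is NOT inside the allowed IPv6 CIDR block."""
    return ipaddress.ip_address(source_ip) not in ALLOWED_BLOCK

assert not outside_allowed_block("2001:db8::1")   # inside the block: Deny's IP condition not met
assert outside_allowed_block("2001:db9::1")       # outside the block: Deny's IP condition met
```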

For information on how to restrict access to your bucket based on specific IPs or VPCs, see [How do I allow only specific VPC endpoints or IP addresses to access my Amazon S3 bucket?](https://repost.aws/knowledge-center/block-s3-traffic-vpc-ip) in the AWS re:Post Knowledge Center.

# Examples of Amazon S3 bucket policies
<a name="example-bucket-policies"></a>

With Amazon S3 bucket policies, you can secure access to objects in your buckets, so that only users with the appropriate permissions can access them. You can even prevent authenticated users without the appropriate permissions from accessing your Amazon S3 resources.

This section presents examples of typical use cases for bucket policies. These sample policies use `amzn-s3-demo-bucket` as the resource value. To test these policies, replace the `user input placeholders` with your own information (such as your bucket name). 

To grant or deny permissions to a set of objects, you can use wildcard characters (`*`) in Amazon Resource Names (ARNs) and other values. For example, you can control access to groups of objects that begin with a common [prefix](https://docs.aws.amazon.com/general/latest/gr/glos-chap.html#keyprefix) or end with a specific extension, such as `.html`. 
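As an illustration, a statement's `Resource` element might target only objects under a shared prefix, or only objects with a given extension. The following sketch uses placeholder bucket and prefix names:

```
"Resource": [
    "arn:aws:s3:::amzn-s3-demo-bucket/reports/*",
    "arn:aws:s3:::amzn-s3-demo-bucket/*.html"
]
```

The first ARN matches every object whose key begins with the `reports/` prefix; the second matches any object whose key ends in `.html`, anywhere in the bucket.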

For more information about AWS Identity and Access Management (IAM) policy language, see [Policies and permissions in Amazon S3](access-policy-language-overview.md).

For more information about the permissions to S3 API operations by S3 resource types, see [Required permissions for Amazon S3 API operations](using-with-s3-policy-actions.md).

**Note**  
When testing permissions by using the Amazon S3 console, you must grant additional permissions that the console requires—`s3:ListAllMyBuckets`, `s3:GetBucketLocation`, and `s3:ListBucket`. For an example walkthrough that grants permissions to users and tests those permissions by using the console, see [Controlling access to a bucket with user policies](walkthrough1.md).

Additional resources for creating bucket policies include the following:
+ For a list of the IAM policy actions, resources, and condition keys that you can use when creating a bucket policy, see [ Actions, resources, and condition keys for Amazon S3](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazons3.html) in the *Service Authorization Reference*.
+ For guidance on creating your S3 policy, see [Adding a bucket policy by using the Amazon S3 console](add-bucket-policy.md).
+ To troubleshoot errors with a policy, see [Troubleshoot access denied (403 Forbidden) errors in Amazon S3](troubleshoot-403-errors.md).

If you're having trouble adding or updating a policy, see [Why do I get the error "Invalid principal in policy" when I try to update my Amazon S3 bucket policy?](https://repost.aws/knowledge-center/s3-invalid-principal-in-policy-error) in the AWS re:Post Knowledge Center.

**Topics**
+ [

## Granting read-only permission to a public anonymous user
](#example-bucket-policies-anonymous-user)
+ [

## Requiring encryption
](#example-bucket-policies-encryption)
+ [

## Managing buckets using canned ACLs
](#example-bucket-policies-public-access)
+ [

## Managing object access with object tagging
](#example-bucket-policies-object-tags)
+ [

## Managing object access by using global condition keys
](#example-bucket-policies-global-condition-keys)
+ [

## Managing access based on HTTP or HTTPS requests
](#example-bucket-policies-HTTP-HTTPS)
+ [

## Managing user access to specific folders
](#example-bucket-policies-folders)
+ [

## Managing access for access logs
](#example-bucket-policies-access-logs)
+ [

## Managing access to an Amazon CloudFront OAI
](#example-bucket-policies-cloudfront)
+ [

## Managing access for Amazon S3 Storage Lens
](#example-bucket-policies-lens)
+ [

## Managing permissions for S3 Inventory, S3 analytics, and S3 Inventory reports
](#example-bucket-policies-s3-inventory)
+ [

## Requiring MFA
](#example-bucket-policies-MFA)
+ [

## Preventing users from deleting objects
](#using-with-s3-actions-related-to-bucket-subresources)

## Granting read-only permission to a public anonymous user
<a name="example-bucket-policies-anonymous-user"></a>

You can use your policy settings to grant access to public anonymous users, which is useful if you're configuring your bucket as a static website. Granting access to public anonymous users requires you to disable the Block Public Access settings for your bucket. For more information about how to do this, and the policy required, see [Setting permissions for website access](WebsiteAccessPermissionsReqd.md). To learn how to set up more restrictive policies for the same purpose, see [How can I grant public read access to some objects in my Amazon S3 bucket?](https://repost.aws/knowledge-center/read-access-objects-s3-bucket) in the AWS Knowledge Center.

By default, Amazon S3 blocks public access to your account and buckets. If you want to use a bucket to host a static website, you can use these steps to edit your block public access settings. 

**Warning**  
Before you complete these steps, review [Blocking public access to your Amazon S3 storage](access-control-block-public-access.md) to ensure that you understand and accept the risks involved with allowing public access. When you turn off block public access settings to make your bucket public, anyone on the internet can access your bucket. We recommend that you block all public access to your buckets.

1. Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. Choose the name of the bucket that you have configured as a static website.

1. Choose **Permissions**.

1. Under **Block public access (bucket settings)**, choose **Edit**.

1. Clear **Block *all* public access**, and choose **Save changes**.  
![\[The Amazon S3 console, showing the block public access bucket settings.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/edit-public-access-clear.png)

   Amazon S3 turns off the Block Public Access settings for your bucket. To create a public static website, you might also have to [edit the Block Public Access settings](https://docs.aws.amazon.com/AmazonS3/latest/user-guide/block-public-access-account.html) for your account before adding a bucket policy. If the Block Public Access settings for your account are currently turned on, you see a note under **Block public access (bucket settings)**.
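If you prefer to script this change, the console steps above correspond to the S3 `PutPublicAccessBlock` API. The following configuration payload is a sketch of the most permissive (and riskiest) choice, with all four settings disabled:

```
{
    "BlockPublicAcls": false,
    "IgnorePublicAcls": false,
    "BlockPublicPolicy": false,
    "RestrictPublicBuckets": false
}
```

In practice, disable only the specific settings that your use case requires and leave the rest set to `true`.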

## Requiring encryption
<a name="example-bucket-policies-encryption"></a>

You can require server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS), as shown in the following examples.

### Require SSE-KMS for all objects written to a bucket
<a name="example-bucket-policies-encryption-1"></a>

The following example policy requires every object that is written to the bucket to be encrypted with server-side encryption using AWS Key Management Service (AWS KMS) keys (SSE-KMS). If the object isn't encrypted with SSE-KMS, the request is denied.

------
#### [ JSON ]

****  

```
{
"Version":"2012-10-17",		 	 	 
"Id": "PutObjPolicy",
"Statement": [{
  "Sid": "DenyObjectsThatAreNotSSEKMS",
  "Principal": "*",
  "Effect": "Deny",
  "Action": "s3:PutObject",
  "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*",
  "Condition": {
    "Null": {
      "s3:x-amz-server-side-encryption-aws-kms-key-id": "true"
    }
  }
}]
}
```

------

### Require SSE-KMS with a specific AWS KMS key for all objects written to a bucket
<a name="example-bucket-policies-encryption-2"></a>

The following example policy denies any objects from being written to the bucket if they aren’t encrypted with SSE-KMS by using a specific KMS key ID. Even if the objects are encrypted with SSE-KMS by using a per-request header or bucket default encryption, the objects can't be written to the bucket if they haven't been encrypted with the specified KMS key. Make sure to replace the KMS key ARN that's used in this example with your own KMS key ARN.

------
#### [ JSON ]

****  

```
{
"Version":"2012-10-17",		 	 	 
"Id": "PutObjPolicy",
"Statement": [{
  "Sid": "DenyObjectsThatAreNotSSEKMSWithSpecificKey",
  "Principal": "*",
  "Effect": "Deny",
  "Action": "s3:PutObject",
  "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*",
  "Condition": {
    "ArnNotEqualsIfExists": {
      "s3:x-amz-server-side-encryption-aws-kms-key-id": "arn:aws:kms:us-east-1:111122223333:key/01234567-89ab-cdef-0123-456789abcdef"
    }
  }
}]
}
```

------

## Managing buckets using canned ACLs
<a name="example-bucket-policies-public-access"></a>

### Granting permissions to multiple accounts to upload objects or set object ACLs for public access
<a name="example-bucket-policies-acl-1"></a>

The following example policy grants the `s3:PutObject` and `s3:PutObjectAcl` permissions to multiple AWS accounts. Also, the example policy requires that any requests for these operations must include the `public-read` [canned access control list (ACL)](acl-overview.md#canned-acl). For more information, see [Policy actions for Amazon S3](security_iam_service-with-iam.md#security_iam_service-with-iam-id-based-policies-actions) and [Policy condition keys for Amazon S3](security_iam_service-with-iam.md#security_iam_service-with-iam-id-based-policies-conditionkeys).

**Warning**  
The `public-read` canned ACL allows anyone in the world to view the objects in your bucket. Use caution when granting anonymous access to your Amazon S3 bucket or disabling block public access settings. When you grant anonymous access, anyone in the world can access your bucket. We recommend that you never grant anonymous access to your Amazon S3 bucket unless you specifically need to, such as with [static website hosting](WebsiteHosting.md). If you want to enable block public access settings for static website hosting, see [Tutorial: Configuring a static website on Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/HostingWebsiteOnS3Setup.html).

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Sid": "AddPublicReadCannedAcl",
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::111122223333:root",
                    "arn:aws:iam::444455556666:root"
                ]
            },
            "Action": [
                "s3:PutObject",
                "s3:PutObjectAcl"
            ],
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*",
            "Condition": {
                "StringEquals": {
                    "s3:x-amz-acl": [
                        "public-read"
                    ]
                }
            }
        }
    ]
}
```

------

### Grant cross-account permissions to upload objects while ensuring that the bucket owner has full control
<a name="example-bucket-policies-acl-2"></a>

The following example shows how to allow another AWS account to upload objects to your bucket while ensuring that you have full control of the uploaded objects. This policy grants a specific AWS account (*`111122223333`*) the ability to upload objects only if that account includes the `bucket-owner-full-control` canned ACL on upload. The `StringEquals` condition in the policy specifies the `s3:x-amz-acl` condition key to express the canned ACL requirement. For more information, see [Policy condition keys for Amazon S3](security_iam_service-with-iam.md#security_iam_service-with-iam-id-based-policies-conditionkeys). 

------
#### [ JSON ]

****  

```
{
   "Version":"2012-10-17",		 	 	 
   "Statement":[
     {
       "Sid":"PolicyForAllowUploadWithACL",
       "Effect":"Allow",
       "Principal":{"AWS":"111122223333"},
       "Action":"s3:PutObject",
       "Resource":"arn:aws:s3:::amzn-s3-demo-bucket/*",
       "Condition": {
         "StringEquals": {"s3:x-amz-acl":"bucket-owner-full-control"}
       }
     }
   ]
}
```

------

## Managing object access with object tagging
<a name="example-bucket-policies-object-tags"></a>

### Allow a user to read only objects that have a specific tag key and value
<a name="example-bucket-policies-tagging-1"></a>

The following permissions policy limits a user to only reading objects that have the `environment: production` tag key and value. This policy uses the `s3:ExistingObjectTag` condition key to specify the tag key and value.

------
#### [ JSON ]

****  

```
{
   "Version":"2012-10-17",		 	 	 
   "Statement":[
      {
         "Principal":{
            "AWS":"arn:aws:iam::111122223333:role/JohnDoe"
         },
         "Effect":"Allow",
         "Action":[
            "s3:GetObject",
            "s3:GetObjectVersion"
         ],
         "Resource":"arn:aws:s3:::amzn-s3-demo-bucket/*",
         "Condition":{
            "StringEquals":{
               "s3:ExistingObjectTag/environment":"production"
            }
         }
      }
   ]
}
```

------

### Restrict which object tag keys users can add
<a name="example-bucket-policies-tagging-2"></a>

The following example policy grants a user permission to perform the `s3:PutObjectTagging` action, which allows a user to add tags to an existing object. The condition uses the `s3:RequestObjectTagKeys` condition key to specify the allowed tag keys, such as `Owner` or `CreationDate`. For more information, see [Creating a condition that tests multiple key values](https://docs.aws.amazon.com//IAM/latest/UserGuide/reference_policies_multi-value-conditions.html) in the *IAM User Guide*.

Because of the `ForAnyValue` qualifier, the condition is satisfied if at least one of the tag keys in the request matches one of the authorized keys. To require that *every* tag key in the request comes from the allowed set, use the `ForAllValues` qualifier instead.

------
#### [ JSON ]

****  

```
{
   "Version":"2012-10-17",		 	 	 
  "Statement": [
    {"Principal":{"AWS":[
            "arn:aws:iam::111122223333:role/JohnDoe"
         ]
       },
 "Effect": "Allow",
      "Action": [
        "s3:PutObjectTagging"
      ],
      "Resource": [
        "arn:aws:s3:::amzn-s3-demo-bucket/*"
      ],
      "Condition": {"ForAnyValue:StringEquals": {"s3:RequestObjectTagKeys": [
            "Owner",
            "CreationDate"
          ]
        }
      }
    }
  ]
}
```

------

### Require a specific tag key and value when allowing users to add object tags
<a name="example-bucket-policies-tagging-3"></a>

The following example policy grants a user permission to perform the `s3:PutObjectTagging` action, which allows a user to add tags to an existing object. The condition requires the user to include a specific tag key (such as `Project`) with the value set to `X`.

------
#### [ JSON ]

****  

```
{
   "Version":"2012-10-17",		 	 	 
  "Statement": [
    {"Principal":{"AWS":[
       "arn:aws:iam::111122223333:user/JohnDoe"
         ]
       },
      "Effect": "Allow",
      "Action": [
        "s3:PutObjectTagging"
      ],
      "Resource": [
        "arn:aws:s3:::amzn-s3-demo-bucket/*"
      ],
      "Condition": {"StringEquals": {"s3:RequestObjectTag/Project": "X"
        }
      }
    }
  ]
}
```

------

### Allow a user to only add objects with a specific object tag key and value
<a name="example-bucket-policies-tagging-4"></a>

The following example policy grants a user permission to perform the `s3:PutObject` action so that they can add objects to a bucket. However, the `Condition` element restricts the tag keys and values that are allowed on the uploaded objects. In this example, the user can add an object to the bucket only if the object has the tag key `Department` with the value set to `Finance`.

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [{
        "Principal":{
            "AWS":[
                 "arn:aws:iam::111122223333:user/JohnDoe"
         ]
        },
        "Effect": "Allow",
        "Action": [
            "s3:PutObject"
        ],
        "Resource": [
            "arn:aws:s3:::amzn-s3-demo-bucket/*"
        ],
        "Condition": {
            "StringEquals": {
                "s3:RequestObjectTag/Department": "Finance"
            }
        }
    }]
}
```

------

## Managing object access by using global condition keys
<a name="example-bucket-policies-global-condition-keys"></a>

[Global condition keys](https://docs.aws.amazon.com//IAM/latest/UserGuide/reference_policies_condition-keys.html) are condition context keys with an `aws` prefix. AWS services can support global condition keys or service-specific keys that include the service prefix. You can use the `Condition` element of a JSON policy to compare the keys in a request with the key values that you specify in your policy.

### Restrict access to only Amazon S3 server access log deliveries
<a name="example-bucket-policies-global-condition-keys-1"></a>

In the following example bucket policy, the [`aws:SourceArn`](https://docs.aws.amazon.com//IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-sourcearn) global condition key is used to compare the [Amazon Resource Name (ARN)](https://docs.aws.amazon.com//IAM/latest/UserGuide/reference_identifiers.html#identifiers-arns) of the resource that is making a service-to-service request with the ARN that is specified in the policy. Using the `aws:SourceArn` global condition key helps prevent the Amazon S3 service from being used as a [confused deputy](https://docs.aws.amazon.com//IAM/latest/UserGuide/confused-deputy.html) during transactions between services. Only the Amazon S3 service is allowed to add objects to the Amazon S3 bucket.

This example bucket policy grants `s3:PutObject` permissions to only the logging service principal (`logging.s3.amazonaws.com`). 

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Sid": "AllowPutObjectS3ServerAccessLogsPolicy",
            "Principal": {
                "Service": "logging.s3.amazonaws.com"
            },
            "Effect": "Allow",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket-logs/*",
            "Condition": {
                "StringEquals": {
                "aws:SourceAccount": "111122223333"
                },
                "ArnLike": {
                "aws:SourceArn": "arn:aws:s3:::amzn-s3-demo-source-bucket1"
                }
            }
        },
        {
            "Sid": "RestrictToS3ServerAccessLogs",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket-logs/*",
            "Condition": {
                "ForAllValues:StringNotEquals": {
                    "aws:PrincipalServiceNamesList": "logging.s3.amazonaws.com"
                }
            }
        }
    ]
}
```

------

### Allow access to only your organization
<a name="example-bucket-policies-global-condition-keys-2"></a>

If you want to require all [IAM principals](https://docs.aws.amazon.com//IAM/latest/UserGuide/intro-structure.html#intro-structure-principal) accessing a resource to be from an AWS account in your organization (including the AWS Organizations management account), you can use the `aws:PrincipalOrgID` global condition key.

To grant or restrict this type of access, define the `aws:PrincipalOrgID` condition and set the value to your [organization ID](https://docs.aws.amazon.com//organizations/latest/userguide/orgs_manage_org_details.html) in the bucket policy. The organization ID is used to control access to the bucket. When you use the `aws:PrincipalOrgID` condition, the permissions from the bucket policy are also applied to all new accounts that are added to the organization.

Here’s an example of a resource-based bucket policy that you can use to grant specific IAM principals in your organization direct access to your bucket. By adding the `aws:PrincipalOrgID` global condition key to your bucket policy, the principal account is now required to be in your organization to obtain access to the resource. Even if you accidentally specify an incorrect account when granting access, the [aws:PrincipalOrgID global condition key](https://docs.aws.amazon.com//IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-principalorgid) acts as an additional safeguard. When this global key is used in a policy, it prevents all principals from outside of the specified organization from accessing the S3 bucket. Only principals from accounts in the listed organization are able to obtain access to the resource.

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [{
        "Sid": "AllowGetObject",
        "Principal": {
            "AWS": "*"
        },
        "Effect": "Allow",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*",
        "Condition": {
            "StringEquals": {
                "aws:PrincipalOrgID": ["o-aa111bb222"]
            }
        }
    }]
}
```

------

## Managing access based on HTTP or HTTPS requests
<a name="example-bucket-policies-HTTP-HTTPS"></a>

### Restrict access to only HTTPS requests
<a name="example-bucket-policies-use-case-HTTP-HTTPS-1"></a>

If you want to prevent potential attackers from manipulating network traffic, you can use HTTPS (TLS) to allow only encrypted connections and to block plain HTTP requests from accessing your bucket. To determine whether a request was sent over HTTP or HTTPS, use the [`aws:SecureTransport`](https://docs.aws.amazon.com//IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-securetransport) global condition key in your S3 bucket policy. The `aws:SecureTransport` condition key checks whether a request was sent by using SSL/TLS.

If the `aws:SecureTransport` key value is `true`, the request was sent through HTTPS. If the value is `false`, the request was sent through HTTP. You can then allow or deny access to your bucket based on the desired request scheme.

In the following example, the bucket policy explicitly denies HTTP requests. 

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [{
        "Sid": "RestrictToTLSRequestsOnly",
        "Action": "s3:*",
        "Effect": "Deny",
        "Resource": [
            "arn:aws:s3:::amzn-s3-demo-bucket",
            "arn:aws:s3:::amzn-s3-demo-bucket/*"
        ],
        "Condition": {
            "Bool": {
                "aws:SecureTransport": "false"
            }
        },
        "Principal": "*"
    }]
}
```

------
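If you manage bucket policies programmatically, a document like the preceding one can be assembled in code and attached with an AWS SDK. The following Python sketch builds the same deny-non-TLS statement and prints it; the Boto3 call at the end is shown commented out because it requires AWS credentials, and the bucket name is a placeholder:

```python
import json

# Placeholder bucket name; replace with your own bucket.
bucket = "amzn-s3-demo-bucket"

# Build the deny-non-TLS statement shown above as a Python dict.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "RestrictToTLSRequestsOnly",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{bucket}",
                f"arn:aws:s3:::{bucket}/*",
            ],
            # Deny any request that did not arrive over SSL/TLS.
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }
    ],
}

policy_json = json.dumps(policy)
print(policy_json)

# Attaching the policy requires credentials and the boto3 package:
# import boto3
# boto3.client("s3").put_bucket_policy(Bucket=bucket, Policy=policy_json)
```

Building the document as a dict and serializing it with `json.dumps` avoids the quoting mistakes that are common when policies are edited as raw strings.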

### Restrict access to a specific HTTP referer
<a name="example-bucket-policies-HTTP-HTTPS-2"></a>

Suppose that you have a website with the domain name *`www.example.com`* or *`example.com`* with links to photos and videos stored in your bucket named `amzn-s3-demo-bucket`. By default, all Amazon S3 resources are private, so only the AWS account that created the resources can access them. 

To allow read access to these objects from your website, you can add a bucket policy that allows the `s3:GetObject` permission with a condition that the `GET` request must originate from specific webpages. The following policy restricts requests by using the `StringLike` condition with the `aws:Referer` condition key.
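A sketch of such a policy follows; the domain names are placeholders, so replace them with your own website domains:

```
{
  "Version": "2012-10-17",
  "Id": "HttpRefererPolicy",
  "Statement": [
    {
      "Sid": "AllowGetRequestsOriginatedFromExampleSites",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": [
            "https://www.example.com/*",
            "https://example.com/*"
          ]
        }
      }
    }
  ]
}
```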

Make sure that the browsers that you use include the HTTP `referer` header in the request.

**Warning**  
We recommend that you use caution when using the `aws:Referer` condition key. It is dangerous to include a publicly known HTTP referer header value. Unauthorized parties can use modified or custom browsers to provide any `aws:Referer` value that they choose. Therefore, do not use `aws:Referer` to prevent unauthorized parties from making direct AWS requests.   
The `aws:Referer` condition key is offered only to allow customers to protect their digital content, such as content stored in Amazon S3, from being referenced on unauthorized third-party sites. For more information, see [`aws:Referer`](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-referer) in the *IAM User Guide*.

## Managing user access to specific folders
<a name="example-bucket-policies-folders"></a>

### Grant users access to specific folders
<a name="example-bucket-policies-folders-1"></a>

Suppose that you're trying to grant users access to a specific folder. If the IAM user and the S3 bucket belong to the same AWS account, then you can use an IAM policy to grant the user access to a specific bucket folder. With this approach, you don't need to update your bucket policy to grant access. You can add the IAM policy to an IAM role that multiple users can switch to. 

If the IAM identity and the S3 bucket belong to different AWS accounts, then you must grant cross-account access in both the IAM policy and the bucket policy. For more information about granting cross-account access, see [Bucket owner granting cross-account bucket permissions](https://docs.aws.amazon.com//AmazonS3/latest/userguide/example-walkthroughs-managing-access-example2.html).

The following example bucket policy grants `JohnDoe` full console access to only his folder (`home/JohnDoe/`). By creating a `home` folder and granting the appropriate permissions to your users, you can have multiple users share a single bucket. This policy consists of three `Allow` statements:
+ `AllowRootAndHomeListingOfCompanyBucket`: Allows the user (`JohnDoe`) to list objects at the root level of the `amzn-s3-demo-bucket` bucket and in the `home` folder. This statement also allows the user to search on the prefix `home/` by using the console.
+ `AllowListingOfUserFolder`: Allows the user (`JohnDoe`) to list all objects in the `home/JohnDoe/` folder and any subfolders.
+ `AllowAllS3ActionsInUserFolder`: Allows the user to perform all Amazon S3 actions by granting `Read`, `Write`, and `Delete` permissions. Permissions are limited to the user's home folder (`home/JohnDoe/`).

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Sid": "AllowRootAndHomeListingOfCompanyBucket",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::111122223333:user/JohnDoe"
                ]
            },
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": ["arn:aws:s3:::amzn-s3-demo-bucket"],
            "Condition": {
                "StringEquals": {
                    "s3:prefix": ["", "home/", "home/JohnDoe"],
                    "s3:delimiter": ["/"]
                }
            }
        },
        {
            "Sid": "AllowListingOfUserFolder",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::111122223333:user/JohnDoe"
                ]
            },
            "Action": ["s3:ListBucket"],
            "Effect": "Allow",
            "Resource": ["arn:aws:s3:::amzn-s3-demo-bucket"],
            "Condition": {
                "StringLike": {
                    "s3:prefix": ["home/JohnDoe/*"]
                }
            }
        },
        {
            "Sid": "AllowAllS3ActionsInUserFolder",
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::111122223333:user/JohnDoe"
                ]
            },
            "Action": ["s3:*"],
            "Resource": ["arn:aws:s3:::amzn-s3-demo-bucket/home/JohnDoe/*"]
        }
    ]
}
```

------

## Managing access for access logs
<a name="example-bucket-policies-access-logs"></a>

### Grant access to Application Load Balancer for enabling access logs
<a name="example-bucket-policies-access-logs-1"></a>

When you enable access logs for Application Load Balancer, you must specify the name of the S3 bucket where the load balancer will [store the logs](https://docs.aws.amazon.com//elasticloadbalancing/latest/application/enable-access-logging.html#access-log-create-bucket). The bucket must have an [attached policy](https://docs.aws.amazon.com//elasticloadbalancing/latest/application/enable-access-logging.html#attach-bucket-policy) that grants Elastic Load Balancing permission to write to the bucket.

In the following example, the bucket policy grants Elastic Load Balancing (ELB) permission to write the access logs to the bucket:

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:root"
            },
            "Effect": "Allow",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/prefix/AWSLogs/111122223333/*"
        }
    ]
}
```

------

**Note**  
Make sure to replace `111122223333` in the `Principal` element with the AWS account ID for Elastic Load Balancing for your AWS Region, and `111122223333` in the `Resource` element with your own AWS account ID. For the list of Elastic Load Balancing Regions, see [Attach a policy to your Amazon S3 bucket](https://docs.aws.amazon.com//elasticloadbalancing/latest/classic/enable-access-logs.html#attach-bucket-policy) in the *Elastic Load Balancing User Guide*.

If your AWS Region does not appear in the supported Elastic Load Balancing Regions list, use the following policy, which grants permissions to the specified log delivery service.

------
#### [ JSON ]

****  

```
{
  "Version":"2012-10-17",		 	 	 
  "Statement": [
    {
       "Principal": {
         "Service": "logdelivery.elasticloadbalancing.amazonaws.com"
          },
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/prefix/AWSLogs/111122223333/*"
    }
  ]
}
```

------

Then, make sure to enable your [Elastic Load Balancing access logs](https://docs.aws.amazon.com//elasticloadbalancing/latest/application/enable-access-logging.html#enable-access-logs). You can [verify your bucket permissions](https://docs.aws.amazon.com//elasticloadbalancing/latest/application/enable-access-logging.html#verify-bucket-permissions) by creating a test file.

## Managing access to an Amazon CloudFront OAI
<a name="example-bucket-policies-cloudfront"></a>

### Grant permission to an Amazon CloudFront OAI
<a name="example-bucket-policies-cloudfront-1"></a>

The following example bucket policy grants a CloudFront origin access identity (OAI) permission to get (read) all objects in your S3 bucket. You can use a CloudFront OAI to allow users to access objects in your bucket through CloudFront but not directly through Amazon S3. For more information, see [Restricting access to Amazon S3 content by using an Origin Access Identity](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html) in the *Amazon CloudFront Developer Guide*.

The following policy uses the OAI's ID as the policy's `Principal`. For more information about using S3 bucket policies to grant access to a CloudFront OAI, see [Migrating from origin access identity (OAI) to origin access control (OAC)](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html#migrate-from-oai-to-oac) in the *Amazon CloudFront Developer Guide*.

To use this example:
+ Replace `EH1HDMB1FH2TC` with the OAI's ID. To find the OAI's ID, see the [Origin Access Identity page](https://console.aws.amazon.com/cloudfront/home?region=us-east-1#oai:) on the CloudFront console, or use [`ListCloudFrontOriginAccessIdentities`](https://docs.aws.amazon.com/cloudfront/latest/APIReference/API_ListCloudFrontOriginAccessIdentities.html) in the CloudFront API.
+ Replace `amzn-s3-demo-bucket` with the name of your bucket.

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Id": "PolicyForCloudFrontPrivateContent",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity EH1HDMB1FH2TC"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*"
        }
    ]
}
```

------
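Because only the OAI ID and the bucket name change from one deployment to the next, you can generate this policy programmatically before attaching it. The following Python sketch uses only the standard library; the `build_oai_policy` helper name is illustrative, not part of any AWS SDK:

```python
import json

def build_oai_policy(oai_id: str, bucket: str) -> str:
    """Return a bucket policy JSON string that grants s3:GetObject to a CloudFront OAI."""
    policy = {
        "Version": "2012-10-17",
        "Id": "PolicyForCloudFrontPrivateContent",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {
                    "AWS": f"arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity {oai_id}"
                },
                "Action": "s3:GetObject",
                "Resource": f"arn:aws:s3:::{bucket}/*",
            }
        ],
    }
    return json.dumps(policy, indent=4)

print(build_oai_policy("EH1HDMB1FH2TC", "amzn-s3-demo-bucket"))
```

You could then save the output to a file and attach it with `aws s3api put-bucket-policy --bucket amzn-s3-demo-bucket --policy file://policy.json`.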

## Managing access for Amazon S3 Storage Lens
<a name="example-bucket-policies-lens"></a>

### Grant permissions for Amazon S3 Storage Lens
<a name="example-bucket-policies-lens-1"></a>

S3 Storage Lens aggregates your metrics and displays the information in the **Account snapshot** section on the Amazon S3 console **Buckets** page. S3 Storage Lens also provides an interactive dashboard that you can use to visualize insights and trends, flag outliers, and receive recommendations for optimizing storage costs and applying data protection best practices. Your dashboard has drill-down options to generate and visualize insights at the organization, account, AWS Region, storage class, bucket, prefix, or Storage Lens group level. You can also send a daily metrics report in CSV or Parquet format to a general purpose S3 bucket or export the metrics directly to an AWS-managed S3 table bucket.

S3 Storage Lens can export your aggregated storage usage metrics to an Amazon S3 bucket for further analysis. The bucket where S3 Storage Lens places its metrics exports is known as the *destination bucket*. When setting up your S3 Storage Lens metrics export, you must have a bucket policy for the destination bucket. For more information, see [Monitoring your storage activity and usage with Amazon S3 Storage Lens](storage_lens.md).

The following example bucket policy grants Amazon S3 permission to write objects (`PUT` requests) to a destination bucket. You use a bucket policy like this on the destination bucket when setting up an S3 Storage Lens metrics export.

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Sid": "S3StorageLensExamplePolicy",
            "Effect": "Allow",
            "Principal": {
                "Service": "storage-lens.s3.amazonaws.com"
            },
            "Action": "s3:PutObject",
            "Resource": [
                "arn:aws:s3:::amzn-s3-demo-destination-bucket/destination-prefix/StorageLens/111122223333/*"
            ],
            "Condition": {
                "StringEquals": {
                    "s3:x-amz-acl": "bucket-owner-full-control",
                    "aws:SourceAccount": "111122223333",
                    "aws:SourceArn": "arn:aws:s3:region-code:111122223333:storage-lens/storage-lens-dashboard-configuration-id"
                }
            }
        }
    ]
}
```

------

When you're setting up an S3 Storage Lens organization-level metrics export, use the following modification to the `Resource` element in the previous bucket policy.

```
1. "Resource": "arn:aws:s3:::amzn-s3-demo-destination-bucket/destination-prefix/StorageLens/your-organization-id/*",
```
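The account-level and organization-level destination ARNs differ only in the path segment before the trailing wildcard (the AWS account ID versus your organization ID). The following small helper is illustrative only, not an AWS API, but it makes the pattern explicit:

```python
def storage_lens_export_resource(bucket: str, prefix: str, scope_id: str) -> str:
    """Build the Resource ARN for a Storage Lens export destination.

    scope_id is the AWS account ID for account-level exports, or the
    organization ID (for example, "o-exampleorgid") for organization-level exports.
    """
    return f"arn:aws:s3:::{bucket}/{prefix}/StorageLens/{scope_id}/*"

# Account-level export destination
print(storage_lens_export_resource("amzn-s3-demo-destination-bucket", "destination-prefix", "111122223333"))
# Organization-level export destination
print(storage_lens_export_resource("amzn-s3-demo-destination-bucket", "destination-prefix", "o-exampleorgid"))
```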

## Managing permissions for S3 Inventory, S3 analytics, and S3 Inventory reports
<a name="example-bucket-policies-s3-inventory"></a>

### Grant permissions for S3 Inventory and S3 analytics
<a name="example-bucket-policies-s3-inventory-1"></a>

S3 Inventory creates lists of the objects in a bucket, and S3 analytics Storage Class Analysis export creates output files of the data used in the analysis. The bucket that the inventory lists the objects for is called the *source bucket*. The bucket where the inventory file or the analytics export file is written to is called a *destination bucket*. When setting up an inventory or an analytics export, you must create a bucket policy for the destination bucket. For more information, see [Cataloging and analyzing your data with S3 Inventory](storage-inventory.md) and [Amazon S3 analytics – Storage Class Analysis](analytics-storage-class.md).

The following example bucket policy grants Amazon S3 permission to write objects (`PUT` requests) from the account for the source bucket to the destination bucket. You use a bucket policy like this on the destination bucket when setting up S3 Inventory and S3 analytics export.

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "InventoryAndAnalyticsExamplePolicy",
            "Effect": "Allow",
            "Principal": {
                "Service": "s3.amazonaws.com"
            },
            "Action": "s3:PutObject",
            "Resource": [
                "arn:aws:s3:::amzn-s3-demo-destination-bucket/*"
            ],
            "Condition": {
                "ArnLike": {
                    "aws:SourceArn": "arn:aws:s3:::amzn-s3-demo-source-bucket"
                },
                "StringEquals": {
                    "aws:SourceAccount": "111122223333",
                    "s3:x-amz-acl": "bucket-owner-full-control"
                }
            }
        }
    ]
}
```

------
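Before you enable an export, you can programmatically confirm that a destination bucket policy carries the recommended safeguards. The following Python sketch is illustrative only (the `has_inventory_safeguards` helper is not part of any AWS SDK); it checks for an `Allow` statement for the Amazon S3 service principal that is constrained by `aws:SourceAccount` and the `bucket-owner-full-control` ACL condition:

```python
import json

def has_inventory_safeguards(policy_json, source_account):
    """Return True if at least one Allow statement for the s3.amazonaws.com
    service principal restricts writes with aws:SourceAccount and the
    bucket-owner-full-control ACL condition."""
    policy = json.loads(policy_json)
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        if stmt.get("Principal", {}).get("Service") != "s3.amazonaws.com":
            continue
        cond = stmt.get("Condition", {}).get("StringEquals", {})
        if (cond.get("aws:SourceAccount") == source_account
                and cond.get("s3:x-amz-acl") == "bucket-owner-full-control"):
            return True
    return False

# A destination policy shaped like the preceding example
example = json.dumps({
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "InventoryAndAnalyticsExamplePolicy",
        "Effect": "Allow",
        "Principal": {"Service": "s3.amazonaws.com"},
        "Action": "s3:PutObject",
        "Resource": ["arn:aws:s3:::amzn-s3-demo-destination-bucket/*"],
        "Condition": {
            "ArnLike": {"aws:SourceArn": "arn:aws:s3:::amzn-s3-demo-source-bucket"},
            "StringEquals": {
                "aws:SourceAccount": "111122223333",
                "s3:x-amz-acl": "bucket-owner-full-control"
            }
        }
    }]
})
print(has_inventory_safeguards(example, "111122223333"))  # True
```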

### Control S3 Inventory report configuration creation
<a name="example-bucket-policies-s3-inventory-2"></a>

[Cataloging and analyzing your data with S3 Inventory](storage-inventory.md) creates lists of the objects in an S3 bucket and the metadata for each object. The `s3:PutInventoryConfiguration` permission allows a user to create an inventory configuration that includes all object metadata fields that are available by default and to specify the destination bucket to store the inventory. A user with read access to objects in the destination bucket can access all object metadata fields that are available in the inventory report. For more information about the metadata fields that are available in S3 Inventory, see [Amazon S3 Inventory list](storage-inventory.md#storage-inventory-contents).

To restrict a user from configuring an S3 Inventory report, remove the `s3:PutInventoryConfiguration` permission from the user.

Some object metadata fields in S3 Inventory report configurations are optional, meaning that they're available by default but can be restricted when you grant a user the `s3:PutInventoryConfiguration` permission. You can control whether users can include these optional metadata fields in their reports by using the `s3:InventoryAccessibleOptionalFields` condition key. For a list of the optional metadata fields available in S3 Inventory, see [PutBucketInventoryConfiguration](https://docs.aws.amazon.com//AmazonS3/latest/API/API_PutBucketInventoryConfiguration.html#API_PutBucketInventoryConfiguration_RequestBody) in the *Amazon Simple Storage Service API Reference*.

To grant a user permission to create an inventory configuration with specific optional metadata fields, use the `s3:InventoryAccessibleOptionalFields` condition key to refine the conditions in your bucket policy. 

The following example policy grants a user (`Ana`) permission to create an inventory configuration conditionally. The `ForAllValues:StringEquals` condition in the policy uses the `s3:InventoryAccessibleOptionalFields` condition key to specify the two allowed optional metadata fields, namely `Size` and `StorageClass`. So, when `Ana` is creating an inventory configuration, the only optional metadata fields that she can include are `Size` and `StorageClass`. 

------
#### [ JSON ]

****  

```
{
    "Id": "InventoryConfigPolicy",
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowInventoryCreationConditionally",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:user/Ana"
            },
            "Action": "s3:PutInventoryConfiguration",
            "Resource": "arn:aws:s3:::amzn-s3-demo-source-bucket",
            "Condition": {
                "ForAllValues:StringEquals": {
                    "s3:InventoryAccessibleOptionalFields": [
                        "Size",
                        "StorageClass"
                    ]
                }
            }
        }
    ]
}
```

------

To restrict a user from configuring an S3 Inventory report that includes specific optional metadata fields, add an explicit `Deny` statement to the bucket policy for the source bucket. The following example bucket policy denies the user `Ana` from creating an inventory configuration in the source bucket `amzn-s3-demo-source-bucket` that includes the optional `ObjectAccessControlList` or `ObjectOwner` metadata fields. The user `Ana` can still create an inventory configuration with other optional metadata fields.

```
{
    "Id": "InventoryConfigSomeFields",
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowInventoryCreation",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:user/Ana"
            },
            "Action": "s3:PutInventoryConfiguration",
            "Resource": "arn:aws:s3:::amzn-s3-demo-source-bucket"
        },
        {
            "Sid": "DenyCertainInventoryFieldCreation",
            "Effect": "Deny",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:user/Ana"
            },
            "Action": "s3:PutInventoryConfiguration",
            "Resource": "arn:aws:s3:::amzn-s3-demo-source-bucket",
            "Condition": {
                "ForAnyValue:StringEquals": {
                    "s3:InventoryAccessibleOptionalFields": [
                        "ObjectOwner",
                        "ObjectAccessControlList"
                    ]
                }
            }
        }
    ]
}
```

**Note**  
The use of the `s3:InventoryAccessibleOptionalFields` condition key in bucket policies doesn't affect the delivery of inventory reports based on the existing inventory configurations. 

**Important**  
We recommend that you use `ForAllValues` with an `Allow` effect or `ForAnyValue` with a `Deny` effect, as shown in the prior examples.  
Don't use `ForAllValues` with a `Deny` effect or `ForAnyValue` with an `Allow` effect, because these combinations can be overly restrictive and can block inventory configuration deletion.  
To learn more about the `ForAllValues` and `ForAnyValue` condition set operators, see [Multivalued context keys](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-single-vs-multi-valued-context-keys.html#reference_policies_condition-multi-valued-context-keys) in the *IAM User Guide*.
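You can see why these combinations matter by simulating the set-operator semantics directly. `ForAllValues` evaluates to true for a request that supplies an empty set of values, so pairing it with `Deny` would also deny requests that carry no optional fields at all, such as a request to delete an inventory configuration. The following Python sketch is an approximation of the operator semantics, not AWS evaluation code:

```python
def for_all_values_equals(request_values, policy_values):
    """True if every value in the request is in the policy list (vacuously true for an empty request)."""
    return all(v in policy_values for v in request_values)

def for_any_value_equals(request_values, policy_values):
    """True if at least one value in the request is in the policy list (false for an empty request)."""
    return any(v in policy_values for v in request_values)

allowed = ["Size", "StorageClass"]
denied = ["ObjectOwner", "ObjectAccessControlList"]

# Allow + ForAllValues: a request listing only allowed fields matches the Allow.
print(for_all_values_equals(["Size"], allowed))               # True
# Deny + ForAnyValue: a request including a denied field matches the Deny.
print(for_any_value_equals(["Size", "ObjectOwner"], denied))  # True
# An empty request matches ForAllValues but not ForAnyValue -- which is why
# ForAllValues with a Deny effect would block requests that carry no optional fields.
print(for_all_values_equals([], denied))  # True
print(for_any_value_equals([], denied))   # False
```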

## Requiring MFA
<a name="example-bucket-policies-MFA"></a>

Amazon S3 supports MFA-protected API access, a feature that can enforce multi-factor authentication (MFA) for access to your Amazon S3 resources. Multi-factor authentication provides an extra level of security by requiring users to prove physical possession of an MFA device with a valid MFA code. For more information, see [AWS Multi-Factor Authentication](https://aws.amazon.com/mfa/). You can require MFA for any requests to access your Amazon S3 resources. 

To enforce the MFA requirement, use the `aws:MultiFactorAuthAge` condition key in a bucket policy. IAM users can access Amazon S3 resources by using temporary credentials issued by the AWS Security Token Service (AWS STS). You provide the MFA code at the time of the AWS STS request. 

When Amazon S3 receives a request with multi-factor authentication, the `aws:MultiFactorAuthAge` condition key provides a numeric value that indicates how long ago (in seconds) the temporary credential was created. If the temporary credential provided in the request was not created by using an MFA device, this key value is null (absent). In a bucket policy, you can add a condition to check this value, as shown in the following example. 

This example policy denies any Amazon S3 operation on the *`/taxdocuments`* folder in the `amzn-s3-demo-bucket` bucket if the request is not authenticated by using MFA. To learn more about MFA, see [Using Multi-Factor Authentication (MFA) in AWS](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa.html) in the *IAM User Guide*.

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Id": "123",
    "Statement": [
      {
        "Sid": "",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/taxdocuments/*",
        "Condition": { "Null": { "aws:MultiFactorAuthAge": true }}
      }
    ]
 }
```

------

The `Null` condition in the `Condition` block evaluates to `true` if the `aws:MultiFactorAuthAge` condition key value is null, indicating that the temporary security credentials in the request were created without an MFA device. 
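The following Python sketch (illustrative only, not AWS evaluation code) shows how the `Deny` statement's `Null` condition matches on the absence of the key:

```python
def mfa_deny_matches(context: dict) -> bool:
    """Simulate {"Null": {"aws:MultiFactorAuthAge": true}}: the Deny applies
    when the key is absent, i.e. the session was created without MFA."""
    return "aws:MultiFactorAuthAge" not in context

print(mfa_deny_matches({}))                               # True: no MFA, the Deny applies
print(mfa_deny_matches({"aws:MultiFactorAuthAge": 120}))  # False: MFA session, the Deny does not match
```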

The following bucket policy is an extension of the preceding bucket policy and includes two statements. One statement allows the `s3:GetObject` permission on a bucket (`amzn-s3-demo-bucket`) to principals in the account `111122223333`. The other statement further restricts access to the `amzn-s3-demo-bucket/taxdocuments` folder in the bucket by requiring MFA. 

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Id": "123",
    "Statement": [
      {
        "Sid": "DenyInsecureConnections",
        "Effect": "Deny",
        "Principal": {
            "AWS": "arn:aws:iam::111122223333:root"
        },
        "Action": "s3:*",
        "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/taxdocuments/*",
        "Condition": { "Null": { "aws:MultiFactorAuthAge": true } }
      },
      {
        "Sid": "AllowGetObject",
        "Effect": "Allow",
        "Principal": {
            "AWS": "arn:aws:iam::111122223333:root"
        },
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*"
      }
    ]
 }
```

------

You can optionally use a numeric condition to limit the duration for which the `aws:MultiFactorAuthAge` key is valid. The duration that you specify with the `aws:MultiFactorAuthAge` key is independent of the lifetime of the temporary security credential that's used in authenticating the request. 

For example, the following bucket policy, in addition to requiring MFA authentication, also checks how long ago the temporary session was created. The policy denies any operation if the `aws:MultiFactorAuthAge` key value indicates that the temporary session was created more than an hour ago (3,600 seconds). 

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Id": "123",
    "Statement": [
      {
        "Sid": "",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/taxdocuments/*",
        "Condition": {"Null": {"aws:MultiFactorAuthAge": true }}
      },
      {
        "Sid": "",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/taxdocuments/*",
        "Condition": {"NumericGreaterThan": {"aws:MultiFactorAuthAge": 3600 }}
       },
       {
         "Sid": "",
         "Effect": "Allow",
         "Principal": "*",
         "Action": ["s3:GetObject"],
         "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*"
       }
    ]
 }
```

------

## Preventing users from deleting objects
<a name="using-with-s3-actions-related-to-bucket-subresources"></a>

By default, users have no permissions. But as you create policies, you might grant users permissions that you didn't intend to grant. To avoid such permission loopholes, you can write a stricter access policy by adding an explicit deny. 

To explicitly block users or accounts from deleting objects, you must deny the `s3:DeleteObject`, `s3:DeleteObjectVersion`, and `s3:PutLifecycleConfiguration` actions in a bucket policy. All three actions are required because objects can be deleted either by explicitly calling the `DELETE Object` API operations or by configuring a lifecycle rule (see [Managing the lifecycle of objects](object-lifecycle-mgmt.md)) so that Amazon S3 removes the objects when their lifetime expires.

In the following policy example, you explicitly deny `DELETE Object` permissions to the user `MaryMajor`. An explicit `Deny` statement always supersedes any other permission granted.

------
#### [ JSON ]

****  

```
{
  "Version":"2012-10-17",		 	 	 
  "Statement": [
    {
      "Sid": "statement1",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:user/MaryMajor"
      },
      "Action": [
        "s3:GetObjectVersion",
        "s3:GetBucketAcl"
      ],
      "Resource": [
        "arn:aws:s3:::amzn-s3-demo-bucket1",
	 	"arn:aws:s3:::amzn-s3-demo-bucket1/*"
      ]
    },
    {
      "Sid": "statement2",
      "Effect": "Deny",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:user/MaryMajor"
      },
      "Action": [
        "s3:DeleteObject",
        "s3:DeleteObjectVersion",
        "s3:PutLifecycleConfiguration"
      ],
      "Resource": [
        "arn:aws:s3:::amzn-s3-demo-bucket1",
	    "arn:aws:s3:::amzn-s3-demo-bucket1/*"
      ]
    }
  ]
}
```

------

# Bucket policy examples using condition keys
<a name="amazon-s3-policy-keys"></a>

You can use access policy language to specify conditions when you grant permissions. You can use the optional `Condition` element, or `Condition` block, to specify conditions for when a policy is in effect. 

For policies that use Amazon S3 condition keys for object and bucket operations, see the following examples. For more information about condition keys, see [Policy condition keys for Amazon S3](security_iam_service-with-iam.md#security_iam_service-with-iam-id-based-policies-conditionkeys). For a complete list of Amazon S3 actions, condition keys, and resources that you can specify in policies, see [ Actions, resources, and condition keys for Amazon S3](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazons3.html) in the *Service Authorization Reference*.

For more information about the permissions to S3 API operations by S3 resource types, see [Required permissions for Amazon S3 API operations](using-with-s3-policy-actions.md).

## Examples: Amazon S3 condition keys for object operations
<a name="object-keys-in-amazon-s3-policies"></a>

The following examples show how you can use Amazon S3‐specific condition keys for object operations. For a complete list of Amazon S3 actions, condition keys, and resources that you can specify in policies, see [ Actions, resources, and condition keys for Amazon S3](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazons3.html) in the *Service Authorization Reference*.

For more information about the permissions to S3 API operations by S3 resource types, see [Required permissions for Amazon S3 API operations](using-with-s3-policy-actions.md).

Several of the example policies show how you can use condition keys with [PUT Object](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPUT.html) operations. PUT Object operations allow access control list (ACL)–specific headers that you can use to grant ACL-based permissions. By using these condition keys, you can set a condition to require specific access permissions when the user uploads an object. You can also grant ACL-based permissions with the `PutObjectAcl` operation. For more information, see [PutObjectAcl](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectAcl.html) in the *Amazon Simple Storage Service API Reference*. For more information about ACLs, see [Access control list (ACL) overview](acl-overview.md).

**Topics**
+ [

### Example 1: Granting `s3:PutObject` permission requiring that objects be stored using server-side encryption
](#putobject-require-sse-2)
+ [

### Example 2: Granting `s3:PutObject` permission to copy objects with a restriction on the copy source
](#putobject-limit-copy-source-3)
+ [

### Example 3: Granting access to a specific version of an object
](#getobjectversion-limit-access-to-specific-version-3)
+ [

### Example 4: Granting permissions based on object tags
](#example-object-tagging-access-control)
+ [

### Example 5: Restricting access by the AWS account ID of the bucket owner
](#example-object-resource-account)
+ [

### Example 6: Requiring a minimum TLS version
](#example-object-tls-version)
+ [

### Example 7: Excluding certain principals from a `Deny` statement
](#example-exclude-principal-from-deny-statement)
+ [

### Example 8: Enforcing clients to conditionally upload objects based on object key names or ETags
](#example-conditional-writes-enforce)

### Example 1: Granting `s3:PutObject` permission requiring that objects be stored using server-side encryption
<a name="putobject-require-sse-2"></a>

Suppose that Account A owns a bucket. The account administrator wants to grant Jane, a user in Account A, permission to upload objects with the condition that Jane always request server-side encryption with Amazon S3 managed keys (SSE-S3). The Account A administrator can specify this requirement by using the `s3:x-amz-server-side-encryption` condition key as shown. The key-value pair in the following `Condition` block specifies the `s3:x-amz-server-side-encryption` condition key and SSE-S3 (`AES256`) as the encryption type:

```
"Condition": {
     "StringNotEquals": {
         "s3:x-amz-server-side-encryption": "AES256"
     }}
```

When testing this permission by using the AWS CLI, you must add the required encryption by using the `--server-side-encryption` parameter, as shown in the following example. To use this example command, replace the `user input placeholders` with your own information. 

```
aws s3api put-object --bucket amzn-s3-demo-bucket --key HappyFace.jpg --body c:\HappyFace.jpg --server-side-encryption "AES256" --profile AccountAadmin
```
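Note that `StringNotEquals` is a negated operator, so the condition also matches when the header is absent entirely. Assuming the condition block is attached to a `Deny` statement, as is typical when enforcing encryption, the evaluation can be sketched as follows (illustrative only, not AWS evaluation code):

```python
def sse_deny_matches(headers: dict) -> bool:
    """Simulate a Deny with StringNotEquals on s3:x-amz-server-side-encryption.

    Negated operators like StringNotEquals also match when the key is missing,
    so an upload that omits the header is denied as well.
    """
    return headers.get("x-amz-server-side-encryption") != "AES256"

print(sse_deny_matches({"x-amz-server-side-encryption": "AES256"}))   # False: upload allowed
print(sse_deny_matches({"x-amz-server-side-encryption": "aws:kms"}))  # True: denied, wrong encryption type
print(sse_deny_matches({}))                                           # True: denied, header missing
```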

### Example 2: Granting `s3:PutObject` permission to copy objects with a restriction on the copy source
<a name="putobject-limit-copy-source-3"></a>

In a `PUT` object request, when you specify a source object, the request is a copy operation (see [CopyObject](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectCOPY.html)). Accordingly, the bucket owner can grant a user permission to copy objects with restrictions on the source, for example:
+ Allow copying objects only from the specified source bucket (for example, `amzn-s3-demo-source-bucket`).
+ Allow copying objects from the specified source bucket and only the objects whose key names start with a specific prefix, such as *`public/`* (for example, `amzn-s3-demo-source-bucket/public/*`).
+ Allow copying only a specific object from the source bucket (for example, `amzn-s3-demo-source-bucket/example.jpg`).

The following bucket policy grants a user (`Dave`) the `s3:PutObject` permission. This policy allows him to copy objects only on the condition that the request includes the `s3:x-amz-copy-source` header and that the header value specifies the `/amzn-s3-demo-source-bucket/public/*` key name prefix. To use this example policy, replace the `user input placeholders` with your own information.

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
       {
            "Sid": "cross-account permission to user in your own account",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::123456789012:user/Dave"
            },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::amzn-s3-demo-source-bucket/*"
        },
        {
            "Sid": "Deny your user permission to upload object if copy source is not /bucket/prefix",
            "Effect": "Deny",
            "Principal": {
                "AWS": "arn:aws:iam::123456789012:user/Dave"
            },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::amzn-s3-demo-source-bucket/*",
            "Condition": {
                "StringNotLike": {
                    "s3:x-amz-copy-source": "amzn-s3-demo-source-bucket/public/*"
                }
            }
        }
    ]
}
```

------

**Test the policy with the AWS CLI**  
You can test the permission by using the AWS CLI `copy-object` command. You specify the source by adding the `--copy-source` parameter; the key name prefix must match the prefix that the policy allows. You must provide the user Dave's credentials by using the `--profile` parameter. For more information about setting up the AWS CLI, see [Developing with Amazon S3 using the AWS CLI](https://docs.aws.amazon.com/AmazonS3/latest/API/setup-aws-cli.html) in the *Amazon S3 API Reference*.

```
aws s3api copy-object --bucket amzn-s3-demo-source-bucket --key HappyFace.jpg --copy-source amzn-s3-demo-source-bucket/public/PublicHappyFace1.jpg --profile AccountADave
```

**Give permission to copy only a specific object**  
The preceding policy uses the `StringNotLike` condition. To grant permission to copy only a specific object, change the condition from `StringNotLike` to `StringNotEquals` and then specify the exact object key, as shown in the following example. To use this example, replace the `user input placeholders` with your own information.

```
"Condition": {
       "StringNotEquals": {
           "s3:x-amz-copy-source": "amzn-s3-demo-source-bucket/public/PublicHappyFace1.jpg"
       }
}
```
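You can simulate the `StringNotLike` and `StringNotEquals` variants to check your prefix patterns before you deploy a policy. The sketch below approximates IAM's `*` wildcard with Python's `fnmatch`; the helper name is illustrative, not an AWS API:

```python
from fnmatch import fnmatch

def copy_source_deny_matches(copy_source):
    """Simulate the Deny's StringNotLike condition: the Deny applies unless the
    x-amz-copy-source header matches the allowed prefix pattern. A missing
    header also matches a negated operator, so a plain upload is denied too."""
    if copy_source is None:
        return True
    return not fnmatch(copy_source, "amzn-s3-demo-source-bucket/public/*")

print(copy_source_deny_matches("amzn-s3-demo-source-bucket/public/PublicHappyFace1.jpg"))  # False: copy allowed
print(copy_source_deny_matches("amzn-s3-demo-source-bucket/private/secret.jpg"))           # True: denied
print(copy_source_deny_matches(None))                                                      # True: plain PUT denied
```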

### Example 3: Granting access to a specific version of an object
<a name="getobjectversion-limit-access-to-specific-version-3"></a>

Suppose that Account A owns a versioning-enabled bucket. The bucket has several versions of the `HappyFace.jpg` object. The Account A administrator now wants to grant the user `Dave` permission to get only a specific version of the object. The account administrator can accomplish this by granting the user `Dave` the `s3:GetObjectVersion` permission conditionally, as shown in the following example. The key-value pair in the `Condition` block specifies the `s3:VersionId` condition key. In this case, to retrieve the object from the specified versioning-enabled bucket, `Dave` needs to know the exact object version ID. To use this example policy, replace the `user input placeholders` with your own information.

For more information, see [GetObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html) in the *Amazon Simple Storage Service API Reference*. 

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Sid": "statement1",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::123456789012:user/Dave"
            },
            "Action": "s3:GetObjectVersion",
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/HappyFace.jpg"
        },
        {
            "Sid": "statement2",
            "Effect": "Deny",
            "Principal": {
                "AWS": "arn:aws:iam::123456789012:user/Dave"
            },
            "Action": "s3:GetObjectVersion",
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/HappyFace.jpg",
            "Condition": {
                "StringNotEquals": {
                    "s3:VersionId": "AaaHbAQitwiL_h47_44lRO2DDfLlBO5e"
                }
            }
        }
    ]
}
```

------

**Test the policy with the AWS CLI**  
You can test the permissions in this policy by using the AWS CLI `get-object` command with the `--version-id` parameter to identify the specific object version to retrieve. The command retrieves the specified version of the object and saves it to the `OutputFile.jpg` file.

```
aws s3api get-object --bucket amzn-s3-demo-bucket --key HappyFace.jpg OutputFile.jpg --version-id AaaHbAQitwiL_h47_44lRO2DDfLlBO5e --profile AccountADave
```

### Example 4: Granting permissions based on object tags
<a name="example-object-tagging-access-control"></a>

For examples of how to use object tagging condition keys with Amazon S3 operations, see [Tagging and access control policies](tagging-and-policies.md).

### Example 5: Restricting access by the AWS account ID of the bucket owner
<a name="example-object-resource-account"></a>

You can use either the `aws:ResourceAccount` or `s3:ResourceAccount` condition key to write IAM or virtual private cloud (VPC) endpoint policies that restrict user, role, or application access to the Amazon S3 buckets that are owned by a specific AWS account ID. You can use these condition keys to restrict clients within your VPC from accessing buckets that you don't own.

However, be aware that some AWS services rely on access to AWS managed buckets. Therefore, using the `aws:ResourceAccount` or `s3:ResourceAccount` key in your IAM policy might also affect access to these resources. For more information, see the following resources:
+ [Restrict access to buckets in a specified AWS account](https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-s3.html#bucket-policies-s3) in the *AWS PrivateLink Guide*
+ [Restrict access to buckets that Amazon ECR uses](https://docs.aws.amazon.com/AmazonECR/latest/userguide/vpc-endpoints.html#ecr-minimum-s3-perms) in the *Amazon ECR Guide*
+ [Provide required access to Systems Manager for AWS managed Amazon S3 buckets](https://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-agent-minimum-s3-permissions.html) in the *AWS Systems Manager Guide*

For more information about the `aws:ResourceAccount` and `s3:ResourceAccount` condition keys and examples that show how to use them, see [Limit access to Amazon S3 buckets owned by specific AWS accounts](https://aws.amazon.com/blogs/storage/limit-access-to-amazon-s3-buckets-owned-by-specific-aws-accounts/) in the *AWS Storage Blog*.

### Example 6: Requiring a minimum TLS version
<a name="example-object-tls-version"></a>

You can use the `s3:TlsVersion` condition key to write IAM, virtual private cloud endpoint (VPCE), or bucket policies that restrict user or application access to Amazon S3 buckets based on the TLS version that's used by the client. You can use this condition key to write policies that require a minimum TLS version. 

**Note**  
When AWS services make calls to other AWS services on your behalf (service-to-service calls), certain network-specific authorization context is redacted, including `s3:TlsVersion`, `aws:SecureTransport`, `aws:SourceIp`, and `aws:VpcSourceIp`. If your policy uses these condition keys with `Deny` statements, AWS service principals might be unintentionally blocked. To allow AWS services to work properly while maintaining your security requirements, exclude service principals from your `Deny` statements by adding the `aws:PrincipalIsAWSService` condition key with a value of `false`. For example:  

```
{
  "Effect": "Deny",
  "Action": "s3:*",
  "Resource": "*",
  "Condition": {
    "Bool": {
      "aws:SecureTransport": "false",
      "aws:PrincipalIsAWSService": "false"
    }
  }
}
```
This policy denies access to S3 operations when HTTPS is not used (`aws:SecureTransport` is false), but only for non-AWS service principals. This ensures your conditional restrictions apply to all principals except AWS service principals.
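Because the two keys appear in the same `Bool` block, they're combined with a logical AND, so the `Deny` matches only when both evaluate to `false`. A quick illustrative simulation (not AWS evaluation code):

```python
def deny_matches(secure_transport: bool, is_aws_service: bool) -> bool:
    """Simulate the Bool condition: both keys must equal "false" for the Deny to apply."""
    return secure_transport is False and is_aws_service is False

print(deny_matches(False, False))  # True: plaintext request from a regular principal is denied
print(deny_matches(False, True))   # False: an AWS service principal is excluded from the Deny
print(deny_matches(True, False))   # False: an HTTPS request is not denied
```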

**Example**  
The following example bucket policy *denies* `PutObject` requests by clients that have a TLS version earlier than 1.2, for example, 1.1 or 1.0. To use this example policy, replace the `user input placeholders` with your own information.    
****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": [
                "arn:aws:s3:::amzn-s3-demo-bucket1",
                "arn:aws:s3:::amzn-s3-demo-bucket1/*"
            ],
            "Condition": {
                "NumericLessThan": {
                    "s3:TlsVersion": 1.2
                }
            }
        }
    ]
}
```

**Example**  
The following example bucket policy *allows* `PutObject` requests by clients that have a TLS version later than 1.1 (for example, 1.2 or 1.3):

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": [
                "arn:aws:s3:::amzn-s3-demo-bucket1",
                "arn:aws:s3:::amzn-s3-demo-bucket1/*"
            ],
            "Condition": {
                "NumericGreaterThan": {
                    "s3:TlsVersion": 1.1
                }
            }
        }
    ]
}
```
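The `NumericLessThan` comparison in the first policy can be modeled locally with a few lines of Python. This is a hypothetical sketch of the comparison only (`tls_deny_applies` is an illustrative name); IAM performs the actual evaluation:

```python
# Model of the NumericLessThan check on s3:TlsVersion in the Deny
# statement above; illustrative only, not the IAM evaluation engine.

def tls_deny_applies(tls_version: float, minimum: float = 1.2) -> bool:
    """Return True when the Deny statement would match, that is, when
    the client negotiated a TLS version below the required minimum."""
    return tls_version < minimum

for version in (1.0, 1.1, 1.2, 1.3):
    status = "denied" if tls_deny_applies(version) else "allowed"
    print(f"TLS {version}: PutObject {status}")
```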

### Example 7: Excluding certain principals from a `Deny` statement
<a name="example-exclude-principal-from-deny-statement"></a>

The following bucket policy denies `s3:GetObject` access to `amzn-s3-demo-bucket` for all principals except those in the AWS account *`123456789012`*. To use this example policy, replace the `user input placeholders` with your own information.

------
#### [ JSON ]


```
{
  "Version":"2012-10-17",		 	 	 
  "Statement": [
    {
      "Sid": "DenyAccessFromPrincipalNotInSpecificAccount",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:GetObject",
      "Effect": "Deny",
      "Resource": [
        "arn:aws:s3:::amzn-s3-demo-bucket/*"
      ],
      "Condition": {
        "StringNotEquals": {
          "aws:PrincipalAccount": [
            "123456789012"
          ]
        }
      }
    }
  ]
}
```

------

### Example 8: Enforcing clients to conditionally upload objects based on object key names or ETags
<a name="example-conditional-writes-enforce"></a>

With conditional writes, you can add an additional header to your `WRITE` requests to specify preconditions for your S3 operation. This header specifies a condition that, if not met, causes the S3 operation to fail. For example, you can prevent overwrites of existing data by validating that no object with the same key name already exists in your bucket during object upload. Alternatively, you can check an object's entity tag (ETag) in Amazon S3 before writing to it.

For bucket policy examples that use conditions in a bucket policy to enforce conditional writes, see [Enforce conditional writes on Amazon S3 buckets](conditional-writes-enforce.md).
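As a sketch of how a client supplies these preconditions, the following Python helper assembles the arguments for an upload call. The `IfNoneMatch` and `IfMatch` parameter names reflect how recent boto3 versions expose the `If-None-Match` and `If-Match` headers on `put_object`, but verify them against your SDK version; `build_put_kwargs` itself is a hypothetical helper:

```python
# Assemble conditional-write arguments for an S3 upload; you would pass
# the result to a boto3 client as s3.put_object(**kwargs). Illustrative
# helper only -- verify the parameter names against your SDK version.

def build_put_kwargs(bucket, key, body, expected_etag=None):
    kwargs = {"Bucket": bucket, "Key": key, "Body": body}
    if expected_etag is None:
        # If-None-Match: * fails the upload when any object already
        # exists at this key, preventing accidental overwrites.
        kwargs["IfNoneMatch"] = "*"
    else:
        # If-Match fails the upload unless the object's current ETag
        # still matches the one the client last read.
        kwargs["IfMatch"] = expected_etag
    return kwargs
```

A failed precondition surfaces as an HTTP `412 Precondition Failed` error, which the client can handle by re-reading the object and retrying.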

## Examples: Amazon S3 condition keys for bucket operations
<a name="bucket-keys-in-amazon-s3-policies"></a>

The following example policies show how you can use Amazon S3 specific condition keys for bucket operations.

**Topics**
+ [

### Example 1: Granting `s3:GetObject` permission with a condition on an IP address
](#AvailableKeys-iamV2)
+ [

### Example 2: Getting a list of objects in a bucket with a specific prefix
](#condition-key-bucket-ops-2)
+ [

### Example 3: Setting the maximum number of keys
](#example-numeric-condition-operators)

### Example 1: Granting `s3:GetObject` permission with a condition on an IP address
<a name="AvailableKeys-iamV2"></a>

You can give authenticated users permission to use the `s3:GetObject` action if the request originates from a specific range of IP addresses (for example, `192.0.2.*`), unless the IP address is one that you want to exclude (for example, `192.0.2.188`). In the `Condition` block, `IpAddress` and `NotIpAddress` are conditions, and each condition is provided a key-value pair for evaluation. Both of the key-value pairs in this example use the AWS-wide `aws:SourceIp` key. To use this example policy, replace the `user input placeholders` with your own information.

**Note**  
The `IpAddress` and `NotIpAddress` values specified in the `Condition` block use CIDR notation, as described in RFC 4632. For more information, see [http://www.rfc-editor.org/rfc/rfc4632.txt](http://www.rfc-editor.org/rfc/rfc4632.txt).

------
#### [ JSON ]


```
{
    "Version":"2012-10-17",		 	 	 
    "Id": "S3PolicyId1",
    "Statement": [
        {
            "Sid": "statement1",
            "Effect": "Allow",
            "Principal": "*",
            "Action":"s3:GetObject",
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*",
            "Condition" : {
                "IpAddress" : {
                    "aws:SourceIp": "192.0.2.0/24" 
                },
                "NotIpAddress" : {
                    "aws:SourceIp": "192.0.2.188/32" 
                } 
            } 
        } 
    ]
}
```

------
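The effect of the `IpAddress` and `NotIpAddress` conditions can be reproduced locally with the Python standard library. This is a toy model of the check only (`request_allowed` is an illustrative name); the real evaluation happens inside AWS:

```python
# Toy model of the IpAddress / NotIpAddress evaluation in the policy
# above, using only the standard library.
from ipaddress import ip_address, ip_network

ALLOWED_RANGE = ip_network("192.0.2.0/24")
EXCLUDED = ip_network("192.0.2.188/32")

def request_allowed(source_ip: str) -> bool:
    ip = ip_address(source_ip)
    # The Allow matches only when the source IP is inside the allowed
    # CIDR range and outside the excluded address.
    return ip in ALLOWED_RANGE and ip not in EXCLUDED

print(request_allowed("192.0.2.10"))   # inside the allowed range
print(request_allowed("192.0.2.188"))  # explicitly excluded
print(request_allowed("203.0.113.5"))  # outside the allowed range
```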

You can also use other AWS-wide condition keys in Amazon S3 policies. For example, you can specify the `aws:SourceVpce` and `aws:SourceVpc` condition keys in bucket policies for VPC endpoints. For specific examples, see [Controlling access from VPC endpoints with bucket policies](example-bucket-policies-vpc-endpoint.md).

**Note**  
For some AWS global condition keys, only certain resource types are supported. Therefore, check whether Amazon S3 supports the global condition key and resource type that you want to use, or whether you need to use an Amazon S3 specific condition key instead. For a complete list of supported resource types and condition keys for Amazon S3, see [Actions, resources, and condition keys for Amazon S3](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazons3.html) in the *Service Authorization Reference*.  
For more information about the permissions to S3 API operations by S3 resource types, see [Required permissions for Amazon S3 API operations](using-with-s3-policy-actions.md).

### Example 2: Getting a list of objects in a bucket with a specific prefix
<a name="condition-key-bucket-ops-2"></a>

You can use the `s3:prefix` condition key to limit the response of the [https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html) API operation to key names with a specific prefix. If you are the bucket owner, you can use this condition key to restrict a user to listing only the contents of a specific prefix in the bucket. The `s3:prefix` condition key is useful if the objects in the bucket are organized by key name prefixes. 

The Amazon S3 console uses key name prefixes to show a folder concept. Only the console supports the concept of folders; the Amazon S3 API supports only buckets and objects. For example, if you have two objects with the key names *`public/object1.jpg`* and *`public/object2.jpg`*, the console shows the objects under the *`public`* folder. In the Amazon S3 API, these are objects with prefixes, not objects in folders. For more information about using prefixes and delimiters to filter access permissions, see [Controlling access to a bucket with user policies](walkthrough1.md). 

In the following scenario, the bucket owner and the parent account to which the user belongs are the same. So the bucket owner can use either a bucket policy or a user policy to grant access. For more information about other condition keys that you can use with the `ListObjectsV2` API operation, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html).

**Note**  
If the bucket is versioning-enabled, to list the objects in the bucket, you must grant the `s3:ListBucketVersions` permission in the following policies, instead of the `s3:ListBucket` permission. The `s3:ListBucketVersions` permission also supports the `s3:prefix` condition key. 

**User policy**  
The following user policy grants the `s3:ListBucket` permission (see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html)) with a `Condition` statement that requires the user to specify a prefix in the request with a value of `projects`. To use this example policy, replace the `user input placeholders` with your own information.

------
#### [ JSON ]


```
{
   "Version":"2012-10-17",		 	 	 
   "Statement":[
      {
         "Sid":"statement1",
         "Effect":"Allow",
         "Action": "s3:ListBucket",
         "Resource":"arn:aws:s3:::amzn-s3-demo-bucket",
         "Condition" : {
             "StringEquals" : {
                 "s3:prefix": "projects" 
             }
          } 
       },
      {
         "Sid":"statement2",
         "Effect":"Deny",
         "Action": "s3:ListBucket",
         "Resource": "arn:aws:s3:::amzn-s3-demo-bucket",
         "Condition" : {
             "StringNotEquals" : {
                 "s3:prefix": "projects" 
             }
          } 
       }         
    ]
}
```

------

The `Condition` statement restricts the user to listing only object keys that have the `projects` prefix. The added explicit `Deny` statement denies the user from listing keys with any other prefix, no matter what other permissions the user might have. For example, it's possible that the user could get permission to list object keys without any restriction, either through updates to the preceding user policy or through a bucket policy. Because explicit `Deny` statements always override `Allow` statements, if the user tries to list keys other than those that have the `projects` prefix, the request is denied. 
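The precedence rule that this paragraph describes can be sketched as a small evaluation loop. This is a simplified toy model (statements reduced to an effect and a prefix predicate), not the actual IAM policy evaluation engine:

```python
# Toy model of IAM evaluation order: an explicit Deny overrides any
# Allow, and a request that matches nothing is implicitly denied.

def evaluate(statements, requested_prefix):
    decision = "ImplicitDeny"
    for effect, matches in statements:
        if matches(requested_prefix):
            if effect == "Deny":
                return "Deny"  # explicit deny always wins
            decision = "Allow"
    return decision

policy = [
    ("Allow", lambda prefix: prefix == "projects"),
    ("Deny",  lambda prefix: prefix != "projects"),
]

print(evaluate(policy, "projects"))      # only the Allow matches
print(evaluate(policy, "confidential"))  # the explicit Deny matches
```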

**Bucket policy**  
If you add a `Principal` element that identifies the user to the preceding user policy, you have a bucket policy, as shown in the following example. To use this example policy, replace the `user input placeholders` with your own information.

------
#### [ JSON ]


```
{
   "Version":"2012-10-17",		 	 	 
   "Statement":[
      {
         "Sid":"statement1",
         "Effect":"Allow",
         "Principal": {
            "AWS": "arn:aws:iam::123456789012:user/bucket-owner"
         },  
         "Action":  "s3:ListBucket",
         "Resource": "arn:aws:s3:::amzn-s3-demo-bucket",
         "Condition" : {
             "StringEquals" : {
                 "s3:prefix": "projects" 
             }
          } 
       },
      {
         "Sid":"statement2",
         "Effect":"Deny",
         "Principal": {
            "AWS": "arn:aws:iam::123456789012:user/bucket-owner"
         },  
         "Action": "s3:ListBucket",
         "Resource": "arn:aws:s3:::amzn-s3-demo-bucket",
         "Condition" : {
             "StringNotEquals" : {
                 "s3:prefix": "projects"  
             }
          } 
       }         
    ]
}
```

------

**Test the policy with the AWS CLI**  
You can test the policy by using the following `list-objects` AWS CLI command. In the command, you provide user credentials by using the `--profile` parameter. For more information about setting up and using the AWS CLI, see [Developing with Amazon S3 using the AWS CLI](https://docs.aws.amazon.com/AmazonS3/latest/API/setup-aws-cli.html) in the *Amazon S3 API Reference*.

```
aws s3api list-objects --bucket amzn-s3-demo-bucket --prefix projects --profile AccountA
```

### Example 3: Setting the maximum number of keys
<a name="example-numeric-condition-operators"></a>

You can use the `s3:max-keys` condition key to set the maximum number of keys that a requester can return in a [https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html) or [https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectVersions.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectVersions.html) request. By default, these API operations return up to 1,000 keys. For a list of numeric condition operators that you can use with `s3:max-keys` and accompanying examples, see [Numeric Condition Operators](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_condition_operators.html#Conditions_Numeric) in the *IAM User Guide*.
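The paging behavior that `s3:max-keys` constrains can be sketched with a toy model of `ListObjectsV2`. This is illustrative only; the real operation also returns continuation tokens and common prefixes:

```python
# Toy model of ListObjectsV2 paging: keys come back in lexicographic
# order, at most max_keys per page, with a truncation flag.

def list_objects_page(keys, max_keys=1000, start_after=None):
    remaining = sorted(k for k in keys if start_after is None or k > start_after)
    return remaining[:max_keys], len(remaining) > max_keys

keys = [f"logs/{i:04d}.txt" for i in range(25)]
page, truncated = list_objects_page(keys, max_keys=10)
# The caller continues from the last key of the previous page.
page2, _ = list_objects_page(keys, max_keys=10, start_after=page[-1])
```

A policy condition such as `NumericLessThanEquals` on `s3:max-keys` caps the `max-keys` value that a requester may send, limiting the page size of each response.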

# Identity-based policies for Amazon S3
<a name="security_iam_id-based-policy-examples"></a>

By default, users and roles don't have permission to create or modify Amazon S3 resources. To grant users permission to perform actions on the resources that they need, an IAM administrator can create IAM policies.

To learn how to create an IAM identity-based policy by using these example JSON policy documents, see [Create IAM policies (console)](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create-console.html) in the *IAM User Guide*.

For details about actions and resource types defined by Amazon S3, including the format of the ARNs for each of the resource types, see [Actions, resources, and condition keys for Amazon S3](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazons3.html) in the *Service Authorization Reference*.

For more information about the permissions to S3 API operations by S3 resource types, see [Required permissions for Amazon S3 API operations](using-with-s3-policy-actions.md).

**Topics**
+ [

## Policy best practices
](#security_iam_service-with-iam-policy-best-practices)
+ [

# Controlling access to a bucket with user policies
](walkthrough1.md)
+ [

# Identity-based policy examples for Amazon S3
](example-policies-s3.md)

## Policy best practices
<a name="security_iam_service-with-iam-policy-best-practices"></a>

Identity-based policies determine whether someone can create, access, or delete Amazon S3 resources in your account. These actions can incur costs for your AWS account. When you create or edit identity-based policies, follow these guidelines and recommendations:
+ **Get started with AWS managed policies and move toward least-privilege permissions** – To get started granting permissions to your users and workloads, use the *AWS managed policies* that grant permissions for many common use cases. They are available in your AWS account. We recommend that you reduce permissions further by defining AWS customer managed policies that are specific to your use cases. For more information, see [AWS managed policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-vs-inline.html#aws-managed-policies) or [AWS managed policies for job functions](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_job-functions.html) in the *IAM User Guide*.
+ **Apply least-privilege permissions** – When you set permissions with IAM policies, grant only the permissions required to perform a task. You do this by defining the actions that can be taken on specific resources under specific conditions, also known as *least-privilege permissions*. For more information about using IAM to apply permissions, see [ Policies and permissions in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html) in the *IAM User Guide*.
+ **Use conditions in IAM policies to further restrict access** – You can add a condition to your policies to limit access to actions and resources. For example, you can write a policy condition to specify that all requests must be sent using SSL. You can also use conditions to grant access to service actions if they are used through a specific AWS service, such as CloudFormation. For more information, see [ IAM JSON policy elements: Condition](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_condition.html) in the *IAM User Guide*.
+ **Use IAM Access Analyzer to validate your IAM policies to ensure secure and functional permissions** – IAM Access Analyzer validates new and existing policies so that the policies adhere to the IAM policy language (JSON) and IAM best practices. IAM Access Analyzer provides more than 100 policy checks and actionable recommendations to help you author secure and functional policies. For more information, see [Validate policies with IAM Access Analyzer](https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-policy-validation.html) in the *IAM User Guide*.
+ **Require multi-factor authentication (MFA)** – If you have a scenario that requires IAM users or a root user in your AWS account, turn on MFA for additional security. To require MFA when API operations are called, add MFA conditions to your policies. For more information, see [ Secure API access with MFA](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_configure-api-require.html) in the *IAM User Guide*.

For more information about best practices in IAM, see [Security best practices in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html) in the *IAM User Guide*.

# Controlling access to a bucket with user policies
<a name="walkthrough1"></a>

This walkthrough explains how user permissions work with Amazon S3. In this example, you create a bucket with folders. You then create AWS Identity and Access Management (IAM) users in your AWS account and grant those users incremental permissions on your Amazon S3 bucket and the folders in it. 

**Topics**
+ [

## Basics of buckets and folders
](#walkthrough-background1)
+ [

## Walkthrough summary
](#walkthrough-scenario)
+ [

## Preparing for the walkthrough
](#walkthrough-what-you-need)
+ [

## Step 1: Create a bucket
](#walkthrough1-create-bucket)
+ [

## Step 2: Create IAM users and a group
](#walkthrough1-add-users)
+ [

## Step 3: Verify that IAM users have no permissions
](#walkthrough1-verify-no-user-permissions)
+ [

## Step 4: Grant group-level permissions
](#walkthrough-group-policy)
+ [

## Step 5: Grant IAM user Alice specific permissions
](#walkthrough-grant-user1-permissions)
+ [

## Step 6: Grant IAM user Bob specific permissions
](#walkthrough1-grant-permissions-step5)
+ [

## Step 7: Secure the private folder
](#walkthrough-secure-private-folder-explicit-deny)
+ [

## Step 8: Clean up
](#walkthrough-cleanup)
+ [

## Related resources
](#RelatedResources-walkthrough1)

## Basics of buckets and folders
<a name="walkthrough-background1"></a>

The Amazon S3 data model is a flat structure: You create a bucket, and the bucket stores objects. There is no hierarchy of subbuckets or subfolders, but you can emulate a folder hierarchy. Tools like the Amazon S3 console can present a view of these logical folders and subfolders in your bucket.

Suppose that a bucket named `companybucket` contains three folders, `Private`, `Development`, and `Finance`, and an object, `s3-dg.pdf`. The console uses the object names (keys) to create a logical hierarchy with folders and subfolders. Consider the following examples:
+ When you create the `Development` folder, the console creates an object with the key `Development/`. Note the trailing slash (`/`) delimiter.
+ When you upload an object named `Projects1.xls` in the `Development` folder, the console uploads the object and gives it the key `Development/Projects1.xls`. 

  In the key, `Development` is the [prefix](https://docs.aws.amazon.com/general/latest/gr/glos-chap.html#keyprefix) and `/` is the delimiter. The Amazon S3 API supports prefixes and delimiters in its operations. For example, you can get a list of all objects from a bucket with a specific prefix and delimiter. On the console, when you open the `Development` folder, the console lists the objects in that folder. 

  When the console lists the `Development` folder in the `companybucket` bucket, it sends a request to Amazon S3 that specifies a prefix of `Development` and a delimiter of `/`. The console's response looks just like a folder list in your computer's file system: the bucket appears to contain a `Development` folder that holds the `Projects1.xls` object.

The console uses object keys to infer a logical hierarchy. Amazon S3 itself has no physical hierarchy; it has only buckets that contain objects in a flat structure. When you create objects by using the Amazon S3 API, you can use object keys that imply a logical hierarchy. When you create a logical hierarchy of objects, you can manage access to individual folders, as this walkthrough demonstrates.

Before you start, be sure that you are familiar with the concept of the *root-level* bucket content. Suppose that your `companybucket` bucket has the following objects:
+ `Private/privDoc1.txt`
+ `Private/privDoc2.zip`
+ `Development/project1.xls`
+ `Development/project2.xls`
+ `Finance/Tax2011/document1.pdf`
+ `Finance/Tax2011/document2.pdf`
+ `s3-dg.pdf`

These object keys create a logical hierarchy with `Private`, `Development`, and `Finance` as root-level folders and `s3-dg.pdf` as a root-level object. When you choose the bucket name on the Amazon S3 console, the root-level items appear. The console shows the top-level prefixes (`Private/`, `Development/`, and `Finance/`) as root-level folders. The object key `s3-dg.pdf` has no prefix, so it appears as a root-level item.
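The grouping that produces this root-level view can be sketched in a few lines of Python. This toy model mimics what a prefix-and-delimiter listing returns (folders correspond to the `CommonPrefixes` element in the API response, and root-level objects to `Contents`); `root_level_view` is an illustrative name:

```python
# Group object keys by the first delimiter, the way a delimiter listing
# produces root-level "folders" and root-level objects.

def root_level_view(keys, delimiter="/"):
    folders, objects = set(), []
    for key in keys:
        head, sep, _ = key.partition(delimiter)
        if sep:
            folders.add(head + delimiter)  # would be a CommonPrefixes entry
        else:
            objects.append(key)            # stays a root-level object
    return sorted(folders), objects

keys = [
    "Private/privDoc1.txt", "Private/privDoc2.zip",
    "Development/project1.xls", "Development/project2.xls",
    "Finance/Tax2011/document1.pdf", "Finance/Tax2011/document2.pdf",
    "s3-dg.pdf",
]
print(root_level_view(keys))
```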



## Walkthrough summary
<a name="walkthrough-scenario"></a>

In this walkthrough, you create a bucket with three folders (`Private`, `Development`, and `Finance`) in it. 

You have two users, Alice and Bob. You want Alice to access only the `Development` folder, and you want Bob to access only the `Finance` folder. You want to keep the `Private` folder content private. In the walkthrough, you manage access by creating IAM users (the example uses the usernames Alice and Bob) and granting them the necessary permissions. 

IAM also supports creating user groups and granting group-level permissions that apply to all users in the group. This helps you better manage permissions. For this exercise, both Alice and Bob need some common permissions. So you also create a group named `Consultants` and then add both Alice and Bob to the group. You first grant permissions by attaching a group policy to the group. Then you add user-specific permissions by attaching policies to specific users.

**Note**  
The walkthrough uses `companybucket` as the bucket name, Alice and Bob as the IAM users, and `Consultants` as the group name. Because Amazon S3 requires that bucket names be globally unique, you must replace the bucket name with a name that you create.

## Preparing for the walkthrough
<a name="walkthrough-what-you-need"></a>

 In this example, you use your AWS account credentials to create IAM users. Initially, these users have no permissions. You incrementally grant these users permissions to perform specific Amazon S3 actions. To test these permissions, you sign in to the console with each user's credentials. As you incrementally grant permissions as an AWS account owner and test permissions as an IAM user, you need to sign in and out, each time using different credentials. You can do this testing with one browser, but the process will go faster if you can use two different browsers. Use one browser to connect to the AWS Management Console with your AWS account credentials and another browser to connect with the IAM user credentials. 

 To sign in to the AWS Management Console with your AWS account credentials, go to [https://console.aws.amazon.com/](https://console.aws.amazon.com/).  An IAM user can't sign in using the same link. An IAM user must use an IAM-enabled sign-in page. As the account owner, you can provide this link to your users. 

For more information about IAM, see [The AWS Management Console Sign-in Page](https://docs.aws.amazon.com/IAM/latest/UserGuide/console.html) in the *IAM User Guide*.

### To provide a sign-in link for IAM users
<a name="walkthrough-sign-in-user-credentials"></a>

1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. In the navigation pane, choose **Dashboard**.

1. Note the URL under **IAM users sign-in link**. You will give this link to IAM users so that they can sign in to the console with their IAM user name and password.

## Step 1: Create a bucket
<a name="walkthrough1-create-bucket"></a>

In this step, you sign in to the Amazon S3 console with your AWS account credentials, create a bucket, add folders to the bucket, and upload one or two sample documents in each folder. 

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. Create a bucket. 

   For step-by-step instructions, see [Creating a general purpose bucket](create-bucket-overview.md).

1. Upload one document to the bucket.

   This exercise assumes that you have the `s3-dg.pdf` document at the root level of this bucket. If you upload a different document, substitute its file name for `s3-dg.pdf`.

1. Add three folders named `Private`, `Finance`, and `Development` to the bucket.

   For step-by-step instructions to create a folder, see [Organizing objects in the Amazon S3 console by using folders](using-folders.md) in the *Amazon Simple Storage Service User Guide*.

1. Upload one or two documents to each folder. 

   For this exercise, assume that you have uploaded a couple of documents in each folder, resulting in the bucket having objects with the following keys:
   + `Private/privDoc1.txt`
   + `Private/privDoc2.zip`
   + `Development/project1.xls`
   + `Development/project2.xls`
   + `Finance/Tax2011/document1.pdf`
   + `Finance/Tax2011/document2.pdf`
   + `s3-dg.pdf`

   

   For step-by-step instructions, see [Uploading objects](upload-objects.md). 

## Step 2: Create IAM users and a group
<a name="walkthrough1-add-users"></a>

Now use the [IAM Console](https://console.aws.amazon.com/iam/) to add two IAM users, Alice and Bob, to your AWS account. For step-by-step instructions, see [Creating an IAM user in your AWS account](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html#id_users_create_console) in the *IAM User Guide*. 

Also create an administrative group named `Consultants`. Then add both users to the group. For step-by-step instructions, see [Creating IAM user groups](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_groups_create.html). 

**Warning**  
When you add users and a group, do not attach any policies that grant permissions to these users. At first, these users don't have any permissions. In the following sections, you grant permissions incrementally. First you must ensure that you have assigned passwords to these IAM users. You use these user credentials to test Amazon S3 actions and verify that the permissions work as expected.

For step-by-step instructions for creating a new IAM user, see [Creating an IAM user in your AWS account](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html) in the *IAM User Guide*. When you create the users for this walkthrough, select **AWS Management Console access** and clear [programmatic access](https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html#access-keys-and-secret-access-keys).

For step-by-step instructions for creating an administrative group, see [Creating Your First IAM Admin User and Group](https://docs.aws.amazon.com/IAM/latest/UserGuide/getting-started_create-admin-group.html) in the *IAM User Guide*.



## Step 3: Verify that IAM users have no permissions
<a name="walkthrough1-verify-no-user-permissions"></a>

If you are using two browsers, you can now use the second browser to sign in to the console using one of the IAM user credentials.

1. Using the IAM user sign-in link (see [To provide a sign-in link for IAM users](#walkthrough-sign-in-user-credentials)), sign in to the AWS Management Console using either of the IAM user credentials.

1. Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

    Verify the console message telling you that access is denied. 

Now, you can begin granting incremental permissions to the users. First, you attach a group policy that grants permissions that both users must have. 

## Step 4: Grant group-level permissions
<a name="walkthrough-group-policy"></a>

You want the users to be able to do the following:
+ List all buckets owned by the parent account. To do so, Bob and Alice must have permission for the `s3:ListAllMyBuckets` action.
+ List root-level items, folders, and objects in the `companybucket` bucket. To do so, Bob and Alice must have permission for the `s3:ListBucket` action on the `companybucket` bucket.

First, you create a policy that grants these permissions, and then you attach it to the `Consultants` group. 

### Step 4.1: Grant permission to list all buckets
<a name="walkthrough1-grant-permissions-step1"></a>

In this step, you create a managed policy that grants the users minimum permissions to enable them to list all buckets owned by the parent account. Then you attach the policy to the `Consultants` group. When you attach the managed policy to a user or a group, you grant the user or group permission to obtain a list of buckets owned by the parent AWS account.

1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).
**Note**  
Because you are granting user permissions, sign in using your AWS account credentials, not as an IAM user.

1. Create the managed policy.

   1. In the navigation pane on the left, choose **Policies**, and then choose **Create Policy**.

   1. Choose the **JSON** tab.

   1. Copy the following access policy and paste it into the policy text field.

------
#### [ JSON ]


      ```
      {
        "Version":"2012-10-17",		 	 	 
        "Statement": [
          {
            "Sid": "AllowGroupToSeeBucketListInTheConsole",
            "Action": ["s3:ListAllMyBuckets"],
            "Effect": "Allow",
            "Resource": ["arn:aws:s3:::*"]
          }
        ]
      }
      ```

------

      A policy is a JSON document. In the document, a `Statement` is an array of objects, each describing a permission by using a collection of name-value pairs. The preceding policy describes one specific permission. The `Action` element specifies the type of access. In the policy, `s3:ListAllMyBuckets` is a predefined Amazon S3 action that covers the Amazon S3 `GET Service` operation, which returns a list of all buckets owned by the authenticated sender. The `Effect` element value determines whether the specific permission is allowed or denied.

   1. Choose **Review Policy**. On the next page, enter `AllowGroupToSeeBucketListInTheConsole` in the **Name** field, and then choose **Create policy**.
**Note**  
The **Summary** entry displays a message stating that the policy does not grant any permissions. For this walkthrough, you can safely ignore this message.

1. Attach the `AllowGroupToSeeBucketListInTheConsole` managed policy that you created to the `Consultants` group.

   For step-by-step instructions for attaching a managed policy, see [Adding and removing IAM identity permissions](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-attach-detach.html#attach-managed-policy-console) in the *IAM User Guide*. 

   You attach policy documents to IAM users and groups in the IAM console. Because you want both users to be able to list the buckets, you attach the policy to the group. 

1. Test the permission.

   1. Using the IAM user sign-in link (see [To provide a sign-in link for IAM users](#walkthrough-sign-in-user-credentials)), sign in to the console using either IAM user's credentials.

   1. Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

      The console should now list all the buckets but not the objects in any of the buckets.

### Step 4.2: Enable users to list root-level content of a bucket
<a name="walkthrough1-grant-permissions-step2"></a>

Next, you allow all users in the `Consultants` group to list the root-level `companybucket` bucket items. When a user chooses the company bucket on the Amazon S3 console, the user can see the root-level items in the bucket.

**Note**  
This example uses `companybucket` for illustration. You must use the name of the bucket that you created.

To understand the request that the console sends to Amazon S3 when you choose a bucket name, the response that Amazon S3 returns, and how the console interprets the response, examine the flow a little more closely.

When you choose a bucket name, the console sends the [GET Bucket (List Objects)](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGET.html) request to Amazon S3. This request includes the following parameters:
+ The `prefix` parameter with an empty string as its value. 
+ The `delimiter` parameter with `/` as its value. 

The following is an example request.

```
GET ?prefix=&delimiter=/ HTTP/1.1 
Host: companybucket.s3.amazonaws.com
Date: Wed, 01 Aug 2012 12:00:00 GMT
Authorization: AWS AKIAIOSFODNN7EXAMPLE:xQE0diMbLRepdf3YB+FIEXAMPLE=
```

Amazon S3 returns a response that includes the following `<ListBucketResult/>` element.

```
<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Name>companybucket</Name>
  <Prefix></Prefix>
  <Delimiter>/</Delimiter>
   ...
  <Contents>
    <Key>s3-dg.pdf</Key>
    ...
  </Contents>
  <CommonPrefixes>
    <Prefix>Development/</Prefix>
  </CommonPrefixes>
  <CommonPrefixes>
    <Prefix>Finance/</Prefix>
  </CommonPrefixes>
  <CommonPrefixes>
    <Prefix>Private/</Prefix>
  </CommonPrefixes>
</ListBucketResult>
```

The key `s3-dg.pdf` does not contain the slash (`/`) delimiter, so Amazon S3 returns the key in the `<Contents>` element. However, all other keys in the example bucket contain the `/` delimiter. Amazon S3 groups these keys and returns a `<CommonPrefixes>` element for each of the distinct prefix values `Development/`, `Finance/`, and `Private/`. Each prefix is the substring from the beginning of the key to the first occurrence of the specified `/` delimiter. 

The console interprets this result and displays the root-level items as three folders and one object key. 

If Bob or Alice opens the **Development** folder, the console sends the [GET Bucket (List Objects)](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGET.html) request to Amazon S3 with the `prefix` and the `delimiter` parameters set to the following values:
+ The `prefix` parameter with the value `Development/`.
+ The `delimiter` parameter with the "`/`" value. 

In response, Amazon S3 returns the object keys that start with the specified prefix. 

```
<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Name>companybucket</Name>
  <Prefix>Development/</Prefix>
  <Delimiter>/</Delimiter>
   ...
  <Contents>
    <Key>Project1.xls</Key>
    ...
  </Contents>
  <Contents>
    <Key>Project2.xls</Key>
    ...
  </Contents> 
</ListBucketResult>
```

The console shows the object keys.
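The prefix-and-delimiter grouping described above can be sketched locally. The following Python sketch is an illustration of the grouping rule only, not the service implementation: keys that contain the delimiter after the prefix are rolled up into common prefixes, and the rest are returned as contents.

```python
def list_objects(keys, prefix="", delimiter="/"):
    """Group keys the way a ListObjects request does: keys whose remainder
    (after the prefix) contains the delimiter are rolled up into common
    prefixes; the rest are returned as contents."""
    contents, common_prefixes = [], []
    for key in sorted(keys):
        if not key.startswith(prefix):
            continue
        rest = key[len(prefix):]
        if delimiter and delimiter in rest:
            # Roll the key up under its prefix, ending at the first delimiter.
            rollup = prefix + rest.split(delimiter)[0] + delimiter
            if rollup not in common_prefixes:
                common_prefixes.append(rollup)
        else:
            contents.append(key)
    return contents, common_prefixes

keys = [
    "s3-dg.pdf",
    "Development/Project1.xls",
    "Development/Project2.xls",
    "Finance/Tax2011/document1.pdf",
    "Private/privDoc1.txt",
]
print(list_objects(keys))                         # root-level view
print(list_objects(keys, prefix="Development/"))  # inside one folder
```

With an empty prefix, the sketch yields one object key and three common prefixes, which is exactly what the console renders as one object and three folders.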

Now, return to granting users permission to list the root-level bucket items. To list bucket content, users need permission to call the `s3:ListBucket` action, as shown in the following policy statement. To ensure that they see only the root-level content, you add a condition that requires users to specify an empty `prefix` in the request (that is, they can't open any of the root-level folders). Finally, you require folder-style access by adding a condition that user requests must include the `delimiter` parameter with the value "`/`". 

```
{
  "Sid": "AllowRootLevelListingOfCompanyBucket",
  "Action": ["s3:ListBucket"],
  "Effect": "Allow",
  "Resource": ["arn:aws:s3:::companybucket"],
  "Condition":{ 
         "StringEquals":{
             "s3:prefix":[""], "s3:delimiter":["/"]
                        }
              }
}
```

When you choose a bucket on the Amazon S3 console, the console first sends the [GET Bucket location](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGETlocation.html) request to find the AWS Region where the bucket is deployed. Then the console uses the Region-specific endpoint for the bucket to send the [GET Bucket (List Objects)](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGET.html) request. As a result, if users are going to use the console, you must grant permission for the `s3:GetBucketLocation` action as shown in the following policy statement.

```
{
   "Sid": "RequiredByS3Console",
   "Action": ["s3:GetBucketLocation"],
   "Effect": "Allow",
   "Resource": ["arn:aws:s3:::*"]
}
```

**To enable users to list root-level bucket content**

1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

   Use your AWS account credentials, not the credentials of an IAM user, to sign in to the console.

1. Replace the existing `AllowGroupToSeeBucketListInTheConsole` managed policy that is attached to the `Consultants` group with the following policy, which also allows the `s3:ListBucket` action. Remember to replace *`companybucket`* in the policy `Resource` with the name of your bucket. 

   For step-by-step instructions, see [Editing IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-edit.html) in the *IAM User Guide*. When following the step-by-step instructions, be sure to follow the steps for applying your changes to all principal entities that the policy is attached to. 

------
#### [ JSON ]

****  

   ```
   {
     "Version":"2012-10-17",		 	 	                  
     "Statement": [
        {
          "Sid": "AllowGroupToSeeBucketListAndAlsoAllowGetBucketLocationRequiredForListBucket",
          "Action": [ "s3:ListAllMyBuckets", "s3:GetBucketLocation" ],
          "Effect": "Allow",
          "Resource": [ "arn:aws:s3:::*"  ]
        },
        {
          "Sid": "AllowRootLevelListingOfCompanyBucket",
          "Action": ["s3:ListBucket"],
          "Effect": "Allow",
          "Resource": ["arn:aws:s3:::companybucket"],
          "Condition":{ 
                "StringEquals":{
                       "s3:prefix":[""], "s3:delimiter":["/"]
                              }
                      }
        }
     ] 
   }
   ```

------

1. Test the updated permissions.

   1. Using the IAM user sign-in link (see [To provide a sign-in link for IAM users](#walkthrough-sign-in-user-credentials)), sign in to the AWS Management Console. 

      Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

   1. Choose the bucket that you created, and the console shows the root-level bucket items. If you choose any folders in the bucket, you won't be able to see the folder content because you haven't yet granted those permissions.

This test succeeds when users use the Amazon S3 console. When you choose a bucket on the console, the console implementation sends a request that includes the `prefix` parameter with an empty string as its value and the `delimiter` parameter with "`/`" as its value.

### Step 4.3: Summary of the group policy
<a name="walkthrough-group-policy-summary"></a>

The net effect of the group policy that you added is to grant the IAM users Alice and Bob the following minimum permissions:
+ List all buckets owned by the parent account.
+ See root-level items in the `companybucket` bucket. 

However, the users still can't do much. Next, you grant user-specific permissions, as follows:
+ Allow Alice to get and put objects in the `Development` folder.
+ Allow Bob to get and put objects in the `Finance` folder.

For user-specific permissions, you attach a policy to the specific user, not to the group. In the following section, you grant Alice permission to work in the `Development` folder. You can repeat the steps to grant similar permission to Bob to work in the `Finance` folder.

## Step 5: Grant IAM user Alice specific permissions
<a name="walkthrough-grant-user1-permissions"></a>

Now you grant additional permissions to Alice so that she can see the content of the `Development` folder and get and put objects in that folder.

### Step 5.1: Grant IAM user Alice permission to list the development folder content
<a name="walkthrough-grant-user1-permissions-listbucket"></a>

For Alice to list the `Development` folder content, you must apply a policy to the user Alice that grants permission for the `s3:ListBucket` action on the `companybucket` bucket, provided the request includes the prefix `Development/`. You want this policy to be applied only to the user Alice, so you use an inline policy. For more information about inline policies, see [Managed policies and inline policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-vs-inline.html) in the *IAM User Guide*.

1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

   Use your AWS account credentials, not the credentials of an IAM user, to sign in to the console.

1. Create an inline policy to grant the user Alice permission to list the `Development` folder content.

   1. In the navigation pane on the left, choose **Users**.

   1. Choose the username **Alice**.

   1. On the user details page, choose the **Permissions** tab and then choose **Add inline policy**.

   1. Choose the **JSON** tab.

   1. Copy the following policy, and paste it into the policy text field.

------
#### [ JSON ]

****  

      ```
      {
          "Version":"2012-10-17",		 	 	   
          "Statement": [
          {
            "Sid": "AllowListBucketIfSpecificPrefixIsIncludedInRequest",
            "Action": ["s3:ListBucket"],
            "Effect": "Allow",
            "Resource": ["arn:aws:s3:::companybucket"],
            "Condition": { "StringLike": {"s3:prefix": ["Development/*"] }
             }
          }
        ]
      }
      ```

------

   1. Choose **Review Policy**. On the next page, enter a name in the **Name** field, and then choose **Create policy**.

1. Test the change to Alice's permissions:

   1. Using the IAM user sign-in link (see [To provide a sign-in link for IAM users](#walkthrough-sign-in-user-credentials)), sign in to the AWS Management Console. 

   1. Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

   1. On the Amazon S3 console, verify that Alice can see the list of objects in the `Development/` folder in the bucket. 

      When the user chooses the `Development` folder to see the list of objects in it, the Amazon S3 console sends the `ListObjects` request to Amazon S3 with the prefix `Development/`. Because the user is granted permission to list objects with the prefix `Development/`, Amazon S3 returns the list of objects with the key name prefix `Development/`, and the console displays the list.

### Step 5.2: Grant IAM user Alice permissions to get and put objects in the development folder
<a name="walkthrough-grant-user1-permissions-get-put-object"></a>

For Alice to get and put objects in the `Development` folder, she needs permission to call the `s3:GetObject` and `s3:PutObject` actions. The following policy statement grants these permissions on objects whose key names begin with the `Development/` prefix.

```
{
    "Sid":"AllowUserToReadWriteObjectData",
    "Action":["s3:GetObject", "s3:PutObject"],
    "Effect":"Allow",
    "Resource":["arn:aws:s3:::companybucket/Development/*"]
 }
```



1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

   Use your AWS account credentials, not the credentials of an IAM user, to sign in to the console.

1. Edit the inline policy that you created in the previous step. 

   1. In the navigation pane on the left, choose **Users**.

   1. Choose the user name Alice.

   1. On the user details page, choose the **Permissions** tab and expand the **Inline Policies** section.

   1. Next to the name of the policy that you created in the previous step, choose **Edit Policy**.

   1. Copy the following policy and paste it into the policy text field, replacing the existing policy.

------
#### [ JSON ]

****  

      ```
      {
           "Version":"2012-10-17",		 	 	 
           "Statement":[
            {
               "Sid":"AllowListBucketIfSpecificPrefixIsIncludedInRequest",
               "Action":["s3:ListBucket"],
               "Effect":"Allow",
               "Resource":["arn:aws:s3:::companybucket"],
               "Condition":{
                  "StringLike":{"s3:prefix":["Development/*"]
                  }
               }
            },
            {
              "Sid":"AllowUserToReadWriteObjectDataInDevelopmentFolder", 
              "Action":["s3:GetObject", "s3:PutObject"],
              "Effect":"Allow",
              "Resource":["arn:aws:s3:::companybucket/Development/*"]
            }
         ]
      }
      ```

------

1. Test the updated policy:

   1. Using the IAM user sign-in link (see [To provide a sign-in link for IAM users](#walkthrough-sign-in-user-credentials)), sign in to the AWS Management Console. 

   1. Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

   1. On the Amazon S3 console, verify that Alice can now add an object and download an object in the `Development` folder. 

### Step 5.3: Explicitly deny IAM user Alice permissions to any other folders in the bucket
<a name="walkthrough-grant-user1-explicit-deny-other-access"></a>

User Alice can now list the root-level content in the `companybucket` bucket. She can also get and put objects in the `Development` folder. If you really want to tighten the access permissions, you could explicitly deny Alice access to any other folders in the bucket. If there is any other policy (bucket policy or ACL) that grants Alice access to any other folders in the bucket, this explicit deny overrides those permissions. 

You can add the following statement to the user Alice's policy. It requires all `ListBucket` requests that Alice sends to Amazon S3 to include the `prefix` parameter, whose value can be either `Development/*` or an empty string. 



```
{
   "Sid": "ExplicitlyDenyAnyRequestsForAllOtherFoldersExceptDevelopment",
   "Action": ["s3:ListBucket"],
   "Effect": "Deny",
   "Resource": ["arn:aws:s3:::companybucket"],
   "Condition":{  "StringNotLike": {"s3:prefix":["Development/*",""] },
                  "Null"         : {"s3:prefix":false }
    }
}
```

There are two conditional expressions in the `Condition` block. The result of these conditional expressions is combined by using the logical `AND`. If both conditions are true, the result of the combined condition is true. Because the `Effect` in this policy is `Deny`, when the `Condition` evaluates to true, users can't perform the specified `Action`.
+ The `Null` conditional expression ensures that requests from Alice include the `prefix` parameter. 

  The `prefix` parameter requires folder-like access. If you send a request without the `prefix` parameter, Amazon S3 returns all the object keys. 

  If the request includes the `prefix` parameter with a null value, the expression evaluates to true, and so the entire `Condition` evaluates to true. You must allow an empty string as the value of the `prefix` parameter, because allowing the empty string lets Alice retrieve root-level bucket items the same way that the console does. For more information, see [Step 4.2: Enable users to list root-level content of a bucket](#walkthrough1-grant-permissions-step2). 
+ The `StringNotLike` conditional expression ensures that if the `prefix` parameter is specified and its value matches neither `Development/*` nor the empty string, the request is denied. 
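To make the combined `AND` of the two expressions concrete, here's a rough Python model of this `Condition` block. This is an illustration only; real IAM evaluation has additional subtleties, such as how negated operators treat missing keys. `prefix=None` models a request that omits the parameter.

```python
from fnmatch import fnmatchcase

def deny_applies(prefix):
    """Model the Deny statement's Condition for a ListBucket request.
    Both expressions must be true (logical AND) for the Deny to apply."""
    # "Null": {"s3:prefix": false} -- the prefix key must be present.
    null_test = prefix is not None
    # "StringNotLike" -- the prefix matches neither "Development/*" nor "".
    string_test = prefix is not None and not any(
        fnmatchcase(prefix, pattern) for pattern in ("Development/*", "")
    )
    return null_test and string_test

# Listing any other folder is denied; Development/ and the root level are not.
```

Under this model, `deny_applies("Finance/")` is true (the request is denied), while `deny_applies("Development/Project1.xls")` and `deny_applies("")` are false.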

Follow the steps in the preceding section and again update the inline policy that you created for user Alice.

Copy the following policy and paste it into the policy text field, replacing the existing policy.

------
#### [ JSON ]

****  

```
{
   "Version":"2012-10-17",		 	 	 
   "Statement":[
      {
         "Sid":"AllowListBucketIfSpecificPrefixIsIncludedInRequest",
         "Action":["s3:ListBucket"],
         "Effect":"Allow",
         "Resource":["arn:aws:s3:::companybucket"],
         "Condition":{
            "StringLike":{"s3:prefix":["Development/*"]
            }
         }
      },
      {
        "Sid":"AllowUserToReadWriteObjectDataInDevelopmentFolder", 
        "Action":["s3:GetObject", "s3:PutObject"],
        "Effect":"Allow",
        "Resource":["arn:aws:s3:::companybucket/Development/*"]
      },
      {
         "Sid": "ExplicitlyDenyAnyRequestsForAllOtherFoldersExceptDevelopment",
         "Action": ["s3:ListBucket"],
         "Effect": "Deny",
         "Resource": ["arn:aws:s3:::companybucket"],
         "Condition":{  "StringNotLike": {"s3:prefix":["Development/*",""] },
                        "Null"         : {"s3:prefix":false }
          }
      }
   ]
}
```

------

## Step 6: Grant IAM user Bob specific permissions
<a name="walkthrough1-grant-permissions-step5"></a>

Now you want to grant Bob permission to the `Finance` folder. Follow the steps that you used earlier to grant permissions to Alice, but replace the `Development` folder with the `Finance` folder. For step-by-step instructions, see [Step 5: Grant IAM user Alice specific permissions](#walkthrough-grant-user1-permissions). 

## Step 7: Secure the private folder
<a name="walkthrough-secure-private-folder-explicit-deny"></a>

In this example, you have only two users. You granted all the minimum required permissions at the group level, and granted user-level permissions only where they're needed at the individual user level. This approach helps minimize the effort of managing permissions. As the number of users increases, managing permissions can become cumbersome. For example, you don't want any of the users in this example to access the content of the `Private` folder. How do you ensure that you don't accidentally grant a user permission to the `Private` folder? You add a policy that explicitly denies access to the folder. An explicit deny overrides any other permissions. 

To ensure that the `Private` folder remains private, you can add the following two deny statements to the group policy:
+ Add the following statement to explicitly deny any action on resources in the `Private` folder (`companybucket/Private/*`).

  ```
  {
    "Sid": "ExplictDenyAccessToPrivateFolderToEveryoneInTheGroup",
    "Action": ["s3:*"],
    "Effect": "Deny",
    "Resource":["arn:aws:s3:::companybucket/Private/*"]
  }
  ```
+ You also deny permission for the list objects action when the request specifies the `Private/` prefix. On the console, if Bob or Alice opens the `Private` folder, this policy causes Amazon S3 to return an error response.

  ```
  {
    "Sid": "DenyListBucketOnPrivateFolder",
    "Action": ["s3:ListBucket"],
    "Effect": "Deny",
    "Resource": ["arn:aws:s3:::*"],
    "Condition":{
        "StringLike":{"s3:prefix":["Private/"]}
     }
  }
  ```

Replace the `Consultants` group policy with an updated policy that includes the preceding deny statements. After the updated policy is applied, none of the users in the group can access the `Private` folder in your bucket. 
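The reason this works is IAM's evaluation logic: an explicit `Deny` always overrides an `Allow`. The following simplified Python sketch is an illustration of that precedence only; it ignores principals, conditions, and other policy elements.

```python
from fnmatch import fnmatchcase

def is_authorized(statements, action, resource):
    """Simplified IAM evaluation: an explicit Deny wins outright;
    otherwise at least one Allow must match; else the default is deny."""
    allowed = False
    for stmt in statements:
        if not any(fnmatchcase(action, a) for a in stmt["Action"]):
            continue
        if not any(fnmatchcase(resource, r) for r in stmt["Resource"]):
            continue
        if stmt["Effect"] == "Deny":
            return False  # explicit deny overrides any allow
        allowed = True
    return allowed

policy = [
    {"Effect": "Allow", "Action": ["s3:GetObject"],
     "Resource": ["arn:aws:s3:::companybucket/*"]},
    {"Effect": "Deny", "Action": ["s3:*"],
     "Resource": ["arn:aws:s3:::companybucket/Private/*"]},
]
```

Even though the `Allow` statement covers every object in the bucket, a `GetObject` request for anything under `Private/` is refused because the `Deny` statement also matches.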

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

   Use your AWS account credentials, not the credentials of an IAM user, to sign in to the console.

1. Replace the existing `AllowGroupToSeeBucketListInTheConsole` managed policy that is attached to the `Consultants` group with the following policy. Remember to replace *`companybucket`* in the policy with the name of your bucket. 

   For instructions, see [Editing customer managed policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-edit.html#edit-managed-policy-console) in the *IAM User Guide*. When following the instructions, make sure to follow the directions for applying your changes to all principal entities that the policy is attached to. 

------
#### [ JSON ]

****  

   ```
   {
     "Version":"2012-10-17",		 	 	 
     "Statement": [
       {
         "Sid": "AllowGroupToSeeBucketListAndAlsoAllowGetBucketLocationRequiredForListBucket",
         "Action": ["s3:ListAllMyBuckets", "s3:GetBucketLocation"],
         "Effect": "Allow",
         "Resource": ["arn:aws:s3:::*"]
       },
       {
         "Sid": "AllowRootLevelListingOfCompanyBucket",
         "Action": ["s3:ListBucket"],
         "Effect": "Allow",
         "Resource": ["arn:aws:s3:::companybucket"],
         "Condition":{
             "StringEquals":{"s3:prefix":[""]}
          }
       },
       {
         "Sid": "RequireFolderStyleList",
         "Action": ["s3:ListBucket"],
         "Effect": "Deny",
         "Resource": ["arn:aws:s3:::*"],
         "Condition":{
             "StringNotEquals":{"s3:delimiter":"/"}
          }
        },
       {
         "Sid": "ExplictDenyAccessToPrivateFolderToEveryoneInTheGroup",
         "Action": ["s3:*"],
         "Effect": "Deny",
         "Resource":["arn:aws:s3:::companybucket/Private/*"]
       },
       {
         "Sid": "DenyListBucketOnPrivateFolder",
         "Action": ["s3:ListBucket"],
         "Effect": "Deny",
         "Resource": ["arn:aws:s3:::*"],
         "Condition":{
             "StringLike":{"s3:prefix":["Private/"]}
          }
       }
     ]
   }
   ```

------



## Step 8: Clean up
<a name="walkthrough-cleanup"></a>

To clean up, open the [IAM console](https://console.aws.amazon.com/iam/) and remove the users Alice and Bob. For step-by-step instructions, see [Deleting an IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_manage.html#id_users_deleting) in the *IAM User Guide*.

To ensure that you aren't charged further for storage, you should also delete the objects and the bucket that you created for this exercise.

## Related resources
<a name="RelatedResources-walkthrough1"></a>
+ [Managing IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage.html) in the *IAM User Guide*

# Identity-based policy examples for Amazon S3
<a name="example-policies-s3"></a>

This section shows several example AWS Identity and Access Management (IAM) identity-based policies for controlling access to Amazon S3. For examples of *bucket policies* (resource-based policies), see [Bucket policies for Amazon S3](bucket-policies.md). For information about IAM policy language, see [Policies and permissions in Amazon S3](access-policy-language-overview.md).

The following example policies will work if you use them programmatically. However, to use them with the Amazon S3 console, you must grant additional permissions that are required by the console. For information about using policies such as these with the Amazon S3 console, see [Controlling access to a bucket with user policies](walkthrough1.md). 

For more information about the permissions to S3 API operations by S3 resource types, see [Required permissions for Amazon S3 API operations](using-with-s3-policy-actions.md).

**Topics**
+ [

## Allowing an IAM user access to one of your buckets
](#iam-policy-ex0)
+ [

## Allowing each IAM user access to a folder in a bucket
](#iam-policy-ex1)
+ [

## Allowing a group to have a shared folder in Amazon S3
](#iam-policy-ex2)
+ [

## Allowing all your users to read objects in a portion of a bucket
](#iam-policy-ex3)
+ [

## Allowing a partner to drop files into a specific portion of a bucket
](#iam-policy-ex4)
+ [

## Restricting access to Amazon S3 buckets within a specific AWS account
](#iam-policy-ex6)
+ [

## Restricting access to Amazon S3 buckets within your organizational unit
](#iam-policy-ex7)
+ [

## Restricting access to Amazon S3 buckets within your organization
](#iam-policy-ex8)
+ [

## Granting permission to retrieve the PublicAccessBlock configuration for an AWS account
](#using-with-s3-actions-related-to-accountss)
+ [

## Restricting bucket creation to one Region
](#condition-key-bucket-ops-1)

## Allowing an IAM user access to one of your buckets
<a name="iam-policy-ex0"></a>

In this example, you want to grant an IAM user in your AWS account access to one of your buckets, *amzn-s3-demo-bucket1*, and allow the user to add, update, and delete objects. 

In addition to granting the `s3:PutObject`, `s3:GetObject`, and `s3:DeleteObject` permissions to the user, the policy also grants the `s3:ListAllMyBuckets`, `s3:GetBucketLocation`, and `s3:ListBucket` permissions. These are the additional permissions required by the console. Also, the `s3:PutObjectAcl` and the `s3:GetObjectAcl` actions are required to be able to copy, cut, and paste objects in the console. For an example walkthrough that grants permissions to users and tests them using the console, see [Controlling access to a bucket with user policies](walkthrough1.md). 

------
#### [ JSON ]

****  

```
{
   "Version":"2012-10-17",		 	 	 
   "Statement":[
      {
         "Effect":"Allow",
         "Action": "s3:ListAllMyBuckets",
         "Resource":"*"
      },
      {
         "Effect":"Allow",
         "Action":["s3:ListBucket","s3:GetBucketLocation"],
         "Resource":"arn:aws:s3:::amzn-s3-demo-bucket1"
      },
      {
         "Effect":"Allow",
         "Action":[
            "s3:PutObject",
            "s3:PutObjectAcl",
            "s3:GetObject",
            "s3:GetObjectAcl",
            "s3:DeleteObject"
         ],
         "Resource":"arn:aws:s3:::amzn-s3-demo-bucket1/*"
      }
   ]
}
```

------

## Allowing each IAM user access to a folder in a bucket
<a name="iam-policy-ex1"></a>

In this example, you want two IAM users, Mary and Carlos, to have access to your bucket, *amzn-s3-demo-bucket1*, so that they can add, update, and delete objects. However, you want to restrict each user's access to a single prefix (folder) in the bucket. You might create folders with names that match their usernames. 

```
amzn-s3-demo-bucket1
   Mary/
   Carlos/
```

To grant each user access only to their folder, you can write a policy for each user and attach it individually. For example, you can attach the following policy to the user Mary to allow her specific Amazon S3 permissions on the `amzn-s3-demo-bucket1/Mary` folder.

------
#### [ JSON ]

****  

```
{
   "Version":"2012-10-17",		 	 	 
   "Statement": [
      {
         "Effect": "Allow",
         "Action": [
            "s3:PutObject",
            "s3:GetObject",
            "s3:GetObjectVersion",
            "s3:DeleteObject",
            "s3:DeleteObjectVersion"
         ],
         "Resource": "arn:aws:s3:::amzn-s3-demo-bucket1/Mary/*"
      }
   ]
}
```

------

You can then attach a similar policy to the user Carlos, specifying the folder `Carlos` in the `Resource` value.

Instead of attaching policies to individual users, you can write a single policy that uses a policy variable and then attach the policy to a group. First, you must create a group and add both Mary and Carlos to the group. The following example policy allows a set of Amazon S3 permissions in the `amzn-s3-demo-bucket1/${aws:username}` folder. When the policy is evaluated, the policy variable `${aws:username}` is replaced by the requester's username. For example, if Mary sends a request to put an object, the operation is allowed only if Mary is uploading the object to the `amzn-s3-demo-bucket1/Mary` folder.

------
#### [ JSON ]

****  

```
{
   "Version":"2012-10-17",		 	 	 
   "Statement":[
      {
         "Effect":"Allow",
         "Action":[
            "s3:PutObject",
            "s3:GetObject",
            "s3:GetObjectVersion",
            "s3:DeleteObject",
            "s3:DeleteObjectVersion"
         ],
         "Resource":"arn:aws:s3:::amzn-s3-demo-bucket1/${aws:username}/*"
      }
   ]
}
```

------
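The substitution that IAM performs on `${aws:username}` can be pictured with a short Python sketch. This is a simplified illustration only; real IAM variable resolution also handles features such as default values and the `${$}` escape.

```python
import re

def resolve_policy_variables(resource, context):
    """Replace ${key} policy variables with values from the request context."""
    return re.sub(r"\$\{([A-Za-z:]+)\}",
                  lambda m: context[m.group(1)], resource)

arn = "arn:aws:s3:::amzn-s3-demo-bucket1/${aws:username}/*"
print(resolve_policy_variables(arn, {"aws:username": "Mary"}))
# prints arn:aws:s3:::amzn-s3-demo-bucket1/Mary/*
```

When Carlos makes the same request, the identical policy resolves to `.../Carlos/*`, which is why one group policy can scope each user to their own folder.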

**Note**  
When using policy variables, you must explicitly specify version `2012-10-17` in the policy. The default version of the IAM policy language, 2008-10-17, does not support policy variables. 

 If you want to test the preceding policy on the Amazon S3 console, the console requires additional permissions, as shown in the following policy. For information about how the console uses these permissions, see [Controlling access to a bucket with user policies](walkthrough1.md). 

------
#### [ JSON ]

****  

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowGroupToSeeBucketListInTheConsole",
      "Action": [
        "s3:ListAllMyBuckets",
        "s3:GetBucketLocation"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::*"
    },
    {
      "Sid": "AllowRootLevelListingOfTheBucket",
      "Action": "s3:ListBucket",
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::amzn-s3-demo-bucket1",
      "Condition": {
        "StringEquals": { "s3:prefix": [""], "s3:delimiter": ["/"] }
      }
    },
    {
      "Sid": "AllowListBucketOfASpecificUserPrefix",
      "Action": "s3:ListBucket",
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::amzn-s3-demo-bucket1",
      "Condition": {
        "StringLike": { "s3:prefix": ["${aws:username}/*"] }
      }
    },
    {
      "Sid": "AllowUserSpecificActionsOnlyInTheSpecificUserPrefix",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:GetObjectVersion",
        "s3:DeleteObject",
        "s3:DeleteObjectVersion"
      ],
      "Resource": "arn:aws:s3:::amzn-s3-demo-bucket1/${aws:username}/*"
    }
  ]
}
```

------

**Note**  
In the 2012-10-17 version of the policy, policy variables start with `$`. This change in syntax can potentially create a conflict if your object key (object name) includes a `$`.   
To avoid this conflict, specify the `$` character by using `${$}`. For example, to include the object key `my$file` in a policy, specify it as `my${$}file`.

Although IAM user names are friendly, human-readable identifiers, they aren't required to be globally unique. For example, if the user Carlos leaves the organization and another Carlos joins, then the new Carlos could access the old Carlos's information.

Instead of using usernames, you could create folders based on IAM user IDs. Each IAM user ID is unique. In this case, you must modify the preceding policy to use the `${aws:userid}` policy variable. For more information about user identifiers, see [IAM Identifiers](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.html) in the *IAM User Guide*.

------
#### [ JSON ]

****  

```
{
   "Version":"2012-10-17",		 	 	 
   "Statement": [
      {
         "Effect": "Allow",
         "Action": [
            "s3:PutObject",
            "s3:GetObject",
            "s3:GetObjectVersion",
            "s3:DeleteObject",
            "s3:DeleteObjectVersion"
         ],
         "Resource": "arn:aws:s3:::amzn-s3-demo-bucket1/home/${aws:userid}/*"
      }
   ]
}
```

------

### Allowing non-IAM users (mobile app users) access to folders in a bucket
<a name="non-iam-mobile-app-user-access"></a>

Suppose that you want to develop a mobile app, a game that stores users' data in an S3 bucket. For each app user, you want to create a folder in your bucket. You also want to limit each user's access to their own folder. But you can't create folders before someone downloads your app and starts playing the game, because you don't have their user ID.

In this case, you can require users to sign in to your app by using public identity providers such as Login with Amazon, Facebook, or Google. After users have signed in to your app through one of these providers, they have a user ID that you can use to create user-specific folders at runtime.

You can then use web identity federation through AWS Security Token Service (AWS STS) to integrate information from the identity provider with your app and to get temporary security credentials for each user. You can then create IAM policies that allow the app to access your bucket and perform such operations as creating user-specific folders and uploading data. For more information about web identity federation, see [About web identity federation](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_oidc.html) in the *IAM User Guide*.
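As an illustrative sketch (not a policy from this guide), a role policy for such federated users might scope each user to a folder named after their Amazon Cognito identity ID by using the `cognito-identity.amazonaws.com:sub` policy variable; the bucket name is a placeholder:

```
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Effect":"Allow",
         "Action":[
            "s3:PutObject",
            "s3:GetObject"
         ],
         "Resource":"arn:aws:s3:::amzn-s3-demo-bucket1/${cognito-identity.amazonaws.com:sub}/*"
      }
   ]
}
```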

## Allowing a group to have a shared folder in Amazon S3
<a name="iam-policy-ex2"></a>

Attaching the following policy to the group grants everybody in the group access to the following folder in Amazon S3: `amzn-s3-demo-bucket1/share/marketing`. Group members are allowed to access only the specific Amazon S3 permissions shown in the policy and only for objects in the specified folder. 

------
#### [ JSON ]

****  

```
{
   "Version":"2012-10-17",		 	 	 
   "Statement":[
      {
         "Effect":"Allow",
         "Action":[
            "s3:PutObject",
            "s3:GetObject",
            "s3:GetObjectVersion",
            "s3:DeleteObject",
            "s3:DeleteObjectVersion"
         ],
         "Resource":"arn:aws:s3:::amzn-s3-demo-bucket1/share/marketing/*"
      }
   ]
}
```

------

## Allowing all your users to read objects in a portion of a bucket
<a name="iam-policy-ex3"></a>

In this example, you create a group named `AllUsers`, which contains all the IAM users that are owned by the AWS account. You then attach a policy that gives the group access to `GetObject` and `GetObjectVersion`, but only for objects in the `amzn-s3-demo-bucket1/readonly` folder. 

------
#### [ JSON ]

****  

```
{
   "Version":"2012-10-17",		 	 	 
   "Statement":[
      {
         "Effect":"Allow",
         "Action":[
            "s3:GetObject",
            "s3:GetObjectVersion"
         ],
         "Resource":"arn:aws:s3:::amzn-s3-demo-bucket1/readonly/*"
      }
   ]
}
```

------

## Allowing a partner to drop files into a specific portion of a bucket
<a name="iam-policy-ex4"></a>

In this example, you create a group called `AnyCompany` that represents a partner company. You create an IAM user for the specific person or application at the partner company that needs access, and then you put the user in the group. 

You then attach a policy that gives the group `PutObject` access to the following folder in a bucket:

`amzn-s3-demo-bucket1/uploads/anycompany` 

You want to prevent the `AnyCompany` group from doing anything else with the bucket, so you add a statement that explicitly denies permission to any Amazon S3 actions except `PutObject` on any Amazon S3 resource in the AWS account.

------
#### [ JSON ]

****  

```
{
   "Version":"2012-10-17",		 	 	 
   "Statement":[
      {
         "Effect":"Allow",
         "Action":"s3:PutObject",
         "Resource":"arn:aws:s3:::amzn-s3-demo-bucket1/uploads/anycompany/*"
      },
      {
         "Effect":"Deny",
         "Action":"s3:*",
         "NotResource":"arn:aws:s3:::amzn-s3-demo-bucket1/uploads/anycompany/*"
      }
   ]
}
```

------

## Restricting access to Amazon S3 buckets within a specific AWS account
<a name="iam-policy-ex6"></a>

If you want to ensure that your Amazon S3 principals are accessing only the resources that are inside of a trusted AWS account, you can restrict access. For example, this [identity-based IAM policy](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_identity-vs-resource.html) uses a `Deny` effect to block access to Amazon S3 actions, unless the Amazon S3 resource that's being accessed is in account `222222222222`. To prevent an IAM principal in an AWS account from accessing Amazon S3 objects outside of the account, attach the following IAM policy:

------
#### [ JSON ]

****  

```
{
  "Version":"2012-10-17",		 	 	 
  "Statement": [
    {
      "Sid": "DenyS3AccessOutsideMyBoundary",
      "Effect": "Deny",
      "Action": [
        "s3:*"
      ],
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {
          "aws:ResourceAccount": [
            "222222222222"
          ]
        }
      }
    }
  ]
}
```

------

**Note**  
This policy doesn't replace your existing IAM access controls, because it doesn't grant any access. Instead, this policy acts as an additional guardrail for your other IAM permissions, regardless of the permissions granted through other IAM policies.

Make sure to replace account ID `222222222222` in the policy with your own AWS account ID. To apply a policy to multiple accounts while still maintaining this restriction, replace the account ID with the `aws:PrincipalAccount` condition key. This condition requires that the principal and the resource are in the same account.
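One possible form of that multi-account variant is the following sketch (the `Sid` is illustrative):

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyS3AccessOutsidePrincipalAccount",
      "Effect": "Deny",
      "Action": "s3:*",
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {
          "aws:ResourceAccount": "${aws:PrincipalAccount}"
        }
      }
    }
  ]
}
```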

## Restricting access to Amazon S3 buckets within your organizational unit
<a name="iam-policy-ex7"></a>

If you have an [organizational unit (OU)](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_ous.html) set up in AWS Organizations, you might want to restrict Amazon S3 bucket access to a specific part of your organization. In this example, we'll use the `aws:ResourceOrgPaths` key to restrict Amazon S3 bucket access to an OU in your organization. For this example, the [OU ID](https://docs.aws.amazon.com/organizations/latest/APIReference/API_OrganizationalUnit.html) is `ou-acroot-exampleou`. Make sure to replace this value in your own policy with your own OU ID.

------
#### [ JSON ]

****  

```
{
   "Version":"2012-10-17",		 	 	 
   "Statement": [
     {
       "Sid": "AllowS3AccessOutsideMyBoundary",
       "Effect": "Allow",
       "Action": [
         "s3:*"
       ],
       "Resource": "*",
       "Condition": {
         "ForAllValues:StringLike": {
           "aws:ResourceOrgPaths": [
             "o-acorg/r-acroot/ou-acroot-exampleou/"
           ] 
         }
       }
     }
   ]
 }
```

------

**Note**  
This policy doesn't grant any access. Instead, this policy acts as a backstop for your other IAM permissions, preventing your principals from accessing Amazon S3 objects outside of an OU-defined boundary.

The policy denies access to Amazon S3 actions unless the Amazon S3 object that's being accessed is in the `ou-acroot-exampleou` OU in your organization. The [IAM policy condition](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_condition.html) requires `aws:ResourceOrgPaths`, a multivalued condition key, to contain any of the listed OU paths. The policy uses the `ForAllValues:StringNotLike` operator to compare the values of `aws:ResourceOrgPaths` to the listed OU paths.
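Because `StringNotLike` supports wildcard matching, you can also cover resources in child OUs of `ou-acroot-exampleou` by ending the path with `*` — for example, with a condition fragment like the following sketch (using the same example IDs):

```
"Condition": {
  "ForAllValues:StringNotLike": {
    "aws:ResourceOrgPaths": [
      "o-acorg/r-acroot/ou-acroot-exampleou/*"
    ]
  }
}
```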

## Restricting access to Amazon S3 buckets within your organization
<a name="iam-policy-ex8"></a>

To restrict access to Amazon S3 objects within your organization, attach an IAM policy to the root of the organization, applying it to all accounts in your organization. To require your IAM principals to follow this rule, use a [service control policy (SCP)](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html). If you choose to use an SCP, make sure to thoroughly [test the SCP](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html#scp-warning-testing-effect) before attaching the policy to the root of the organization.

In the following example policy, access is denied to Amazon S3 actions unless the Amazon S3 object that's being accessed is in the same organization as the IAM principal that is accessing it:

------
#### [ JSON ]

****  

```
{
   "Version":"2012-10-17",		 	 	 
   "Statement": [
     {
       "Sid": "DenyS3AccessOutsideMyBoundary",
       "Effect": "Deny",
       "Action": [
         "s3:*"
       ],
       "Resource": "arn:aws:s3:::*/*",
       "Condition": {
         "StringNotEquals": {
           "aws:ResourceOrgID": "${aws:PrincipalOrgID}"
         }
       }
     }
   ]
 }
```

------

**Note**  
This policy doesn't grant any access. Instead, this policy acts as a backstop for your other IAM permissions, preventing your principals from accessing any Amazon S3 objects outside of your organization. This policy also applies to Amazon S3 resources that are created after the policy is put into effect.

The [IAM policy condition](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_condition.html) in this example requires `aws:ResourceOrgID` and `aws:PrincipalOrgID` to be equal to each other. With this requirement, the principal making the request and the resource being accessed must be in the same organization.

## Granting permission to retrieve the PublicAccessBlock configuration for an AWS account
<a name="using-with-s3-actions-related-to-accountss"></a>

The following example identity-based policy grants the `s3:GetAccountPublicAccessBlock` permission to a user. For these permissions, you set the `Resource` value to `"*"`. For information about resource ARNs, see [Policy resources for Amazon S3](security_iam_service-with-iam.md#security_iam_service-with-iam-id-based-policies-resources).

------
#### [ JSON ]

****  

```
{
   "Version":"2012-10-17",		 	 	 
   "Statement":[
      {
         "Sid":"statement1",
         "Effect":"Allow",
         "Action":[
            "s3:GetAccountPublicAccessBlock" 
         ],
         "Resource":[
            "*"
         ]
       }
    ]
}
```

------
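A user with this permission could verify it by using the AWS CLI — for example, with the following command (the account ID and profile name are placeholders):

```
aws s3control get-public-access-block --account-id 111122223333 --profile AccountAadmin
```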

## Restricting bucket creation to one Region
<a name="condition-key-bucket-ops-1"></a>

Suppose that an AWS account administrator wants to grant its user (Dave) permission to create a bucket in the South America (São Paulo) Region only. The account administrator can attach the following user policy granting the `s3:CreateBucket` permission with a condition as shown. The key-value pair in the `Condition` block specifies the `s3:LocationConstraint` key and the `sa-east-1` Region as its value.

**Note**  
In this example, the bucket owner is granting permission to one of its users, so either a bucket policy or a user policy can be used. This example shows a user policy.

For a list of Amazon S3 Regions, see [Regions and Endpoints](https://docs.aws.amazon.com/general/latest/gr/s3.html) in the *AWS General Reference*. 

------
#### [ JSON ]

****  

```
{
   "Version":"2012-10-17",		 	 	 
   "Statement": [
      {
         "Sid": "statement1",
         "Effect": "Allow",
         "Action": "s3:CreateBucket",
         "Resource": "arn:aws:s3:::*",
         "Condition": {
             "StringLike": {
                 "s3:LocationConstraint": "sa-east-1"
             }
         }
       }
    ]
}
```

------

**Add explicit deny**  
The preceding policy restricts the user to creating buckets only in the `sa-east-1` Region. However, some other policy might grant this user permission to create buckets in another Region. For example, if the user belongs to a group, the group might have a policy attached to it that allows all users in the group to create buckets in another Region. To ensure that the user doesn't get permission to create buckets in any other Region, you can add an explicit deny statement to the preceding policy.

The `Deny` statement uses the `StringNotLike` condition operator. That is, a create-bucket request is denied if the location constraint is not `sa-east-1`. The explicit deny prevents the user from creating a bucket in any other Region, no matter what other permissions the user gets. The following policy includes an explicit deny statement.

------
#### [ JSON ]

****  

```
{
   "Version":"2012-10-17",		 	 	 
   "Statement":[
      {
         "Sid":"statement1",
         "Effect":"Allow",
         "Action": "s3:CreateBucket",
         "Resource": "arn:aws:s3:::*",
         "Condition": {
             "StringLike": {
                 "s3:LocationConstraint": "sa-east-1"
             }
         }
       },
      {
         "Sid":"statement2",
         "Effect":"Deny",
         "Action": "s3:CreateBucket",
         "Resource": "arn:aws:s3:::*",
         "Condition": {
             "StringNotLike": {
                 "s3:LocationConstraint": "sa-east-1"
             }
         }
       }
    ]
}
```

------

**Test the policy with the AWS CLI**  
You can test the policy using the following `create-bucket` AWS CLI command. This example uses the `bucketconfig.txt` file to specify the location constraint. Note the Windows file path. You need to update the bucket name and path as appropriate. You must provide user credentials using the `--profile` parameter. For more information about setting up and using the AWS CLI, see [Developing with Amazon S3 using the AWS CLI](https://docs.aws.amazon.com/AmazonS3/latest/API/setup-aws-cli.html) in the *Amazon S3 API Reference*.

```
aws s3api create-bucket --bucket examplebucket --profile AccountADave --create-bucket-configuration file://c:/Users/someUser/bucketconfig.txt
```

The `bucketconfig.txt` file specifies the configuration as follows.

```
{"LocationConstraint": "sa-east-1"}
```
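Alternatively, you can pass the location constraint inline by using the AWS CLI shorthand syntax, which avoids the separate configuration file:

```
aws s3api create-bucket --bucket examplebucket --profile AccountADave --create-bucket-configuration LocationConstraint=sa-east-1
```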

# Walkthroughs that use policies to manage access to your Amazon S3 resources
<a name="example-walkthroughs-managing-access"></a>

This topic provides the following introductory walkthrough examples for granting access to Amazon S3 resources. These examples use the AWS Management Console to create resources (buckets, objects, users) and grant them permissions. The examples then show you how to verify permissions using the command line tools, so you don't have to write any code. We provide commands using both the AWS Command Line Interface (AWS CLI) and the AWS Tools for Windows PowerShell.
+ [Example 1: Bucket owner granting its users bucket permissions](example-walkthroughs-managing-access-example1.md)

  The IAM users you create in your account have no permissions by default. In this exercise, you grant a user permission to perform bucket and object operations.
+ [Example 2: Bucket owner granting cross-account bucket permissions](example-walkthroughs-managing-access-example2.md)

  In this exercise, a bucket owner, Account A, grants cross-account permissions to another AWS account, Account B. Account B then delegates those permissions to users in its account. 
+ **Managing object permissions when the object and bucket owners are not the same**

  The example scenarios in this case are about a bucket owner granting object permissions to others, but not all objects in the bucket are owned by the bucket owner. What permissions does the bucket owner need, and how can it delegate those permissions?

  The AWS account that creates a bucket is called the *bucket owner*. The owner can grant other AWS accounts permission to upload objects, and the AWS accounts that create objects own them. The bucket owner has no permissions on those objects created by other AWS accounts. If the bucket owner writes a bucket policy granting access to objects, the policy doesn't apply to objects that are owned by other accounts. 

  In this case, the object owner must first grant permissions to the bucket owner using an object ACL. The bucket owner can then delegate those object permissions to others, to users in its own account, or to another AWS account, as illustrated by the following examples.
  + [Example 3: Bucket owner granting permissions to objects it does not own](example-walkthroughs-managing-access-example3.md)

    In this exercise, the bucket owner first gets permissions from the object owner. The bucket owner then delegates those permissions to users in its own account.
  + [Example 4 - Bucket owner granting cross-account permission to objects it does not own](example-walkthroughs-managing-access-example4.md)

    After receiving permissions from the object owner, the bucket owner can't delegate permission to other AWS accounts because cross-account delegation isn't supported (see [Permission delegation](access-policy-language-overview.md#permission-delegation)). Instead, the bucket owner can create an IAM role with permissions to perform specific operations (such as get object) and allow another AWS account to assume that role. Anyone who assumes the role can then access objects. This example shows how a bucket owner can use an IAM role to enable this cross-account delegation. 

## Before you try the example walkthroughs
<a name="before-you-try-example-walkthroughs-manage-access"></a>

These examples use the AWS Management Console to create resources and grant permissions. To test permissions, the examples use the command line tools, AWS CLI, and AWS Tools for Windows PowerShell, so you don't need to write any code. To test permissions, you must set up one of these tools. For more information, see [Setting up the tools for the walkthroughs](policy-eval-walkthrough-download-awscli.md). 

In addition, when creating resources, these examples don't use root user credentials of an AWS account. Instead, you create an administrator user in these accounts to perform these tasks. 

### About using an administrator user to create resources and grant permissions
<a name="about-using-root-credentials"></a>

AWS Identity and Access Management (IAM) recommends not using the root user credentials of your AWS account to make requests. Instead, create an IAM user or role, grant them full access, and then use their credentials to make requests. We refer to this as an administrative user or role. For more information, go to [AWS account root user credentials and IAM identities](https://docs.aws.amazon.com/general/latest/gr/root-vs-iam.html) in the *AWS General Reference* and [IAM Best Practices](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html) in the *IAM User Guide*.

All example walkthroughs in this section use the administrator user credentials. If you have not created an administrator user for your AWS account, the topics show you how. 

To sign in to the AWS Management Console using the user credentials, you must use the IAM user sign-in URL. The [IAM Console](https://console.aws.amazon.com/iam/) provides this URL for your AWS account. The topics show you how to get the URL.

# Setting up the tools for the walkthroughs
<a name="policy-eval-walkthrough-download-awscli"></a>

The introductory examples (see [Walkthroughs that use policies to manage access to your Amazon S3 resources](example-walkthroughs-managing-access.md)) use the AWS Management Console to create resources and grant permissions. To test permissions, the examples use the command line tools, AWS Command Line Interface (AWS CLI) and AWS Tools for Windows PowerShell, so you don't need to write any code. To test permissions, you must set up one of these tools. 

**To set up the AWS CLI**

1. Download and configure the AWS CLI. For instructions, see the following topics in the *AWS Command Line Interface User Guide*: 

    [Install or update to the latest version of the AWS Command Line Interface](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-set-up.html) 

    [Get started with the AWS Command Line Interface](https://docs.aws.amazon.com/cli/latest/userguide/installing.html) 

1. Set the default profile. 

   You store user credentials in the AWS CLI config file. Create a default profile in the config file using your AWS account credentials. For instructions on finding and editing your AWS CLI config file, see [Configuration and credential file settings](https://docs.aws.amazon.com/cli/latest/userguide/cli-config-files.html).

   ```
   [default]
   aws_access_key_id = access key ID
   aws_secret_access_key = secret access key
   region = us-west-2
   ```

1. Verify the setup by entering the following commands at the command prompt. Neither command provides credentials explicitly, so the credentials of the default profile are used.
   + Try the `help` command.

     ```
     aws help
     ```
   + To get a list of buckets on the configured account, use the `aws s3 ls` command.

     ```
     aws s3 ls
     ```

As you go through the walkthroughs, you will create users, and you will save user credentials in the config files by creating profiles, as the following example shows. These profiles have the names of `AccountAadmin` and `AccountBadmin`.

```
[profile AccountAadmin]
aws_access_key_id = User AccountAadmin access key ID
aws_secret_access_key = User AccountAadmin secret access key
region = us-west-2

[profile AccountBadmin]
aws_access_key_id = Account B access key ID
aws_secret_access_key = Account B secret access key
region = us-east-1
```

To run a command using these user credentials, you add the `--profile` parameter specifying the profile name. The following AWS CLI command retrieves a listing of objects in *`examplebucket`* and specifies the `AccountBadmin` profile. 

```
aws s3 ls s3://examplebucket --profile AccountBadmin
```

Alternatively, you can configure one set of user credentials as the default profile by changing the `AWS_DEFAULT_PROFILE` environment variable from the command prompt. After you've done this, whenever you perform AWS CLI commands without the `--profile` parameter, the AWS CLI uses the profile you set in the environment variable as the default profile.

```
$ export AWS_DEFAULT_PROFILE=AccountAadmin
```
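On Windows, use the `set` command instead of `export`:

```
set AWS_DEFAULT_PROFILE=AccountAadmin
```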

**To set up AWS Tools for Windows PowerShell**

1. Download and configure the AWS Tools for Windows PowerShell. For instructions, go to [Installing the AWS Tools for Windows PowerShell](https://docs.aws.amazon.com/powershell/latest/userguide/pstools-getting-set-up.html#pstools-installing-download) in the *AWS Tools for PowerShell User Guide*. 
**Note**  
To load the AWS Tools for Windows PowerShell module, you must enable PowerShell script execution. For more information, see [Enable Script Execution](https://docs.aws.amazon.com/powershell/latest/userguide/pstools-getting-set-up.html#enable-script-execution) in the *AWS Tools for PowerShell User Guide*.

1. For these walkthroughs, you specify AWS credentials per session using the `Set-AWSCredentials` command. The command saves the credentials to a persistent store (the `-StoreAs` parameter).

   ```
   Set-AWSCredentials -AccessKey AccessKeyID -SecretKey SecretAccessKey -storeas string
   ```

1. Verify the setup.
   + To retrieve a list of available commands that you can use for Amazon S3 operations, run the `Get-Command` command. 

     ```
     Get-Command -Module AWSPowerShell -Noun S3*
     ```
   + To retrieve a list of objects in a bucket, run the `Get-S3Object` command.

     ```
     Get-S3Object -BucketName bucketname -StoredCredentials string
     ```

For a list of commands, see [AWS Tools for PowerShell Cmdlet Reference](https://docs.aws.amazon.com/powershell/latest/reference/Index.html). 

Now you're ready to try the walkthroughs. Follow the links provided at the beginning of each section.

# Example 1: Bucket owner granting its users bucket permissions
<a name="example-walkthroughs-managing-access-example1"></a>

**Important**  
Granting permissions to IAM roles is a better practice than granting permissions to individual users. For more information about how to grant permissions to IAM roles, see [Understanding cross-account permissions and using IAM roles](example-walkthroughs-managing-access-example4.md#access-policies-walkthrough-example4-overview).

**Topics**
+ [

## Preparing for the walkthrough
](#grant-permissions-to-user-in-your-account-step0)
+ [

## Step 1: Create resources in Account A and grant permissions
](#grant-permissions-to-user-in-your-account-step1)
+ [

## Step 2: Test permissions
](#grant-permissions-to-user-in-your-account-test)

In this walkthrough, an AWS account owns a bucket, and the account includes an IAM user. By default, the user has no permissions. For the user to perform any tasks, the parent account must grant them permissions. The bucket owner and parent account are the same. Therefore, to grant the user permissions on the bucket, the AWS account can use a bucket policy, a user policy, or both. The account owner will grant some permissions using a bucket policy and other permissions using a user policy.

The following steps summarize the walkthrough:

![\[Diagram showing an AWS account granting permissions.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/access-policy-ex1.png)


1. The account administrator creates a bucket policy granting a set of permissions to the user.

1. The account administrator attaches a user policy to the user, granting additional permissions.

1. The user then tests the permissions granted through both the bucket policy and the user policy.

For this example, you will need an AWS account. Instead of using the root user credentials of the account, you will create an administrator user (see [About using an administrator user to create resources and grant permissions](example-walkthroughs-managing-access.md#about-using-root-credentials)). We refer to the AWS account and the administrator user as shown in the following table.


| Account ID | Account referred to as | Administrator user in the account | 
| --- | --- | --- | 
|  *1111-1111-1111*  |  Account A  |  AccountAadmin  | 

**Note**  
The administrator user in this example is **AccountAadmin**, which refers to Account A, and not **AccountAdmin**.

All the tasks of creating users and granting permissions are done in the AWS Management Console. To verify permissions, the walkthrough uses the command line tools, AWS Command Line Interface (AWS CLI) and AWS Tools for Windows PowerShell, so you don't need to write any code.

## Preparing for the walkthrough
<a name="grant-permissions-to-user-in-your-account-step0"></a>

1. Make sure you have an AWS account and that it has a user with administrator privileges.

   1. Sign up for an AWS account, if needed. We refer to this account as Account A.

      1.  Go to [https://aws.amazon.com/s3](https://aws.amazon.com/s3) and choose **Create an AWS account**. 

      1. Follow the on-screen instructions.

         AWS will notify you by email when your account is active and available for you to use.

   1. In Account A, create an administrator user **AccountAadmin**. Using Account A credentials, sign in to the [IAM console](https://console.aws.amazon.com/iam/home?#home) and do the following: 

      1. Create user **AccountAadmin** and note the user security credentials. 

         For instructions, see [Creating an IAM user in your AWS account](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html) in the *IAM User Guide*. 

      1. Grant administrator privileges to **AccountAadmin** by attaching a user policy giving full access. 

         For instructions, see [Managing IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage.html) in the *IAM User Guide*. 

      1. Note the **IAM user Sign-In URL** for **AccountAadmin**. You will need to use this URL when signing in to the AWS Management Console. For more information about where to find the sign-in URL, see [Sign in to the AWS Management Console as an IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/getting-started_how-users-sign-in.html) in *IAM User Guide*. Note the URL for each of the accounts.

1. Set up either the AWS CLI or the AWS Tools for Windows PowerShell. Make sure that you save administrator user credentials as follows:
   + If using the AWS CLI, create a profile, `AccountAadmin`, in the config file.
   + If using the AWS Tools for Windows PowerShell, make sure you store credentials for the session as `AccountAadmin`.

   For instructions, see [Setting up the tools for the walkthroughs](policy-eval-walkthrough-download-awscli.md). 

## Step 1: Create resources in Account A and grant permissions
<a name="grant-permissions-to-user-in-your-account-step1"></a>

Using the credentials of user `AccountAadmin` in Account A, and the special IAM user sign-in URL, sign in to the AWS Management Console and do the following:

1. Create a bucket and an IAM user.

   1. In the Amazon S3 console, create a bucket. Note the AWS Region in which you created the bucket. For instructions, see [Creating a general purpose bucket](create-bucket-overview.md). 

   1. In the [IAM Console](https://console.aws.amazon.com/iam/), do the following: 

      1. Create a user named Dave.

         For step-by-step instructions, see [Creating IAM users (console)](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html#id_users_create_console) in the *IAM User Guide*. 

      1. Note the user credentials for Dave.

      1. Note the Amazon Resource Name (ARN) for user Dave. In the [IAM Console](https://console.aws.amazon.com/iam/), select the user, and the **Summary** tab provides the user ARN.

1. Grant permissions. 

   Because the bucket owner and the parent account to which the user belongs are the same, the AWS account can grant user permissions using a bucket policy, a user policy, or both. In this example, you do both. If the object is also owned by the same account, the bucket owner can grant object permissions in the bucket policy (or an IAM policy).

   1. In the Amazon S3 console, attach the following bucket policy to *awsexamplebucket1*. 

      The policy has two statements. 
      + The first statement grants Dave the bucket operation permissions `s3:GetBucketLocation` and `s3:ListBucket`.
      + The second statement grants the `s3:GetObject` permission. Because Account A also owns the object, the account administrator is able to grant the `s3:GetObject` permission. 

      In the `Principal` element, Dave is identified by his user ARN. For more information about policy elements, see [Policies and permissions in Amazon S3](access-policy-language-overview.md).

------
#### [ JSON ]

****  

      ```
      {
          "Version":"2012-10-17",		 	 	 
          "Statement": [
              {
                  "Sid": "statement1",
                  "Effect": "Allow",
                  "Principal": {
                      "AWS": "arn:aws:iam::111122223333:user/Dave"
                  },
                  "Action": [
                      "s3:GetBucketLocation",
                      "s3:ListBucket"
                  ],
                  "Resource": [
                      "arn:aws:s3:::awsexamplebucket1"
                  ]
              },
              {
                  "Sid": "statement2",
                  "Effect": "Allow",
                  "Principal": {
                      "AWS": "arn:aws:iam::111122223333:user/Dave"
                  },
                  "Action": [
                      "s3:GetObject"
                  ],
                  "Resource": [
                      "arn:aws:s3:::awsexamplebucket1/*"
                  ]
              }
          ]
      }
      ```

------
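The reason the two statements use different `Resource` ARNs can be illustrated with a short Python sketch. This is illustrative only, not an AWS API call; it just constructs the same policy document programmatically:

```python
import json

# Illustrative sketch: build the same two-statement policy to highlight
# which resource ARN each kind of action needs.
bucket_arn = "arn:aws:s3:::awsexamplebucket1"
dave = {"AWS": "arn:aws:iam::111122223333:user/Dave"}

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "statement1",
            "Effect": "Allow",
            "Principal": dave,
            # Bucket operations are authorized against the bucket ARN itself.
            "Action": ["s3:GetBucketLocation", "s3:ListBucket"],
            "Resource": [bucket_arn],
        },
        {
            "Sid": "statement2",
            "Effect": "Allow",
            "Principal": dave,
            # Object operations are authorized against the objects,
            # so the resource ARN carries the /* suffix.
            "Action": ["s3:GetObject"],
            "Resource": [bucket_arn + "/*"],
        },
    ],
}

print(json.dumps(policy, indent=4))
```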

   1. Create an inline policy for the user Dave by using the following policy. The policy grants Dave the `s3:PutObject` permission. You need to update the policy by providing your bucket name.

------
#### [ JSON ]


      ```
      {
         "Version":"2012-10-17",		 	 	 
         "Statement": [
            {
               "Sid": "PermissionForObjectOperations",
               "Effect": "Allow",
               "Action": [
                  "s3:PutObject"
               ],
               "Resource": [
                  "arn:aws:s3:::awsexamplebucket1/*"
               ]
            }
         ]
      }
      ```

------

      For instructions, see [Managing IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_inline-using.html) in the *IAM User Guide*. Note that you need to sign in to the console using Account A credentials.

## Step 2: Test permissions
<a name="grant-permissions-to-user-in-your-account-test"></a>

Using Dave's credentials, verify that the permissions work. You can use either of the following two procedures.

**Test permissions using the AWS CLI**

1. Update the AWS CLI config file by adding the following `UserDaveAccountA` profile. For more information, see [Setting up the tools for the walkthroughs](policy-eval-walkthrough-download-awscli.md).

   ```
   [profile UserDaveAccountA]
   aws_access_key_id = access-key
   aws_secret_access_key = secret-access-key
   region = us-east-1
   ```

1. Verify that Dave can perform the operations as granted in the user policy. Upload a sample object using the following AWS CLI `put-object` command. 

   The `--body` parameter in the command identifies the source file to upload. For example, if the file is in the root of the C: drive on a Windows machine, you specify `c:\HappyFace.jpg`. The `--key` parameter provides the key name for the object.

   ```
   aws s3api put-object --bucket awsexamplebucket1 --key HappyFace.jpg --body HappyFace.jpg --profile UserDaveAccountA
   ```

   Run the following AWS CLI command to get the object. 

   ```
   aws s3api get-object --bucket awsexamplebucket1 --key HappyFace.jpg OutputFile.jpg --profile UserDaveAccountA
   ```

**Test permissions using the AWS Tools for Windows PowerShell**

1. Store Dave's credentials as `AccountADave`. You then use these credentials to `PUT` and `GET` an object.

   ```
   set-awscredentials -AccessKey AccessKeyID -SecretKey SecretAccessKey -storeas AccountADave
   ```

1. Upload a sample object by using the AWS Tools for Windows PowerShell `Write-S3Object` command with user Dave's stored credentials. 

   ```
   Write-S3Object -bucketname awsexamplebucket1 -key HappyFace.jpg -file HappyFace.jpg -StoredCredentials AccountADave
   ```

   Download the previously uploaded object.

   ```
   Read-S3Object -bucketname awsexamplebucket1 -key HappyFace.jpg -file Output.jpg -StoredCredentials AccountADave
   ```

# Example 2: Bucket owner granting cross-account bucket permissions
<a name="example-walkthroughs-managing-access-example2"></a>

**Important**  
Granting permissions to IAM roles is a better practice than granting permissions to individual users. To learn how to do this, see [Understanding cross-account permissions and using IAM roles](example-walkthroughs-managing-access-example4.md#access-policies-walkthrough-example4-overview).

**Topics**
+ [Preparing for the walkthrough](#cross-acct-access-step0)
+ [Step 1: Do the Account A tasks](#access-policies-walkthrough-cross-account-permissions-acctA-tasks)
+ [Step 2: Do the Account B tasks](#access-policies-walkthrough-cross-account-permissions-acctB-tasks)
+ [Step 3: (Optional) Try explicit deny](#access-policies-walkthrough-example2-explicit-deny)
+ [Step 4: Clean up](#access-policies-walkthrough-example2-cleanup-step)

An AWS account—for example, Account A—can grant another AWS account, Account B, permission to access its resources such as buckets and objects. Account B can then delegate those permissions to users in its account. In this example scenario, a bucket owner grants cross-account permission to another account to perform specific bucket operations.

**Note**  
Account A can also directly grant a user in Account B permissions using a bucket policy. However, the user will still need permission from the parent account, Account B, to which the user belongs, even if Account B doesn't have permissions from Account A. As long as the user has permission from both the resource owner and the parent account, the user will be able to access the resource.
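The rule in this note can be expressed as a small sketch (a hypothetical helper, not part of any AWS SDK): a cross-account request succeeds only when both the resource owner and the user's parent account allow it.

```python
def cross_account_access(granted_by_resource_owner: bool,
                         granted_by_parent_account: bool) -> bool:
    """A user in Account B can access Account A's resource only if both
    Account A (the resource owner) and Account B (the parent account to
    which the user belongs) grant the permission."""
    return granted_by_resource_owner and granted_by_parent_account

# Account A's bucket policy grants the user, but Account B has not yet
# delegated the permission to the user: access is denied.
assert cross_account_access(True, False) is False
# After Account B attaches a user policy, access is allowed.
assert cross_account_access(True, True) is True
```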

The following is a summary of the walkthrough steps:

![\[An AWS account granting another AWS account permission to access its resources.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/access-policy-ex2.png)


1. Account A administrator user attaches a bucket policy granting cross-account permissions to Account B to perform specific bucket operations.

   Note that the administrator user in Account B automatically inherits these permissions.

1. The Account B administrator user attaches a user policy to the user, delegating the permissions that Account B received from Account A.

1. The user in Account B then verifies the permissions by accessing an object in the bucket owned by Account A.

For this example, you need two accounts. The following table shows how we refer to these accounts and the administrator users in them. In accordance with the IAM guidelines (see [About using an administrator user to create resources and grant permissions](example-walkthroughs-managing-access.md#about-using-root-credentials)), we don't use the root user credentials in this walkthrough. Instead, you create an administrator user in each account and use those credentials when creating resources and granting them permissions. 


| AWS account ID | Account referred to as | Administrator user in the account  | 
| --- | --- | --- | 
|  *1111-1111-1111*  |  Account A  |  AccountAadmin  | 
|  *2222-2222-2222*  |  Account B  |  AccountBadmin  | 

All the tasks of creating users and granting permissions are done in the AWS Management Console. To verify permissions, the walkthrough uses the command line tools, AWS Command Line Interface (CLI) and AWS Tools for Windows PowerShell, so you don't need to write any code.

## Preparing for the walkthrough
<a name="cross-acct-access-step0"></a>

1. Make sure you have two AWS accounts and that each account has one administrator user as shown in the table in the preceding section.

   1. Sign up for an AWS account, if needed. 

   1. Using Account A credentials, sign in to the [IAM console](https://console.aws.amazon.com/iam/home?#home) to create the administrator user:

      1. Create user **AccountAadmin** and note the security credentials. For instructions, see [Creating an IAM user in Your AWS account](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html) in the *IAM User Guide*. 

      1. Grant administrator privileges to **AccountAadmin** by attaching a user policy giving full access. For instructions, see [Working with Policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage.html) in the *IAM User Guide*. 

   1. While you are in the IAM console, note the **IAM user Sign-In URL** on the **Dashboard**. All users in the account must use this URL when signing in to the AWS Management Console.

      For more information, see [How Users Sign in to Your Account](https://docs.aws.amazon.com/IAM/latest/UserGuide/getting-started_how-users-sign-in.html) in the *IAM User Guide*. 

   1. Repeat the preceding step using Account B credentials and create administrator user **AccountBadmin**.

1. Set up either the AWS Command Line Interface (AWS CLI) or the AWS Tools for Windows PowerShell. Make sure that you save administrator user credentials as follows:
   + If using the AWS CLI, create two profiles, `AccountAadmin` and `AccountBadmin`, in the config file.
   + If using the AWS Tools for Windows PowerShell, make sure that you store credentials for the session as `AccountAadmin` and `AccountBadmin`.

   For instructions, see [Setting up the tools for the walkthroughs](policy-eval-walkthrough-download-awscli.md). 

1. Save the administrator user credentials, also referred to as profiles. You can use the profile name instead of specifying credentials for each command you enter. For more information, see [Setting up the tools for the walkthroughs](policy-eval-walkthrough-download-awscli.md). 

   1. Add profiles in the AWS CLI credentials file for each of the administrator users, `AccountAadmin` and `AccountBadmin`, in the two accounts. 

      ```
      [AccountAadmin]
      aws_access_key_id = access-key-ID
      aws_secret_access_key = secret-access-key
      region = us-east-1
      
      [AccountBadmin]
      aws_access_key_id = access-key-ID
      aws_secret_access_key = secret-access-key
      region = us-east-1
      ```
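Because the profiles file is plain INI text, you can sanity-check it with Python's standard `configparser` module before running any commands. This is an illustrative sketch; the key values are placeholders:

```python
import configparser

# Placeholder contents mirroring the profiles file above.
credentials = """
[AccountAadmin]
aws_access_key_id = access-key-ID
aws_secret_access_key = secret-access-key
region = us-east-1

[AccountBadmin]
aws_access_key_id = access-key-ID
aws_secret_access_key = secret-access-key
region = us-east-1
"""

parser = configparser.ConfigParser()
parser.read_string(credentials)

# Both administrator profiles should be present with complete keys.
for profile in ("AccountAadmin", "AccountBadmin"):
    assert parser.has_section(profile)
    assert parser[profile]["region"] == "us-east-1"
    assert parser[profile]["aws_access_key_id"]
```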

   1. If you're using the AWS Tools for Windows PowerShell, run the following command.

      ```
      set-awscredentials -AccessKey AcctA-access-key-ID -SecretKey AcctA-secret-access-key -storeas AccountAadmin
      set-awscredentials -AccessKey AcctB-access-key-ID -SecretKey AcctB-secret-access-key -storeas AccountBadmin
      ```

## Step 1: Do the Account A tasks
<a name="access-policies-walkthrough-cross-account-permissions-acctA-tasks"></a>

### Step 1.1: Sign in to the AWS Management Console
<a name="access-policies-walkthrough-cross-account-permissions-acctA-tasks-sign-in"></a>

Using the IAM user sign-in URL for Account A, first sign in to the AWS Management Console as **AccountAadmin** user. This user will create a bucket and attach a policy to it. 

### Step 1.2: Create a bucket
<a name="access-policies-walkthrough-example2a-create-bucket"></a>

1. In the Amazon S3 console, create a bucket. This exercise assumes the bucket is created in the US East (N. Virginia) AWS Region and is named `amzn-s3-demo-bucket`.

   For instructions, see [Creating a general purpose bucket](create-bucket-overview.md). 

1. Upload a sample object to the bucket.

   For instructions, go to [Step 2: Upload an object to your bucket](GetStartedWithS3.md#uploading-an-object-bucket). 

### Step 1.3: Attach a bucket policy to grant cross-account permissions to Account B
<a name="access-policies-walkthrough-example2a"></a>

The bucket policy grants the `s3:GetLifecycleConfiguration` and `s3:ListBucket` permissions to Account B. It's assumed that you're still signed in to the console using **AccountAadmin** user credentials.

1. Attach the following bucket policy to `amzn-s3-demo-bucket`. The policy grants Account B permission for the `s3:GetLifecycleConfiguration` and `s3:ListBucket` actions.

   For instructions, see [Adding a bucket policy by using the Amazon S3 console](add-bucket-policy.md). 

------
#### [ JSON ]


   ```
   {
      "Version":"2012-10-17",		 	 	 
      "Statement": [
         {
            "Sid": "Example permissions",
            "Effect": "Allow",
            "Principal": {
               "AWS": "arn:aws:iam::111122223333:root"
            },
            "Action": [
               "s3:GetLifecycleConfiguration",
               "s3:ListBucket"
            ],
            "Resource": [
               "arn:aws:s3:::amzn-s3-demo-bucket"
            ]
         }
      ]
   }
   ```

------

1. Verify that Account B (and thus its administrator user) can perform the operations.
   + Verify using the AWS CLI

     ```
     aws s3 ls s3://amzn-s3-demo-bucket --profile AccountBadmin
     aws s3api get-bucket-lifecycle-configuration --bucket amzn-s3-demo-bucket --profile AccountBadmin
     ```
   + Verify using the AWS Tools for Windows PowerShell

     ```
     get-s3object -BucketName amzn-s3-demo-bucket -StoredCredentials AccountBadmin 
     get-s3bucketlifecycleconfiguration -BucketName amzn-s3-demo-bucket -StoredCredentials AccountBadmin
     ```

## Step 2: Do the Account B tasks
<a name="access-policies-walkthrough-cross-account-permissions-acctB-tasks"></a>

Now the Account B administrator creates a user, Dave, and delegates the permissions received from Account A. 

### Step 2.1: Sign in to the AWS Management Console
<a name="access-policies-walkthrough-cross-account-permissions-acctB-tasks-sign-in"></a>

Using the IAM user sign-in URL for Account B, first sign in to the AWS Management Console as **AccountBadmin** user. 

### Step 2.2: Create user Dave in Account B
<a name="access-policies-walkthrough-example2b-create-user"></a>

In the [IAM Console](https://console.aws.amazon.com/iam/), create a user, **Dave**. 

For instructions, see [Creating IAM users (console)](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html#id_users_create_console) in the *IAM User Guide*. 

### Step 2.3: Delegate permissions to user Dave
<a name="access-policies-walkthrough-example2-delegate-perm-userdave"></a>

Create an inline policy for the user Dave by using the following policy. You will need to update the policy by providing your bucket name.

It's assumed that you're signed in to the console using **AccountBadmin** user credentials.

------
#### [ JSON ]


```
{
   "Version":"2012-10-17",		 	 	 
   "Statement": [
      {
         "Sid": "Example",
         "Effect": "Allow",
         "Action": [
            "s3:ListBucket"
         ],
         "Resource": [
            "arn:aws:s3:::amzn-s3-demo-bucket"
         ]
      }
   ]
}
```

------

For instructions, see [Managing IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_inline-using.html) in the *IAM User Guide*.

### Step 2.4: Test permissions
<a name="access-policies-walkthrough-example2b-user-dave-access"></a>

Now Dave in Account B can list the contents of `amzn-s3-demo-bucket` owned by Account A. You can verify the permissions using either of the following procedures. 

**Test permissions using the AWS CLI**

1. Add the `UserDave` profile to the AWS CLI config file. For more information about the config file, see [Setting up the tools for the walkthroughs](policy-eval-walkthrough-download-awscli.md).

   ```
   [profile UserDave]
   aws_access_key_id = access-key
   aws_secret_access_key = secret-access-key
   region = us-east-1
   ```

1. At the command prompt, enter the following AWS CLI command to verify that Dave can now get an object list from `amzn-s3-demo-bucket`, which is owned by Account A. Note that the command specifies the `UserDave` profile.

   ```
   aws s3 ls s3://amzn-s3-demo-bucket --profile UserDave
   ```

   Dave doesn't have any other permissions. So, if he tries any other operation, such as the following `get-bucket-lifecycle-configuration` command, Amazon S3 returns permission denied. 

   ```
   aws s3api get-bucket-lifecycle-configuration --bucket amzn-s3-demo-bucket --profile UserDave
   ```

**Test permissions using AWS Tools for Windows PowerShell**

1. Store Dave's credentials as `AccountBDave`.

   ```
   set-awscredentials -AccessKey AccessKeyID -SecretKey SecretAccessKey -storeas AccountBDave
   ```

1. Try the List Bucket command.

   ```
   get-s3object -BucketName amzn-s3-demo-bucket -StoredCredentials AccountBDave
   ```

   Dave doesn't have any other permissions. So, if he tries any other operation, such as the following `get-s3bucketlifecycleconfiguration` command, Amazon S3 returns permission denied. 

   ```
   get-s3bucketlifecycleconfiguration -BucketName amzn-s3-demo-bucket -StoredCredentials AccountBDave
   ```

## Step 3: (Optional) Try explicit deny
<a name="access-policies-walkthrough-example2-explicit-deny"></a>

You can have permissions granted by using an access control list (ACL), a bucket policy, or a user policy. However, if a bucket policy or a user policy sets an explicit deny, the explicit deny takes precedence over any other permissions. For testing, update the bucket policy to explicitly deny Account B the `s3:ListBucket` permission. The policy also grants the `s3:ListBucket` permission, but the explicit deny takes precedence, so neither Account B nor users in Account B can list objects in `amzn-s3-demo-bucket`.
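This precedence rule can be sketched as a toy evaluator in Python (illustrative only; real IAM policy evaluation involves many more factors, such as principals, resources, and conditions):

```python
def evaluate(statements, action):
    """Toy IAM-style evaluation: an explicit Deny for the action
    overrides any number of Allows; no matching Allow means an
    implicit deny."""
    effects = [s["Effect"] for s in statements if action in s["Action"]]
    if "Deny" in effects:
        return "denied"    # explicit deny always wins
    if "Allow" in effects:
        return "allowed"
    return "denied"        # implicit deny

statements = [
    {"Effect": "Allow",
     "Action": ["s3:GetLifecycleConfiguration", "s3:ListBucket"]},
    {"Effect": "Deny", "Action": ["s3:ListBucket"]},
]

# The Allow is present, but the explicit Deny takes precedence.
assert evaluate(statements, "s3:ListBucket") == "denied"
assert evaluate(statements, "s3:GetLifecycleConfiguration") == "allowed"
```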

1. Using credentials of user `AccountAadmin` in Account A, replace the bucket policy by the following. 
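   The following policy is a sketch of what this step needs: it keeps the earlier Allow statement and adds an explicit `Deny` for `s3:ListBucket`. It assumes that Account B's account ID is 222222222222, matching the table at the start of this example.

   ```
   {
      "Version": "2012-10-17",
      "Statement": [
         {
            "Sid": "Example permissions",
            "Effect": "Allow",
            "Principal": {
               "AWS": "arn:aws:iam::222222222222:root"
            },
            "Action": [
               "s3:GetLifecycleConfiguration",
               "s3:ListBucket"
            ],
            "Resource": [
               "arn:aws:s3:::amzn-s3-demo-bucket"
            ]
         },
         {
            "Sid": "Deny permission",
            "Effect": "Deny",
            "Principal": {
               "AWS": "arn:aws:iam::222222222222:root"
            },
            "Action": [
               "s3:ListBucket"
            ],
            "Resource": [
               "arn:aws:s3:::amzn-s3-demo-bucket"
            ]
         }
      ]
   }
   ```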

1. Now if you try to get a bucket list using `AccountBadmin` credentials, access is denied.
   + Using the AWS CLI, run the following command:

     ```
     aws s3 ls s3://amzn-s3-demo-bucket --profile AccountBadmin
     ```
   + Using the AWS Tools for Windows PowerShell, run the following command:

     ```
     get-s3object -BucketName amzn-s3-demo-bucket -StoredCredentials AccountBadmin
     ```

## Step 4: Clean up
<a name="access-policies-walkthrough-example2-cleanup-step"></a>

1. After you're done testing, you can do the following to clean up:

   1. Sign in to the [AWS Management Console](https://console.aws.amazon.com/) using Account A credentials, and do the following:
     + In the Amazon S3 console, remove the bucket policy attached to `amzn-s3-demo-bucket`. In the bucket **Properties**, delete the policy in the **Permissions** section. 
     + If the bucket is created for this exercise, in the Amazon S3 console, delete the objects and then delete the bucket. 
     + In the [IAM Console](https://console.aws.amazon.com/iam/), remove the `AccountAadmin` user.

1. Sign in to the [IAM Console](https://console.aws.amazon.com/iam/) using Account B credentials. Delete user `AccountBadmin`. For step-by-step instructions, see [Deleting an IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_manage.html#id_users_deleting) in the *IAM User Guide*.

# Example 3: Bucket owner granting permissions to objects it does not own
<a name="example-walkthroughs-managing-access-example3"></a>

**Important**  
Granting permissions to IAM roles is a better practice than granting permissions to individual users. To learn how to do this, see [Understanding cross-account permissions and using IAM roles](example-walkthroughs-managing-access-example4.md#access-policies-walkthrough-example4-overview).

**Topics**
+ [Step 0: Preparing for the walkthrough](#access-policies-walkthrough-cross-account-acl-step0)
+ [Step 1: Do the Account A tasks](#access-policies-walkthrough-cross-account-acl-acctA-tasks)
+ [Step 2: Do the Account B tasks](#access-policies-walkthrough-cross-account-acl-acctB-tasks)
+ [Step 3: Test permissions](#access-policies-walkthrough-cross-account-acl-verify)
+ [Step 4: Clean up](#access-policies-walkthrough-cross-account-acl-cleanup)

The scenario for this example is that a bucket owner wants to grant permission to access objects, but the bucket owner doesn't own all objects in the bucket. For this example, the bucket owner is trying to grant permission to users in its own account.

A bucket owner can enable other AWS accounts to upload objects. By default, the bucket owner doesn't own objects written to a bucket by another AWS account. Objects are owned by the accounts that write them to an S3 bucket. If the bucket owner doesn't own objects in the bucket, the object owner must first grant permission to the bucket owner using an object access control list (ACL). Then, the bucket owner can grant permissions to an object that they don't own. For more information, see [Amazon S3 bucket and object ownership](access-policy-language-overview.md#about-resource-owner).
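The ownership condition described above can be sketched as a simple rule (a hypothetical helper, not part of any AWS SDK): the bucket owner can grant permissions on an object only if they own it, or if the object owner has granted them full control through the object ACL.

```python
def bucket_owner_can_grant(object_owner: str, bucket_owner: str,
                           acl_full_control_grantees: set) -> bool:
    """The bucket owner can grant permissions on an object if they own
    it, or if the object owner added an ACL grant giving the bucket
    owner full control."""
    return (object_owner == bucket_owner
            or bucket_owner in acl_full_control_grantees)

# An object written by Account B into Account A's bucket: Account A
# cannot grant access until Account B adds an ACL grant.
assert bucket_owner_can_grant("AccountB", "AccountA", set()) is False
assert bucket_owner_can_grant("AccountB", "AccountA", {"AccountA"}) is True
# Objects the bucket owner wrote need no ACL grant.
assert bucket_owner_can_grant("AccountA", "AccountA", set()) is True
```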

If the bucket owner applies the bucket owner enforced setting for S3 Object Ownership for the bucket, the bucket owner will own all objects in the bucket, including objects written by another AWS account. This approach resolves the issue that objects are not owned by the bucket owner. Then, you can delegate permission to users in your own account or to other AWS accounts.

**Note**  
S3 Object Ownership is an Amazon S3 bucket-level setting that you can use to both control ownership of the objects that are uploaded to your bucket and to disable or enable ACLs. By default, Object Ownership is set to the Bucket owner enforced setting, and all ACLs are disabled. When ACLs are disabled, the bucket owner owns all the objects in the bucket and manages access to them exclusively by using access-management policies.  
 A majority of modern use cases in Amazon S3 no longer require the use of ACLs. We recommend that you keep ACLs disabled, except in circumstances where you need to control access for each object individually. With ACLs disabled, you can use policies to control access to all objects in your bucket, regardless of who uploaded the objects to your bucket. For more information, see [Controlling ownership of objects and disabling ACLs for your bucket](about-object-ownership.md).

In this example, we assume the bucket owner has not applied the bucket owner enforced setting for Object Ownership. The bucket owner delegates permission to users in its own account. The following is a summary of the walkthrough steps:

![\[A bucket owner granting permissions to objects it does not own.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/access-policy-ex3.png)


1. Account A administrator user attaches a bucket policy with two statements.
   + Allow cross-account permission to Account B to upload objects.
   + Allow a user in its own account to access objects in the bucket.

1. Account B administrator user uploads objects to the bucket owned by Account A.

1. Account B administrator updates the object ACL, adding a grant that gives the bucket owner full-control permission on the object.

1. User in Account A verifies by accessing objects in the bucket, regardless of who owns them.

For this example, you need two accounts. The following table shows how we refer to these accounts and the administrator users in these accounts. In this walkthrough, you don't use the account root user credentials, according to the recommended IAM guidelines. For more information, see [About using an administrator user to create resources and grant permissions](example-walkthroughs-managing-access.md#about-using-root-credentials). Instead, you create an administrator in each account and use those credentials in creating resources and granting them permissions.


| AWS account ID | Account referred to as | Administrator in the account  | 
| --- | --- | --- | 
|  *1111-1111-1111*  |  Account A  |  AccountAadmin  | 
|  *2222-2222-2222*  |  Account B  |  AccountBadmin  | 

All the tasks of creating users and granting permissions are done in the AWS Management Console. To verify permissions, the walkthrough uses the command line tools, AWS Command Line Interface (AWS CLI) and AWS Tools for Windows PowerShell, so you don't need to write any code. 

## Step 0: Preparing for the walkthrough
<a name="access-policies-walkthrough-cross-account-acl-step0"></a>

1. Make sure that you have two AWS accounts and each account has one administrator as shown in the table in the preceding section.

   1. Sign up for an AWS account, if needed. 

   1. Using Account A credentials, sign in to the [IAM Console](https://console.aws.amazon.com/iam/) and do the following to create an administrator user:
      + Create user **AccountAadmin** and note the user's security credentials. For more information about adding users, see [Creating an IAM user in your AWS account](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html) in the *IAM User Guide*. 
      + Grant administrator permissions to **AccountAadmin** by attaching a user policy that gives full access. For instructions, see [Managing IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage.html) in the *IAM User Guide*. 
      + In the [IAM Console](https://console.aws.amazon.com/iam/) **Dashboard**, note the **IAM User Sign-In URL**. Users in this account must use this URL when signing in to the AWS Management Console. For more information, see [How users sign in to your account](https://docs.aws.amazon.com/IAM/latest/UserGuide/getting-started_how-users-sign-in.html) in the *IAM User Guide*. 

   1. Repeat the preceding step using Account B credentials and create administrator user **AccountBadmin**.

1. Set up either the AWS CLI or the Tools for Windows PowerShell. Make sure that you save the administrator credentials as follows:
   + If using the AWS CLI, create two profiles, `AccountAadmin` and `AccountBadmin`, in the config file.
   + If using the Tools for Windows PowerShell, make sure that you store credentials for the session as `AccountAadmin` and `AccountBadmin`.

   For instructions, see [Setting up the tools for the walkthroughs](policy-eval-walkthrough-download-awscli.md). 

## Step 1: Do the Account A tasks
<a name="access-policies-walkthrough-cross-account-acl-acctA-tasks"></a>

Perform the following steps for Account A:

### Step 1.1: Sign in to the console
<a name="access-policies-walkthrough-cross-account-permissions-acctA-tasks-sign-in-example3"></a>

Using the IAM user sign-in URL for Account A, sign in to the AWS Management Console as **AccountAadmin** user. This user will create a bucket and attach a policy to it. 

### Step 1.2: Create a bucket and user, and add a bucket policy to grant user permissions
<a name="access-policies-walkthrough-cross-account-acl-create-bucket"></a>

1. In the Amazon S3 console, create a bucket. This exercise assumes that the bucket is created in the US East (N. Virginia) AWS Region, and the name is `amzn-s3-demo-bucket1`.

   For instructions, see [Creating a general purpose bucket](create-bucket-overview.md). 

1. In the [IAM Console](https://console.aws.amazon.com/iam/), create a user **Dave**. 

   For step-by-step instructions, see [Creating IAM users (console)](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html#id_users_create_console) in the *IAM User Guide*. 

1. Note the user Dave credentials. 

1. In the Amazon S3 console, attach the following bucket policy to the `amzn-s3-demo-bucket1` bucket. For instructions, see [Adding a bucket policy by using the Amazon S3 console](add-bucket-policy.md). For information about how to find account IDs, see [Finding your AWS account ID](https://docs.aws.amazon.com/general/latest/gr/acct-identifiers.html#FindingYourAccountIdentifiers). 

   The policy grants Account B the `s3:PutObject` and `s3:ListBucket` permissions. The policy also grants user `Dave` the `s3:GetObject` permission. 

------
#### [ JSON ]


   ```
   {
       "Version":"2012-10-17",		 	 	 
       "Statement": [
           {
               "Sid": "Statement1",
               "Effect": "Allow",
               "Principal": {
                   "AWS": "arn:aws:iam::111122223333:root"
               },
               "Action": [
                   "s3:PutObject",
                   "s3:ListBucket"
               ],
               "Resource": [
                   "arn:aws:s3:::amzn-s3-demo-bucket1/*",
                   "arn:aws:s3:::amzn-s3-demo-bucket1"
               ]
           },
           {
               "Sid": "Statement3",
               "Effect": "Allow",
               "Principal": {
                   "AWS": "arn:aws:iam::111122223333:user/Dave"
               },
               "Action": [
                   "s3:GetObject"
               ],
               "Resource": [
                   "arn:aws:s3:::amzn-s3-demo-bucket1/*"
               ]
           }
       ]
   }
   ```

------

## Step 2: Do the Account B tasks
<a name="access-policies-walkthrough-cross-account-acl-acctB-tasks"></a>

Now that Account B has permissions to perform operations on Account A's bucket, the Account B administrator does the following:
+ Uploads an object to Account A's bucket 
+ Adds a grant in the object ACL to allow Account A, the bucket owner, full control

**Using the AWS CLI**

1. Using the `put-object` AWS CLI command, upload an object. The `--body` parameter in the command identifies the source file to upload. For example, if the file is on the `C:` drive of a Windows machine, specify `c:\HappyFace.jpg`. The `--key` parameter provides the key name for the object. 

   ```
   aws s3api put-object --bucket amzn-s3-demo-bucket1 --key HappyFace.jpg --body HappyFace.jpg --profile AccountBadmin
   ```

1. Add a grant to the object ACL to allow the bucket owner full control of the object. For information about how to find a canonical user ID, see [Find the canonical user ID for your AWS account](https://docs.aws.amazon.com/accounts/latest/reference/manage-acct-identifiers.html#FindCanonicalId) in the *AWS Account Management Reference Guide*.

   ```
   aws s3api put-object-acl --bucket amzn-s3-demo-bucket1 --key HappyFace.jpg --grant-full-control id="AccountA-CanonicalUserID" --profile AccountBadmin
   ```

**Using the Tools for Windows PowerShell**

1. Using the `Write-S3Object` command, upload an object. 

   ```
   Write-S3Object -BucketName amzn-s3-demo-bucket1 -key HappyFace.jpg -file HappyFace.jpg -StoredCredentials AccountBadmin
   ```

1. Add a grant to the object ACL to allow the bucket owner full control of the object.

   ```
   Set-S3ACL -BucketName amzn-s3-demo-bucket1 -Key HappyFace.jpg -CannedACLName "bucket-owner-full-control" -StoredCredentials AccountBadmin
   ```

## Step 3: Test permissions
<a name="access-policies-walkthrough-cross-account-acl-verify"></a>

Now verify that user Dave in Account A can access the object owned by Account B.

**Using the AWS CLI**

1. Add user Dave credentials to the AWS CLI config file and create a new profile, `UserDaveAccountA`. For more information, see [Setting up the tools for the walkthroughs](policy-eval-walkthrough-download-awscli.md).

   ```
   [profile UserDaveAccountA]
   aws_access_key_id = access-key
   aws_secret_access_key = secret-access-key
   region = us-east-1
   ```

1. Run the `get-object` CLI command to download `HappyFace.jpg` and save it locally. You provide user Dave credentials by adding the `--profile` parameter.

   ```
   aws s3api get-object --bucket amzn-s3-demo-bucket1 --key HappyFace.jpg Outputfile.jpg --profile UserDaveAccountA
   ```

**Using the Tools for Windows PowerShell**

1. Store user Dave's AWS credentials as `UserDaveAccountA` in the persistent store. 

   ```
   Set-AWSCredentials -AccessKey UserDave-AccessKey -SecretKey UserDave-SecretAccessKey -storeas UserDaveAccountA
   ```

1. Run the `Read-S3Object` command to download the `HappyFace.jpg` object and save it locally. You provide user Dave credentials by adding the `-StoredCredentials` parameter. 

   ```
   Read-S3Object -BucketName amzn-s3-demo-bucket1 -Key HappyFace.jpg -file HappyFace.jpg  -StoredCredentials UserDaveAccountA
   ```

## Step 4: Clean up
<a name="access-policies-walkthrough-cross-account-acl-cleanup"></a>

1. After you're done testing, you can do the following to clean up:

   1. Sign in to the [AWS Management Console](https://console.aws.amazon.com/) using Account A credentials, and do the following:
     + In the Amazon S3 console, remove the bucket policy attached to *amzn-s3-demo-bucket1*. In the bucket **Properties**, delete the policy in the **Permissions** section. 
     + If the bucket is created for this exercise, in the Amazon S3 console, delete the objects and then delete the bucket. 
     + In the [IAM Console](https://console.aws.amazon.com/iam/), remove the **AccountAadmin** user. For step-by-step instructions, see [Deleting an IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_manage.html#id_users_deleting) in the *IAM User Guide*.

1. Sign in to the [AWS Management Console](https://console.aws.amazon.com/) using Account B credentials. In the [IAM Console](https://console.aws.amazon.com/iam/), delete the user **AccountBadmin**.

# Example 4 - Bucket owner granting cross-account permission to objects it does not own
<a name="example-walkthroughs-managing-access-example4"></a>

**Topics**
+ [

## Understanding cross-account permissions and using IAM roles
](#access-policies-walkthrough-example4-overview)
+ [

## Step 0: Preparing for the walkthrough
](#access-policies-walkthrough-example4-step0)
+ [

## Step 1: Do the account A tasks
](#access-policies-walkthrough-example4-step1)
+ [

## Step 2: Do the Account B tasks
](#access-policies-walkthrough-example4-step2)
+ [

## Step 3: Do the Account C tasks
](#access-policies-walkthrough-example4-step3)
+ [

## Step 4: Clean up
](#access-policies-walkthrough-example4-step6)
+ [

## Related resources
](#RelatedResources-managing-access-example4)

In this example scenario, you own a bucket and have enabled other AWS accounts to upload objects to it. If you apply the bucket owner enforced setting for S3 Object Ownership to the bucket, you own all objects in the bucket, including objects written by other AWS accounts, and you can delegate permissions on them to users in your own account or to other AWS accounts. For this walkthrough, however, suppose that the bucket owner enforced setting is not enabled. That is, your bucket can contain objects that other AWS accounts own. 

Now, suppose as a bucket owner, you need to grant cross-account permission on objects, regardless of who the owner is, to a user in another account. For example, that user could be a billing application that needs to access object metadata. There are two core issues:
+ The bucket owner has no permissions on those objects created by other AWS accounts. For the bucket owner to grant permissions on objects it doesn't own, the object owner must first grant permission to the bucket owner. The object owner is the AWS account that created the objects. The bucket owner can then delegate those permissions.
+ The bucket owner account can delegate permissions to users in its own account (see [Example 3: Bucket owner granting permissions to objects it does not own](example-walkthroughs-managing-access-example3.md)). However, the bucket owner account can't delegate permissions to other AWS accounts because cross-account delegation isn't supported. 

In this scenario, the bucket owner can create an AWS Identity and Access Management (IAM) role with permission to access objects. Then, the bucket owner can grant another AWS account permission to assume the role, temporarily enabling it to access objects in the bucket. 

**Note**  
S3 Object Ownership is an Amazon S3 bucket-level setting that you can use to both control ownership of the objects that are uploaded to your bucket and to disable or enable ACLs. By default, Object Ownership is set to the Bucket owner enforced setting, and all ACLs are disabled. When ACLs are disabled, the bucket owner owns all the objects in the bucket and manages access to them exclusively by using access-management policies.  
 A majority of modern use cases in Amazon S3 no longer require the use of ACLs. We recommend that you keep ACLs disabled, except in circumstances where you need to control access for each object individually. With ACLs disabled, you can use policies to control access to all objects in your bucket, regardless of who uploaded the objects to your bucket. For more information, see [Controlling ownership of objects and disabling ACLs for your bucket](about-object-ownership.md).

## Understanding cross-account permissions and using IAM roles
<a name="access-policies-walkthrough-example4-overview"></a>

 IAM roles enable several scenarios to delegate access to your resources, and cross-account access is one of the key scenarios. In this example, the bucket owner, Account A, uses an IAM role to temporarily delegate object access cross-account to users in another AWS account, Account C. Each IAM role that you create has the following two policies attached to it:
+ A trust policy identifying another AWS account that can assume the role.
+ An access policy defining what permissions—for example, `s3:GetObject`—are allowed when someone assumes the role. For a list of permissions you can specify in a policy, see [Policy actions for Amazon S3](security_iam_service-with-iam.md#security_iam_service-with-iam-id-based-policies-actions).

The AWS account identified in the trust policy then grants its user permission to assume the role. The user can then do the following to access objects:
+ Assume the role and, in response, get temporary security credentials. 
+ Using the temporary security credentials, access the objects in the bucket.

For more information about IAM roles, see [IAM Roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html) in the *IAM User Guide*. 

The following is a summary of the walkthrough steps:

![\[Cross-account permissions using IAM roles.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/access-policy-ex4.png)


1. Account A administrator user attaches a bucket policy granting Account B conditional permission to upload objects.

1. Account A administrator creates an IAM role, establishing trust with Account C, so users in that account can access Account A. The access policy attached to the role limits what a user in Account C can do when accessing Account A.

1. Account B administrator uploads an object to the bucket owned by Account A, granting full-control permission to the bucket owner.

1. Account C administrator creates a user and attaches a user policy that allows the user to assume the role.

1. User in Account C first assumes the role, which returns temporary security credentials to the user. Using those temporary credentials, the user then accesses objects in the bucket.

For this example, you need three accounts. The following table shows how we refer to these accounts and the administrator users in these accounts. In accordance with the IAM guidelines (see [About using an administrator user to create resources and grant permissions](example-walkthroughs-managing-access.md#about-using-root-credentials)), we don't use the AWS account root user credentials in this walkthrough. Instead, you create an administrator user in each account and use those credentials when creating resources and granting them permissions.


| AWS account ID | Account referred to as | Administrator user in the account  | 
| --- | --- | --- | 
|  *1111-1111-1111*  |  Account A  |  AccountAadmin  | 
|  *2222-2222-2222*  |  Account B  |  AccountBadmin  | 
|  *3333-3333-3333*  |  Account C  |  AccountCadmin  | 



## Step 0: Preparing for the walkthrough
<a name="access-policies-walkthrough-example4-step0"></a>

**Note**  
You might want to open a text editor and write down some of the information as you go through the steps. In particular, you will need the account IDs, the canonical user IDs, the IAM user sign-in URL for each account, and the Amazon Resource Names (ARNs) of the IAM users and roles. 

1. Make sure that you have three AWS accounts and each account has one administrator user as shown in the table in the preceding section.

   1. Sign up for AWS accounts, as needed. We refer to these accounts as Account A, Account B, and Account C.

   1. Using Account A credentials, sign in to the [IAM console](https://console.aws.amazon.com/iam/home?#home) and do the following to create an administrator user:
      + Create user **AccountAadmin** and note its security credentials. For more information about adding users, see [Creating an IAM user in your AWS account](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html) in the *IAM User Guide*. 
      + Grant administrator privileges to **AccountAadmin** by attaching a user policy giving full access. For instructions, see [Managing IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage.html) in the *IAM User Guide*. 
      + In the IAM Console **Dashboard**, note the **IAM User Sign-In URL**. Users in this account must use this URL when signing in to the AWS Management Console. For more information, see [Sign in to the AWS Management Console as an IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/getting-started_how-users-sign-in.html) in the *IAM User Guide*. 

   1. Repeat the preceding step to create administrator users in Account B and Account C.

1. For Account C, note the account ID and the canonical user ID. 

   When you create an IAM role in Account A, the trust policy grants Account C permission to assume the role by specifying the account ID. You can find the canonical user ID as follows:

   1. Use your AWS account ID or account alias, your IAM user name, and your password to sign in to the [Amazon S3 console](https://console.aws.amazon.com/s3/).

   1. Choose the name of an Amazon S3 bucket to view the details about that bucket.

   1. Choose the **Permissions** tab and then choose **Access Control List**. 

   1. In the **Access for your AWS account** section, the **Account** column contains a long identifier, such as `c1daexampleaaf850ea79cf0430f33d72579fd1611c97f7ded193374c0b163b6`. This is your canonical user ID.

1. When creating a bucket policy, you will need the following information. Note these values:
   + **Canonical user ID of Account A** – When the Account A administrator grants conditional upload object permission to the Account B administrator, the condition specifies the canonical user ID of the Account A user that must be granted full control of the objects. 
**Note**  
The canonical user ID is an Amazon S3–only concept. It is a 64-character obfuscated version of the account ID. 
   + **User ARN for Account B administrator** – You can find the user ARN in the [IAM Console](https://console.aws.amazon.com/iam/). Select the user and find the user's ARN on the **Summary** tab.

     In the bucket policy, you grant `AccountBadmin` permission to upload objects and you specify the user using the ARN. Here's an example ARN value:

     ```
     arn:aws:iam::AccountB-ID:user/AccountBadmin
     ```

1. Set up either the AWS Command Line Interface (CLI) or the AWS Tools for Windows PowerShell. Make sure that you save administrator user credentials as follows:
   + If using the AWS CLI, create profiles, `AccountAadmin` and `AccountBadmin`, in the config file.
   + If using the AWS Tools for Windows PowerShell, make sure that you store credentials for the session as `AccountAadmin` and `AccountBadmin`.

   For instructions, see [Setting up the tools for the walkthroughs](policy-eval-walkthrough-download-awscli.md).
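The identifiers collected in this step can also be retrieved from the command line. This is a sketch, not part of the original walkthrough; it assumes the `AccountAadmin` and `AccountBadmin` CLI profiles described above:

```shell
# Retrieve the canonical user ID of Account A. The Owner.ID field of the
# list-buckets response is the canonical user ID of the calling account.
aws s3api list-buckets --query Owner.ID --output text --profile AccountAadmin

# Retrieve the ARN of the AccountBadmin user.
aws iam get-user --user-name AccountBadmin --query User.Arn --output text --profile AccountBadmin
```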

## Step 1: Do the account A tasks
<a name="access-policies-walkthrough-example4-step1"></a>

In this example, Account A is the bucket owner. So user AccountAadmin in Account A will do the following: 
+ Create a bucket.
+ Attach a bucket policy that grants the Account B administrator permission to upload objects.
+ Create an IAM role that grants Account C permission to assume the role so it can access objects in the bucket.

### Step 1.1: Sign in to the AWS Management Console
<a name="access-policies-walkthrough-cross-account-permissions-acctA-tasks-sign-in-example4"></a>

Using the IAM user sign-in URL for Account A, first sign in to the AWS Management Console as the **AccountAadmin** user. This user will create a bucket and attach a policy to it. 

### Step 1.2: Create a bucket and attach a bucket policy
<a name="access-policies-walkthrough-example2d-step1-1"></a>

In the Amazon S3 console, do the following:

1. Create a bucket. This exercise assumes the bucket name is `amzn-s3-demo-bucket1`.

   For instructions, see [Creating a general purpose bucket](create-bucket-overview.md). 

1. Attach the following bucket policy. The policy grants the Account B administrator conditional permission to upload objects.

   Update the policy by providing your own values for `amzn-s3-demo-bucket1`, `AccountB-ID`, and the `CanonicalUserId-of-AWSaccountA-BucketOwner`. 

------
#### [ JSON ]

****  

   ```
   {
       "Version":"2012-10-17",		 	 	 
       "Statement": [
           {
               "Sid": "111",
               "Effect": "Allow",
               "Principal": {
                   "AWS": "arn:aws:iam::111122223333:user/AccountBadmin"
               },
               "Action": "s3:PutObject",
               "Resource": "arn:aws:s3:::amzn-s3-demo-bucket1/*"
           },
           {
               "Sid": "112",
               "Effect": "Deny",
               "Principal": {
                   "AWS": "arn:aws:iam::111122223333:user/AccountBadmin"
               },
               "Action": "s3:PutObject",
               "Resource": "arn:aws:s3:::amzn-s3-demo-bucket1/*",
               "Condition": {
                   "StringNotEquals": {
                       "s3:x-amz-grant-full-control": "id=CanonicalUserId-of-AWSaccountA-BucketOwner"
                   }
               }
           }
       ]
   }
   ```

------
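If you prefer the AWS CLI to the console for this step, a command like the following could attach the policy, assuming you saved the updated policy document locally as `bucket-policy.json` (a hypothetical file name):

```shell
# Attach the bucket policy to the bucket as the Account A administrator.
aws s3api put-bucket-policy \
    --bucket amzn-s3-demo-bucket1 \
    --policy file://bucket-policy.json \
    --profile AccountAadmin
```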

### Step 1.3: Create an IAM role to allow Account C cross-account access in Account A
<a name="access-policies-walkthrough-example2d-step1-2"></a>

In the [IAM Console](https://console.aws.amazon.com/iam/), create an IAM role (**examplerole**) that grants Account C permission to assume the role. Make sure that you are still signed in as the Account A administrator because the role must be created in Account A.

1. Before creating the role, prepare the managed policy that defines the permissions that the role requires. You attach this policy to the role in a later step.

   1. In the navigation pane on the left, choose **Policies** and then choose **Create Policy**.

   1. Next to **Create Your Own Policy**, choose **Select**.

   1. Enter **access-accountA-bucket** in the **Policy Name** field.

   1. Copy the following access policy and paste it into the **Policy Document** field. The access policy grants the role the `s3:GetObject` permission, so when the Account C user assumes the role, the user can perform only the `s3:GetObject` operation.

------
#### [ JSON ]

****  

      ```
      {
        "Version":"2012-10-17",		 	 	 
        "Statement": [
          {
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket1/*"
          }
        ]
      }
      ```

------

   1. Choose **Create Policy**.

      The new policy appears in the list of managed policies.

1. In the navigation pane on the left, choose **Roles** and then choose **Create New Role**.

1. Under **Select Role Type**, select **Role for Cross-Account Access**, and then choose the **Select** button next to **Provide access between AWS accounts you own**.

1. Enter the Account C account ID.

   For this walkthrough, you don't need to require users to have multi-factor authentication (MFA) to assume the role, so leave that option unselected.

1. Choose **Next Step** to set the permissions that will be associated with the role.

1. Select the checkbox next to the **access-accountA-bucket** policy that you created, and then choose **Next Step**.

   The Review page appears so you can confirm the settings for the role before it's created. One very important item to note on this page is the link that you can send to your users who need to use this role. Users who use the link go straight to the **Switch Role** page with the Account ID and Role Name fields already filled in. You can also see this link later on the **Role Summary** page for any cross-account role.

1. Enter `examplerole` for the role name, and then choose **Next Step**.

1. After reviewing the role, choose **Create Role**.

   The `examplerole` role is displayed in the list of roles.

1. Choose the role name `examplerole`.

1. Select the **Trust Relationships** tab.

1. Choose **Show policy document** and verify that the trust policy shown matches the following policy.

   The following trust policy establishes trust with Account C by allowing it to perform the `sts:AssumeRole` action. For more information, see [AssumeRole](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html) in the *AWS Security Token Service API Reference*.

------
#### [ JSON ]

****  

   ```
   {
       "Version":"2012-10-17",		 	 	 
       "Statement": [
           {
               "Sid": "",
               "Effect": "Allow",
               "Principal": {
                   "AWS": "arn:aws:iam::111122223333:root"
               },
               "Action": "sts:AssumeRole"
           }
       ]
   }
   ```

------

1. Note the Amazon Resource Name (ARN) of the `examplerole` role that you created. 

   Later in the following steps, you attach a user policy to allow an IAM user to assume this role, and you identify the role by the ARN value. 
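The console steps above can also be sketched with the AWS CLI. This is an assumption-laden equivalent, not part of the original walkthrough; it assumes the trust policy and access policy shown above are saved locally as `trust-policy.json` and `access-policy.json` (hypothetical file names):

```shell
# Create the role with the trust policy that lets Account C assume it.
aws iam create-role --role-name examplerole \
    --assume-role-policy-document file://trust-policy.json \
    --profile AccountAadmin

# Create the managed policy that allows s3:GetObject on the bucket.
aws iam create-policy --policy-name access-accountA-bucket \
    --policy-document file://access-policy.json \
    --profile AccountAadmin

# Attach the managed policy to the role (replace AccountA-ID with the Account A ID).
aws iam attach-role-policy --role-name examplerole \
    --policy-arn arn:aws:iam::AccountA-ID:policy/access-accountA-bucket \
    --profile AccountAadmin
```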

## Step 2: Do the Account B tasks
<a name="access-policies-walkthrough-example4-step2"></a>

For this example, the bucket owned by Account A needs an object owned by another account. In this step, the Account B administrator uploads an object using the command line tools.
+ Using the `put-object` AWS CLI command, upload an object to `amzn-s3-demo-bucket1`. 

  ```
  aws s3api put-object --bucket amzn-s3-demo-bucket1 --key HappyFace.jpg --body HappyFace.jpg --grant-full-control id="canonicalUserId-ofTheBucketOwner" --profile AccountBadmin
  ```

  Note the following:
  + The `--profile` parameter specifies the `AccountBadmin` profile, so the object is owned by Account B.
  + The `--grant-full-control` parameter grants the bucket owner full-control permission on the object, as required by the bucket policy.
  + The `--body` parameter identifies the source file to upload. For example, if the file is on the C: drive of a Windows computer, you specify `c:\HappyFace.jpg`. 

## Step 3: Do the Account C tasks
<a name="access-policies-walkthrough-example4-step3"></a>

In the preceding steps, Account A created a role, `examplerole`, establishing trust with Account C. This role allows users in Account C to access Account A. In this step, the Account C administrator creates a user (Dave) and grants him permission to call `sts:AssumeRole` on the role. This allows Dave to assume `examplerole` and temporarily gain access to Account A. The access policy that Account A attached to the role limits what Dave can do when he accesses Account A: specifically, he can get objects in `amzn-s3-demo-bucket1`.

### Step 3.1: Create a user in Account C and delegate permission to assume examplerole
<a name="cross-acct-access-using-role-step3-1"></a>

1. Using the IAM user sign-in URL for Account C, first sign in to the AWS Management Console as the **AccountCadmin** user. 

   

1. In the [IAM Console](https://console.aws.amazon.com/iam/), create a user, Dave. 

   For step-by-step instructions, see [Creating IAM users (AWS Management Console)](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html#id_users_create_console) in the *IAM User Guide*. 

1. Note Dave's credentials. Dave needs these credentials to assume the `examplerole` role.

1. Create an inline policy for the IAM user Dave that delegates the `sts:AssumeRole` permission on the `examplerole` role in Account A. 

   1. In the navigation pane on the left, choose **Users**.

   1. Choose the user name **Dave**.

   1. On the user details page, select the **Permissions** tab and then expand the **Inline Policies** section.

   1. Choose **click here** (or **Create User Policy**).

   1. Choose **Custom Policy**, and then choose **Select**.

   1. Enter a name for the policy in the **Policy Name** field.

   1. Copy the following policy into the **Policy Document** field.

      You must update the policy by providing the `AccountA-ID`.

------
#### [ JSON ]

****  

      ```
      {
          "Version":"2012-10-17",		 	 	 
          "Statement": [
              {
                  "Effect": "Allow",
                  "Action": [
                      "sts:AssumeRole"
                  ],
                  "Resource": "arn:aws:iam::111122223333:role/examplerole"
              }
          ]
      }
      ```

------

   1. Choose **Apply Policy**.

1. Save Dave's credentials to the config file of the AWS CLI by adding another profile, `AccountCDave`.

   ```
   [profile AccountCDave]
   aws_access_key_id = UserDaveAccessKeyID
   aws_secret_access_key = UserDaveSecretAccessKey
   region = us-west-2
   ```
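As with the other console steps, the inline policy can also be attached from the CLI. The following sketch assumes the policy document above is saved locally as `assume-role-policy.json` and that the policy is named `assume-examplerole` (both hypothetical names):

```shell
# Attach the inline policy to user Dave as the Account C administrator.
aws iam put-user-policy --user-name Dave \
    --policy-name assume-examplerole \
    --policy-document file://assume-role-policy.json \
    --profile AccountCadmin
```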

### Step 3.2: Assume role (examplerole) and access objects
<a name="cross-acct-access-using-role-step3-2"></a>

Now Dave can access objects in the bucket owned by Account A as follows:
+ Dave first assumes the `examplerole` using his own credentials. This will return temporary credentials.
+ Using the temporary credentials, Dave will then access objects in Account A's bucket.

1. At the command prompt, run the following AWS CLI `assume-role` command using the `AccountCDave` profile. 

   You must update the ARN value in the command by providing the `AccountA-ID` where `examplerole` is defined.

   ```
   aws sts assume-role --role-arn arn:aws:iam::AccountA-ID:role/examplerole --profile AccountCDave --role-session-name test
   ```

   In response, AWS Security Token Service (AWS STS) returns temporary security credentials (access key ID, secret access key, and a session token).

1. Save the temporary security credentials in the AWS CLI config file under the `TempCred` profile.

   ```
   [profile TempCred]
   aws_access_key_id = temp-access-key-ID
   aws_secret_access_key = temp-secret-access-key
   aws_session_token = session-token
   region = us-west-2
   ```

1. At the command prompt, run the following AWS CLI command to access objects using the temporary credentials. For example, the following command uses the `get-object` API to download the `HappyFace.jpg` object and save it locally.

   ```
   aws s3api get-object --bucket amzn-s3-demo-bucket1 --key HappyFace.jpg SaveFileAs.jpg --profile TempCred
   ```

   Because the access policy attached to `examplerole` allows the `s3:GetObject` action, Amazon S3 processes the request. You can retrieve any other object in the bucket the same way.

   If you try any other action—for example, `get-object-acl`—you will get permission denied because the role isn't allowed that action.

   ```
   aws s3api get-object-acl --bucket amzn-s3-demo-bucket1 --key HappyFace.jpg --profile TempCred
   ```

   In this walkthrough, user Dave assumed the role and accessed the object by using temporary credentials. An application in Account C could access objects in `amzn-s3-demo-bucket1` the same way: Account C delegates the application permission to assume `examplerole`, and the application obtains temporary security credentials.
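Instead of pasting the temporary credentials into a profile by hand, you can script the two steps. This is a hedged sketch, not part of the original walkthrough; `AccountA-ID` is a placeholder, and the extraction relies only on standard `python3`:

```shell
# Capture the Credentials object from the assume-role response as JSON.
creds=$(aws sts assume-role \
    --role-arn arn:aws:iam::AccountA-ID:role/examplerole \
    --role-session-name test \
    --profile AccountCDave \
    --query Credentials --output json)

# Export each field as the environment variables that the AWS CLI reads.
export AWS_ACCESS_KEY_ID=$(echo "$creds" | python3 -c 'import json,sys; print(json.load(sys.stdin)["AccessKeyId"])')
export AWS_SECRET_ACCESS_KEY=$(echo "$creds" | python3 -c 'import json,sys; print(json.load(sys.stdin)["SecretAccessKey"])')
export AWS_SESSION_TOKEN=$(echo "$creds" | python3 -c 'import json,sys; print(json.load(sys.stdin)["SessionToken"])')

# Subsequent AWS CLI calls now use the temporary credentials directly.
aws s3api get-object --bucket amzn-s3-demo-bucket1 --key HappyFace.jpg SaveFileAs.jpg
```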

## Step 4: Clean up
<a name="access-policies-walkthrough-example4-step6"></a>

1. After you're done testing, you can do the following to clean up:

   1. Sign in to the [AWS Management Console](https://console.aws.amazon.com/) using Account A credentials, and do the following:
     + In the Amazon S3 console, remove the bucket policy attached to `amzn-s3-demo-bucket1`. In the bucket **Properties**, delete the policy in the **Permissions** section. 
     + If the bucket is created for this exercise, in the Amazon S3 console, delete the objects and then delete the bucket. 
     + In the [IAM Console](https://console.aws.amazon.com/iam/), remove the `examplerole` you created in Account A. For step-by-step instructions, see [Deleting an IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_manage.html#id_users_deleting) in the *IAM User Guide*. 
     + In the [IAM Console](https://console.aws.amazon.com/iam/), remove the **AccountAadmin** user.

1. Sign in to the [IAM Console](https://console.aws.amazon.com/iam/) by using Account B credentials. Delete the user **AccountBadmin**. 

1. Sign in to the [IAM Console](https://console.aws.amazon.com/iam/) by using Account C credentials. Delete **AccountCadmin** and the user Dave.

## Related resources
<a name="RelatedResources-managing-access-example4"></a>

For more information that's related to this walkthrough, see the following resources in the *IAM User Guide*:
+ [Creating a role to delegate permissions to an IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user.html)
+ [Tutorial: Delegate Access Across AWS accounts Using IAM Roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial-cross-account-with-roles.html)
+ [Managing IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage.html)

# Using service-linked roles for Amazon S3 Storage Lens
<a name="using-service-linked-roles"></a>

To use Amazon S3 Storage Lens to collect and aggregate metrics across all your accounts in AWS Organizations, you must first ensure that S3 Storage Lens has trusted access enabled by the management account in your organization. S3 Storage Lens creates a service-linked role (SLR) to allow it to get the list of AWS accounts belonging to your organization. This list of accounts is used by S3 Storage Lens to collect metrics for S3 resources in all the member accounts when the S3 Storage Lens dashboard or configurations are created or updated.

Amazon S3 Storage Lens uses AWS Identity and Access Management (IAM) [ service-linked roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_terms-and-concepts.html#iam-term-service-linked-role). A service-linked role is a unique type of IAM role that is linked directly to S3 Storage Lens. Service-linked roles are predefined by S3 Storage Lens and include all the permissions that the service requires to call other AWS services on your behalf.

A service-linked role makes setting up S3 Storage Lens easier because you don't have to add the necessary permissions manually. S3 Storage Lens defines the permissions of its service-linked roles, and unless defined otherwise, only S3 Storage Lens can assume its roles. The defined permissions include the trust policy and the permissions policy, and that permissions policy can't be attached to any other IAM entity.

You can delete this service-linked role only after first deleting the related resources. This protects your S3 Storage Lens resources because you can't inadvertently remove permission to access the resources.

For information about other services that support service-linked roles, see [AWS services that work with IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_aws-services-that-work-with-iam.html) and look for the services that have **Yes** in the **Service-Linked Role** column. Choose a **Yes** with a link to view the service-linked role documentation for that service.

## Service-linked role permissions for Amazon S3 Storage Lens
<a name="slr-permissions"></a>

S3 Storage Lens uses the service-linked role named **AWSServiceRoleForS3StorageLens**. This role enables access to the AWS services and resources that are used or managed by S3 Storage Lens, allowing S3 Storage Lens to access AWS Organizations resources on your behalf.

The S3 Storage Lens service-linked role trusts the following service to assume the role:
+ `storage-lens.s3.amazonaws.com`

The role permissions policy allows S3 Storage Lens to complete the following actions:
+ `organizations:DescribeOrganization`
+ `organizations:ListAccounts`
+ `organizations:ListAWSServiceAccessForOrganization`
+ `organizations:ListDelegatedAdministrators`

You must configure permissions to allow an IAM entity (such as a user, group, or role) to create, edit, or delete a service-linked role. For more information, see [Service-linked role permissions](https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html#service-linked-role-permissions) in the *IAM User Guide*.

## Creating a service-linked role for S3 Storage Lens
<a name="create-slr"></a>

You don't need to manually create a service-linked role. When you complete one of the following tasks while signed in to the AWS Organizations management account or a delegated administrator account, S3 Storage Lens creates the service-linked role for you:
+ Create an S3 Storage Lens dashboard configuration for your organization in the Amazon S3 console.
+ `PUT` an S3 Storage Lens configuration for your organization by using the REST API, the AWS CLI, or the AWS SDKs.

**Note**  
S3 Storage Lens supports a maximum of five delegated administrators per organization.

If you delete this service-linked role, the preceding actions will re-create it as needed.
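To check whether the service-linked role already exists in your account, you can query IAM directly. This is a sketch, not part of the official setup steps:

```shell
# Describe the S3 Storage Lens service-linked role, if it exists.
# The command fails with a NoSuchEntity error if the role has not been created yet.
aws iam get-role --role-name AWSServiceRoleForS3StorageLens
```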

### Example policy for S3 Storage Lens service-linked role
<a name="slr-sample-policy"></a>

**Example Permissions policy for the S3 Storage Lens service-linked role**    
****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Sid": "AwsOrgsAccess",
            "Effect": "Allow",
            "Action": [
                "organizations:DescribeOrganization",
                "organizations:ListAccounts",
                "organizations:ListAWSServiceAccessForOrganization",
                "organizations:ListDelegatedAdministrators"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}
```

## Editing a service-linked role for Amazon S3 Storage Lens
<a name="edit-slr"></a>

S3 Storage Lens doesn't allow you to edit the AWSServiceRoleForS3StorageLens service-linked role. After you create a service-linked role, you can't change the name of the role because various entities might reference the role. However, you can edit the description of the role using IAM. For more information, see [Editing a service-linked role](https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html#edit-service-linked-role) in the *IAM User Guide*.

## Deleting a service-linked role for Amazon S3 Storage Lens
<a name="delete-slr"></a>

If you no longer need to use the service-linked role, we recommend that you delete that role. That way you don't have an unused entity that is not actively monitored or maintained. However, you must clean up the resources for your service-linked role before you can manually delete it.

**Note**  
If the Amazon S3 Storage Lens service is using the role when you try to delete the resources, then the deletion might fail. If that happens, wait for a few minutes and try the operation again.

To delete AWSServiceRoleForS3StorageLens, you must first delete all the organization-level S3 Storage Lens configurations in all AWS Regions, by using the AWS Organizations management account or a delegated administrator account.

The resources are organization-level S3 Storage Lens configurations. Use S3 Storage Lens to clean up the resources and then use the [IAM Console](https://console.aws.amazon.com/iam/), CLI, REST API, or AWS SDK to delete the role. 

In the REST API, AWS CLI, and SDKs, you can discover S3 Storage Lens configurations by calling `ListStorageLensConfigurations` in every Region where your organization has created them. Use the `DeleteStorageLensConfiguration` action to delete these configurations so that you can then delete the role.

**Note**  
To delete the service-linked role, you must delete all the organization-level S3 Storage Lens configurations in all the Regions where they exist.

**To delete Amazon S3 Storage Lens resources used by the AWSServiceRoleForS3StorageLens SLR**

1. To get a list of your organization-level configurations, call `ListStorageLensConfigurations` in every Region where you have S3 Storage Lens configurations. You can also obtain this list from the Amazon S3 console.

1. Delete these configurations from the appropriate Regional endpoints by invoking the `DeleteStorageLensConfiguration` API call or by using the Amazon S3 console. 
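The two steps above can be sketched with the AWS SDK for Python (Boto3). This is a hedged sketch, not a documented procedure: it assumes Boto3 is installed, that credentials for the management or delegated administrator account are configured, and it omits `NextToken` pagination for brevity.

```python
# Hedged sketch: list and delete S3 Storage Lens configurations in each Region.
# Assumes Boto3 and credentials for the Organizations management (or delegated
# administrator) account. Pagination (NextToken) is omitted for brevity.

def config_ids(list_response):
    """Extract configuration IDs from a ListStorageLensConfigurations response."""
    return [c["Id"] for c in list_response.get("StorageLensConfigurationList", [])]

def delete_all_configurations(regions, account_id):
    """Delete every S3 Storage Lens configuration in the given Regions."""
    import boto3  # imported here so config_ids stays dependency-free
    for region in regions:
        s3control = boto3.client("s3control", region_name=region)
        response = s3control.list_storage_lens_configurations(AccountId=account_id)
        for config_id in config_ids(response):
            s3control.delete_storage_lens_configuration(
                ConfigId=config_id, AccountId=account_id
            )
```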

**To manually delete the service-linked role using IAM**

After you have deleted the configurations, delete the AWSServiceRoleForS3StorageLens SLR from the [IAM Console](https://console.aws.amazon.com/iam/) or by invoking the `DeleteServiceLinkedRole` IAM API operation (by using the AWS CLI or an AWS SDK). For more information, see [Deleting a service-linked role](https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html#delete-service-linked-role) in the *IAM User Guide*.

## Supported Regions for S3 Storage Lens service-linked roles
<a name="slr-regions"></a>

S3 Storage Lens supports using service-linked roles in all of the AWS Regions where the service is available. For more information, see [Amazon S3 Regions and Endpoints](https://docs.aws.amazon.com/general/latest/gr/s3.html).

# Troubleshooting Amazon S3 identity and access
<a name="security_iam_troubleshoot"></a>

Use the following information to help you diagnose and fix common issues that you might encounter when working with Amazon S3 and IAM.

**Topics**
+ [

## I received an access denied error
](#access_denied_403)
+ [

## I am not authorized to perform an action in Amazon S3
](#security_iam_troubleshoot-no-permissions)
+ [

## I am not authorized to perform iam:PassRole
](#security_iam_troubleshoot-passrole)
+ [

## I want to allow people outside of my AWS account to access my Amazon S3 resources
](#security_iam_troubleshoot-cross-account-access)
+ [

# Troubleshoot access denied (403 Forbidden) errors in Amazon S3
](troubleshoot-403-errors.md)

## I received an access denied error
<a name="access_denied_403"></a>

Verify that neither the bucket policy nor the identity-based policy contains an explicit `Deny` statement against the requester that you're trying to grant permissions to.

For detailed information about troubleshooting access denied errors, see [Troubleshoot access denied (403 Forbidden) errors in Amazon S3](troubleshoot-403-errors.md).

## I am not authorized to perform an action in Amazon S3
<a name="security_iam_troubleshoot-no-permissions"></a>

If you receive an error that you're not authorized to perform an action, your policies must be updated to allow you to perform the action.

The following example error occurs when the `mateojackson` IAM user tries to use the console to view details about a fictional `my-example-widget` resource but doesn't have the fictional `s3:GetWidget` permission.

```
User: arn:aws:iam::123456789012:user/mateojackson is not authorized to perform: s3:GetWidget on resource: my-example-widget
```

In this case, the policy for the `mateojackson` user must be updated to allow access to the `my-example-widget` resource by using the `s3:GetWidget` action.
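As a purely illustrative sketch (both the `s3:GetWidget` action and the `my-example-widget` resource are fictional), an identity-based policy that grants this access might look like the following:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:GetWidget",
            "Resource": "arn:aws:s3:::my-example-widget"
        }
    ]
}
```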

If you need help, contact your AWS administrator. Your administrator is the person who provided you with your sign-in credentials.

## I am not authorized to perform iam:PassRole
<a name="security_iam_troubleshoot-passrole"></a>

If you receive an error that you're not authorized to perform the `iam:PassRole` action, your policies must be updated to allow you to pass a role to Amazon S3.

Some AWS services allow you to pass an existing role to that service instead of creating a new service role or service-linked role. To do this, you must have permissions to pass the role to the service.

The following example error occurs when an IAM user named `marymajor` tries to use the console to perform an action in Amazon S3. However, the action requires the service to have permissions that are granted by a service role. Mary does not have permissions to pass the role to the service.

```
User: arn:aws:iam::123456789012:user/marymajor is not authorized to perform: iam:PassRole
```

In this case, Mary's policies must be updated to allow her to perform the `iam:PassRole` action.
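A policy statement that grants this permission might look like the following sketch. The role ARN is a placeholder, and the optional `iam:PassedToService` condition (which limits the service that the role can be passed to) uses a placeholder service principal — substitute the service that will actually assume the role:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "arn:aws:iam::123456789012:role/example-service-role",
            "Condition": {
                "StringEquals": {
                    "iam:PassedToService": "s3.amazonaws.com"
                }
            }
        }
    ]
}
```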

If you need help, contact your AWS administrator. Your administrator is the person who provided you with your sign-in credentials.

## I want to allow people outside of my AWS account to access my Amazon S3 resources
<a name="security_iam_troubleshoot-cross-account-access"></a>

You can create a role that users in other accounts or people outside of your organization can use to access your resources. You can specify who is trusted to assume the role. For services that support resource-based policies or access control lists (ACLs), you can use those policies to grant people access to your resources.

To learn more, consult the following:
+ To learn whether Amazon S3 supports these features, see [How Amazon S3 works with IAM](security_iam_service-with-iam.md).
+ To learn how to provide access to your resources across AWS accounts that you own, see [Providing access to an IAM user in another AWS account that you own](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_common-scenarios_aws-accounts.html) in the *IAM User Guide*.
+ To learn how to provide access to your resources to third-party AWS accounts, see [Providing access to AWS accounts owned by third parties](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_common-scenarios_third-party.html) in the *IAM User Guide*.
+ To learn how to provide access through identity federation, see [Providing access to externally authenticated users (identity federation)](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_common-scenarios_federated-users.html) in the *IAM User Guide*.
+ To learn the difference between using roles and resource-based policies for cross-account access, see [Cross account resource access in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies-cross-account-resource-access.html) in the *IAM User Guide*.

# Troubleshoot access denied (403 Forbidden) errors in Amazon S3
<a name="troubleshoot-403-errors"></a>

Access denied (HTTP `403 Forbidden`) errors appear when AWS explicitly or implicitly denies an authorization request. 
+ An *explicit denial* occurs when a policy contains a `Deny` statement for the specific AWS action. 
+ An *implicit denial* occurs when there is no applicable `Deny` statement and also no applicable `Allow` statement. 

Because an AWS Identity and Access Management (IAM) policy implicitly denies an IAM principal by default, the policy must explicitly allow the principal to perform an action. Otherwise, the policy implicitly denies access. For more information, see [The difference between explicit and implicit denies](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_evaluation-logic.html#AccessPolicyLanguage_Interplay) in the *IAM User Guide*. For information about the policy evaluation logic that determines whether an access request is allowed or denied, see [Policy evaluation logic](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_evaluation-logic.html) in the *IAM User Guide*. 

For more information about the permissions to S3 API operations by S3 resource types, see [Required permissions for Amazon S3 API operations](using-with-s3-policy-actions.md).

The following topics cover the most common causes of access denied errors in Amazon S3.

**Note**  
For access denied (HTTP `403 Forbidden`) errors, Amazon S3 doesn't charge the bucket owner when the request is initiated outside of the bucket owner's individual AWS account or the bucket owner's AWS organization.

**Topics**
+ [

## Access denied message examples and how to troubleshoot them
](#access-denied-message-examples)
+ [

## Access denied due to Requester Pays settings
](#access-denied-requester-pays)
+ [

## Bucket policies and IAM policies
](#bucket-iam-policies)
+ [

## Amazon S3 ACL settings
](#troubleshoot-403-acl-settings)
+ [

## S3 Block Public Access settings
](#troubleshoot-403-bpa)
+ [

## Amazon S3 encryption settings
](#troubleshoot-403-encryption)
+ [

## S3 Object Lock settings
](#troubleshoot-403-object-lock)
+ [

## VPC endpoint policies
](#troubleshoot-403-vpc)
+ [

## AWS Organizations policies
](#troubleshoot-403-orgs)
+ [

## CloudFront distribution access
](#troubleshoot-403-cloudfront)
+ [

## Access point settings
](#troubleshoot-403-access-points)
+ [

## Additional resources
](#troubleshoot-403-additional-resources)

**Note**  
If you're trying to troubleshoot a permissions issue, start with the [Access denied message examples and how to troubleshoot them](#access-denied-message-examples) section, then go to the [Bucket policies and IAM policies](#bucket-iam-policies) section. Also be sure to follow the guidance in [Tips for checking permissions](#troubleshoot-403-tips).

## Access denied message examples and how to troubleshoot them
<a name="access-denied-message-examples"></a>

Amazon S3 now includes additional context in access denied (HTTP `403 Forbidden`) errors for requests made to resources within the same AWS account or same organization in AWS Organizations. This new context includes the type of policy that denied access, the reason for denial, and information about the IAM user or role that requested access to the resource. 

This additional context helps you to troubleshoot access issues, identify the root cause of access denied errors, and fix incorrect access controls by updating the relevant policies. This additional context is also available in AWS CloudTrail logs. Enhanced access denied error messages for same-account or same-organization requests are now available in all AWS Regions, including the AWS GovCloud (US) Regions and the China Regions. 

Most access denied error messages appear in the format `User user-arn is not authorized to perform action on "resource-arn" because context`. In this example, *`user-arn`* is the [Amazon Resource Name (ARN)](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.html#identifiers-arns) of the user that doesn't receive access, *`action`* is the service action that the policy denies, and *`resource-arn`* is the ARN of the resource on which the policy acts. The *`context`* field represents additional context about the policy type that explains why the policy denied access.

When a policy explicitly denies access because the policy contains a `Deny` statement, then the access denied error message includes the phrase `with an explicit deny in a type policy`, where *`type`* is the policy type. When the policy implicitly denies access, then the access denied error message includes the phrase `because no type policy allows the action action`, where *`action`* is the denied service action.

**Important**  
Enhanced access denied messages are returned only for same-account requests or for requests within the same organization in AWS Organizations. Cross-account requests outside of the same organization return a generic `Access Denied` message.   
For information about the policy evaluation logic that determines whether a cross-account access request is allowed or denied, see [Cross-account policy evaluation logic](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_evaluation-logic-cross-account.html) in the *IAM User Guide*. For a walkthrough that shows how to grant cross-account access, see [Example 2: Bucket owner granting cross-account bucket permissions](example-walkthroughs-managing-access-example2.md). 
For requests within the same organization in AWS Organizations:  
+ Enhanced access denied messages aren't returned if a denial occurs because of a virtual private cloud (VPC) endpoint policy.
+ Enhanced access denied messages are provided whenever both the bucket owner and the caller account belong to the same organization in AWS Organizations. Although buckets configured with the S3 Object Ownership **Bucket owner preferred** or **Object writer** settings might contain objects owned by different accounts, object ownership doesn't affect enhanced access denied messages. Enhanced access denied messages are returned for all object requests as long as the bucket owner and caller are in the same organization, regardless of who owns the specific object. For information about Object Ownership settings and configurations, see [Controlling ownership of objects and disabling ACLs for your bucket](about-object-ownership.md).
+ Enhanced access denied error messages aren't returned for requests made to directory buckets. Directory bucket requests return a generic `Access Denied` message.
+ If multiple policies of the same policy type deny an authorization request, the access denied error message doesn't specify the number of policies.
+ If multiple policy types deny an authorization request, the error message includes only one of those policy types.
+ If an access request is denied due to multiple reasons, the error message includes only one of the reasons for denial.

The following examples show the format for different types of access denied error messages and how to troubleshoot each type of message.

### Access denied due to Blocked Encryption Type
<a name="access-denied-due-to-blocked-encryption-type"></a>

To limit the server-side encryption types that can be used with your general purpose buckets, you can block SSE-C write requests by updating the default encryption configuration for your buckets. This bucket-level configuration blocks requests to upload objects that specify SSE-C. When SSE-C is blocked for a bucket, any `PutObject`, `CopyObject`, `POST Object`, multipart upload, or replication request that specifies SSE-C encryption is rejected with an HTTP `403 Forbidden` (`AccessDenied`) error.

This setting is a parameter of the `PutBucketEncryption` API operation and can also be updated by using the Amazon S3 console, AWS CLI, and AWS SDKs, if you have the `s3:PutEncryptionConfiguration` permission. Valid values are `SSE-C`, which blocks SSE-C encryption for the general purpose bucket, and `NONE`, which allows the use of SSE-C for writes to the bucket.

For example, when access is denied for a `PutObject` request because the `BlockedEncryptionTypes` setting blocks write requests specifying SSE-C, you receive the following message:

```
An error occurred (AccessDenied) when calling the PutObject operation: 
User: arn:aws:iam::123456789012:user/MaryMajor is not authorized to perform: 
s3:PutObject on resource: "arn:aws:s3:::amzn-s3-demo-bucket1/object-name" 
because this bucket has blocked upload requests that specify Server Side 
Encryption with Customer provided keys (SSE-C). Please specify a different 
server-side encryption type
```

For more information about this setting, see [Blocking or unblocking SSE-C for a general purpose bucket](blocking-unblocking-s3-c-encryption-gpb.md).

### Access denied due to a resource control policy – explicit denial
<a name="access-denied-rcp-examples-explicit"></a>

1. Check for a `Deny` statement for the action in your resource control policies (RCPs). For the following example, the action is `s3:GetObject`.

1. Update your RCP by removing the `Deny` statement. For more information, see [Update a resource control policy (RCP)](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_policies_update.html#update_policy-rcp) in the *AWS Organizations User Guide*. 

```
An error occurred (AccessDenied) when calling the GetObject operation: 
User: arn:aws:iam::777788889999:user/MaryMajor is not authorized to perform: 
s3:GetObject on resource: "arn:aws:s3:::amzn-s3-demo-bucket1/object-name" 
with an explicit deny in a resource control policy
```
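As a hedged illustration only, an RCP `Deny` statement that could produce this error might have the following general shape. The bucket name, organization ID, and condition are placeholders, and RCP statements use `"Principal": "*"` — check your actual RCPs for the statement that matches the denied action:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket1/*",
            "Condition": {
                "StringNotEquals": {
                    "aws:PrincipalOrgID": "o-exampleorgid"
                }
            }
        }
    ]
}
```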

### Access denied due to a service control policy – implicit denial
<a name="access-denied-scp-examples-implicit"></a>

1. Check for a missing `Allow` statement for the action in your service control policies (SCPs). For the following example, the action is `s3:GetObject`.

1. Update your SCP by adding the `Allow` statement. For more information, see [Updating an SCP](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps_create.html#update_policy) in the *AWS Organizations User Guide*.

```
User: arn:aws:iam::777788889999:user/MaryMajor is not authorized to perform:
s3:GetObject because no service control policy allows the s3:GetObject action
```
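For reference, an SCP `Allow` statement that permits the denied action might look like the following sketch. SCP statements don't use a `Principal` element, and `"Resource": "*"` is shown only as an illustrative placeholder — scope it down as appropriate:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": "*"
        }
    ]
}
```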

### Access denied due to a service control policy – explicit denial
<a name="access-denied-scp-examples-explicit"></a>

1. Check for a `Deny` statement for the action in your service control policies (SCPs). For the following example, the action is `s3:GetObject`.

1. Update your SCP by changing the `Deny` statement to allow the user the necessary access. For an example of how you can do this, see [Prevent IAM users and roles from making specified changes, with an exception for a specified admin role](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps_examples_general.html#example-scp-restricts-with-exception) in the *AWS Organizations User Guide*. For more information about updating your SCP, see [Updating an SCP](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps_create.html#update_policy) in the *AWS Organizations User Guide*.

```
User: arn:aws:iam::777788889999:user/MaryMajor is not authorized to perform: 
s3:GetObject with an explicit deny in a service control policy
```

### Access denied due to a VPC endpoint policy – implicit denial
<a name="access-denied-vpc-endpoint-examples-implicit"></a>

1. Check for a missing `Allow` statement for the action in your virtual private cloud (VPC) endpoint policies. For the following example, the action is `s3:GetObject`.

1. Update your VPC endpoint policy by adding the `Allow` statement. For more information, see [Update a VPC endpoint policy](https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-access.html#update-vpc-endpoint-policy) in the *AWS PrivateLink Guide*.

```
User: arn:aws:iam::123456789012:user/MaryMajor is not authorized to perform: 
s3:GetObject because no VPC endpoint policy allows the s3:GetObject action
```

### Access denied due to a VPC endpoint policy – explicit denial
<a name="access-denied-vpc-endpoint-examples-explicit"></a>

1. Check for an explicit `Deny` statement for the action in your virtual private cloud (VPC) endpoint policies. For the following example, the action is `s3:GetObject`.

1. Update your VPC endpoint policy by changing the `Deny` statement to allow the user the necessary access. For example, you can update your `Deny` statement to use the `aws:PrincipalAccount` condition key with the `StringNotEquals` condition operator to allow the specific principal access, as shown in [Example 7: Excluding certain principals from a `Deny` statement](amazon-s3-policy-keys.md#example-exclude-principal-from-deny-statement). For more information about updating your VPC endpoint policy, see [Update a VPC endpoint policy](https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-access.html#update-vpc-endpoint-policy) in the *AWS PrivateLink Guide*.

```
User: arn:aws:iam::123456789012:user/MaryMajor is not authorized to perform: 
s3:GetObject on resource: "arn:aws:s3:::amzn-s3-demo-bucket1/object-name" with 
an explicit deny in a VPC endpoint policy
```
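The `aws:PrincipalAccount` exclusion pattern described in the steps above can be sketched as follows. This VPC endpoint policy `Deny` statement blocks `s3:GetObject` for every principal except the placeholder account `111122223333`:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket1/*",
            "Condition": {
                "StringNotEquals": {
                    "aws:PrincipalAccount": "111122223333"
                }
            }
        }
    ]
}
```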

### Access denied due to a permissions boundary – implicit denial
<a name="access-denied-permissions-boundary-examples-implicit"></a>

1. Check for a missing `Allow` statement for the action in your permissions boundary. For the following example, the action is `s3:GetObject`.

1. Update your permissions boundary by adding the `Allow` statement to your IAM policy. For more information, see [Permissions boundaries for IAM entities](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_boundaries.html) and [Editing IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-edit.html) in the *IAM User Guide*.

```
User: arn:aws:iam::123456789012:user/MaryMajor is not authorized to perform: 
s3:GetObject on resource: "arn:aws:s3:::amzn-s3-demo-bucket1/object-name" 
because no permissions boundary allows the s3:GetObject action
```

### Access denied due to a permissions boundary – explicit denial
<a name="access-denied-permissions-boundary-examples-explicit"></a>

1. Check for an explicit `Deny` statement for the action in your permissions boundary. For the following example, the action is `s3:GetObject`.

1. Update your permissions boundary by changing the `Deny` statement in your IAM policy to allow the user the necessary access. For example, you can update your `Deny` statement to use the `aws:PrincipalAccount` condition key with the `StringNotEquals` condition operator to allow the specific principal access, as shown in [aws:PrincipalAccount](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-principalaccount) in the *IAM User Guide*. For more information, see [Permissions boundaries for IAM entities](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_boundaries.html) and [Editing IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-edit.html) in the *IAM User Guide*.

```
User: arn:aws:iam::777788889999:user/MaryMajor is not authorized to perform: 
s3:GetObject with an explicit deny in a permissions boundary
```

### Access denied due to session policies – implicit denial
<a name="access-denied-session-policy-examples-implicit"></a>

1. Check for a missing `Allow` statement for the action in your session policies. For the following example, the action is `s3:GetObject`.

1. Update your session policy by adding the `Allow` statement. For more information, see [Session policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#policies_session) and [Editing IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-edit.html) in the *IAM User Guide*.

```
User: arn:aws:iam::123456789012:user/MaryMajor is not authorized to perform: 
s3:GetObject because no session policy allows the s3:GetObject action
```

### Access denied due to session policies – explicit denial
<a name="access-denied-session-policy-examples-explicit"></a>

1. Check for an explicit `Deny` statement for the action in your session policies. For the following example, the action is `s3:GetObject`.

1. Update your session policy by changing the `Deny` statement to allow the user the necessary access. For example, you can update your `Deny` statement to use the `aws:PrincipalAccount` condition key with the `StringNotEquals` condition operator to allow the specific principal access, as shown in [Example 7: Excluding certain principals from a `Deny` statement](amazon-s3-policy-keys.md#example-exclude-principal-from-deny-statement). For more information about updating your session policy, see [Session policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#policies_session) and [Editing IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-edit.html) in the *IAM User Guide*.

```
User: arn:aws:iam::123456789012:user/MaryMajor is not authorized to perform: 
s3:GetObject on resource: "arn:aws:s3:::amzn-s3-demo-bucket1/object-name" with 
an explicit deny in a session policy
```

### Access denied due to resource-based policies – implicit denial
<a name="access-denied-resource-based-policy-examples-implicit"></a>

**Note**  
In this section, *resource-based policies* refers to policies such as bucket policies and access point policies.

1. Check for a missing `Allow` statement for the action in your resource-based policy. Also check whether the `IgnorePublicAcls` S3 Block Public Access setting is applied on the bucket, access point, or account level. For the following example, the action is `s3:GetObject`.

1. Update your policy by adding the `Allow` statement. For more information, see [Resource-based policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#policies_resource-based) and [Editing IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-edit.html) in the *IAM User Guide*.

   You might also need to adjust your `IgnorePublicAcls` block public access setting for the bucket, access point, or account. For more information, see [Access denied due to Block Public Access settings](#access-denied-bpa-examples) and [Configuring block public access settings for your S3 buckets](configuring-block-public-access-bucket.md).

```
User: arn:aws:iam::123456789012:user/MaryMajor is not authorized to perform: 
s3:GetObject because no resource-based policy allows the s3:GetObject action
```
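A bucket policy `Allow` statement that would resolve this implicit denial might look like the following sketch; the user ARN and bucket name are placeholders taken from the example error message:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::123456789012:user/MaryMajor"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket1/*"
        }
    ]
}
```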

### Access denied due to resource-based policies – explicit denial
<a name="access-denied-resource-based-policy-examples-explicit"></a>

**Note**  
In this section, *resource-based policies* refers to policies such as bucket policies and access point policies.

1. Check for an explicit `Deny` statement for the action in your resource-based policy. Also check whether the `RestrictPublicBuckets` S3 Block Public Access setting is applied on the bucket, access point, or account level. For the following example, the action is `s3:GetObject`.

1. Update your policy by changing the `Deny` statement to allow the user the necessary access. For example, you can update your `Deny` statement to use the `aws:PrincipalAccount` condition key with the `StringNotEquals` condition operator to allow the specific principal access, as shown in [Example 7: Excluding certain principals from a `Deny` statement](amazon-s3-policy-keys.md#example-exclude-principal-from-deny-statement). For more information about updating your resource-based policy, see [Resource-based policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#policies_resource-based) and [Editing IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-edit.html) in the *IAM User Guide*.

   You might also need to adjust your `RestrictPublicBuckets` block public access setting for the bucket, access point, or account. For more information, see [Access denied due to Block Public Access settings](#access-denied-bpa-examples) and [Configuring block public access settings for your S3 buckets](configuring-block-public-access-bucket.md).

```
User: arn:aws:iam::123456789012:user/MaryMajor is not authorized to perform: 
s3:GetObject on resource: "arn:aws:s3:::amzn-s3-demo-bucket1/object-name" with 
an explicit deny in a resource-based policy
```

### Access denied due to identity-based policies – implicit denial
<a name="access-denied-identity-based-policy-examples-implicit"></a>

1. Check for a missing `Allow` statement for the action in identity-based policies attached to the identity. For the following example, the action is `s3:GetObject` attached to the user `MaryMajor`.

1. Update your policy by adding the `Allow` statement. For more information, see [Identity-based policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#policies_id-based) and [Editing IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-edit.html) in the *IAM User Guide*.

```
User: arn:aws:iam::123456789012:user/MaryMajor is not authorized to perform: 
s3:GetObject because no identity-based policy allows the s3:GetObject action
```

### Access denied due to identity-based policies – explicit denial
<a name="access-denied-identity-based-policy-examples-explicit"></a>

1. Check for an explicit `Deny` statement for the action in identity-based policies attached to the identity. For the following example, the action is `s3:GetObject` attached to the user `MaryMajor`.

1. Update your policy by changing the `Deny` statement to allow the user the necessary access. For example, you can update your `Deny` statement to use the `aws:PrincipalAccount` condition key with the `StringNotEquals` condition operator to allow the specific principal access, as shown in [aws:PrincipalAccount](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-principalaccount) in the *IAM User Guide*. For more information, see [Identity-based policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#policies_id-based) and [Editing IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-edit.html) in the *IAM User Guide*.

```
User: arn:aws:iam::123456789012:user/MaryMajor is not authorized to perform: 
s3:GetObject on resource: "arn:aws:s3:::amzn-s3-demo-bucket1/object-name" with 
an explicit deny in an identity-based policy
```

### Access denied due to Block Public Access settings
<a name="access-denied-bpa-examples"></a>

The Amazon S3 Block Public Access feature provides settings for access points, buckets, and accounts to help you manage public access to Amazon S3 resources. For more information about how Amazon S3 defines "public," see [The meaning of "public"](access-control-block-public-access.md#access-control-block-public-access-policy-status). 

By default, new buckets, access points, and objects don't allow public access. However, users can modify bucket policies, access point policies, IAM user policies, object permissions, or access control lists (ACLs) to allow public access. S3 Block Public Access settings override these policies, permissions, and ACLs. Since April 2023, all Block Public Access settings are enabled by default for new buckets. 

When Amazon S3 receives a request to access a bucket or an object, it determines whether the bucket or the bucket owner's account has a block public access setting applied. If the request was made through an access point, Amazon S3 also checks for block public access settings for the access point. If there is an existing block public access setting that prohibits the requested access, Amazon S3 rejects the request.

Amazon S3 Block Public Access provides four settings. These settings are independent and can be used in any combination. Each setting can be applied to an access point, a bucket, or an entire AWS account. If the block public access settings for the access point, bucket, or account differ, then Amazon S3 applies the most restrictive combination of the access point, bucket, and account settings.

When Amazon S3 evaluates whether an operation is prohibited by a block public access setting, it rejects any request that violates an access point, bucket, or account setting.
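As a minimal sketch, the following Python code applies all of these settings at the bucket level with the AWS SDK for Python (Boto3). This is a hedged example: it assumes Boto3 is installed, AWS credentials are configured, and the caller has the `s3:PutBucketPublicAccessBlock` permission; the bucket name is a placeholder.

```python
# Hedged sketch: applying all four S3 Block Public Access settings to a bucket.
# Assumes Boto3 and configured AWS credentials; the bucket name is a placeholder.

def public_access_block_configuration(enabled=True):
    """Build the PublicAccessBlockConfiguration payload for put_public_access_block."""
    return {
        "BlockPublicAcls": enabled,
        "IgnorePublicAcls": enabled,
        "BlockPublicPolicy": enabled,
        "RestrictPublicBuckets": enabled,
    }

def apply_block_public_access(bucket):
    """Enable all four Block Public Access settings on the given bucket."""
    import boto3  # imported here so the helper above has no dependencies
    s3 = boto3.client("s3")
    s3.put_public_access_block(
        Bucket=bucket,
        PublicAccessBlockConfiguration=public_access_block_configuration(),
    )
```

Because the settings are independent, you can also pass a configuration that enables only some of them.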

The four settings provided by Amazon S3 Block Public Access are as follows: 
+ `BlockPublicAcls` – This setting applies to `PutBucketAcl`, `PutObjectAcl`, `PutObject`, `CreateBucket`, `CopyObject`, and `POST Object` requests. The `BlockPublicAcls` setting causes the following behavior: 
  + `PutBucketAcl` and `PutObjectAcl` calls fail if the specified access control list (ACL) is public.
  + `PutObject` calls fail if the request includes a public ACL.
  + If this setting is applied to an account, then `CreateBucket` calls fail with an HTTP `400` (`Bad Request`) response if the request includes a public ACL.

  For example, when access is denied for a `CopyObject` request because of the `BlockPublicAcls` setting, you receive the following message: 

  ```
  An error occurred (AccessDenied) when calling the CopyObject operation: 
  User: arn:aws:sts::123456789012:user/MaryMajor is not authorized to 
  perform: s3:CopyObject on resource: "arn:aws:s3:::amzn-s3-demo-bucket1/object-name" 
  because public ACLs are prevented by the BlockPublicAcls setting in S3 Block Public Access.
  ```
+ `IgnorePublicAcls` – The `IgnorePublicAcls` setting causes Amazon S3 to ignore all public ACLs on a bucket and any objects that it contains. If your request's permission is granted only by a public ACL, then the `IgnorePublicAcls` setting rejects the request.

  Any denial resulting from the `IgnorePublicAcls` setting is implicit. For example, if `IgnorePublicAcls` denies a `GetObject` request because of a public ACL, you receive the following message: 

  ```
  User: arn:aws:iam::123456789012:user/MaryMajor is not authorized to perform: 
  s3:GetObject because no resource-based policy allows the s3:GetObject action
  ```
+ `BlockPublicPolicy` – This setting applies to `PutBucketPolicy` and `PutAccessPointPolicy` requests. 

  Setting `BlockPublicPolicy` for a bucket causes Amazon S3 to reject calls to `PutBucketPolicy` if the specified bucket policy allows public access. This setting also causes Amazon S3 to reject calls to `PutAccessPointPolicy` for all of the bucket's same-account access points if the specified policy allows public access.

  Setting `BlockPublicPolicy` for an access point causes Amazon S3 to reject calls to `PutAccessPointPolicy` and `PutBucketPolicy` that are made through the access point if the specified policy (for either the access point or the underlying bucket) allows public access.

  For example, when access is denied on a `PutBucketPolicy` request because of the `BlockPublicPolicy` setting, you receive the following message: 

  ```
  An error occurred (AccessDenied) when calling the PutBucketPolicy operation: 
  User: arn:aws:sts::123456789012:user/MaryMajor is not authorized to 
  perform: s3:PutBucketPolicy on resource: "arn:aws:s3:::amzn-s3-demo-bucket1/object-name" 
  because public policies are prevented by the BlockPublicPolicy setting in S3 Block Public Access.
  ```
+ `RestrictPublicBuckets` – The `RestrictPublicBuckets` setting restricts access to an access point or bucket with a public policy to only AWS service principals and authorized users within the bucket owner's account and the access point owner's account. This setting blocks all cross-account access to the access point or bucket (except by AWS service principals), while still allowing users within the account to manage the access point or bucket. This setting also rejects all anonymous (or unsigned) calls.

  Any denial resulting from the `RestrictPublicBuckets` setting is explicit. For example, if `RestrictPublicBuckets` denies a `GetObject` request because of a public bucket or access point policy, you receive the following message: 

  ```
  User: arn:aws:iam::123456789012:user/MaryMajor is not authorized to perform: 
  s3:GetObject on resource: "arn:aws:s3:::amzn-s3-demo-bucket1/object-name" with 
  an explicit deny in a resource-based policy
  ```

For more information about these settings, see [Block public access settings](access-control-block-public-access.md#access-control-block-public-access-options). To review and update these settings, see [Configuring block public access](access-control-block-public-access.md#configuring-block-public-access).
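For example, you can review a bucket's Block Public Access configuration, or enable all four settings, by using the AWS CLI. The bucket name in this sketch is a placeholder:

```
# Review the current Block Public Access configuration for a bucket.
aws s3api get-public-access-block --bucket amzn-s3-demo-bucket

# Enable all four settings for the bucket.
aws s3api put-public-access-block --bucket amzn-s3-demo-bucket \
    --public-access-block-configuration \
    BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true
```

If no configuration has ever been applied to the bucket, `get-public-access-block` returns a `NoSuchPublicAccessBlockConfiguration` error; any account-level settings still apply in that case.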

## Access denied due to Requester Pays settings
<a name="access-denied-requester-pays"></a>

If the Amazon S3 bucket that you're trying to access has the Requester Pays feature enabled, make sure that you pass the correct request parameters when making requests to that bucket. The Requester Pays feature in Amazon S3 allows the requester, instead of the bucket owner, to pay the data transfer and request costs for accessing objects in the bucket. When Requester Pays is enabled for a bucket, the bucket owner isn't charged for requests made by other AWS accounts.

If you make a request to a Requester Pays-enabled bucket without passing the necessary parameters, you will receive an Access Denied (403 Forbidden) error. To access objects in a Requester Pays-enabled bucket, you must do the following: 

1. When making requests using the AWS CLI, you must include the `--request-payer requester` parameter. For example, to copy an object with the key `object.txt` located in the `s3://amzn-s3-demo-bucket/` S3 bucket to a location on your local machine, you must also pass the parameter `--request-payer requester` if this bucket has Requester Pays enabled. 

   ```
   aws s3 cp s3://amzn-s3-demo-bucket/object.txt /local/path \
   --request-payer requester
   ```

1. When making programmatic requests using an AWS SDK, set the `x-amz-request-payer` header to the value `requester`. For an example, see [Downloading objects from Requester Pays buckets](ObjectsinRequesterPaysBuckets.md).

1. Make sure that the IAM user or role making the request has the necessary permissions to access the Requester Pays bucket, such as the `s3:GetObject` and `s3:ListBucket` permissions.

By including the `--request-payer requester` parameter or setting the `x-amz-request-payer` header, you are informing Amazon S3 that you, the requester, will pay the costs associated with accessing the objects in the Requester Pays-enabled bucket. This will prevent the Access Denied (403 Forbidden) error.

## Bucket policies and IAM policies
<a name="bucket-iam-policies"></a>

### Bucket-level operations
<a name="troubleshoot-403-bucket-level-ops"></a>

If there is no bucket policy in place, then the bucket implicitly allows requests from any AWS Identity and Access Management (IAM) identity in the bucket-owner's account. The bucket also implicitly denies requests from any other IAM identities from any other accounts, and anonymous (unsigned) requests. However, if there is no IAM user policy in place, the requester (unless they're the AWS account root user) is implicitly denied from making any requests. For more information about this evaluation logic, see [Determining whether a request is denied or allowed within an account](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_evaluation-logic.html#policy-eval-denyallow) in the *IAM User Guide*.

### Object-level operations
<a name="troubleshoot-403-object-level-ops"></a>

If the object is owned by the bucket-owning account, the bucket policy and IAM user policy will function in the same way for object-level operations as they do for bucket-level operations. For example, if there is no bucket policy in place, then the bucket implicitly allows object requests from any IAM identity in the bucket-owner's account. The bucket also implicitly denies object requests from any other IAM identities from any other accounts, and anonymous (unsigned) requests. However, if there is no IAM user policy in place, the requester (unless they're the AWS account root user) is implicitly denied from making any object requests.

If the object is owned by an external account, then access to the object can be granted only through object access control lists (ACLs). The bucket policy and IAM user policy can still be used to deny object requests. 

Therefore, to ensure that your bucket policy or IAM user policy isn't causing an Access Denied (403 Forbidden) error, make sure that the following requirements are met:
+ For same-account access, there must not be an explicit `Deny` statement against the requester you are trying to grant permissions to, in either the bucket policy or the IAM user policy. If you want to grant permissions by using only the bucket policy and the IAM user policy, there must be at least one explicit `Allow` statement in one of these policies.
+ For cross-account access, there must not be an explicit `Deny` statement against the requester that you're trying to grant permissions to, in either the bucket policy or the IAM user policy. To grant cross-account permissions by using only the bucket policy and IAM user policy, make sure that both the bucket policy and the IAM user policy of the requester include an explicit `Allow` statement.

**Note**  
`Allow` statements in a bucket policy apply only to objects that are [owned by the same bucket-owning account](https://docs.aws.amazon.com/AmazonS3/latest/userguide/about-object-ownership.html). However, `Deny` statements in a bucket policy apply to all objects regardless of object ownership. 
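For example, a bucket policy that grants cross-account read access with an explicit `Allow` statement might look like the following sketch. The account ID, user name, and bucket name are placeholders, and for cross-account access, the requester's IAM user policy must also include a matching `Allow` statement:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCrossAccountGetObject",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:user/MaryMajor" },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::amzn-s3-demo-bucket1/*"
    }
  ]
}
```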

**To review or edit your bucket policy**
**Note**  
To view or edit a bucket policy, you must have the `s3:GetBucketPolicy` permission.

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Buckets**.

1. From the **Buckets** list, choose the name of the bucket that you want to view or edit a bucket policy for.

1. Choose the **Permissions** tab.

1. Under **Bucket policy**, choose **Edit**. The **Edit bucket policy** page appears.

To review or edit your bucket policy by using the AWS Command Line Interface (AWS CLI), use the [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/get-bucket-policy.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/get-bucket-policy.html) command.
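For example, the following AWS CLI command prints the bucket policy document (the bucket name is a placeholder):

```
aws s3api get-bucket-policy --bucket amzn-s3-demo-bucket1 \
    --query Policy --output text
```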

**Note**  
If you get locked out of a bucket because of an incorrect bucket policy, [sign in to the AWS Management Console by using your AWS account root user credentials.](https://docs.aws.amazon.com/signin/latest/userguide/introduction-to-root-user-sign-in-tutorial.html) To regain access to your bucket, make sure to delete the incorrect bucket policy by using your AWS account root user credentials.

### Tips for checking permissions
<a name="troubleshoot-403-tips"></a>

To check whether the requester has proper permissions to perform an Amazon S3 operation, try the following:
+ Identify the requester. If it’s an unsigned request, then it's an anonymous request without an IAM user policy. If it’s a request that uses a presigned URL, then the user policy is the same as the one for the IAM user or role that signed the request.
+ Verify that you're using the correct IAM user or role. You can verify your IAM user or role by checking the upper-right corner of the AWS Management Console or by using the [https://docs.aws.amazon.com/cli/latest/reference/sts/get-caller-identity.html](https://docs.aws.amazon.com/cli/latest/reference/sts/get-caller-identity.html) command.
+ Check the IAM policies that are related to the IAM user or role. You can use one of the following methods:
  + [Test IAM policies with the IAM policy simulator](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_testing-policies.html).
  + Review the different [IAM policy types](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html).
+ If needed, [edit your IAM user policy](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-edit.html).
+ Review the following examples of policies that explicitly deny or allow access:
  + Explicit allow IAM user policy: [IAM: Allows and denies access to multiple services programmatically and in the console](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_examples_iam_multiple-services-console.html)
  + Explicit allow bucket policy: [Granting permissions to multiple accounts to upload objects or set object ACLs for public access](https://docs.aws.amazon.com/AmazonS3/latest/userguide/example-bucket-policies.html#example-bucket-policies-acl-1)
  + Explicit deny IAM user policy: [AWS: Denies access to AWS based on the requested AWS Region](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_examples_aws_deny-requested-region.html)
  + Explicit deny bucket policy: [Require SSE-KMS for all objects written to a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/example-bucket-policies.html#example-bucket-policies-encryption-1)
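As a quick check of the first two items in this list, the following AWS CLI command returns the account ID, user ID, and ARN of the identity that's making your requests:

```
aws sts get-caller-identity
```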

## Amazon S3 ACL settings
<a name="troubleshoot-403-acl-settings"></a>

When checking your ACL settings, first [review your Object Ownership setting](https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-ownership-retrieving.html) to check whether ACLs are enabled on the bucket. Be aware that ACL permissions can be used only to grant permissions and can't be used to reject requests. ACLs also can't be used to grant access to requesters that are rejected by explicit denials in bucket policies or IAM user policies.

### The Object Ownership setting is set to Bucket owner enforced
<a name="troubleshoot-403-object-ownership-1"></a>

If the **Bucket owner enforced** setting is enabled, then ACL settings are unlikely to cause an Access Denied (403 Forbidden) error because this setting disables all ACLs that apply to the bucket and the objects in it. **Bucket owner enforced** is the default (and recommended) setting for Amazon S3 buckets.

### The Object Ownership setting is set to Bucket owner preferred or Object writer
<a name="troubleshoot-403-object-ownership-2"></a>

ACL permissions are still valid with the **Bucket owner preferred** setting or the **Object writer** setting. There are two kinds of ACLs: bucket ACLs and object ACLs. For the differences between these two types of ACLs, see [Mapping of ACL permissions and access policy permissions](https://docs.aws.amazon.com/AmazonS3/latest/userguide/acl-overview.html#acl-access-policy-permission-mapping).

Depending on the action of the rejected request, [check the ACL permissions for your bucket or the object](https://docs.aws.amazon.com/AmazonS3/latest/userguide/managing-acls.html):
+ If Amazon S3 rejected a `LIST`, `PUT` object, `GetBucketAcl`, or `PutBucketAcl` request, then [review the ACL permissions for your bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/managing-acls.html).
**Note**  
You can't grant `GET` object permissions with bucket ACL settings.
+ If Amazon S3 rejected a `GET` request on an S3 object, or a [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectAcl.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectAcl.html) request, then [review the ACL permissions for the object](https://docs.aws.amazon.com/AmazonS3/latest/userguide/managing-acls.html).
**Important**  
If the account that owns the object is different from the account that owns the bucket, then access to the object isn't controlled by the bucket policy.

### Troubleshooting an Access Denied (403 Forbidden) error from a `GET` object request during cross-account object ownership
<a name="troubleshoot-403-object-ownership-tips"></a>

Review the bucket's [Object Ownership settings](https://docs.aws.amazon.com/AmazonS3/latest/userguide/about-object-ownership.html#object-ownership-overview) to determine the object owner. If you have access to the [object ACLs](https://docs.aws.amazon.com/AmazonS3/latest/userguide/managing-acls.html), then you can also check the object owner's account. (To view the object owner's account, review the object ACL setting in the Amazon S3 console.) Alternatively, you can also make a `GetObjectAcl` request to find the object owner’s [canonical ID](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectAcl.html) to verify the object owner account. By default, ACLs grant explicit allow permissions for `GET` requests to the object owner’s account.
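For example, the following AWS CLI command returns the object owner's display name and canonical ID (the bucket and key names are placeholders):

```
aws s3api get-object-acl --bucket amzn-s3-demo-bucket1 --key object-name \
    --query Owner
```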

After you've confirmed that the object owner is different from the bucket owner, then depending on your use case and access level, choose one of the following methods to help address the Access Denied (403 Forbidden) error:
+ **Disable ACLs (recommended)** – This method will apply to all objects and can be performed by the bucket owner. This method automatically gives the bucket owner ownership and full control over every object in the bucket. Before you implement this method, check the [prerequisites for disabling ACLs](https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-ownership-migrating-acls-prerequisites.html). For information about how to set your bucket to **Bucket owner enforced** (recommended) mode, see [Setting Object Ownership on an existing bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-ownership-existing-bucket.html).
**Important**  
To prevent an Access Denied (403 Forbidden) error, be sure to migrate the ACL permissions to a bucket policy before you disable ACLs. For more information, see [Bucket policy examples for migrating from ACL permissions](https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-ownership-migrating-acls-prerequisites.html#migrate-acl-permissions-bucket-policies).
+ **Change the object owner to the bucket owner** – This method can be applied to individual objects, but only the object owner (or a user with the appropriate permissions) can change an object's ownership. Additional `PUT` costs might apply. (For more information, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/).) This method grants the bucket owner full ownership of the object, allowing the bucket owner to control access to the object through a bucket policy. 

  To change the object's ownership, do one of the following:
  + You (the bucket owner) can [copy the object](https://docs.aws.amazon.com/AmazonS3/latest/userguide/copy-object.html#CopyingObjectsExamples) back to the bucket. 
  + You can change the Object Ownership setting of the bucket to **Bucket owner preferred**. If versioning is disabled, the objects in the bucket are overwritten. If versioning is enabled, duplicate versions of the same object will appear in the bucket; the bucket owner can [set a lifecycle rule to expire](https://docs.aws.amazon.com/AmazonS3/latest/userguide/lifecycle-expire-general-considerations.html) the noncurrent versions. For instructions on how to change your Object Ownership setting, see [Setting Object Ownership on an existing bucket](object-ownership-existing-bucket.md).
**Note**  
When you update your Object Ownership setting to **Bucket owner preferred**, the setting is applied only to new objects that are uploaded to the bucket.
  + You can have the object owner upload the object again with the `bucket-owner-full-control` canned object ACL. 
**Note**  
For cross-account uploads, you can also require the `bucket-owner-full-control` canned object ACL in your bucket policy. For an example bucket policy, see [Grant cross-account permissions to upload objects while ensuring that the bucket owner has full control](https://docs.aws.amazon.com/AmazonS3/latest/userguide/example-bucket-policies.html#example-bucket-policies-acl-2).
+ **Keep the object writer as the object owner** – This method doesn't change the object owner, but it does allow you to grant access to objects individually. To grant access to an object, you must have the `PutObjectAcl` permission for the object. Then, to fix the Access Denied (403 Forbidden) error, add the requester as a [grantee](https://docs.aws.amazon.com/AmazonS3/latest/userguide/acl-overview.html#specifying-grantee) to access the object in the object's ACLs. For more information, see [Configuring ACLs](managing-acls.md).
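Two of the ownership-change options above can be sketched with the AWS CLI as follows. The bucket and key names are placeholders, and an in-place copy must change at least one object property, which is why `--metadata-directive REPLACE` is included:

```
# As the bucket owner, copy the object over itself to take ownership.
aws s3api copy-object --bucket amzn-s3-demo-bucket1 --key object-name \
    --copy-source amzn-s3-demo-bucket1/object-name \
    --metadata-directive REPLACE

# Or, as the object owner, upload the object again with the
# bucket-owner-full-control canned ACL.
aws s3 cp ./object-name s3://amzn-s3-demo-bucket1/object-name \
    --acl bucket-owner-full-control
```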

## S3 Block Public Access settings
<a name="troubleshoot-403-bpa"></a>

If the failed request involves public access or public policies, then check the S3 Block Public Access settings on your account, bucket, or access point. For more information about troubleshooting access denied errors related to S3 Block Public Access settings, see [Access denied due to Block Public Access settings](#access-denied-bpa-examples).

## Amazon S3 encryption settings
<a name="troubleshoot-403-encryption"></a>

Amazon S3 supports server-side encryption on your bucket. Server-side encryption is the encryption of data at its destination by the application or service that receives it. Amazon S3 encrypts your data at the object level as it writes it to disks in AWS data centers and decrypts it for you when you access it. 

By default, Amazon S3 now applies server-side encryption with Amazon S3 managed keys (SSE-S3) as the base level of encryption for every bucket in Amazon S3. Amazon S3 also allows you to specify the server-side encryption method when uploading objects.

**To review your bucket's server-side encryption status and encryption settings**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Buckets**.

1. From the **Buckets** list, choose the bucket that you want to check the encryption settings for.

1. Choose the **Properties** tab.

1. Scroll down to the **Default encryption** section and view the **Encryption type** settings.

To check your encryption settings by using the AWS CLI, use the [https://docs.aws.amazon.com/cli/latest/reference/s3api/get-bucket-encryption.html](https://docs.aws.amazon.com/cli/latest/reference/s3api/get-bucket-encryption.html) command.
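For example (the bucket name is a placeholder):

```
aws s3api get-bucket-encryption --bucket amzn-s3-demo-bucket
```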

**To check the encryption status of an object**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Buckets**.

1. From the **Buckets** list, choose the name of the bucket that contains the object.

1. From the **Objects** list, choose the name of the object that you want to add or change encryption for. 

   The object's details page appears.

1. Scroll down to the **Server-side encryption settings** section to view the object's server-side encryption settings.

To check your object encryption status by using the AWS CLI, use the [https://docs.aws.amazon.com/cli/latest/reference/s3api/head-object.html#examples](https://docs.aws.amazon.com/cli/latest/reference/s3api/head-object.html#examples) command.
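For example, the following AWS CLI command returns the object's server-side encryption type, such as `AES256` (SSE-S3) or `aws:kms` (SSE-KMS). The bucket and key names are placeholders:

```
aws s3api head-object --bucket amzn-s3-demo-bucket --key object.txt \
    --query ServerSideEncryption
```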

### Encryption and permissions requirements
<a name="troubleshoot-403-encryption-requirements"></a>

Amazon S3 supports three types of server-side encryption:
+ Server-side encryption with Amazon S3 managed keys (SSE-S3)
+ Server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS)
+ Server-side encryption with customer-provided keys (SSE-C)

Based on your encryption settings, make sure that the following permissions requirements are met:
+ **SSE-S3** – No extra permissions are required.
+ **SSE-KMS (with a customer managed key)** – To upload objects, the `kms:GenerateDataKey` permission on the AWS KMS key is required. To download objects and perform multipart uploads of objects, the `kms:Decrypt` permission on the KMS key is required.
+ **SSE-KMS (with an AWS managed key)** – The requester must be from the same account that owns the `aws/s3` KMS key. The requester must also have the correct Amazon S3 permissions to access the object.
+ **SSE-C (with a customer provided key)** – No additional permissions are required. You can configure the bucket policy to [require and restrict server-side encryption with customer-provided encryption keys](https://docs.aws.amazon.com/AmazonS3/latest/userguide/ServerSideEncryptionCustomerKeys.html#ssec-require-condition-key) for objects in your bucket.

If the object is encrypted with a customer managed key, make sure that the KMS key policy allows you to perform the `kms:GenerateDataKey` or `kms:Decrypt` actions. For instructions on checking your KMS key policy, see [Viewing a key policy](https://docs.aws.amazon.com/kms/latest/developerguide/key-policy-viewing.html) in the *AWS Key Management Service Developer Guide*.
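For example, a key policy statement that grants these permissions might look like the following sketch. The principal ARN is a placeholder, and in a key policy, the `Resource` element is `"*"`, which refers to the KMS key that the policy is attached to:

```
{
  "Sid": "AllowS3EncryptDecrypt",
  "Effect": "Allow",
  "Principal": { "AWS": "arn:aws:iam::123456789012:user/MaryMajor" },
  "Action": [
    "kms:GenerateDataKey",
    "kms:Decrypt"
  ],
  "Resource": "*"
}
```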

## S3 Object Lock settings
<a name="troubleshoot-403-object-lock"></a>

If your bucket has [S3 Object Lock](https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock.html) enabled and the object is protected by a [retention period](https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock-overview.html#object-lock-retention-periods) or [legal hold](https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock-overview.html#object-lock-legal-holds) and you try to delete an object, Amazon S3 returns one of the following responses, depending on how you tried to delete the object:
+ **Permanent `DELETE` request** – If you issued a permanent `DELETE` request (a request that specifies a version ID), Amazon S3 returns an Access Denied (`403 Forbidden`) error when you try to delete the object.
+ **Simple `DELETE` request** – If you issued a simple `DELETE` request (a request that doesn't specify a version ID), Amazon S3 returns a `200 OK` response and inserts a [delete marker](DeleteMarker.md) in the bucket. That delete marker becomes the current version of the object and has a new version ID.

**To check whether the bucket has Object Lock enabled**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Buckets**.

1. From the **Buckets** list, choose the name of the bucket that you want to review.

1. Choose the **Properties** tab.

1. Scroll down to the **Object Lock** section. Verify whether the **Object Lock** setting is **Enabled** or **Disabled**.

To determine whether the object is protected by a retention period or legal hold, [view the lock information](https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock-managing.html#object-lock-managing-view) for your object. 
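For example, you can view an object's retention and legal-hold status by using the AWS CLI. The bucket and key names are placeholders, and these calls require the `s3:GetObjectRetention` and `s3:GetObjectLegalHold` permissions:

```
aws s3api get-object-retention --bucket amzn-s3-demo-bucket --key object.txt
aws s3api get-object-legal-hold --bucket amzn-s3-demo-bucket --key object.txt
```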

If the object is protected by a retention period or legal hold, check the following:
+ If the object version is protected by the compliance retention mode, there is no way to permanently delete it. A permanent `DELETE` request from any requester, including the AWS account root user, will result in an Access Denied (403 Forbidden) error. Also, be aware that when you submit a `DELETE` request for an object that's protected by the compliance retention mode, Amazon S3 creates a [delete marker](https://docs.aws.amazon.com/AmazonS3/latest/userguide/DeleteMarker.html) for the object.
+ If the object version is protected with governance retention mode and you have the `s3:BypassGovernanceRetention` permission, you can bypass the protection and permanently delete the version. For more information, see [Bypassing governance mode](https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock-managing.html#object-lock-managing-bypass).
+ If the object version is protected by a legal hold, then a permanent `DELETE` request can result in an Access Denied (403 Forbidden) error. To permanently delete the object version, you must remove the legal hold on the object version. To remove a legal hold, you must have the `s3:PutObjectLegalHold` permission. For more information about removing a legal hold, see [Configuring S3 Object Lock](object-lock-configure.md).
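A governance-mode bypass can be sketched as follows. The version ID is a placeholder, and the request succeeds only if you have the `s3:BypassGovernanceRetention` permission:

```
aws s3api delete-object --bucket amzn-s3-demo-bucket --key object.txt \
    --version-id EXAMPLEVERSIONID \
    --bypass-governance-retention
```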

## VPC endpoint policies
<a name="troubleshoot-403-vpc"></a>

If you're accessing Amazon S3 by using a virtual private cloud (VPC) endpoint, make sure that the VPC endpoint policy isn't blocking you from accessing your Amazon S3 resources. By default, the VPC endpoint policy allows all requests to Amazon S3. You can also configure the VPC endpoint policy to restrict certain requests. For information about how to check your VPC endpoint policy, see the following resources: 
+ [Access denied due to a VPC endpoint policy – implicit denial](#access-denied-vpc-endpoint-examples-implicit)
+ [Access denied due to a VPC endpoint policy – explicit denial](#access-denied-vpc-endpoint-examples-explicit)
+ [Control access to VPC endpoints by using endpoint policies](https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-access.html) in the *AWS PrivateLink Guide*
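For example, you can print the policy document that's attached to a VPC endpoint by using the AWS CLI (the endpoint ID is a placeholder):

```
aws ec2 describe-vpc-endpoints \
    --vpc-endpoint-ids vpce-1a2b3c4d \
    --query "VpcEndpoints[].PolicyDocument" --output text
```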

## AWS Organizations policies
<a name="troubleshoot-403-orgs"></a>

If your AWS account belongs to an organization, AWS Organizations policies can block you from accessing Amazon S3 resources. By default, AWS Organizations policies don't block any requests to Amazon S3. However, make sure that your AWS Organizations policies haven't been configured to block access to S3 buckets. For instructions on how to check your AWS Organizations policies, see the following resources: 
+ [Access denied due to a Service Control Policy – implicit denial](#access-denied-scp-examples-implicit)
+ [Access denied due to a Service Control Policy – explicit denial](#access-denied-scp-examples-explicit)
+ [Access denied due to a resource control policy – explicit denial](#access-denied-rcp-examples-explicit)
+ [Listing all policies](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_info-operations.html#list-all-pols-in-org) in the *AWS Organizations User Guide*

Additionally, if you incorrectly configured your bucket policy for a member account to deny all users access to your S3 bucket, you can unlock the bucket by launching a privileged session for the member account in IAM. After you launch a privileged session, you can delete the misconfigured bucket policy to regain access to the bucket. For more information, see [Perform a privileged task on an AWS Organizations member account](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_root-user-privileged-task.html) in the *AWS Identity and Access Management User Guide*. 

## CloudFront distribution access
<a name="troubleshoot-403-cloudfront"></a>

If you receive an Access Denied (403 Forbidden) error when trying to access your S3 static website through CloudFront, check these common issues:
+ **Do you have the correct origin domain name format?**
  + Make sure you're using the S3 website endpoint format (bucket-name.s3-website-region.amazonaws.com) rather than the REST API endpoint
  + Verify that static website hosting is enabled on your bucket
+ **Does your bucket policy allow CloudFront access?**
  + Ensure your bucket policy includes permissions for your CloudFront distribution's Origin Access Identity (OAI) or Origin Access Control (OAC)
  + Verify that the policy includes the required `s3:GetObject` permission
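For example, a bucket policy statement that grants read access to a CloudFront Origin Access Control (OAC) might look like the following sketch. The bucket name, account ID, and distribution ID are placeholders; a grant for an Origin Access Identity (OAI) uses the OAI's principal instead of the `cloudfront.amazonaws.com` service principal:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCloudFrontServicePrincipal",
      "Effect": "Allow",
      "Principal": { "Service": "cloudfront.amazonaws.com" },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*",
      "Condition": {
        "StringEquals": {
          "AWS:SourceArn": "arn:aws:cloudfront::123456789012:distribution/EDFDVBD6EXAMPLE"
        }
      }
    }
  ]
}
```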

For additional troubleshooting steps and configurations, including error page setups and protocol settings, see [Why do I get a "403 access denied" error when I use an Amazon S3 website endpoint as the origin of my CloudFront distribution?](https://repost.aws/knowledge-center/s3-website-cloudfront-error-403) in the AWS re:Post Knowledge Center.

**Note**  
This error is different from 403 errors you might receive when accessing S3 directly. For CloudFront-specific issues, make sure to check both your CloudFront distribution settings and S3 configurations.

## Access point settings
<a name="troubleshoot-403-access-points"></a>

If you receive an Access Denied (403 Forbidden) error while making requests through Amazon S3 access points, you might need to check the following: 
+ The configurations for your access points
+ The IAM user policy that's used for your access points
+ The bucket policy that's used to manage or configure your cross-account access points

**Access point configurations and policies**
+ When you create an access point, you can choose to designate **Internet** or **VPC** as the network origin. If the network origin is set to VPC only, Amazon S3 rejects any requests made to the access point that don't originate from the specified VPC. To check the network origin of your access point, see [Creating access points restricted to a virtual private cloud](access-points-vpc.md).
+ With access points, you can also configure custom Block Public Access settings, which work similarly to the Block Public Access settings at the bucket or account level. To check your custom Block Public Access settings, see [Managing public access to access points for general purpose buckets](access-points-bpa-settings.md).
+ To make successful requests to Amazon S3 by using access points, make sure that the requester has the necessary IAM permissions. For more information, see [Configuring IAM policies for using access points](access-points-policies.md).
+ If the request involves cross-account access points, make sure that the bucket owner has updated the bucket policy to authorize requests from the access point. For more information, see [Granting permissions for cross-account access points](access-points-policies.md#access-points-cross-account).
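
For example, a bucket owner who wants to delegate access control to access points owned by account `111122223333` might use a bucket policy similar to the following. The bucket name and account ID are placeholders; substitute your own values.

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "*",
            "Resource": [
                "arn:aws:s3:::amzn-s3-demo-bucket",
                "arn:aws:s3:::amzn-s3-demo-bucket/*"
            ],
            "Condition": {
                "StringEquals": {
                    "s3:DataAccessPointAccount": "111122223333"
                }
            }
        }
    ]
}
```

With this delegation pattern, the effective permissions for requests made through an access point are determined by the access point policy, so make sure that the access point policy grants the requester the permissions they need.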

If the Access Denied (403 Forbidden) error persists after you check all the items in this topic, [retrieve your Amazon S3 request ID](https://docs.aws.amazon.com/AmazonS3/latest/userguide/get-request-ids.html) and contact AWS Support for additional guidance.

## Additional resources
<a name="troubleshoot-403-additional-resources"></a>

For more guidance on Access Denied (403 Forbidden) errors, see the following resources:
+ [How do I troubleshoot 403 Access Denied errors from Amazon S3?](https://repost.aws/knowledge-center/s3-troubleshoot-403) in the AWS re:Post Knowledge Center.
+ [Why do I get a 403 Forbidden error when I try to access an Amazon S3 bucket or object?](https://repost.aws/knowledge-center/s3-403-forbidden-error) in the AWS re:Post Knowledge Center.
+ [Why do I get an Access Denied error when I try to access an Amazon S3 resource in the same AWS account?](https://repost.aws/knowledge-center/s3-troubleshoot-403-resource-same-account) in the AWS re:Post Knowledge Center.
+ [Why do I get an Access Denied error when I try to access an Amazon S3 bucket with public read access?](https://repost.aws/knowledge-center/s3-troubleshoot-403-public-read) in the AWS re:Post Knowledge Center.
+ [Why do I get a "signature mismatch" error when I try to use a presigned URL to upload an object to Amazon S3?](https://repost.aws/knowledge-center/s3-presigned-url-signature-mismatch) in the AWS re:Post Knowledge Center.
+ [Why do I get an Access Denied error for ListObjectsV2 when I run the sync command on my Amazon S3 bucket?](https://repost.aws/knowledge-center/s3-access-denied-listobjects-sync) in the AWS re:Post Knowledge Center.
+ [Why do I get a "403 access denied" error when I use an Amazon S3 website endpoint as the origin of my CloudFront distribution?](https://repost.aws/knowledge-center/s3-website-cloudfront-error-403) in the AWS re:Post Knowledge Center.

# AWS managed policies for Amazon S3
<a name="security-iam-awsmanpol"></a>

An AWS managed policy is a standalone policy that is created and administered by AWS. AWS managed policies are designed to provide permissions for many common use cases so that you can start assigning permissions to users, groups, and roles.

Keep in mind that AWS managed policies might not grant least-privilege permissions for your specific use cases because they're available for all AWS customers to use. We recommend that you reduce permissions further by defining [customer managed policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-vs-inline.html#customer-managed-policies) that are specific to your use cases.

You cannot change the permissions defined in AWS managed policies. If AWS updates the permissions defined in an AWS managed policy, the update affects all principal identities (users, groups, and roles) that the policy is attached to. AWS is most likely to update an AWS managed policy when a new AWS service is launched or new API operations become available for existing services.

For more information, see [AWS managed policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-vs-inline.html#aws-managed-policies) in the *IAM User Guide*.

## AWS managed policy: AmazonS3FullAccess
<a name="security-iam-awsmanpol-amazons3fullaccess"></a>

You can attach the `AmazonS3FullAccess` policy to your IAM identities. This policy grants permissions that allow full access to Amazon S3.

To view the permissions for this policy, see [https://console.aws.amazon.com/iam/home?#/policies/arn:aws:iam::aws:policy/AmazonS3FullAccess$jsonEditor](https://console.aws.amazon.com/iam/home?#/policies/arn:aws:iam::aws:policy/AmazonS3FullAccess$jsonEditor) in the AWS Management Console.
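
At the time of this writing, the policy document is a short grant of all Amazon S3 and S3 Object Lambda actions, similar to the following. Always check the console link above for the current, authoritative version.

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:*",
                "s3-object-lambda:*"
            ],
            "Resource": "*"
        }
    ]
}
```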

## AWS managed policy: AmazonS3ReadOnlyAccess
<a name="security-iam-awsmanpol-amazons3readonlyaccess"></a>

You can attach the `AmazonS3ReadOnlyAccess` policy to your IAM identities. This policy grants permissions that allow read-only access to Amazon S3.

To view the permissions for this policy, see [https://console.aws.amazon.com/iam/home?#/policies/arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess$jsonEditor](https://console.aws.amazon.com/iam/home?#/policies/arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess$jsonEditor) in the AWS Management Console.

## AWS managed policy: AmazonS3ObjectLambdaExecutionRolePolicy
<a name="security-iam-awsmanpol-amazons3objectlambdaexecutionrolepolicy"></a>

This policy grants AWS Lambda functions the permissions required to send data to S3 Object Lambda when requests are made to an S3 Object Lambda access point. It also grants Lambda permissions to write to Amazon CloudWatch Logs.

To view the permissions for this policy, see [https://console.aws.amazon.com/iam/home?#/policies/arn:aws:iam::aws:policy/service-role/AmazonS3ObjectLambdaExecutionRolePolicy$jsonEditor](https://console.aws.amazon.com/iam/home?#/policies/arn:aws:iam::aws:policy/service-role/AmazonS3ObjectLambdaExecutionRolePolicy$jsonEditor) in the AWS Management Console.

## AWS managed policy: S3UnlockBucketPolicy
<a name="security-iam-awsmanpol-S3UnlockBucketPolicy"></a>

If you incorrectly configured your bucket policy for a member account to deny all users access to your S3 bucket, you can use this AWS managed policy (`S3UnlockBucketPolicy`) to unlock the bucket. For more information on how to remove a misconfigured bucket policy that denies all principals from accessing an Amazon S3 bucket, see [Perform a privileged task on an AWS Organizations member account](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_root-user-privileged-task.html) in the *AWS Identity and Access Management User Guide*. 

## Amazon S3 updates to AWS managed policies
<a name="security-iam-awsmanpol-updates"></a>

View details about updates to AWS managed policies for Amazon S3 since this service began tracking these changes.


| Change | Description | Date | 
| --- | --- | --- | 
|  Amazon S3 added `S3UnlockBucketPolicy`  |  Amazon S3 added a new AWS managed policy called `S3UnlockBucketPolicy` to unlock a bucket and remove a misconfigured bucket policy that denies all principals from accessing an Amazon S3 bucket.  | November 1, 2024 | 
|  Amazon S3 added Describe permissions to `AmazonS3ReadOnlyAccess`  |  Amazon S3 added `s3:Describe*` permissions to `AmazonS3ReadOnlyAccess`.  | August 11, 2023 | 
|  Amazon S3 added S3 Object Lambda permissions to `AmazonS3FullAccess` and `AmazonS3ReadOnlyAccess`  |  Amazon S3 updated the `AmazonS3FullAccess` and `AmazonS3ReadOnlyAccess` policies to include permissions for S3 Object Lambda.  | September 27, 2021 | 
|  Amazon S3 added `AmazonS3ObjectLambdaExecutionRolePolicy`  |  Amazon S3 added a new AWS managed policy called `AmazonS3ObjectLambdaExecutionRolePolicy` that provides Lambda functions permissions to interact with S3 Object Lambda and write to CloudWatch Logs.  | August 18, 2021 | 
|  Amazon S3 started tracking changes  |  Amazon S3 started tracking changes for its AWS managed policies.  | August 18, 2021 | 