

# Transforming objects with S3 Object Lambda
<a name="transforming-objects"></a>

**Note**  
As of November 7, 2025, S3 Object Lambda is available only to existing customers who are currently using the service and to select AWS Partner Network (APN) partners. For capabilities similar to S3 Object Lambda, see [Amazon S3 Object Lambda availability change](https://docs.aws.amazon.com/AmazonS3/latest/userguide/amazons3-ol-change.html).

With Amazon S3 Object Lambda, you can add your own code to Amazon S3 `GET`, `LIST`, and `HEAD` requests to modify and process data as it is returned to an application. You can use custom code to modify the data returned by S3 `GET` requests to filter rows, dynamically resize and watermark images, redact confidential data, and more. You can also use S3 Object Lambda to modify the output of S3 `LIST` requests to create a custom view of the objects in a bucket, and the output of S3 `HEAD` requests to modify object metadata, such as the object name and size. You can use S3 Object Lambda as an origin for your Amazon CloudFront distribution to tailor data for end users, such as by automatically resizing images, transcoding older formats (for example, JPEG to WebP), or stripping metadata. For more information, see the AWS Blog post [Use Amazon S3 Object Lambda with Amazon CloudFront](https://aws.amazon.com/blogs/aws/new-use-amazon-s3-object-lambda-with-amazon-cloudfront-to-tailor-content-for-end-users/). Powered by AWS Lambda functions, your code runs on infrastructure that is fully managed by AWS. Using S3 Object Lambda reduces the need to create and store derivative copies of your data or to run proxies, all with no need to change your applications.

**How S3 Object Lambda works**  
S3 Object Lambda uses AWS Lambda functions to automatically process the output of standard S3 `GET`, `LIST`, or `HEAD` requests. AWS Lambda is a serverless compute service that runs customer-defined code without requiring management of underlying compute resources. You can author and run your own custom Lambda functions, tailoring the data transformation to your specific use cases. 

After you configure a Lambda function, you attach it to an S3 Object Lambda service endpoint, known as an *Object Lambda Access Point*. The Object Lambda Access Point uses a standard S3 access point, known as a *supporting access point*, to access data.

When you send a request to your Object Lambda Access Point, Amazon S3 automatically calls your Lambda function. Any data retrieved through the Object Lambda Access Point by an S3 `GET`, `LIST`, or `HEAD` request is returned to the application as a transformed result. All other requests are processed as normal, as illustrated in the following diagram. 
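From the application's perspective, retrieving transformed data is an ordinary request sent to the Object Lambda Access Point's ARN instead of a bucket name. The following is a minimal Python sketch, assuming hypothetical Region, account, and access point values; the boto3 call is shown as a comment because it requires real AWS credentials and resources.

```python
def object_lambda_arn(region: str, account_id: str, name: str) -> str:
    """Build the ARN for an Object Lambda Access Point."""
    return f"arn:aws:s3-object-lambda:{region}:{account_id}:accesspoint/{name}"

# Hypothetical values for illustration:
arn = object_lambda_arn("us-east-1", "111122223333", "my-object-lambda-ap")

# With boto3, the ARN is passed where a bucket name normally goes:
# import boto3
# s3 = boto3.client("s3")
# transformed = s3.get_object(Bucket=arn, Key="my-key")["Body"].read()
```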



![\[Diagram, showing how S3 Object Lambda works.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/ObjectLamdaDiagram.png)


The topics in this section describe how to work with S3 Object Lambda.

**Topics**
+ [Creating Object Lambda Access Points](olap-create.md)
+ [Using Amazon S3 Object Lambda Access Points](olap-use.md)
+ [Security considerations for S3 Object Lambda Access Points](olap-security.md)
+ [Writing Lambda functions for S3 Object Lambda Access Points](olap-writing-lambda.md)
+ [Using AWS built Lambda functions](olap-examples.md)
+ [Best practices and guidelines for S3 Object Lambda](olap-best-practices.md)
+ [S3 Object Lambda tutorials](olap-tutorials.md)
+ [Debugging and troubleshooting S3 Object Lambda](olap-debugging-lambda.md)

# Creating Object Lambda Access Points
<a name="olap-create"></a>

**Note**  
As of November 7, 2025, S3 Object Lambda is available only to existing customers who are currently using the service and to select AWS Partner Network (APN) partners. For capabilities similar to S3 Object Lambda, see [Amazon S3 Object Lambda availability change](https://docs.aws.amazon.com/AmazonS3/latest/userguide/amazons3-ol-change.html).

An Object Lambda Access Point is associated with exactly one standard access point, which you specify during creation. To create an Object Lambda Access Point, you need the following resources:
+ **A standard S3 access point.** When you're working with Object Lambda Access Points, this standard access point is known as a *supporting access point* and is attached to an S3 bucket or Amazon FSx for OpenZFS volume. For information about creating standard access points, see [Creating an access point](creating-access-points.md).
+ **An AWS Lambda function.** You can either create your own Lambda function, or you can use a prebuilt function. For more information about creating Lambda functions, see [Writing Lambda functions for S3 Object Lambda Access Points](olap-writing-lambda.md). For more information about prebuilt functions, see [Using AWS built Lambda functions](olap-examples.md).
+ **(Optional) An AWS Identity and Access Management (IAM) policy.** Amazon S3 access points support IAM resource policies that you can use to control the use of the access point by resource, user, or other conditions. For more information about creating these policies, see [Configuring IAM policies for Object Lambda Access Points](olap-policies.md).

The following sections describe how to create an Object Lambda Access Point by using:
+ The AWS Management Console
+ The AWS Command Line Interface (AWS CLI)
+ An AWS CloudFormation template
+ The AWS Cloud Development Kit (AWS CDK)

For information about how to create an Object Lambda Access Point by using the REST API, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_CreateAccessPointForObjectLambda.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_CreateAccessPointForObjectLambda.html) in the *Amazon Simple Storage Service API Reference*.

## Create an Object Lambda Access Point
<a name="create-olap"></a>

Use one of the following procedures to create your Object Lambda Access Point. 

### Using the S3 console
<a name="olap-create-console"></a>

**To create an Object Lambda Access Point by using the console**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the navigation bar, choose the name of the currently displayed AWS Region. Next, choose the Region that you want to switch to. 

1. In the left navigation pane, choose **Object Lambda Access Points**.

1. On the **Object Lambda Access Points** page, choose **Create Object Lambda Access Point**.

1. For **Object Lambda Access Point name**, enter the name that you want to use for the access point. 

   As with standard access points, there are rules for naming Object Lambda Access Points. For more information, see [Naming rules for access points](access-points-restrictions-limitations-naming-rules.md#access-points-names).

1. For **Supporting Access Point**, enter or browse to the standard access point that you want to use. The access point must be in the same AWS Region as the objects that you want to transform. For information about creating standard access points, see [Creating an access point](creating-access-points.md).

1. Under **Transformation configuration**, you can add a function that transforms your data for your Object Lambda Access Point. Do one of the following:
   + If you already have an AWS Lambda function in your account, you can choose it under **Invoke Lambda function**. Here, you can enter the Amazon Resource Name (ARN) of a Lambda function in your AWS account or choose a Lambda function from the drop-down menu.
   + If you want to use an AWS built function, choose the function name under **AWS built function**, and then select **Create Lambda function**. Doing so takes you to the Lambda console, where you can deploy the built function into your AWS account. For more information about built functions, see [Using AWS built Lambda functions](olap-examples.md).

   Under **S3 APIs**, choose one or more API operations to invoke. For each API operation that you select, you must specify a Lambda function to invoke. 

1. (Optional) Under **Payload**, add JSON text that you want to provide to your Lambda function as input. You can configure payloads with different parameters for different Object Lambda Access Points that invoke the same Lambda function, thereby extending the flexibility of your Lambda function.
**Important**  
When you're using Object Lambda Access Points, make sure that the payload does not contain any confidential information.

1. (Optional) For **Range and part number**, enable this option if you want to process `GET` and `HEAD` requests with range and part number headers. Enabling this option confirms that your Lambda function can recognize and process these requests. For more information about range headers and part numbers, see [Working with Range and partNumber headers](range-get-olap.md).

1. (Optional) For **Request metrics**, choose **Enable** or **Disable** to add Amazon S3 monitoring to your Object Lambda Access Point. Request metrics are billed at the standard Amazon CloudWatch rate.

1. (Optional) Under **Object Lambda Access Point policy**, set a resource policy. Resource policies grant permissions for the specified Object Lambda Access Point and can control the use of the access point by resource, user, or other conditions. For more information about Object Lambda Access Point resource policies, see [Configuring IAM policies for Object Lambda Access Points](olap-policies.md).

1. Under **Block Public Access settings for this Object Lambda Access Point**, select the block public access settings that you want to apply. All block public access settings are enabled by default for new Object Lambda Access Points, and we recommend that you leave the default settings enabled. Amazon S3 currently doesn't support changing an Object Lambda Access Point's block public access settings after the Object Lambda Access Point has been created.

   For more information about using Amazon S3 Block Public Access, see [Managing public access to access points for general purpose buckets](access-points-bpa-settings.md).

1. Choose **Create Object Lambda Access Point**.

### Using the AWS CLI
<a name="olap-create-cli"></a>

**To create an Object Lambda Access Point by using an AWS CloudFormation template with the AWS CLI**
**Note**  
To use the following commands, replace the `user input placeholders` with your own information.

1. Download the AWS Lambda function deployment package `s3objectlambda_deployment_package.zip` at [S3 Object Lambda default configuration](https://github.com/aws-samples/amazon-s3-object-lambda-default-configuration).

1. Run the following `put-object` command to upload the package to an Amazon S3 bucket.

   ```
   aws s3api put-object --bucket Amazon S3 bucket name --key s3objectlambda_deployment_package.zip --body release/s3objectlambda_deployment_package.zip
   ```

1. Download the AWS CloudFormation template `s3objectlambda_defaultconfig.yaml` at [S3 Object Lambda default configuration](https://github.com/aws-samples/amazon-s3-object-lambda-default-configuration).

1. Run the following `deploy` command to deploy the template to your AWS account.

   ```
   aws cloudformation deploy --template-file s3objectlambda_defaultconfig.yaml \
    --stack-name CloudFormation stack name \
    --parameter-overrides ObjectLambdaAccessPointName=Object Lambda Access Point name \
     SupportingAccessPointName=Amazon S3 access point S3BucketName=Amazon S3 bucket \
     LambdaFunctionS3BucketName=Amazon S3 bucket containing your Lambda package \
     LambdaFunctionS3Key=Lambda object key LambdaFunctionS3ObjectVersion=Lambda object version \
     LambdaFunctionRuntime=Lambda function runtime --capabilities CAPABILITY_IAM
   ```

You can configure this AWS CloudFormation template to invoke Lambda for `GET`, `HEAD`, and `LIST` API operations. For more information about modifying the template's default configuration, see [Automate S3 Object Lambda setup with a CloudFormation template](olap-using-cfn-template.md).<a name="olap-create-cli-specific"></a>

**To create an Object Lambda Access Point by using the AWS CLI**
**Note**  
To use the following commands, replace the `user input placeholders` with your own information.

The following example creates an Object Lambda Access Point named *`my-object-lambda-ap`* for the bucket *`amzn-s3-demo-bucket1`* in the account *`111122223333`*. This example assumes that a standard access point named *`example-ap`* has already been created. For information about creating a standard access point, see [Creating an access point](creating-access-points.md).

This example uses the AWS prebuilt function `decompress`. For more information about prebuilt functions, see [Using AWS built Lambda functions](olap-examples.md).

1. Create a bucket. In this example, we will use *`amzn-s3-demo-bucket1`*. For information about creating buckets, see [Creating a general purpose bucket](create-bucket-overview.md).

1. Create a standard access point and attach it to your bucket. In this example, we will use *`example-ap`*. For information about creating standard access points, see [Creating an access point](creating-access-points.md).

1. Do one of the following: 
   + Create a Lambda function in your account that you would like to use to transform your Amazon S3 object. For more information about creating Lambda functions, see [Writing Lambda functions for S3 Object Lambda Access Points](olap-writing-lambda.md). To use your custom function with the AWS CLI, see [Using Lambda with the AWS CLI](https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-awscli.html) in the *AWS Lambda Developer Guide*.
   + Use an AWS prebuilt Lambda function. For more information about prebuilt functions, see [Using AWS built Lambda functions](olap-examples.md).

1. Create a JSON configuration file named `my-olap-configuration.json`. In this configuration, provide the supporting access point and the Amazon Resource Name (ARN) for the Lambda function that you created in the previous steps or the ARN for the prebuilt function that you're using.  
**Example**  

   

   ```
   {
       "SupportingAccessPoint" : "arn:aws:s3:us-east-1:111122223333:accesspoint/example-ap",
       "TransformationConfigurations": [{
           "Actions" : ["GetObject", "HeadObject", "ListObjects", "ListObjectsV2"],
           "ContentTransformation" : {
               "AwsLambda": {
                   "FunctionPayload" : "{\"compressionType\":\"gzip\"}",
                   "FunctionArn" : "arn:aws:lambda:us-east-1:111122223333:function/compress"
               }
           }
       }]
   }
   ```

1. Run the `create-access-point-for-object-lambda` command to create your Object Lambda Access Point.

   ```
   aws s3control create-access-point-for-object-lambda --account-id 111122223333 --name my-object-lambda-ap --configuration file://my-olap-configuration.json
   ```

1. (Optional) Create a JSON policy file named `my-olap-policy.json`.

   An Object Lambda Access Point resource policy can control the use of the access point by resource, user, or other conditions. The following resource policy grants the `GetObject` permission for account *`444455556666`* to the specified Object Lambda Access Point.  
**Example**  

   

   ```
   {
       "Version": "2008-10-17",		 	 	 
       "Statement": [
           {
               "Sid": "Grant account 444455556666 GetObject access",
               "Effect": "Allow",
               "Action": "s3-object-lambda:GetObject",
               "Principal": {
                   "AWS": "arn:aws:iam::444455556666:root"
               },
               "Resource": "your-object-lambda-access-point-arn"
           }
       ]
   }
   ```

1. (Optional) Run the `put-access-point-policy-for-object-lambda` command to set your resource policy.

   ```
   aws s3control put-access-point-policy-for-object-lambda --account-id 111122223333 --name my-object-lambda-ap --policy file://my-olap-policy.json
   ```

1. (Optional) Specify a payload.

   A payload is optional JSON that you can provide to your AWS Lambda function as input. You can configure payloads with different parameters for different Object Lambda Access Points that invoke the same Lambda function, thereby extending the flexibility of your Lambda function.

   The following Object Lambda Access Point configuration shows a payload with two parameters.

   ```
   {
   	"SupportingAccessPoint": "AccessPointArn",
   	"CloudWatchMetricsEnabled": false,
   	"TransformationConfigurations": [{
   		"Actions": ["GetObject", "HeadObject", "ListObjects", "ListObjectsV2"],
   		"ContentTransformation": {
   			"AwsLambda": {
   				"FunctionArn": "FunctionArn",
   				"FunctionPayload": "{\"res-x\": \"100\",\"res-y\": \"100\"}"
   			}
   		}
   	}]
   }
   ```

   The following Object Lambda Access Point configuration shows a payload with one parameter, and with `GetObject-Range`, `GetObject-PartNumber`, `HeadObject-Range`, and `HeadObject-PartNumber` enabled.

   ```
   {
       "SupportingAccessPoint":"AccessPointArn",
       "CloudWatchMetricsEnabled": false,
       "AllowedFeatures": ["GetObject-Range", "GetObject-PartNumber", "HeadObject-Range", "HeadObject-PartNumber"],        
       "TransformationConfigurations": [{
           "Action": ["GetObject", "HeadObject", "ListObjects", "ListObjectsV2"],
           "ContentTransformation": {
               "AwsLambda": {
                   "FunctionArn":"FunctionArn",
                   "FunctionPayload": "{\"compression-amount\": \"5\"}"
               }
           }
       }]
   }
   ```
**Important**  
When you're using Object Lambda Access Points, make sure that the payload does not contain any confidential information.
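The JSON configurations above can also be assembled programmatically before you create the access point. The following Python sketch builds the same configuration document; the ARNs are the placeholder values used in this procedure, and the boto3 `create_access_point_for_object_lambda` call is shown as a comment because it requires real AWS credentials and resources.

```python
import json

def build_olap_configuration(supporting_ap_arn: str, function_arn: str,
                             payload: dict) -> dict:
    """Assemble an Object Lambda Access Point configuration document."""
    return {
        "SupportingAccessPoint": supporting_ap_arn,
        "TransformationConfigurations": [{
            "Actions": ["GetObject", "HeadObject", "ListObjects", "ListObjectsV2"],
            "ContentTransformation": {
                "AwsLambda": {
                    # FunctionPayload must be a JSON *string*, not a nested object.
                    "FunctionPayload": json.dumps(payload),
                    "FunctionArn": function_arn,
                }
            },
        }],
    }

# Placeholder ARNs from this procedure:
config = build_olap_configuration(
    "arn:aws:s3:us-east-1:111122223333:accesspoint/example-ap",
    "arn:aws:lambda:us-east-1:111122223333:function:compress",
    {"compressionType": "gzip"},
)

# To create the access point (requires AWS credentials):
# import boto3
# boto3.client("s3control").create_access_point_for_object_lambda(
#     AccountId="111122223333", Name="my-object-lambda-ap", Configuration=config)
```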

### Using the AWS CloudFormation console and template
<a name="olap-create-cfn-console"></a>

You can create an Object Lambda Access Point by using the default configuration provided by Amazon S3. You can download an AWS CloudFormation template and Lambda function source code from the [GitHub repository](https://github.com/aws-samples/amazon-s3-object-lambda-default-configuration) and deploy these resources to set up a functional Object Lambda Access Point.

For information about modifying the AWS CloudFormation template's default configuration, see [Automate S3 Object Lambda setup with a CloudFormation template](olap-using-cfn-template.md).

For information about configuring Object Lambda Access Points by using CloudFormation without the template, see [https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-s3objectlambda-accesspoint.html](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-s3objectlambda-accesspoint.html) in the *AWS CloudFormation User Guide*.

**To upload the Lambda function deployment package**

1. Download the AWS Lambda function deployment package `s3objectlambda_deployment_package.zip` at [S3 Object Lambda default configuration](https://github.com/aws-samples/amazon-s3-object-lambda-default-configuration).

1. Upload the package to an Amazon S3 bucket.

**To create an Object Lambda Access Point by using the AWS CloudFormation console**

1. Download the AWS CloudFormation template `s3objectlambda_defaultconfig.yaml` at [S3 Object Lambda default configuration](https://github.com/aws-samples/amazon-s3-object-lambda-default-configuration).

1. Sign in to the AWS Management Console and open the AWS CloudFormation console at [https://console.aws.amazon.com/cloudformation](https://console.aws.amazon.com/cloudformation/).

1. Do one of the following: 
   + If you've never used AWS CloudFormation before, on the AWS CloudFormation home page, choose **Create stack**.
   + If you have used AWS CloudFormation before, in the left navigation pane, choose **Stacks**. Choose **Create stack**, then choose **With new resources (standard)**.

1. For **Prerequisite - Prepare template**, choose **Template is ready**.

1. For **Specify template**, choose **Upload a template file** and upload `s3objectlambda_defaultconfig.yaml`.

1. Choose **Next**.

1. On the **Specify stack details** page, enter a name for the stack.

1. In the **Parameters** section, specify the following parameters that are defined in the stack template:

   1. For **CreateNewSupportingAccessPoint**, do one of the following: 
      + If you already have a supporting access point for the S3 bucket where you uploaded the template, choose **false**.
      + If you want to create a new access point for this bucket, choose **true**. 

   1. For **EnableCloudWatchMonitoring**, choose **true** or **false**, depending on whether you want to enable Amazon CloudWatch request metrics and alarms. 

   1. (Optional) For **LambdaFunctionPayload**, add JSON text that you want to provide to your Lambda function as input. You can configure payloads with different parameters for different Object Lambda Access Points that invoke the same Lambda function, thereby extending the flexibility of your Lambda function.
**Important**  
When you're using Object Lambda Access Points, make sure that the payload does not contain any confidential information.

   1. For **LambdaFunctionRuntime**, enter your preferred runtime for the Lambda function. The available choices are `nodejs14.x`, `python3.9`, and `java11`.

   1. For **LambdaFunctionS3BucketName**, enter the Amazon S3 bucket name where you uploaded the deployment package.

   1. For **LambdaFunctionS3Key**, enter the Amazon S3 object key of the deployment package that you uploaded.

   1. For **LambdaFunctionS3ObjectVersion**, enter the Amazon S3 object version of the deployment package that you uploaded.

   1. For **ObjectLambdaAccessPointName**, enter a name for your Object Lambda Access Point.

   1. For **S3BucketName**, enter the Amazon S3 bucket name that will be associated with your Object Lambda Access Point.

   1. For **SupportingAccessPointName**, enter the name of your supporting access point.
**Note**  
This is an access point that is associated with the Amazon S3 bucket that you chose in the previous step. If you do not have any access points associated with your Amazon S3 bucket, you can configure the template to create one for you by choosing **true** for **CreateNewSupportingAccessPoint**.

1. Choose **Next**.

1. On the **Configure stack options** page, choose **Next**.

   For more information about the optional settings on this page, see [Setting AWS CloudFormation stack options](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-console-add-tags.html) in the *AWS CloudFormation User Guide*.

1. On the **Review** page, choose **Create stack**.

### Using the AWS Cloud Development Kit (AWS CDK)
<a name="olap-create-cdk"></a>

For more information about configuring Object Lambda Access Points by using the AWS CDK, see [`AWS::S3ObjectLambda` Construct Library](https://docs.aws.amazon.com/cdk/api/latest/docs/aws-s3objectlambda-readme.html) in the *AWS Cloud Development Kit (AWS CDK) API Reference*.

# Automate S3 Object Lambda setup with a CloudFormation template
<a name="olap-using-cfn-template"></a>

**Note**  
As of November 7, 2025, S3 Object Lambda is available only to existing customers who are currently using the service and to select AWS Partner Network (APN) partners. For capabilities similar to S3 Object Lambda, see [Amazon S3 Object Lambda availability change](https://docs.aws.amazon.com/AmazonS3/latest/userguide/amazons3-ol-change.html).

You can use an AWS CloudFormation template to quickly create an Amazon S3 Object Lambda Access Point. The CloudFormation template automatically creates relevant resources, configures AWS Identity and Access Management (IAM) roles, and sets up an AWS Lambda function that automatically handles requests through the Object Lambda Access Point. With the CloudFormation template, you can implement best practices, improve your security posture, and reduce errors caused by manual processes.

This [GitHub repository](https://github.com/aws-samples/amazon-s3-object-lambda-default-configuration) contains the CloudFormation template and Lambda function source code. For instructions on how to use the template, see [Creating Object Lambda Access Points](olap-create.md).

The Lambda function provided in the template does not run any transformation. Instead, it returns your objects as-is from the underlying data source. You can clone the function and add your own transformation code to modify and process data as it is returned to an application. For more information about modifying your function, see [Modifying the Lambda function](#modifying-lambda-function) and [Writing Lambda functions for S3 Object Lambda Access Points](olap-writing-lambda.md). 

## Modifying the template
<a name="modifying-cfn-template"></a>

**Creating a new supporting access point**  
S3 Object Lambda uses two access points, an Object Lambda Access Point and a standard S3 access point, which is referred to as the *supporting access point*. When you make a request to an Object Lambda Access Point, S3 either invokes Lambda on your behalf, or it delegates the request to the supporting access point, depending upon the S3 Object Lambda configuration. You can create a new supporting access point by passing the following parameter as part of the `aws cloudformation deploy` command when deploying the template.

```
CreateNewSupportingAccessPoint=true
```

**Configuring a function payload**  
You can configure a payload to provide supplemental data to the Lambda function by passing the following parameter as part of the `aws cloudformation deploy` command when deploying the template.

```
LambdaFunctionPayload="format=json"
```
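Inside the Lambda function, this payload arrives verbatim as a string in the invocation event's `configuration` section. The following is a minimal Python sketch of reading it; the event shapes below are abbreviated to only the field used here, and the JSON variant assumes a payload that was configured as a JSON string.

```python
import json

def read_payload(event: dict) -> str:
    """Return the raw payload string from an S3 Object Lambda event."""
    return event.get("configuration", {}).get("payload", "")

# Abbreviated event, showing only the configuration section:
event = {"configuration": {"payload": "format=json"}}
assert read_payload(event) == "format=json"

# If the payload was configured as a JSON string, parse it explicitly:
json_event = {"configuration": {"payload": "{\"compressionType\": \"gzip\"}"}}
options = json.loads(read_payload(json_event))
assert options["compressionType"] == "gzip"
```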

**Enabling Amazon CloudWatch monitoring**  
You can enable CloudWatch monitoring by passing the following parameter as part of the `aws cloudformation deploy` command when deploying the template.

```
EnableCloudWatchMonitoring=true
```

This parameter enables your Object Lambda Access Point for Amazon S3 request metrics and creates two CloudWatch alarms to monitor client-side and server-side errors.

**Note**  
Amazon CloudWatch usage will incur additional costs. For more information about Amazon S3 request metrics, see [Monitoring and logging access points](access-points-monitoring-logging.md).  
For pricing details, see [CloudWatch pricing](https://aws.amazon.com/cloudwatch/pricing/). 

**Configuring provisioned concurrency**  
To reduce latency, you can configure provisioned concurrency for the Lambda function that's backing the Object Lambda Access Point by editing the template to include the following lines under `Resources`.

```
LambdaFunctionVersion:
  Type: AWS::Lambda::Version
  Properties:
    FunctionName: !Ref LambdaFunction
    ProvisionedConcurrencyConfig:
      ProvisionedConcurrentExecutions: Integer
```

**Note**  
You will incur additional charges for provisioned concurrency. For more information about provisioned concurrency, see [Managing Lambda provisioned concurrency](https://docs.aws.amazon.com/lambda/latest/dg/provisioned-concurrency.html) in the *AWS Lambda Developer Guide*.  
For pricing details, see [AWS Lambda pricing](https://aws.amazon.com/lambda/pricing/).

## Modifying the Lambda function
<a name="modifying-lambda-function"></a>

**Changing header values for a `GetObject` request**  
By default, the Lambda function forwards all headers, except `Content-Length` and `ETag`, from the presigned URL request to the `GetObject` client. Based on your transformation code in the Lambda function, you can choose to send new header values to the `GetObject` client.

You can update your Lambda function to send new header values by passing them in the `WriteGetObjectResponse` API operation.

For example, if your Lambda function translates text in Amazon S3 objects to a different language, you can pass a new value in the `Content-Language` header. You can do this by modifying the `writeResponse` function as follows:

```
async function writeResponse (s3Client: S3, requestContext: GetObjectContext, transformedObject: Buffer,
  headers: Headers): Promise<PromiseResult<{}, AWSError>> {
  const { algorithm, digest } = getChecksum(transformedObject);

  return s3Client.writeGetObjectResponse({
    RequestRoute: requestContext.outputRoute,
    RequestToken: requestContext.outputToken,
    Body: transformedObject,
    Metadata: {
      'body-checksum-algorithm': algorithm,
      'body-checksum-digest': digest
    },
    ...headers,
    ContentLanguage: 'my-new-language'
  }).promise();
}
```

For a full list of supported headers, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_WriteGetObjectResponse.html#API_WriteGetObjectResponse_RequestSyntax](https://docs.aws.amazon.com/AmazonS3/latest/API/API_WriteGetObjectResponse.html#API_WriteGetObjectResponse_RequestSyntax) in the *Amazon Simple Storage Service API Reference*.

**Returning metadata headers**  
You can update your Lambda function to send new header values by passing them in the [https://docs.aws.amazon.com/AmazonS3/latest/API/API_WriteGetObjectResponse.html#API_WriteGetObjectResponse_RequestSyntax](https://docs.aws.amazon.com/AmazonS3/latest/API/API_WriteGetObjectResponse.html#API_WriteGetObjectResponse_RequestSyntax) API operation request.

```
async function writeResponse (s3Client: S3, requestContext: GetObjectContext, transformedObject: Buffer,
  headers: Headers): Promise<PromiseResult<{}, AWSError>> {
  const { algorithm, digest } = getChecksum(transformedObject);

  return s3Client.writeGetObjectResponse({
    RequestRoute: requestContext.outputRoute,
    RequestToken: requestContext.outputToken,
    Body: transformedObject,
    Metadata: {
      'body-checksum-algorithm': algorithm,
      'body-checksum-digest': digest,
      'my-new-header': 'my-new-value' 
    },
    ...headers
  }).promise();
}
```

**Returning a new status code**  
You can return a custom status code to the `GetObject` client by passing it in the [https://docs.aws.amazon.com/AmazonS3/latest/API/API_WriteGetObjectResponse.html#API_WriteGetObjectResponse_RequestSyntax](https://docs.aws.amazon.com/AmazonS3/latest/API/API_WriteGetObjectResponse.html#API_WriteGetObjectResponse_RequestSyntax) API operation request.

```
async function writeResponse (s3Client: S3, requestContext: GetObjectContext, transformedObject: Buffer,
  headers: Headers): Promise<PromiseResult<{}, AWSError>> {
  const { algorithm, digest } = getChecksum(transformedObject);

  return s3Client.writeGetObjectResponse({
    RequestRoute: requestContext.outputRoute,
    RequestToken: requestContext.outputToken,
    Body: transformedObject,
    Metadata: {
      'body-checksum-algorithm': algorithm,
      'body-checksum-digest': digest
    },
    ...headers,
    StatusCode: Integer
  }).promise();
}
```

For a full list of supported status codes, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_WriteGetObjectResponse.html#API_WriteGetObjectResponse_RequestSyntax](https://docs.aws.amazon.com/AmazonS3/latest/API/API_WriteGetObjectResponse.html#API_WriteGetObjectResponse_RequestSyntax) in the *Amazon Simple Storage Service API Reference*.

**Applying `Range` and `partNumber` parameters to the source object**  
By default, the Object Lambda Access Point created by the CloudFormation template can handle the `Range` and `partNumber` parameters. The Lambda function applies the range or part number requested to the transformed object. To do so, the function must download the whole object and run the transformation. In some cases, your transformed object ranges might map exactly to your source object ranges. This means that requesting byte range A-B on your source object and running the transformation might produce the same result as requesting the whole object, running the transformation, and returning byte range A-B on the transformed object.

In such cases, you can change the Lambda function implementation to apply the range or part number directly to the source object. This approach reduces the overall function latency and memory required. For more information, see [Working with Range and partNumber headers](range-get-olap.md).
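When your transformed ranges map exactly to source ranges, the function can pass the caller's `Range` header through to the presigned URL instead of downloading the whole object. The following is a minimal Python sketch under that assumption (the event fields match the `GetObject` event shape used in the examples later in this guide; `extract_range` is an illustrative helper):

```python
def extract_range(headers):
    """Return the value of the Range header, matching the name
    case-insensitively, or None if the caller didn't send one."""
    for name, value in headers.items():
        if name.lower() == "range":
            return value
    return None

def handler(event, context):
    # boto3 and requests are assumed to be available in the Lambda environment.
    import boto3
    import requests

    ctx = event["getObjectContext"]
    range_header = extract_range(event["userRequest"]["headers"])

    # Forward the caller's range to the presigned URL so that only the
    # requested bytes of the source object are downloaded and transformed.
    fetch_headers = {"Range": range_header} if range_header else {}
    source = requests.get(ctx["inputS3Url"], headers=fetch_headers)

    # Transform source.content here, then return the result to the caller.
    boto3.client("s3").write_get_object_response(
        RequestRoute=ctx["outputRoute"],
        RequestToken=ctx["outputToken"],
        Body=source.content)
    return {"status_code": 200}
```

This sketch is only correct when the transformation preserves byte offsets; otherwise the function must download and transform the whole object before applying the range.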

**Disabling `Range` and `partNumber` handling**  
By default, the Object Lambda Access Point created by the CloudFormation template can handle the `Range` and `partNumber` parameters. If you don't need this behavior, you can disable it by removing the following lines from the template:

```
AllowedFeatures:
  - GetObject-Range
  - GetObject-PartNumber
  - HeadObject-Range 
  - HeadObject-PartNumber
```

**Transforming large objects**  
By default, the Lambda function processes the entire object in memory before it can start streaming the response to S3 Object Lambda. You can modify the function to stream the response as it performs the transformation. Doing so helps reduce the transformation latency and the Lambda function memory size. For an example implementation, see the [Stream compressed content example](olap-writing-lambda.md#olap-getobject-response).

# Using Amazon S3 Object Lambda Access Points
<a name="olap-use"></a>

**Note**  
As of November 7th, 2025, S3 Object Lambda is available only to existing customers that are currently using the service as well as to select AWS Partner Network (APN) partners. For capabilities similar to S3 Object Lambda, learn more here - [Amazon S3 Object Lambda availability change](https://docs.aws.amazon.com/AmazonS3/latest/userguide/amazons3-ol-change.html).

Making requests through Amazon S3 Object Lambda Access Points works the same as making requests through other access points. For more information about how to make requests through an access point, see [Using Amazon S3 access points for general purpose buckets](using-access-points.md). You can make requests through Object Lambda Access Points by using the Amazon S3 console, AWS Command Line Interface (AWS CLI), AWS SDKs, or Amazon S3 REST API.

**Important**  
The Amazon Resource Names (ARNs) for Object Lambda Access Points use a service name of `s3-object-lambda`. Thus, Object Lambda Access Point ARNs begin with `arn:aws:s3-object-lambda`, instead of `arn:aws:s3`, which is used with other access points.
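Because the service name differs, code that handles both kinds of access points can distinguish them by the service field, which is the third colon-separated component of the ARN. A minimal sketch:

```python
def is_object_lambda_arn(arn):
    """Return True if the ARN's service component is s3-object-lambda.

    ARN format: arn:partition:service:region:account-id:resource
    """
    parts = arn.split(":", 5)
    return len(parts) == 6 and parts[2] == "s3-object-lambda"
```

For example, this returns `True` for `arn:aws:s3-object-lambda:us-east-1:111122223333:accesspoint/my-object-lambda-ap` and `False` for a standard access point ARN.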

## How to find the ARN for your Object Lambda Access Point
<a name="olap-find-arn"></a>

To use an Object Lambda Access Point with the AWS CLI or AWS SDKs, you need to know the Amazon Resource Name (ARN) of the Object Lambda Access Point. The following examples show how to find the ARN for an Object Lambda Access Point by using the Amazon S3 console or AWS CLI. 

### Using the S3 console
<a name="olap-use-arn-console"></a>

**To find the ARN for your Object Lambda Access Point by using the console**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Object Lambda Access Points**.

1. Choose the option button next to the Object Lambda Access Point whose ARN you want to copy.

1. Choose **Copy ARN**.

### Using the AWS CLI
<a name="olap-use-arn-cli"></a>

**To find the ARN for your Object Lambda Access Point by using the AWS CLI**

1. To retrieve a list of the Object Lambda Access Points that are associated with your AWS account, run the following command. Before running the command, replace the account ID *`111122223333`* with your AWS account ID.

   ```
   aws s3control list-access-points-for-object-lambda --account-id 111122223333
   ```

1. Review the command output to find the Object Lambda Access Point ARN that you want to use. The output of the previous command should look similar to the following example.

   ```
   {
       "ObjectLambdaAccessPointList": [
           {
               "Name": "my-object-lambda-ap",
               "ObjectLambdaAccessPointArn": "arn:aws:s3-object-lambda:us-east-1:111122223333:accesspoint/my-object-lambda-ap"
           },
           ...
       ]
   }
   ```
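If you script this lookup, you can filter the listing by name instead of scanning it by eye. A minimal Python sketch against the output shape shown above:

```python
import json

def find_olap_arn(cli_output, name):
    """Return the ARN of the named Object Lambda Access Point, or None
    if it isn't in the listing."""
    listing = json.loads(cli_output)
    for access_point in listing.get("ObjectLambdaAccessPointList", []):
        if access_point.get("Name") == name:
            return access_point.get("ObjectLambdaAccessPointArn")
    return None

# Sample output mirroring the listing above.
sample = """{
    "ObjectLambdaAccessPointList": [
        {
            "Name": "my-object-lambda-ap",
            "ObjectLambdaAccessPointArn": "arn:aws:s3-object-lambda:us-east-1:111122223333:accesspoint/my-object-lambda-ap"
        }
    ]
}"""
print(find_olap_arn(sample, "my-object-lambda-ap"))
# arn:aws:s3-object-lambda:us-east-1:111122223333:accesspoint/my-object-lambda-ap
```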

## How to use a bucket-style alias for your Object Lambda Access Point
<a name="ol-access-points-alias"></a>

When you create an Object Lambda Access Point, Amazon S3 automatically generates a unique alias for your Object Lambda Access Point. You can use this alias instead of an Amazon S3 bucket name or the Object Lambda Access Point Amazon Resource Name (ARN) in a request for access point data plane operations. For a list of these operations, see [Access point compatibility](access-points-service-api-support.md).

An Object Lambda Access Point alias name is created within the same namespace as an Amazon S3 bucket. This alias name is automatically generated and cannot be changed. For an existing Object Lambda Access Point, an alias is automatically assigned for use. An Object Lambda Access Point alias name meets all the requirements of a valid Amazon S3 bucket name and consists of the following parts:

`Object Lambda Access Point name prefix-metadata--ol-s3`

**Note**  
The `--ol-s3` suffix is reserved for Object Lambda Access Point alias names and can't be used for bucket or Object Lambda Access Point names. For more information about Amazon S3 bucket-naming rules, see [General purpose bucket naming rules](bucketnamingrules.md).

The following examples show the ARN and the Object Lambda Access Point alias for an Object Lambda Access Point named `my-object-lambda-access-point`:
+ **ARN** – `arn:aws:s3-object-lambda:region:account-id:accesspoint/my-object-lambda-access-point`
+ **Object Lambda Access Point alias** – `my-object-lambda-acc-1a4n8yjrb3kda96f67zwrwiiuse1a--ol-s3`
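Because the `--ol-s3` suffix is reserved for aliases, a pre-flight check when generating bucket or access point names can be sketched as follows (the helper is illustrative, not part of any AWS SDK):

```python
RESERVED_ALIAS_SUFFIX = "--ol-s3"

def is_valid_candidate_name(name):
    """Reject names that end with the suffix reserved for
    Object Lambda Access Point aliases."""
    return not name.endswith(RESERVED_ALIAS_SUFFIX)
```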

Because an Object Lambda Access Point alias meets the requirements of a valid bucket name, you can use it in applications that expect a bucket name without extensive code changes.

When you delete an Object Lambda Access Point, the Object Lambda Access Point alias name becomes inactive and unprovisioned.

### How to find the alias for your Object Lambda Access Point
<a name="olap-find-alias"></a>

#### Using the S3 console
<a name="olap-use-alias-console"></a>

**To find the alias for your Object Lambda Access Point by using the console**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Object Lambda Access Points**.

1. For the Object Lambda Access Point that you want to use, copy the **Object Lambda Access Point alias** value.

#### Using the AWS CLI
<a name="olap-use-alias-cli"></a>

When you create an Object Lambda Access Point, Amazon S3 automatically generates an Object Lambda Access Point alias name, as shown in the following example command. To run this command, replace the `user input placeholders` with your own information. For information about how to create an Object Lambda Access Point by using the AWS CLI, see [To create an Object Lambda Access Point by using the AWS CLI](olap-create.md#olap-create-cli-specific).

```
aws s3control create-access-point-for-object-lambda --account-id 111122223333 --name my-object-lambda-access-point --configuration file://my-olap-configuration.json
{
    "ObjectLambdaAccessPointArn": "arn:aws:s3-object-lambda:us-east-1:111122223333:accesspoint/my-object-lambda-access-point",
    "Alias": {
        "Value": "my-object-lambda-acc-1a4n8yjrb3kda96f67zwrwiiuse1a--ol-s3",
        "Status": "READY"
    }
}
```

The generated Object Lambda Access Point alias has two fields:
+ The `Value` field is the alias value of the Object Lambda Access Point. 
+ The `Status` field is the status of the Object Lambda Access Point alias. If the status is `PROVISIONING`, Amazon S3 is provisioning the Object Lambda Access Point alias, and the alias is not yet ready for use. If the status is `READY`, the Object Lambda Access Point alias has been successfully provisioned and is ready for use.
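Before using a newly created alias, a script can check the `Status` field in the parsed response. A minimal sketch against the response shape shown above:

```python
def alias_if_ready(create_response):
    """Return the alias value when its status is READY; return None while
    the alias is still PROVISIONING and not yet usable."""
    alias = create_response.get("Alias", {})
    if alias.get("Status") == "READY":
        return alias.get("Value")
    return None

# Sample parsed response mirroring the example above.
response = {
    "ObjectLambdaAccessPointArn": "arn:aws:s3-object-lambda:us-east-1:111122223333:accesspoint/my-object-lambda-access-point",
    "Alias": {
        "Value": "my-object-lambda-acc-1a4n8yjrb3kda96f67zwrwiiuse1a--ol-s3",
        "Status": "READY"
    }
}
print(alias_if_ready(response))
# my-object-lambda-acc-1a4n8yjrb3kda96f67zwrwiiuse1a--ol-s3
```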

For more information about the `ObjectLambdaAccessPointAlias` data type in the REST API, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_CreateAccessPointForObjectLambda.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_CreateAccessPointForObjectLambda.html) and [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ObjectLambdaAccessPointAlias.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ObjectLambdaAccessPointAlias.html) in the *Amazon Simple Storage Service API Reference*.

### How to use the Object Lambda Access Point alias
<a name="use-olap-alias"></a>

You can use an Object Lambda Access Point alias instead of an Amazon S3 bucket name for the operations listed in [Access point compatibility](access-points-service-api-support.md).

The following AWS CLI example for the `get-bucket-location` command uses an Object Lambda Access Point alias in place of the bucket name to return the AWS Region that the underlying bucket is in. To run this command, replace the `user input placeholders` with your own information.

```
aws s3api get-bucket-location --bucket my-object-lambda-acc-w7i37nq6xuzgax3jw3oqtifiusw2a--ol-s3
{
    "LocationConstraint": "us-west-2"
}
```

If the Object Lambda Access Point alias in a request isn't valid, the error code `InvalidAccessPointAliasError` is returned. For more information about `InvalidAccessPointAliasError`, see [List of Error Codes](https://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html#ErrorCodeList) in the *Amazon Simple Storage Service API Reference*.
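With the AWS SDKs, this error surfaces as a client error whose code you can inspect; for example, with boto3 the parsed error body is available as `ClientError.response`. A minimal sketch of the check itself, operating on that parsed dictionary:

```python
def is_invalid_alias_error(error_response):
    """Return True if a parsed S3 error response (for example, from
    botocore's ClientError.response) carries InvalidAccessPointAliasError."""
    code = error_response.get("Error", {}).get("Code")
    return code == "InvalidAccessPointAliasError"
```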

The limitations of an Object Lambda Access Point alias are the same as those of an access point alias. For more information about the limitations of an access point alias, see [Access point alias limitations](access-points-naming.md#use-ap-alias-limitations).

# Security considerations for S3 Object Lambda Access Points
<a name="olap-security"></a>

**Note**  
As of November 7th, 2025, S3 Object Lambda is available only to existing customers that are currently using the service as well as to select AWS Partner Network (APN) partners. For capabilities similar to S3 Object Lambda, learn more here - [Amazon S3 Object Lambda availability change](https://docs.aws.amazon.com/AmazonS3/latest/userguide/amazons3-ol-change.html).

With Amazon S3 Object Lambda, you can perform custom transformations on data as it leaves Amazon S3 by using the scale and flexibility of AWS Lambda as a compute platform. S3 and Lambda remain secure by default, but to maintain this security, special consideration by the Lambda function author is required. S3 Object Lambda requires that all access be made by authenticated principals (no anonymous access) and over HTTPS.

To mitigate security risks, we recommend the following: 
+ Scope the Lambda execution role to the smallest set of permissions possible.
+ Whenever possible, make sure your Lambda function accesses Amazon S3 through the provided presigned URL. 

## Configuring IAM policies
<a name="olap-iam-policies"></a>

S3 access points support AWS Identity and Access Management (IAM) resource policies that allow you to control the use of the access point by resource, user, or other conditions. For more information, see [Configuring IAM policies for Object Lambda Access Points](olap-policies.md).

## Encryption behavior
<a name="olap-encryption"></a>

Because Object Lambda Access Points use both Amazon S3 and AWS Lambda, there are differences in encryption behavior. For more information about default S3 encryption behavior, see [Setting default server-side encryption behavior for Amazon S3 buckets](bucket-encryption.md).
+ When you're using S3 server-side encryption with Object Lambda Access Points, the object is decrypted before being sent to Lambda. After the object is sent to Lambda, it is processed unencrypted (in the case of a `GET` or `HEAD` request).
+ To prevent the encryption key from being logged, S3 rejects `GET` and `HEAD` requests for objects that are encrypted by using server-side encryption with customer-provided keys (SSE-C). However, the Lambda function might still retrieve these objects if it has access to the customer-provided key.
+ When using S3 client-side encryption with Object Lambda Access Points, make sure that Lambda has access to the encryption key so that it can decrypt and re-encrypt the object.

## Access points security
<a name="olap-access-points-security"></a>

S3 Object Lambda uses two access points: an Object Lambda Access Point and a standard S3 access point, which is referred to as the *supporting access point*. When you make a request to an Object Lambda Access Point, S3 either invokes Lambda on your behalf or delegates the request to the supporting access point, depending on the S3 Object Lambda configuration. When Lambda is invoked for a request, S3 generates a presigned URL to your object on your behalf through the supporting access point. Your Lambda function receives this URL as input when the function is invoked.

You can set your Lambda function to use this presigned URL to retrieve the original object, instead of invoking S3 directly. By using this model, you can apply better security boundaries to your objects. You can limit direct object access through S3 buckets or S3 access points to a limited set of IAM roles or users. This approach also protects your Lambda functions from being subject to the [confused deputy problem](https://docs.aws.amazon.com/IAM/latest/UserGuide/confused-deputy.html), where a misconfigured function with different permissions than the invoker could allow or deny access to objects when it should not.

## Object Lambda Access Point public access
<a name="olap-public-access"></a>

S3 Object Lambda does not allow anonymous or public access because Amazon S3 must authorize your identity to complete any S3 Object Lambda request. When you make requests through an Object Lambda Access Point, you must have the `lambda:InvokeFunction` permission for the configured Lambda function. Similarly, when you make other API operation requests through an Object Lambda Access Point, you must have the corresponding `s3:*` permissions. 

Without these permissions, requests to invoke Lambda or delegate to S3 will fail with HTTP 403 (Forbidden) errors. All access must be made by authenticated principals. If you require public access, you can use Lambda@Edge as a possible alternative. For more information, see [Customizing at the edge with Lambda@Edge](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-at-the-edge.html) in the *Amazon CloudFront Developer Guide*.

## Object Lambda Access Point IP addresses
<a name="olap-ips"></a>

The `describe-managed-prefix-lists` subnets support gateway virtual private cloud (VPC) endpoints and are related to the routing table of VPC endpoints. Because Object Lambda Access Points don't support gateway VPC endpoints, their IP ranges are missing from these prefix lists. The missing ranges belong to Amazon S3 but are not supported by gateway VPC endpoints. For more information about `describe-managed-prefix-lists`, see [DescribeManagedPrefixLists](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeManagedPrefixLists.html) in the *Amazon EC2 API Reference* and [AWS IP address ranges](https://docs.aws.amazon.com/general/latest/gr/aws-ip-ranges.html) in the *AWS General Reference*.

# Configuring IAM policies for Object Lambda Access Points
<a name="olap-policies"></a>

**Note**  
As of November 7th, 2025, S3 Object Lambda is available only to existing customers that are currently using the service as well as to select AWS Partner Network (APN) partners. For capabilities similar to S3 Object Lambda, learn more here - [Amazon S3 Object Lambda availability change](https://docs.aws.amazon.com/AmazonS3/latest/userguide/amazons3-ol-change.html).

Amazon S3 access points support AWS Identity and Access Management (IAM) resource policies that you can use to control the use of the access point by resource, user, or other conditions. You can control access through an optional resource policy on your Object Lambda Access Point, or a resource policy on the supporting access point. For step-by-step examples, see [Tutorial: Transforming data for your application with S3 Object Lambda](tutorial-s3-object-lambda-uppercase.md) and [Tutorial: Detecting and redacting PII data with S3 Object Lambda and Amazon Comprehend](tutorial-s3-object-lambda-redact-pii.md). 

The following four resources must have permissions granted to work with Object Lambda Access Points:
+ The IAM identity, such as user or role. For more information about IAM identities and best practices, see [IAM identities (users, user groups, and roles)](https://docs.aws.amazon.com/IAM/latest/UserGuide/id.html) in the *IAM User Guide*.
+ The standard access point connected to an underlying data source such as an S3 bucket or Amazon FSx for OpenZFS volume. When you're working with Object Lambda Access Points, this standard access point is known as a *supporting access point*.
+ The Object Lambda Access Point.
+ The AWS Lambda function.

**Important**  
Before you save your policy, make sure to resolve security warnings, errors, general warnings, and suggestions from AWS Identity and Access Management Access Analyzer. IAM Access Analyzer runs policy checks to validate your policy against IAM [policy grammar](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_grammar.html) and [best practices](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html). These checks generate findings and provide actionable recommendations to help you author policies that are functional and conform to security best practices.   
To learn more about validating policies by using IAM Access Analyzer, see [IAM Access Analyzer policy validation](https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-policy-validation.html) in the *IAM User Guide*. To view a list of the warnings, errors, and suggestions that are returned by IAM Access Analyzer, see [IAM Access Analyzer policy check reference](https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-reference-policy-checks.html).

The following policy examples assume that you have the following resources:
+ An Amazon S3 bucket with the following Amazon Resource Name (ARN): 

  `arn:aws:s3:::amzn-s3-demo-bucket1`
+ An Amazon S3 standard access point on this bucket with the following ARN: 

  `arn:aws:s3:us-east-1:111122223333:accesspoint/my-access-point`
+ An Object Lambda Access Point with the following ARN: 

  `arn:aws:s3-object-lambda:us-east-1:111122223333:accesspoint/my-object-lambda-ap`
+ An AWS Lambda function with the following ARN: 

  `arn:aws:lambda:us-east-1:111122223333:function:MyObjectLambdaFunction`

**Note**  
If you're using a Lambda function from your account, you must include the specific function version in your policy statement. In the following example ARN, the version is indicated by *1*:  
`arn:aws:lambda:us-east-1:111122223333:function:MyObjectLambdaFunction:1`  
Lambda doesn't support adding IAM policies to the version `$LATEST`. For more information about Lambda function versions, see [Lambda function versions](https://docs.aws.amazon.com/lambda/latest/dg/configuration-versions.html) in the *AWS Lambda Developer Guide*.
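A helper that builds the qualified function ARN for a policy statement, refusing `$LATEST`, can be sketched as follows (illustrative, not part of any AWS SDK):

```python
def qualified_function_arn(function_arn, version):
    """Append a version qualifier to an unqualified Lambda function ARN.
    Lambda doesn't support adding IAM policies to the $LATEST version."""
    if version == "$LATEST":
        raise ValueError("IAM policies can't target the $LATEST version")
    return f"{function_arn}:{version}"

print(qualified_function_arn(
    "arn:aws:lambda:us-east-1:111122223333:function:MyObjectLambdaFunction", "1"))
# arn:aws:lambda:us-east-1:111122223333:function:MyObjectLambdaFunction:1
```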

**Example – Bucket policy that delegates access control to standard access points**  
The following S3 bucket policy example delegates access control for a bucket to the bucket's standard access points. This policy allows full access to all access points that are owned by the bucket owner's account. Thus, all access to this bucket is controlled by the policies that are attached to its access points. Users can read from the bucket only through an access point, which means that operations can be invoked only through access points. For more information, see [Delegating access control to access points](access-points-policies.md#access-points-delegating-control).     
****  

```
{
    "Version":"2012-10-17",
    "Statement" : [
    {
        "Effect": "Allow",
        "Principal" : { "AWS":"account-ARN"},
        "Action" : "*",
        "Resource" : [
            "arn:aws:s3:::amzn-s3-demo-bucket1",
            "arn:aws:s3:::amzn-s3-demo-bucket1/*"
        ],
        "Condition": {
            "StringEquals" : { "s3:DataAccessPointAccount" : "Bucket owner's account ID" }
        }
    }]
}
```

**Example – IAM policy that grants a user the necessary permissions to use an Object Lambda Access Point**  
The following IAM policy grants a user permissions to the Lambda function, the standard access point, and the Object Lambda Access Point.    
****  

```
{
  "Version":"2012-10-17",
  "Statement": [
    {
      "Sid": "AllowLambdaInvocation",
      "Action": [
        "lambda:InvokeFunction"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:lambda:us-east-1:111122223333:function:MyObjectLambdaFunction:1",
      "Condition": {
        "ForAnyValue:StringEquals": {
          "aws:CalledVia": [
            "s3-object-lambda.amazonaws.com"
          ]
        }
      }
    },
    {
      "Sid": "AllowStandardAccessPointAccess",
      "Action": [
        "s3:Get*",
        "s3:List*"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:us-east-1:111122223333:accesspoint/my-access-point/*",
      "Condition": {
        "ForAnyValue:StringEquals": {
          "aws:CalledVia": [
            "s3-object-lambda.amazonaws.com"
          ]
        }
      }
    },
    {
      "Sid": "AllowObjectLambdaAccess",
      "Action": [
        "s3-object-lambda:Get*",
        "s3-object-lambda:List*"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3-object-lambda:us-east-1:111122223333:accesspoint/my-object-lambda-ap"
    }
  ]
}
```

## Enable permissions for Lambda execution roles
<a name="olap-execution-role"></a>

When `GET` requests are made to an Object Lambda Access Point, your Lambda function needs permission to send data to S3 Object Lambda. This permission is provided by adding the `s3-object-lambda:WriteGetObjectResponse` permission to your Lambda function's execution role. You can create a new execution role or update an existing one.

**Note**  
Your function needs the `s3-object-lambda:WriteGetObjectResponse` permission only if you're making a `GET` request.
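If you prefer an inline policy over the AWS managed policy described in the following procedure, a minimal statement granting only this permission might look like the following sketch (the `Sid` is illustrative; scope `Resource` more narrowly if your security posture requires it):

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowWriteGetObjectResponse",
            "Effect": "Allow",
            "Action": "s3-object-lambda:WriteGetObjectResponse",
            "Resource": "*"
        }
    ]
}
```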

**To create an execution role in the IAM console**

1. Open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. In the left navigation pane, choose **Roles**. 

1. Choose **Create role**.

1. Under **Common use cases**, choose **Lambda**.

1. Choose **Next**.

1. On the **Add permissions** page, search for the AWS managed policy [https://console.aws.amazon.com/iam/home#/policies/arn:aws:iam::aws:policy/service-role/AmazonS3ObjectLambdaExecutionRolePolicy$serviceLevelSummary](https://console.aws.amazon.com/iam/home#/policies/arn:aws:iam::aws:policy/service-role/AmazonS3ObjectLambdaExecutionRolePolicy$serviceLevelSummary), and then select the check box beside the policy name. 

   This policy grants the `s3-object-lambda:WriteGetObjectResponse` action.

1. Choose **Next**.

1. On the **Name, review, and create** page, for **Role name**, enter **s3-object-lambda-role**.

1. (Optional) Add a description and tags for this role. 

1. Choose **Create role**.

1. Apply the newly created **s3-object-lambda-role** as your Lambda function's execution role. This can be done during or after Lambda function creation in the Lambda console.

For more information about execution roles, see [Lambda execution role](https://docs.aws.amazon.com/lambda/latest/dg/lambda-intro-execution-role.html) in the *AWS Lambda Developer Guide*.

## Using context keys with Object Lambda Access Points
<a name="olap-keys"></a>

S3 Object Lambda evaluates context keys such as `s3-object-lambda:TlsVersion` or `s3-object-lambda:AuthType` that are related to the connection or signing of the request. All other context keys, such as `s3:prefix`, are evaluated by Amazon S3. 

## Object Lambda Access Point CORS support
<a name="olap-cors"></a>

When S3 Object Lambda receives a request from a browser, or when a request includes an `Origin` header, S3 Object Lambda always adds an `"AllowedOrigins":"*"` header field.

For more information, see [Using cross-origin resource sharing (CORS)](cors.md).

# Writing Lambda functions for S3 Object Lambda Access Points
<a name="olap-writing-lambda"></a>

**Note**  
As of November 7th, 2025, S3 Object Lambda is available only to existing customers that are currently using the service as well as to select AWS Partner Network (APN) partners. For capabilities similar to S3 Object Lambda, learn more here - [Amazon S3 Object Lambda availability change](https://docs.aws.amazon.com/AmazonS3/latest/userguide/amazons3-ol-change.html).

This section details how to write AWS Lambda functions for use with Amazon S3 Object Lambda Access Points.

To learn about complete end-to-end procedures for some S3 Object Lambda tasks, see the following:
+ [Tutorial: Transforming data for your application with S3 Object Lambda](tutorial-s3-object-lambda-uppercase.md)
+ [Tutorial: Detecting and redacting PII data with S3 Object Lambda and Amazon Comprehend](tutorial-s3-object-lambda-redact-pii.md)
+ [Tutorial: Using S3 Object Lambda to dynamically watermark images as they are retrieved](https://aws.amazon.com/getting-started/hands-on/amazon-s3-object-lambda-to-dynamically-watermark-images/?ref=docs_gateway/amazons3/olap-writing-lambda.html)

**Topics**
+ [Working with `GetObject` requests in Lambda](#olap-getobject-response)
+ [Working with `HeadObject` requests in Lambda](#olap-headobject)
+ [Working with `ListObjects` requests in Lambda](#olap-listobjects)
+ [Working with `ListObjectsV2` requests in Lambda](#olap-listobjectsv2)
+ [Event context format and usage](olap-event-context.md)
+ [Working with Range and partNumber headers](range-get-olap.md)

## Working with `GetObject` requests in Lambda
<a name="olap-getobject-response"></a>

This section assumes that your Object Lambda Access Point is configured to call the Lambda function for `GetObject`. S3 Object Lambda includes the Amazon S3 API operation `WriteGetObjectResponse`, which enables the Lambda function to provide customized data and response headers to the `GetObject` caller. 

`WriteGetObjectResponse` gives you extensive control over the status code, response headers, and response body, based on your processing needs. You can use `WriteGetObjectResponse` to respond with the whole transformed object, portions of the transformed object, or other responses based on the context of your application. The following section shows unique examples of using the `WriteGetObjectResponse` API operation.
+ **Example 1:** Respond with HTTP status code 403 (Forbidden) 
+ **Example 2:** Respond with a transformed image
+ **Example 3:** Stream compressed content

**Example 1: Respond with HTTP status code 403 (Forbidden)**

You can use `WriteGetObjectResponse` to respond with the HTTP status code 403 (Forbidden) based on the content of the object.

------
#### [ Java ]



```
package com.amazon.s3.objectlambda;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.events.S3ObjectLambdaEvent;
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.WriteGetObjectResponseRequest;

import java.io.ByteArrayInputStream;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class Example1 {

    public void handleRequest(S3ObjectLambdaEvent event, Context context) throws Exception {
        S3Client s3Client = S3Client.builder().build();

        // Check to see if the request contains all of the necessary information.
        // If it does not, send a 4XX response and a custom error code and message.
        // Otherwise, retrieve the object from S3 and stream it
        // to the client unchanged.
        var tokenIsNotPresent = !event.getUserRequest().getHeaders().containsKey("requiredToken");
        if (tokenIsNotPresent) {
            s3Client.writeGetObjectResponse(WriteGetObjectResponseRequest.builder()
                    .requestRoute(event.outputRoute())
                    .requestToken(event.outputToken())
                    .statusCode(403)
                    .contentLength(0L)
                    .errorCode("MissingRequiredToken")
                    .errorMessage("The required token was not present in the request.")
                    .build(),
                    RequestBody.fromInputStream(new ByteArrayInputStream(new byte[0]), 0L));
            return;
        }

        // Prepare the presigned URL for use and make the request to S3.
        HttpClient httpClient = HttpClient.newBuilder().build();
        var presignedResponse = httpClient.send(
                HttpRequest.newBuilder(new URI(event.inputS3Url())).GET().build(),
                HttpResponse.BodyHandlers.ofInputStream());

        // Stream the original bytes back to the caller.
        s3Client.writeGetObjectResponse(WriteGetObjectResponseRequest.builder()
                .requestRoute(event.outputRoute())
                .requestToken(event.outputToken())
                .build(),
                RequestBody.fromInputStream(presignedResponse.body(),
                    presignedResponse.headers().firstValueAsLong("content-length").orElse(-1L)));
    }
}
```

------
#### [ Python ]



```
import boto3
import requests 

def handler(event, context):
    s3 = boto3.client('s3')

    """
    Retrieve the operation context object from the event. This object indicates where the WriteGetObjectResponse request
    should be delivered and contains a presigned URL in 'inputS3Url' where we can download the requested object from.
    The 'userRequest' object has information related to the user who made this 'GetObject' request to 
    S3 Object Lambda.
    """
    get_context = event["getObjectContext"]
    user_request_headers = event["userRequest"]["headers"]

    route = get_context["outputRoute"]
    token = get_context["outputToken"]
    s3_url = get_context["inputS3Url"]

    # Check for the presence of the 'SuperSecretToken' header and allow or deny the request based on it.
    is_token_present = "SuperSecretToken" in user_request_headers

    if is_token_present:
        # If the user presented our custom 'SuperSecretToken' header, we send the requested object back to the user.
        response = requests.get(s3_url)
        s3.write_get_object_response(RequestRoute=route, RequestToken=token, Body=response.content)
    else:
        # If the token is not present, we send an error back to the user. 
        s3.write_get_object_response(RequestRoute=route, RequestToken=token, StatusCode=403,
        ErrorCode="NoSuperSecretTokenFound", ErrorMessage="The request was not secret enough.")

    # Gracefully exit the Lambda function.
    return { 'status_code': 200 }
```

------
#### [ Node.js ]



```
const { S3 } = require('aws-sdk');
const axios = require('axios').default;

exports.handler = async (event) => {
    const s3 = new S3();

    // Retrieve the operation context object from the event. This object indicates where the WriteGetObjectResponse request
    // should be delivered and contains a presigned URL in 'inputS3Url' where we can download the requested object from.
    // The 'userRequest' object has information related to the user who made this 'GetObject' request to S3 Object Lambda.
    const { userRequest, getObjectContext } = event;
    const { outputRoute, outputToken, inputS3Url } = getObjectContext;

    // Check for the presence of the custom 'SuperSecretToken' header and allow or deny the request based on it.
    const isTokenPresent = Object
        .keys(userRequest.headers)
        .includes("SuperSecretToken");

    if (!isTokenPresent) {
        // If the token is not present, we send an error back to the user. The 'await' in front of the request
        // indicates that we want to wait for this request to finish sending before moving on. 
        await s3.writeGetObjectResponse({
            RequestRoute: outputRoute,
            RequestToken: outputToken,
            StatusCode: 403,
            ErrorCode: "NoSuperSecretTokenFound",
            ErrorMessage: "The request was not secret enough.",
        }).promise();
    } else {
        // If the user presented our custom 'SuperSecretToken' header, we send the requested object back to the user.
        // Again, note the presence of 'await'.
        const presignedResponse = await axios.get(inputS3Url);
        await s3.writeGetObjectResponse({
            RequestRoute: outputRoute,
            RequestToken: outputToken,
            Body: presignedResponse.data,
        }).promise();
    }

    // Gracefully exit the Lambda function.
    return { statusCode: 200 };
}
```

------

**Example 2: Respond with a transformed image**

When performing an image transformation, you might find that you need all the bytes of the source object before you can start processing them. In this case, your `WriteGetObjectResponse` request returns the whole object to the requesting application in one call.

------
#### [ Java ]



```
package com.amazon.s3.objectlambda;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.events.S3ObjectLambdaEvent;
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.WriteGetObjectResponseRequest;

import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.awt.Image;
import java.io.ByteArrayOutputStream;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class Example2V2 {

    private static final int HEIGHT = 250;
    private static final int WIDTH = 250;

    public void handleRequest(S3ObjectLambdaEvent event, Context context) throws Exception {
        S3Client s3Client = S3Client.builder().build();
        HttpClient httpClient = HttpClient.newBuilder().build();

        // Prepare the presigned URL for use and make the request to S3.
        var presignedResponse = httpClient.send(
                HttpRequest.newBuilder(new URI(event.inputS3Url())).GET().build(),
                HttpResponse.BodyHandlers.ofInputStream());

        // The entire image is loaded into memory here so that we can resize it.
        // Once the resizing is completed, we write the bytes into the body
        // of the WriteGetObjectResponse request.
        var originalImage = ImageIO.read(presignedResponse.body());
        var resizingImage = originalImage.getScaledInstance(WIDTH, HEIGHT, Image.SCALE_DEFAULT);
        var resizedImage = new BufferedImage(WIDTH, HEIGHT, BufferedImage.TYPE_INT_RGB);
        resizedImage.createGraphics().drawImage(resizingImage, 0, 0, WIDTH, HEIGHT, null);

        var baos = new ByteArrayOutputStream();
        ImageIO.write(resizedImage, "png", baos);

        // Stream the bytes back to the caller.
        s3Client.writeGetObjectResponse(WriteGetObjectResponseRequest.builder()
                .requestRoute(event.outputRoute())
                .requestToken(event.outputToken())
                .build(), RequestBody.fromBytes(baos.toByteArray()));
    }
}
```

------
#### [ Python ]



```
import boto3
import requests 
import io
from PIL import Image

def handler(event, context):
    """
    Retrieve the operation context object from the event. This object indicates where the WriteGetObjectResponse request
    should be delivered and has a presigned URL in 'inputS3Url' where we can download the requested object from.
    The 'userRequest' object has information related to the user who made this 'GetObject' request to 
    S3 Object Lambda.
    """
    get_context = event["getObjectContext"]
    route = get_context["outputRoute"]
    token = get_context["outputToken"]
    s3_url = get_context["inputS3Url"]

    """
    In this case, we're resizing .png images that are stored in S3 and are accessible through the presigned URL
    'inputS3Url'.
    """
    image_request = requests.get(s3_url)
    image = Image.open(io.BytesIO(image_request.content))
    image.thumbnail((256, 256), Image.LANCZOS)  # Image.LANCZOS replaces Image.ANTIALIAS, which was removed in Pillow 10.

    transformed = io.BytesIO()
    image.save(transformed, "png")

    # Send the resized image back to the client.
    s3 = boto3.client('s3')
    s3.write_get_object_response(Body=transformed.getvalue(), RequestRoute=route, RequestToken=token)

    # Gracefully exit the Lambda function.
    return { 'status_code': 200 }
```

------
#### [ Node.js ]



```
const { S3 } = require('aws-sdk');
const axios = require('axios').default;
const sharp = require('sharp');

exports.handler = async (event) => {
    const s3 = new S3();

    // Retrieve the operation context object from the event. This object indicates where the WriteGetObjectResponse request
    // should be delivered and has a presigned URL in 'inputS3Url' where we can download the requested object from.
    const { getObjectContext } = event;
    const { outputRoute, outputToken, inputS3Url } = getObjectContext;

    // In this case, we're resizing .png images that are stored in S3 and are accessible through the presigned URL
    // 'inputS3Url'.
    const { data } = await axios.get(inputS3Url, { responseType: 'arraybuffer' });

    // Resize the image.
    const resized = await sharp(data)
        .resize({ width: 256, height: 256 })
        .toBuffer();

    // Send the resized image back to the client.
    await s3.writeGetObjectResponse({
        RequestRoute: outputRoute,
        RequestToken: outputToken,
        Body: resized,
    }).promise();

    // Gracefully exit the Lambda function.
    return { statusCode: 200 };
}
```

------

**Example 3: Stream compressed content**

When you're compressing objects, compressed data is produced incrementally. Consequently, you can use your `WriteGetObjectResponse` request to return the compressed data as soon as it's ready. As shown in this example, you don't need to know the length of the completed transformation.

------
#### [ Java ]



```
package com.amazon.s3.objectlambda;

import com.amazonaws.services.lambda.runtime.events.S3ObjectLambdaEvent;
import com.amazonaws.services.lambda.runtime.Context;
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.WriteGetObjectResponseRequest;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class Example3 {

    public void handleRequest(S3ObjectLambdaEvent event, Context context) throws Exception {
        S3Client s3Client = S3Client.builder().build();
        HttpClient httpClient = HttpClient.newBuilder().build();

        // Request the original object from S3.
        var presignedResponse = httpClient.send(
                HttpRequest.newBuilder(new URI(event.inputS3Url())).GET().build(),
                HttpResponse.BodyHandlers.ofInputStream());

        // Consume the incoming response body from the presigned request,
        // apply our transformation on that data, and emit the transformed bytes
        // into the body of the WriteGetObjectResponse request as soon as they're ready.
        // This example compresses the data from S3, but any processing pertinent
        // to your application can be performed here.
        var bodyStream = new GZIPCompressingInputStream(presignedResponse.body());

        // Stream the bytes back to the caller.
        s3Client.writeGetObjectResponse(WriteGetObjectResponseRequest.builder()
                .requestRoute(event.outputRoute())
                .requestToken(event.outputToken())
                .build(),
                RequestBody.fromInputStream(bodyStream,
                    presignedResponse.headers().firstValueAsLong("content-length").orElse(-1L)));
    }
}
```

------
#### [ Python ]



```
import boto3
import requests
import zlib
from botocore.config import Config


"""
A helper class to work with content iterators. It takes an iterator and compresses the bytes that come from it.
It implements 'read' and '__iter__' so that the SDK can stream the response.
"""
class Compress:
    def __init__(self, content_iter):
        self.content = content_iter
        self.finished = False
        # Use a gzip container (wbits=MAX_WBITS | 16) to match the 'gzip' Content-Encoding set below.
        self.compressed_obj = zlib.compressobj(wbits=zlib.MAX_WBITS | 16)

    def read(self, _size):
        for data in self.__iter__():
            return data
        return b""

    def __iter__(self):
        while not self.finished:
            try:
                data = next(self.content)
            except StopIteration:
                # The source is exhausted; emit whatever the compressor has buffered.
                self.finished = True
                yield self.compressed_obj.flush()
                return
            chunk = self.compressed_obj.compress(data)
            if chunk:
                yield chunk


def handler(event, context):
    """
    Setting the 'payload_signing_enabled' property to False allows us to send a streamed response back to the client.
    In this scenario, a streamed response means that the bytes are not buffered into memory as we're compressing them,
    but instead are sent straight to the user.
    """
    my_config = Config(
        region_name='eu-west-1',
        signature_version='s3v4',
        s3={
            "payload_signing_enabled": False
        }
    )
    s3 = boto3.client('s3', config=my_config)

    """
    Retrieve the operation context object from the event. This object indicates where the WriteGetObjectResponse request
    should be delivered and has a presigned URL in 'inputS3Url' where we can download the requested object from.
    The 'userRequest' object has information related to the user who made this 'GetObject' request to S3 Object Lambda.
    """
    get_context = event["getObjectContext"]
    route = get_context["outputRoute"]
    token = get_context["outputToken"]
    s3_url = get_context["inputS3Url"]

    # Compress the 'get' request stream.
    with requests.get(s3_url, stream=True) as r:
        compressed = Compress(r.iter_content())

        # Send the stream back to the client.
        s3.write_get_object_response(Body=compressed, RequestRoute=route, RequestToken=token, ContentType="text/plain",
                                     ContentEncoding="gzip")

    # Gracefully exit the Lambda function.
    return {'status_code': 200}
```

------
#### [ Node.js ]



```
const { S3 } = require('aws-sdk');
const axios = require('axios').default;
const zlib = require('zlib');

exports.handler = async (event) => {
    const s3 = new S3();

    // Retrieve the operation context object from the event. This object indicates where the WriteGetObjectResponse request
    // should be delivered and has a presigned URL in 'inputS3Url' where we can download the requested object from.
    const { getObjectContext } = event;
    const { outputRoute, outputToken, inputS3Url } = getObjectContext;

    // Download the object from S3 and process it as a stream, because it might be a huge object and we don't want to
    // buffer it in memory. Note the use of 'await' because we want to wait for 'writeGetObjectResponse' to finish 
    // before we can exit the Lambda function. 
    await axios({
        method: 'GET',
        url: inputS3Url,
        responseType: 'stream',
    }).then(
        // Gzip the stream.
        response => response.data.pipe(zlib.createGzip())
    ).then(
        // Finally send the gzip-ed stream back to the client.
        stream => s3.writeGetObjectResponse({
            RequestRoute: outputRoute,
            RequestToken: outputToken,
            Body: stream,
            ContentType: "text/plain",
            ContentEncoding: "gzip",
        }).promise()
    );

    // Gracefully exit the Lambda function.
    return { statusCode: 200 };
}
```

------

**Note**  
Although S3 Object Lambda allows up to 60 seconds to send a complete response to the caller through the `WriteGetObjectResponse` request, the actual amount of time available might be less. For example, your Lambda function timeout might be less than 60 seconds. In other cases, the caller might have more stringent timeouts. 

For the original caller to receive a response other than HTTP status code 500 (Internal Server Error), the `WriteGetObjectResponse` call must be completed. If the Lambda function returns, with an exception or otherwise, before the `WriteGetObjectResponse` API operation is called, the original caller receives a 500 (Internal Server Error) response. Exceptions thrown while the response is being completed result in truncated responses to the caller. If the Lambda function receives an HTTP status code 200 (OK) response from the `WriteGetObjectResponse` API call, then the complete response has been sent to the original caller. The Lambda function's own return value, whether an exception is thrown or not, is ignored by S3 Object Lambda.

When calling the `WriteGetObjectResponse` API operation, Amazon S3 requires the route and request token from the event context. For more information, see [Event context format and usage](olap-event-context.md).

The route and request token parameters are required to connect the `WriteGetObjectResponse` response with the original caller. Although it is always appropriate to retry 500 (Internal Server Error) responses, the request token is a single-use token, so subsequent attempts to use it might result in HTTP status code 400 (Bad Request) responses. Although the call to `WriteGetObjectResponse` with the route and request token doesn't need to be made from the invoked Lambda function, it must be made by an identity in the same account. The call also must be completed before the Lambda function finishes execution.
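These requirements suggest a defensive pattern: make exactly one `WriteGetObjectResponse` call on every code path before the function returns. The following minimal Python sketch illustrates the pattern; the `transform` step is a hypothetical placeholder for your own processing, not part of any AWS API:

```python
def build_error_kwargs(route, token, message):
    # Arguments for an error response; the single-use token is consumed either way.
    return {
        "RequestRoute": route,
        "RequestToken": token,
        "StatusCode": 500,
        "ErrorCode": "TransformError",
        "ErrorMessage": message,
    }

def handler(event, context):
    import boto3  # imported here so the helper above stays importable without the SDK
    s3 = boto3.client("s3")

    ctx = event["getObjectContext"]
    route, token = ctx["outputRoute"], ctx["outputToken"]
    try:
        body = transform(ctx["inputS3Url"])  # hypothetical transformation step
        # Success path: exactly one WriteGetObjectResponse call.
        s3.write_get_object_response(RequestRoute=route, RequestToken=token, Body=body)
    except Exception as exc:
        # Failure path: still complete the response so the caller gets a meaningful error
        # instead of a bare 500 (Internal Server Error).
        s3.write_get_object_response(**build_error_kwargs(route, token, str(exc)))

    # Whatever we return here is ignored by S3 Object Lambda.
    return {"status_code": 200}
```

Because the request token can be used only once, the error path deliberately reuses the same route and token rather than retrying the success path.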

## Working with `HeadObject` requests in Lambda
<a name="olap-headobject"></a>

This section assumes that your Object Lambda Access Point is configured to call the Lambda function for `HeadObject`. Lambda will receive a JSON payload that contains a key called `headObjectContext`. Inside the context, there is a single property called `inputS3Url`, which is a presigned URL for the supporting access point for `HeadObject`.

The presigned URL will include the following properties if they're specified: 
+ `versionId` (in the query parameters)
+ `requestPayer` (in the `x-amz-request-payer` header)
+ `expectedBucketOwner` (in the `x-amz-expected-bucket-owner` header)

Other properties won't be presigned, and thus won't be included in the URL. Options that weren't signed but were sent as headers can be found in the `userRequest` headers, and you can add them manually to the request when you call the presigned URL. Server-side encryption options are not supported for `HeadObject`.
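For example, conditional headers such as `If-None-Match` are not part of the presigned URL, but you can copy them from the `userRequest` headers onto your own request. The following sketch is one way to do this; the set of forwarded header names is an illustrative choice for this example, not an exhaustive or authoritative list:

```python
# Request headers we choose to forward from the original caller; adjust for your use case.
FORWARDABLE = {"if-match", "if-none-match", "if-modified-since", "if-unmodified-since"}

def pick_forwardable(user_headers):
    # Keep only the allowed headers, preserving their original names and values.
    return {k: v for k, v in user_headers.items() if k.lower() in FORWARDABLE}

def handler(event, context):
    import requests  # imported here so the helper above stays dependency-free

    s3_url = event["headObjectContext"]["inputS3Url"]
    extra = pick_forwardable(event["userRequest"]["headers"])

    # Add the non-signed options manually when calling the presigned URL.
    response = requests.head(s3_url, headers=extra)
    return {"statusCode": response.status_code, "headers": dict(response.headers)}
```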

For the request syntax URI parameters, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadObject.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadObject.html) in the *Amazon Simple Storage Service API Reference*.

The following example shows a Lambda JSON input payload for `HeadObject`.

```
{
  "xAmzRequestId": "requestId",
  "headObjectContext": {
    "inputS3Url": "https://my-s3-ap-111122223333.s3-accesspoint.us-east-1.amazonaws.com/example?X-Amz-Security-Token=<snip>"
  },
  "configuration": {
       "accessPointArn": "arn:aws:s3-object-lambda:us-east-1:111122223333:accesspoint/example-object-lambda-ap",
       "supportingAccessPointArn": "arn:aws:s3:us-east-1:111122223333:accesspoint/example-ap",
       "payload": "{}"
  },
  "userRequest": {
       "url": "https://object-lambda-111122223333.s3-object-lambda.us-east-1.amazonaws.com/example",
       "headers": {
           "Host": "object-lambda-111122223333.s3-object-lambda.us-east-1.amazonaws.com",
           "Accept-Encoding": "identity",
           "X-Amz-Content-SHA256": "e3b0c44298fc1example"
       }
   },
   "userIdentity": {
       "type": "AssumedRole",
       "principalId": "principalId",
       "arn": "arn:aws:sts::111122223333:assumed-role/Admin/example",       
       "accountId": "111122223333",
       "accessKeyId": "accessKeyId",
       "sessionContext": {
            "attributes": {
                "mfaAuthenticated": "false",
                "creationDate": "Wed Mar 10 23:41:52 UTC 2021"
            },
            "sessionIssuer": {
                "type": "Role",
                "principalId": "principalId",
                "arn": "arn:aws:iam::111122223333:role/Admin",
                "accountId": "111122223333",
                "userName": "Admin"
            }
       }
    },
  "protocolVersion": "1.00"
}
```

Your Lambda function should return a JSON object that contains the headers and values that will be returned for the `HeadObject` call.

The following example shows the structure of the Lambda response JSON for `HeadObject`.

```
{
    "statusCode": <number>; // Required
    "errorCode": <string>;
    "errorMessage": <string>;
    "headers": {
        "Accept-Ranges": <string>,
        "x-amz-archive-status": <string>,
        "x-amz-server-side-encryption-bucket-key-enabled": <boolean>,
        "Cache-Control": <string>,
        "Content-Disposition": <string>,
        "Content-Encoding": <string>,
        "Content-Language": <string>,
        "Content-Length": <number>, // Required
        "Content-Type": <string>,
        "x-amz-delete-marker": <boolean>,
        "ETag": <string>,
        "Expires": <string>,
        "x-amz-expiration": <string>,
        "Last-Modified": <string>,
        "x-amz-missing-meta": <number>,
        "x-amz-object-lock-mode": <string>,
        "x-amz-object-lock-legal-hold": <string>,
        "x-amz-object-lock-retain-until-date": <string>,
        "x-amz-mp-parts-count": <number>,
        "x-amz-replication-status": <string>,
        "x-amz-request-charged": <string>,
        "x-amz-restore": <string>,
        "x-amz-server-side-encryption": <string>,
        "x-amz-server-side-encryption-customer-algorithm": <string>,
        "x-amz-server-side-encryption-aws-kms-key-id": <string>,
        "x-amz-server-side-encryption-customer-key-MD5": <string>,
        "x-amz-storage-class": <string>,
        "x-amz-tagging-count": <number>,
        "x-amz-version-id": <string>,
        <x-amz-meta-headers>: <string>, // user-defined metadata
        "x-amz-meta-meta1": <string>, // example user-defined metadata header; it requires the "x-amz-meta-" prefix
        "x-amz-meta-meta2": <string>
        ...
    };
}
```

The following example shows how to use the presigned URL to populate your response by modifying the header values as needed before returning the JSON.

------
#### [ Python ]



```
import requests

def lambda_handler(event, context):
    print(event)
    
    # Extract the presigned URL from the input.
    s3_url = event["headObjectContext"]["inputS3Url"]

    # Get the head of the object from S3.     
    response = requests.head(s3_url)
    
    # Return the error to S3 Object Lambda (if applicable).           
    if response.status_code >= 400:
        return {
            "statusCode": response.status_code,
            "errorCode": "RequestFailure",
            "errorMessage": "Request to S3 failed"
        }
    
    # Store the headers in a dictionary.
    response_headers = dict(response.headers)

    # Hide the Content-Type as part of the transformation. This step is optional.
    response_headers["Content-Type"] = ""

    # Return the headers to S3 Object Lambda.
    return {
        "statusCode": response.status_code,
        "headers": response_headers
    }
```

------

## Working with `ListObjects` requests in Lambda
<a name="olap-listobjects"></a>

This section assumes that your Object Lambda Access Point is configured to call the Lambda function for `ListObjects`. Lambda will receive the JSON payload with a new object named `listObjectsContext`. `listObjectsContext` contains a single property, `inputS3Url`, which is a presigned URL for the supporting access point for `ListObjects`.

Unlike `GetObject` and `HeadObject`, the presigned URL will include the following properties if they're specified:
+ All the query parameters
+ `requestPayer` (in the `x-amz-request-payer` header) 
+ `expectedBucketOwner` (in the `x-amz-expected-bucket-owner` header)

For the request syntax URI parameters, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjects.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjects.html) in the *Amazon Simple Storage Service API Reference*.

**Important**  
We recommend that you use the newer version, [ListObjectsV2](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html), when developing applications. For backward compatibility, Amazon S3 continues to support `ListObjects`.

The following example shows the Lambda JSON input payload for `ListObjects`.

```
{
    "xAmzRequestId": "requestId",
    "listObjectsContext": {
        "inputS3Url": "https://my-s3-ap-111122223333.s3-accesspoint.us-east-1.amazonaws.com/?X-Amz-Security-Token=<snip>"
    },
    "configuration": {
        "accessPointArn": "arn:aws:s3-object-lambda:us-east-1:111122223333:accesspoint/example-object-lambda-ap",
        "supportingAccessPointArn": "arn:aws:s3:us-east-1:111122223333:accesspoint/example-ap",
        "payload": "{}"
    },
    "userRequest": {
        "url": "https://object-lambda-111122223333.s3-object-lambda.us-east-1.amazonaws.com/example",
        "headers": {
            "Host": "object-lambda-111122223333.s3-object-lambda.us-east-1.amazonaws.com",
            "Accept-Encoding": "identity",
            "X-Amz-Content-SHA256": "e3b0c44298fc1example"
        }
    },
    "userIdentity": {
        "type": "AssumedRole",
        "principalId": "principalId",
        "arn": "arn:aws:sts::111122223333:assumed-role/Admin/example",
        "accountId": "111122223333",
        "accessKeyId": "accessKeyId",
        "sessionContext": {
            "attributes": {
                "mfaAuthenticated": "false",
                "creationDate": "Wed Mar 10 23:41:52 UTC 2021"
            },
            "sessionIssuer": {
                "type": "Role",
                "principalId": "principalId",
                "arn": "arn:aws:iam::111122223333:role/Admin",
                "accountId": "111122223333",
                "userName": "Admin"
            }
        }
    },
  "protocolVersion": "1.00"
}
```

Your Lambda function should return a JSON object that contains the status code, list XML result, or error information that will be returned from S3 Object Lambda.

S3 Object Lambda does not process or validate `listResultXml`, but instead forwards it to the `ListObjects` caller. For `listBucketResult`, S3 Object Lambda expects certain properties to be of a specific type and will throw exceptions if it can't parse them. `listResultXml` and `listBucketResult` cannot both be provided at the same time.

The following example demonstrates how to use the presigned URL to call Amazon S3 and use the result to populate a response, including error checking.

------
#### [ Python ]

```
import requests 
import xmltodict

def lambda_handler(event, context):
    # Extract the presigned URL from the input.
    s3_url = event["listObjectsContext"]["inputS3Url"]


    # Get the object listing from Amazon S3.
    response = requests.get(s3_url)

    # Return the error to S3 Object Lambda (if applicable).
    if (response.status_code >= 400):
        error = xmltodict.parse(response.content)
        return {
            "statusCode": response.status_code,
            "errorCode": error["Error"]["Code"],
            "errorMessage": error["Error"]["Message"]
        }

    # Store the XML result in a dict.
    response_dict = xmltodict.parse(response.content)

    # Hide the StorageClass as part of the transformation. This step is optional.
    for item in response_dict['ListBucketResult']['Contents']:
        item['StorageClass'] = ""

    # Convert back to XML.
    listResultXml = xmltodict.unparse(response_dict)
    
    # Create response with listResultXml.
    response_with_list_result_xml = {
        'statusCode': 200,
        'listResultXml': listResultXml
    }

    # Create response with listBucketResult.
    response_dict['ListBucketResult'] = sanitize_response_dict(response_dict['ListBucketResult'])
    response_with_list_bucket_result = {
        'statusCode': 200,
        'listBucketResult': response_dict['ListBucketResult']
    }

    # Return the list to S3 Object Lambda.
    # Can return response_with_list_result_xml or response_with_list_bucket_result
    return response_with_list_result_xml

# Convert the response_dict keys to the casing that 'listBucketResult' expects.
def sanitize_response_dict(response_dict: dict):
    new_response_dict = dict()
    for key, value in response_dict.items():
        new_key = key[0].lower() + key[1:] if key != "ID" else 'id'
        if type(value) == list:
            newlist = []
            for element in value:
                if type(element) == type(dict()):
                    element = sanitize_response_dict(element)
                newlist.append(element)
            value = newlist
        elif type(value) == dict:
            value = sanitize_response_dict(value)
        new_response_dict[new_key] = value
    return new_response_dict
```

------

The following example shows the structure of the Lambda response JSON for `ListObjects`.

```
{ 
  "statusCode": <number>; // Required
  "errorCode": <string>;
  "errorMessage": <string>;
  "listResultXml": <string>; // This can also be an Error XML string if S3 returned an error response when calling the presigned URL

  "listBucketResult": {  // listBucketResult can be provided instead of listResultXml, but they cannot both be provided in the JSON response
        "name": <string>,  // Required for 'listBucketResult'
        "prefix": <string>,  
        "marker": <string>, 
        "nextMarker": <string>, 
        "maxKeys": <int>,   // Required for 'listBucketResult'
        "delimiter": <string>, 
        "encodingType": <string>,
        "isTruncated": <boolean>,  // Required for 'listBucketResult'
        "contents": [  { 
            "key": <string>,  // Required for 'content'
            "lastModified": <string>,  
            "eTag": <string>,  
            "checksumAlgorithm": <string>,   // CRC32,  CRC32C,  SHA1,  SHA256
            "size": <int>,   // Required for 'content'
            "owner": {  
                "displayName": <string>,  // Required for 'owner'
                "id": <string>,  // Required for 'owner'
            },  
            "storageClass": <string>  
            },  
        ...  
        ],  
        "commonPrefixes": [  {  
            "prefix": <string>   // Required for 'commonPrefix'
        },  
        ...  
        ],  
    }
}
```

## Working with `ListObjectsV2` requests in Lambda
<a name="olap-listobjectsv2"></a>

This section assumes that your Object Lambda Access Point is configured to call the Lambda function for `ListObjectsV2`. Lambda will receive the JSON payload with a new object named `listObjectsV2Context`. `listObjectsV2Context` contains a single property, `inputS3Url`, which is a presigned URL for the supporting access point for `ListObjectsV2`.

Unlike `GetObject` and `HeadObject`, the presigned URL will include the following properties, if they're specified: 
+ All the query parameters
+ `requestPayer` (in the `x-amz-request-payer` header) 
+ `expectedBucketOwner` (in the `x-amz-expected-bucket-owner` header)

For the request syntax URI parameters, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html) in the *Amazon Simple Storage Service API Reference*.

The following example shows the Lambda JSON input payload for `ListObjectsV2`.

```
{
    "xAmzRequestId": "requestId",
    "listObjectsV2Context": {
        "inputS3Url": "https://my-s3-ap-111122223333.s3-accesspoint.us-east-1.amazonaws.com/?list-type=2&X-Amz-Security-Token=<snip>"
    },
    "configuration": {
        "accessPointArn": "arn:aws:s3-object-lambda:us-east-1:111122223333:accesspoint/example-object-lambda-ap",
        "supportingAccessPointArn": "arn:aws:s3:us-east-1:111122223333:accesspoint/example-ap",
        "payload": "{}"
    },
    "userRequest": {
        "url": "https://object-lambda-111122223333.s3-object-lambda.us-east-1.amazonaws.com/example",
        "headers": {
            "Host": "object-lambda-111122223333.s3-object-lambda.us-east-1.amazonaws.com",
            "Accept-Encoding": "identity",
            "X-Amz-Content-SHA256": "e3b0c44298fc1example"
        }
    },
    "userIdentity": {
        "type": "AssumedRole",
        "principalId": "principalId",
        "arn": "arn:aws:sts::111122223333:assumed-role/Admin/example",
        "accountId": "111122223333",
        "accessKeyId": "accessKeyId",
        "sessionContext": {
            "attributes": {
                "mfaAuthenticated": "false",
                "creationDate": "Wed Mar 10 23:41:52 UTC 2021"
            },
            "sessionIssuer": {
                "type": "Role",
                "principalId": "principalId",
                "arn": "arn:aws:iam::111122223333:role/Admin",
                "accountId": "111122223333",
                "userName": "Admin"
            }
        }
    },
  "protocolVersion": "1.00" 
}
```

Your Lambda function should return a JSON object that contains the status code, list XML result, or error information that will be returned from S3 Object Lambda.

S3 Object Lambda does not process or validate `listResultXml`, but instead forwards it to the `ListObjectsV2` caller. For `listBucketResult`, S3 Object Lambda expects certain properties to be of a specific type and will throw exceptions if it can't parse them. `listResultXml` and `listBucketResult` cannot both be provided at the same time.

The following example demonstrates how to use the presigned URL to call Amazon S3 and use the result to populate a response, including error checking.

------
#### [ Python ]

```
import requests 
import xmltodict

def lambda_handler(event, context):
    # Extract the presigned URL from the input.
    s3_url = event["listObjectsV2Context"]["inputS3Url"]


    # Get the object listing from Amazon S3.
    response = requests.get(s3_url)

    # Return the error to S3 Object Lambda (if applicable).
    if (response.status_code >= 400):
        error = xmltodict.parse(response.content)
        return {
            "statusCode": response.status_code,
            "errorCode": error["Error"]["Code"],
            "errorMessage": error["Error"]["Message"]
        }

    # Store the XML result in a dict.
    response_dict = xmltodict.parse(response.content)

    # Hide the StorageClass as part of the transformation. This step is optional.
    for item in response_dict['ListBucketResult']['Contents']:
        item['StorageClass'] = ""

    # Convert back to XML.
    listResultXml = xmltodict.unparse(response_dict)
    
    # Create response with listResultXml.
    response_with_list_result_xml = {
        'statusCode': 200,
        'listResultXml': listResultXml
    }

    # Create response with listBucketResult.
    response_dict['ListBucketResult'] = sanitize_response_dict(response_dict['ListBucketResult'])
    response_with_list_bucket_result = {
        'statusCode': 200,
        'listBucketResult': response_dict['ListBucketResult']
    }

    # Return the list to S3 Object Lambda.
    # Can return response_with_list_result_xml or response_with_list_bucket_result
    return response_with_list_result_xml

# Convert the response_dict's keys to the casing that S3 Object Lambda expects.
def sanitize_response_dict(response_dict: dict):
    new_response_dict = dict()
    for key, value in response_dict.items():
        new_key = key[0].lower() + key[1:] if key != "ID" else 'id'
        if isinstance(value, list):
            value = [sanitize_response_dict(element) if isinstance(element, dict) else element
                     for element in value]
        elif isinstance(value, dict):
            value = sanitize_response_dict(value)
        new_response_dict[new_key] = value
    return new_response_dict
```

------

The following example shows the structure of the Lambda response JSON for `ListObjectsV2`.

```
{  
    "statusCode": <number>, // Required  
    "errorCode": <string>,  
    "errorMessage": <string>,  
    "listResultXml": <string>, // This can also be an Error XML string if S3 returned an error response when the presigned URL was called  
  
    "listBucketResult": {  // listBucketResult can be provided instead of listResultXml, but they cannot both be provided in the JSON response 
        "name": <string>, // Required for 'listBucketResult'  
        "prefix": <string>,  
        "startAfter": <string>,  
        "continuationToken": <string>,  
        "nextContinuationToken": <string>,
        "keyCount": <int>, // Required for 'listBucketResult'  
        "maxKeys": <int>, // Required for 'listBucketResult'  
        "delimiter": <string>,  
        "encodingType": <string>,  
        "isTruncated": <boolean>, // Required for 'listBucketResult'  
        "contents": [ {  
            "key": <string>, // Required for 'content'  
            "lastModified": <string>,  
            "eTag": <string>,  
            "checksumAlgorithm": <string>, // CRC32, CRC32C, SHA1, SHA256  
            "size": <int>, // Required for 'content'  
            "owner": {  
                "displayName": <string>, // Required for 'owner'  
                "id": <string>, // Required for 'owner'  
            },  
            "storageClass": <string>  
            },  
            ...  
        ],  
        "commonPrefixes": [ {  
            "prefix": <string> // Required for 'commonPrefix'  
            },  
        ...  
        ],  
    }  
}
```
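
For instance, a minimal valid response that uses `listBucketResult` needs only the required fields called out above. The following sketch uses placeholder bucket and object names:

```python
# Minimal Lambda response using listBucketResult (placeholder values).
# Only one of listResultXml or listBucketResult may be returned, never both.
def build_minimal_list_response():
    return {
        "statusCode": 200,
        "listBucketResult": {
            "name": "amzn-s3-demo-bucket",       # Required
            "keyCount": 1,                       # Required
            "maxKeys": 1000,                     # Required
            "isTruncated": False,                # Required
            "contents": [
                {
                    "key": "example-object.txt", # Required for each entry
                    "size": 11,                  # Required for each entry
                }
            ],
        },
    }
```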

# Event context format and usage
<a name="olap-event-context"></a>

Amazon S3 Object Lambda provides context about the request that's being made in the event that's passed to your AWS Lambda function. The following shows an example request. Descriptions of the fields are included after the example.

```
{
    "xAmzRequestId": "requestId",
    "getObjectContext": {
        "inputS3Url": "https://my-s3-ap-111122223333.s3-accesspoint.us-east-1.amazonaws.com/example?X-Amz-Security-Token=<snip>",
        "outputRoute": "io-use1-001",
        "outputToken": "OutputToken"
    },
    "configuration": {
        "accessPointArn": "arn:aws:s3-object-lambda:us-east-1:111122223333:accesspoint/example-object-lambda-ap",
        "supportingAccessPointArn": "arn:aws:s3:us-east-1:111122223333:accesspoint/example-ap",
        "payload": "{}"
    },
    "userRequest": {
        "url": "https://object-lambda-111122223333.s3-object-lambda.us-east-1.amazonaws.com/example",
        "headers": {
            "Host": "object-lambda-111122223333.s3-object-lambda.us-east-1.amazonaws.com",
            "Accept-Encoding": "identity",
            "X-Amz-Content-SHA256": "e3b0c44298fc1example"
        }
    },
    "userIdentity": {
        "type": "AssumedRole",
        "principalId": "principalId",
        "arn": "arn:aws:sts::111122223333:assumed-role/Admin/example",
        "accountId": "111122223333",
        "accessKeyId": "accessKeyId",
        "sessionContext": {
            "attributes": {
                "mfaAuthenticated": "false",
                "creationDate": "Wed Mar 10 23:41:52 UTC 2021"
            },
            "sessionIssuer": {
                "type": "Role",
                "principalId": "principalId",
                "arn": "arn:aws:iam::111122223333:role/Admin",
                "accountId": "111122223333",
                "userName": "Admin"
            }
        }
    },
    "protocolVersion": "1.00"
}
```

The following fields are included in the request:
+ `xAmzRequestId` – The Amazon S3 request ID for this request. We recommend that you log this value to help with debugging.
+ `getObjectContext` – The input and output details for connections to Amazon S3 and S3 Object Lambda.
  + `inputS3Url` – A presigned URL that can be used to fetch the original object from Amazon S3. The URL is signed by using the original caller's identity, and that user's permissions will apply when the URL is used. If there are signed headers in the URL, the Lambda function must include these headers in the call to Amazon S3, except for the `Host` header.
  + `outputRoute` – A routing token that is added to the S3 Object Lambda URL when the Lambda function calls `WriteGetObjectResponse`.
  + `outputToken` – An opaque token that's used by S3 Object Lambda to match the `WriteGetObjectResponse` call with the original caller.
+ `configuration` – Configuration information about the Object Lambda Access Point.
  + `accessPointArn` – The Amazon Resource Name (ARN) of the Object Lambda Access Point that received this request.
  + `supportingAccessPointArn` – The ARN of the supporting access point that is specified in the Object Lambda Access Point configuration.
  + `payload` – Custom data that is applied to the Object Lambda Access Point configuration. S3 Object Lambda treats this data as an opaque string, so it might need to be decoded before use.
+ `userRequest` – Information about the original call to S3 Object Lambda.
  + `url` – The decoded URL of the request as received by S3 Object Lambda, excluding any authorization-related query parameters.
  + `headers` – A map of string to strings containing the HTTP headers and their values from the original call, excluding any authorization-related headers. If the same header appears multiple times, the values from each instance of the same header are combined into a comma-delimited list. The case of the original headers is retained in this map.
+ `userIdentity` – Details about the identity that made the call to S3 Object Lambda. For more information, see [Logging data events for trails](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html) in the *AWS CloudTrail User Guide*.
  + `type` – The type of identity.
  + `accountId` – The AWS account to which the identity belongs.
  + `userName` – The friendly name of the identity that made the call.
  + `principalId` – The unique identifier for the identity that made the call.
  + `arn` – The ARN of the principal who made the call. The last section of the ARN contains the user or role that made the call.
  + `sessionContext` – If the request was made with temporary security credentials, this element provides information about the session that was created for those credentials.
  + `invokedBy` – The name of the AWS service that made the request, such as Amazon EC2 Auto Scaling or AWS Elastic Beanstalk.
  + `sessionIssuer` – If the request was made with temporary security credentials, this element provides information about how the credentials were obtained.
+ `protocolVersion` – The version ID of the context provided. The format of this field is `{Major Version}.{Minor Version}`. The minor version numbers are always two-digit numbers. Any removal or change to the semantics of a field necessitates a major version bump and requires active opt-in. Amazon S3 can add new fields at any time, at which point you might experience a minor version bump. Because of the nature of software rollouts, you might see multiple minor versions in use at once.
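
Putting these fields together, a handler typically begins by extracting the request ID, the presigned URL, and any custom payload from the event. The sketch below uses the field names documented above; decoding the payload as JSON is an application-level assumption for illustration, since S3 Object Lambda treats the payload as an opaque string.

```python
import json

def extract_context(event):
    """Pull the commonly used fields out of an S3 Object Lambda event."""
    ctx = event["getObjectContext"]
    payload_str = event["configuration"]["payload"]
    # Decoding as JSON is this application's convention, not a service rule.
    payload = json.loads(payload_str) if payload_str else {}
    return {
        "request_id": event["xAmzRequestId"],  # log this to help with debugging
        "input_s3_url": ctx["inputS3Url"],     # presigned URL for the original object
        "output_route": ctx["outputRoute"],    # used when calling WriteGetObjectResponse
        "output_token": ctx["outputToken"],
        "payload": payload,
    }
```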

# Working with Range and partNumber headers
<a name="range-get-olap"></a>

When working with large objects in Amazon S3 Object Lambda, you can use the `Range` HTTP header to download a specified byte range from an object. To fetch different byte ranges from within the same object, you can use concurrent connections to Amazon S3. You can also specify the `partNumber` parameter (an integer between 1 and 10,000), which performs a ranged request for the specified part of the object.
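
The byte ranges for such concurrent fetches can be computed up front. A small helper for that might look like the following (the function name is illustrative):

```python
def byte_range_headers(object_size, part_size):
    """Split an object of object_size bytes into Range header values,
    one per part, e.g. 'bytes=0-1048575'. HTTP byte ranges are inclusive."""
    ranges = []
    start = 0
    while start < object_size:
        end = min(start + part_size, object_size) - 1
        ranges.append(f"bytes={start}-{end}")
        start = end + 1
    return ranges
```

Each returned value can then be sent as the `Range` header of a separate `GET` request on its own connection.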

Because there are multiple ways that you might want to handle a request that includes the `Range` or `partNumber` parameters, S3 Object Lambda doesn't apply these parameters to the transformed object. Instead, your AWS Lambda function must implement this functionality as needed for your application.

To use the `Range` and `partNumber` parameters with S3 Object Lambda, you do the following: 
+ Enable these parameters in your Object Lambda Access Point configuration.
+ Write a Lambda function that can handle requests that include these parameters.

The following steps describe how to accomplish this.

## Step 1: Configure your Object Lambda Access Point
<a name="range-get-olap-step-1"></a>

By default, Object Lambda Access Points respond with an HTTP status code 501 (Not Implemented) error to any `GetObject` or `HeadObject` request that contains a `Range` or `partNumber` parameter, either in the headers or query parameters. 

To enable an Object Lambda Access Point to accept such requests, you must include `GetObject-Range`, `GetObject-PartNumber`, `HeadObject-Range`, or `HeadObject-PartNumber` in the `AllowedFeatures` section of your Object Lambda Access Point configuration. For more information about updating your Object Lambda Access Point configuration, see [Creating Object Lambda Access Points](olap-create.md). 
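
As a sketch of what that configuration might look like when applied with the AWS SDK for Python (Boto3), the following builds a configuration that allows ranged `GetObject` requests and applies it with the S3 Control `PutAccessPointConfigurationForObjectLambda` API. All ARNs and names here are placeholders.

```python
def build_olap_configuration(supporting_ap_arn, function_arn):
    """Build an Object Lambda Access Point configuration that allows
    Range and partNumber on GetObject requests."""
    return {
        "SupportingAccessPoint": supporting_ap_arn,
        "AllowedFeatures": ["GetObject-Range", "GetObject-PartNumber"],
        "TransformationConfigurations": [
            {
                "Actions": ["GetObject"],
                "ContentTransformation": {
                    "AwsLambda": {"FunctionArn": function_arn}
                },
            }
        ],
    }

def put_olap_configuration(account_id, olap_name, configuration):
    """Apply the configuration to an existing Object Lambda Access Point."""
    import boto3  # imported here so the helper above can be used without the SDK
    s3control = boto3.client("s3control")
    s3control.put_access_point_configuration_for_object_lambda(
        AccountId=account_id, Name=olap_name, Configuration=configuration
    )
```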

## Step 2: Implement `Range` or `partNumber` handling in your Lambda function
<a name="range-get-olap-step-2"></a>

When your Object Lambda Access Point invokes your Lambda function with a ranged `GetObject` or `HeadObject` request, the `Range` or `partNumber` parameter is included in the event context. The location of the parameter in the event context depends on which parameter was used and how it was included in the original request to the Object Lambda Access Point, as explained in the following table. 


| Parameter | Event context location | 
| --- | --- | 
|  `Range` (header)  |  `userRequest.headers.Range`  | 
|  `Range` (query parameter)  |  `userRequest.url` (query parameter `Range`)  | 
|  `partNumber`  |  `userRequest.url` (query parameter `partNumber`)  | 
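
Based on the table above, a Lambda function can check each location in turn. The following sketch (the helper name is illustrative) returns whichever parameter is present:

```python
from urllib.parse import parse_qs, urlparse

def get_range_or_part_number(user_request):
    """Return ('Range', value), ('partNumber', value), or None,
    checking the HTTP header first and then the URL query string."""
    # Range may arrive as an HTTP header (header casing is preserved,
    # so compare case-insensitively)...
    for name, value in user_request.get("headers", {}).items():
        if name.lower() == "range":
            return ("Range", value)
    # ...or Range/partNumber may arrive as query parameters on the URL.
    query = parse_qs(urlparse(user_request.get("url", "")).query)
    if "Range" in query:
        return ("Range", query["Range"][0])
    if "partNumber" in query:
        return ("partNumber", query["partNumber"][0])
    return None
```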

**Important**  
The provided presigned URL for your Object Lambda Access Point doesn't contain the `Range` or `partNumber` parameter from the original request. See the following options on how to handle these parameters in your AWS Lambda function.

After you extract the `Range` or `partNumber` value, you can take one of the following approaches, based on your application's needs:

1. **Map the requested `Range` or `partNumber` to the transformed object (recommended).** 

   The most reliable way to handle `Range` or `partNumber` requests is to do the following: 
   + Retrieve the full object from Amazon S3.
   + Transform the object.
   + Apply the requested `Range` or `partNumber` parameters to the transformed object.

   To do this, use the provided presigned URL to fetch the entire object from Amazon S3 and then process the object as needed. For an example Lambda function that processes a `Range` parameter in this way, see [this sample](https://github.com/aws-samples/amazon-s3-object-lambda-default-configuration/blob/main/function/nodejs_20_x/src/response/range_mapper.ts) in the AWS Samples GitHub repository.

1. **Map the requested `Range` to the presigned URL.**

   In some cases, your Lambda function can map the requested `Range` directly to the presigned URL to retrieve only part of the object from Amazon S3. This approach is appropriate only if your transformation meets both of the following criteria:

   1. Your transformation function can be applied to partial object ranges.

   1. Applying the `Range` parameter before or after the transformation function results in the same transformed object.

   For example, a transformation function that converts all characters in an ASCII-encoded object to uppercase meets both of the preceding criteria. The transformation can be applied to part of an object, and applying the `Range` parameter before the transformation achieves the same result as applying it after the transformation.

   By contrast, a function that reverses the characters in an ASCII-encoded object doesn't meet these criteria. Such a function meets criterion 1, because it can be applied to partial object ranges. However, it doesn't meet criterion 2, because applying the `Range` parameter before the transformation achieves different results than applying the parameter after the transformation. 

   Consider a request to apply the function to the first three characters of an object with the contents `abcdefg`. Applying the `Range` parameter before the transformation retrieves only `abc` and then reverses the data, returning `cba`. But if the parameter is applied after the transformation, the function retrieves the entire object, reverses it, and then applies the `Range` parameter, returning `gfe`. Because these results are different, this function should not apply the `Range` parameter when retrieving the object from Amazon S3. Instead, it should retrieve the entire object, perform the transformation, and only then apply the `Range` parameter. 
**Warning**  
In many cases, applying the `Range` parameter to the presigned URL will result in unexpected behavior by the Lambda function or the requesting client. Unless you are sure that your application will work properly when retrieving only a partial object from Amazon S3, we recommend that you retrieve and transform full objects as described earlier in approach 1. 

   If your application meets the criteria described earlier in approach 2, you can simplify your AWS Lambda function by fetching only the requested object range and then running your transformation on that range. 

   The following Java code example demonstrates how to do the following: 
   + Retrieve the `Range` header from the `GetObject` request.
   + Add the `Range` header to the presigned URL that Lambda can use to retrieve the requested range from Amazon S3.

   ```
   private HttpRequest.Builder applyRangeHeader(ObjectLambdaEvent event, HttpRequest.Builder presignedRequest) {
       var header = event.getUserRequest().getHeaders().entrySet().stream()
               .filter(e -> e.getKey().toLowerCase(Locale.ROOT).equals("range"))
               .findFirst();
   
       // Note: Range can also arrive as a query parameter; add a similar check on the query string if your application allows it.
       header.ifPresent(entry -> presignedRequest.header(entry.getKey(), entry.getValue()));
       return presignedRequest;
   }
   ```
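
For comparison with the Java snippet, the following Python sketch implements approach 1 from earlier in this section: retrieve the full object, transform it, and only then apply the requested `Range`. The uppercase transform and the minimal `bytes=start-end` parser are illustrations, not the AWS implementation; production code should also handle suffix and open-ended ranges and case-insensitive header lookups.

```python
import re

def apply_simple_range(data, range_header):
    """Apply a simple 'bytes=start-end' range to the transformed object.
    HTTP byte ranges are inclusive on both ends."""
    if not range_header:
        return data
    match = re.fullmatch(r"bytes=(\d+)-(\d+)", range_header)
    if not match:
        return data  # unsupported range form; return the full object
    start, end = int(match.group(1)), int(match.group(2))
    return data[start:end + 1]

def handler(event, context):
    # boto3 and requests are assumed available in the Lambda deployment package.
    import boto3
    import requests

    ctx = event["getObjectContext"]
    # 1. Retrieve the full object through the presigned URL.
    original = requests.get(ctx["inputS3Url"]).content
    # 2. Transform the whole object (illustrative uppercase transform).
    transformed = original.upper()
    # 3. Apply the requested Range to the *transformed* object.
    range_header = event["userRequest"]["headers"].get("Range")
    boto3.client("s3").write_get_object_response(
        Body=apply_simple_range(transformed, range_header),
        RequestRoute=ctx["outputRoute"],
        RequestToken=ctx["outputToken"],
    )
    return {"statusCode": 200}
```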

# Using AWS built Lambda functions
<a name="olap-examples"></a>

AWS provides some prebuilt AWS Lambda functions that you can use with Amazon S3 Object Lambda to detect and redact personally identifiable information (PII) and decompress S3 objects. These Lambda functions are available in the AWS Serverless Application Repository. You can select these functions through the AWS Management Console when you create your Object Lambda Access Point. 

For more information about how to deploy serverless applications from the AWS Serverless Application Repository, see [Deploying Applications](https://docs.aws.amazon.com/serverlessrepo/latest/devguide/serverlessrepo-consuming-applications.html) in the *AWS Serverless Application Repository Developer Guide*.

**Note**  
The following examples can be used only with `GetObject` requests.

## Example 1: PII access control
<a name="olap-examples-1"></a>

This Lambda function uses Amazon Comprehend, a natural language processing (NLP) service that uses machine learning to find insights and relationships in text. This function automatically detects personally identifiable information (PII), such as names, addresses, dates, credit card numbers, and social security numbers in documents in your Amazon S3 bucket. If you have documents in your bucket that include PII, you can configure the PII Access Control function to detect these PII entity types and restrict access to unauthorized users.

To get started, deploy the following Lambda function in your account and add the Amazon Resource Name (ARN) for the function to your Object Lambda Access Point configuration.

The following is an example ARN for this function:

```
arn:aws:serverlessrepo:us-east-1:111122223333:applications/ComprehendPiiAccessControlS3ObjectLambda
```

You can add or view this function on the AWS Management Console by using the following AWS Serverless Application Repository link: [ComprehendPiiAccessControlS3ObjectLambda](https://console.aws.amazon.com/lambda/home#/create/app?applicationId=arn:aws:serverlessrepo:us-east-1:839782855223:applications/ComprehendPiiAccessControlS3ObjectLambda).

To view this function on GitHub, see [Amazon Comprehend S3 Object Lambda](https://github.com/aws-samples/amazon-comprehend-s3-object-lambdas).

## Example 2: PII redaction
<a name="olap-examples-2"></a>

This Lambda function uses Amazon Comprehend, a natural language processing (NLP) service that uses machine learning to find insights and relationships in text. This function automatically redacts personally identifiable information (PII), such as names, addresses, dates, credit card numbers, and social security numbers from documents in your Amazon S3 bucket. 

If you have documents in your bucket that include information such as credit card numbers or bank account information, you can configure the PII Redaction S3 Object Lambda function to detect PII and then return a copy of these documents in which PII entity types are redacted.

To get started, deploy the following Lambda function in your account and add the ARN for the function to your Object Lambda Access Point configuration.

The following is an example ARN for this function:

```
arn:aws:serverlessrepo:us-east-1:111122223333:applications/ComprehendPiiRedactionS3ObjectLambda
```

You can add or view this function on the AWS Management Console by using the following AWS Serverless Application Repository link: [ComprehendPiiRedactionS3ObjectLambda](https://console.aws.amazon.com/lambda/home#/create/app?applicationId=arn:aws:serverlessrepo:us-east-1:839782855223:applications/ComprehendPiiRedactionS3ObjectLambda).

To view this function on GitHub, see [Amazon Comprehend S3 Object Lambda](https://github.com/aws-samples/amazon-comprehend-s3-object-lambdas).

To learn about complete end-to-end procedures for some S3 Object Lambda tasks in PII redaction, see [Tutorial: Detecting and redacting PII data with S3 Object Lambda and Amazon Comprehend](tutorial-s3-object-lambda-redact-pii.md).

## Example 3: Decompression
<a name="olap-examples-3"></a>

The Lambda function `S3ObjectLambdaDecompression` can decompress objects that are stored in Amazon S3 in one of six compressed file formats: `bzip2`, `gzip`, `snappy`, `zlib`, `zstandard`, and `ZIP`. 
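
The prebuilt function handles all six formats. As an illustration of the underlying idea only (not the AWS implementation), a minimal gzip-only transform could look like this:

```python
import gzip

def decompress_gzip(compressed):
    """Inflate a gzip-compressed object body."""
    return gzip.decompress(compressed)

def handler(event, context):
    # boto3 and requests are assumed available in the Lambda deployment package.
    import boto3
    import requests

    ctx = event["getObjectContext"]
    # Fetch the compressed object through the presigned URL,
    # decompress it, and return the result to the caller.
    compressed = requests.get(ctx["inputS3Url"]).content
    boto3.client("s3").write_get_object_response(
        Body=decompress_gzip(compressed),
        RequestRoute=ctx["outputRoute"],
        RequestToken=ctx["outputToken"],
    )
    return {"statusCode": 200}
```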

To get started, deploy the following Lambda function in your account and add the ARN for the function to your Object Lambda Access Point configuration.

The following is an example ARN for this function:

```
arn:aws:serverlessrepo:us-east-1:111122223333:applications/S3ObjectLambdaDecompression
```

You can add or view this function on the AWS Management Console by using the following AWS Serverless Application Repository link: [S3ObjectLambdaDecompression](https://eu-west-1.console.aws.amazon.com/lambda/home?region=eu-west-1#/create/app?applicationId=arn:aws:serverlessrepo:eu-west-1:123065155563:applications/S3ObjectLambdaDecompression).

To view this function on GitHub, see [S3 Object Lambda Decompression](https://github.com/aws-samples/amazon-s3-object-lambda-decompression).

# Best practices and guidelines for S3 Object Lambda
<a name="olap-best-practices"></a>

When using S3 Object Lambda, follow these best practices and guidelines to optimize operations and performance.

**Topics**
+ [Working with S3 Object Lambda](#olap-working-with)
+ [AWS services used in connection with S3 Object Lambda](#olap-services)
+ [`Range` and `partNumber` headers](#olap-managing-range-part)
+ [Transforming the `expiry-date`](#olap-console-download)
+ [Working with the AWS CLI and AWS SDKs](#olap-cli-sdk)

## Working with S3 Object Lambda
<a name="olap-working-with"></a>

S3 Object Lambda supports processing only `GET`, `LIST`, and `HEAD` requests. Any other requests don't invoke AWS Lambda and instead return standard, non-transformed API responses. You can create a maximum of 1,000 Object Lambda Access Points per AWS account per Region. The AWS Lambda function that you use must be in the same AWS account and Region as the Object Lambda Access Point.

S3 Object Lambda allows up to 60 seconds to stream a complete response to its caller. Your function is also subject to AWS Lambda default quotas. For more information, see [Lambda quotas](https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-limits.html) in the *AWS Lambda Developer Guide*. 

When S3 Object Lambda invokes your specified Lambda function, you are responsible for ensuring that any data that is overwritten or deleted from Amazon S3 by your specified Lambda function or application is intended and correct.

You can use S3 Object Lambda only to perform operations on objects. You cannot use S3 Object Lambda to perform other Amazon S3 operations, such as modifying or deleting buckets. For a complete list of S3 operations that support access points, see [Access points compatibility with S3 operations](access-points-service-api-support.md#access-points-operations-support).

In addition to this list, Object Lambda Access Points do not support the [POST Object](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPOST.html), [CopyObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html) (as the source), and [SelectObjectContent](https://docs.aws.amazon.com/AmazonS3/latest/API/API_SelectObjectContent.html) API operations.

## AWS services used in connection with S3 Object Lambda
<a name="olap-services"></a>

S3 Object Lambda connects Amazon S3, AWS Lambda, and optionally, other AWS services of your choosing to deliver objects relevant to the requesting applications. All AWS services used with S3 Object Lambda are governed by their respective Service Level Agreements (SLAs). For example, if any AWS service does not meet its Service Commitment, you are eligible to receive a Service Credit, as documented in the service's SLA.

## `Range` and `partNumber` headers
<a name="olap-managing-range-part"></a>

When working with large objects, you can use the `Range` HTTP header to download a specified byte-range from an object. When you use the `Range` header, your request fetches only the specified portion of the object. You can also use the `partNumber` header to perform a ranged request for the specified part from the object.

For more information, see [Working with Range and partNumber headers](range-get-olap.md).

## Transforming the `expiry-date`
<a name="olap-console-download"></a>

You can open or download transformed objects from your Object Lambda Access Point in the AWS Management Console, but only if the objects have not expired. If your Lambda function transforms the `expiry-date` of your objects, you might see expired objects that can't be opened or downloaded. This behavior applies only to restored S3 Glacier Flexible Retrieval and S3 Glacier Deep Archive objects.

## Working with the AWS CLI and AWS SDKs
<a name="olap-cli-sdk"></a>

The AWS Command Line Interface (AWS CLI) high-level Amazon S3 commands (`cp`, `mv`, and `sync`) and the AWS SDK for Java `TransferManager` class are not supported for use with S3 Object Lambda.

# S3 Object Lambda tutorials
<a name="olap-tutorials"></a>

The following tutorials present complete end-to-end procedures for some S3 Object Lambda tasks.

With S3 Object Lambda, you can add your own code to process data retrieved from S3 before returning it to an application. Each of the following tutorials modifies data as it is retrieved from Amazon S3, without changing the existing object or maintaining multiple copies of the data. The first tutorial walks through how to add an AWS Lambda function to an S3 GET request to modify an object retrieved from S3. The second tutorial demonstrates how to use a prebuilt Lambda function powered by Amazon Comprehend to protect personally identifiable information (PII) retrieved from S3 before returning it to an application. The third tutorial uses S3 Object Lambda to add a watermark to an image as it is retrieved from Amazon S3.
+ [Tutorial: Transforming data for your application with S3 Object Lambda](tutorial-s3-object-lambda-uppercase.md)
+ [Tutorial: Detecting and redacting PII data with S3 Object Lambda and Amazon Comprehend](tutorial-s3-object-lambda-redact-pii.md)
+ [Tutorial: Using S3 Object Lambda to dynamically watermark images as they are retrieved](https://aws.amazon.com/getting-started/hands-on/amazon-s3-object-lambda-to-dynamically-watermark-images/?ref=docs_gateway/amazons3/olap-tutorials.html)

# Tutorial: Transforming data for your application with S3 Object Lambda
<a name="tutorial-s3-object-lambda-uppercase"></a>

When you store data in Amazon S3, you can easily share it for use by multiple applications. However, each application might have unique data format requirements, and might need modification or processing of your data for a specific use case. For example, a dataset created by an ecommerce application might include personally identifiable information (PII). When the same data is processed for analytics, this PII is not needed and should be redacted. However, if the same dataset is used for a marketing campaign, you might need to enrich the data with additional details, such as information from the customer loyalty database.

With [S3 Object Lambda](https://aws.amazon.com/s3/features/object-lambda), you can add your own code to process data retrieved from S3 before returning it to an application. Specifically, you can configure an AWS Lambda function and attach it to an S3 Object Lambda Access Point. When an application sends [standard S3 GET requests](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html) through the S3 Object Lambda Access Point, the specified Lambda function is invoked to process any data retrieved from the underlying data source through the supporting S3 access point. Then, the S3 Object Lambda Access Point returns the transformed result back to the application. You can author and execute your own custom Lambda functions, tailoring the S3 Object Lambda data transformation to your specific use case, all with no changes required to your applications.

![\[This is an S3 Object Lambda workflow diagram.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/ol-example-image-global.png)


**Objective**  
In this tutorial, you learn how to add custom code to standard S3 GET requests so that the requested object retrieved from S3 suits the needs of the requesting client or application. Specifically, you learn how to use S3 Object Lambda to transform all the text in an object stored in an S3 bucket to uppercase. 

**Note**  
This tutorial uses Python code to transform the data. For examples that use other AWS SDKs, see [Transform data for your application with S3 Object Lambda](https://docs.aws.amazon.com/code-library/latest/ug/lambda_example_cross_ServerlessS3DataTransformation_section.html) in the AWS SDK Code Examples Library. 

**Topics**
+ [Prerequisites](#ol-upper-prerequisites)
+ [Step 1: Create an S3 bucket](#ol-upper-step1)
+ [Step 2: Upload a file to the S3 bucket](#ol-upper-step2)
+ [Step 3: Create an S3 access point](#ol-upper-step3)
+ [Step 4: Create a Lambda function](#ol-upper-step4)
+ [Step 5: Configure an IAM policy for your Lambda function's execution role](#ol-upper-step5)
+ [Step 6: Create an S3 Object Lambda Access Point](#ol-upper-step6)
+ [Step 7: View the transformed data](#ol-upper-step7)
+ [Step 8: Clean up](#ol-upper-step8)
+ [Next steps](#ol-upper-next-steps)

## Prerequisites
<a name="ol-upper-prerequisites"></a>

Before you start this tutorial, you must have an AWS account that you can sign in to as an AWS Identity and Access Management (IAM) user with the correct permissions. You must also install Python version 3.8 or later.

**Topics**
+ [Create an IAM user with permissions in your AWS account (console)](#ol-upper-prerequisites-account)
+ [Install Python 3.8 or later on your local machine](#ol-upper-prerequisites-python)

### Create an IAM user with permissions in your AWS account (console)
<a name="ol-upper-prerequisites-account"></a>

You can create an IAM user for this tutorial. To complete the tutorial, your IAM user must have the following IAM policies attached so that the user can access the relevant AWS resources and perform the required actions. For more information about how to create an IAM user, see [Creating IAM users (console)](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html#id_users_create_console) in the *IAM User Guide*.

Your IAM user requires the following policies:
+ [AmazonS3FullAccess](https://console.aws.amazon.com/iam/home?#/policies/arn:aws:iam::aws:policy/AmazonS3FullAccess$jsonEditor) – Grants permissions to all Amazon S3 actions, including permissions to create and use an Object Lambda Access Point. 
+ [AWSLambda_FullAccess](https://console.aws.amazon.com/iam/home#/policies/arn:aws:iam::aws:policy/AWSLambda_FullAccess$jsonEditor) – Grants permissions to all Lambda actions. 
+ [IAMFullAccess](https://console.aws.amazon.com/iam/home#/policies/arn:aws:iam::aws:policy/IAMFullAccess$jsonEditor) – Grants permissions to all IAM actions. 
+ [IAMAccessAnalyzerReadOnlyAccess](https://console.aws.amazon.com/iam/home#/policies/arn:aws:iam::aws:policy/IAMAccessAnalyzerReadOnlyAccess$jsonEditor) – Grants permissions to read all access information provided by IAM Access Analyzer. 
+ [CloudWatchLogsFullAccess](https://console.aws.amazon.com/iam/home#/policies/arn:aws:iam::aws:policy/CloudWatchLogsFullAccess$jsonEditor) – Grants full access to CloudWatch Logs. 

**Note**  
For simplicity, this tutorial creates and uses an IAM user. After completing this tutorial, remember to [Delete the IAM user](#ol-upper-step8-delete-user). For production use, we recommend that you follow the [Security best practices in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html) in the *IAM User Guide*. A best practice requires human users to use federation with an identity provider to access AWS with temporary credentials. Another best practice is to require workloads to use temporary credentials with IAM roles to access AWS. To learn about using AWS IAM Identity Center to create users with temporary credentials, see [Getting started](https://docs.aws.amazon.com/singlesignon/latest/userguide/getting-started.html) in the *AWS IAM Identity Center User Guide*.   
This tutorial also uses full-access AWS managed policies. For production use, we recommend that you instead grant only the minimum permissions necessary for your use case, in accordance with [security best practices](security-best-practices.md#security-best-practices-prevent).

### Install Python 3.8 or later on your local machine
<a name="ol-upper-prerequisites-python"></a>

Use the following procedure to install Python 3.8 or later on your local machine. For more installation instructions, see the [Downloading Python](https://wiki.python.org/moin/BeginnersGuide/Download) page in the *Python Beginners Guide*.

1. Open your local terminal or shell and run the following command to determine whether Python is already installed, and if so, which version is installed. 

   ```
   python --version
   ```

1. If you don't have Python 3.8 or later, download the [official installer](https://www.python.org/downloads/) of Python 3.8 or later that's suitable for your local machine. 

1. Run the installer by double-clicking the downloaded file, and follow the steps to complete the installation. 

   For **Windows users**, choose **Add Python 3.X to PATH** in the installation wizard before choosing **Install Now**. 

1. Restart your terminal by closing and reopening it. 

1. Run the following command to verify that Python 3.8 or later is installed correctly. 

   For **macOS users**, run this command: 

   ```
   python3 --version
   ```

   For **Windows users**, run this command: 

   ```
   python --version
   ```

1. Run the following command to verify that the pip package manager is installed. If you see a pip version number and Python 3.8 or later in the command response, the pip package manager is installed successfully.

   ```
   pip --version
   ```

## Step 1: Create an S3 bucket
<a name="ol-upper-step1"></a>

Create a bucket to store the original data that you plan to transform. 

**Note**  
Access points can also be attached to other data sources, such as an Amazon FSx for OpenZFS volume. However, this tutorial uses a supporting access point attached to an S3 bucket.

**To create a bucket**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Buckets**.

1. Choose **Create bucket**. 

   The **Create bucket** page opens.

1. For **Bucket name**, enter a name (for example, **tutorial-bucket**) for your bucket. 

   For more information about naming buckets in Amazon S3, see [General purpose bucket naming rules](bucketnamingrules.md).

1. For **Region**, choose the AWS Region where you want the bucket to reside. 

   For more information about the bucket Region, see [General purpose buckets overview](UsingBucket.md).

1. For **Block Public Access settings for this bucket**, keep the default settings (**Block *all* public access** is enabled). 

   We recommend that you keep all Block Public Access settings enabled unless you need to turn off one or more of them for your use case. For more information about blocking public access, see [Blocking public access to your Amazon S3 storage](access-control-block-public-access.md).

1. For the remaining settings, keep the defaults. 

   (Optional) If you want to configure additional bucket settings for your specific use case, see [Creating a general purpose bucket](create-bucket-overview.md).

1. Choose **Create bucket**.
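If you prefer to script this step with the AWS SDK for Python (Boto3), the following sketch performs roughly the same actions as the console procedure above. The bucket name and Region are tutorial placeholders, and this sketch isn't a required part of the tutorial.

```python
def bucket_params(bucket_name, region):
    """Build the arguments for an S3 CreateBucket call.

    Outside us-east-1, CreateBucket requires a LocationConstraint
    that names the target Region.
    """
    params = {"Bucket": bucket_name}
    if region != "us-east-1":
        params["CreateBucketConfiguration"] = {"LocationConstraint": region}
    return params


def create_tutorial_bucket(bucket_name, region):
    # boto3 is imported inside the function so that the pure helper
    # above can be used without the SDK installed.
    import boto3

    s3 = boto3.client("s3", region_name=region)
    s3.create_bucket(**bucket_params(bucket_name, region))
    # Keep all Block Public Access settings enabled, matching the
    # console defaults recommended in this step.
    s3.put_public_access_block(
        Bucket=bucket_name,
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )
```

For example, `create_tutorial_bucket("tutorial-bucket", "us-west-2")` creates the bucket used throughout this tutorial.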

## Step 2: Upload a file to the S3 bucket
<a name="ol-upper-step2"></a>

Upload a text file to the S3 bucket. This text file contains the original data that you will transform to uppercase later in this tutorial. 

For example, you can upload a `tutorial.txt` file that contains the following text:

```
Amazon S3 Object Lambda Tutorial:
You can add your own code to process data retrieved from S3 before 
returning it to an application.
```

**To upload a file to a bucket**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Buckets**.

1. In the **Buckets** list, choose the name of the bucket that you created in [Step 1](#ol-upper-step1) (for example, **tutorial-bucket**) to upload your file to.

1. On the **Objects** tab for your bucket, choose **Upload**.

1. On the **Upload** page, under **Files and folders**, choose **Add files**.

1. Choose a file to upload, and then choose **Open**. For example, you can upload the `tutorial.txt` file example mentioned earlier.

1. Choose **Upload**.
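As a scripted alternative to the console upload, the following Boto3 sketch writes the sample text locally and uploads it. The file and bucket names are the tutorial's placeholders; this isn't a required step.

```python
import pathlib


def tutorial_text():
    # The sample content shown in this step of the tutorial.
    return (
        "Amazon S3 Object Lambda Tutorial:\n"
        "You can add your own code to process data retrieved from S3 before \n"
        "returning it to an application.\n"
    )


def upload_tutorial_file(bucket_name, path="tutorial.txt"):
    # boto3 is imported here so that tutorial_text() can be used
    # without the SDK installed.
    import boto3

    local = pathlib.Path(path)
    local.write_text(tutorial_text())
    s3 = boto3.client("s3")
    # Use the local file name as the object key.
    s3.upload_file(str(local), bucket_name, local.name)
```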

## Step 3: Create an S3 access point
<a name="ol-upper-step3"></a>

To use an S3 Object Lambda Access Point to access and transform the original data, you must create an S3 access point and associate it with the S3 bucket that you created in [Step 1](#ol-upper-step1). The access point must be in the same AWS Region as the objects that you want to transform.

Later in this tutorial, you'll use this access point as a supporting access point for your Object Lambda Access Point. 

**To create an access point**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Access Points**.

1. On the **Access Points** page, choose **Create access point**.

1. In the **Access point name** field, enter the name (for example, **tutorial-access-point**) for the access point.

   For more information about naming access points, see [Naming rules for access points](access-points-restrictions-limitations-naming-rules.md#access-points-names).

1. In the **Data source** field, enter the name of the bucket that you created in [Step 1](#ol-upper-step1) (for example, **tutorial-bucket**). S3 attaches the access point to this bucket. 

   (Optional) You can choose **Browse S3** to browse and search the buckets in your account. If you choose **Browse S3**, choose the desired bucket, and then choose **Choose path** to populate the **Bucket name** field with that bucket's name.

1. For **Network origin**, choose **Internet**. 

   For more information about network origins for access points, see [Creating access points restricted to a virtual private cloud](access-points-vpc.md).

1. By default, all Block Public Access settings are turned on for your access point. We recommend that you keep **Block *all* public access** enabled.

   For more information, see [Managing public access to access points for general purpose buckets](access-points-bpa-settings.md).

1. For all other access point settings, keep the default settings.

   (Optional) You can modify the access point settings to support your use case. For this tutorial, we recommend keeping the default settings. 

   (Optional) If you need to manage access to your access point, you can specify an access point policy. For more information, see [Policy examples for access points](access-points-policies.md#access-points-policy-examples). 

1. Choose **Create access point**.
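Access points can also be created through the S3 Control API. The following Boto3 sketch mirrors the console choices above (internet network origin, all Block Public Access settings on); the account ID and names are tutorial placeholders.

```python
def access_point_params(account_id, name, bucket):
    """Build the arguments for the S3 Control CreateAccessPoint call."""
    return {
        "AccountId": account_id,
        "Name": name,
        "Bucket": bucket,
        # Keep all Block Public Access settings enabled, as recommended.
        "PublicAccessBlockConfiguration": {
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    }


def create_supporting_access_point(account_id, name, bucket, region):
    # boto3 is imported here so that the pure helper above can be
    # used without the SDK installed.
    import boto3

    s3control = boto3.client("s3control", region_name=region)
    s3control.create_access_point(
        **access_point_params(account_id, name, bucket)
    )
```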

## Step 4: Create a Lambda function
<a name="ol-upper-step4"></a>

To transform original data, create a Lambda function for use with your S3 Object Lambda Access Point. 

**Topics**
+ [Write Lambda function code and create a deployment package with a virtual environment](#ol-upper-step4-write-lambda)
+ [Create a Lambda function with an execution role (console)](#ol-upper-step4-create-function)
+ [Deploy your Lambda function code with .zip file archives and configure the Lambda function (console)](#ol-upper-step4-deploy-function)

### Write Lambda function code and create a deployment package with a virtual environment
<a name="ol-upper-step4-write-lambda"></a>

1. On your local machine, create a folder with the folder name `object-lambda` for the virtual environment to use later in this tutorial.

1. In the `object-lambda` folder, create a file with a Lambda function that changes all text in the original object to uppercase. For example, you can use the following function written in Python. Save this function in a file named `transform.py`. 

   ```
   import boto3
   import requests
   from botocore.config import Config
   
   # This function capitalizes all text in the original object
   def lambda_handler(event, context):
       object_context = event["getObjectContext"]
       # Get the presigned URL to fetch the requested original object 
       # from S3
       s3_url = object_context["inputS3Url"]
       # Extract the route and request token from the input context
       request_route = object_context["outputRoute"]
       request_token = object_context["outputToken"]
       
       # Get the original S3 object using the presigned URL
       response = requests.get(s3_url)
       original_object = response.content.decode("utf-8")
   
       # Transform all text in the original object to uppercase
       # You can replace it with your custom code based on your use case
       transformed_object = original_object.upper()
   
       # Write object back to S3 Object Lambda
       s3 = boto3.client('s3', config=Config(signature_version='s3v4'))
       # The WriteGetObjectResponse API sends the transformed data
       # back to S3 Object Lambda and then to the user
       s3.write_get_object_response(
           Body=transformed_object,
           RequestRoute=request_route,
           RequestToken=request_token)
   
       # Exit the Lambda function: return the status code  
       return {'status_code': 200}
   ```
**Note**  
The preceding example Lambda function loads the entire requested object into memory before transforming it and returning it to the client. Alternatively, you can stream the object from S3 to avoid loading the entire object into memory. This approach can be useful when working with large objects. For more information about streaming responses with Object Lambda Access Points, see the streaming examples in [Working with `GetObject` requests in Lambda](olap-writing-lambda.md#olap-getobject-response).

   When you write a Lambda function for use with an S3 Object Lambda Access Point, the function operates on the input event context that S3 Object Lambda provides. The event context gives your function information about the request being made, in the event passed from S3 Object Lambda to Lambda, including the parameters that your function code uses.

   The fields used to create the preceding Lambda function are as follows: 

   The `getObjectContext` field contains the input and output details for connections to Amazon S3 and S3 Object Lambda. It has the following fields:
   + `inputS3Url` – A presigned URL that the Lambda function can use to download the original object from the supporting access point. By using a presigned URL, the Lambda function doesn't need to have Amazon S3 read permissions to retrieve the original object and can only access the object processed by each invocation.
   + `outputRoute` – A routing token that is added to the S3 Object Lambda URL when the Lambda function calls `WriteGetObjectResponse` to send back the transformed object.
   + `outputToken` – A token used by S3 Object Lambda to match the `WriteGetObjectResponse` call with the original caller when sending back the transformed object.

   For more information about all the fields in the event context, see [Event context format and usage](olap-event-context.md) and [Writing Lambda functions for S3 Object Lambda Access Points](olap-writing-lambda.md).

1. In your local terminal, enter the following command to install the `virtualenv` package:

   ```
   python -m pip install virtualenv
   ```

1. In your local terminal, open the `object-lambda` folder that you created earlier, and then enter the following command to create and initialize a virtual environment called `venv`.

   ```
   python -m virtualenv venv
   ```

1. To activate the virtual environment, enter the following command to execute the `activate` file from the environment's folder:

   For **macOS users**, run this command:

   ```
   source venv/bin/activate
   ```

   For **Windows users**, run this command:

   ```
   .\venv\Scripts\activate
   ```

   Now, your command prompt changes to show **(venv)**, indicating that the virtual environment is active.

1. To install the required libraries, run the following commands line by line in the `venv` virtual environment.

   These commands install updated versions of the dependencies of your `lambda_handler` Lambda function. These dependencies are the AWS SDK for Python (Boto3) and the requests module.

   ```
   pip3 install boto3
   ```

   ```
   pip3 install requests
   ```

1. To deactivate the virtual environment, run the following command:

   ```
   deactivate
   ```

1. To create a deployment package with the installed libraries as a `.zip` file named `lambda.zip` at the root of the `object-lambda` directory, run the following commands line by line in your local terminal.
**Tip**  
The following commands might need to be adjusted to work in your particular environment. For example, a library might appear in `site-packages` or `dist-packages`, and the first folder might be `lib` or `lib64`. Also, the `python` folder might be named with a different Python version. To locate a specific package, use the `pip show` command.

   For **macOS users**, run these commands:

   ```
   cd venv/lib/python3.8/site-packages 
   ```

   ```
   zip -r ../../../../lambda.zip .
   ```

   For **Windows users**, run these commands:

   ```
   cd .\venv\Lib\site-packages\ 
   ```

   ```
   powershell Compress-Archive * ../../../lambda.zip
   ```

   The last command saves the deployment package to the root of the `object-lambda` directory.

1. Add the function code file `transform.py` to the root of your deployment package.

   For **macOS users**, run these commands:

   ```
   cd ../../../../ 
   ```

   ```
   zip -g lambda.zip transform.py
   ```

   For **Windows users**, run these commands: 

   ```
   cd ..\..\..\
   ```

   ```
   powershell Compress-Archive -update transform.py lambda.zip
   ```

   After you complete this step, you should have the following directory structure:

   ```
   lambda.zip
     │ transform.py
     │ __pycache__/
     │ boto3/
     │ certifi/
     │ pip/
     │ requests/
     ...
   ```
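The event context fields described in the procedure above can be exercised with a short sketch. The values below are hypothetical placeholders, not real S3 Object Lambda output; real events also contain additional fields.

```python
# A trimmed, hypothetical example of the event that S3 Object Lambda
# passes to the Lambda function.
sample_event = {
    "getObjectContext": {
        "inputS3Url": "https://tutorial-access-point-111122223333.s3-accesspoint.us-west-2.amazonaws.com/tutorial.txt",
        "outputRoute": "example-route",
        "outputToken": "example-token",
    }
}


def extract_object_context(event):
    """Pull out the three fields that the transform.py handler reads."""
    ctx = event["getObjectContext"]
    return ctx["inputS3Url"], ctx["outputRoute"], ctx["outputToken"]
```

The handler downloads the object from `inputS3Url`, transforms it, and then passes `outputRoute` and `outputToken` to `WriteGetObjectResponse` so that S3 Object Lambda can route the transformed object back to the original caller.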

### Create a Lambda function with an execution role (console)
<a name="ol-upper-step4-create-function"></a>

1. Sign in to the AWS Management Console and open the AWS Lambda console at [https://console.aws.amazon.com/lambda/](https://console.aws.amazon.com/lambda/).

   

1. In the left navigation pane, choose **Functions**.

1. Choose **Create function**.

1. Choose **Author from scratch**.

1. Under **Basic information**, do the following:

   1. For **Function name**, enter **tutorial-object-lambda-function**.

   1. For **Runtime**, choose **Python 3.8** or a later version.

1. Expand the **Change default execution role** section. Under **Execution role**, choose **Create a new role with basic Lambda permissions**.

   In [Step 5](#ol-upper-step5) later in this tutorial, you attach the **AmazonS3ObjectLambdaExecutionRolePolicy** to this Lambda function's execution role. 

1. Keep the remaining settings set to the defaults.

1. Choose **Create function**.

### Deploy your Lambda function code with .zip file archives and configure the Lambda function (console)
<a name="ol-upper-step4-deploy-function"></a>

1. In the AWS Lambda console at [https://console.aws.amazon.com/lambda/](https://console.aws.amazon.com/lambda/), choose **Functions** in the left navigation pane. 

1. Choose the Lambda function that you created earlier (for example, **tutorial-object-lambda-function**). 

1. On the Lambda function's details page, choose the **Code** tab. In the **Code Source** section, choose **Upload from** and then **.zip file**.

1. Choose **Upload** to select your local `.zip` file.

1. Choose the `lambda.zip` file that you created earlier, and then choose **Open**.

1. Choose **Save**.

1. In the **Runtime settings** section, choose **Edit**. 

1. On the **Edit runtime settings** page, confirm that **Runtime** is set to **Python 3.8** or a later version. 

1. To tell the Lambda runtime which handler method in your Lambda function code to invoke, enter **transform.lambda_handler** for **Handler**.

   When you configure a function in Python, the value of the handler setting is the file name and the name of the handler module, separated by a dot. For example, `transform.lambda_handler` calls the `lambda_handler` method defined in the `transform.py` file.

1. Choose **Save**.

1. (Optional) On your Lambda function's details page, choose the **Configuration** tab. In the left navigation pane, choose **General configuration**, then choose **Edit**. In the **Timeout** field, enter **1** min **0** sec. Keep the remaining settings set to the defaults, and choose **Save**.

   **Timeout** is the amount of time that Lambda allows a function to run for an invocation before stopping it. The default is 3 seconds. The maximum duration for a Lambda function used by S3 Object Lambda is 60 seconds. Pricing is based on the amount of memory configured and the amount of time that your code runs.
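If you script the deployment instead of using the console, the same settings map onto the Lambda `CreateFunction` API. The following Boto3 sketch uses a hypothetical role ARN and assumes a Python 3.12 runtime (any Python 3.8 or later runtime works); it isn't the tutorial's required path.

```python
def function_config(role_arn):
    """Configuration matching the console settings in this step."""
    return {
        "FunctionName": "tutorial-object-lambda-function",
        "Runtime": "python3.12",  # any Python 3.8+ runtime works
        # file name and handler name, separated by a dot
        "Handler": "transform.lambda_handler",
        "Role": role_arn,
        "Timeout": 60,  # the maximum that S3 Object Lambda allows
    }


def deploy_function(role_arn, zip_path="lambda.zip"):
    # boto3 is imported here so that the pure helper above can be
    # used without the SDK installed.
    import boto3

    with open(zip_path, "rb") as f:
        zipped = f.read()
    lambda_client = boto3.client("lambda")
    lambda_client.create_function(
        Code={"ZipFile": zipped}, **function_config(role_arn)
    )
```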

## Step 5: Configure an IAM policy for your Lambda function's execution role
<a name="ol-upper-step5"></a>

To enable your Lambda function to provide customized data and response headers to the `GetObject` caller, your Lambda function's execution role must have IAM permissions to call the `WriteGetObjectResponse` API.

**To attach an IAM policy to your Lambda function role**



1. In the AWS Lambda console at [https://console.aws.amazon.com/lambda/](https://console.aws.amazon.com/lambda/), choose **Functions** in the left navigation pane. 

1. Choose the function that you created in [Step 4](#ol-upper-step4) (for example, **tutorial-object-lambda-function**).

1. On your Lambda function's details page, choose the **Configuration** tab, and then choose **Permissions** in the left navigation pane. 

1. Under **Execution role**, choose the link of the **Role name**. The IAM console opens. 

1. On the IAM console's **Summary** page for your Lambda function's execution role, choose the **Permissions** tab. Then, from the **Add Permissions** menu, choose **Attach policies**.

1. On the **Attach Permissions** page, enter **AmazonS3ObjectLambdaExecutionRolePolicy** in the search box to filter the list of policies. Select the check box next to the name of the **AmazonS3ObjectLambdaExecutionRolePolicy** policy. 

1. Choose **Attach policies**. 
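The managed policy attached above grants the one permission this step requires: `s3-object-lambda:WriteGetObjectResponse`. If you prefer least privilege, the following sketch adds an equivalent minimal inline policy instead; the policy name is a hypothetical placeholder.

```python
import json


def write_response_policy():
    """A minimal identity policy granting WriteGetObjectResponse."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": "s3-object-lambda:WriteGetObjectResponse",
                "Resource": "*",
            }
        ],
    }


def attach_inline_policy(role_name):
    # boto3 is imported here so that the policy helper above can be
    # used without the SDK installed.
    import boto3

    iam = boto3.client("iam")
    iam.put_role_policy(
        RoleName=role_name,
        PolicyName="write-get-object-response",  # hypothetical name
        PolicyDocument=json.dumps(write_response_policy()),
    )
```

You can further scope `Resource` down from `*` to specific Object Lambda Access Point ARNs for production use.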

## Step 6: Create an S3 Object Lambda Access Point
<a name="ol-upper-step6"></a>

An S3 Object Lambda Access Point provides the flexibility to invoke a Lambda function directly from an S3 GET request so that the function can process data retrieved from an S3 access point. When creating and configuring an S3 Object Lambda Access Point, you must specify the Lambda function to invoke and provide the event context in JSON format as custom parameters for Lambda to use.

**To create an S3 Object Lambda Access Point**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Object Lambda Access Points**.

1. On the **Object Lambda Access Points** page, choose **Create Object Lambda Access Point**.

1. For **Object Lambda Access Point name**, enter the name that you want to use for the Object Lambda Access Point (for example, **tutorial-object-lambda-accesspoint**). 

1. For **Supporting Access Point**, enter or browse to the standard access point that you created in [Step 3](#ol-upper-step3) (for example, **tutorial-access-point**), and then choose **Choose supporting Access Point**. 

1. For **S3 APIs**, to retrieve objects from the S3 bucket for the Lambda function to process, select **GetObject**.

1. For **Invoke Lambda function**, you can choose either of the following two options for this tutorial. 
   + Choose **Choose from functions in your account**, and then choose the Lambda function that you created in [Step 4](#ol-upper-step4) (for example, **tutorial-object-lambda-function**) from the **Lambda function** dropdown list.
   + Choose **Enter ARN**, and then enter the Amazon Resource Name (ARN) of the Lambda function that you created in [Step 4](#ol-upper-step4).

1. For **Lambda function version**, choose **$LATEST** (the latest version of the Lambda function that you created in [Step 4](#ol-upper-step4)).

1. (Optional) If you need your Lambda function to recognize and process GET requests with range and part number headers, select **Lambda function supports requests using range** and **Lambda function supports requests using part numbers**. Otherwise, clear these two check boxes.

   For more information about how to use range or part numbers with S3 Object Lambda, see [Working with Range and partNumber headers](range-get-olap.md).

1. (Optional) Under **Payload - *optional***, add JSON text to provide your Lambda function with additional information.

   A payload is optional JSON text that you can provide to your Lambda function as input for all invocations coming from a specific S3 Object Lambda Access Point. To customize the behaviors for multiple Object Lambda Access Points that invoke the same Lambda function, you can configure payloads with different parameters, thereby extending the flexibility of your Lambda function.

   For more information about payload, see [Event context format and usage](olap-event-context.md).

1. (Optional) For **Request metrics - *optional***, choose **Disable** or **Enable** to add Amazon S3 monitoring to your Object Lambda Access Point. Request metrics are billed at the standard Amazon CloudWatch rate. For more information, see [CloudWatch pricing](https://aws.amazon.com/cloudwatch/pricing/).

1. Under **Object Lambda Access Point policy - *optional***, keep the default setting. 

   (Optional) You can set a resource policy. This resource policy grants the `GetObject` API permission to use the specified Object Lambda Access Point.

1. Keep the remaining settings set to the defaults, and choose **Create Object Lambda Access Point**.
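If you create the Object Lambda Access Point with Boto3 instead of the console, the same choices map onto the S3 Control `CreateAccessPointForObjectLambda` API. The ARNs and names below are tutorial placeholders; this sketch isn't a required step.

```python
def object_lambda_config(supporting_ap_arn, function_arn):
    """Build the Configuration argument for
    CreateAccessPointForObjectLambda, mirroring the console choices:
    transform GetObject calls with the tutorial's Lambda function."""
    return {
        "SupportingAccessPoint": supporting_ap_arn,
        "TransformationConfigurations": [
            {
                "Actions": ["GetObject"],
                "ContentTransformation": {
                    "AwsLambda": {"FunctionArn": function_arn}
                },
            }
        ],
    }


def create_object_lambda_access_point(
    account_id, name, supporting_ap_arn, function_arn, region
):
    # boto3 is imported here so that the pure helper above can be
    # used without the SDK installed.
    import boto3

    s3control = boto3.client("s3control", region_name=region)
    s3control.create_access_point_for_object_lambda(
        AccountId=account_id,
        Name=name,
        Configuration=object_lambda_config(supporting_ap_arn, function_arn),
    )
```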

## Step 7: View the transformed data
<a name="ol-upper-step7"></a>

Now, S3 Object Lambda is ready to transform your data for your use case. In this tutorial, S3 Object Lambda transforms all the text in your object to uppercase.

**Topics**
+ [View the transformed data in your S3 Object Lambda Access Point](#ol-upper-step7-check-data)
+ [Run a Python script to print the original and transformed data](#ol-upper-step7-python-print)

### View the transformed data in your S3 Object Lambda Access Point
<a name="ol-upper-step7-check-data"></a>

When you request to retrieve a file through your S3 Object Lambda Access Point, you make a `GetObject` API call to S3 Object Lambda. S3 Object Lambda invokes the Lambda function to transform your data, and then returns the transformed data as the response to the standard S3 `GetObject` API call.

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Object Lambda Access Points**.

1. On the **Object Lambda Access Points** page, choose the S3 Object Lambda Access Point that you created in [Step 6](#ol-upper-step6) (for example, **tutorial-object-lambda-accesspoint**).

1. On the **Objects** tab of your S3 Object Lambda Access Point, select the file that has the same name (for example, `tutorial.txt`) as the one that you uploaded to the S3 bucket in [Step 2](#ol-upper-step2). 

   This file should contain all the transformed data.

1. To view the transformed data, choose **Open** or **Download**.

### Run a Python script to print the original and transformed data
<a name="ol-upper-step7-python-print"></a>

You can use S3 Object Lambda with your existing applications. To do so, update your application configuration to use the new S3 Object Lambda Access Point ARN that you created in [Step 6](#ol-upper-step6) to retrieve data from S3.

The following example Python script prints both the original data from the S3 bucket and the transformed data from the S3 Object Lambda Access Point. 

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Object Lambda Access Points**.

1. On the **Object Lambda Access Points** page, choose the radio button to the left of the S3 Object Lambda Access Point that you created in [Step 6](#ol-upper-step6) (for example, **tutorial-object-lambda-accesspoint**).

1. Choose **Copy ARN**.

1. Save the ARN for use later.

1. Write a Python script on your local machine to print both the original data (for example, `tutorial.txt`) from your S3 bucket and the transformed data (for example, `tutorial.txt`) from your S3 Object Lambda Access Point. You can use the following example script. 

   ```
   import boto3
   from botocore.config import Config
   
   s3 = boto3.client('s3', config=Config(signature_version='s3v4'))
   
   def getObject(bucket, key):
       objectBody = s3.get_object(Bucket = bucket, Key = key)
       print(objectBody["Body"].read().decode("utf-8"))
       print("\n")
   
   print('Original object from the S3 bucket:')
   # Replace the two input parameters of getObject() below with 
   # the S3 bucket name that you created in Step 1 and 
   # the name of the file that you uploaded to the S3 bucket in Step 2
   getObject("tutorial-bucket", 
             "tutorial.txt")
   
   print('Object transformed by S3 Object Lambda:')
   # Replace the two input parameters of getObject() below with 
   # the ARN of your S3 Object Lambda Access Point that you saved earlier and
   # the name of the file with the transformed data (which in this case is
   # the same as the name of the file that you uploaded to the S3 bucket 
   # in Step 2)
   getObject("arn:aws:s3-object-lambda:us-west-2:111122223333:accesspoint/tutorial-object-lambda-accesspoint",
             "tutorial.txt")
   ```

1. Save your Python script with a custom name (for example, `tutorial_print.py` ) in the folder (for example, `object-lambda`) that you created in [Step 4](#ol-upper-step4) on your local machine.

1. In your local terminal, run the following command from the root of the directory (for example, `object-lambda`) that you created in [Step 4](#ol-upper-step4).

   ```
   python3 tutorial_print.py
   ```

   You should see both the original data and the transformed data (all text as uppercase) through the terminal. For example, you should see something like the following text.

   ```
   Original object from the S3 bucket:
   Amazon S3 Object Lambda Tutorial:
   You can add your own code to process data retrieved from S3 before 
   returning it to an application.
   
   Object transformed by S3 Object Lambda:
   AMAZON S3 OBJECT LAMBDA TUTORIAL:
   YOU CAN ADD YOUR OWN CODE TO PROCESS DATA RETRIEVED FROM S3 BEFORE 
   RETURNING IT TO AN APPLICATION.
   ```

## Step 8: Clean up
<a name="ol-upper-step8"></a>

If you transformed your data through S3 Object Lambda only as a learning exercise, delete the AWS resources that you allocated so that you no longer accrue charges. 

**Topics**
+ [Delete the Object Lambda Access Point](#ol-upper-step8-delete-olap)
+ [Delete the S3 access point](#ol-upper-step8-delete-ap)
+ [Delete the execution role for your Lambda function](#ol-upper-step8-delete-lambda-role)
+ [Delete the Lambda function](#ol-upper-step8-delete-lambda-function)
+ [Delete the CloudWatch log group](#ol-upper-step8-delete-cloudwatch)
+ [Delete the original file in the S3 source bucket](#ol-upper-step8-delete-file)
+ [Delete the S3 source bucket](#ol-upper-step8-delete-bucket)
+ [Delete the IAM user](#ol-upper-step8-delete-user)

### Delete the Object Lambda Access Point
<a name="ol-upper-step8-delete-olap"></a>

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Object Lambda Access Points**.

1. On the **Object Lambda Access Points** page, choose the radio button to the left of the S3 Object Lambda Access Point that you created in [Step 6](#ol-upper-step6) (for example, **tutorial-object-lambda-accesspoint**).

1. Choose **Delete**.

1. Confirm that you want to delete your Object Lambda Access Point by entering its name in the text field that appears, and then choose **Delete**.

### Delete the S3 access point
<a name="ol-upper-step8-delete-ap"></a>

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Access Points**.

1. Navigate to the access point that you created in [Step 3](#ol-upper-step3) (for example, **tutorial-access-point**), and choose the radio button next to the name of the access point.

1. Choose **Delete**.

1. Confirm that you want to delete your access point by entering its name in the text field that appears, and then choose **Delete**.

### Delete the execution role for your Lambda function
<a name="ol-upper-step8-delete-lambda-role"></a>

1. Sign in to the AWS Management Console and open the AWS Lambda console at [https://console.aws.amazon.com/lambda/](https://console.aws.amazon.com/lambda/).

1. In the left navigation pane, choose **Functions**.

1. Choose the function that you created in [Step 4](#ol-upper-step4) (for example, **tutorial-object-lambda-function**).

1. On your Lambda function's details page, choose the **Configuration** tab, and then choose **Permissions** in the left navigation pane. 

1. Under **Execution role**, choose the link of the **Role name**. The IAM console opens.

1. On the IAM console's **Summary** page of your Lambda function's execution role, choose **Delete role**.

1. In the **Delete role** dialog box, choose **Yes, delete**.

### Delete the Lambda function
<a name="ol-upper-step8-delete-lambda-function"></a>

1. In the AWS Lambda console at [https://console.aws.amazon.com/lambda/](https://console.aws.amazon.com/lambda/), choose **Functions** in the left navigation pane. 

1. Select the check box to the left of the name of the function that you created in [Step 4](#ol-upper-step4) (for example, **tutorial-object-lambda-function**).

1. Choose **Actions**, and then choose **Delete**.

1. In the **Delete function** dialog box, choose **Delete**.

### Delete the CloudWatch log group
<a name="ol-upper-step8-delete-cloudwatch"></a>

1. Open the CloudWatch console at [https://console.aws.amazon.com/cloudwatch/](https://console.aws.amazon.com/cloudwatch/).

1. In the left navigation pane, choose **Log groups**.

1. Find the log group whose name ends with the Lambda function that you created in [Step 4](#ol-upper-step4) (for example, **tutorial-object-lambda-function**).

1. Select the check box to the left of the name of the log group.

1. Choose **Actions**, and then choose **Delete log group(s)**.

1. In the **Delete log group(s)** dialog box, choose **Delete**.

### Delete the original file in the S3 source bucket
<a name="ol-upper-step8-delete-file"></a>

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Buckets**.

1. In the **Bucket name** list, choose the name of the bucket that you uploaded the original file to in [Step 2](#ol-upper-step2) (for example, **tutorial-bucket**).

1. Select the check box to the left of the name of the object that you want to delete (for example, `tutorial.txt`).

1. Choose **Delete**.

1. On the **Delete objects** page, in the **Permanently delete objects?** section, confirm that you want to delete this object by entering **permanently delete** in the text box.

1. Choose **Delete objects**.

### Delete the S3 source bucket
<a name="ol-upper-step8-delete-bucket"></a>

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Buckets**.

1. In the **Buckets** list, choose the radio button next to the name of the bucket that you created in [Step 1](#ol-upper-step1) (for example, **tutorial-bucket**).

1. Choose **Delete**.

1. On the **Delete bucket** page, confirm that you want to delete the bucket by entering the bucket name in the text field, and then choose **Delete bucket**.

### Delete the IAM user
<a name="ol-upper-step8-delete-user"></a>

1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. In the left navigation pane, choose **Users**, and then select the check box next to the user name that you want to delete.

1. At the top of the page, choose **Delete**.

1. In the **Delete *user name*?** dialog box, enter the user name in the text input field to confirm the deletion of the user. Choose **Delete**.

## Next steps
<a name="ol-upper-next-steps"></a>

After completing this tutorial, you can customize the Lambda function for your use case to modify the data returned by standard S3 GET requests.

The following is a list of common use cases for S3 Object Lambda:
+ Masking sensitive data for security and compliance.

  For more information, see [Tutorial: Detecting and redacting PII data with S3 Object Lambda and Amazon Comprehend](tutorial-s3-object-lambda-redact-pii.md).
+ Filtering certain rows of data to deliver specific information.
+ Augmenting data with information from other services or databases.
+ Converting across data formats, such as converting XML to JSON for application compatibility.
+ Compressing or decompressing files as they are being downloaded.
+ Resizing and watermarking images.

  For more information, see [Tutorial: Using S3 Object Lambda to dynamically watermark images as they are retrieved](https://aws.amazon.com/getting-started/hands-on/amazon-s3-object-lambda-to-dynamically-watermark-images/?ref=docs_gateway/amazons3/tutorial-s3-object-lambda-uppercase.html).
+ Implementing custom authorization rules to access data.

For more information about S3 Object Lambda, see [Transforming objects with S3 Object Lambda](transforming-objects.md).

# Tutorial: Detecting and redacting PII data with S3 Object Lambda and Amazon Comprehend
<a name="tutorial-s3-object-lambda-redact-pii"></a>

**Note**  
As of November 7th, 2025, S3 Object Lambda is available only to existing customers that are currently using the service as well as to select AWS Partner Network (APN) partners. For capabilities similar to S3 Object Lambda, learn more here - [Amazon S3 Object Lambda availability change](https://docs.aws.amazon.com/AmazonS3/latest/userguide/amazons3-ol-change.html).

When you're using Amazon S3 for shared datasets for multiple applications and users to access, it's important to restrict privileged information, such as personally identifiable information (PII), to only authorized entities. For example, when a marketing application uses some data containing PII, it might need to first mask PII data to meet data privacy requirements. Also, when an analytics application uses a production order inventory dataset, it might need to first redact customer credit card information to prevent unintended data leakage.

With [S3 Object Lambda](https://aws.amazon.com/s3/features/object-lambda) and a prebuilt AWS Lambda function powered by Amazon Comprehend, you can protect PII data retrieved from S3 before returning it to an application. Specifically, you can use the prebuilt [Lambda function](https://aws.amazon.com/lambda/) as a redacting function and attach it to an S3 Object Lambda Access Point. When an application (for example, an analytics application) sends [standard S3 GET requests](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html), these requests made through the S3 Object Lambda Access Point invoke the prebuilt redacting Lambda function to detect and redact PII data retrieved from an underlying data source through a supporting S3 access point. Then, the S3 Object Lambda Access Point returns the redacted result back to the application.

![\[This is an S3 Object Lambda workflow diagram.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/ol-comprehend-image-global.png)


In the process, the prebuilt Lambda function uses [Amazon Comprehend](https://aws.amazon.com/comprehend/), a natural language processing (NLP) service, to capture variations in how PII is represented, regardless of how PII exists in text (such as numerically or as a combination of words and numbers). Amazon Comprehend can even use context in the text to understand if a 4-digit number is a PIN, the last four numbers of a Social Security number (SSN), or a year. Amazon Comprehend processes any text file in UTF-8 format and can protect PII at scale without affecting accuracy. For more information, see [What is Amazon Comprehend?](https://docs.aws.amazon.com/comprehend/latest/dg/what-is.html) in the *Amazon Comprehend Developer Guide*.

**Objective**  
In this tutorial, you learn how to use S3 Object Lambda with the prebuilt Lambda function `ComprehendPiiRedactionS3ObjectLambda`. This function uses Amazon Comprehend to detect PII entities. It then redacts these entities by replacing them with asterisks. By redacting PII, you conceal sensitive data, which can help with security and compliance.

You also learn how to use and configure a prebuilt AWS Lambda function in the [AWS Serverless Application Repository](https://aws.amazon.com/serverless/serverlessrepo/) to work together with S3 Object Lambda for easy deployment. 

**Topics**
+ [Prerequisites: Create an IAM user with permissions](#ol-pii-prerequisites)
+ [Step 1: Create an S3 bucket](#ol-pii-step1)
+ [Step 2: Upload a file to the S3 bucket](#ol-pii-step2)
+ [Step 3: Create an S3 access point](#ol-pii-step3)
+ [Step 4: Configure and deploy a prebuilt Lambda function](#ol-pii-step4)
+ [Step 5: Create an S3 Object Lambda Access Point](#ol-pii-step5)
+ [Step 6: Use the S3 Object Lambda Access Point to retrieve the redacted file](#ol-pii-step6)
+ [Step 7: Clean up](#ol-pii-step7)
+ [Next steps](#ol-pii-next-steps)

## Prerequisites: Create an IAM user with permissions
<a name="ol-pii-prerequisites"></a>

Before you start this tutorial, you must have an AWS account that you can sign in to as an AWS Identity and Access Management user (IAM user) with correct permissions.

You can create an IAM user for the tutorial. To complete this tutorial, you must attach the following IAM policies to your IAM user so that the user can access relevant AWS resources and perform specific actions. 

**Note**  
For simplicity, this tutorial creates and uses an IAM user. After completing this tutorial, remember to [Delete the IAM user](#ol-pii-step8-delete-user). For production use, we recommend that you follow the [Security best practices in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html) in the *IAM User Guide*. A best practice requires human users to use federation with an identity provider to access AWS with temporary credentials. Another best practice is to require workloads to use temporary credentials with IAM roles to access AWS. To learn about using AWS IAM Identity Center to create users with temporary credentials, see [Getting started](https://docs.aws.amazon.com/singlesignon/latest/userguide/getting-started.html) in the *AWS IAM Identity Center User Guide*.   
This tutorial also uses full-access policies. For production use, we recommend that you instead grant only the minimum permissions necessary for your use case, in accordance with [security best practices](security-best-practices.md#security-best-practices-prevent).

Your IAM user requires the following AWS managed policies:
+ [AmazonS3FullAccess](https://console.aws.amazon.com/iam/home?#/policies/arn:aws:iam::aws:policy/AmazonS3FullAccess$jsonEditor) – Grants permissions to all Amazon S3 actions, including permissions to create and use an Object Lambda Access Point. 
+ [AWSLambda_FullAccess](https://console.aws.amazon.com/iam/home#/policies/arn:aws:iam::aws:policy/AWSLambda_FullAccess$jsonEditor) – Grants permissions to all Lambda actions. 
+ [AWSCloudFormationFullAccess](https://console.aws.amazon.com/iam/home?#/policies/arn:aws:iam::aws:policy/AWSCloudFormationFullAccess$serviceLevelSummary) – Grants permissions to all AWS CloudFormation actions.
+ [IAMFullAccess](https://console.aws.amazon.com/iam/home#/policies/arn:aws:iam::aws:policy/IAMFullAccess$jsonEditor) – Grants permissions to all IAM actions. 
+ [IAMAccessAnalyzerReadOnlyAccess](https://console.aws.amazon.com/iam/home#/policies/arn:aws:iam::aws:policy/IAMAccessAnalyzerReadOnlyAccess$jsonEditor) – Grants permissions to read all access information provided by IAM Access Analyzer. 

You can directly attach these existing policies when creating an IAM user. For more information about how to create an IAM user, see [Creating IAM users (console)](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html#id_users_create_console) in the *IAM User Guide*.

In addition, your IAM user requires a customer managed policy. To grant the IAM user permissions to all AWS Serverless Application Repository resources and actions, you must create an IAM policy and attach the policy to the IAM user.

**To create and attach an IAM policy to your IAM user**

1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. In the left navigation pane, choose **Policies**.

1. Choose **Create policy**. 

1. On the **Visual editor** tab, for **Service**, choose **Choose a service**. Then, choose **Serverless Application Repository**. 

1. For **Actions**, under **Manual actions**, select **All Serverless Application Repository actions (serverlessrepo:\*)** for this tutorial.

   As a security best practice, you should allow permissions to only those actions and resources that a user needs, based on your use case. For more information, see [Security best practices in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html) in the *IAM User Guide*.

1. For **Resources**, choose **All resources** for this tutorial.

   As a best practice, you should define permissions for only specific resources in specific accounts. Alternatively, you can grant least privilege using condition keys. For more information, see [Grant least privilege](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#grant-least-privilege) in the *IAM User Guide*.

1. Choose **Next: Tags**.

1. Choose **Next: Review**.

1. On the **Review policy** page, enter a **Name** (for example, **tutorial-serverless-application-repository**) and a **Description** (optional) for the policy that you are creating. Review the policy summary to make sure that you have granted the intended permissions, and then choose **Create policy** to save your new policy.

1. In the left navigation pane, choose **Users**. Then, choose the IAM user for this tutorial. 

1. On the **Summary** page of the chosen user, choose the **Permissions** tab, and then choose **Add permissions**.

1. Under **Grant permissions**, choose **Attach existing policies directly**.

1. Select the check box next to the policy that you just created (for example, **tutorial-serverless-application-repository**) and then choose **Next: Review**. 

1. Under **Permissions summary**, review the summary to make sure that you attached the intended policy. Then, choose **Add permissions**.
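For reference, the visual-editor selections above produce a policy document equivalent to the following JSON. This is the broad, tutorial-only policy, not a least-privilege one.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "serverlessrepo:*",
            "Resource": "*"
        }
    ]
}
```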

## Step 1: Create an S3 bucket
<a name="ol-pii-step1"></a>

Create a bucket to store the original data that you plan to transform. 

**Note**  
Access points can also be attached to other data sources, such as an Amazon FSx for OpenZFS volume. However, this tutorial uses a supporting access point attached to an S3 bucket.

**To create a bucket**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Buckets**.

1. Choose **Create bucket**. 

   The **Create bucket** page opens.

1. For **Bucket name**, enter a name (for example, **tutorial-bucket**) for your bucket. 

   For more information about naming buckets in Amazon S3, see [General purpose bucket naming rules](bucketnamingrules.md).

1. For **Region**, choose the AWS Region where you want the bucket to reside. 

   For more information about the bucket Region, see [General purpose buckets overview](UsingBucket.md).

1. For **Block Public Access settings for this bucket**, keep the default settings (**Block *all* public access** is enabled). 

   We recommend that you keep all Block Public Access settings enabled unless you need to turn off one or more of them for your use case. For more information about blocking public access, see [Blocking public access to your Amazon S3 storage](access-control-block-public-access.md).

1. For the remaining settings, keep the defaults. 

   (Optional) If you want to configure additional bucket settings for your specific use case, see [Creating a general purpose bucket](create-bucket-overview.md).

1. Choose **Create bucket**.
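If you work from code instead of the console, the bucket creation above can be sketched with boto3. The bucket name and Region below are the tutorial examples; the `put_public_access_block` call mirrors the console default explicitly.

```python
def bucket_args(name: str, region: str) -> dict:
    """Build create_bucket arguments; us-east-1 is the default Region and
    must not be passed as a LocationConstraint."""
    args = {"Bucket": name}
    if region != "us-east-1":
        args["CreateBucketConfiguration"] = {"LocationConstraint": region}
    return args

def create_tutorial_bucket(name: str = "tutorial-bucket",
                           region: str = "us-east-1") -> None:
    """Create the source bucket and keep all public access blocked (sketch)."""
    import boto3  # imported here so bucket_args stays usable without the SDK
    s3 = boto3.client("s3", region_name=region)
    s3.create_bucket(**bucket_args(name, region))
    s3.put_public_access_block(
        Bucket=name,
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True, "IgnorePublicAcls": True,
            "BlockPublicPolicy": True, "RestrictPublicBuckets": True,
        },
    )
```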

## Step 2: Upload a file to the S3 bucket
<a name="ol-pii-step2"></a>

Upload a text file containing known PII data of various types, such as names, banking information, phone numbers, and SSNs, to the S3 bucket as the original data that you will redact PII from later in this tutorial. 

For example, you can upload the following `tutorial.txt` file. This is an example input file from Amazon Comprehend.

```
Hello Zhang Wei, I am John. Your AnyCompany Financial Services, 
LLC credit card account 1111-0000-1111-0008 has a minimum payment 
of $24.53 that is due by July 31st. Based on your autopay settings, 
we will withdraw your payment on the due date from your 
bank account number XXXXXX1111 with the routing number XXXXX0000. 

Your latest statement was mailed to 100 Main Street, Any City, 
WA 98121. 
After your payment is received, you will receive a confirmation 
text message at 206-555-0100. 
If you have questions about your bill, AnyCompany Customer Service 
is available by phone at 206-555-0199 or 
email at support@anycompany.com.
```

**To upload a file to a bucket**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Buckets**.

1. In the **Buckets** list, choose the name of the bucket that you created in [Step 1](#ol-pii-step1) (for example, **tutorial-bucket**) to upload your file to.

1. On the **Objects** tab for your bucket, choose **Upload**.

1. On the **Upload** page, under **Files and folders**, choose **Add files**.

1. Choose a file to upload, and then choose **Open**. For example, you can upload the `tutorial.txt` file example mentioned earlier.

1. Choose **Upload**.
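Equivalently, the upload can be done with a short boto3 call. The file path, bucket, and key below are the tutorial examples.

```python
def upload_sample(bucket: str = "tutorial-bucket",
                  local_path: str = "tutorial.txt",
                  key: str = "tutorial.txt") -> None:
    """Upload the sample PII file to the source bucket (sketch)."""
    import boto3
    boto3.client("s3").upload_file(local_path, bucket, key)
```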

## Step 3: Create an S3 access point
<a name="ol-pii-step3"></a>

To use an S3 Object Lambda Access Point to access and transform the original data, you must create an S3 access point and associate it with the S3 bucket that you created in [Step 1](#ol-pii-step1). The access point must be in the same AWS Region as the objects you want to transform.

Later in this tutorial, you'll use this access point as a supporting access point for your Object Lambda Access Point. 

**To create an access point**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Access Points**.

1. On the **Access Points** page, choose **Create access point**.

1. In the **Access point name** field, enter the name (for example, **tutorial-pii-access-point**) for the access point.

   For more information about naming access points, see [Naming rules for access points](access-points-restrictions-limitations-naming-rules.md#access-points-names).

1. In the **Data source** field, enter the name of the bucket that you created in [Step 1](#ol-pii-step1) (for example, **tutorial-bucket**). S3 attaches the access point to this bucket. 

   (Optional) You can choose **Browse S3** to browse and search the buckets in your account. If you choose **Browse S3**, choose the desired bucket, and then choose **Choose path** to populate the **Bucket name** field with that bucket's name.

1. For **Network origin**, choose **Internet**. 

   For more information about network origins for access points, see [Creating access points restricted to a virtual private cloud](access-points-vpc.md).

1. By default, all block public access settings are turned on for your access point. We recommend that you keep **Block *all* public access** enabled. For more information, see [Managing public access to access points for general purpose buckets](access-points-bpa-settings.md).

1. For all other access point settings, keep the default settings.

   (Optional) You can modify the access point settings to support your use case. For this tutorial, we recommend keeping the default settings. 

   (Optional) If you need to manage access to your access point, you can specify an access point policy. For more information, see [Policy examples for access points](access-points-policies.md#access-points-policy-examples). 

1. Choose **Create access point**.
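A boto3 sketch of the same step follows; the returned ARN is what the Object Lambda Access Point in Step 5 references as its supporting access point. The account ID shown in the helper's docstring format is a placeholder.

```python
def access_point_arn(account_id: str, region: str, name: str) -> str:
    """Build a standard S3 access point ARN."""
    return f"arn:aws:s3:{region}:{account_id}:accesspoint/{name}"

def create_supporting_access_point(account_id: str,
                                   bucket: str = "tutorial-bucket",
                                   name: str = "tutorial-pii-access-point",
                                   region: str = "us-east-1") -> str:
    """Create the supporting access point on the source bucket and return its ARN."""
    import boto3  # imported here so access_point_arn stays usable without the SDK
    s3control = boto3.client("s3control", region_name=region)
    resp = s3control.create_access_point(AccountId=account_id, Name=name, Bucket=bucket)
    return resp["AccessPointArn"]
```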

## Step 4: Configure and deploy a prebuilt Lambda function
<a name="ol-pii-step4"></a>

To redact PII data, configure and deploy the prebuilt AWS Lambda function `ComprehendPiiRedactionS3ObjectLambda` for use with your S3 Object Lambda Access Point.

**To configure and deploy the Lambda function**

1. Sign in to the AWS Management Console and view the [https://console.aws.amazon.com/lambda/home#/create/app?applicationId=arn:aws:serverlessrepo:us-east-1:839782855223:applications/ComprehendPiiRedactionS3ObjectLambda](https://console.aws.amazon.com/lambda/home#/create/app?applicationId=arn:aws:serverlessrepo:us-east-1:839782855223:applications/ComprehendPiiRedactionS3ObjectLambda) function in the AWS Serverless Application Repository.

1. For **Application settings**, under **Application name**, keep the default value (`ComprehendPiiRedactionS3ObjectLambda`) for this tutorial.

   (Optional) You can enter the name that you want to give to this application. You might want to do this if you plan to configure multiple Lambda functions for different access needs for the same shared dataset.

1. For **MaskCharacter**, keep the default value (`*`). The mask character replaces each character in the redacted PII entity. 

1. For **MaskMode**, keep the default value (**MASK**). The **MaskMode** value specifies whether the PII entity is redacted with the `MASK` character or the `PII_ENTITY_TYPE` value.

1. To redact the specified types of data, for **PiiEntityTypes**, keep the default value **ALL**. The **PiiEntityTypes** value specifies the PII entity types to be considered for redaction. 

   For more information about the list of supported PII entity types, see [Detect Personally Identifiable Information (PII)](https://docs.aws.amazon.com/comprehend/latest/dg/how-pii.html) in the *Amazon Comprehend Developer Guide*.

1. Keep the remaining settings set to the defaults.

   (Optional) If you want to configure additional settings for your specific use case, see the **Readme file** section on the left side of the page.

1. Select the check box next to **I acknowledge that this app creates custom IAM roles**.

1. Choose **Deploy**.

1. On the new application's page, under **Resources**, choose the **Logical ID** of the Lambda function that you deployed to review the function on the Lambda function page.

## Step 5: Create an S3 Object Lambda Access Point
<a name="ol-pii-step5"></a>

An S3 Object Lambda Access Point provides the flexibility to invoke a Lambda function directly from an S3 GET request so that the function can redact PII data retrieved from an S3 access point. When you create and configure an S3 Object Lambda Access Point, you must specify the redacting Lambda function to invoke. You can also provide custom parameters as a JSON payload, which S3 Object Lambda passes to the function in the event context. 

The event context provides information about the request being made in the event passed from S3 Object Lambda to Lambda. For more information about all the fields in the event context, see [Event context format and usage](olap-event-context.md).
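For orientation, a trimmed example of the event that S3 Object Lambda passes to the function looks roughly like the following. The values are placeholders, and several fields (such as `userRequest` and `userIdentity`) are omitted here; see the event context documentation linked above for the full format.

```
{
    "xAmzRequestId": "requestId",
    "getObjectContext": {
        "inputS3Url": "<presigned URL for the original object>",
        "outputRoute": "io-use1-001",
        "outputToken": "OutputToken"
    },
    "configuration": {
        "accessPointArn": "arn:aws:s3-object-lambda:us-east-1:111122223333:accesspoint/tutorial-pii-object-lambda-accesspoint",
        "supportingAccessPointArn": "arn:aws:s3:us-east-1:111122223333:accesspoint/tutorial-pii-access-point",
        "payload": ""
    },
    "protocolVersion": "1.00"
}
```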

**To create an S3 Object Lambda Access Point**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Object Lambda Access Points**.

1. On the **Object Lambda Access Points** page, choose **Create Object Lambda Access Point**.

1. For **Object Lambda Access Point name**, enter the name that you want to use for the Object Lambda Access Point (for example, **tutorial-pii-object-lambda-accesspoint**). 

1. For **Supporting Access Point**, enter or browse to the standard access point that you created in [Step 3](#ol-pii-step3) (for example, **tutorial-pii-access-point**), and then choose **Choose supporting Access Point**. 

1. For **S3 APIs**, to retrieve objects from the S3 bucket for the Lambda function to process, select **GetObject**.

1. For **Invoke Lambda function**, you can choose either of the following two options for this tutorial. 
   + Choose **Choose from functions in your account** and choose the Lambda function that you deployed in [Step 4](#ol-pii-step4) (for example, **serverlessrepo-ComprehendPiiRedactionS3ObjectLambda**) from the **Lambda function** dropdown list.
   + Choose **Enter ARN**, and then enter the Amazon Resource Name (ARN) of the Lambda function that you created in [Step 4](#ol-pii-step4).

1. For **Lambda function version**, choose **\$LATEST** (the latest version of the Lambda function that you deployed in [Step 4](#ol-pii-step4)).

1. (Optional) If you need your Lambda function to recognize and process GET requests with range and part number headers, select **Lambda function supports requests using range** and **Lambda function supports requests using part numbers**. Otherwise, clear these two check boxes.

   For more information about how to use range or part numbers with S3 Object Lambda, see [Working with Range and partNumber headers](range-get-olap.md).

1. (Optional) Under **Payload - *optional***, add JSON text to provide your Lambda function with additional information.

   A payload is optional JSON text that you can provide to your Lambda function as input for all invocations coming from a specific S3 Object Lambda Access Point. To customize the behaviors for multiple Object Lambda Access Points that invoke the same Lambda function, you can configure payloads with different parameters, thereby extending the flexibility of your Lambda function.

   For more information about payload, see [Event context format and usage](olap-event-context.md).

1. (Optional) For **Request metrics - *optional***, choose **Disable** or **Enable** to add Amazon S3 monitoring to your Object Lambda Access Point. Request metrics are billed at the standard Amazon CloudWatch rate. For more information, see [CloudWatch pricing](https://aws.amazon.com/cloudwatch/pricing/).

1. Under **Object Lambda Access Point policy - *optional***, keep the default setting. 

   (Optional) You can set a resource policy that grants `GetObject` permission for requests made through the specified Object Lambda Access Point. 

1. Keep the remaining settings set to the defaults, and choose **Create Object Lambda Access Point**.
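The console configuration above corresponds to a `CreateAccessPointForObjectLambda` call. The following is a minimal boto3 sketch; the account ID and the supporting access point and function ARNs are placeholders for the values from Steps 3 and 4.

```python
def create_object_lambda_access_point(account_id: str,
                                      supporting_ap_arn: str,
                                      function_arn: str,
                                      name: str = "tutorial-pii-object-lambda-accesspoint",
                                      region: str = "us-east-1") -> None:
    """Create the Object Lambda Access Point that routes GetObject requests
    through the redacting Lambda function (sketch of the console steps above)."""
    import boto3
    s3control = boto3.client("s3control", region_name=region)
    s3control.create_access_point_for_object_lambda(
        AccountId=account_id,
        Name=name,
        Configuration={
            "SupportingAccessPoint": supporting_ap_arn,
            "TransformationConfigurations": [{
                "Actions": ["GetObject"],
                "ContentTransformation": {
                    "AwsLambda": {"FunctionArn": function_arn},
                },
            }],
        },
    )
```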

## Step 6: Use the S3 Object Lambda Access Point to retrieve the redacted file
<a name="ol-pii-step6"></a>

Now, S3 Object Lambda is ready to redact PII data from your original file. 

**To use the S3 Object Lambda Access Point to retrieve the redacted file**

When you request to retrieve a file through your S3 Object Lambda Access Point, you make a `GetObject` API call to S3 Object Lambda. S3 Object Lambda invokes the Lambda function to redact your PII data and returns the transformed data as the response to the standard S3 `GetObject` API call.

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Object Lambda Access Points**.

1. On the **Object Lambda Access Points** page, choose the S3 Object Lambda Access Point that you created in [Step 5](#ol-pii-step5) (for example, **tutorial-pii-object-lambda-accesspoint**).

1. On the **Objects** tab of your S3 Object Lambda Access Point, select the file that has the same name (for example, `tutorial.txt`) as the one that you uploaded to the S3 bucket in [Step 2](#ol-pii-step2). 

   This file should contain all the transformed data.

1. To view the transformed data, choose **Open** or **Download**.

    You should be able to see the redacted file, as shown in the following example. 

   ```
   Hello *********. Your AnyCompany Financial Services, 
   LLC credit card account ******************* has a minimum payment 
   of $24.53 that is due by *********. Based on your autopay settings, 
   we will withdraw your payment on the due date from your 
   bank account ********** with the routing number *********. 
   
   Your latest statement was mailed to **********************************. 
   After your payment is received, you will receive a confirmation 
   text message at ************. 
   If you have questions about your bill, AnyCompany Customer Service 
   is available by phone at ************ or 
   email at **********************.
   ```
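The same retrieval can be sketched programmatically: a standard `GetObject` call becomes an Object Lambda request simply by using the Object Lambda Access Point ARN as the `Bucket` value. The account ID below is a placeholder.

```python
def olap_arn(account_id: str, region: str, name: str) -> str:
    """Build an Object Lambda Access Point ARN."""
    return f"arn:aws:s3-object-lambda:{region}:{account_id}:accesspoint/{name}"

def get_redacted_text(account_id: str, region: str, olap_name: str, key: str) -> str:
    """Fetch an object through the Object Lambda Access Point; S3 Object Lambda
    invokes the redacting function and returns the transformed body."""
    import boto3  # imported here so olap_arn stays usable without the SDK
    s3 = boto3.client("s3", region_name=region)
    resp = s3.get_object(Bucket=olap_arn(account_id, region, olap_name), Key=key)
    return resp["Body"].read().decode("utf-8")
```

For example, `get_redacted_text("111122223333", "us-east-1", "tutorial-pii-object-lambda-accesspoint", "tutorial.txt")` returns the redacted text shown in the preceding example.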

## Step 7: Clean up
<a name="ol-pii-step7"></a>

If you redacted your data through S3 Object Lambda only as a learning exercise, delete the AWS resources that you allocated so that you no longer accrue charges. 

**Topics**
+ [Delete the Object Lambda Access Point](#ol-pii-step8-delete-olap)
+ [Delete the S3 access point](#ol-pii-step8-delete-ap)
+ [Delete the Lambda function](#ol-pii-step8-delete-lambda-function)
+ [Delete the CloudWatch log group](#ol-pii-step8-delete-cloudwatch)
+ [Delete the original file in the S3 source bucket](#ol-pii-step8-delete-file)
+ [Delete the S3 source bucket](#ol-pii-step8-delete-bucket)
+ [Delete the IAM role for your Lambda function](#ol-pii-step8-delete-lambda-role)
+ [Delete the customer managed policy for your IAM user](#ol-pii-step8-delete-function-policy)
+ [Delete the IAM user](#ol-pii-step8-delete-user)
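As in the earlier tutorial, the cleanup can also be scripted. The sketch below assumes the example resource names from this tutorial; note that because the Serverless Application Repository deployed the function through an AWS CloudFormation stack, deleting that stack removes the function and its role together. The stack name shown follows the usual `serverlessrepo-<application name>` pattern, but confirm it in the CloudFormation console before running anything like this.

```python
def cleanup_pii_tutorial(account_id: str, region: str = "us-east-1") -> None:
    """Delete the PII-tutorial resources in dependency order (sketch only)."""
    import boto3

    s3control = boto3.client("s3control", region_name=region)
    # The Object Lambda Access Point must be deleted before its supporting access point.
    s3control.delete_access_point_for_object_lambda(
        AccountId=account_id, Name="tutorial-pii-object-lambda-accesspoint")
    s3control.delete_access_point(AccountId=account_id, Name="tutorial-pii-access-point")

    # Deleting the deployment stack removes the Lambda function and its role.
    boto3.client("cloudformation", region_name=region).delete_stack(
        StackName="serverlessrepo-ComprehendPiiRedactionS3ObjectLambda")

    # A bucket must be empty before it can be deleted.
    s3 = boto3.client("s3", region_name=region)
    s3.delete_object(Bucket="tutorial-bucket", Key="tutorial.txt")
    s3.delete_bucket(Bucket="tutorial-bucket")
```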

### Delete the Object Lambda Access Point
<a name="ol-pii-step8-delete-olap"></a>

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Object Lambda Access Points**.

1. On the **Object Lambda Access Points** page, choose the option button to the left of the S3 Object Lambda Access Point that you created in [Step 5](#ol-pii-step5) (for example, **tutorial-pii-object-lambda-accesspoint**).

1. Choose **Delete**.

1. Confirm that you want to delete your Object Lambda Access Point by entering its name in the text field that appears, and then choose **Delete**.
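If you prefer to clean up from the command line, the AWS CLI provides an equivalent operation in the `s3control` namespace. The account ID below is a placeholder; substitute your own AWS account ID and the name of the Object Lambda Access Point that you created.

```
# Delete the Object Lambda Access Point (replace the account ID and name).
aws s3control delete-access-point-for-object-lambda \
    --account-id 111122223333 \
    --name tutorial-pii-object-lambda-accesspoint
```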

### Delete the S3 access point
<a name="ol-pii-step8-delete-ap"></a>

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Access Points**.

1. Navigate to the access point that you created in [Step 3](#ol-pii-step3) (for example, **tutorial-pii-access-point**), and choose the option button next to the name of the access point.

1. Choose **Delete**.

1. Confirm that you want to delete your access point by entering its name in the text field that appears, and then choose **Delete**.
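The same cleanup can be done with the AWS CLI. The account ID below is a placeholder; substitute your own account ID and the name of the access point that you created in Step 3.

```
# Delete the supporting S3 access point (replace the account ID and name).
aws s3control delete-access-point \
    --account-id 111122223333 \
    --name tutorial-pii-access-point
```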

### Delete the Lambda function
<a name="ol-pii-step8-delete-lambda-function"></a>

1. In the AWS Lambda console at [https://console.aws.amazon.com/lambda/](https://console.aws.amazon.com/lambda/), choose **Functions** in the left navigation pane. 

1. Choose the function that you created in [Step 4](#ol-pii-step4) (for example, **serverlessrepo-ComprehendPiiRedactionS3ObjectLambda**).

1. Choose **Actions**, and then choose **Delete**.

1. In the **Delete function** dialog box, choose **Delete**.
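To delete the function from the command line instead, you can use the AWS CLI. The function name below is the example name from Step 4; the name of your deployed function may include an additional suffix, so substitute the exact name shown in the Lambda console.

```
# Delete the Lambda function (replace with your function's exact name).
aws lambda delete-function \
    --function-name serverlessrepo-ComprehendPiiRedactionS3ObjectLambda
```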

### Delete the CloudWatch log group
<a name="ol-pii-step8-delete-cloudwatch"></a>

1. Open the CloudWatch console at [https://console.aws.amazon.com/cloudwatch/](https://console.aws.amazon.com/cloudwatch/).

1. In the left navigation pane, choose **Log groups**.

1. Find the log group whose name ends with the name of the Lambda function that you deployed in [Step 4](#ol-pii-step4) (for example, **serverlessrepo-ComprehendPiiRedactionS3ObjectLambda**).

1. Choose **Actions**, and then choose **Delete log group(s)**.

1. In the **Delete log group(s)** dialog box, choose **Delete**.
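Lambda log groups follow the naming pattern `/aws/lambda/function-name`, so you can also delete the log group with the AWS CLI. The function name below is the example from Step 4; substitute the exact log group name shown in the CloudWatch console.

```
# Delete the Lambda function's log group (replace with the exact log group name).
aws logs delete-log-group \
    --log-group-name /aws/lambda/serverlessrepo-ComprehendPiiRedactionS3ObjectLambda
```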

### Delete the original file in the S3 source bucket
<a name="ol-pii-step8-delete-file"></a>

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Buckets**.

1. In the **Bucket name** list, choose the name of the bucket that you uploaded the original file to in [Step 2](#ol-pii-step2) (for example, **tutorial-bucket**).

1. Select the check box to the left of the name of the object that you want to delete (for example, `tutorial.txt`).

1. Choose **Delete**.

1. On the **Delete objects** page, in the **Permanently delete objects?** section, confirm that you want to delete this object by entering **permanently delete** in the text box.

1. Choose **Delete objects**.
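You can also delete the object with the AWS CLI. The bucket and key names below are the example names from this tutorial; substitute your own.

```
# Delete the original file from the S3 source bucket.
aws s3api delete-object \
    --bucket tutorial-bucket \
    --key tutorial.txt
```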

### Delete the S3 source bucket
<a name="ol-pii-step8-delete-bucket"></a>

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Buckets**.

1. In the **Buckets** list, choose the option button next to the name of the bucket that you created in [Step 1](#ol-pii-step1) (for example, **tutorial-bucket**).

1. Choose **Delete**.

1. On the **Delete bucket** page, confirm that you want to delete the bucket by entering the bucket name in the text field, and then choose **Delete bucket**.
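The equivalent AWS CLI command is shown below. A bucket must be empty before it can be deleted, so remove any remaining objects first. The bucket name is the example name from this tutorial; substitute your own.

```
# Delete the S3 source bucket (the bucket must be empty).
aws s3api delete-bucket --bucket tutorial-bucket
```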

### Delete the IAM role for your Lambda function
<a name="ol-pii-step8-delete-lambda-role"></a>

1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. In the left navigation pane, choose **Roles**, and then select the check box next to the role name that you want to delete. The role name starts with the name of the Lambda function that you deployed in [Step 4](#ol-pii-step4) (for example, **serverlessrepo-ComprehendPiiRedactionS3ObjectLambda**).

1. Choose **Delete**.

1. In the **Delete** dialog box, enter the role name in the text input field to confirm deletion. Then, choose **Delete**.
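From the command line, a role must have no attached policies before it can be deleted, so you first list and detach them. The role name and policy ARN below are placeholders; substitute the values for the role created by your Step 4 deployment.

```
# List the policies attached to the role, then detach each one.
aws iam list-attached-role-policies --role-name YOUR_LAMBDA_ROLE_NAME
aws iam detach-role-policy \
    --role-name YOUR_LAMBDA_ROLE_NAME \
    --policy-arn ATTACHED_POLICY_ARN

# Delete the role after all policies are detached.
aws iam delete-role --role-name YOUR_LAMBDA_ROLE_NAME
```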

### Delete the customer managed policy for your IAM user
<a name="ol-pii-step8-delete-function-policy"></a>

1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. In the left navigation pane, choose **Policies**.

1. On the **Policies** page, enter the name of the customer managed policy that you created in the [Prerequisites](#ol-pii-prerequisites) (for example, **tutorial-serverless-application-repository**) in the search box to filter the list of policies. Select the option button next to the name of the policy that you want to delete.

1. Choose **Actions**, and then choose **Delete**.

1. Confirm that you want to delete this policy by entering its name in the text field that appears, and then choose **Delete**.
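With the AWS CLI, a customer managed policy must first be detached from all users, groups, and roles (and any non-default policy versions deleted) before it can be deleted. The account ID below is a placeholder; the policy name is the example from the Prerequisites.

```
# Delete the customer managed policy (detach it from all entities first).
aws iam delete-policy \
    --policy-arn arn:aws:iam::111122223333:policy/tutorial-serverless-application-repository
```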

### Delete the IAM user
<a name="ol-pii-step8-delete-user"></a>

1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. In the left navigation pane, choose **Users**, and then select the check box next to the user name that you want to delete.

1. At the top of the page, choose **Delete**.

1. In the **Delete *user name*?** dialog box, enter the user name in the text input field to confirm the deletion of the user. Choose **Delete**.
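From the command line, an IAM user must have no attached policies, access keys, or group memberships before it can be deleted; the console handles this for you, but the CLI does not. The user name below is a placeholder; substitute your own.

```
# Delete the IAM user after removing its policies, keys, and group memberships.
aws iam delete-user --user-name YOUR_TUTORIAL_USER_NAME
```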

## Next steps
<a name="ol-pii-next-steps"></a>

After completing this tutorial, you can further explore the following related use cases:
+ You can create multiple S3 Object Lambda Access Points, each enabled with a prebuilt Lambda function that is configured to redact different types of PII, depending on the data accessors' business needs. 

  Each type of user assumes an IAM role and only has access to one S3 Object Lambda Access Point (managed through IAM policies). Then, you attach each `ComprehendPiiRedactionS3ObjectLambda` Lambda function configured for a different redaction use case to a different S3 Object Lambda Access Point. For each S3 Object Lambda Access Point, you can have a supporting S3 access point to read data from an S3 bucket that stores the shared dataset. 

  For more information about how to create an S3 bucket policy that allows users to read from the bucket only through S3 access points, see [Configuring IAM policies for using access points](access-points-policies.md).

  For more information about how to grant a user permission to access the Lambda function, the S3 access point, and the S3 Object Lambda Access Point, see [Configuring IAM policies for Object Lambda Access Points](olap-policies.md).
+ You can build your own Lambda function and use S3 Object Lambda with your customized Lambda function to meet your specific data needs.

  For example, to explore various data values, you can use S3 Object Lambda with your own Lambda function that uses additional [Amazon Comprehend features](https://aws.amazon.com/comprehend/features/), such as entity recognition, key phrase recognition, sentiment analysis, and document classification, to process data. You can also use S3 Object Lambda together with [Amazon Comprehend Medical](https://aws.amazon.com/comprehend/medical/), a HIPAA-eligible natural language processing (NLP) service, to analyze and extract data in a context-aware manner.

  For more information about how to transform data with S3 Object Lambda and your own Lambda function, see [Tutorial: Transforming data for your application with S3 Object Lambda](tutorial-s3-object-lambda-uppercase.md).

# Debugging and troubleshooting S3 Object Lambda
<a name="olap-debugging-lambda"></a>

**Note**  
As of November 7th, 2025, S3 Object Lambda is available only to existing customers that are currently using the service as well as to select AWS Partner Network (APN) partners. For capabilities similar to S3 Object Lambda, learn more here - [Amazon S3 Object Lambda availability change](https://docs.aws.amazon.com/AmazonS3/latest/userguide/amazons3-ol-change.html).

Requests to Amazon S3 Object Lambda Access Points might return an error response when something goes wrong during the invocation or execution of your Lambda function. These errors follow the same format as standard Amazon S3 errors. For information about S3 Object Lambda errors, see [S3 Object Lambda Error Code List](https://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html#S3ObjectLambdaErrorCodeList) in the *Amazon Simple Storage Service API Reference*.

For more information about general Lambda function debugging, see [Monitoring and troubleshooting Lambda applications](https://docs.aws.amazon.com/lambda/latest/dg/lambda-monitoring.html) in the *AWS Lambda Developer Guide*.

For information about standard Amazon S3 errors, see [Error Responses](https://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html) in the *Amazon Simple Storage Service API Reference*.

You can enable request metrics in Amazon CloudWatch for your Object Lambda Access Points. These metrics help you monitor the operational performance of your access point. You can enable request metrics during or after creation of your Object Lambda Access Point. For more information, see [S3 Object Lambda request metrics in CloudWatch](metrics-dimensions.md#olap-cloudwatch-metrics).

To get more granular logging about requests made to your Object Lambda Access Points, you can enable AWS CloudTrail data events. For more information, see [Logging data events for trails](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html) in the *AWS CloudTrail User Guide*.

For S3 Object Lambda tutorials, see the following:
+ [Tutorial: Transforming data for your application with S3 Object Lambda](tutorial-s3-object-lambda-uppercase.md)
+ [Tutorial: Detecting and redacting PII data with S3 Object Lambda and Amazon Comprehend](tutorial-s3-object-lambda-redact-pii.md)
+ [Tutorial: Using S3 Object Lambda to dynamically watermark images as they are retrieved](https://aws.amazon.com/getting-started/hands-on/amazon-s3-object-lambda-to-dynamically-watermark-images/?ref=docs_gateway/amazons3/transforming-objects.html)

For more information about standard access points, see [Managing access to shared datasets with access points](access-points.md). 

For information about working with buckets, see [General purpose buckets overview](UsingBucket.md). For information about working with objects, see [Amazon S3 objects overview](UsingObjects.md).