

# Process Amazon S3 event notifications with Lambda
<a name="with-s3"></a>

You can use Lambda to process [event notifications](https://docs.aws.amazon.com/AmazonS3/latest/userguide/NotificationHowTo.html) from Amazon Simple Storage Service. Amazon S3 can send an event to a Lambda function when an object is created or deleted. You configure notification settings on a bucket, and grant Amazon S3 permission to invoke a function in the function's resource-based permissions policy.

**Warning**  
If your Lambda function uses the same bucket that triggers it, it could cause the function to run in a loop. For example, if the bucket triggers a function each time an object is uploaded, and the function uploads an object to the bucket, then the function indirectly triggers itself. To avoid this, use two buckets, or configure the trigger to only apply to a prefix used for incoming objects.
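
The prefix approach mentioned in the warning can be expressed as an S3 notification configuration. The following is a minimal sketch in Python (the function ARN and the `incoming/` prefix are placeholder values); the resulting dictionary is the shape expected by the S3 `PutBucketNotificationConfiguration` API, for example via Boto3's `put_bucket_notification_configuration` method.

```python
def build_prefix_scoped_notification(function_arn, prefix="incoming/"):
    """Build an S3 notification configuration that only fires for object
    keys under the given prefix, so objects the function writes elsewhere
    in the same bucket don't retrigger it."""
    return {
        "LambdaFunctionConfigurations": [
            {
                "LambdaFunctionArn": function_arn,
                "Events": ["s3:ObjectCreated:*"],
                "Filter": {
                    "Key": {"FilterRules": [{"Name": "prefix", "Value": prefix}]}
                },
            }
        ]
    }
```

With this scoping, an object the function writes to another prefix (for example `processed/`) does not match the filter and does not invoke the function again.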

Amazon S3 invokes your function [asynchronously](invocation-async.md) with an event that contains details about the object. The following example shows an event that Amazon S3 sent when a deployment package was uploaded to Amazon S3.

**Example Amazon S3 notification event**  

```
{
  "Records": [
    {
      "eventVersion": "2.1",
      "eventSource": "aws:s3",
      "awsRegion": "us-east-2",
      "eventTime": "2019-09-03T19:37:27.192Z",
      "eventName": "ObjectCreated:Put",
      "userIdentity": {
        "principalId": "AWS:AIDAINPONIXQXHT3IKHL2"
      },
      "requestParameters": {
        "sourceIPAddress": "205.255.255.255"
      },
      "responseElements": {
        "x-amz-request-id": "D82B88E5F771F645",
        "x-amz-id-2": "vlR7PnpV2Ce81l0PRw6jlUpck7Jo5ZsQjryTjKlc5aLWGVHPZLj5NeC6qMa0emYBDXOo6QBU0Wo="
      },
      "s3": {
        "s3SchemaVersion": "1.0",
        "configurationId": "828aa6fc-f7b5-4305-8584-487c791949c1",
        "bucket": {
          "name": "amzn-s3-demo-bucket",
          "ownerIdentity": {
            "principalId": "A3I5XTEXAMAI3E"
          },
          "arn": "arn:aws:s3:::amzn-s3-demo-bucket"
        },
        "object": {
          "key": "b21b84d653bb07b05b1e6b33684dc11b",
          "size": 1305107,
          "eTag": "b21b84d653bb07b05b1e6b33684dc11b",
          "sequencer": "0C0F6F405D6ED209E1"
        }
      }
    }
  ]
}
```
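
Note that the object key in the event is URL-encoded. A handler typically decodes it before calling Amazon S3; a minimal sketch in Python:

```python
import urllib.parse


def extract_bucket_and_key(event):
    """Return the bucket name and URL-decoded object key from an
    S3 notification event shaped like the example above."""
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
    return bucket, key
```

For example, a key of `my+folder%2FHappy+Face.jpg` in the event decodes to `my folder/Happy Face.jpg`, the key you would pass to S3 API calls.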

To invoke your function, Amazon S3 needs permission from the function's [resource-based policy](access-control-resource-based.md). When you configure an Amazon S3 trigger in the Lambda console, the console modifies the resource-based policy to allow Amazon S3 to invoke the function if the bucket name and account ID match. If you configure the notification in Amazon S3, you use the Lambda API to update the policy. You can also use the Lambda API to grant permission to another account, or restrict permission to a designated alias.
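
If you configure the notification outside the console, the statement the console would add can be granted through the Lambda `AddPermission` API. A minimal sketch of the arguments (the function name, bucket name, and account ID are placeholders):

```python
def s3_invoke_permission(function_name, bucket_name, account_id):
    """Arguments for the Lambda AddPermission API that let Amazon S3
    invoke the function. SourceArn and SourceAccount scope the grant so
    that only notifications from the named bucket in your own account
    are allowed."""
    return {
        "FunctionName": function_name,
        "StatementId": "s3-invoke-permission",
        "Action": "lambda:InvokeFunction",
        "Principal": "s3.amazonaws.com",
        "SourceArn": f"arn:aws:s3:::{bucket_name}",
        "SourceAccount": account_id,
    }
```

For example, with Boto3 you could call `boto3.client("lambda").add_permission(**s3_invoke_permission("my-function", "amzn-s3-demo-bucket", "123456789012"))`.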

If your function uses the AWS SDK to manage Amazon S3 resources, it also needs Amazon S3 permissions in its [execution role](lambda-intro-execution-role.md). 

**Topics**
+ [Tutorial: Using an Amazon S3 trigger to invoke a Lambda function](with-s3-example.md)
+ [Tutorial: Using an Amazon S3 trigger to create thumbnail images](with-s3-tutorial.md)

# Tutorial: Using an Amazon S3 trigger to invoke a Lambda function
<a name="with-s3-example"></a>

In this tutorial, you use the console to create a Lambda function and configure a trigger for an Amazon Simple Storage Service (Amazon S3) bucket. Every time that you add an object to your Amazon S3 bucket, your function runs and outputs the object type to Amazon CloudWatch Logs.

![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/services-s3-example/s3_tut_config.png)


This tutorial demonstrates how to:

1. Create an Amazon S3 bucket.

1. Create a Lambda function that returns the object type of objects in an Amazon S3 bucket.

1. Configure a Lambda trigger that invokes your function when objects are uploaded to your bucket.

1. Test your function, first with a dummy event, and then using the trigger.

By completing these steps, you’ll learn how to configure a Lambda function to run whenever objects are added to or deleted from an Amazon S3 bucket. You can complete this tutorial using only the AWS Management Console.

## Create an Amazon S3 bucket
<a name="with-s3-example-create-bucket"></a>

![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/services-s3-example/s3trigger_tut_steps1.png)


**To create an Amazon S3 bucket**

1. Open the [Amazon S3 console](https://console.aws.amazon.com/s3) and select the **General purpose buckets** page.

1. Select the AWS Region closest to your geographical location. You can change your Region using the drop-down list at the top of the screen. Later in the tutorial, you must create your Lambda function in the same Region.  
![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/console_region_select.png)

1. Choose **Create bucket**.

1. Under **General configuration**, do the following:

   1. For **Bucket type**, ensure **General purpose** is selected.

   1. For **Bucket name**, enter a globally unique name that meets the Amazon S3 [Bucket naming rules](https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucketnamingrules.html). Bucket names can contain only lower case letters, numbers, dots (.), and hyphens (-).

1. Leave all other options set to their default values and choose **Create bucket**.

## Upload a test object to your bucket
<a name="with-s3-example-upload-test-object"></a>

![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/services-s3-example/s3trigger_tut_steps2.png)


**To upload a test object**

1. Open the [Buckets](https://console.aws.amazon.com/s3/buckets) page of the Amazon S3 console and choose the bucket you created during the previous step.

1. Choose **Upload**.

1. Choose **Add files** and select the object that you want to upload. You can select any file (for example, `HappyFace.jpg`).

1. Choose **Open**, then choose **Upload**.

Later in the tutorial, you’ll test your Lambda function using this object.

## Create a permissions policy
<a name="with-s3-example-create-policy"></a>

![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/services-s3-example/s3trigger_tut_steps3.png)


Create a permissions policy that allows Lambda to get objects from an Amazon S3 bucket and to write to Amazon CloudWatch Logs. 

**To create the policy**

1. Open the [Policies page](https://console.aws.amazon.com/iam/home#/policies) of the IAM console.

1. Choose **Create Policy**.

1. Choose the **JSON** tab, and then paste the following custom policy into the JSON editor.

------
#### [ JSON ]

****  

   ```
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Effect": "Allow",
               "Action": [
                   "logs:PutLogEvents",
                   "logs:CreateLogGroup",
                   "logs:CreateLogStream"
               ],
               "Resource": "arn:aws:logs:*:*:*"
           },
           {
               "Effect": "Allow",
               "Action": [
                   "s3:GetObject"
               ],
               "Resource": "arn:aws:s3:::*/*"
           }
       ]
   }
   ```

------

1. Choose **Next: Tags**.

1. Choose **Next: Review**.

1. Under **Review policy**, for the policy **Name**, enter **s3-trigger-tutorial**.

1. Choose **Create policy**.
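
Note that the sample policy's `arn:aws:s3:::*/*` resource grants `s3:GetObject` on objects in every bucket in the account. For a tighter policy, you can scope the statement to the single bucket you created. A sketch of that variant as a Python builder (the bucket name is a placeholder):

```python
def build_scoped_policy(bucket_name):
    """Variant of the tutorial's permissions policy with s3:GetObject
    scoped to objects in a single bucket instead of arn:aws:s3:::*/*."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "logs:PutLogEvents",
                    "logs:CreateLogGroup",
                    "logs:CreateLogStream",
                ],
                "Resource": "arn:aws:logs:*:*:*",
            },
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject"],
                "Resource": f"arn:aws:s3:::{bucket_name}/*",
            },
        ],
    }
```

For example, `json.dumps(build_scoped_policy("amzn-s3-demo-bucket"), indent=4)` produces JSON you could paste into the editor in place of the sample policy.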

## Create an execution role
<a name="with-s3-example-create-role"></a>

![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/services-s3-example/s3trigger_tut_steps4.png)


An [execution role](lambda-intro-execution-role.md) is an AWS Identity and Access Management (IAM) role that grants a Lambda function permission to access AWS services and resources. In this step, create an execution role using the permissions policy that you created in the previous step.

**To create an execution role and attach your custom permissions policy**

1. Open the [Roles page](https://console.aws.amazon.com/iam/home#/roles) of the IAM console.

1. Choose **Create role**.

1. For the type of trusted entity, choose **AWS service**, then for the use case, choose **Lambda**.

1. Choose **Next**.

1. In the policy search box, enter **s3-trigger-tutorial**.

1. In the search results, select the policy that you created (`s3-trigger-tutorial`), and then choose **Next**.

1. Under **Role details**, for the **Role name**, enter **lambda-s3-trigger-role**, then choose **Create role**.

## Create the Lambda function
<a name="with-s3-example-create-function"></a>

![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/services-s3-example/s3trigger_tut_steps5.png)


Create a Lambda function in the console using the Python 3.14 runtime.

**To create the Lambda function**

1. Open the [Functions page](https://console.aws.amazon.com/lambda/home#/functions) of the Lambda console.

1. Make sure you're working in the same AWS Region you created your Amazon S3 bucket in. You can change your Region using the drop-down list at the top of the screen.  
![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/console_region_select.png)

1. Choose **Create function**.

1. Choose **Author from scratch**.

1. Under **Basic information**, do the following:

   1. For **Function name**, enter `s3-trigger-tutorial`.

   1. For **Runtime**, choose **Python 3.14**.

   1. For **Architecture**, choose **x86\_64**.

1. In the **Change default execution role** tab, do the following:

   1. Expand the tab, then choose **Use an existing role**.

   1. Select the `lambda-s3-trigger-role` you created earlier.

1. Choose **Create function**.

## Deploy the function code
<a name="with-s3-example-deploy-code"></a>

![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/services-s3-example/s3trigger_tut_steps6.png)


This tutorial uses the Python 3.14 runtime, but we’ve also provided example code files for other runtimes. You can select the tab in the following box to see the code for the runtime you’re interested in.

The Lambda function retrieves the key name of the uploaded object and the name of the bucket from the `event` parameter it receives from Amazon S3. The function then uses the [get\_object](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3/client/get_object.html) method from the AWS SDK for Python (Boto3) to retrieve the object's metadata, including the content type (MIME type) of the uploaded object.

**To deploy the function code**

1. Choose the **Python** tab in the following box and copy the code.

------
#### [ .NET ]

**SDK for .NET**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-s3-to-lambda) repository. 
Consuming an S3 event with Lambda using .NET.  

   ```
   // Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
   // SPDX-License-Identifier: Apache-2.0
    using System.Threading.Tasks;
   using Amazon.Lambda.Core;
   using Amazon.S3;
   using System;
   using Amazon.Lambda.S3Events;
   using System.Web;
   
   // Assembly attribute to enable the Lambda function's JSON input to be converted into a .NET class.
   [assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.SystemTextJson.DefaultLambdaJsonSerializer))]
   
   namespace S3Integration
   {
       public class Function
       {
           private static AmazonS3Client _s3Client;
           public Function() : this(null)
           {
           }
   
           internal Function(AmazonS3Client s3Client)
           {
               _s3Client = s3Client ?? new AmazonS3Client();
           }
   
           public async Task<string> Handler(S3Event evt, ILambdaContext context)
           {
               try
               {
                   if (evt.Records.Count <= 0)
                   {
                       context.Logger.LogLine("Empty S3 Event received");
                       return string.Empty;
                   }
   
                   var bucket = evt.Records[0].S3.Bucket.Name;
                   var key = HttpUtility.UrlDecode(evt.Records[0].S3.Object.Key);
   
                   context.Logger.LogLine($"Request is for {bucket} and {key}");
   
                   var objectResult = await _s3Client.GetObjectAsync(bucket, key);
   
                   context.Logger.LogLine($"Returning {objectResult.Key}");
   
                   return objectResult.Key;
               }
               catch (Exception e)
               {
                   context.Logger.LogLine($"Error processing request - {e.Message}");
   
                   return string.Empty;
               }
           }
       }
   }
   ```

------
#### [ Go ]

**SDK for Go V2**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-s3-to-lambda) repository. 
Consuming an S3 event with Lambda using Go.  

   ```
   // Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
   // SPDX-License-Identifier: Apache-2.0
   package main
   
   import (
   	"context"
   	"log"
   
   	"github.com/aws/aws-lambda-go/events"
   	"github.com/aws/aws-lambda-go/lambda"
   	"github.com/aws/aws-sdk-go-v2/config"
   	"github.com/aws/aws-sdk-go-v2/service/s3"
   )
   
   func handler(ctx context.Context, s3Event events.S3Event) error {
   	sdkConfig, err := config.LoadDefaultConfig(ctx)
   	if err != nil {
   		log.Printf("failed to load default config: %s", err)
   		return err
   	}
   	s3Client := s3.NewFromConfig(sdkConfig)
   
   	for _, record := range s3Event.Records {
   		bucket := record.S3.Bucket.Name
   		key := record.S3.Object.URLDecodedKey
   		headOutput, err := s3Client.HeadObject(ctx, &s3.HeadObjectInput{
   			Bucket: &bucket,
   			Key:    &key,
   		})
   		if err != nil {
   			log.Printf("error getting head of object %s/%s: %s", bucket, key, err)
   			return err
   		}
   		log.Printf("successfully retrieved %s/%s of type %s", bucket, key, *headOutput.ContentType)
   	}
   
   	return nil
   }
   
   func main() {
   	lambda.Start(handler)
   }
   ```

------
#### [ Java ]

**SDK for Java 2.x**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-s3-to-lambda) repository. 
Consuming an S3 event with Lambda using Java.  

   ```
   // Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
   // SPDX-License-Identifier: Apache-2.0
   package example;
   
   import software.amazon.awssdk.services.s3.model.HeadObjectRequest;
   import software.amazon.awssdk.services.s3.model.HeadObjectResponse;
   import software.amazon.awssdk.services.s3.S3Client;
   
   import com.amazonaws.services.lambda.runtime.Context;
   import com.amazonaws.services.lambda.runtime.RequestHandler;
   import com.amazonaws.services.lambda.runtime.events.S3Event;
   import com.amazonaws.services.lambda.runtime.events.models.s3.S3EventNotification.S3EventNotificationRecord;
   
   import org.slf4j.Logger;
   import org.slf4j.LoggerFactory;
   
   public class Handler implements RequestHandler<S3Event, String> {
       private static final Logger logger = LoggerFactory.getLogger(Handler.class);
       @Override
       public String handleRequest(S3Event s3event, Context context) {
           try {
             S3EventNotificationRecord record = s3event.getRecords().get(0);
             String srcBucket = record.getS3().getBucket().getName();
             String srcKey = record.getS3().getObject().getUrlDecodedKey();
   
             S3Client s3Client = S3Client.builder().build();
             HeadObjectResponse headObject = getHeadObject(s3Client, srcBucket, srcKey);
   
             logger.info("Successfully retrieved " + srcBucket + "/" + srcKey + " of type " + headObject.contentType());
   
             return "Ok";
           } catch (Exception e) {
             throw new RuntimeException(e);
           }
       }
   
       private HeadObjectResponse getHeadObject(S3Client s3Client, String bucket, String key) {
           HeadObjectRequest headObjectRequest = HeadObjectRequest.builder()
                   .bucket(bucket)
                   .key(key)
                   .build();
           return s3Client.headObject(headObjectRequest);
       }
   }
   ```

------
#### [ JavaScript ]

**SDK for JavaScript (v3)**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-s3-to-lambda) repository. 
Consuming an S3 event with Lambda using JavaScript.  

   ```
   import { S3Client, HeadObjectCommand } from "@aws-sdk/client-s3";
   
   const client = new S3Client();
   
   export const handler = async (event, context) => {
   
       // Get the object from the event and show its content type
       const bucket = event.Records[0].s3.bucket.name;
       const key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' '));
   
       try {
           const { ContentType } = await client.send(new HeadObjectCommand({
               Bucket: bucket,
               Key: key,
           }));
   
           console.log('CONTENT TYPE:', ContentType);
           return ContentType;
   
       } catch (err) {
           console.log(err);
           const message = `Error getting object ${key} from bucket ${bucket}. Make sure they exist and your bucket is in the same region as this function.`;
           console.log(message);
           throw new Error(message);
       }
   };
   ```
Consuming an S3 event with Lambda using TypeScript.  

   ```
   // Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
   // SPDX-License-Identifier: Apache-2.0
   import { S3Event } from 'aws-lambda';
   import { S3Client, HeadObjectCommand } from '@aws-sdk/client-s3';
   
   const s3 = new S3Client({ region: process.env.AWS_REGION });
   
   export const handler = async (event: S3Event): Promise<string | undefined> => {
     // Get the object from the event and show its content type
     const bucket = event.Records[0].s3.bucket.name;
     const key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' '));
     const params = {
       Bucket: bucket,
       Key: key,
     };
     try {
       const { ContentType } = await s3.send(new HeadObjectCommand(params));
       console.log('CONTENT TYPE:', ContentType);
       return ContentType;
     } catch (err) {
       console.log(err);
       const message = `Error getting object ${key} from bucket ${bucket}. Make sure they exist and your bucket is in the same region as this function.`;
       console.log(message);
       throw new Error(message);
     }
   };
   ```

------
#### [ PHP ]

**SDK for PHP**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-s3-to-lambda) repository. 
Consuming an S3 event with Lambda using PHP.  

   ```
   <?php
   
   use Bref\Context\Context;
   use Bref\Event\S3\S3Event;
   use Bref\Event\S3\S3Handler;
   use Bref\Logger\StderrLogger;
   
   require __DIR__ . '/vendor/autoload.php';
   
   
   class Handler extends S3Handler 
   {
       private StderrLogger $logger;
       public function __construct(StderrLogger $logger)
       {
           $this->logger = $logger;
       }
       
       public function handleS3(S3Event $event, Context $context) : void
       {
           $this->logger->info("Processing S3 records");
   
           // Get the object from the event and show its content type
           $records = $event->getRecords();
           
           foreach ($records as $record) 
           {
               $bucket = $record->getBucket()->getName();
               $key = urldecode($record->getObject()->getKey());
   
               try {
                   $fileSize = urldecode($record->getObject()->getSize());
                   echo "File Size: " . $fileSize . "\n";
                   // TODO: Implement your custom processing logic here
               } catch (Exception $e) {
                   echo $e->getMessage() . "\n";
                   echo 'Error getting object ' . $key . ' from bucket ' . $bucket . '. Make sure they exist and your bucket is in the same region as this function.' . "\n";
                   throw $e;
               }
           }
       }
   }
   
   $logger = new StderrLogger();
   return new Handler($logger);
   ```

------
#### [ Python ]

**SDK for Python (Boto3)**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-s3-to-lambda) repository. 
Consuming an S3 event with Lambda using Python.  

   ```
   # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
   # SPDX-License-Identifier: Apache-2.0
   import json
   import urllib.parse
   import boto3
   
   print('Loading function')
   
   s3 = boto3.client('s3')
   
   
   def lambda_handler(event, context):
       #print("Received event: " + json.dumps(event, indent=2))
   
       # Get the object from the event and show its content type
       bucket = event['Records'][0]['s3']['bucket']['name']
       key = urllib.parse.unquote_plus(event['Records'][0]['s3']['object']['key'], encoding='utf-8')
       try:
           response = s3.get_object(Bucket=bucket, Key=key)
           print("CONTENT TYPE: " + response['ContentType'])
           return response['ContentType']
       except Exception as e:
           print(e)
           print('Error getting object {} from bucket {}. Make sure they exist and your bucket is in the same region as this function.'.format(key, bucket))
           raise e
   ```

------
#### [ Ruby ]

**SDK for Ruby**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-s3-to-lambda) repository. 
Consuming an S3 event with Lambda using Ruby.  

   ```
   require 'json'
   require 'uri'
   require 'aws-sdk'
   
   puts 'Loading function'
   
   def lambda_handler(event:, context:)
      s3 = Aws::S3::Client.new(region: ENV['AWS_REGION']) # Lambda sets AWS_REGION automatically
     # puts "Received event: #{JSON.dump(event)}"
   
     # Get the object from the event and show its content type
     bucket = event['Records'][0]['s3']['bucket']['name']
     key = URI.decode_www_form_component(event['Records'][0]['s3']['object']['key'], Encoding::UTF_8)
     begin
       response = s3.get_object(bucket: bucket, key: key)
       puts "CONTENT TYPE: #{response.content_type}"
       return response.content_type
     rescue StandardError => e
       puts e.message
       puts "Error getting object #{key} from bucket #{bucket}. Make sure they exist and your bucket is in the same region as this function."
       raise e
     end
   end
   ```

------
#### [ Rust ]

**SDK for Rust**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-s3-to-lambda) repository. 
Consuming an S3 event with Lambda using Rust.  

   ```
   // Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
   // SPDX-License-Identifier: Apache-2.0
   use aws_lambda_events::event::s3::S3Event;
   use aws_sdk_s3::{Client};
   use lambda_runtime::{run, service_fn, Error, LambdaEvent};
   
   
   /// Main function
   #[tokio::main]
   async fn main() -> Result<(), Error> {
       tracing_subscriber::fmt()
           .with_max_level(tracing::Level::INFO)
           .with_target(false)
           .without_time()
           .init();
   
       // Initialize the AWS SDK for Rust
       let config = aws_config::load_from_env().await;
       let s3_client = Client::new(&config);
   
       let res = run(service_fn(|request: LambdaEvent<S3Event>| {
           function_handler(&s3_client, request)
       })).await;
   
       res
   }
   
   async fn function_handler(
       s3_client: &Client,
       evt: LambdaEvent<S3Event>
   ) -> Result<(), Error> {
        tracing::info!(records = ?evt.payload.records.len(), "Received request from S3");
   
        if evt.payload.records.is_empty() {
            tracing::info!("Empty S3 event received");
            return Ok(());
        }
   
       let bucket = evt.payload.records[0].s3.bucket.name.as_ref().expect("Bucket name to exist");
       let key = evt.payload.records[0].s3.object.key.as_ref().expect("Object key to exist");
   
       tracing::info!("Request is for {} and object {}", bucket, key);
   
       let s3_get_object_result = s3_client
           .get_object()
           .bucket(bucket)
           .key(key)
           .send()
           .await;
   
       match s3_get_object_result {
           Ok(_) => tracing::info!("S3 Get Object success, the s3GetObjectResult contains a 'body' property of type ByteStream"),
           Err(_) => tracing::info!("Failure with S3 Get Object request")
       }
   
       Ok(())
   }
   ```

------

1. In the **Code source** pane on the Lambda console, paste the code into the code editor, replacing the code that Lambda created.

1. In the **DEPLOY** section, choose **Deploy** to update your function's code:  
![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/getting-started-tutorial/deploy-console.png)

## Create the Amazon S3 trigger
<a name="with-s3-example-create-trigger"></a>

![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/services-s3-example/s3trigger_tut_steps7.png)


**To create the Amazon S3 trigger**

1. In the **Function overview** pane, choose **Add trigger**.  
![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/overview-trigger.png)

1. Select **S3**.

1. Under **Bucket**, select the bucket you created earlier in the tutorial.

1. Under **Event types**, be sure that **All object create events** is selected.

1. Under **Recursive invocation**, select the check box to acknowledge that using the same Amazon S3 bucket for input and output is not recommended.

1. Choose **Add**.

**Note**  
When you create an Amazon S3 trigger for a Lambda function using the Lambda console, Amazon S3 configures an [event notification](https://docs.aws.amazon.com/AmazonS3/latest/userguide/EventNotifications.html) on the bucket you specify. Before configuring this event notification, Amazon S3 performs a series of checks to confirm that the event destination exists and has the required IAM policies. Amazon S3 also performs these checks on any other event notifications configured for that bucket.  
Because of this check, if the bucket has previously configured event destinations for resources that no longer exist, or for resources that don't have the required permissions policies, Amazon S3 won't be able to create the new event notification. You'll see the following error message indicating that your trigger couldn't be created:  

```
An error occurred when creating the trigger: Unable to validate the following destination configurations.
```
You can see this error if you previously configured a trigger for another Lambda function using the same bucket, and you have since deleted the function or modified its permissions policies.
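
One way to troubleshoot is to fetch the bucket's current notification configuration (for example with the S3 `GetBucketNotificationConfiguration` API) and check whether each configured destination still exists. A small helper to extract the Lambda function ARNs from such a configuration, as a sketch:

```python
def configured_lambda_arns(notification_config):
    """Given a bucket notification configuration (the shape returned by
    GetBucketNotificationConfiguration), return the Lambda function ARNs
    it references, so you can check that each function still exists and
    still permits S3 to invoke it."""
    return [
        c["LambdaFunctionArn"]
        for c in notification_config.get("LambdaFunctionConfigurations", [])
    ]
```

Any ARN in the result that points to a deleted function, or to a function whose resource-based policy no longer allows Amazon S3, is a stale destination that must be removed before the new trigger can be created.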

## Test your Lambda function with a dummy event
<a name="with-s3-example-test-dummy-event"></a>

![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/services-s3-example/s3trigger_tut_steps8.png)


**To test the Lambda function with a dummy event**

1. In the Lambda console page for your function, choose the **Test** tab.  
![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/test-tab.png)

1. For **Event name**, enter `MyTestEvent`.

1. In the **Event JSON**, paste the following test event. Be sure to replace these values:
   + Replace `us-east-1` with the region you created your Amazon S3 bucket in.
   + Replace both instances of `amzn-s3-demo-bucket` with the name of your own Amazon S3 bucket.
   + Replace `test%2Fkey` with the name of the test object you uploaded to your bucket earlier (for example, `HappyFace.jpg`).

   ```
   {
     "Records": [
       {
         "eventVersion": "2.0",
         "eventSource": "aws:s3",
         "awsRegion": "us-east-1",
         "eventTime": "1970-01-01T00:00:00.000Z",
         "eventName": "ObjectCreated:Put",
         "userIdentity": {
           "principalId": "EXAMPLE"
         },
         "requestParameters": {
           "sourceIPAddress": "127.0.0.1"
         },
         "responseElements": {
           "x-amz-request-id": "EXAMPLE123456789",
           "x-amz-id-2": "EXAMPLE123/5678abcdefghijklambdaisawesome/mnopqrstuvwxyzABCDEFGH"
         },
         "s3": {
           "s3SchemaVersion": "1.0",
           "configurationId": "testConfigRule",
           "bucket": {
             "name": "amzn-s3-demo-bucket",
             "ownerIdentity": {
               "principalId": "EXAMPLE"
             },
             "arn": "arn:aws:s3:::amzn-s3-demo-bucket"
           },
           "object": {
             "key": "test%2Fkey",
             "size": 1024,
             "eTag": "0123456789abcdef0123456789abcdef",
             "sequencer": "0A1B2C3D4E5F678901"
           }
         }
       }
     ]
   }
   ```

1. Choose **Save**.

1. Choose **Test**.

1. If your function runs successfully, you’ll see output similar to the following in the **Execution results** tab.

   ```
   Response
   "image/jpeg"
   
   Function Logs
   START RequestId: 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 Version: $LATEST
   2021-02-18T21:40:59.280Z    12b3cae7-5f4e-415e-93e6-416b8f8b66e6    INFO    INPUT BUCKET AND KEY:  { Bucket: 'amzn-s3-demo-bucket', Key: 'HappyFace.jpg' }
   2021-02-18T21:41:00.215Z    12b3cae7-5f4e-415e-93e6-416b8f8b66e6    INFO    CONTENT TYPE: image/jpeg
   END RequestId: 12b3cae7-5f4e-415e-93e6-416b8f8b66e6
   REPORT RequestId: 12b3cae7-5f4e-415e-93e6-416b8f8b66e6    Duration: 976.25 ms    Billed Duration: 977 ms    Memory Size: 128 MB    Max Memory Used: 90 MB    Init Duration: 430.47 ms        
   
   Request ID
   12b3cae7-5f4e-415e-93e6-416b8f8b66e6
   ```

### Test the Lambda function with the Amazon S3 trigger
<a name="with-s3-example-test-s3-trigger"></a>

![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/services-s3-example/s3trigger_tut_steps9.png)


To test your function with the configured trigger, upload an object to your Amazon S3 bucket using the console. To verify that your Lambda function ran as expected, use CloudWatch Logs to view your function’s output.

**To upload an object to your Amazon S3 bucket**

1. Open the [Buckets](https://console.aws.amazon.com/s3/buckets) page of the Amazon S3 console and choose the bucket that you created earlier.

1. Choose **Upload**.

1. Choose **Add files** and use the file selector to choose an object you want to upload. This object can be any file you choose.

1. Choose **Open**, then choose **Upload**.

**To verify the function invocation using CloudWatch Logs**

1. Open the [CloudWatch](https://console.aws.amazon.com/cloudwatch/home) console.

1. Make sure you're working in the same AWS Region you created your Lambda function in. You can change your Region using the drop-down list at the top of the screen.  
![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/console_region_select.png)

1. Choose **Logs**, then choose **Log groups**.

1. Choose the log group for your function (`/aws/lambda/s3-trigger-tutorial`).

1. Under **Log streams**, choose the most recent log stream.

1. If your function was invoked correctly in response to your Amazon S3 trigger, you’ll see output similar to the following. The `CONTENT TYPE` you see depends on the type of file you uploaded to your bucket.

   ```
   2022-05-09T23:17:28.702Z	0cae7f5a-b0af-4c73-8563-a3430333cc10	INFO	CONTENT TYPE: image/jpeg
   ```
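If you'd rather script this check, you can approximate the console steps above with the AWS SDK for Python (Boto3). The following is a minimal sketch, not part of the tutorial: it assumes you have AWS credentials configured and uses the function name `s3-trigger-tutorial` from this example.

```python
def log_group_for(function_name):
    # Lambda writes logs to /aws/lambda/<function-name> by default
    return f"/aws/lambda/{function_name}"

def latest_log_events(function_name, region="us-east-1", limit=20):
    import boto3  # imported lazily; the helper above has no AWS dependency
    logs = boto3.client("logs", region_name=region)
    # Find the most recently written log stream in the function's log group
    streams = logs.describe_log_streams(
        logGroupName=log_group_for(function_name),
        orderBy="LastEventTime",
        descending=True,
        limit=1,
    )["logStreams"]
    if not streams:
        return []
    events = logs.get_log_events(
        logGroupName=log_group_for(function_name),
        logStreamName=streams[0]["logStreamName"],
        limit=limit,
        startFromHead=False,
    )["events"]
    return [e["message"] for e in events]
```

Calling `latest_log_events("s3-trigger-tutorial")` returns the same `CONTENT TYPE` line you would see in the CloudWatch console.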

## Clean up your resources
<a name="cleanup"></a>

You can now delete the resources that you created for this tutorial, unless you want to retain them. By deleting AWS resources that you're no longer using, you prevent unnecessary charges to your AWS account.

**To delete the Lambda function**

1. Open the [Functions page](https://console.aws.amazon.com/lambda/home#/functions) of the Lambda console.

1. Select the function that you created.

1. Choose **Actions**, **Delete**.

1. Type **confirm** in the text input field and choose **Delete**.

**To delete the execution role**

1. Open the [Roles page](https://console.aws.amazon.com/iam/home#/roles) of the IAM console.

1. Select the execution role that you created.

1. Choose **Delete**.

1. Enter the name of the role in the text input field and choose **Delete**.

**To delete the S3 bucket**

1. Open the [Amazon S3 console.](https://console.aws.amazon.com//s3/home#)

1. Select the bucket you created.

1. Choose **Delete**.

1. Enter the name of the bucket in the text input field.

1. Choose **Delete bucket**.
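The cleanup steps above can also be scripted with the AWS SDK for Python (Boto3). This is a minimal sketch: the function, role, policy, and bucket names are placeholders for the ones you chose earlier, and it assumes your credentials have permission to delete them. Note that deletion order matters: the policy must be detached before the role can be deleted, and a bucket must be empty before Amazon S3 will delete it.

```python
def policy_arn(account_id, policy_name):
    # Customer managed policy ARNs follow this fixed format
    return f"arn:aws:iam::{account_id}:policy/{policy_name}"

def delete_tutorial_resources(function_name, role_name, policy_name,
                              bucket_name, account_id, region="us-east-1"):
    import boto3  # imported lazily; policy_arn has no AWS dependency

    boto3.client("lambda", region_name=region).delete_function(
        FunctionName=function_name
    )

    iam = boto3.client("iam")
    # A policy must be detached from the role before the role can be deleted
    iam.detach_role_policy(
        RoleName=role_name, PolicyArn=policy_arn(account_id, policy_name)
    )
    iam.delete_role(RoleName=role_name)

    s3 = boto3.resource("s3", region_name=region)
    bucket = s3.Bucket(bucket_name)
    bucket.objects.all().delete()  # a bucket must be empty before deletion
    bucket.delete()
```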

## Next steps
<a name="next-steps"></a>

In [Tutorial: Using an Amazon S3 trigger to create thumbnail images](with-s3-tutorial.md), the Amazon S3 trigger invokes a function that creates a thumbnail image for each image file that is uploaded to a bucket. This tutorial requires a moderate level of AWS and Lambda domain knowledge. It demonstrates how to create resources using the AWS Command Line Interface (AWS CLI) and how to create a .zip file archive deployment package for the function and its dependencies.

# Tutorial: Using an Amazon S3 trigger to create thumbnail images
<a name="with-s3-tutorial"></a>

In this tutorial, you create and configure a Lambda function that resizes images added to an Amazon Simple Storage Service (Amazon S3) bucket. When you add an image file to your bucket, Amazon S3 invokes your Lambda function. The function then creates a thumbnail version of the image and outputs it to a different Amazon S3 bucket.

![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/services-s3-tutorial/s3thumb_tut_resources.png)


To complete this tutorial, you carry out the following steps:

1. Create source and destination Amazon S3 buckets and upload a sample image.

1. Create a Lambda function that resizes an image and outputs a thumbnail to an Amazon S3 bucket.

1. Configure a Lambda trigger that invokes your function when objects are uploaded to your source bucket.

1. Test your function, first with a dummy event, and then by uploading an image to your source bucket.

By completing these steps, you’ll learn how to use Lambda to carry out a file processing task on objects added to an Amazon S3 bucket. You can complete this tutorial using the AWS Command Line Interface (AWS CLI) or the AWS Management Console.

If you're looking for a simpler example to learn how to configure an Amazon S3 trigger for Lambda, you can try [Tutorial: Using an Amazon S3 trigger to invoke a Lambda function](https://docs.aws.amazon.com/lambda/latest/dg/with-s3-example.html).

**Topics**
+ [Prerequisites](#with-s3-example-prereqs)
+ [Create two Amazon S3 buckets](#with-s3-tutorial-prepare-create-buckets)
+ [Upload a test image to your source bucket](#with-s3-tutorial-test-image)
+ [Create a permissions policy](#with-s3-tutorial-create-policy)
+ [Create an execution role](#with-s3-tutorial-create-execution-role)
+ [Create the function deployment package](#with-s3-tutorial-create-function-package)
+ [Create the Lambda function](#with-s3-tutorial-create-function-createfunction)
+ [Configure Amazon S3 to invoke the function](#with-s3-tutorial-configure-s3-trigger)
+ [Test your Lambda function with a dummy event](#with-s3-tutorial-dummy-test)
+ [Test your function using the Amazon S3 trigger](#with-s3-tutorial-test-s3)
+ [Clean up your resources](#s3-tutorial-cleanup)

## Prerequisites
<a name="with-s3-example-prereqs"></a>

If you want to use the AWS CLI to complete the tutorial, install the [latest version of the AWS Command Line Interface](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).

For your Lambda function code, you can use Python or Node.js. Install the language support tools and a package manager for the language that you want to use. 

### Install the AWS Command Line Interface
<a name="install_aws_cli"></a>

If you have not yet installed the AWS Command Line Interface, follow the steps at [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) to install it.

The tutorial requires a command line terminal or shell to run commands. In Linux and macOS, use your preferred shell and package manager.

**Note**  
In Windows, some Bash CLI commands that you commonly use with Lambda (such as `zip`) are not supported by the operating system's built-in terminals. To get a Windows-integrated version of Ubuntu and Bash, [install the Windows Subsystem for Linux](https://docs.microsoft.com/en-us/windows/wsl/install-win10). 

## Create two Amazon S3 buckets
<a name="with-s3-tutorial-prepare-create-buckets"></a>

![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/services-s3-tutorial/s3thumb_tut_steps1.png)


First create two Amazon S3 buckets. The first bucket is the source bucket you will upload your images to. The second bucket is used by Lambda to save the resized thumbnail when you invoke your function.

------
#### [ AWS Management Console ]

**To create the Amazon S3 buckets (console)**

1. Open the [Amazon S3 console](https://console.aws.amazon.com/s3) and select the **General purpose buckets** page.

1. Select the AWS Region closest to your geographical location. You can change your region using the drop-down list at the top of the screen. Later in the tutorial, you must create your Lambda function in the same Region.  
![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/console_region_select.png)

1. Choose **Create bucket**.

1. Under **General configuration**, do the following:

   1. For **Bucket type**, ensure **General purpose** is selected.

   1. For **Bucket name**, enter a globally unique name that meets the Amazon S3 [Bucket naming rules](https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucketnamingrules.html). Bucket names can contain only lower case letters, numbers, dots (.), and hyphens (-).

1. Leave all other options set to their default values and choose **Create bucket**.

1. Repeat steps 1 to 5 to create your destination bucket. For **Bucket name**, enter `amzn-s3-demo-source-bucket-resized`, where `amzn-s3-demo-source-bucket` is the name of the source bucket you just created.

------
#### [ AWS CLI ]

**To create the Amazon S3 buckets (AWS CLI)**

1. Run the following CLI command to create your source bucket. The name you choose for your bucket must be globally unique and follow the Amazon S3 [Bucket naming rules](https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucketnamingrules.html). Names can contain only lowercase letters, numbers, dots (.), and hyphens (-). For `region` and `LocationConstraint`, choose the [AWS Region](https://docs.aws.amazon.com/general/latest/gr/lambda-service.html) closest to your geographical location. If you choose `us-east-1`, omit the `--create-bucket-configuration` parameter, because `us-east-1` is the default Region and doesn't accept an explicit `LocationConstraint`.

   ```
   aws s3api create-bucket --bucket amzn-s3-demo-source-bucket --region us-east-1 \
   --create-bucket-configuration LocationConstraint=us-east-1
   ```

   Later in the tutorial, you must create your Lambda function in the same AWS Region as your source bucket, so make a note of the region you chose.

1. Run the following command to create your destination bucket. For the bucket name, you must use `amzn-s3-demo-source-bucket-resized`, where `amzn-s3-demo-source-bucket` is the name of the source bucket you created in step 1. For `region` and `LocationConstraint`, choose the same AWS Region you used to create your source bucket.

   ```
   aws s3api create-bucket --bucket amzn-s3-demo-source-bucket-resized --region us-east-1 \
   --create-bucket-configuration LocationConstraint=us-east-1
   ```

------
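The two buckets can also be created with the AWS SDK for Python (Boto3). This sketch mirrors the CLI commands above; the bucket name and Region are placeholders to replace, and it handles the `us-east-1` special case, which rejects an explicit `LocationConstraint`.

```python
def destination_bucket_name(source_bucket):
    # The tutorial derives the destination bucket name by appending "-resized"
    return f"{source_bucket}-resized"

def create_tutorial_buckets(source_bucket, region):
    import boto3  # imported lazily; the helper above has no AWS dependency
    s3 = boto3.client("s3", region_name=region)
    for name in (source_bucket, destination_bucket_name(source_bucket)):
        if region == "us-east-1":
            # us-east-1 is the default Region and rejects a LocationConstraint
            s3.create_bucket(Bucket=name)
        else:
            s3.create_bucket(
                Bucket=name,
                CreateBucketConfiguration={"LocationConstraint": region},
            )
```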

## Upload a test image to your source bucket
<a name="with-s3-tutorial-test-image"></a>

![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/services-s3-tutorial/s3thumb_tut_steps2.png)


Later in the tutorial, you’ll test your Lambda function by invoking it using the AWS CLI or the Lambda console. To confirm that your function is operating correctly, your source bucket needs to contain a test image. This image can be any JPG or PNG file you choose.

------
#### [ AWS Management Console ]

**To upload a test image to your source bucket (console)**

1. Open the [Buckets](https://console.aws.amazon.com/s3/buckets) page of the Amazon S3 console.

1. Select the source bucket you created in the previous step.

1. Choose **Upload**.

1. Choose **Add files** and use the file selector to choose the object you want to upload.

1. Choose **Open**, then choose **Upload**.

------
#### [ AWS CLI ]

**To upload a test image to your source bucket (AWS CLI)**
+ From the directory containing the image you want to upload, run the following CLI command. Replace the `--bucket` parameter with the name of your source bucket. For the `--key` and `--body` parameters, use the filename of your test image.

  ```
  aws s3api put-object --bucket amzn-s3-demo-source-bucket --key HappyFace.jpg --body ./HappyFace.jpg
  ```

------
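For completeness, here is an equivalent upload using the AWS SDK for Python (Boto3), in case you're scripting the setup. The bucket name and file path are placeholders.

```python
import os

def default_key(path):
    # Use the file name as the object key, matching the CLI example above
    return os.path.basename(path)

def upload_test_image(bucket, path, key=None, region="us-east-1"):
    import boto3  # imported lazily; default_key has no AWS dependency
    key = key or default_key(path)
    boto3.client("s3", region_name=region).upload_file(path, bucket, key)
    return key
```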

## Create a permissions policy
<a name="with-s3-tutorial-create-policy"></a>

![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/services-s3-tutorial/s3thumb_tut_steps3.png)


The first step in creating your Lambda function is to create a permissions policy. This policy gives your function the permissions it needs to access other AWS resources. For this tutorial, the policy gives Lambda read and write permissions for Amazon S3 buckets and allows it to write to Amazon CloudWatch Logs.

------
#### [ AWS Management Console ]

**To create the policy (console)**

1. Open the [Policies](https://console.aws.amazon.com/iamv2/home#policies) page of the AWS Identity and Access Management (IAM) console.

1. Choose **Create policy**.

1. Choose the **JSON** tab, and then paste the following custom policy into the JSON editor.  

   ```
   {
       "Version":"2012-10-17",		 	 	 
       "Statement": [
           {
               "Effect": "Allow",
               "Action": [
                   "logs:PutLogEvents",
                   "logs:CreateLogGroup",
                   "logs:CreateLogStream"
               ],
               "Resource": "arn:aws:logs:*:*:*"
           },
           {
               "Effect": "Allow",
               "Action": [
                   "s3:GetObject"
               ],
               "Resource": "arn:aws:s3:::*/*"
           },
           {
               "Effect": "Allow",
               "Action": [
                   "s3:PutObject"
               ],
               "Resource": "arn:aws:s3:::*/*"
           }
       ]
   }
   ```

1. Choose **Next**.

1. Under **Policy details**, for **Policy name**, enter `LambdaS3Policy`.

1. Choose **Create policy**.

------
#### [ AWS CLI ]

**To create the policy (AWS CLI)**

1. Save the following JSON in a file named `policy.json`.  

   ```
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Effect": "Allow",
               "Action": [
                   "logs:PutLogEvents",
                   "logs:CreateLogGroup",
                   "logs:CreateLogStream"
               ],
               "Resource": "arn:aws:logs:*:*:*"
           },
           {
               "Effect": "Allow",
               "Action": [
                   "s3:GetObject"
               ],
               "Resource": "arn:aws:s3:::*/*"
           },
           {
               "Effect": "Allow",
               "Action": [
                   "s3:PutObject"
               ],
               "Resource": "arn:aws:s3:::*/*"
           }
       ]
   }
   ```

1. From the directory you saved the JSON policy document in, run the following CLI command.

   ```
   aws iam create-policy --policy-name LambdaS3Policy --policy-document file://policy.json
   ```

------
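If you're automating the setup, the same policy can be created with the AWS SDK for Python (Boto3). The policy document below is the one shown above, inlined as a Python dictionary.

```python
import json

# The same permissions policy as policy.json above
PERMISSIONS_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:PutLogEvents",
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
            ],
            "Resource": "arn:aws:logs:*:*:*",
        },
        {"Effect": "Allow", "Action": ["s3:GetObject"], "Resource": "arn:aws:s3:::*/*"},
        {"Effect": "Allow", "Action": ["s3:PutObject"], "Resource": "arn:aws:s3:::*/*"},
    ],
}

def create_permissions_policy(name="LambdaS3Policy"):
    import boto3  # imported lazily; the policy document has no AWS dependency
    iam = boto3.client("iam")
    response = iam.create_policy(
        PolicyName=name,
        PolicyDocument=json.dumps(PERMISSIONS_POLICY),
    )
    return response["Policy"]["Arn"]  # needed later to attach the policy to a role
```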

## Create an execution role
<a name="with-s3-tutorial-create-execution-role"></a>

![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/services-s3-tutorial/s3thumb_tut_steps4.png)


An execution role is an IAM role that grants a Lambda function permission to access AWS services and resources. To give your function read and write access to an Amazon S3 bucket, you attach the permissions policy you created in the previous step.

------
#### [ AWS Management Console ]

**To create an execution role and attach your permissions policy (console)**

1. Open the [Roles](https://console.aws.amazon.com/iamv2/home#roles) page of the IAM console.

1. Choose **Create role**.

1. For **Trusted entity type**, select **AWS service**, and for **Use case**, select **Lambda**.

1. Choose **Next**.

1. Add the permissions policy you created in the previous step by doing the following:

   1. In the policy search box, enter `LambdaS3Policy`.

   1. In the search results, select the check box for `LambdaS3Policy`.

   1. Choose **Next**.

1. Under **Role details**, for the **Role name** enter `LambdaS3Role`.

1. Choose **Create role**.

------
#### [ AWS CLI ]

**To create an execution role and attach your permissions policy (AWS CLI)**

1. Save the following JSON in a file named `trust-policy.json`. This trust policy allows Lambda to use the role’s permissions by giving the service principal `lambda.amazonaws.com` permission to call the AWS Security Token Service (AWS STS) `AssumeRole` action.  

   ```
   {
     "Version": "2012-10-17",
     "Statement": [
       {
         "Effect": "Allow",
         "Principal": {
         "Service": "lambda.amazonaws.com"
         },
         "Action": "sts:AssumeRole"
       }
     ]
   }
   ```

1. From the directory you saved the JSON trust policy document in, run the following CLI command to create the execution role.

   ```
   aws iam create-role --role-name LambdaS3Role --assume-role-policy-document file://trust-policy.json
   ```

1. To attach the permissions policy you created in the previous step, run the following CLI command. Replace the AWS account number in the policy’s ARN with your own account number.

   ```
   aws iam attach-role-policy --role-name LambdaS3Role --policy-arn arn:aws:iam::123456789012:policy/LambdaS3Policy
   ```

------
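The role creation and policy attachment can likewise be scripted with Boto3. This sketch mirrors the two CLI commands above; pass it the ARN returned when you created `LambdaS3Policy`.

```python
import json

# Trust policy allowing the Lambda service to assume the role
TRUST_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "lambda.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

def create_execution_role(role_name, policy_arn):
    import boto3  # imported lazily; TRUST_POLICY has no AWS dependency
    iam = boto3.client("iam")
    role = iam.create_role(
        RoleName=role_name,
        AssumeRolePolicyDocument=json.dumps(TRUST_POLICY),
    )
    iam.attach_role_policy(RoleName=role_name, PolicyArn=policy_arn)
    return role["Role"]["Arn"]  # pass this ARN to create-function later
```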

## Create the function deployment package
<a name="with-s3-tutorial-create-function-package"></a>

![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/services-s3-tutorial/s3thumb_tut_steps5.png)


To create your function, you create a *deployment package* containing your function code and its dependencies. For this `CreateThumbnail` function, your function code uses a separate library for the image resizing. Follow the instructions for your chosen language to create a deployment package containing the required library.

------
#### [ Node.js ]

**To create the deployment package (Node.js)**

1. Create a directory named `lambda-s3` for your function code and dependencies and navigate into it.

   ```
   mkdir lambda-s3
   cd lambda-s3
   ```

1. Create a new Node.js project with `npm`. To accept the default options provided in the interactive experience, press `Enter`.

   ```
   npm init
   ```

1. Save the following function code in a file named `index.mjs`. Make sure to replace `us-east-1` with the AWS Region in which you created your own source and destination buckets.

   ```
   // dependencies
   import { S3Client, GetObjectCommand, PutObjectCommand } from '@aws-sdk/client-s3';
   import { Readable } from 'stream';
   import sharp from 'sharp';
   import util from 'util';
   
   // create S3 client
   const s3 = new S3Client({region: 'us-east-1'});
   
   // define the handler function
   export const handler = async (event, context) => {
   
     // Read options from the event parameter and get the source bucket
     console.log("Reading options from event:\n", util.inspect(event, {depth: 5}));
     const srcBucket = event.Records[0].s3.bucket.name;
   
     // Object key may have spaces or unicode non-ASCII characters
     const srcKey = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, " "));
     const dstBucket = srcBucket + "-resized";
     const dstKey = "resized-" + srcKey;
   
     // Infer the image type from the file suffix
     const typeMatch = srcKey.match(/\.([^.]*)$/);
     if (!typeMatch) {
       console.log("Could not determine the image type.");
       return;
     }
   
     // Check that the image type is supported
     const imageType = typeMatch[1].toLowerCase();
     if (imageType != "jpg" && imageType != "png") {
       console.log(`Unsupported image type: ${imageType}`);
       return;
     }
   
     // Get the image from the source bucket. GetObjectCommand returns a stream.
     let content_buffer;
     try {
       const params = {
         Bucket: srcBucket,
         Key: srcKey
       };
       const response = await s3.send(new GetObjectCommand(params));
       const stream = response.Body;
   
       // Convert the stream to a buffer to pass to the sharp resize function.
       if (stream instanceof Readable) {
         content_buffer = Buffer.concat(await stream.toArray());
       } else {
         throw new Error('Unknown object stream type');
       }
     } catch (error) {
       console.log(error);
       return;
     }
   
     // Set the thumbnail width. Resize will set the height automatically to maintain the aspect ratio.
     const width = 200;
   
     // Use the sharp module to resize the image and save it in a buffer.
     let output_buffer;
     try {
       output_buffer = await sharp(content_buffer).resize(width).toBuffer();
     } catch (error) {
       console.log(error);
       return;
     }
   
     // Upload the thumbnail image to the destination bucket
     try {
       const destparams = {
         Bucket: dstBucket,
         Key: dstKey,
         Body: output_buffer,
         ContentType: "image"
       };
       await s3.send(new PutObjectCommand(destparams));
     } catch (error) {
       console.log(error);
       return;
     }
   
     console.log('Successfully resized ' + srcBucket + '/' + srcKey +
       ' and uploaded to ' + dstBucket + '/' + dstKey);
   };
   ```

1. In your `lambda-s3` directory, install the sharp library using npm. Note that the latest version of sharp (0.33) isn't compatible with Lambda. Install version 0.32.6 to complete this tutorial.

   ```
   npm install sharp@0.32.6
   ```

   The npm `install` command creates a `node_modules` directory for your modules. After this step, your directory structure should look like the following.

   ```
   lambda-s3
   |- index.mjs
   |- node_modules
   |  |- base64js
   |  |- bl
   |  |- buffer
   ...
   |- package-lock.json
   |- package.json
   ```

1. Create a .zip deployment package containing your function code and its dependencies. In macOS and Linux, run the following command.

   ```
   zip -r function.zip .
   ```

   In Windows, use your preferred zip utility to create a .zip file. Ensure that your `index.mjs`, `package.json`, and `package-lock.json` files and your `node_modules` directory are all at the root of your .zip file.

------
#### [ Python ]

**To create the deployment package (Python)**

1. Save the example code as a file named `lambda_function.py`.

   ```
   import boto3
   import uuid
   from urllib.parse import unquote_plus
   from PIL import Image
   
   s3_client = boto3.client('s3')
   
   def resize_image(image_path, resized_path):
     with Image.open(image_path) as image:
       image.thumbnail(tuple(x / 2 for x in image.size))
       image.save(resized_path)
   
   def lambda_handler(event, context):
     for record in event['Records']:
       bucket = record['s3']['bucket']['name']
       key = unquote_plus(record['s3']['object']['key'])
       tmpkey = key.replace('/', '')
       download_path = '/tmp/{}{}'.format(uuid.uuid4(), tmpkey)
       upload_path = '/tmp/resized-{}'.format(tmpkey)
       s3_client.download_file(bucket, key, download_path)
       resize_image(download_path, upload_path)
       s3_client.upload_file(upload_path, '{}-resized'.format(bucket), 'resized-{}'.format(key))
   ```

1. In the same directory in which you created your `lambda_function.py` file, create a new directory named `package` and install the [Pillow (PIL)](https://pypi.org/project/Pillow/) library and the AWS SDK for Python (Boto3). Although the Lambda Python runtime includes a version of the Boto3 SDK, we recommend that you add all of your function's dependencies to your deployment package, even if they are included in the runtime. For more information, see [Runtime dependencies in Python](https://docs.aws.amazon.com/lambda/latest/dg/python-package.html#python-package-dependencies).

   ```
   mkdir package
   pip install \
   --platform manylinux2014_x86_64 \
   --target=package \
   --implementation cp \
   --python-version 3.12 \
   --only-binary=:all: --upgrade \
   pillow boto3
   ```

   The Pillow library contains C/C++ code. By using the `--platform manylinux2014_x86_64` and `--only-binary=:all:` options, pip downloads and installs a version of Pillow that contains pre-compiled binaries compatible with the Amazon Linux 2 operating system. This ensures that your deployment package works in the Lambda execution environment, regardless of the operating system and architecture of your local build machine.

1. Create a .zip file containing your application code and the Pillow and Boto3 libraries. In Linux or macOS, run the following commands from your command line interface.

   ```
   cd package
   zip -r ../lambda_function.zip .
   cd ..
   zip lambda_function.zip lambda_function.py
   ```

    In Windows, use your preferred zip tool to create the `lambda_function.zip` file. Make sure that your `lambda_function.py` file and the folders containing your dependencies are all at the root of the .zip file.

You can also create your deployment package using a Python virtual environment. For more information, see [Working with .zip file archives for Python Lambda functions](python-package.md).
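Before you package the function, you can sanity-check the resizing logic locally. This short sketch reuses the tutorial's `resize_image` logic and assumes Pillow is installed on your build machine; the image paths are placeholders.

```python
from PIL import Image

def resize_image(image_path, resized_path):
    # Same logic as the handler above: halve each dimension
    with Image.open(image_path) as image:
        image.thumbnail(tuple(x / 2 for x in image.size))
        image.save(resized_path)

if __name__ == "__main__":
    # Generate a small test image, resize it, and print the thumbnail size
    Image.new("RGB", (200, 100), "blue").save("/tmp/test-image.jpg")
    resize_image("/tmp/test-image.jpg", "/tmp/resized-test-image.jpg")
    with Image.open("/tmp/resized-test-image.jpg") as thumb:
        print(thumb.size)
```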

------

## Create the Lambda function
<a name="with-s3-tutorial-create-function-createfunction"></a>

![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/services-s3-tutorial/s3thumb_tut_steps6.png)


You can create your Lambda function using either the AWS CLI or the Lambda console. Follow the instructions for your chosen language to create the function.

------
#### [ AWS Management Console ]

**To create the function (console)**

To create your Lambda function using the console, you first create a basic function containing some ‘Hello world’ code. You then replace this code with your own function code by uploading the .zip deployment package you created in the previous step.

1. Open the [Functions page](https://console.aws.amazon.com/lambda/home#/functions) of the Lambda console.

1. Make sure you're working in the same AWS Region you created your Amazon S3 bucket in. You can change your region using the drop-down list at the top of the screen.  
![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/console_region_select.png)

1. Choose **Create function**.

1. Choose **Author from scratch**.

1. Under **Basic information**, do the following:

   1. For **Function name**, enter `CreateThumbnail`.

   1. For **Runtime**, choose either **Node.js 22.x** or **Python 3.12** according to the language you chose for your function.

   1. For **Architecture**, choose **x86\_64**.

1. In the **Change default execution role** tab, do the following:

   1. Expand the tab, then choose **Use an existing role**.

   1. Select the `LambdaS3Role` you created earlier.

1. Choose **Create function**.

**To upload the function code (console)**

1. In the **Code source** pane, choose **Upload from**.

1. Choose **.zip file**. 

1. Choose **Upload**.

1. In the file selector, select your .zip file and choose **Open**.

1. Choose **Save**.

------
#### [ AWS CLI ]

**To create the function (AWS CLI)**
+ Run the CLI command for the language you chose. For the `role` parameter, make sure to replace `123456789012` with your own AWS account ID. For the `region` parameter, replace `us-east-1` with the region you created your Amazon S3 buckets in.
  + For **Node.js**, run the following command from the directory containing your `function.zip` file.

    ```
    aws lambda create-function --function-name CreateThumbnail \
     --zip-file fileb://function.zip --handler index.handler --runtime nodejs22.x \
    --timeout 10 --memory-size 1024 \
    --role arn:aws:iam::123456789012:role/LambdaS3Role --region us-east-1
    ```
  + For **Python**, run the following command from the directory containing your `lambda_function.zip` file.

    ```
    aws lambda create-function --function-name CreateThumbnail \
    --zip-file fileb://lambda_function.zip --handler lambda_function.lambda_handler \
     --runtime python3.12 --timeout 10 --memory-size 1024 \
    --role arn:aws:iam::123456789012:role/LambdaS3Role --region us-east-1
    ```

------

## Configure Amazon S3 to invoke the function
<a name="with-s3-tutorial-configure-s3-trigger"></a>

![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/services-s3-tutorial/s3thumb_tut_steps7.png)


For your Lambda function to run when you upload an image to your source bucket, you need to configure a trigger for your function. You can configure the Amazon S3 trigger using either the console or the AWS CLI.

**Important**  
This procedure configures the Amazon S3 bucket to invoke your function every time that an object is created in the bucket. Be sure to configure this only on the source bucket. If your Lambda function creates objects in the same bucket that invokes it, your function can be [invoked continuously in a loop](https://serverlessland.com/content/service/lambda/guides/aws-lambda-operator-guide/recursive-runaway). This can result in unexpected charges being billed to your AWS account.

------
#### [ AWS Management Console ]

**To configure the Amazon S3 trigger (console)**

1. Open the [Functions page](https://console.aws.amazon.com/lambda/home#/functions) of the Lambda console and choose your function (`CreateThumbnail`).

1. Choose **Add trigger**.

1. Select **S3**.

1. Under **Bucket**, select your source bucket.

1. Under **Event types**, select **All object create events**.

1. Under **Recursive invocation**, select the check box to acknowledge that using the same Amazon S3 bucket for input and output is not recommended. You can learn more about recursive invocation patterns in Lambda by reading [Recursive patterns that cause run-away Lambda functions](https://serverlessland.com/content/service/lambda/guides/aws-lambda-operator-guide/recursive-runaway) in Serverless Land.

1. Choose **Add**.

   When you create a trigger using the Lambda console, Lambda automatically creates a [resource-based policy](https://docs.aws.amazon.com/lambda/latest/dg/access-control-resource-based.html) to give the service you select permission to invoke your function.

------
#### [ AWS CLI ]

**To configure the Amazon S3 trigger (AWS CLI)**

1. For your Amazon S3 source bucket to invoke your function when you add an image file, you first need to configure permissions for your function using a [resource-based policy](https://docs.aws.amazon.com/lambda/latest/dg/access-control-resource-based.html). A resource-based policy statement gives other AWS services permission to invoke your function. To give Amazon S3 permission to invoke your function, run the following CLI command. Be sure to replace the `source-account` parameter with your own AWS account ID and to use your own source bucket name.

   ```
   aws lambda add-permission --function-name CreateThumbnail \
   --principal s3.amazonaws.com --statement-id s3invoke --action "lambda:InvokeFunction" \
   --source-arn arn:aws:s3:::amzn-s3-demo-source-bucket \
   --source-account 123456789012
   ```

   The policy you define with this command allows Amazon S3 to invoke your function only when an action takes place on your source bucket.
**Note**  
Although Amazon S3 bucket names are globally unique, when using resource-based policies it is best practice to specify that the bucket must belong to your account. This is because if you delete a bucket, it is possible for another AWS account to create a bucket with the same Amazon Resource Name (ARN).

1. Save the following JSON in a file named `notification.json`. When applied to your source bucket, this JSON configures the bucket to send a notification to your Lambda function every time a new object is added. Replace the AWS account number and AWS Region in the Lambda function ARN with your own account number and region.

   ```
   {
   "LambdaFunctionConfigurations": [
       {
         "Id": "CreateThumbnailEventConfiguration",
         "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:CreateThumbnail",
         "Events": [ "s3:ObjectCreated:Put" ]
       }
     ]
   }
   ```

1. Run the following CLI command to apply the notification settings in the JSON file you created to your source bucket. Replace `amzn-s3-demo-source-bucket` with the name of your own source bucket.

   ```
   aws s3api put-bucket-notification-configuration --bucket amzn-s3-demo-source-bucket \
   --notification-configuration file://notification.json
   ```

   To learn more about the `put-bucket-notification-configuration` command and the `notification-configuration` option, see [put-bucket-notification-configuration](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/put-bucket-notification-configuration.html) in the *AWS CLI Command Reference*.

------
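The two CLI steps above (adding the resource-based policy and applying the bucket notification) can be combined in one Boto3 script. This is a sketch; the account ID, function ARN, and bucket name are placeholders to replace with your own values.

```python
def notification_config(function_arn):
    # Mirrors notification.json from the steps above
    return {
        "LambdaFunctionConfigurations": [
            {
                "Id": "CreateThumbnailEventConfiguration",
                "LambdaFunctionArn": function_arn,
                "Events": ["s3:ObjectCreated:Put"],
            }
        ]
    }

def configure_s3_trigger(function_name, function_arn, bucket, account_id,
                         region="us-east-1"):
    import boto3  # imported lazily; notification_config has no AWS dependency
    boto3.client("lambda", region_name=region).add_permission(
        FunctionName=function_name,
        StatementId="s3invoke",
        Action="lambda:InvokeFunction",
        Principal="s3.amazonaws.com",
        SourceArn=f"arn:aws:s3:::{bucket}",
        SourceAccount=account_id,  # guards against a recreated bucket in another account
    )
    boto3.client("s3", region_name=region).put_bucket_notification_configuration(
        Bucket=bucket,
        NotificationConfiguration=notification_config(function_arn),
    )
```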

## Test your Lambda function with a dummy event
<a name="with-s3-tutorial-dummy-test"></a>

![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/services-s3-tutorial/s3thumb_tut_steps8.png)


Before you test your whole setup by adding an image file to your Amazon S3 source bucket, you test that your Lambda function is working correctly by invoking it with a dummy event. An event in Lambda is a JSON-formatted document that contains data for your function to process. When your function is invoked by Amazon S3, the event sent to your function contains information such as the bucket name, bucket ARN, and object key.
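If you're following the CLI path, you can also build and send the dummy event with a short Boto3 script. The event below is a sketch that includes only the fields the `CreateThumbnail` handler reads; replace the bucket, key, and Region with your own values.

```python
import json

def s3_put_event(bucket, key, region="us-east-1"):
    # Minimal ObjectCreated:Put event; the handler reads only the bucket name
    # and the object key
    return {
        "Records": [
            {
                "eventSource": "aws:s3",
                "awsRegion": region,
                "eventName": "ObjectCreated:Put",
                "s3": {
                    "bucket": {"name": bucket, "arn": f"arn:aws:s3:::{bucket}"},
                    "object": {"key": key},
                },
            }
        ]
    }

def invoke_with_dummy_event(function_name, bucket, key, region="us-east-1"):
    import boto3  # imported lazily; s3_put_event has no AWS dependency
    response = boto3.client("lambda", region_name=region).invoke(
        FunctionName=function_name,
        Payload=json.dumps(s3_put_event(bucket, key, region)),
    )
    return response["StatusCode"]  # 200 indicates a successful synchronous invocation
```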

------
#### [ AWS Management Console ]

**To test your Lambda function with a dummy event (console)**

1. Open the [Functions page](https://console.aws.amazon.com/lambda/home#/functions) of the Lambda console and choose your function (`CreateThumbnail`).

1. Choose the **Test** tab.

1. To create your test event, in the **Test event** pane, do the following:

   1. Under **Test event action**, select **Create new event**.

   1. For **Event name**, enter **myTestEvent**.

   1. For **Template**, select **S3 Put**.

   1. Replace the values for the following parameters with your own values.
      + For `awsRegion`, replace `us-east-1` with the AWS Region you created your Amazon S3 buckets in.
      + For `name`, replace `amzn-s3-demo-bucket` with the name of your own Amazon S3 source bucket.
      + For `key`, replace `test%2Fkey` with the filename of the test object you uploaded to your source bucket in the step [Upload a test image to your source bucket](#with-s3-tutorial-test-image).

      ```
      {
        "Records": [
          {
            "eventVersion": "2.0",
            "eventSource": "aws:s3",
            "awsRegion": "us-east-1",
            "eventTime": "1970-01-01T00:00:00.000Z",
            "eventName": "ObjectCreated:Put",
            "userIdentity": {
              "principalId": "EXAMPLE"
            },
            "requestParameters": {
              "sourceIPAddress": "127.0.0.1"
            },
            "responseElements": {
              "x-amz-request-id": "EXAMPLE123456789",
              "x-amz-id-2": "EXAMPLE123/5678abcdefghijklambdaisawesome/mnopqrstuvwxyzABCDEFGH"
            },
            "s3": {
              "s3SchemaVersion": "1.0",
              "configurationId": "testConfigRule",
              "bucket": {
                "name": "amzn-s3-demo-bucket",
                "ownerIdentity": {
                  "principalId": "EXAMPLE"
                },
                "arn": "arn:aws:s3:::amzn-s3-demo-bucket"
              },
              "object": {
                "key": "test%2Fkey",
                "size": 1024,
                "eTag": "0123456789abcdef0123456789abcdef",
                "sequencer": "0A1B2C3D4E5F678901"
              }
            }
          }
        ]
      }
      ```

   1. Choose **Save**.

1. In the **Test event** pane, choose **Test**.

1. To check that your function has created a resized version of your image and stored it in your target Amazon S3 bucket, do the following:

   1. Open the [Buckets page](https://console.aws.amazon.com/s3/buckets) of the Amazon S3 console.

   1. Choose your target bucket and confirm that your resized file is listed in the **Objects** pane.

------
#### [ AWS CLI ]

**To test your Lambda function with a dummy event (AWS CLI)**

1. Save the following JSON in a file named `dummyS3Event.json`. Replace the values for the following parameters with your own values:
   + For `awsRegion`, replace `us-east-1` with the AWS Region you created your Amazon S3 buckets in.
   + For `name`, replace `amzn-s3-demo-bucket` with the name of your own Amazon S3 source bucket.
   + For `key`, replace `test%2Fkey` with the filename of the test object you uploaded to your source bucket in the step [Upload a test image to your source bucket](#with-s3-tutorial-test-image).

   ```
   {
     "Records": [
       {
         "eventVersion": "2.0",
         "eventSource": "aws:s3",
         "awsRegion": "us-east-1",
         "eventTime": "1970-01-01T00:00:00.000Z",
         "eventName": "ObjectCreated:Put",
         "userIdentity": {
           "principalId": "EXAMPLE"
         },
         "requestParameters": {
           "sourceIPAddress": "127.0.0.1"
         },
         "responseElements": {
           "x-amz-request-id": "EXAMPLE123456789",
           "x-amz-id-2": "EXAMPLE123/5678abcdefghijklambdaisawesome/mnopqrstuvwxyzABCDEFGH"
         },
         "s3": {
           "s3SchemaVersion": "1.0",
           "configurationId": "testConfigRule",
           "bucket": {
             "name": "amzn-s3-demo-bucket",
             "ownerIdentity": {
               "principalId": "EXAMPLE"
             },
             "arn": "arn:aws:s3:::amzn-s3-demo-bucket"
           },
           "object": {
             "key": "test%2Fkey",
             "size": 1024,
             "eTag": "0123456789abcdef0123456789abcdef",
             "sequencer": "0A1B2C3D4E5F678901"
           }
         }
       }
     ]
   }
   ```

1. From the directory where you saved your `dummyS3Event.json` file, invoke the function by running the following CLI command. This command invokes your Lambda function synchronously by specifying `RequestResponse` as the value of the `--invocation-type` parameter. To learn more about synchronous and asynchronous invocation, see [Invoking Lambda functions](https://docs.aws.amazon.com/lambda/latest/dg/lambda-invocation.html).

   ```
   aws lambda invoke --function-name CreateThumbnail \
   --invocation-type RequestResponse --cli-binary-format raw-in-base64-out \
   --payload file://dummyS3Event.json outputfile.txt
   ```

   The `--cli-binary-format` option is required if you are using version 2 of the AWS CLI. To make this the default setting, run `aws configure set cli-binary-format raw-in-base64-out`. For more information, see [AWS CLI supported global command line options](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-options.html#cli-configure-options-list).

1. Verify that your function has created a thumbnail version of your image and saved it to your target Amazon S3 bucket. Run the following CLI command, replacing `amzn-s3-demo-source-bucket-resized` with the name of your own destination bucket.

   ```
   aws s3api list-objects-v2 --bucket amzn-s3-demo-source-bucket-resized
   ```

   You should see output similar to the following. The `Key` parameter shows the filename of your resized image file.

   ```
   {
       "Contents": [
           {
               "Key": "resized-HappyFace.jpg",
               "LastModified": "2023-06-06T21:40:07+00:00",
               "ETag": "\"d8ca652ffe83ba6b721ffc20d9d7174a\"",
               "Size": 2633,
               "StorageClass": "STANDARD"
           }
       ]
   }
   ```
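The `resized-` prefix in the `Key` value reflects the naming convention the tutorial's function uses for output objects: it prepends `resized-` to the decoded source key. A minimal sketch of that derivation (assuming your function follows this convention):

```
from urllib.parse import unquote_plus

def resized_key(event_key):
    # Decode the URL-encoded key from the event before building the target key.
    return "resized-" + unquote_plus(event_key)

print(resized_key("HappyFace.jpg"))  # resized-HappyFace.jpg
```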

------

## Test your function using the Amazon S3 trigger
<a name="with-s3-tutorial-test-s3"></a>

![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/services-s3-tutorial/s3thumb_tut_steps9.png)


Now that you’ve confirmed your Lambda function is operating correctly, you’re ready to test your complete setup by adding an image file to your Amazon S3 source bucket. When you add your image to the source bucket, your Lambda function should be automatically invoked. Your function creates a resized version of the file and stores it in your target bucket.

------
#### [ AWS Management Console ]

**To test your Lambda function using the Amazon S3 trigger (console)**

1. To upload an image to your Amazon S3 bucket, do the following:

   1. Open the [Buckets](https://console.aws.amazon.com/s3/buckets) page of the Amazon S3 console and choose your source bucket.

   1. Choose **Upload**.

   1. Choose **Add files** and use the file selector to choose the image file you want to upload. Your image object can be any .jpg or .png file.

   1. Choose **Open**, then choose **Upload**.

1. Verify that Lambda has saved a resized version of your image file in your target bucket by doing the following:

   1. Navigate back to the [Buckets](https://console.aws.amazon.com/s3/buckets) page of the Amazon S3 console and choose your destination bucket.

   1. In the **Objects** pane, you should now see two resized image files, one from each test of your Lambda function. To download your resized image, select the file, then choose **Download**.

------
#### [ AWS CLI ]

**To test your Lambda function using the Amazon S3 trigger (AWS CLI)**

1. From the directory containing the image you want to upload, run the following CLI command. Replace the `--bucket` parameter with the name of your source bucket. For the `--key` and `--body` parameters, use the filename of your test image. Your test image can be any .jpg or .png file.

   ```
   aws s3api put-object --bucket amzn-s3-demo-source-bucket --key SmileyFace.jpg --body ./SmileyFace.jpg
   ```

1. Verify that your function has created a thumbnail version of your image and saved it to your target Amazon S3 bucket. Run the following CLI command, replacing `amzn-s3-demo-source-bucket-resized` with the name of your own destination bucket.

   ```
   aws s3api list-objects-v2 --bucket amzn-s3-demo-source-bucket-resized
   ```

   If your function runs successfully, you’ll see output similar to the following. Your target bucket should now contain two resized files.

   ```
   {
       "Contents": [
           {
               "Key": "resized-HappyFace.jpg",
               "LastModified": "2023-06-07T00:15:50+00:00",
               "ETag": "\"7781a43e765a8301713f533d70968a1e\"",
               "Size": 2763,
               "StorageClass": "STANDARD"
           },
           {
               "Key": "resized-SmileyFace.jpg",
               "LastModified": "2023-06-07T00:13:18+00:00",
               "ETag": "\"ca536e5a1b9e32b22cd549e18792cdbc\"",
               "Size": 1245,
               "StorageClass": "STANDARD"
           }
       ]
   }
   ```

------

## Clean up your resources
<a name="s3-tutorial-cleanup"></a>

You can now delete the resources that you created for this tutorial, unless you want to retain them. By deleting AWS resources that you're no longer using, you prevent unnecessary charges to your AWS account.

**To delete the Lambda function**

1. Open the [Functions page](https://console.aws.amazon.com/lambda/home#/functions) of the Lambda console.

1. Select the function that you created.

1. Choose **Actions**, **Delete**.

1. Type **confirm** in the text input field and choose **Delete**.

**To delete the policy that you created**

1. Open the [Policies page](https://console.aws.amazon.com/iam/home#/policies) of the IAM console.

1. Select the policy that you created (**AWSLambdaS3Policy**).

1. Choose **Policy actions**, **Delete**.

1. Choose **Delete**.

**To delete the execution role**

1. Open the [Roles page](https://console.aws.amazon.com/iam/home#/roles) of the IAM console.

1. Select the execution role that you created.

1. Choose **Delete**.

1. Enter the name of the role in the text input field and choose **Delete**.

**To delete the S3 bucket**

1. Open the [Amazon S3 console](https://console.aws.amazon.com/s3/home#).

1. Select the bucket you created.

1. Choose **Delete**.

1. Enter the name of the bucket in the text input field.

1. Choose **Delete bucket**.