
Amazon S3 examples using SDK for Rust

The following code examples show you how to perform actions and implement common scenarios by using the AWS SDK for Rust with Amazon S3.

Basics are code examples that show you how to perform the essential operations within a service.

Actions are code excerpts from larger programs and must be run in context. While actions show you how to call individual service functions, you can see actions in context in their related scenarios.

Scenarios are code examples that show you how to accomplish a specific task by calling multiple functions within the same service or combined with other AWS services.

Each example includes a link to the complete source code, where you can find instructions on how to set up and run the code in context.

Get started

The following code examples show how to get started using Amazon S3.

SDK for Rust
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

/// S3 Hello World Example using the AWS SDK for Rust.
///
/// This example lists the objects in a bucket, uploads an object to that bucket,
/// and then retrieves the object and prints some S3 information about the object.
/// This shows a number of S3 features, including how to use built-in paginators
/// for large data sets.
///
/// # Arguments
///
/// * `client` - an S3 client configured appropriately for the environment.
/// * `bucket` - the bucket name that the object will be uploaded to. Must be present in the region the `client` is configured to use.
/// * `filepath` - a reference to a path that will be read and uploaded to S3.
/// * `key` - the string key that the object will be uploaded as inside the bucket.
async fn list_bucket_and_upload_object(
    client: &aws_sdk_s3::Client,
    bucket: &str,
    filepath: &Path,
    key: &str,
) -> Result<(), S3ExampleError> {
    // List the objects in the bucket.
    let mut objects = client
        .list_objects_v2()
        .bucket(bucket)
        .into_paginator()
        .send();
    println!("key\tetag\tlast_modified\tstorage_class");
    while let Some(Ok(object)) = objects.next().await {
        for item in object.contents() {
            println!(
                "{}\t{}\t{}\t{}",
                item.key().unwrap_or_default(),
                item.e_tag().unwrap_or_default(),
                item.last_modified()
                    .map(|lm| format!("{lm}"))
                    .unwrap_or_default(),
                item.storage_class()
                    .map(|sc| format!("{sc}"))
                    .unwrap_or_default()
            );
        }
    }

    // Prepare a ByteStream around the file, and upload the object using that ByteStream.
    let body = aws_sdk_s3::primitives::ByteStream::from_path(filepath)
        .await
        .map_err(|err| {
            S3ExampleError::new(format!(
                "Failed to create bytestream for {filepath:?} ({err:?})"
            ))
        })?;
    let resp = client
        .put_object()
        .bucket(bucket)
        .key(key)
        .body(body)
        .send()
        .await?;
    println!(
        "Upload success. Version: {:?}",
        resp.version_id()
            .expect("S3 Object upload missing version ID")
    );

    // Retrieve the just-uploaded object.
    let resp = client.get_object().bucket(bucket).key(key).send().await?;
    println!("etag: {}", resp.e_tag().unwrap_or("(missing)"));
    println!("version: {}", resp.version_id().unwrap_or("(missing)"));

    Ok(())
}

The S3ExampleError utility used by these examples.

/// S3ExampleError provides a From<T: ProvideErrorMetadata> impl to extract
/// client-specific error details. This serves as a consistent backup to handling
/// specific service errors, depending on what is needed by the scenario.
/// It is used throughout the code examples for the AWS SDK for Rust.
#[derive(Debug)]
pub struct S3ExampleError(String);

impl S3ExampleError {
    pub fn new(value: impl Into<String>) -> Self {
        S3ExampleError(value.into())
    }

    pub fn add_message(self, message: impl Into<String>) -> Self {
        S3ExampleError(format!("{}: {}", message.into(), self.0))
    }
}

impl<T: aws_sdk_s3::error::ProvideErrorMetadata> From<T> for S3ExampleError {
    fn from(value: T) -> Self {
        S3ExampleError(format!(
            "{}: {}",
            value
                .code()
                .map(String::from)
                .unwrap_or("unknown code".into()),
            value
                .message()
                .map(String::from)
                .unwrap_or("missing reason".into()),
        ))
    }
}

impl std::error::Error for S3ExampleError {}

impl std::fmt::Display for S3ExampleError {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        write!(f, "{}", self.0)
    }
}
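Because the SDK's SdkError implements ProvideErrorMetadata, the blanket From impl above lets any operation's error bubble up through the ? operator. A minimal usage sketch (the bucket name is a placeholder, not part of the original example):

// Minimal sketch: `?` converts the SdkError returned by `send()` into an
// S3ExampleError through the blanket `From` impl shown above.
async fn head_bucket_example(client: &aws_sdk_s3::Client) -> Result<(), S3ExampleError> {
    client
        .head_bucket()
        .bucket("amzn-s3-demo-bucket") // placeholder bucket name
        .send()
        .await?;
    Ok(())
}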
  • For API details, see ListBuckets in AWS SDK for Rust API Reference.

Basics

The following code example shows you how to:

  • Create a bucket and upload a file to it.

  • Download an object from a bucket.

  • Copy an object to a subfolder in a bucket.

  • List the objects in a bucket.

  • Delete the bucket objects and the bucket.

SDK for Rust
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Code for the binary crate which runs the scenario.

#![allow(clippy::result_large_err)]

//! Purpose
//! Shows how to use the AWS SDK for Rust to get started using
//! Amazon Simple Storage Service (Amazon S3). Create a bucket, move objects into and out of it,
//! and delete all resources at the end of the demo.
//!
//! This example follows the steps in "Getting started with Amazon S3" in the Amazon S3
//! user guide.
//! - https://docs.aws.amazon.com/AmazonS3/latest/userguide/GetStartedWithS3.html

use aws_config::meta::region::RegionProviderChain;
use aws_sdk_s3::{config::Region, Client};
use s3_code_examples::error::S3ExampleError;
use uuid::Uuid;

#[tokio::main]
async fn main() -> Result<(), S3ExampleError> {
    let region_provider = RegionProviderChain::first_try(Region::new("us-west-2"));
    let region = region_provider.region().await.unwrap();
    let shared_config = aws_config::from_env().region(region_provider).load().await;
    let client = Client::new(&shared_config);
    let bucket_name = format!("amzn-s3-demo-bucket-{}", Uuid::new_v4());
    let file_name = "s3/testfile.txt".to_string();
    let key = "test file key name".to_string();
    let target_key = "target_key".to_string();

    if let Err(e) = run_s3_operations(region, client, bucket_name, file_name, key, target_key).await
    {
        eprintln!("{:?}", e);
    };

    Ok(())
}

async fn run_s3_operations(
    region: Region,
    client: Client,
    bucket_name: String,
    file_name: String,
    key: String,
    target_key: String,
) -> Result<(), S3ExampleError> {
    s3_code_examples::create_bucket(&client, &bucket_name, &region).await?;
    let run_example: Result<(), S3ExampleError> = (async {
        s3_code_examples::upload_object(&client, &bucket_name, &file_name, &key).await?;
        let _object = s3_code_examples::download_object(&client, &bucket_name, &key).await;
        s3_code_examples::copy_object(&client, &bucket_name, &bucket_name, &key, &target_key)
            .await?;
        s3_code_examples::list_objects(&client, &bucket_name).await?;
        s3_code_examples::clear_bucket(&client, &bucket_name).await?;
        Ok(())
    })
    .await;
    if let Err(err) = run_example {
        eprintln!("Failed to complete getting-started example: {err:?}");
    }
    s3_code_examples::delete_bucket(&client, &bucket_name).await?;
    Ok(())
}

Common actions used by the scenario.

pub async fn create_bucket(
    client: &aws_sdk_s3::Client,
    bucket_name: &str,
    region: &aws_config::Region,
) -> Result<Option<aws_sdk_s3::operation::create_bucket::CreateBucketOutput>, S3ExampleError> {
    let constraint = aws_sdk_s3::types::BucketLocationConstraint::from(region.to_string().as_str());
    let cfg = aws_sdk_s3::types::CreateBucketConfiguration::builder()
        .location_constraint(constraint)
        .build();
    let create = client
        .create_bucket()
        .create_bucket_configuration(cfg)
        .bucket(bucket_name)
        .send()
        .await;

    // BucketAlreadyExists and BucketAlreadyOwnedByYou are not problems for this task.
    create.map(Some).or_else(|err| {
        if err
            .as_service_error()
            .map(|se| se.is_bucket_already_exists() || se.is_bucket_already_owned_by_you())
            == Some(true)
        {
            Ok(None)
        } else {
            Err(S3ExampleError::from(err))
        }
    })
}

pub async fn upload_object(
    client: &aws_sdk_s3::Client,
    bucket_name: &str,
    file_name: &str,
    key: &str,
) -> Result<aws_sdk_s3::operation::put_object::PutObjectOutput, S3ExampleError> {
    let body = aws_sdk_s3::primitives::ByteStream::from_path(std::path::Path::new(file_name)).await;
    client
        .put_object()
        .bucket(bucket_name)
        .key(key)
        .body(body.unwrap())
        .send()
        .await
        .map_err(S3ExampleError::from)
}

pub async fn download_object(
    client: &aws_sdk_s3::Client,
    bucket_name: &str,
    key: &str,
) -> Result<aws_sdk_s3::operation::get_object::GetObjectOutput, S3ExampleError> {
    client
        .get_object()
        .bucket(bucket_name)
        .key(key)
        .send()
        .await
        .map_err(S3ExampleError::from)
}

/// Copy an object from one bucket to another.
pub async fn copy_object(
    client: &aws_sdk_s3::Client,
    source_bucket: &str,
    destination_bucket: &str,
    source_object: &str,
    destination_object: &str,
) -> Result<(), S3ExampleError> {
    let source_key = format!("{source_bucket}/{source_object}");
    let response = client
        .copy_object()
        .copy_source(&source_key)
        .bucket(destination_bucket)
        .key(destination_object)
        .send()
        .await?;
    println!(
        "Copied from {source_key} to {destination_bucket}/{destination_object} with etag {}",
        response
            .copy_object_result
            .unwrap_or_else(|| aws_sdk_s3::types::CopyObjectResult::builder().build())
            .e_tag()
            .unwrap_or("missing")
    );
    Ok(())
}

pub async fn list_objects(client: &aws_sdk_s3::Client, bucket: &str) -> Result<(), S3ExampleError> {
    let mut response = client
        .list_objects_v2()
        .bucket(bucket.to_owned())
        .max_keys(10) // In this example, go 10 at a time.
        .into_paginator()
        .send();

    while let Some(result) = response.next().await {
        match result {
            Ok(output) => {
                for object in output.contents() {
                    println!(" - {}", object.key().unwrap_or("Unknown"));
                }
            }
            Err(err) => {
                eprintln!("{err:?}")
            }
        }
    }

    Ok(())
}

/// Given a bucket, remove all objects in the bucket, and then ensure no objects
/// remain in the bucket.
pub async fn clear_bucket(
    client: &aws_sdk_s3::Client,
    bucket_name: &str,
) -> Result<Vec<String>, S3ExampleError> {
    let objects = client.list_objects_v2().bucket(bucket_name).send().await?;

    // delete_objects no longer needs to be mutable.
    let objects_to_delete: Vec<String> = objects
        .contents()
        .iter()
        .filter_map(|obj| obj.key())
        .map(String::from)
        .collect();

    if objects_to_delete.is_empty() {
        return Ok(vec![]);
    }

    let return_keys = objects_to_delete.clone();

    delete_objects(client, bucket_name, objects_to_delete).await?;

    let objects = client.list_objects_v2().bucket(bucket_name).send().await?;

    eprintln!("{objects:?}");

    match objects.key_count {
        Some(0) => Ok(return_keys),
        _ => Err(S3ExampleError::new(
            "There were still objects left in the bucket.",
        )),
    }
}

pub async fn delete_bucket(
    client: &aws_sdk_s3::Client,
    bucket_name: &str,
) -> Result<(), S3ExampleError> {
    let resp = client.delete_bucket().bucket(bucket_name).send().await;
    match resp {
        Ok(_) => Ok(()),
        Err(err) => {
            if err
                .as_service_error()
                .and_then(aws_sdk_s3::error::ProvideErrorMetadata::code)
                == Some("NoSuchBucket")
            {
                Ok(())
            } else {
                Err(S3ExampleError::from(err))
            }
        }
    }
}

Actions

The following code example shows how to use CompleteMultipartUpload.

SDK for Rust
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

// upload_parts: Vec<aws_sdk_s3::types::CompletedPart>
let completed_multipart_upload: CompletedMultipartUpload = CompletedMultipartUpload::builder()
    .set_parts(Some(upload_parts))
    .build();

let _complete_multipart_upload_res = client
    .complete_multipart_upload()
    .bucket(&bucket_name)
    .key(&key)
    .multipart_upload(completed_multipart_upload)
    .upload_id(upload_id)
    .send()
    .await?;
// Create a multipart upload. Use UploadPart and CompleteMultipartUpload to
// upload the file.
let multipart_upload_res: CreateMultipartUploadOutput = client
    .create_multipart_upload()
    .bucket(&bucket_name)
    .key(&key)
    .send()
    .await?;

let upload_id = multipart_upload_res.upload_id().ok_or(S3ExampleError::new(
    "Missing upload_id after CreateMultipartUpload",
))?;
let mut upload_parts: Vec<aws_sdk_s3::types::CompletedPart> = Vec::new();

for chunk_index in 0..chunk_count {
    let this_chunk = if chunk_count - 1 == chunk_index {
        size_of_last_chunk
    } else {
        CHUNK_SIZE
    };
    let stream = ByteStream::read_from()
        .path(path)
        .offset(chunk_index * CHUNK_SIZE)
        .length(Length::Exact(this_chunk))
        .build()
        .await
        .unwrap();

    // Chunk index needs to start at 0, but part numbers start at 1.
    let part_number = (chunk_index as i32) + 1;
    let upload_part_res = client
        .upload_part()
        .key(&key)
        .bucket(&bucket_name)
        .upload_id(upload_id)
        .body(stream)
        .part_number(part_number)
        .send()
        .await?;

    upload_parts.push(
        CompletedPart::builder()
            .e_tag(upload_part_res.e_tag.unwrap_or_default())
            .part_number(part_number)
            .build(),
    );
}
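The chunking loop above relies on a few declarations that aren't shown in the excerpt. The complete large-file example later on this page defines them as follows:

use aws_smithy_types::byte_stream::{ByteStream, Length};

// In bytes, minimum chunk size of 5MB. Increase CHUNK_SIZE to send larger chunks.
const CHUNK_SIZE: u64 = 1024 * 1024 * 5;
const MAX_CHUNKS: u64 = 10000;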

The following code example shows how to use CopyObject.

SDK for Rust
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

/// Copy an object from one bucket to another.
pub async fn copy_object(
    client: &aws_sdk_s3::Client,
    source_bucket: &str,
    destination_bucket: &str,
    source_object: &str,
    destination_object: &str,
) -> Result<(), S3ExampleError> {
    let source_key = format!("{source_bucket}/{source_object}");
    let response = client
        .copy_object()
        .copy_source(&source_key)
        .bucket(destination_bucket)
        .key(destination_object)
        .send()
        .await?;
    println!(
        "Copied from {source_key} to {destination_bucket}/{destination_object} with etag {}",
        response
            .copy_object_result
            .unwrap_or_else(|| aws_sdk_s3::types::CopyObjectResult::builder().build())
            .e_tag()
            .unwrap_or("missing")
    );
    Ok(())
}
  • For API details, see CopyObject in AWS SDK for Rust API Reference.

The following code example shows how to use CreateBucket.

SDK for Rust
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

pub async fn create_bucket(
    client: &aws_sdk_s3::Client,
    bucket_name: &str,
    region: &aws_config::Region,
) -> Result<Option<aws_sdk_s3::operation::create_bucket::CreateBucketOutput>, S3ExampleError> {
    let constraint = aws_sdk_s3::types::BucketLocationConstraint::from(region.to_string().as_str());
    let cfg = aws_sdk_s3::types::CreateBucketConfiguration::builder()
        .location_constraint(constraint)
        .build();
    let create = client
        .create_bucket()
        .create_bucket_configuration(cfg)
        .bucket(bucket_name)
        .send()
        .await;

    // BucketAlreadyExists and BucketAlreadyOwnedByYou are not problems for this task.
    create.map(Some).or_else(|err| {
        if err
            .as_service_error()
            .map(|se| se.is_bucket_already_exists() || se.is_bucket_already_owned_by_you())
            == Some(true)
        {
            Ok(None)
        } else {
            Err(S3ExampleError::from(err))
        }
    })
}
  • For API details, see CreateBucket in AWS SDK for Rust API Reference.

The following code example shows how to use CreateMultipartUpload.

SDK for Rust
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

// Create a multipart upload. Use UploadPart and CompleteMultipartUpload to
// upload the file.
let multipart_upload_res: CreateMultipartUploadOutput = client
    .create_multipart_upload()
    .bucket(&bucket_name)
    .key(&key)
    .send()
    .await?;

let upload_id = multipart_upload_res.upload_id().ok_or(S3ExampleError::new(
    "Missing upload_id after CreateMultipartUpload",
))?;
let mut upload_parts: Vec<aws_sdk_s3::types::CompletedPart> = Vec::new();

for chunk_index in 0..chunk_count {
    let this_chunk = if chunk_count - 1 == chunk_index {
        size_of_last_chunk
    } else {
        CHUNK_SIZE
    };
    let stream = ByteStream::read_from()
        .path(path)
        .offset(chunk_index * CHUNK_SIZE)
        .length(Length::Exact(this_chunk))
        .build()
        .await
        .unwrap();

    // Chunk index needs to start at 0, but part numbers start at 1.
    let part_number = (chunk_index as i32) + 1;
    let upload_part_res = client
        .upload_part()
        .key(&key)
        .bucket(&bucket_name)
        .upload_id(upload_id)
        .body(stream)
        .part_number(part_number)
        .send()
        .await?;

    upload_parts.push(
        CompletedPart::builder()
            .e_tag(upload_part_res.e_tag.unwrap_or_default())
            .part_number(part_number)
            .build(),
    );
}
// upload_parts: Vec<aws_sdk_s3::types::CompletedPart>
let completed_multipart_upload: CompletedMultipartUpload = CompletedMultipartUpload::builder()
    .set_parts(Some(upload_parts))
    .build();

let _complete_multipart_upload_res = client
    .complete_multipart_upload()
    .bucket(&bucket_name)
    .key(&key)
    .multipart_upload(completed_multipart_upload)
    .upload_id(upload_id)
    .send()
    .await?;

The following code example shows how to use DeleteBucket.

SDK for Rust
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

pub async fn delete_bucket(
    client: &aws_sdk_s3::Client,
    bucket_name: &str,
) -> Result<(), S3ExampleError> {
    let resp = client.delete_bucket().bucket(bucket_name).send().await;
    match resp {
        Ok(_) => Ok(()),
        Err(err) => {
            if err
                .as_service_error()
                .and_then(aws_sdk_s3::error::ProvideErrorMetadata::code)
                == Some("NoSuchBucket")
            {
                Ok(())
            } else {
                Err(S3ExampleError::from(err))
            }
        }
    }
}
  • For API details, see DeleteBucket in AWS SDK for Rust API Reference.

The following code example shows how to use DeleteObject.

SDK for Rust
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

/// Delete an object from a bucket.
pub async fn remove_object(
    client: &aws_sdk_s3::Client,
    bucket: &str,
    key: &str,
) -> Result<(), S3ExampleError> {
    client
        .delete_object()
        .bucket(bucket)
        .key(key)
        .send()
        .await?;

    // There are no modeled errors to handle when deleting an object.

    Ok(())
}
  • For API details, see DeleteObject in AWS SDK for Rust API Reference.

The following code example shows how to use DeleteObjects.

SDK for Rust
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

/// Delete the objects in a bucket.
pub async fn delete_objects(
    client: &aws_sdk_s3::Client,
    bucket_name: &str,
    objects_to_delete: Vec<String>,
) -> Result<(), S3ExampleError> {
    // Push into a mut vector to use `?` early return errors while building object keys.
    let mut delete_object_ids: Vec<aws_sdk_s3::types::ObjectIdentifier> = vec![];
    for obj in objects_to_delete {
        let obj_id = aws_sdk_s3::types::ObjectIdentifier::builder()
            .key(obj)
            .build()
            .map_err(|err| {
                S3ExampleError::new(format!("Failed to build key for delete_object: {err:?}"))
            })?;
        delete_object_ids.push(obj_id);
    }

    client
        .delete_objects()
        .bucket(bucket_name)
        .delete(
            aws_sdk_s3::types::Delete::builder()
                .set_objects(Some(delete_object_ids))
                .build()
                .map_err(|err| {
                    S3ExampleError::new(format!("Failed to build delete_object input {err:?}"))
                })?,
        )
        .send()
        .await?;
    Ok(())
}
  • For API details, see DeleteObjects in AWS SDK for Rust API Reference.

The following code example shows how to use GetBucketLocation.

SDK for Rust
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

async fn show_buckets(
    strict: bool,
    client: &Client,
    region: BucketLocationConstraint,
) -> Result<(), S3ExampleError> {
    let mut buckets = client.list_buckets().into_paginator().send();

    let mut num_buckets = 0;
    let mut in_region = 0;

    while let Some(Ok(output)) = buckets.next().await {
        for bucket in output.buckets() {
            num_buckets += 1;
            if strict {
                let r = client
                    .get_bucket_location()
                    .bucket(bucket.name().unwrap_or_default())
                    .send()
                    .await?;

                if r.location_constraint() == Some(&region) {
                    println!("{}", bucket.name().unwrap_or_default());
                    in_region += 1;
                }
            } else {
                println!("{}", bucket.name().unwrap_or_default());
            }
        }
    }

    println!();
    if strict {
        println!(
            "Found {} buckets in the {} region out of a total of {} buckets.",
            in_region, region, num_buckets
        );
    } else {
        println!("Found {} buckets in all regions.", num_buckets);
    }

    Ok(())
}
  • For API details, see GetBucketLocation in AWS SDK for Rust API Reference.

The following code example shows how to use GetObject.

SDK for Rust
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

async fn get_object(client: Client, opt: Opt) -> Result<usize, S3ExampleError> {
    trace!("bucket: {}", opt.bucket);
    trace!("object: {}", opt.object);
    trace!("destination: {}", opt.destination.display());

    let mut file = File::create(opt.destination.clone()).map_err(|err| {
        S3ExampleError::new(format!(
            "Failed to initialize file for saving S3 download: {err:?}"
        ))
    })?;

    let mut object = client
        .get_object()
        .bucket(opt.bucket)
        .key(opt.object)
        .send()
        .await?;

    let mut byte_count = 0_usize;
    while let Some(bytes) = object.body.try_next().await.map_err(|err| {
        S3ExampleError::new(format!("Failed to read from S3 download stream: {err:?}"))
    })? {
        let bytes_len = bytes.len();
        file.write_all(&bytes).map_err(|err| {
            S3ExampleError::new(format!(
                "Failed to write from S3 download stream to local file: {err:?}"
            ))
        })?;
        trace!("Intermediate write of {bytes_len}");
        byte_count += bytes_len;
    }

    Ok(byte_count)
}
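This excerpt omits its surrounding declarations. A plausible sketch of what it assumes is shown below; the Opt struct here is hypothetical (the full example presumably derives it with clap, which the testing Cargo.toml later on this page includes):

use std::fs::File;
use std::io::Write;
use std::path::PathBuf;

use aws_sdk_s3::Client;
use s3_code_examples::error::S3ExampleError;
use tracing::trace;

// Hypothetical command-line options struct assumed by `get_object`.
struct Opt {
    bucket: String,
    object: String,
    destination: PathBuf,
}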
  • For API details, see GetObject in AWS SDK for Rust API Reference.

The following code example shows how to use ListBuckets.

SDK for Rust
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

async fn show_buckets(
    strict: bool,
    client: &Client,
    region: BucketLocationConstraint,
) -> Result<(), S3ExampleError> {
    let mut buckets = client.list_buckets().into_paginator().send();

    let mut num_buckets = 0;
    let mut in_region = 0;

    while let Some(Ok(output)) = buckets.next().await {
        for bucket in output.buckets() {
            num_buckets += 1;
            if strict {
                let r = client
                    .get_bucket_location()
                    .bucket(bucket.name().unwrap_or_default())
                    .send()
                    .await?;

                if r.location_constraint() == Some(&region) {
                    println!("{}", bucket.name().unwrap_or_default());
                    in_region += 1;
                }
            } else {
                println!("{}", bucket.name().unwrap_or_default());
            }
        }
    }

    println!();
    if strict {
        println!(
            "Found {} buckets in the {} region out of a total of {} buckets.",
            in_region, region, num_buckets
        );
    } else {
        println!("Found {} buckets in all regions.", num_buckets);
    }

    Ok(())
}
  • For API details, see ListBuckets in AWS SDK for Rust API Reference.

The following code example shows how to use ListObjectVersions.

SDK for Rust
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

async fn show_versions(client: &Client, bucket: &str) -> Result<(), Error> {
    let resp = client.list_object_versions().bucket(bucket).send().await?;

    for version in resp.versions() {
        println!("{}", version.key().unwrap_or_default());
        println!("  version ID: {}", version.version_id().unwrap_or_default());
        println!();
    }

    Ok(())
}
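The Error type in this signature isn't defined in the excerpt; it is presumably the SDK's catch-all error enum, so the snippet assumes imports along these lines:

// Assumed imports for the excerpt above (not part of the original snippet).
use aws_sdk_s3::{Client, Error};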
  • For API details, see ListObjectVersions in AWS SDK for Rust API Reference.

The following code example shows how to use ListObjectsV2.

SDK for Rust
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

pub async fn list_objects(client: &aws_sdk_s3::Client, bucket: &str) -> Result<(), S3ExampleError> {
    let mut response = client
        .list_objects_v2()
        .bucket(bucket.to_owned())
        .max_keys(10) // In this example, go 10 at a time.
        .into_paginator()
        .send();

    while let Some(result) = response.next().await {
        match result {
            Ok(output) => {
                for object in output.contents() {
                    println!(" - {}", object.key().unwrap_or("Unknown"));
                }
            }
            Err(err) => {
                eprintln!("{err:?}")
            }
        }
    }

    Ok(())
}

The following code example shows how to use PutObject.

SDK for Rust
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

pub async fn upload_object(
    client: &aws_sdk_s3::Client,
    bucket_name: &str,
    file_name: &str,
    key: &str,
) -> Result<aws_sdk_s3::operation::put_object::PutObjectOutput, S3ExampleError> {
    let body = aws_sdk_s3::primitives::ByteStream::from_path(std::path::Path::new(file_name)).await;
    client
        .put_object()
        .bucket(bucket_name)
        .key(key)
        .body(body.unwrap())
        .send()
        .await
        .map_err(S3ExampleError::from)
}
  • For API details, see PutObject in AWS SDK for Rust API Reference.

The following code example shows how to use UploadPart.

SDK for Rust
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

let mut upload_parts: Vec<aws_sdk_s3::types::CompletedPart> = Vec::new();

for chunk_index in 0..chunk_count {
    let this_chunk = if chunk_count - 1 == chunk_index {
        size_of_last_chunk
    } else {
        CHUNK_SIZE
    };
    let stream = ByteStream::read_from()
        .path(path)
        .offset(chunk_index * CHUNK_SIZE)
        .length(Length::Exact(this_chunk))
        .build()
        .await
        .unwrap();

    // Chunk index needs to start at 0, but part numbers start at 1.
    let part_number = (chunk_index as i32) + 1;
    let upload_part_res = client
        .upload_part()
        .key(&key)
        .bucket(&bucket_name)
        .upload_id(upload_id)
        .body(stream)
        .part_number(part_number)
        .send()
        .await?;

    upload_parts.push(
        CompletedPart::builder()
            .e_tag(upload_part_res.e_tag.unwrap_or_default())
            .part_number(part_number)
            .build(),
    );
}
// Create a multipart upload. Use UploadPart and CompleteMultipartUpload to
// upload the file.
let multipart_upload_res: CreateMultipartUploadOutput = client
    .create_multipart_upload()
    .bucket(&bucket_name)
    .key(&key)
    .send()
    .await?;

let upload_id = multipart_upload_res.upload_id().ok_or(S3ExampleError::new(
    "Missing upload_id after CreateMultipartUpload",
))?;
// upload_parts: Vec<aws_sdk_s3::types::CompletedPart>
let completed_multipart_upload: CompletedMultipartUpload = CompletedMultipartUpload::builder()
    .set_parts(Some(upload_parts))
    .build();

let _complete_multipart_upload_res = client
    .complete_multipart_upload()
    .bucket(&bucket_name)
    .key(&key)
    .multipart_upload(completed_multipart_upload)
    .upload_id(upload_id)
    .send()
    .await?;
  • For API details, see UploadPart in AWS SDK for Rust API Reference.

Scenarios

The following code example shows you how to:

  • Use Amazon Polly to synthesize a plain text (UTF-8) input file to an audio file.

  • Upload the audio file to an Amazon S3 bucket.

  • Use Amazon Transcribe to convert the audio file to text.

  • Display the text.

SDK for Rust

Use Amazon Polly to synthesize a plain text (UTF-8) input file to an audio file, upload the audio file to an Amazon S3 bucket, use Amazon Transcribe to convert that audio file to text, and display the text.

For complete source code and instructions on how to set up and run, see the full example on GitHub.

Services used in this example
  • Amazon Polly

  • Amazon S3

  • Amazon Transcribe

The following code example shows how to create a presigned URL for Amazon S3 and upload an object.

SDK for Rust
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Create presigning requests to GET an S3 object.

/// Generate a URL for a presigned GET request.
async fn get_object(
    client: &Client,
    bucket: &str,
    object: &str,
    expires_in: u64,
) -> Result<(), Box<dyn Error>> {
    let expires_in = Duration::from_secs(expires_in);
    let presigned_request = client
        .get_object()
        .bucket(bucket)
        .key(object)
        .presigned(PresigningConfig::expires_in(expires_in)?)
        .await?;

    println!("Object URI: {}", presigned_request.uri());
    let valid_until = chrono::offset::Local::now() + expires_in;
    println!("Valid until: {valid_until}");

    Ok(())
}

Create presigning requests to PUT an S3 object.

async fn put_object(
    client: &Client,
    bucket: &str,
    object: &str,
    expires_in: u64,
) -> Result<String, S3ExampleError> {
    let expires_in: std::time::Duration = std::time::Duration::from_secs(expires_in);
    let expires_in: aws_sdk_s3::presigning::PresigningConfig =
        PresigningConfig::expires_in(expires_in).map_err(|err| {
            S3ExampleError::new(format!(
                "Failed to convert expiration to PresigningConfig: {err:?}"
            ))
        })?;

    let presigned_request = client
        .put_object()
        .bucket(bucket)
        .key(object)
        .presigned(expires_in)
        .await?;

    Ok(presigned_request.uri().into())
}
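Anyone holding the returned URL can then upload without AWS credentials, because the signature travels in the URL's query string. A minimal sketch of the client side, assuming the reqwest crate (not part of this example's dependencies):

// Hypothetical client-side upload through the presigned URL returned by
// `put_object` above. Any HTTP client works; reqwest is just an example.
async fn upload_via_presigned_url(url: &str, body: Vec<u8>) -> Result<(), reqwest::Error> {
    let response = reqwest::Client::new().put(url).body(body).send().await?;
    println!("Presigned upload returned HTTP {}", response.status());
    Ok(())
}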

The following code example shows how to create a serverless application that lets users manage photos using labels.

SDK for Rust

Shows how to develop a photo asset management application that detects labels in images using Amazon Rekognition and stores them for later retrieval.

For complete source code and instructions on how to set up and run, see the full example on GitHub.

For a deep dive into the origin of this example, see the post on AWS Community.

Services used in this example
  • API Gateway

  • DynamoDB

  • Lambda

  • Amazon Rekognition

  • Amazon S3

  • Amazon SNS

The following code example shows you how to:

  • Save images in an Amazon S3 bucket.

  • Use Amazon Rekognition to detect facial details, such as age range, gender, and emotion (such as smiling).

  • Display those details.

SDK for Rust

Save images in an Amazon S3 bucket with an uploads prefix, use Amazon Rekognition to detect facial details such as age range, gender, and emotion (such as smiling), and display those details.

For complete source code and instructions on how to set up and run, see the full example on GitHub.

Services used in this example
  • Amazon Rekognition

  • Amazon S3

The following code example shows how to read data from an object in an S3 bucket, but only if that object has not been modified since it was last retrieved.

SDK for Rust
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

use aws_sdk_s3::{
    error::SdkError,
    primitives::{ByteStream, DateTime, DateTimeFormat},
    Client,
};
use s3_code_examples::error::S3ExampleError;
use tracing::{error, warn};

const KEY: &str = "key";
const BODY: &str = "Hello, world!";

/// Demonstrate how `if-modified-since` reports that matching objects haven't
/// changed.
///
/// # Steps
/// - Create a bucket.
/// - Put an object in the bucket.
/// - Get the object headers.
/// - Get the object headers again, but only if modified.
/// - Delete the bucket.
#[tokio::main]
async fn main() -> Result<(), S3ExampleError> {
    tracing_subscriber::fmt::init();

    // Get a new UUID to use when creating a unique bucket name.
    let uuid = uuid::Uuid::new_v4();

    // Load the AWS configuration from the environment.
    let client = Client::new(&aws_config::load_from_env().await);

    // Generate a unique bucket name using the previously generated UUID.
    // Then create a new bucket with that name.
    let bucket_name = format!("if-modified-since-{uuid}");
    client
        .create_bucket()
        .bucket(bucket_name.clone())
        .send()
        .await?;

    // Create a new object in the bucket whose name is `KEY` and whose
    // contents are `BODY`.
    let put_object_output = client
        .put_object()
        .bucket(bucket_name.as_str())
        .key(KEY)
        .body(ByteStream::from_static(BODY.as_bytes()))
        .send()
        .await;

    // If the `PutObject` succeeded, get the eTag string from it. Otherwise,
    // report an error and return an empty string.
    let e_tag_1 = match put_object_output {
        Ok(put_object) => put_object.e_tag.unwrap(),
        Err(err) => {
            error!("{err:?}");
            String::new()
        }
    };

    // Request the object's headers.
    let head_object_output = client
        .head_object()
        .bucket(bucket_name.as_str())
        .key(KEY)
        .send()
        .await;

    // If the `HeadObject` request succeeded, create a tuple containing the
    // values of the headers `last-modified` and `etag`. If the request
    // failed, return the error in a tuple instead.
    let (last_modified, e_tag_2) = match head_object_output {
        Ok(head_object) => (
            Ok(head_object.last_modified().cloned().unwrap()),
            head_object.e_tag.unwrap(),
        ),
        Err(err) => (Err(err), String::new()),
    };

    warn!("last modified: {last_modified:?}");
    assert_eq!(
        e_tag_1, e_tag_2,
        "PutObject and first GetObject had differing eTags"
    );

    println!("First value of last_modified: {last_modified:?}");
    println!("First tag: {}\n", e_tag_1);

    // Send a second `HeadObject` request. This time, the `if_modified_since`
    // option is specified, giving the `last_modified` value returned by the
    // first call to `HeadObject`.
    //
    // Since the object hasn't been changed, and there are no other objects in
    // the bucket, there should be no matching objects.
    let head_object_output = client
        .head_object()
        .bucket(bucket_name.as_str())
        .key(KEY)
        .if_modified_since(last_modified.unwrap())
        .send()
        .await;

    // If the `HeadObject` request succeeded, the result is a tuple containing
    // the `last_modified` and `e_tag_1` properties. This is _not_ the expected
    // result.
    //
    // The _expected_ result of the second call to `HeadObject` is an
    // `SdkError::ServiceError` containing the HTTP error response. If that's
    // the case and the HTTP status is 304 (not modified), the output is a
    // tuple containing the values of the HTTP `last-modified` and `etag`
    // headers.
    //
    // If any other HTTP error occurred, the error is returned as an
    // `SdkError::ServiceError`.
    let (last_modified, e_tag_2) = match head_object_output {
        Ok(head_object) => (
            Ok(head_object.last_modified().cloned().unwrap()),
            head_object.e_tag.unwrap(),
        ),
        Err(err) => match err {
            SdkError::ServiceError(err) => {
                // Get the raw HTTP response. If its status is 304, the
                // object has not changed. This is the expected code path.
                let http = err.raw();
                match http.status().as_u16() {
                    // If the HTTP status is 304: Not Modified, return a
                    // tuple containing the values of the HTTP
                    // `last-modified` and `etag` headers.
                    304 => (
                        Ok(DateTime::from_str(
                            http.headers().get("last-modified").unwrap(),
                            DateTimeFormat::HttpDate,
                        )
                        .unwrap()),
                        http.headers().get("etag").map(|t| t.into()).unwrap(),
                    ),
                    // Any other HTTP status code is returned as an
                    // `SdkError::ServiceError`.
                    _ => (Err(SdkError::ServiceError(err)), String::new()),
                }
            }
            // Any other kind of error is returned in a tuple containing the
            // error and an empty string.
            _ => (Err(err), String::new()),
        },
    };

    warn!("last modified: {last_modified:?}");
    assert_eq!(
        e_tag_1, e_tag_2,
        "PutObject and second HeadObject had different eTags"
    );

    println!("Second value of last modified: {last_modified:?}");
    println!("Second tag: {}", e_tag_2);

    // Clean up by deleting the object and the bucket.
    client
        .delete_object()
        .bucket(bucket_name.as_str())
        .key(KEY)
        .send()
        .await?;
    client
        .delete_bucket()
        .bucket(bucket_name.as_str())
        .send()
        .await?;

    Ok(())
}
  • For API details, see GetObject in AWS SDK for Rust API Reference.

The following code example shows you how to:

  • Get EXIF information from a JPG, JPEG, or PNG file.

  • Upload the image file to an Amazon S3 bucket.

  • Use Amazon Rekognition to identify the three top attributes (labels) in the file.

  • Add the EXIF and label information to an Amazon DynamoDB table in the Region.

SDK for Rust

Get EXIF information from a JPG, JPEG, or PNG file, upload the image file to an Amazon S3 bucket, use Amazon Rekognition to identify the three top attributes (labels in Amazon Rekognition) in the file, and add the EXIF and label information to an Amazon DynamoDB table in the Region.

For complete source code and instructions on how to set up and run, see the full example on GitHub.

Services used in this example
  • DynamoDB

  • Amazon Rekognition

  • Amazon S3

The following code example shows how to use best-practice techniques when writing unit and integration tests using an AWS SDK.

SDK for Rust
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Cargo.toml for the testing examples.

[package]
name = "testing-examples"
version = "0.1.0"
authors = [
    "John Disanti <jdisanti@amazon.com>",
    "Doug Schwartz <dougsch@amazon.com>",
]
edition = "2021"

[dependencies]
async-trait = "0.1.51"
aws-config = { version = "1.0.1", features = ["behavior-version-latest"] }
aws-credential-types = { version = "1.0.1", features = [
    "hardcoded-credentials",
] }
aws-sdk-s3 = { version = "1.4.0" }
aws-smithy-types = { version = "1.0.1" }
aws-smithy-runtime = { version = "1.0.1", features = ["test-util"] }
aws-smithy-runtime-api = { version = "1.0.1", features = ["test-util"] }
aws-types = { version = "1.0.1" }
clap = { version = "4.4", features = ["derive"] }
http = "0.2.9"
mockall = "0.11.4"
serde_json = "1"
tokio = { version = "1.20.1", features = ["full"] }
tracing-subscriber = { version = "0.3.15", features = ["env-filter"] }

[[bin]]
name = "main"
path = "src/main.rs"

Example of unit testing using automock and a service wrapper.

use aws_sdk_s3 as s3;

#[allow(unused_imports)]
use mockall::automock;

use s3::operation::list_objects_v2::{ListObjectsV2Error, ListObjectsV2Output};

#[cfg(test)]
pub use MockS3Impl as S3;
#[cfg(not(test))]
pub use S3Impl as S3;

#[allow(dead_code)]
pub struct S3Impl {
    inner: s3::Client,
}

#[cfg_attr(test, automock)]
impl S3Impl {
    #[allow(dead_code)]
    pub fn new(inner: s3::Client) -> Self {
        Self { inner }
    }

    #[allow(dead_code)]
    pub async fn list_objects(
        &self,
        bucket: &str,
        prefix: &str,
        continuation_token: Option<String>,
    ) -> Result<ListObjectsV2Output, s3::error::SdkError<ListObjectsV2Error>> {
        self.inner
            .list_objects_v2()
            .bucket(bucket)
            .prefix(prefix)
            .set_continuation_token(continuation_token)
            .send()
            .await
    }
}

#[allow(dead_code)]
pub async fn determine_prefix_file_size(
    // Now we take a reference to our trait object instead of the S3 client
    // s3_list: ListObjectsService,
    s3_list: S3,
    bucket: &str,
    prefix: &str,
) -> Result<usize, s3::Error> {
    let mut next_token: Option<String> = None;
    let mut total_size_bytes = 0;
    loop {
        let result = s3_list
            .list_objects(bucket, prefix, next_token.take())
            .await?;

        // Add up the file sizes we got back
        for object in result.contents() {
            total_size_bytes += object.size().unwrap_or(0) as usize;
        }

        // Handle pagination, and break the loop if there are no more pages
        next_token = result.next_continuation_token.clone();
        if next_token.is_none() {
            break;
        }
    }
    Ok(total_size_bytes)
}

#[cfg(test)]
mod test {
    use super::*;
    use mockall::predicate::eq;

    #[tokio::test]
    async fn test_single_page() {
        let mut mock = MockS3Impl::default();
        mock.expect_list_objects()
            .with(eq("test-bucket"), eq("test-prefix"), eq(None))
            .return_once(|_, _, _| {
                Ok(ListObjectsV2Output::builder()
                    .set_contents(Some(vec![
                        // Mock content for ListObjectsV2 response
                        s3::types::Object::builder().size(5).build(),
                        s3::types::Object::builder().size(2).build(),
                    ]))
                    .build())
            });

        // Run the code we want to test with it
        let size = determine_prefix_file_size(mock, "test-bucket", "test-prefix")
            .await
            .unwrap();

        // Verify we got the correct total size back
        assert_eq!(7, size);
    }

    #[tokio::test]
    async fn test_multiple_pages() {
        // Create the Mock instance with two pages of objects now
        let mut mock = MockS3Impl::default();
        mock.expect_list_objects()
            .with(eq("test-bucket"), eq("test-prefix"), eq(None))
            .return_once(|_, _, _| {
                Ok(ListObjectsV2Output::builder()
                    .set_contents(Some(vec![
                        // Mock content for ListObjectsV2 response
                        s3::types::Object::builder().size(5).build(),
                        s3::types::Object::builder().size(2).build(),
                    ]))
                    .set_next_continuation_token(Some("next".to_string()))
                    .build())
            });
        mock.expect_list_objects()
            .with(
                eq("test-bucket"),
                eq("test-prefix"),
                eq(Some("next".to_string())),
            )
            .return_once(|_, _, _| {
                Ok(ListObjectsV2Output::builder()
                    .set_contents(Some(vec![
                        // Mock content for ListObjectsV2 response
                        s3::types::Object::builder().size(3).build(),
                        s3::types::Object::builder().size(9).build(),
                    ]))
                    .build())
            });

        // Run the code we want to test with it
        let size = determine_prefix_file_size(mock, "test-bucket", "test-prefix")
            .await
            .unwrap();

        assert_eq!(19, size);
    }
}

Example of integration testing using StaticReplayClient.

use aws_sdk_s3 as s3;

#[allow(dead_code)]
pub async fn determine_prefix_file_size(
    // Now we take a reference to our trait object instead of the S3 client
    // s3_list: ListObjectsService,
    s3: s3::Client,
    bucket: &str,
    prefix: &str,
) -> Result<usize, s3::Error> {
    let mut next_token: Option<String> = None;
    let mut total_size_bytes = 0;
    loop {
        let result = s3
            .list_objects_v2()
            .prefix(prefix)
            .bucket(bucket)
            .set_continuation_token(next_token.take())
            .send()
            .await?;

        // Add up the file sizes we got back
        for object in result.contents() {
            total_size_bytes += object.size().unwrap_or(0) as usize;
        }

        // Handle pagination, and break the loop if there are no more pages
        next_token = result.next_continuation_token.clone();
        if next_token.is_none() {
            break;
        }
    }
    Ok(total_size_bytes)
}

#[allow(dead_code)]
fn make_s3_test_credentials() -> s3::config::Credentials {
    s3::config::Credentials::new(
        "ATESTCLIENT",
        "astestsecretkey",
        Some("atestsessiontoken".to_string()),
        None,
        "",
    )
}

#[cfg(test)]
mod test {
    use super::*;
    use aws_config::BehaviorVersion;
    use aws_sdk_s3 as s3;
    use aws_smithy_runtime::client::http::test_util::{ReplayEvent, StaticReplayClient};
    use aws_smithy_types::body::SdkBody;

    #[tokio::test]
    async fn test_single_page() {
        let page_1 = ReplayEvent::new(
            http::Request::builder()
                .method("GET")
                .uri("https://test-bucket.s3.us-east-1.amazonaws.com/?list-type=2&prefix=test-prefix")
                .body(SdkBody::empty())
                .unwrap(),
            http::Response::builder()
                .status(200)
                .body(SdkBody::from(include_str!("./testing/response_1.xml")))
                .unwrap(),
        );
        let replay_client = StaticReplayClient::new(vec![page_1]);
        let client: s3::Client = s3::Client::from_conf(
            s3::Config::builder()
                .behavior_version(BehaviorVersion::latest())
                .credentials_provider(make_s3_test_credentials())
                .region(s3::config::Region::new("us-east-1"))
                .http_client(replay_client.clone())
                .build(),
        );

        // Run the code we want to test with it
        let size = determine_prefix_file_size(client, "test-bucket", "test-prefix")
            .await
            .unwrap();

        // Verify we got the correct total size back
        assert_eq!(7, size);
        replay_client.assert_requests_match(&[]);
    }

    #[tokio::test]
    async fn test_multiple_pages() {
        let page_1 = ReplayEvent::new(
            http::Request::builder()
                .method("GET")
                .uri("https://test-bucket.s3.us-east-1.amazonaws.com/?list-type=2&prefix=test-prefix")
                .body(SdkBody::empty())
                .unwrap(),
            http::Response::builder()
                .status(200)
                .body(SdkBody::from(include_str!("./testing/response_multi_1.xml")))
                .unwrap(),
        );
        let page_2 = ReplayEvent::new(
            http::Request::builder()
                .method("GET")
                .uri("https://test-bucket.s3.us-east-1.amazonaws.com/?list-type=2&prefix=test-prefix&continuation-token=next")
                .body(SdkBody::empty())
                .unwrap(),
            http::Response::builder()
                .status(200)
                .body(SdkBody::from(include_str!("./testing/response_multi_2.xml")))
                .unwrap(),
        );
        let replay_client = StaticReplayClient::new(vec![page_1, page_2]);
        let client: s3::Client = s3::Client::from_conf(
            s3::Config::builder()
                .behavior_version(BehaviorVersion::latest())
                .credentials_provider(make_s3_test_credentials())
                .region(s3::config::Region::new("us-east-1"))
                .http_client(replay_client.clone())
                .build(),
        );

        // Run the code we want to test with it
        let size = determine_prefix_file_size(client, "test-bucket", "test-prefix")
            .await
            .unwrap();

        assert_eq!(19, size);
        replay_client.assert_requests_match(&[]);
    }
}

The following code example shows how to upload or download large files to and from Amazon S3.

For more information, see Uploading an object using multipart upload.

SDK for Rust
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

use std::fs::File;
use std::io::prelude::*;
use std::path::Path;

use aws_config::meta::region::RegionProviderChain;
use aws_sdk_s3::error::DisplayErrorContext;
use aws_sdk_s3::operation::{
    create_multipart_upload::CreateMultipartUploadOutput, get_object::GetObjectOutput,
};
use aws_sdk_s3::types::{CompletedMultipartUpload, CompletedPart};
use aws_sdk_s3::{config::Region, Client as S3Client};
use aws_smithy_types::byte_stream::{ByteStream, Length};
use rand::distributions::Alphanumeric;
use rand::{thread_rng, Rng};
use s3_code_examples::error::S3ExampleError;
use std::process;
use uuid::Uuid;

// In bytes, minimum chunk size of 5MB. Increase CHUNK_SIZE to send larger chunks.
const CHUNK_SIZE: u64 = 1024 * 1024 * 5;
const MAX_CHUNKS: u64 = 10000;

#[tokio::main]
pub async fn main() {
    if let Err(err) = run_example().await {
        eprintln!("Error: {}", DisplayErrorContext(err));
        process::exit(1);
    }
}

async fn run_example() -> Result<(), S3ExampleError> {
    let shared_config = aws_config::load_from_env().await;
    let client = S3Client::new(&shared_config);

    let bucket_name = format!("amzn-s3-demo-bucket-{}", Uuid::new_v4());
    let region_provider = RegionProviderChain::first_try(Region::new("us-west-2"));
    let region = region_provider.region().await.unwrap();
    s3_code_examples::create_bucket(&client, &bucket_name, &region).await?;

    let key = "sample.txt".to_string();

    // Create a multipart upload. Use UploadPart and CompleteMultipartUpload to
    // upload the file.
    let multipart_upload_res: CreateMultipartUploadOutput = client
        .create_multipart_upload()
        .bucket(&bucket_name)
        .key(&key)
        .send()
        .await?;
    let upload_id = multipart_upload_res.upload_id().ok_or(S3ExampleError::new(
        "Missing upload_id after CreateMultipartUpload",
    ))?;

    // Create a file of random characters for the upload.
    let mut file = File::create(&key).expect("Could not create sample file.");
    // Loop until the file is 5 chunks.
    while file.metadata().unwrap().len() <= CHUNK_SIZE * 4 {
        let rand_string: String = thread_rng()
            .sample_iter(&Alphanumeric)
            .take(256)
            .map(char::from)
            .collect();
        let return_string: String = "\n".to_string();
        file.write_all(rand_string.as_ref())
            .expect("Error writing to file.");
        file.write_all(return_string.as_ref())
            .expect("Error writing to file.");
    }

    let path = Path::new(&key);
    let file_size = tokio::fs::metadata(path)
        .await
        .expect("it exists I swear")
        .len();

    let mut chunk_count = (file_size / CHUNK_SIZE) + 1;
    let mut size_of_last_chunk = file_size % CHUNK_SIZE;
    if size_of_last_chunk == 0 {
        size_of_last_chunk = CHUNK_SIZE;
        chunk_count -= 1;
    }

    if file_size == 0 {
        return Err(S3ExampleError::new("Bad file size."));
    }
    if chunk_count > MAX_CHUNKS {
        return Err(S3ExampleError::new(
            "Too many chunks! Try increasing your chunk size.",
        ));
    }

    let mut upload_parts: Vec<aws_sdk_s3::types::CompletedPart> = Vec::new();

    for chunk_index in 0..chunk_count {
        let this_chunk = if chunk_count - 1 == chunk_index {
            size_of_last_chunk
        } else {
            CHUNK_SIZE
        };
        let stream = ByteStream::read_from()
            .path(path)
            .offset(chunk_index * CHUNK_SIZE)
            .length(Length::Exact(this_chunk))
            .build()
            .await
            .unwrap();

        // Chunk index needs to start at 0, but part numbers start at 1.
        let part_number = (chunk_index as i32) + 1;
        let upload_part_res = client
            .upload_part()
            .key(&key)
            .bucket(&bucket_name)
            .upload_id(upload_id)
            .body(stream)
            .part_number(part_number)
            .send()
            .await?;

        upload_parts.push(
            CompletedPart::builder()
                .e_tag(upload_part_res.e_tag.unwrap_or_default())
                .part_number(part_number)
                .build(),
        );
    }

    // upload_parts: Vec<aws_sdk_s3::types::CompletedPart>
    let completed_multipart_upload: CompletedMultipartUpload = CompletedMultipartUpload::builder()
        .set_parts(Some(upload_parts))
        .build();

    let _complete_multipart_upload_res = client
        .complete_multipart_upload()
        .bucket(&bucket_name)
        .key(&key)
        .multipart_upload(completed_multipart_upload)
        .upload_id(upload_id)
        .send()
        .await?;

    let data: GetObjectOutput =
        s3_code_examples::download_object(&client, &bucket_name, &key).await?;
    let data_length: u64 = data
        .content_length()
        .unwrap_or_default()
        .try_into()
        .unwrap();
    if file.metadata().unwrap().len() == data_length {
        println!("Data lengths match.");
    } else {
        println!("The data was not the same size!");
    }

    s3_code_examples::clear_bucket(&client, &bucket_name)
        .await
        .expect("Error emptying bucket.");
    s3_code_examples::delete_bucket(&client, &bucket_name)
        .await
        .expect("Error deleting bucket.");

    Ok(())
}

Serverless examples

The following code example shows how to implement a Lambda function that receives an event triggered by uploading an object to an S3 bucket. The function retrieves the S3 bucket name and object key from the event parameter and calls the Amazon S3 API to retrieve and log the content type of the object.

SDK for Rust
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository.

Consuming an S3 event with Lambda using Rust.

// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
use aws_lambda_events::event::s3::S3Event;
use aws_sdk_s3::Client;
use lambda_runtime::{run, service_fn, Error, LambdaEvent};

/// Main function
#[tokio::main]
async fn main() -> Result<(), Error> {
    tracing_subscriber::fmt()
        .with_max_level(tracing::Level::INFO)
        .with_target(false)
        .without_time()
        .init();

    // Initialize the AWS SDK for Rust
    let config = aws_config::load_from_env().await;
    let s3_client = Client::new(&config);

    let res = run(service_fn(|request: LambdaEvent<S3Event>| {
        function_handler(&s3_client, request)
    }))
    .await;

    res
}

async fn function_handler(s3_client: &Client, evt: LambdaEvent<S3Event>) -> Result<(), Error> {
    tracing::info!(records = ?evt.payload.records.len(), "Received S3 event");

    if evt.payload.records.is_empty() {
        tracing::info!("Empty S3 event received");
    }

    let bucket = evt.payload.records[0]
        .s3
        .bucket
        .name
        .as_ref()
        .expect("Bucket name to exist");
    let key = evt.payload.records[0]
        .s3
        .object
        .key
        .as_ref()
        .expect("Object key to exist");

    tracing::info!("Request is for {} and object {}", bucket, key);

    let s3_get_object_result = s3_client
        .get_object()
        .bucket(bucket)
        .key(key)
        .send()
        .await;

    match s3_get_object_result {
        Ok(_) => tracing::info!(
            "S3 Get Object success, the s3GetObjectResult contains a 'body' property of type ByteStream"
        ),
        Err(_) => tracing::info!("Failure with S3 Get Object request"),
    }

    Ok(())
}