Uploading objects using multipart upload - Amazon Simple Storage Service

Uploading objects using multipart upload

You can use multipart upload to programmatically upload a single object to Amazon S3. Each object is uploaded as a set of parts. Each part is a contiguous portion of the object's data. You can upload these object parts independently and in any order. If transmission of any part fails, you can retransmit that part without affecting the other parts. After all parts of the object are uploaded, Amazon S3 assembles the parts and creates the object.

For an end-to-end procedure for uploading an object with multipart upload and an additional checksum, see Tutorial: Upload an object through multipart upload and verify its data integrity.

The following sections show how to use multipart upload with the AWS Command Line Interface and the AWS SDKs.

You can upload any file type (images, backups, data, movies, and so on) to an S3 bucket. The maximum file size that you can upload by using the Amazon S3 console is 160 GB. To upload a file larger than 160 GB, use the AWS Command Line Interface (AWS CLI), the AWS SDKs, or the Amazon S3 REST API.

For instructions on uploading an object through the AWS Management Console, see Uploading objects.

The following describes the Amazon S3 operations for multipart upload using the AWS CLI.

The following sections in the Amazon Simple Storage Service API Reference describe the REST API for multipart upload.

Some AWS SDKs expose a high-level API that simplifies multipart upload by combining the separate API operations required to complete a multipart upload into a single operation. For more information, see Uploading and copying objects using multipart upload.
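
For example, with the AWS SDK for Python (Boto3), the high-level upload_file transfer method switches to multipart upload automatically once a file exceeds the configured threshold. The following is a minimal sketch of that idea; the bucket name, key, file path, and the 8 MB threshold and part size are placeholder assumptions, not values taken from this guide.

import boto3
from boto3.s3.transfer import TransferConfig

# Example settings (assumed for illustration): files larger than 8 MB are
# sent as multipart uploads, in 8 MB parts.
MB = 1024 * 1024
config = TransferConfig(multipart_threshold=8 * MB, multipart_chunksize=8 * MB)

s3_client = boto3.client("s3")

# Placeholder bucket, key, and local path. Replace with your own values.
s3_client.upload_file(
    Filename="large-file.bin",
    Bucket="amzn-s3-demo-bucket",
    Key="large-file.bin",
    Config=config,
)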

If you need to pause and resume multipart uploads, vary the part sizes during the upload, or if you don't know the size of the data in advance, use the low-level API methods. The low-level API methods for multipart uploads provide additional functionality. For more information, see Using the AWS SDKs (low-level API).

.NET

To upload a file to an S3 bucket, use the TransferUtility class. When uploading data from a file, you must provide the object's key name. If you don't, the API uses the file name as the key name. When uploading data from a stream, you must provide the object's key name.

To set advanced upload options, such as the part size, the number of threads when uploading the parts concurrently, metadata, the storage class, or the ACL, use the TransferUtilityUploadRequest class.

Note

When you're using a stream as the data source, the TransferUtility class doesn't perform concurrent uploads.

The following C# example uploads a file to an Amazon S3 bucket in multiple parts. It shows how to use the various TransferUtility.Upload overloads to upload a file. Each successive call to upload replaces the previous upload. For information about setting up and running the code examples, see Getting Started with the AWS SDK for .NET in the AWS SDK for .NET Developer Guide.

using Amazon;
using Amazon.S3;
using Amazon.S3.Transfer;
using System;
using System.IO;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
    class UploadFileMPUHighLevelAPITest
    {
        private const string bucketName = "*** provide bucket name ***";
        private const string keyName = "*** provide a name for the uploaded object ***";
        private const string filePath = "*** provide the full path name of the file to upload ***";
        // Specify your bucket region (an example region is shown).
        private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
        private static IAmazonS3 s3Client;

        public static void Main()
        {
            s3Client = new AmazonS3Client(bucketRegion);
            UploadFileAsync().Wait();
        }

        private static async Task UploadFileAsync()
        {
            try
            {
                var fileTransferUtility = new TransferUtility(s3Client);

                // Option 1. Upload a file. The file name is used as the object key name.
                await fileTransferUtility.UploadAsync(filePath, bucketName);
                Console.WriteLine("Upload 1 completed");

                // Option 2. Specify object key name explicitly.
                await fileTransferUtility.UploadAsync(filePath, bucketName, keyName);
                Console.WriteLine("Upload 2 completed");

                // Option 3. Upload data from a type of System.IO.Stream.
                using (var fileToUpload = new FileStream(filePath, FileMode.Open, FileAccess.Read))
                {
                    await fileTransferUtility.UploadAsync(fileToUpload, bucketName, keyName);
                }
                Console.WriteLine("Upload 3 completed");

                // Option 4. Specify advanced settings.
                var fileTransferUtilityRequest = new TransferUtilityUploadRequest
                {
                    BucketName = bucketName,
                    FilePath = filePath,
                    StorageClass = S3StorageClass.StandardInfrequentAccess,
                    PartSize = 6291456, // 6 MB.
                    Key = keyName,
                    CannedACL = S3CannedACL.PublicRead
                };
                fileTransferUtilityRequest.Metadata.Add("param1", "Value1");
                fileTransferUtilityRequest.Metadata.Add("param2", "Value2");

                await fileTransferUtility.UploadAsync(fileTransferUtilityRequest);
                Console.WriteLine("Upload 4 completed");
            }
            catch (AmazonS3Exception e)
            {
                Console.WriteLine("Error encountered on server. Message:'{0}' when writing an object", e.Message);
            }
            catch (Exception e)
            {
                Console.WriteLine("Unknown encountered on server. Message:'{0}' when writing an object", e.Message);
            }
        }
    }
}
JavaScript

Upload a large file.

import { S3Client } from "@aws-sdk/client-s3";
import { Upload } from "@aws-sdk/lib-storage";
import {
  ProgressBar,
  logger,
} from "@aws-doc-sdk-examples/lib/utils/util-log.js";

const twentyFiveMB = 25 * 1024 * 1024;

export const createString = (size = twentyFiveMB) => {
  return "x".repeat(size);
};

/**
 * Create a 25MB file and upload it in parts to the specified
 * Amazon S3 bucket.
 * @param {{ bucketName: string, key: string }}
 */
export const main = async ({ bucketName, key }) => {
  const str = createString();
  const buffer = Buffer.from(str, "utf8");
  const progressBar = new ProgressBar({
    description: `Uploading "${key}" to "${bucketName}"`,
    barLength: 30,
  });

  try {
    const upload = new Upload({
      client: new S3Client({}),
      params: {
        Bucket: bucketName,
        Key: key,
        Body: buffer,
      },
    });

    upload.on("httpUploadProgress", ({ loaded, total }) => {
      progressBar.update({ current: loaded, total });
    });

    await upload.done();
  } catch (caught) {
    if (caught instanceof Error && caught.name === "AbortError") {
      logger.error(`Multipart upload was aborted. ${caught.message}`);
    } else {
      throw caught;
    }
  }
};

Download a large file.

import { GetObjectCommand, NoSuchKey, S3Client } from "@aws-sdk/client-s3";
import { createWriteStream, rmSync } from "node:fs";
// fileURLToPath is used below to resolve the local output path.
import { fileURLToPath } from "node:url";

const s3Client = new S3Client({});
const oneMB = 1024 * 1024;

export const getObjectRange = ({ bucket, key, start, end }) => {
  const command = new GetObjectCommand({
    Bucket: bucket,
    Key: key,
    Range: `bytes=${start}-${end}`,
  });

  return s3Client.send(command);
};

/**
 * @param {string | undefined} contentRange
 */
export const getRangeAndLength = (contentRange) => {
  const [range, length] = contentRange.split("/");
  const [start, end] = range.split("-");

  return {
    start: Number.parseInt(start),
    end: Number.parseInt(end),
    length: Number.parseInt(length),
  };
};

export const isComplete = ({ end, length }) => end === length - 1;

const downloadInChunks = async ({ bucket, key }) => {
  const writeStream = createWriteStream(
    fileURLToPath(new URL(`./${key}`, import.meta.url)),
  ).on("error", (err) => console.error(err));

  let rangeAndLength = { start: -1, end: -1, length: -1 };

  while (!isComplete(rangeAndLength)) {
    const { end } = rangeAndLength;
    const nextRange = { start: end + 1, end: end + oneMB };

    const { ContentRange, Body } = await getObjectRange({
      bucket,
      key,
      ...nextRange,
    });

    console.log(`Downloaded bytes ${nextRange.start} to ${nextRange.end}`);
    writeStream.write(await Body.transformToByteArray());
    rangeAndLength = getRangeAndLength(ContentRange);
  }
};

/**
 * Download a large object from an Amazon S3 bucket.
 *
 * When downloading a large file, you might want to break it down into
 * smaller pieces. Amazon S3 accepts a Range header to specify the start
 * and end of the byte range to be downloaded.
 *
 * @param {{ bucketName: string, key: string }}
 */
export const main = async ({ bucketName, key }) => {
  try {
    await downloadInChunks({
      bucket: bucketName,
      key: key,
    });
  } catch (caught) {
    if (caught instanceof NoSuchKey) {
      console.error(`Failed to download object. No such key "${key}".`);
      rmSync(key);
    }
  }
};
Go

For more information about Go code examples for multipart upload, see Upload or download large files to and from Amazon S3 using an AWS SDK.

Upload a large object by using an upload manager to break the data into parts and upload the parts concurrently.

import ( "bytes" "context" "errors" "fmt" "io" "log" "os" "time" "github.com/aws/aws-sdk-go-v2/aws" "github.com/aws/aws-sdk-go-v2/feature/s3/manager" "github.com/aws/aws-sdk-go-v2/service/s3" "github.com/aws/aws-sdk-go-v2/service/s3/types" "github.com/aws/smithy-go" ) // BucketBasics encapsulates the Amazon Simple Storage Service (Amazon S3) actions // used in the examples. // It contains S3Client, an Amazon S3 service client that is used to perform bucket // and object actions. type BucketBasics struct { S3Client *s3.Client }
// UploadLargeObject uses an upload manager to upload data to an object in a bucket.
// The upload manager breaks large data into parts and uploads the parts concurrently.
func (basics BucketBasics) UploadLargeObject(ctx context.Context, bucketName string, objectKey string, largeObject []byte) error {
	largeBuffer := bytes.NewReader(largeObject)
	var partMiBs int64 = 10
	uploader := manager.NewUploader(basics.S3Client, func(u *manager.Uploader) {
		u.PartSize = partMiBs * 1024 * 1024
	})
	_, err := uploader.Upload(ctx, &s3.PutObjectInput{
		Bucket: aws.String(bucketName),
		Key:    aws.String(objectKey),
		Body:   largeBuffer,
	})
	if err != nil {
		var apiErr smithy.APIError
		if errors.As(err, &apiErr) && apiErr.ErrorCode() == "EntityTooLarge" {
			log.Printf("Error while uploading object to %s. The object is too large.\n"+
				"The maximum size for a multipart upload is 5TB.", bucketName)
		} else {
			log.Printf("Couldn't upload large object to %v:%v. Here's why: %v\n",
				bucketName, objectKey, err)
		}
	} else {
		err = s3.NewObjectExistsWaiter(basics.S3Client).Wait(
			ctx, &s3.HeadObjectInput{Bucket: aws.String(bucketName), Key: aws.String(objectKey)}, time.Minute)
		if err != nil {
			log.Printf("Failed attempt to wait for object %s to exist.\n", objectKey)
		}
	}

	return err
}

Download a large object by using a download manager to get the data in parts and download the parts concurrently.

// DownloadLargeObject uses a download manager to download an object from a bucket.
// The download manager gets the data in parts and writes them to a buffer until all of
// the data has been downloaded.
func (basics BucketBasics) DownloadLargeObject(ctx context.Context, bucketName string, objectKey string) ([]byte, error) {
	var partMiBs int64 = 10
	downloader := manager.NewDownloader(basics.S3Client, func(d *manager.Downloader) {
		d.PartSize = partMiBs * 1024 * 1024
	})
	buffer := manager.NewWriteAtBuffer([]byte{})
	_, err := downloader.Download(ctx, buffer, &s3.GetObjectInput{
		Bucket: aws.String(bucketName),
		Key:    aws.String(objectKey),
	})
	if err != nil {
		log.Printf("Couldn't download large object from %v:%v. Here's why: %v\n",
			bucketName, objectKey, err)
	}

	return buffer.Bytes(), err
}
PHP

This topic explains how to use the high-level MultipartUploader class from the AWS SDK for PHP for multipart file uploads. For more information about the AWS SDK for PHP, see the AWS SDK for PHP documentation.

The following PHP example uploads a file to an Amazon S3 bucket. The example demonstrates how to set parameters for the MultipartUploader object.

require 'vendor/autoload.php';

use Aws\Exception\MultipartUploadException;
use Aws\S3\MultipartUploader;
use Aws\S3\S3Client;

$bucket = '*** Your Bucket Name ***';
$keyname = '*** Your Object Key ***';

$s3 = new S3Client([
    'version' => 'latest',
    'region'  => 'us-east-1'
]);

// Prepare the upload parameters.
$uploader = new MultipartUploader($s3, '/path/to/large/file.zip', [
    'bucket' => $bucket,
    'key'    => $keyname
]);

// Perform the upload.
try {
    $result = $uploader->upload();
    echo "Upload complete: {$result['ObjectURL']}" . PHP_EOL;
} catch (MultipartUploadException $e) {
    echo $e->getMessage() . PHP_EOL;
}
Python

The following example uploads an object by using the high-level multipart upload Python API (the TransferManager class).

import sys
import threading

import boto3
from boto3.s3.transfer import TransferConfig

MB = 1024 * 1024
s3 = boto3.resource("s3")


class TransferCallback:
    """
    Handle callbacks from the transfer manager.

    The transfer manager periodically calls the __call__ method throughout
    the upload and download process so that it can take action, such as
    displaying progress to the user and collecting data about the transfer.
    """

    def __init__(self, target_size):
        self._target_size = target_size
        self._total_transferred = 0
        self._lock = threading.Lock()
        self.thread_info = {}

    def __call__(self, bytes_transferred):
        """
        The callback method that is called by the transfer manager.

        Display progress during file transfer and collect per-thread transfer
        data. This method can be called by multiple threads, so shared instance
        data is protected by a thread lock.
        """
        thread = threading.current_thread()
        with self._lock:
            self._total_transferred += bytes_transferred
            if thread.ident not in self.thread_info.keys():
                self.thread_info[thread.ident] = bytes_transferred
            else:
                self.thread_info[thread.ident] += bytes_transferred

            target = self._target_size * MB
            sys.stdout.write(
                f"\r{self._total_transferred} of {target} transferred "
                f"({(self._total_transferred / target) * 100:.2f}%)."
            )
            sys.stdout.flush()


def upload_with_default_configuration(
    local_file_path, bucket_name, object_key, file_size_mb
):
    """
    Upload a file from a local folder to an Amazon S3 bucket, using the
    default configuration.
    """
    transfer_callback = TransferCallback(file_size_mb)
    s3.Bucket(bucket_name).upload_file(
        local_file_path, object_key, Callback=transfer_callback
    )
    return transfer_callback.thread_info


def upload_with_chunksize_and_meta(
    local_file_path, bucket_name, object_key, file_size_mb, metadata=None
):
    """
    Upload a file from a local folder to an Amazon S3 bucket, setting a
    multipart chunk size and adding metadata to the Amazon S3 object.

    The multipart chunk size controls the size of the chunks of data that
    are sent in the request. A smaller chunk size typically results in the
    transfer manager using more threads for the upload.

    The metadata is a set of key-value pairs that are stored with the object
    in Amazon S3.
    """
    transfer_callback = TransferCallback(file_size_mb)

    config = TransferConfig(multipart_chunksize=1 * MB)
    extra_args = {"Metadata": metadata} if metadata else None
    s3.Bucket(bucket_name).upload_file(
        local_file_path,
        object_key,
        Config=config,
        ExtraArgs=extra_args,
        Callback=transfer_callback,
    )
    return transfer_callback.thread_info


def upload_with_high_threshold(local_file_path, bucket_name, object_key, file_size_mb):
    """
    Upload a file from a local folder to an Amazon S3 bucket, setting a
    multipart threshold larger than the size of the file.

    Setting a multipart threshold larger than the size of the file results
    in the transfer manager sending the file as a standard upload instead
    of a multipart upload.
    """
    transfer_callback = TransferCallback(file_size_mb)
    config = TransferConfig(multipart_threshold=file_size_mb * 2 * MB)
    s3.Bucket(bucket_name).upload_file(
        local_file_path, object_key, Config=config, Callback=transfer_callback
    )
    return transfer_callback.thread_info


def upload_with_sse(
    local_file_path, bucket_name, object_key, file_size_mb, sse_key=None
):
    """
    Upload a file from a local folder to an Amazon S3 bucket, adding
    server-side encryption with customer-provided encryption keys to the
    object.

    When this kind of encryption is specified, Amazon S3 encrypts the object
    at rest and allows downloads only when the expected encryption key is
    provided in the download request.
    """
    transfer_callback = TransferCallback(file_size_mb)
    if sse_key:
        extra_args = {"SSECustomerAlgorithm": "AES256", "SSECustomerKey": sse_key}
    else:
        extra_args = None
    s3.Bucket(bucket_name).upload_file(
        local_file_path, object_key, ExtraArgs=extra_args, Callback=transfer_callback
    )
    return transfer_callback.thread_info


def download_with_default_configuration(
    bucket_name, object_key, download_file_path, file_size_mb
):
    """
    Download a file from an Amazon S3 bucket to a local folder, using the
    default configuration.
    """
    transfer_callback = TransferCallback(file_size_mb)
    s3.Bucket(bucket_name).Object(object_key).download_file(
        download_file_path, Callback=transfer_callback
    )
    return transfer_callback.thread_info


def download_with_single_thread(
    bucket_name, object_key, download_file_path, file_size_mb
):
    """
    Download a file from an Amazon S3 bucket to a local folder, using a
    single thread.
    """
    transfer_callback = TransferCallback(file_size_mb)
    config = TransferConfig(use_threads=False)
    s3.Bucket(bucket_name).Object(object_key).download_file(
        download_file_path, Config=config, Callback=transfer_callback
    )
    return transfer_callback.thread_info


def download_with_high_threshold(
    bucket_name, object_key, download_file_path, file_size_mb
):
    """
    Download a file from an Amazon S3 bucket to a local folder, setting a
    multipart threshold larger than the size of the file.

    Setting a multipart threshold larger than the size of the file results
    in the transfer manager sending the file as a standard download instead
    of a multipart download.
    """
    transfer_callback = TransferCallback(file_size_mb)
    config = TransferConfig(multipart_threshold=file_size_mb * 2 * MB)
    s3.Bucket(bucket_name).Object(object_key).download_file(
        download_file_path, Config=config, Callback=transfer_callback
    )
    return transfer_callback.thread_info


def download_with_sse(
    bucket_name, object_key, download_file_path, file_size_mb, sse_key
):
    """
    Download a file from an Amazon S3 bucket to a local folder, adding a
    customer-provided encryption key to the request.

    When this kind of encryption is specified, Amazon S3 encrypts the object
    at rest and allows downloads only when the expected encryption key is
    provided in the download request.
    """
    transfer_callback = TransferCallback(file_size_mb)

    if sse_key:
        extra_args = {"SSECustomerAlgorithm": "AES256", "SSECustomerKey": sse_key}
    else:
        extra_args = None
    s3.Bucket(bucket_name).Object(object_key).download_file(
        download_file_path, ExtraArgs=extra_args, Callback=transfer_callback
    )
    return transfer_callback.thread_info

The low-level APIs that the AWS SDKs expose closely map to the Amazon S3 REST API for multipart upload (see Uploading and copying objects using multipart upload). Use the low-level API when you need to pause and resume multipart uploads, vary the part sizes during the upload, or when you don't know the size of the upload data in advance. If you don't have these requirements, use the high-level API (see Using the AWS SDKs (high-level API)).
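
As a language-neutral illustration of how the low-level operations fit together, the following sketch uses the AWS SDK for Python (Boto3); the bucket name, key, local file path, and 8 MB part size are placeholder assumptions. Because each part is uploaded with its own request, you choose the part boundaries yourself and can stop and resume at any point by reusing the saved upload ID. The language-specific examples that follow implement the same pattern.

import boto3

s3_client = boto3.client("s3")
bucket = "amzn-s3-demo-bucket"  # placeholder bucket name
key = "large-file.bin"          # placeholder object key
part_size = 8 * 1024 * 1024     # example part size; every part except the last must be at least 5 MiB

# 1. Initiate the upload and keep the upload ID for every later call.
upload_id = s3_client.create_multipart_upload(Bucket=bucket, Key=key)["UploadId"]

parts = []
try:
    with open("large-file.bin", "rb") as f:  # placeholder local file
        part_number = 1
        while True:
            data = f.read(part_size)
            if not data:
                break
            # 2. Upload each part and remember its part number and ETag.
            response = s3_client.upload_part(
                Bucket=bucket, Key=key, UploadId=upload_id,
                PartNumber=part_number, Body=data,
            )
            parts.append({"PartNumber": part_number, "ETag": response["ETag"]})
            part_number += 1

    # 3. Complete the upload with the collected part list.
    s3_client.complete_multipart_upload(
        Bucket=bucket, Key=key, UploadId=upload_id,
        MultipartUpload={"Parts": parts},
    )
except Exception:
    # Abort on failure so incomplete parts don't keep accruing storage charges.
    s3_client.abort_multipart_upload(Bucket=bucket, Key=key, UploadId=upload_id)
    raise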

Java

The following example shows how to use the low-level Java classes to upload a file. It performs the following steps:

  • Initiates a multipart upload by calling the AmazonS3Client.initiateMultipartUpload() method and passing in an InitiateMultipartUploadRequest object.

  • Saves the upload ID that the AmazonS3Client.initiateMultipartUpload() method returns. You provide this upload ID for each subsequent multipart upload operation.

  • Uploads the parts of the object. For each part, you call the AmazonS3Client.uploadPart() method, providing the part information in an UploadPartRequest object.

  • For each part, saves the ETag from the response of the AmazonS3Client.uploadPart() method in a list. You use the ETag values to complete the multipart upload.

  • Calls the AmazonS3Client.completeMultipartUpload() method to complete the multipart upload.

For instructions on creating and testing a working sample, see Getting Started in the AWS SDK for Java Developer Guide.

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.*;

import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class LowLevelMultipartUpload {

    public static void main(String[] args) throws IOException {
        Regions clientRegion = Regions.DEFAULT_REGION;
        String bucketName = "*** Bucket name ***";
        String keyName = "*** Key name ***";
        String filePath = "*** Path to file to upload ***";

        File file = new File(filePath);
        long contentLength = file.length();
        long partSize = 5 * 1024 * 1024; // Set part size to 5 MB.

        try {
            AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                    .withRegion(clientRegion)
                    .withCredentials(new ProfileCredentialsProvider())
                    .build();

            // Create a list of ETag objects. You retrieve ETags for each object part
            // uploaded,
            // then, after each individual part has been uploaded, pass the list of ETags to
            // the request to complete the upload.
            List<PartETag> partETags = new ArrayList<PartETag>();

            // Initiate the multipart upload.
            InitiateMultipartUploadRequest initRequest = new InitiateMultipartUploadRequest(bucketName, keyName);
            InitiateMultipartUploadResult initResponse = s3Client.initiateMultipartUpload(initRequest);

            // Upload the file parts.
            long filePosition = 0;
            for (int i = 1; filePosition < contentLength; i++) {
                // Because the last part could be less than 5 MB, adjust the part size as
                // needed.
                partSize = Math.min(partSize, (contentLength - filePosition));

                // Create the request to upload a part.
                UploadPartRequest uploadRequest = new UploadPartRequest()
                        .withBucketName(bucketName)
                        .withKey(keyName)
                        .withUploadId(initResponse.getUploadId())
                        .withPartNumber(i)
                        .withFileOffset(filePosition)
                        .withFile(file)
                        .withPartSize(partSize);

                // Upload the part and add the response's ETag to our list.
                UploadPartResult uploadResult = s3Client.uploadPart(uploadRequest);
                partETags.add(uploadResult.getPartETag());

                filePosition += partSize;
            }

            // Complete the multipart upload.
            CompleteMultipartUploadRequest compRequest = new CompleteMultipartUploadRequest(bucketName, keyName,
                    initResponse.getUploadId(), partETags);
            s3Client.completeMultipartUpload(compRequest);
        } catch (AmazonServiceException e) {
            // The call was transmitted successfully, but Amazon S3 couldn't process
            // it, so it returned an error response.
            e.printStackTrace();
        } catch (SdkClientException e) {
            // Amazon S3 couldn't be contacted for a response, or the client
            // couldn't parse the response from Amazon S3.
            e.printStackTrace();
        }
    }
}
.NET

The following C# example shows how to use the low-level AWS SDK for .NET multipart upload API to upload a file to an S3 bucket. For information about Amazon S3 multipart uploads, see Uploading and copying objects using multipart upload.

Note

When you use the AWS SDK for .NET API to upload large objects, a timeout might occur while data is being written to the request stream. You can set an explicit timeout by using the UploadPartRequest.

The following C# example uploads a file to an S3 bucket by using the low-level multipart upload API. For information about setting up and running the code examples, see Getting Started with the AWS SDK for .NET in the AWS SDK for .NET Developer Guide.

using Amazon;
using Amazon.Runtime;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
    class UploadFileMPULowLevelAPITest
    {
        private const string bucketName = "*** provide bucket name ***";
        private const string keyName = "*** provide a name for the uploaded object ***";
        private const string filePath = "*** provide the full path name of the file to upload ***";
        // Specify your bucket region (an example region is shown).
        private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
        private static IAmazonS3 s3Client;

        public static void Main()
        {
            s3Client = new AmazonS3Client(bucketRegion);
            Console.WriteLine("Uploading an object");
            UploadObjectAsync().Wait();
        }

        private static async Task UploadObjectAsync()
        {
            // Create list to store upload part responses.
            List<UploadPartResponse> uploadResponses = new List<UploadPartResponse>();

            // Setup information required to initiate the multipart upload.
            InitiateMultipartUploadRequest initiateRequest = new InitiateMultipartUploadRequest
            {
                BucketName = bucketName,
                Key = keyName
            };

            // Initiate the upload.
            InitiateMultipartUploadResponse initResponse =
                await s3Client.InitiateMultipartUploadAsync(initiateRequest);

            // Upload parts.
            long contentLength = new FileInfo(filePath).Length;
            long partSize = 5 * (long)Math.Pow(2, 20); // 5 MB

            try
            {
                Console.WriteLine("Uploading parts");

                long filePosition = 0;
                for (int i = 1; filePosition < contentLength; i++)
                {
                    UploadPartRequest uploadRequest = new UploadPartRequest
                    {
                        BucketName = bucketName,
                        Key = keyName,
                        UploadId = initResponse.UploadId,
                        PartNumber = i,
                        PartSize = partSize,
                        FilePosition = filePosition,
                        FilePath = filePath
                    };

                    // Track upload progress.
                    uploadRequest.StreamTransferProgress +=
                        new EventHandler<StreamTransferProgressArgs>(UploadPartProgressEventCallback);

                    // Upload a part and add the response to our list.
                    uploadResponses.Add(await s3Client.UploadPartAsync(uploadRequest));

                    filePosition += partSize;
                }

                // Setup to complete the upload.
                CompleteMultipartUploadRequest completeRequest = new CompleteMultipartUploadRequest
                {
                    BucketName = bucketName,
                    Key = keyName,
                    UploadId = initResponse.UploadId
                };
                completeRequest.AddPartETags(uploadResponses);

                // Complete the upload.
                CompleteMultipartUploadResponse completeUploadResponse =
                    await s3Client.CompleteMultipartUploadAsync(completeRequest);
            }
            catch (Exception exception)
            {
                Console.WriteLine("An AmazonS3Exception was thrown: {0}", exception.Message);

                // Abort the upload.
                AbortMultipartUploadRequest abortMPURequest = new AbortMultipartUploadRequest
                {
                    BucketName = bucketName,
                    Key = keyName,
                    UploadId = initResponse.UploadId
                };
                await s3Client.AbortMultipartUploadAsync(abortMPURequest);
            }
        }

        public static void UploadPartProgressEventCallback(object sender, StreamTransferProgressArgs e)
        {
            // Process event.
            Console.WriteLine("{0}/{1}", e.TransferredBytes, e.TotalBytes);
        }
    }
}
PHP

This topic shows how to use the low-level uploadPart method from version 3 of the AWS SDK for PHP to upload a file in multiple parts. For more information about the AWS SDK for PHP, see the AWS SDK for PHP documentation.

The following PHP example uploads a file to an Amazon S3 bucket by using the low-level multipart upload PHP API.

require 'vendor/autoload.php';

use Aws\S3\Exception\S3Exception;
use Aws\S3\S3Client;

$bucket = '*** Your Bucket Name ***';
$keyname = '*** Your Object Key ***';
$filename = '*** Path to and Name of the File to Upload ***';

$s3 = new S3Client([
    'version' => 'latest',
    'region'  => 'us-east-1'
]);

$result = $s3->createMultipartUpload([
    'Bucket'       => $bucket,
    'Key'          => $keyname,
    'StorageClass' => 'REDUCED_REDUNDANCY',
    'Metadata'     => [
        'param1' => 'value 1',
        'param2' => 'value 2',
        'param3' => 'value 3'
    ]
]);
$uploadId = $result['UploadId'];

// Upload the file in parts.
try {
    $file = fopen($filename, 'r');
    $partNumber = 1;
    while (!feof($file)) {
        $result = $s3->uploadPart([
            'Bucket'     => $bucket,
            'Key'        => $keyname,
            'UploadId'   => $uploadId,
            'PartNumber' => $partNumber,
            'Body'       => fread($file, 5 * 1024 * 1024),
        ]);
        $parts['Parts'][$partNumber] = [
            'PartNumber' => $partNumber,
            'ETag'       => $result['ETag'],
        ];
        $partNumber++;

        echo "Uploading part $partNumber of $filename." . PHP_EOL;
    }
    fclose($file);
} catch (S3Exception $e) {
    $result = $s3->abortMultipartUpload([
        'Bucket'   => $bucket,
        'Key'      => $keyname,
        'UploadId' => $uploadId
    ]);

    echo "Upload of $filename failed." . PHP_EOL;
}

// Complete the multipart upload.
$result = $s3->completeMultipartUpload([
    'Bucket'          => $bucket,
    'Key'             => $keyname,
    'UploadId'        => $uploadId,
    'MultipartUpload' => $parts,
]);
$url = $result['Location'];

echo "Uploaded $filename to $url." . PHP_EOL;

Ruby

Version 3 of the AWS SDK for Ruby supports Amazon S3 multipart uploads in two ways. The first option is to use managed file uploads. For more information, see Uploading Files to Amazon S3 in the AWS Developer Blog. Managed file uploads are the recommended method for uploading files to a bucket. They provide the following benefits:

  • Manage multipart uploads for objects larger than 15 MB.

  • Correctly open files in binary mode to avoid encoding issues.

  • Use multiple threads for uploading parts of large objects in parallel.

Alternatively, you can use the following multipart upload client operations directly: