Uploading an object using multipart upload - Amazon Simple Storage Service

Uploading an object using multipart upload

You can use multipart upload to programmatically upload a single object to Amazon S3.

For more information, see the following sections.

The AWS SDK exposes a high-level API, called TransferManager, that simplifies multipart uploads. For more information, see Uploading and copying objects using multipart upload.

You can upload data from a file or a stream. You can also set advanced options, such as the part size that you want to use for the multipart upload, or the number of concurrent threads to use when uploading the parts. You can also set optional object properties, the storage class, or the access control list (ACL). You use the PutObjectRequest and TransferManagerConfiguration classes to set these advanced options.
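
For instance, a minimal sketch of setting a few of these options with the SDK for Java might look like the following. It uses TransferManagerBuilder (rather than TransferManagerConfiguration) to set the part size and multipart threshold; the bucket name, key, file path, user metadata, and the specific sizes shown are placeholder assumptions, not values from this guide.

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.amazonaws.services.s3.model.StorageClass;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.TransferManagerBuilder;

import java.io.File;

public class ConfiguredMultipartUpload {
    public static void main(String[] args) throws Exception {
        AmazonS3 s3Client = AmazonS3ClientBuilder.defaultClient();

        // Configure the part size and the size above which multipart upload is used.
        TransferManager tm = TransferManagerBuilder.standard()
                .withS3Client(s3Client)
                .withMinimumUploadPartSize(10L * 1024 * 1024)      // 10 MB parts (example value).
                .withMultipartUploadThreshold(100L * 1024 * 1024)  // Multipart above 100 MB (example value).
                .build();

        // Set optional object properties and the storage class on the request.
        ObjectMetadata metadata = new ObjectMetadata();
        metadata.addUserMetadata("project", "example");
        PutObjectRequest request = new PutObjectRequest(
                "amzn-s3-demo-bucket", "example-object", new File("/path/to/file"))
                .withMetadata(metadata)
                .withStorageClass(StorageClass.StandardInfrequentAccess);

        // Upload and wait for the transfer to finish.
        tm.upload(request).waitForCompletion();
        tm.shutdownNow();
    }
}

A smaller part size lets more parts of one object upload in parallel, at the cost of more requests per object.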

When possible, TransferManager tries to use multiple threads to upload multiple parts of a single upload at once. When dealing with large content sizes and high bandwidth, this can significantly increase throughput.

In addition to file-upload functionality, the TransferManager class enables you to stop an in-progress multipart upload. An upload is considered in progress after you initiate it and until it completes or is stopped. The TransferManager stops all in-progress multipart uploads on a specified bucket that were initiated before a specified date and time.
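
As a rough illustration, stopping the multipart uploads in a bucket that were initiated before a given point in time might look like the following sketch; the bucket name and the one-hour cutoff are placeholder assumptions.

import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.TransferManagerBuilder;

import java.util.Date;

public class AbortInProgressUploads {
    public static void main(String[] args) {
        TransferManager tm = TransferManagerBuilder.standard().build();

        // Stop every multipart upload in the bucket that was initiated
        // more than an hour before now.
        Date oneHourAgo = new Date(System.currentTimeMillis() - 60 * 60 * 1000);
        tm.abortMultipartUploads("amzn-s3-demo-bucket", oneHourAgo);

        tm.shutdownNow();
    }
}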

When you need to pause and resume multipart uploads, vary part sizes during the upload, or don't know the size of the data in advance, use the low-level API. For more information about multipart uploads, including the additional functionality offered by the low-level API methods, see Using the AWS SDKs (low-level API).

Java

The following example shows how to upload an object using the high-level multipart upload Java API (the TransferManager class). For instructions on creating and testing a working sample, see Testing the Amazon S3 Java Code Examples.

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.TransferManagerBuilder;
import com.amazonaws.services.s3.transfer.Upload;

import java.io.File;

public class HighLevelMultipartUpload {

    public static void main(String[] args) throws Exception {
        Regions clientRegion = Regions.DEFAULT_REGION;
        String bucketName = "*** Bucket name ***";
        String keyName = "*** Object key ***";
        String filePath = "*** Path for file to upload ***";

        try {
            AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                    .withRegion(clientRegion)
                    .withCredentials(new ProfileCredentialsProvider())
                    .build();
            TransferManager tm = TransferManagerBuilder.standard()
                    .withS3Client(s3Client)
                    .build();

            // TransferManager processes all transfers asynchronously,
            // so this call returns immediately.
            Upload upload = tm.upload(bucketName, keyName, new File(filePath));
            System.out.println("Object upload started");

            // Optionally, wait for the upload to finish before continuing.
            upload.waitForCompletion();
            System.out.println("Object upload complete");
        } catch (AmazonServiceException e) {
            // The call was transmitted successfully, but Amazon S3 couldn't process
            // it, so it returned an error response.
            e.printStackTrace();
        } catch (SdkClientException e) {
            // Amazon S3 couldn't be contacted for a response, or the client
            // couldn't parse the response from Amazon S3.
            e.printStackTrace();
        }
    }
}
.NET

To upload a file to an S3 bucket, use the TransferUtility class. When uploading data from a file, you must provide the object's key name. If you don't, the API uses the file name for the key name. When uploading data from a stream, you must provide the object's key name.

To set advanced upload options (such as the part size, the number of threads to use when uploading the parts concurrently, metadata, the storage class, or the ACL), use the TransferUtilityUploadRequest class.

The following C# example uploads a file to an Amazon S3 bucket in multiple parts. It shows how to use various TransferUtility.Upload overloads to upload a file; each successive call to upload replaces the previous upload. For information about the example's compatibility with a specific version of the AWS SDK for .NET, and instructions for creating and testing a working sample, see Running the Amazon S3 .NET Code Examples.

using Amazon;
using Amazon.S3;
using Amazon.S3.Transfer;
using System;
using System.IO;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
    class UploadFileMPUHighLevelAPITest
    {
        private const string bucketName = "*** provide bucket name ***";
        private const string keyName = "*** provide a name for the uploaded object ***";
        private const string filePath = "*** provide the full path name of the file to upload ***";
        // Specify your bucket region (an example region is shown).
        private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
        private static IAmazonS3 s3Client;

        public static void Main()
        {
            s3Client = new AmazonS3Client(bucketRegion);
            UploadFileAsync().Wait();
        }

        private static async Task UploadFileAsync()
        {
            try
            {
                var fileTransferUtility = new TransferUtility(s3Client);

                // Option 1. Upload a file. The file name is used as the object key name.
                await fileTransferUtility.UploadAsync(filePath, bucketName);
                Console.WriteLine("Upload 1 completed");

                // Option 2. Specify object key name explicitly.
                await fileTransferUtility.UploadAsync(filePath, bucketName, keyName);
                Console.WriteLine("Upload 2 completed");

                // Option 3. Upload data from a type of System.IO.Stream.
                using (var fileToUpload = new FileStream(filePath, FileMode.Open, FileAccess.Read))
                {
                    await fileTransferUtility.UploadAsync(fileToUpload, bucketName, keyName);
                }
                Console.WriteLine("Upload 3 completed");

                // Option 4. Specify advanced settings.
                var fileTransferUtilityRequest = new TransferUtilityUploadRequest
                {
                    BucketName = bucketName,
                    FilePath = filePath,
                    StorageClass = S3StorageClass.StandardInfrequentAccess,
                    PartSize = 6291456, // 6 MB.
                    Key = keyName,
                    CannedACL = S3CannedACL.PublicRead
                };
                fileTransferUtilityRequest.Metadata.Add("param1", "Value1");
                fileTransferUtilityRequest.Metadata.Add("param2", "Value2");

                await fileTransferUtility.UploadAsync(fileTransferUtilityRequest);
                Console.WriteLine("Upload 4 completed");
            }
            catch (AmazonS3Exception e)
            {
                Console.WriteLine("Error encountered on server. Message:'{0}' when writing an object", e.Message);
            }
            catch (Exception e)
            {
                Console.WriteLine("Unknown encountered on server. Message:'{0}' when writing an object", e.Message);
            }
        }
    }
}
JavaScript

Upload a large file.

import {
  CreateMultipartUploadCommand,
  UploadPartCommand,
  CompleteMultipartUploadCommand,
  AbortMultipartUploadCommand,
  S3Client,
} from "@aws-sdk/client-s3";

const twentyFiveMB = 25 * 1024 * 1024;

export const createString = (size = twentyFiveMB) => {
  return "x".repeat(size);
};

export const main = async () => {
  const s3Client = new S3Client({});
  const bucketName = "test-bucket";
  const key = "multipart.txt";
  const str = createString();
  const buffer = Buffer.from(str, "utf8");

  let uploadId;

  try {
    const multipartUpload = await s3Client.send(
      new CreateMultipartUploadCommand({
        Bucket: bucketName,
        Key: key,
      }),
    );

    uploadId = multipartUpload.UploadId;

    const uploadPromises = [];
    // Multipart uploads require a minimum size of 5 MB per part.
    const partSize = Math.ceil(buffer.length / 5);

    // Upload each part.
    for (let i = 0; i < 5; i++) {
      const start = i * partSize;
      const end = start + partSize;
      uploadPromises.push(
        s3Client
          .send(
            new UploadPartCommand({
              Bucket: bucketName,
              Key: key,
              UploadId: uploadId,
              Body: buffer.subarray(start, end),
              PartNumber: i + 1,
            }),
          )
          .then((d) => {
            console.log("Part", i + 1, "uploaded");
            return d;
          }),
      );
    }

    const uploadResults = await Promise.all(uploadPromises);

    return await s3Client.send(
      new CompleteMultipartUploadCommand({
        Bucket: bucketName,
        Key: key,
        UploadId: uploadId,
        MultipartUpload: {
          Parts: uploadResults.map(({ ETag }, i) => ({
            ETag,
            PartNumber: i + 1,
          })),
        },
      }),
    );

    // Verify the output by downloading the file from the Amazon Simple Storage Service (Amazon S3) console.
    // Because the output is a 25 MB string, text editors might struggle to open the file.
  } catch (err) {
    console.error(err);

    if (uploadId) {
      const abortCommand = new AbortMultipartUploadCommand({
        Bucket: bucketName,
        Key: key,
        UploadId: uploadId,
      });

      await s3Client.send(abortCommand);
    }
  }
};

Download a large file.

import { GetObjectCommand, S3Client } from "@aws-sdk/client-s3";
import { createWriteStream } from "fs";
import { fileURLToPath } from "url";

const s3Client = new S3Client({});
const oneMB = 1024 * 1024;

export const getObjectRange = ({ bucket, key, start, end }) => {
  const command = new GetObjectCommand({
    Bucket: bucket,
    Key: key,
    Range: `bytes=${start}-${end}`,
  });

  return s3Client.send(command);
};

/**
 * @param {string | undefined} contentRange
 */
export const getRangeAndLength = (contentRange) => {
  const [range, length] = contentRange.split("/");
  const [start, end] = range.split("-");
  return {
    start: parseInt(start),
    end: parseInt(end),
    length: parseInt(length),
  };
};

export const isComplete = ({ end, length }) => end === length - 1;

// When downloading a large file, you might want to break it down into
// smaller pieces. Amazon S3 accepts a Range header to specify the start
// and end of the byte range to be downloaded.
const downloadInChunks = async ({ bucket, key }) => {
  const writeStream = createWriteStream(
    fileURLToPath(new URL(`./${key}`, import.meta.url)),
  ).on("error", (err) => console.error(err));

  let rangeAndLength = { start: -1, end: -1, length: -1 };

  while (!isComplete(rangeAndLength)) {
    const { end } = rangeAndLength;
    const nextRange = { start: end + 1, end: end + oneMB };

    console.log(`Downloading bytes ${nextRange.start} to ${nextRange.end}`);

    const { ContentRange, Body } = await getObjectRange({
      bucket,
      key,
      ...nextRange,
    });

    writeStream.write(await Body.transformToByteArray());
    rangeAndLength = getRangeAndLength(ContentRange);
  }
};

export const main = async () => {
  await downloadInChunks({
    bucket: "my-cool-bucket",
    key: "my-cool-object.txt",
  });
};
Go

Upload a large object by using an upload manager to break the data into parts and upload them concurrently.

// BucketBasics encapsulates the Amazon Simple Storage Service (Amazon S3) actions
// used in the examples.
// It contains S3Client, an Amazon S3 service client that is used to perform bucket
// and object actions.
type BucketBasics struct {
    S3Client *s3.Client
}
// UploadLargeObject uses an upload manager to upload data to an object in a bucket.
// The upload manager breaks large data into parts and uploads the parts concurrently.
func (basics BucketBasics) UploadLargeObject(bucketName string, objectKey string, largeObject []byte) error {
    largeBuffer := bytes.NewReader(largeObject)
    var partMiBs int64 = 10
    uploader := manager.NewUploader(basics.S3Client, func(u *manager.Uploader) {
        u.PartSize = partMiBs * 1024 * 1024
    })
    _, err := uploader.Upload(context.TODO(), &s3.PutObjectInput{
        Bucket: aws.String(bucketName),
        Key:    aws.String(objectKey),
        Body:   largeBuffer,
    })
    if err != nil {
        log.Printf("Couldn't upload large object to %v:%v. Here's why: %v\n",
            bucketName, objectKey, err)
    }

    return err
}

Download a large object by using a download manager to get the data in parts and download them concurrently.

// DownloadLargeObject uses a download manager to download an object from a bucket.
// The download manager gets the data in parts and writes them to a buffer until all of
// the data has been downloaded.
func (basics BucketBasics) DownloadLargeObject(bucketName string, objectKey string) ([]byte, error) {
    var partMiBs int64 = 10
    downloader := manager.NewDownloader(basics.S3Client, func(d *manager.Downloader) {
        d.PartSize = partMiBs * 1024 * 1024
    })
    buffer := manager.NewWriteAtBuffer([]byte{})
    _, err := downloader.Download(context.TODO(), buffer, &s3.GetObjectInput{
        Bucket: aws.String(bucketName),
        Key:    aws.String(objectKey),
    })
    if err != nil {
        log.Printf("Couldn't download large object from %v:%v. Here's why: %v\n",
            bucketName, objectKey, err)
    }

    return buffer.Bytes(), err
}
PHP

This topic explains how to use the high-level Aws\S3\MultipartUploader class from the AWS SDK for PHP for multipart file uploads. It assumes that you are already following the instructions for Using the AWS SDK for PHP and Running PHP Examples and have the AWS SDK for PHP properly installed.

The following PHP example uploads a file to an Amazon S3 bucket. The example demonstrates how to set parameters for the MultipartUploader object.

For information about running the PHP examples in this guide, see Running PHP Examples.

require 'vendor/autoload.php';

use Aws\Exception\MultipartUploadException;
use Aws\S3\MultipartUploader;
use Aws\S3\S3Client;

$bucket = '*** Your Bucket Name ***';
$keyname = '*** Your Object Key ***';

$s3 = new S3Client([
    'version' => 'latest',
    'region'  => 'us-east-1'
]);

// Prepare the upload parameters.
$uploader = new MultipartUploader($s3, '/path/to/large/file.zip', [
    'bucket' => $bucket,
    'key'    => $keyname
]);

// Perform the upload.
try {
    $result = $uploader->upload();
    echo "Upload complete: {$result['ObjectURL']}" . PHP_EOL;
} catch (MultipartUploadException $e) {
    echo $e->getMessage() . PHP_EOL;
}
Python

The following example loads an object using the high-level multipart upload Python API (the TransferManager class).

import sys
import threading

import boto3
from boto3.s3.transfer import TransferConfig

MB = 1024 * 1024
s3 = boto3.resource("s3")


class TransferCallback:
    """
    Handle callbacks from the transfer manager.

    The transfer manager periodically calls the __call__ method throughout
    the upload and download process so that it can take action, such as
    displaying progress to the user and collecting data about the transfer.
    """

    def __init__(self, target_size):
        self._target_size = target_size
        self._total_transferred = 0
        self._lock = threading.Lock()
        self.thread_info = {}

    def __call__(self, bytes_transferred):
        """
        The callback method that is called by the transfer manager.

        Display progress during file transfer and collect per-thread transfer
        data. This method can be called by multiple threads, so shared instance
        data is protected by a thread lock.
        """
        thread = threading.current_thread()
        with self._lock:
            self._total_transferred += bytes_transferred
            if thread.ident not in self.thread_info.keys():
                self.thread_info[thread.ident] = bytes_transferred
            else:
                self.thread_info[thread.ident] += bytes_transferred

            target = self._target_size * MB
            sys.stdout.write(
                f"\r{self._total_transferred} of {target} transferred "
                f"({(self._total_transferred / target) * 100:.2f}%)."
            )
            sys.stdout.flush()


def upload_with_default_configuration(
    local_file_path, bucket_name, object_key, file_size_mb
):
    """
    Upload a file from a local folder to an Amazon S3 bucket, using the
    default configuration.
    """
    transfer_callback = TransferCallback(file_size_mb)
    s3.Bucket(bucket_name).upload_file(
        local_file_path, object_key, Callback=transfer_callback
    )
    return transfer_callback.thread_info


def upload_with_chunksize_and_meta(
    local_file_path, bucket_name, object_key, file_size_mb, metadata=None
):
    """
    Upload a file from a local folder to an Amazon S3 bucket, setting a
    multipart chunk size and adding metadata to the Amazon S3 object.

    The multipart chunk size controls the size of the chunks of data that are
    sent in the request. A smaller chunk size typically results in the transfer
    manager using more threads for the upload.

    The metadata is a set of key-value pairs that are stored with the object
    in Amazon S3.
    """
    transfer_callback = TransferCallback(file_size_mb)

    config = TransferConfig(multipart_chunksize=1 * MB)
    extra_args = {"Metadata": metadata} if metadata else None
    s3.Bucket(bucket_name).upload_file(
        local_file_path,
        object_key,
        Config=config,
        ExtraArgs=extra_args,
        Callback=transfer_callback,
    )
    return transfer_callback.thread_info


def upload_with_high_threshold(local_file_path, bucket_name, object_key, file_size_mb):
    """
    Upload a file from a local folder to an Amazon S3 bucket, setting a
    multipart threshold larger than the size of the file.

    Setting a multipart threshold larger than the size of the file results
    in the transfer manager sending the file as a standard upload instead of
    a multipart upload.
    """
    transfer_callback = TransferCallback(file_size_mb)
    config = TransferConfig(multipart_threshold=file_size_mb * 2 * MB)
    s3.Bucket(bucket_name).upload_file(
        local_file_path, object_key, Config=config, Callback=transfer_callback
    )
    return transfer_callback.thread_info


def upload_with_sse(
    local_file_path, bucket_name, object_key, file_size_mb, sse_key=None
):
    """
    Upload a file from a local folder to an Amazon S3 bucket, adding server-side
    encryption with customer-provided encryption keys to the object.

    When this kind of encryption is specified, Amazon S3 encrypts the object
    at rest and allows downloads only when the expected encryption key is
    provided in the download request.
    """
    transfer_callback = TransferCallback(file_size_mb)
    if sse_key:
        extra_args = {"SSECustomerAlgorithm": "AES256", "SSECustomerKey": sse_key}
    else:
        extra_args = None
    s3.Bucket(bucket_name).upload_file(
        local_file_path, object_key, ExtraArgs=extra_args, Callback=transfer_callback
    )
    return transfer_callback.thread_info


def download_with_default_configuration(
    bucket_name, object_key, download_file_path, file_size_mb
):
    """
    Download a file from an Amazon S3 bucket to a local folder, using the
    default configuration.
    """
    transfer_callback = TransferCallback(file_size_mb)
    s3.Bucket(bucket_name).Object(object_key).download_file(
        download_file_path, Callback=transfer_callback
    )
    return transfer_callback.thread_info


def download_with_single_thread(
    bucket_name, object_key, download_file_path, file_size_mb
):
    """
    Download a file from an Amazon S3 bucket to a local folder, using a
    single thread.
    """
    transfer_callback = TransferCallback(file_size_mb)
    config = TransferConfig(use_threads=False)
    s3.Bucket(bucket_name).Object(object_key).download_file(
        download_file_path, Config=config, Callback=transfer_callback
    )
    return transfer_callback.thread_info


def download_with_high_threshold(
    bucket_name, object_key, download_file_path, file_size_mb
):
    """
    Download a file from an Amazon S3 bucket to a local folder, setting a
    multipart threshold larger than the size of the file.

    Setting a multipart threshold larger than the size of the file results
    in the transfer manager sending the file as a standard download instead
    of a multipart download.
    """
    transfer_callback = TransferCallback(file_size_mb)
    config = TransferConfig(multipart_threshold=file_size_mb * 2 * MB)
    s3.Bucket(bucket_name).Object(object_key).download_file(
        download_file_path, Config=config, Callback=transfer_callback
    )
    return transfer_callback.thread_info


def download_with_sse(
    bucket_name, object_key, download_file_path, file_size_mb, sse_key
):
    """
    Download a file from an Amazon S3 bucket to a local folder, adding a
    customer-provided encryption key to the request.

    When this kind of encryption is specified, Amazon S3 encrypts the object
    at rest and allows downloads only when the expected encryption key is
    provided in the download request.
    """
    transfer_callback = TransferCallback(file_size_mb)
    if sse_key:
        extra_args = {"SSECustomerAlgorithm": "AES256", "SSECustomerKey": sse_key}
    else:
        extra_args = None
    s3.Bucket(bucket_name).Object(object_key).download_file(
        download_file_path, ExtraArgs=extra_args, Callback=transfer_callback
    )
    return transfer_callback.thread_info

The AWS SDKs expose a low-level API that closely resembles the Amazon S3 REST API for multipart uploads (see Uploading and copying objects using multipart upload). Use the low-level API when you need to pause and resume multipart uploads, vary part sizes during the upload, or don't know the size of the upload data in advance. When you don't have those requirements, use the high-level API (see Using the AWS SDKs (high-level API)).

Java

The following example shows how to use the low-level Java classes to upload a file. It performs the following steps:

  • Initiates a multipart upload by using the AmazonS3Client.initiateMultipartUpload() method, passing in an InitiateMultipartUploadRequest object.

  • Saves the upload ID that the AmazonS3Client.initiateMultipartUpload() method returns. You provide this upload ID for each subsequent multipart upload operation.

  • Uploads the parts of the object. For each part, you call the AmazonS3Client.uploadPart() method. You provide part upload information in an UploadPartRequest object.

  • For each part, saves the ETag from the AmazonS3Client.uploadPart() method's response in a list. You use the ETag values to complete the multipart upload.

  • Calls the AmazonS3Client.completeMultipartUpload() method to complete the multipart upload.

For instructions on creating and testing a working sample, see Testing the Amazon S3 Java Code Examples.

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.*;

import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class LowLevelMultipartUpload {

    public static void main(String[] args) throws IOException {
        Regions clientRegion = Regions.DEFAULT_REGION;
        String bucketName = "*** Bucket name ***";
        String keyName = "*** Key name ***";
        String filePath = "*** Path to file to upload ***";

        File file = new File(filePath);
        long contentLength = file.length();
        long partSize = 5 * 1024 * 1024; // Set part size to 5 MB.

        try {
            AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                    .withRegion(clientRegion)
                    .withCredentials(new ProfileCredentialsProvider())
                    .build();

            // Create a list of ETag objects. You retrieve ETags for each object part
            // uploaded, then, after each individual part has been uploaded, pass the
            // list of ETags to the request to complete the upload.
            List<PartETag> partETags = new ArrayList<PartETag>();

            // Initiate the multipart upload.
            InitiateMultipartUploadRequest initRequest = new InitiateMultipartUploadRequest(bucketName, keyName);
            InitiateMultipartUploadResult initResponse = s3Client.initiateMultipartUpload(initRequest);

            // Upload the file parts.
            long filePosition = 0;
            for (int i = 1; filePosition < contentLength; i++) {
                // Because the last part could be less than 5 MB, adjust the part size as needed.
                partSize = Math.min(partSize, (contentLength - filePosition));

                // Create the request to upload a part.
                UploadPartRequest uploadRequest = new UploadPartRequest()
                        .withBucketName(bucketName)
                        .withKey(keyName)
                        .withUploadId(initResponse.getUploadId())
                        .withPartNumber(i)
                        .withFileOffset(filePosition)
                        .withFile(file)
                        .withPartSize(partSize);

                // Upload the part and add the response's ETag to our list.
                UploadPartResult uploadResult = s3Client.uploadPart(uploadRequest);
                partETags.add(uploadResult.getPartETag());

                filePosition += partSize;
            }

            // Complete the multipart upload.
            CompleteMultipartUploadRequest compRequest = new CompleteMultipartUploadRequest(bucketName, keyName,
                    initResponse.getUploadId(), partETags);
            s3Client.completeMultipartUpload(compRequest);
        } catch (AmazonServiceException e) {
            // The call was transmitted successfully, but Amazon S3 couldn't process
            // it, so it returned an error response.
            e.printStackTrace();
        } catch (SdkClientException e) {
            // Amazon S3 couldn't be contacted for a response, or the client
            // couldn't parse the response from Amazon S3.
            e.printStackTrace();
        }
    }
}
.NET

The following C# example shows how to use the low-level AWS SDK for .NET multipart upload API to upload a file to an S3 bucket. For information about Amazon S3 multipart uploads, see Uploading and copying objects using multipart upload.

Note

When you use the AWS SDK for .NET API to upload large objects, a timeout might occur while data is being written to the request stream. You can set an explicit timeout using the UploadPartRequest.

The following C# example uploads a file to an S3 bucket using the low-level multipart upload API. For information about the example's compatibility with a specific version of the AWS SDK for .NET, and instructions for creating and testing a working sample, see Running the Amazon S3 .NET Code Examples.

using Amazon;
using Amazon.Runtime;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
    class UploadFileMPULowLevelAPITest
    {
        private const string bucketName = "*** provide bucket name ***";
        private const string keyName = "*** provide a name for the uploaded object ***";
        private const string filePath = "*** provide the full path name of the file to upload ***";
        // Specify your bucket region (an example region is shown).
        private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
        private static IAmazonS3 s3Client;

        public static void Main()
        {
            s3Client = new AmazonS3Client(bucketRegion);
            Console.WriteLine("Uploading an object");
            UploadObjectAsync().Wait();
        }

        private static async Task UploadObjectAsync()
        {
            // Create list to store upload part responses.
            List<UploadPartResponse> uploadResponses = new List<UploadPartResponse>();

            // Setup information required to initiate the multipart upload.
            InitiateMultipartUploadRequest initiateRequest = new InitiateMultipartUploadRequest
            {
                BucketName = bucketName,
                Key = keyName
            };

            // Initiate the upload.
            InitiateMultipartUploadResponse initResponse =
                await s3Client.InitiateMultipartUploadAsync(initiateRequest);

            // Upload parts.
            long contentLength = new FileInfo(filePath).Length;
            long partSize = 5 * (long)Math.Pow(2, 20); // 5 MB

            try
            {
                Console.WriteLine("Uploading parts");

                long filePosition = 0;
                for (int i = 1; filePosition < contentLength; i++)
                {
                    UploadPartRequest uploadRequest = new UploadPartRequest
                    {
                        BucketName = bucketName,
                        Key = keyName,
                        UploadId = initResponse.UploadId,
                        PartNumber = i,
                        PartSize = partSize,
                        FilePosition = filePosition,
                        FilePath = filePath
                    };

                    // Track upload progress.
                    uploadRequest.StreamTransferProgress +=
                        new EventHandler<StreamTransferProgressArgs>(UploadPartProgressEventCallback);

                    // Upload a part and add the response to our list.
                    uploadResponses.Add(await s3Client.UploadPartAsync(uploadRequest));

                    filePosition += partSize;
                }

                // Setup to complete the upload.
                CompleteMultipartUploadRequest completeRequest = new CompleteMultipartUploadRequest
                {
                    BucketName = bucketName,
                    Key = keyName,
                    UploadId = initResponse.UploadId
                };
                completeRequest.AddPartETags(uploadResponses);

                // Complete the upload.
                CompleteMultipartUploadResponse completeUploadResponse =
                    await s3Client.CompleteMultipartUploadAsync(completeRequest);
            }
            catch (Exception exception)
            {
                Console.WriteLine("An AmazonS3Exception was thrown: {0}", exception.Message);

                // Abort the upload.
                AbortMultipartUploadRequest abortMPURequest = new AbortMultipartUploadRequest
                {
                    BucketName = bucketName,
                    Key = keyName,
                    UploadId = initResponse.UploadId
                };
                await s3Client.AbortMultipartUploadAsync(abortMPURequest);
            }
        }

        public static void UploadPartProgressEventCallback(object sender, StreamTransferProgressArgs e)
        {
            // Process event.
            Console.WriteLine("{0}/{1}", e.TransferredBytes, e.TotalBytes);
        }
    }
}
PHP

This topic shows how to use the low-level uploadPart method from version 3 of the AWS SDK for PHP to upload a file in multiple parts. It assumes that you are already following the instructions for Using the AWS SDK for PHP and Running PHP Examples and have the AWS SDK for PHP properly installed.

The following PHP example uploads a file to an Amazon S3 bucket using the low-level PHP API multipart upload. For information about running the PHP examples in this guide, see Running PHP Examples.

require 'vendor/autoload.php';

use Aws\S3\Exception\S3Exception;
use Aws\S3\S3Client;

$bucket = '*** Your Bucket Name ***';
$keyname = '*** Your Object Key ***';
$filename = '*** Path to and Name of the File to Upload ***';

$s3 = new S3Client([
    'version' => 'latest',
    'region'  => 'us-east-1'
]);

$result = $s3->createMultipartUpload([
    'Bucket'       => $bucket,
    'Key'          => $keyname,
    'StorageClass' => 'REDUCED_REDUNDANCY',
    'Metadata'     => [
        'param1' => 'value 1',
        'param2' => 'value 2',
        'param3' => 'value 3'
    ]
]);
$uploadId = $result['UploadId'];

// Upload the file in parts.
try {
    $file = fopen($filename, 'r');
    $partNumber = 1;
    while (!feof($file)) {
        $result = $s3->uploadPart([
            'Bucket'     => $bucket,
            'Key'        => $keyname,
            'UploadId'   => $uploadId,
            'PartNumber' => $partNumber,
            'Body'       => fread($file, 5 * 1024 * 1024),
        ]);
        $parts['Parts'][$partNumber] = [
            'PartNumber' => $partNumber,
            'ETag'       => $result['ETag'],
        ];
        echo "Uploading part $partNumber of $filename." . PHP_EOL;
        $partNumber++;
    }
    fclose($file);
} catch (S3Exception $e) {
    $result = $s3->abortMultipartUpload([
        'Bucket'   => $bucket,
        'Key'      => $keyname,
        'UploadId' => $uploadId
    ]);

    echo "Upload of $filename failed." . PHP_EOL;
}

// Complete the multipart upload.
$result = $s3->completeMultipartUpload([
    'Bucket'          => $bucket,
    'Key'             => $keyname,
    'UploadId'        => $uploadId,
    'MultipartUpload' => $parts,
]);
$url = $result['Location'];

echo "Uploaded $filename to $url." . PHP_EOL;

Version 3 of the AWS SDK for Ruby supports Amazon S3 multipart uploads in two ways. For the first option, you can use managed file uploads. For more information, see Uploading Files to Amazon S3 in the AWS Developer Blog. Managed file uploads are the recommended method for uploading files to a bucket. They provide the following benefits:

  • Manage multipart uploads for objects larger than 15 MB.

  • Correctly open files in binary mode to avoid encoding issues.

  • Use multiple threads for uploading parts of large objects in parallel.

Alternatively, you can use the multipart upload client operations directly: create_multipart_upload, upload_part, upload_part_copy, complete_multipart_upload, and abort_multipart_upload.

For more information, see Using the AWS SDK for Ruby - Version 3.

The following sections in the Amazon Simple Storage Service API Reference describe the REST API for multipart upload: CreateMultipartUpload, UploadPart, UploadPartCopy, CompleteMultipartUpload, AbortMultipartUpload, ListParts, and ListMultipartUploads.

The following operations in the AWS Command Line Interface (AWS CLI) describe multipart upload: create-multipart-upload, upload-part, upload-part-copy, complete-multipart-upload, abort-multipart-upload, list-parts, and list-multipart-uploads.
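
As a bare-bones sketch, the CLI flow with the s3api commands might look like the following; the bucket name, key, part file, and parts.json manifest are placeholder assumptions, and a real upload would repeat upload-part for every part and record the ETag that each call returns.

# Start the upload; the response includes an UploadId to pass to every later call.
aws s3api create-multipart-upload --bucket amzn-s3-demo-bucket --key large-object

# Upload each part, noting the ETag returned for each one.
aws s3api upload-part --bucket amzn-s3-demo-bucket --key large-object \
    --part-number 1 --upload-id "ExampleUploadId" --body part01

# parts.json lists each PartNumber with the ETag that upload-part returned.
aws s3api complete-multipart-upload --bucket amzn-s3-demo-bucket --key large-object \
    --upload-id "ExampleUploadId" --multipart-upload file://parts.json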

You can use these APIs to make your own REST requests, or you can use one of the AWS SDKs. For more information about the REST API, see Using the REST API. For more information about the SDKs, see Uploading an object using multipart upload.