
More AWS SDK examples are available in the AWS Doc SDK Examples GitHub repo.


Use PutObject with an AWS SDK or CLI

The following code examples show how to use PutObject.

Action examples are code excerpts from larger programs and must be run in context. You can see this action in context in the following code examples:

.NET
AWS SDK for .NET
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

/// <summary>
/// Shows how to upload a file from the local computer to an Amazon S3
/// bucket.
/// </summary>
/// <param name="client">An initialized Amazon S3 client object.</param>
/// <param name="bucketName">The Amazon S3 bucket to which the object
/// will be uploaded.</param>
/// <param name="objectName">The object to upload.</param>
/// <param name="filePath">The path, including file name, of the object
/// on the local computer to upload.</param>
/// <returns>A boolean value indicating the success or failure of the
/// upload procedure.</returns>
public static async Task<bool> UploadFileAsync(
    IAmazonS3 client,
    string bucketName,
    string objectName,
    string filePath)
{
    var request = new PutObjectRequest
    {
        BucketName = bucketName,
        Key = objectName,
        FilePath = filePath,
    };

    var response = await client.PutObjectAsync(request);
    if (response.HttpStatusCode == System.Net.HttpStatusCode.OK)
    {
        Console.WriteLine($"Successfully uploaded {objectName} to {bucketName}.");
        return true;
    }
    else
    {
        Console.WriteLine($"Could not upload {objectName} to {bucketName}.");
        return false;
    }
}

Upload an object with server-side encryption.

using System;
using System.Threading.Tasks;
using Amazon.S3;
using Amazon.S3.Model;

/// <summary>
/// This example shows how to upload an object to an Amazon Simple Storage
/// Service (Amazon S3) bucket with server-side encryption enabled.
/// </summary>
public class ServerSideEncryption
{
    public static async Task Main()
    {
        string bucketName = "amzn-s3-demo-bucket";
        string keyName = "samplefile.txt";

        // If the AWS Region defined for your default user is different
        // from the Region where your Amazon S3 bucket is located,
        // pass the Region name to the Amazon S3 client object's constructor.
        // For example: RegionEndpoint.USWest2.
        IAmazonS3 client = new AmazonS3Client();

        await WritingAnObjectAsync(client, bucketName, keyName);
    }

    /// <summary>
    /// Upload a sample object, including a setting for encryption.
    /// </summary>
    /// <param name="client">The initialized Amazon S3 client object used
    /// to upload a file and apply server-side encryption.</param>
    /// <param name="bucketName">The name of the Amazon S3 bucket where the
    /// encrypted object will reside.</param>
    /// <param name="keyName">The name for the object that you want to
    /// create in the supplied bucket.</param>
    public static async Task WritingAnObjectAsync(IAmazonS3 client, string bucketName, string keyName)
    {
        try
        {
            var putRequest = new PutObjectRequest
            {
                BucketName = bucketName,
                Key = keyName,
                ContentBody = "sample text",
                ServerSideEncryptionMethod = ServerSideEncryptionMethod.AES256,
            };

            var putResponse = await client.PutObjectAsync(putRequest);

            // Determine the encryption state of an object.
            GetObjectMetadataRequest metadataRequest = new GetObjectMetadataRequest
            {
                BucketName = bucketName,
                Key = keyName,
            };

            GetObjectMetadataResponse response = await client.GetObjectMetadataAsync(metadataRequest);
            ServerSideEncryptionMethod objectEncryption = response.ServerSideEncryptionMethod;

            Console.WriteLine($"Encryption method used: {objectEncryption}");
        }
        catch (AmazonS3Exception ex)
        {
            Console.WriteLine($"Error: '{ex.Message}' when writing an object");
        }
    }
}
  • For API details, see PutObject in the AWS SDK for .NET API Reference.

Bash
AWS CLI with Bash script
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

###############################################################################
# function errecho
#
# This function outputs everything sent to it to STDERR (standard error output).
###############################################################################
function errecho() {
  printf "%s\n" "$*" 1>&2
}

###############################################################################
# function copy_file_to_bucket
#
# This function creates a file in the specified bucket.
#
# Parameters:
#       $1 - The name of the bucket to copy the file to.
#       $2 - The path and file name of the local file to copy to the bucket.
#       $3 - The key (name) to call the copy of the file in the bucket.
#
# Returns:
#       0 - If successful.
#       1 - If it fails.
###############################################################################
function copy_file_to_bucket() {
  local response bucket_name source_file destination_file_name
  bucket_name=$1
  source_file=$2
  destination_file_name=$3

  response=$(aws s3api put-object \
    --bucket "$bucket_name" \
    --body "$source_file" \
    --key "$destination_file_name")

  # shellcheck disable=SC2181
  if [[ ${?} -ne 0 ]]; then
    errecho "ERROR: AWS reports put-object operation failed.\n$response"
    return 1
  fi
}
  • For API details, see PutObject in the AWS CLI Command Reference.

C++
SDK for C++
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

bool AwsDoc::S3::putObject(const Aws::String &bucketName,
                           const Aws::String &fileName,
                           const Aws::S3::S3ClientConfiguration &clientConfig) {
    Aws::S3::S3Client s3Client(clientConfig);

    Aws::S3::Model::PutObjectRequest request;
    request.SetBucket(bucketName);
    //We are using the name of the file as the key for the object in the bucket.
    //However, this is just a string and can be set according to your retrieval needs.
    request.SetKey(fileName);

    std::shared_ptr<Aws::IOStream> inputData =
            Aws::MakeShared<Aws::FStream>("SampleAllocationTag",
                                          fileName.c_str(),
                                          std::ios_base::in | std::ios_base::binary);

    if (!*inputData) {
        std::cerr << "Error unable to read file " << fileName << std::endl;
        return false;
    }

    request.SetBody(inputData);

    Aws::S3::Model::PutObjectOutcome outcome =
            s3Client.PutObject(request);

    if (!outcome.IsSuccess()) {
        std::cerr << "Error: putObject: " <<
                  outcome.GetError().GetMessage() << std::endl;
    }
    else {
        std::cout << "Added object '" << fileName << "' to bucket '"
                  << bucketName << "'.";
    }

    return outcome.IsSuccess();
}
  • For API details, see PutObject in the AWS SDK for C++ API Reference.

CLI
AWS CLI

The following example uses the put-object command to upload an object to Amazon S3:

aws s3api put-object --bucket text-content --key dir-1/my_images.tar.bz2 --body my_images.tar.bz2

The following example shows the upload of a video file (the video file is specified using Windows file system syntax):

aws s3api put-object --bucket text-content --key dir-1/big-video-file.mp4 --body e:\media\videos\f-sharp-3-data-services.mp4

For more information about uploading objects, see Uploading Objects in the Amazon S3 Developer Guide.

Go
SDK for Go V2
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Put an object into a bucket using the low-level API.

import (
	"bytes"
	"context"
	"errors"
	"fmt"
	"io"
	"log"
	"os"
	"time"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/feature/s3/manager"
	"github.com/aws/aws-sdk-go-v2/service/s3"
	"github.com/aws/aws-sdk-go-v2/service/s3/types"
	"github.com/aws/smithy-go"
)

// BucketBasics encapsulates the Amazon Simple Storage Service (Amazon S3) actions
// used in the examples.
// It contains S3Client, an Amazon S3 service client that is used to perform bucket
// and object actions.
type BucketBasics struct {
	S3Client *s3.Client
}

// UploadFile reads from a file and puts the data into an object in a bucket.
func (basics BucketBasics) UploadFile(ctx context.Context, bucketName string, objectKey string, fileName string) error {
	file, err := os.Open(fileName)
	if err != nil {
		log.Printf("Couldn't open file %v to upload. Here's why: %v\n", fileName, err)
	} else {
		defer file.Close()
		_, err = basics.S3Client.PutObject(ctx, &s3.PutObjectInput{
			Bucket: aws.String(bucketName),
			Key:    aws.String(objectKey),
			Body:   file,
		})
		if err != nil {
			var apiErr smithy.APIError
			if errors.As(err, &apiErr) && apiErr.ErrorCode() == "EntityTooLarge" {
				log.Printf("Error while uploading object to %s. The object is too large.\n"+
					"To upload objects larger than 5GB, use the S3 console (160GB max)\n"+
					"or the multipart upload API (5TB max).", bucketName)
			} else {
				log.Printf("Couldn't upload file %v to %v:%v. Here's why: %v\n",
					fileName, bucketName, objectKey, err)
			}
		} else {
			err = s3.NewObjectExistsWaiter(basics.S3Client).Wait(
				ctx, &s3.HeadObjectInput{Bucket: aws.String(bucketName), Key: aws.String(objectKey)}, time.Minute)
			if err != nil {
				log.Printf("Failed attempt to wait for object %s to exist.\n", objectKey)
			}
		}
	}
	return err
}

Upload an object to a bucket using a transfer manager.

import (
	"bytes"
	"context"
	"errors"
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/feature/s3/manager"
	"github.com/aws/aws-sdk-go-v2/service/s3"
	"github.com/aws/aws-sdk-go-v2/service/s3/types"
	"github.com/aws/smithy-go"
)

// S3Actions wraps S3 service actions.
type S3Actions struct {
	S3Client  *s3.Client
	S3Manager *manager.Uploader
}

// UploadObject uses the S3 upload manager to upload an object to a bucket.
func (actor S3Actions) UploadObject(ctx context.Context, bucket string, key string, contents string) (string, error) {
	var outKey string
	input := &s3.PutObjectInput{
		Bucket:            aws.String(bucket),
		Key:               aws.String(key),
		Body:              bytes.NewReader([]byte(contents)),
		ChecksumAlgorithm: types.ChecksumAlgorithmSha256,
	}
	output, err := actor.S3Manager.Upload(ctx, input)
	if err != nil {
		var noBucket *types.NoSuchBucket
		if errors.As(err, &noBucket) {
			log.Printf("Bucket %s does not exist.\n", bucket)
			err = noBucket
		}
	} else {
		err := s3.NewObjectExistsWaiter(actor.S3Client).Wait(ctx, &s3.HeadObjectInput{
			Bucket: aws.String(bucket),
			Key:    aws.String(key),
		}, time.Minute)
		if err != nil {
			log.Printf("Failed attempt to wait for object %s to exist in %s.\n", key, bucket)
		} else {
			outKey = *output.Key
		}
	}
	return outKey, err
}
  • For API details, see PutObject in the AWS SDK for Go API Reference.

Java
SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Upload a file to a bucket using an S3Client.

/**
 * Uploads a local file to an AWS S3 bucket asynchronously.
 *
 * @param bucketName the name of the S3 bucket to upload the file to
 * @param key        the key (object name) to use for the uploaded file
 * @param objectPath the local file path of the file to be uploaded
 * @return a {@link CompletableFuture} that completes with the {@link PutObjectResponse} when the upload is successful,
 *         or throws a {@link RuntimeException} if the upload fails
 */
public CompletableFuture<PutObjectResponse> uploadLocalFileAsync(String bucketName, String key, String objectPath) {
    PutObjectRequest objectRequest = PutObjectRequest.builder()
        .bucket(bucketName)
        .key(key)
        .build();

    CompletableFuture<PutObjectResponse> response = getAsyncClient().putObject(objectRequest,
        AsyncRequestBody.fromFile(Paths.get(objectPath)));
    return response.whenComplete((resp, ex) -> {
        if (ex != null) {
            throw new RuntimeException("Failed to upload file", ex);
        }
    });
}

Use an S3 TransferManager to upload a file to a bucket. View the complete file and test.

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import software.amazon.awssdk.transfer.s3.S3TransferManager;
import software.amazon.awssdk.transfer.s3.model.CompletedFileUpload;
import software.amazon.awssdk.transfer.s3.model.FileUpload;
import software.amazon.awssdk.transfer.s3.model.UploadFileRequest;
import software.amazon.awssdk.transfer.s3.progress.LoggingTransferListener;
import java.net.URI;
import java.net.URISyntaxException;
import java.net.URL;
import java.nio.file.Paths;
import java.util.UUID;

public String uploadFile(S3TransferManager transferManager, String bucketName,
                         String key, URI filePathURI) {
    UploadFileRequest uploadFileRequest = UploadFileRequest.builder()
        .putObjectRequest(b -> b.bucket(bucketName).key(key))
        .source(Paths.get(filePathURI))
        .build();

    FileUpload fileUpload = transferManager.uploadFile(uploadFileRequest);

    CompletedFileUpload uploadResult = fileUpload.completionFuture().join();
    return uploadResult.response().eTag();
}

Upload an object to a bucket and set tags using an S3Client.

/**
 * Puts tags on an Amazon S3 object.
 *
 * @param s3         An {@link S3Client} object that represents the Amazon S3 client.
 * @param bucketName The name of the Amazon S3 bucket.
 * @param objectKey  The key of the Amazon S3 object.
 * @param objectPath The file path of the object to be uploaded.
 */
public static void putS3ObjectTags(S3Client s3, String bucketName, String objectKey, String objectPath) {
    try {
        Tag tag1 = Tag.builder()
            .key("Tag 1")
            .value("This is tag 1")
            .build();

        Tag tag2 = Tag.builder()
            .key("Tag 2")
            .value("This is tag 2")
            .build();

        List<Tag> tags = new ArrayList<>();
        tags.add(tag1);
        tags.add(tag2);

        Tagging allTags = Tagging.builder()
            .tagSet(tags)
            .build();

        PutObjectRequest putOb = PutObjectRequest.builder()
            .bucket(bucketName)
            .key(objectKey)
            .tagging(allTags)
            .build();

        s3.putObject(putOb, RequestBody.fromBytes(getObjectFile(objectPath)));

    } catch (S3Exception e) {
        System.err.println(e.getMessage());
        System.exit(1);
    }
}

/**
 * Updates the tags associated with an object in an Amazon S3 bucket.
 *
 * @param s3         an instance of the S3Client class, which is used to interact with the Amazon S3 service
 * @param bucketName the name of the S3 bucket containing the object
 * @param objectKey  the key (or name) of the object in the S3 bucket
 * @throws S3Exception if there is an error updating the object's tags
 */
public static void updateObjectTags(S3Client s3, String bucketName, String objectKey) {
    try {
        GetObjectTaggingRequest taggingRequest = GetObjectTaggingRequest.builder()
            .bucket(bucketName)
            .key(objectKey)
            .build();

        GetObjectTaggingResponse getTaggingRes = s3.getObjectTagging(taggingRequest);
        List<Tag> obTags = getTaggingRes.tagSet();
        for (Tag sinTag : obTags) {
            System.out.println("The tag key is: " + sinTag.key());
            System.out.println("The tag value is: " + sinTag.value());
        }

        // Replace the object's tags with two new tags.
        Tag tag3 = Tag.builder()
            .key("Tag 3")
            .value("This is tag 3")
            .build();

        Tag tag4 = Tag.builder()
            .key("Tag 4")
            .value("This is tag 4")
            .build();

        List<Tag> tags = new ArrayList<>();
        tags.add(tag3);
        tags.add(tag4);

        Tagging updatedTags = Tagging.builder()
            .tagSet(tags)
            .build();

        PutObjectTaggingRequest taggingRequest1 = PutObjectTaggingRequest.builder()
            .bucket(bucketName)
            .key(objectKey)
            .tagging(updatedTags)
            .build();

        s3.putObjectTagging(taggingRequest1);
        GetObjectTaggingResponse getTaggingRes2 = s3.getObjectTagging(taggingRequest);
        List<Tag> modTags = getTaggingRes2.tagSet();
        for (Tag sinTag : modTags) {
            System.out.println("The tag key is: " + sinTag.key());
            System.out.println("The tag value is: " + sinTag.value());
        }

    } catch (S3Exception e) {
        System.err.println(e.getMessage());
        System.exit(1);
    }
}

/**
 * Retrieves the contents of a file as a byte array.
 *
 * @param filePath the path of the file to be read
 * @return a byte array containing the contents of the file, or null if an error occurs
 */
private static byte[] getObjectFile(String filePath) {
    FileInputStream fileInputStream = null;
    byte[] bytesArray = null;

    try {
        File file = new File(filePath);
        bytesArray = new byte[(int) file.length()];
        fileInputStream = new FileInputStream(file);
        fileInputStream.read(bytesArray);

    } catch (IOException e) {
        e.printStackTrace();
    } finally {
        if (fileInputStream != null) {
            try {
                fileInputStream.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
    return bytesArray;
}

Upload an object to a bucket and set metadata using an S3Client.

import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;
import software.amazon.awssdk.services.s3.model.S3Exception;
import java.io.File;
import java.util.HashMap;
import java.util.Map;

/**
 * Before running this Java V2 code example, set up your development
 * environment, including your credentials.
 * <p>
 * For more information, see the following documentation topic:
 * <p>
 * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html
 */
public class PutObjectMetadata {
    public static void main(String[] args) {
        final String USAGE = """

            Usage:
              <bucketName> <objectKey> <objectPath>\s

            Where:
              bucketName - The Amazon S3 bucket to upload an object into.
              objectKey - The object to upload (for example, book.pdf).
              objectPath - The path where the file is located (for example, C:/AWS/book2.pdf).\s
            """;

        if (args.length != 3) {
            System.out.println(USAGE);
            System.exit(1);
        }

        String bucketName = args[0];
        String objectKey = args[1];
        String objectPath = args[2];
        System.out.println("Putting object " + objectKey + " into bucket " + bucketName);
        System.out.println("  in bucket: " + bucketName);
        Region region = Region.US_EAST_1;
        S3Client s3 = S3Client.builder()
            .region(region)
            .build();

        putS3Object(s3, bucketName, objectKey, objectPath);
        s3.close();
    }

    /**
     * Uploads an object to an Amazon S3 bucket with metadata.
     *
     * @param s3         the S3Client object used to interact with the Amazon S3 service
     * @param bucketName the name of the S3 bucket to upload the object to
     * @param objectKey  the name of the object to be uploaded
     * @param objectPath the local file path of the object to be uploaded
     */
    public static void putS3Object(S3Client s3, String bucketName, String objectKey, String objectPath) {
        try {
            Map<String, String> metadata = new HashMap<>();
            metadata.put("author", "Mary Doe");
            metadata.put("version", "1.0.0.0");

            PutObjectRequest putOb = PutObjectRequest.builder()
                .bucket(bucketName)
                .key(objectKey)
                .metadata(metadata)
                .build();

            s3.putObject(putOb, RequestBody.fromFile(new File(objectPath)));
            System.out.println("Successfully placed " + objectKey + " into bucket " + bucketName);

        } catch (S3Exception e) {
            System.err.println(e.getMessage());
            System.exit(1);
        }
    }
}

Upload an object to a bucket and set an object retention value using an S3Client.

import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutObjectRetentionRequest;
import software.amazon.awssdk.services.s3.model.ObjectLockRetention;
import software.amazon.awssdk.services.s3.model.S3Exception;
import java.time.Instant;
import java.time.LocalDate;
import java.time.LocalDateTime;
import java.time.ZoneOffset;

/**
 * Before running this Java V2 code example, set up your development
 * environment, including your credentials.
 * <p>
 * For more information, see the following documentation topic:
 * <p>
 * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html
 */
public class PutObjectRetention {
    public static void main(String[] args) {
        final String usage = """

            Usage:
              <key> <bucketName>\s

            Where:
              key - The name of the object (for example, book.pdf).\s
              bucketName - The Amazon S3 bucket name that contains the object (for example, bucket1).\s
            """;

        if (args.length != 2) {
            System.out.println(usage);
            System.exit(1);
        }

        String key = args[0];
        String bucketName = args[1];
        Region region = Region.US_EAST_1;
        S3Client s3 = S3Client.builder()
            .region(region)
            .build();

        setRentionPeriod(s3, key, bucketName);
        s3.close();
    }

    /**
     * Sets the retention period for an object in an Amazon S3 bucket.
     *
     * @param s3     the S3Client object used to interact with the Amazon S3 service
     * @param key    the key (name) of the object in the S3 bucket
     * @param bucket the name of the S3 bucket where the object is stored
     *
     * @throws S3Exception if an error occurs while setting the object retention period
     */
    public static void setRentionPeriod(S3Client s3, String key, String bucket) {
        try {
            LocalDate localDate = LocalDate.parse("2020-07-17");
            LocalDateTime localDateTime = localDate.atStartOfDay();
            Instant instant = localDateTime.toInstant(ZoneOffset.UTC);

            ObjectLockRetention lockRetention = ObjectLockRetention.builder()
                .mode("COMPLIANCE")
                .retainUntilDate(instant)
                .build();

            PutObjectRetentionRequest retentionRequest = PutObjectRetentionRequest.builder()
                .bucket(bucket)
                .key(key)
                .bypassGovernanceRetention(true)
                .retention(lockRetention)
                .build();

            // To set Retention on an object, the Amazon S3 bucket must support object
            // locking, otherwise an exception is thrown.
            s3.putObjectRetention(retentionRequest);
            System.out.print("An object retention configuration was successfully placed on the object");

        } catch (S3Exception e) {
            System.err.println(e.awsErrorDetails().errorMessage());
            System.exit(1);
        }
    }
}
  • For API details, see PutObject in the AWS SDK for Java 2.x API Reference.

JavaScript
SDK for JavaScript (v3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Upload the object.

import { readFile } from "node:fs/promises";

import {
  PutObjectCommand,
  S3Client,
  S3ServiceException,
} from "@aws-sdk/client-s3";

/**
 * Upload a file to an S3 bucket.
 * @param {{ bucketName: string, key: string, filePath: string }}
 */
export const main = async ({ bucketName, key, filePath }) => {
  const client = new S3Client({});
  const command = new PutObjectCommand({
    Bucket: bucketName,
    Key: key,
    Body: await readFile(filePath),
  });

  try {
    const response = await client.send(command);
    console.log(response);
  } catch (caught) {
    if (
      caught instanceof S3ServiceException &&
      caught.name === "EntityTooLarge"
    ) {
      console.error(
        `Error from S3 while uploading object to ${bucketName}. \
The object was too large. To upload objects larger than 5GB, use the S3 console (160GB max) \
or the multipart upload API (5TB max).`,
      );
    } else if (caught instanceof S3ServiceException) {
      console.error(
        `Error from S3 while uploading object to ${bucketName}.  ${caught.name}: ${caught.message}`,
      );
    } else {
      throw caught;
    }
  }
};
Kotlin
SDK for Kotlin
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

suspend fun putS3Object(
    bucketName: String,
    objectKey: String,
    objectPath: String,
) {
    val metadataVal = mutableMapOf<String, String>()
    metadataVal["myVal"] = "test"

    val request =
        PutObjectRequest {
            bucket = bucketName
            key = objectKey
            metadata = metadataVal
            body = File(objectPath).asByteStream()
        }

    S3Client { region = "us-east-1" }.use { s3 ->
        val response = s3.putObject(request)
        println("Tag information is ${response.eTag}")
    }
}
  • For API details, see PutObject in the AWS SDK for Kotlin API reference.

PHP
SDK for PHP
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Upload an object to a bucket.

$s3client = new Aws\S3\S3Client(['region' => 'us-west-2']);

$fileName = __DIR__ . "/local-file-" . uniqid();
try {
    $this->s3client->putObject([
        'Bucket' => $this->bucketName,
        'Key' => $fileName,
        'SourceFile' => __DIR__ . '/testfile.txt'
    ]);
    echo "Uploaded $fileName to $this->bucketName.\n";
} catch (Exception $exception) {
    echo "Failed to upload $fileName with error: " . $exception->getMessage();
    exit("Please fix error with file upload before continuing.");
}
  • For API details, see PutObject in the AWS SDK for PHP API Reference.

PowerShell
Tools for PowerShell

Example 1: This command uploads the single file "local-sample.txt" to Amazon S3, creating an object with the key "sample.txt" in the bucket "amzn-s3-demo-bucket".

Write-S3Object -BucketName amzn-s3-demo-bucket -Key "sample.txt" -File .\local-sample.txt

Example 2: This command uploads the single file "sample.txt" to Amazon S3, creating an object with the key "sample.txt" in the bucket "amzn-s3-demo-bucket". If the -Key parameter is not supplied, the file name is used as the S3 object key.

Write-S3Object -BucketName amzn-s3-demo-bucket -File .\sample.txt

Example 3: This command uploads the single file "local-sample.txt" to Amazon S3, creating an object with the key "prefix/to/sample.txt" in the bucket "amzn-s3-demo-bucket".

Write-S3Object -BucketName amzn-s3-demo-bucket -Key "prefix/to/sample.txt" -File .\local-sample.txt

Example 4: This command uploads all files in the subdirectory "Scripts" to the bucket "amzn-s3-demo-bucket" and applies the common key prefix "SampleScripts" to each object. Each uploaded file has a key of "SampleScripts/filename", where 'filename' varies.

Write-S3Object -BucketName amzn-s3-demo-bucket -Folder .\Scripts -KeyPrefix SampleScripts\

Example 5: This command uploads all *.ps1 files in the local directory "Scripts" to the bucket "amzn-s3-demo-bucket" and applies the common key prefix "SampleScripts" to each object. Each uploaded file has a key of "SampleScripts/filename.ps1", where 'filename' varies.

Write-S3Object -BucketName amzn-s3-demo-bucket -Folder .\Scripts -KeyPrefix SampleScripts\ -SearchPattern *.ps1

Example 6: This command creates a new S3 object containing the specified content string, with the key 'sample.txt'.

Write-S3Object -BucketName amzn-s3-demo-bucket -Key "sample.txt" -Content "object contents"

Example 7: This command uploads the specified file (the file name is used as the key) and applies the specified tags to the new object.

Write-S3Object -BucketName amzn-s3-demo-bucket -File "sample.txt" -TagSet @{Key="key1";Value="value1"},@{Key="key2";Value="value2"}

Example 8: This command recursively uploads the specified folder and applies the specified tags to all of the new objects.

Write-S3Object -BucketName amzn-s3-demo-bucket -Folder . -KeyPrefix "TaggedFiles" -Recurse -TagSet @{Key="key1";Value="value1"},@{Key="key2";Value="value2"}
Python
SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

class ObjectWrapper:
    """Encapsulates S3 object actions."""

    def __init__(self, s3_object):
        """
        :param s3_object: A Boto3 Object resource. This is a high-level resource in Boto3
                          that wraps object actions in a class-like structure.
        """
        self.object = s3_object
        self.key = self.object.key

    def put(self, data):
        """
        Upload data to the object.

        :param data: The data to upload. This can either be bytes or a string. When this
                     argument is a string, it is interpreted as a file name, which is
                     opened in read bytes mode.
        """
        put_data = data
        if isinstance(data, str):
            try:
                put_data = open(data, "rb")
            except IOError:
                logger.exception("Expected file name or binary data, got '%s'.", data)
                raise

        try:
            self.object.put(Body=put_data)
            self.object.wait_until_exists()
            logger.info(
                "Put object '%s' to bucket '%s'.",
                self.object.key,
                self.object.bucket_name,
            )
        except ClientError:
            logger.exception(
                "Couldn't put object '%s' to bucket '%s'.",
                self.object.key,
                self.object.bucket_name,
            )
            raise
        finally:
            if getattr(put_data, "close", None):
                put_data.close()
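
The wrapper above expects a Boto3 Object resource. The following is a minimal usage sketch, not part of the official example: the bucket name, object key, and payload are hypothetical, and the logging and ClientError setup mirrors what the excerpt assumes from its enclosing module.

import logging

import boto3
from botocore.exceptions import ClientError  # referenced inside ObjectWrapper.put

logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)

# Hypothetical bucket and key names, for illustration only.
s3_resource = boto3.resource("s3")
demo_object = s3_resource.Object("amzn-s3-demo-bucket", "demo-object-key")

# Upload raw bytes; passing a string instead would be treated as a local file name.
wrapper = ObjectWrapper(demo_object)
wrapper.put(b"Hello, Amazon S3!")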

Upload an object using a conditional request.

class S3ConditionalRequests:
    """Encapsulates S3 conditional request operations."""

    def __init__(self, s3_client):
        self.s3 = s3_client

    @classmethod
    def from_client(cls):
        """
        Instantiates this class from a Boto3 client.
        """
        s3_client = boto3.client("s3")
        return cls(s3_client)

    def put_object_conditional(self, object_key: str, source_bucket: str, data: bytes):
        """
        Uploads an object to Amazon S3 with a conditional request. Prevents overwrite
        using an IfNoneMatch condition for the object key.

        :param object_key: The key of the object to upload.
        :param source_bucket: The source bucket of the object.
        :param data: The data to upload.
        """
        try:
            self.s3.put_object(
                Bucket=source_bucket, Key=object_key, Body=data, IfNoneMatch="*"
            )
            print(
                f"\tConditional write successful for key {object_key} in bucket {source_bucket}."
            )
        except ClientError as e:
            error_code = e.response["Error"]["Code"]
            if error_code == "PreconditionFailed":
                print("\tConditional write failed: Precondition failed")
            else:
                logger.error(f"Unexpected error: {error_code}")
                raise
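
A short usage sketch for the conditional-request class above, again with hypothetical bucket, key, and payload names; the boto3, ClientError, and logger setup mirrors what the excerpt assumes from its enclosing module.

import logging

import boto3
from botocore.exceptions import ClientError  # referenced inside put_object_conditional

logger = logging.getLogger(__name__)

# Hypothetical names, for illustration only.
conditional = S3ConditionalRequests.from_client()
conditional.put_object_conditional(
    object_key="demo-object-key",
    source_bucket="amzn-s3-demo-bucket",
    data=b"example payload",
)
# Running the same call again prints the precondition-failed message, because
# IfNoneMatch="*" rejects the write when an object with that key already exists.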
  • For API details, see PutObject in the AWS SDK for Python (Boto3) API Reference.

Ruby
SDK for Ruby
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Upload a file using a managed uploader (Object.upload_file).

require 'aws-sdk-s3'

# Wraps Amazon S3 object actions.
class ObjectUploadFileWrapper
  attr_reader :object

  # @param object [Aws::S3::Object] An existing Amazon S3 object.
  def initialize(object)
    @object = object
  end

  # Uploads a file to an Amazon S3 object by using a managed uploader.
  #
  # @param file_path [String] The path to the file to upload.
  # @return [Boolean] True when the file is uploaded; otherwise false.
  def upload_file(file_path)
    @object.upload_file(file_path)
    true
  rescue Aws::Errors::ServiceError => e
    puts "Couldn't upload file #{file_path} to #{@object.key}. Here's why: #{e.message}"
    false
  end
end

# Example usage:
def run_demo
  bucket_name = "amzn-s3-demo-bucket"
  object_key = "my-uploaded-file"
  file_path = "object_upload_file.rb"

  wrapper = ObjectUploadFileWrapper.new(Aws::S3::Object.new(bucket_name, object_key))
  return unless wrapper.upload_file(file_path)

  puts "File #{file_path} successfully uploaded to #{bucket_name}:#{object_key}."
end

run_demo if $PROGRAM_NAME == __FILE__

Upload a file using Object.put.

require 'aws-sdk-s3'

# Wraps Amazon S3 object actions.
class ObjectPutWrapper
  attr_reader :object

  # @param object [Aws::S3::Object] An existing Amazon S3 object.
  def initialize(object)
    @object = object
  end

  def put_object(source_file_path)
    File.open(source_file_path, 'rb') do |file|
      @object.put(body: file)
    end
    true
  rescue Aws::Errors::ServiceError => e
    puts "Couldn't put #{source_file_path} to #{object.key}. Here's why: #{e.message}"
    false
  end
end

# Example usage:
def run_demo
  bucket_name = "amzn-s3-demo-bucket"
  object_key = "my-object-key"
  file_path = "my-local-file.txt"

  wrapper = ObjectPutWrapper.new(Aws::S3::Object.new(bucket_name, object_key))
  success = wrapper.put_object(file_path)
  return unless success

  puts "Put file #{file_path} into #{object_key} in #{bucket_name}."
end

run_demo if $PROGRAM_NAME == __FILE__

Upload a file using Object.put and add server-side encryption.

require 'aws-sdk-s3'

# Wraps Amazon S3 object actions.
class ObjectPutSseWrapper
  attr_reader :object

  # @param object [Aws::S3::Object] An existing Amazon S3 object.
  def initialize(object)
    @object = object
  end

  def put_object_encrypted(object_content, encryption)
    @object.put(body: object_content, server_side_encryption: encryption)
    true
  rescue Aws::Errors::ServiceError => e
    puts "Couldn't put your content to #{object.key}. Here's why: #{e.message}"
    false
  end
end

# Example usage:
def run_demo
  bucket_name = "amzn-s3-demo-bucket"
  object_key = "my-encrypted-content"
  object_content = "This is my super-secret content."
  encryption = "AES256"

  wrapper = ObjectPutSseWrapper.new(Aws::S3::Object.new(bucket_name, object_key))
  return unless wrapper.put_object_encrypted(object_content, encryption)

  puts "Put your content into #{bucket_name}:#{object_key} and encrypted it with #{encryption}."
end

run_demo if $PROGRAM_NAME == __FILE__
  • For API details, see PutObject in the AWS SDK for Ruby API Reference.

Rust
SDK for Rust
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

pub async fn upload_object(
    client: &aws_sdk_s3::Client,
    bucket_name: &str,
    file_name: &str,
    key: &str,
) -> Result<aws_sdk_s3::operation::put_object::PutObjectOutput, S3ExampleError> {
    let body = aws_sdk_s3::primitives::ByteStream::from_path(std::path::Path::new(file_name)).await;
    client
        .put_object()
        .bucket(bucket_name)
        .key(key)
        .body(body.unwrap())
        .send()
        .await
        .map_err(S3ExampleError::from)
}
  • For API details, see PutObject in the AWS SDK for Rust API reference.

SAP ABAP
SDK for SAP ABAP
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

"Get contents of file from application server." DATA lv_body TYPE xstring. OPEN DATASET iv_file_name FOR INPUT IN BINARY MODE. READ DATASET iv_file_name INTO lv_body. CLOSE DATASET iv_file_name. "Upload/put an object to an S3 bucket." TRY. lo_s3->putobject( iv_bucket = iv_bucket_name iv_key = iv_file_name iv_body = lv_body ). MESSAGE 'Object uploaded to S3 bucket.' TYPE 'I'. CATCH /aws1/cx_s3_nosuchbucket. MESSAGE 'Bucket does not exist.' TYPE 'E'. ENDTRY.
  • For API details, see PutObject in the AWS SDK for SAP ABAP API reference.

Swift
SDK for Swift
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import AWSS3
import Smithy

public func uploadFile(bucket: String, key: String, file: String) async throws {
    let fileUrl = URL(fileURLWithPath: file)
    do {
        let fileData = try Data(contentsOf: fileUrl)
        let dataStream = ByteStream.data(fileData)

        let input = PutObjectInput(
            body: dataStream,
            bucket: bucket,
            key: key
        )

        _ = try await client.putObject(input: input)
    } catch {
        print("ERROR: ", dump(error, name: "Putting an object."))
        throw error
    }
}

import AWSS3
import Smithy

public func createFile(bucket: String, key: String, withData data: Data) async throws {
    let dataStream = ByteStream.data(data)

    let input = PutObjectInput(
        body: dataStream,
        bucket: bucket,
        key: key
    )

    do {
        _ = try await client.putObject(input: input)
    } catch {
        print("ERROR: ", dump(error, name: "Putting an object."))
        throw error
    }
}
  • For API details, see PutObject in the AWS SDK for Swift API reference.