Amazon Rekognition examples using SDK for Java 2.x - AWS SDK Code Examples

More AWS SDK examples are available in the AWS Doc SDK Examples GitHub repo.


Amazon Rekognition examples using SDK for Java 2.x

The following code examples show you how to perform actions and implement common scenarios by using the AWS SDK for Java 2.x with Amazon Rekognition.

Actions are code excerpts from larger programs and must be run in context. While actions show you how to call individual service functions, you can see actions in context in their related scenarios.

Scenarios are code examples that show you how to accomplish a specific task by calling multiple functions within the same service or combined with other AWS services.

Each example includes a link to the complete source code, where you can find instructions on how to set up and run the code in context.

Actions

The following code example shows how to use CompareFaces.

For more information, see Comparing faces in images.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.rekognition.RekognitionClient; import software.amazon.awssdk.services.rekognition.model.RekognitionException; import software.amazon.awssdk.services.rekognition.model.Image; import software.amazon.awssdk.services.rekognition.model.CompareFacesRequest; import software.amazon.awssdk.services.rekognition.model.CompareFacesResponse; import software.amazon.awssdk.services.rekognition.model.CompareFacesMatch; import software.amazon.awssdk.services.rekognition.model.ComparedFace; import software.amazon.awssdk.services.rekognition.model.BoundingBox; import software.amazon.awssdk.core.SdkBytes; import java.io.FileInputStream; import java.io.FileNotFoundException; import java.io.InputStream; import java.util.List; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class CompareFaces { public static void main(String[] args) { final String usage = """ Usage: <pathSource> <pathTarget> Where: pathSource - The path to the source image (for example, C:\\AWS\\pic1.png).\s pathTarget - The path to the target image (for example, C:\\AWS\\pic2.png).\s """; if (args.length != 2) { System.out.println(usage); System.exit(1); } Float similarityThreshold = 70F; String sourceImage = args[0]; String targetImage = args[1]; Region region = Region.US_EAST_1; RekognitionClient rekClient = RekognitionClient.builder() .region(region) .build(); compareTwoFaces(rekClient, similarityThreshold, sourceImage, targetImage); rekClient.close(); } public static void compareTwoFaces(RekognitionClient rekClient, Float similarityThreshold, String sourceImage, String targetImage) { try { InputStream sourceStream = new FileInputStream(sourceImage); InputStream tarStream = new FileInputStream(targetImage); SdkBytes sourceBytes = SdkBytes.fromInputStream(sourceStream); SdkBytes targetBytes = SdkBytes.fromInputStream(tarStream); // Create an Image object for the source image. Image souImage = Image.builder() .bytes(sourceBytes) .build(); Image tarImage = Image.builder() .bytes(targetBytes) .build(); CompareFacesRequest facesRequest = CompareFacesRequest.builder() .sourceImage(souImage) .targetImage(tarImage) .similarityThreshold(similarityThreshold) .build(); // Compare the two images. CompareFacesResponse compareFacesResult = rekClient.compareFaces(facesRequest); List<CompareFacesMatch> faceDetails = compareFacesResult.faceMatches(); for (CompareFacesMatch match : faceDetails) { ComparedFace face = match.face(); BoundingBox position = face.boundingBox(); System.out.println("Face at " + position.left().toString() + " " + position.top() + " matches with " + face.confidence().toString() + "% confidence."); } List<ComparedFace> uncompared = compareFacesResult.unmatchedFaces(); System.out.println("There was " + uncompared.size() + " face(s) that did not match"); System.out.println("Source image rotation: " + compareFacesResult.sourceImageOrientationCorrection()); System.out.println("target image rotation: " + compareFacesResult.targetImageOrientationCorrection()); } catch (RekognitionException | FileNotFoundException e) { System.out.println("Failed to load source image " + sourceImage); System.exit(1); } } }
  • For API details, see CompareFaces in the AWS SDK for Java 2.x API Reference.

The following code example shows how to use CreateCollection.

For more information, see Creating a collection.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.rekognition.RekognitionClient; import software.amazon.awssdk.services.rekognition.model.CreateCollectionResponse; import software.amazon.awssdk.services.rekognition.model.CreateCollectionRequest; import software.amazon.awssdk.services.rekognition.model.RekognitionException; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class CreateCollection { public static void main(String[] args) { final String usage = """ Usage: <collectionName>\s Where: collectionName - The name of the collection.\s """; if (args.length != 1) { System.out.println(usage); System.exit(1); } String collectionId = args[0]; Region region = Region.US_EAST_1; RekognitionClient rekClient = RekognitionClient.builder() .region(region) .build(); System.out.println("Creating collection: " + collectionId); createMyCollection(rekClient, collectionId); rekClient.close(); } public static void createMyCollection(RekognitionClient rekClient, String collectionId) { try { CreateCollectionRequest collectionRequest = CreateCollectionRequest.builder() .collectionId(collectionId) .build(); CreateCollectionResponse collectionResponse = rekClient.createCollection(collectionRequest); System.out.println("CollectionArn: " + collectionResponse.collectionArn()); System.out.println("Status code: " + collectionResponse.statusCode().toString()); } catch (RekognitionException e) { System.out.println(e.getMessage()); System.exit(1); } } }
  • For API details, see CreateCollection in the AWS SDK for Java 2.x API Reference.

The following code example shows how to use DeleteCollection.

For more information, see Deleting a collection.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.rekognition.RekognitionClient; import software.amazon.awssdk.services.rekognition.model.DeleteCollectionRequest; import software.amazon.awssdk.services.rekognition.model.DeleteCollectionResponse; import software.amazon.awssdk.services.rekognition.model.RekognitionException; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class DeleteCollection { public static void main(String[] args) { final String usage = """ Usage: <collectionId>\s Where: collectionId - The id of the collection to delete.\s """; if (args.length != 1) { System.out.println(usage); System.exit(1); } String collectionId = args[0]; Region region = Region.US_EAST_1; RekognitionClient rekClient = RekognitionClient.builder() .region(region) .build(); System.out.println("Deleting collection: " + collectionId); deleteMyCollection(rekClient, collectionId); rekClient.close(); } public static void deleteMyCollection(RekognitionClient rekClient, String collectionId) { try { DeleteCollectionRequest deleteCollectionRequest = DeleteCollectionRequest.builder() .collectionId(collectionId) .build(); DeleteCollectionResponse deleteCollectionResponse = rekClient.deleteCollection(deleteCollectionRequest); System.out.println(collectionId + ": " + deleteCollectionResponse.statusCode().toString()); } catch (RekognitionException e) { System.out.println(e.getMessage()); System.exit(1); } } }
  • For API details, see DeleteCollection in the AWS SDK for Java 2.x API Reference.

The following code example shows how to use DeleteFaces.

For more information, see Deleting faces from a collection.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.rekognition.RekognitionClient; import software.amazon.awssdk.services.rekognition.model.DeleteFacesRequest; import software.amazon.awssdk.services.rekognition.model.RekognitionException; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class DeleteFacesFromCollection { public static void main(String[] args) { final String usage = """ Usage: <collectionId> <faceId>\s Where: collectionId - The id of the collection from which faces are deleted.\s faceId - The id of the face to delete.\s """; if (args.length != 2) { System.out.println(usage); System.exit(1); } String collectionId = args[0]; String faceId = args[1]; Region region = Region.US_EAST_1; RekognitionClient rekClient = RekognitionClient.builder() .region(region) .build(); System.out.println("Deleting face from collection: " + collectionId); deleteFacesCollection(rekClient, collectionId, faceId); rekClient.close(); } public static void deleteFacesCollection(RekognitionClient rekClient, String collectionId, String faceId) { try { DeleteFacesRequest deleteFacesRequest = DeleteFacesRequest.builder() .collectionId(collectionId) .faceIds(faceId) .build(); rekClient.deleteFaces(deleteFacesRequest); System.out.println("The face was deleted from the collection."); } catch (RekognitionException e) { System.out.println(e.getMessage()); System.exit(1); } } }
  • For API details, see DeleteFaces in the AWS SDK for Java 2.x API Reference.

The following code example shows how to use DescribeCollection.

For more information, see Describing a collection.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.rekognition.RekognitionClient; import software.amazon.awssdk.services.rekognition.model.DescribeCollectionRequest; import software.amazon.awssdk.services.rekognition.model.DescribeCollectionResponse; import software.amazon.awssdk.services.rekognition.model.RekognitionException; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class DescribeCollection { public static void main(String[] args) { final String usage = """ Usage: <collectionName> Where: collectionName - The name of the Amazon Rekognition collection.\s """; if (args.length != 1) { System.out.println(usage); System.exit(1); } String collectionName = args[0]; Region region = Region.US_EAST_1; RekognitionClient rekClient = RekognitionClient.builder() .region(region) .build(); describeColl(rekClient, collectionName); rekClient.close(); } public static void describeColl(RekognitionClient rekClient, String collectionName) { try { DescribeCollectionRequest describeCollectionRequest = DescribeCollectionRequest.builder() .collectionId(collectionName) .build(); DescribeCollectionResponse describeCollectionResponse = rekClient .describeCollection(describeCollectionRequest); System.out.println("Collection Arn : " + describeCollectionResponse.collectionARN()); System.out.println("Created : " + describeCollectionResponse.creationTimestamp().toString()); } catch (RekognitionException e) { System.out.println(e.getMessage()); System.exit(1); } } }

The following code example shows how to use DetectFaces.

For more information, see Detecting faces in an image.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.rekognition.RekognitionClient; import software.amazon.awssdk.services.rekognition.model.RekognitionException; import software.amazon.awssdk.services.rekognition.model.DetectFacesRequest; import software.amazon.awssdk.services.rekognition.model.DetectFacesResponse; import software.amazon.awssdk.services.rekognition.model.Image; import software.amazon.awssdk.services.rekognition.model.Attribute; import software.amazon.awssdk.services.rekognition.model.FaceDetail; import software.amazon.awssdk.services.rekognition.model.AgeRange; import software.amazon.awssdk.core.SdkBytes; import java.io.FileInputStream; import java.io.FileNotFoundException; import java.io.InputStream; import java.util.List; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class DetectFaces { public static void main(String[] args) { final String usage = """ Usage: <sourceImage> Where: sourceImage - The path to the image (for example, C:\\AWS\\pic1.png).\s """; if (args.length != 1) { System.out.println(usage); System.exit(1); } String sourceImage = args[0]; Region region = Region.US_EAST_1; RekognitionClient rekClient = RekognitionClient.builder() .region(region) .build(); detectFacesinImage(rekClient, sourceImage); rekClient.close(); } public static void detectFacesinImage(RekognitionClient rekClient, String sourceImage) { try { InputStream sourceStream = new FileInputStream(sourceImage); SdkBytes sourceBytes = SdkBytes.fromInputStream(sourceStream); // Create an Image object for the source image. Image souImage = Image.builder() .bytes(sourceBytes) .build(); DetectFacesRequest facesRequest = DetectFacesRequest.builder() .attributes(Attribute.ALL) .image(souImage) .build(); DetectFacesResponse facesResponse = rekClient.detectFaces(facesRequest); List<FaceDetail> faceDetails = facesResponse.faceDetails(); for (FaceDetail face : faceDetails) { AgeRange ageRange = face.ageRange(); System.out.println("The detected face is estimated to be between " + ageRange.low().toString() + " and " + ageRange.high().toString() + " years old."); System.out.println("There is a smile : " + face.smile().value().toString()); } } catch (RekognitionException | FileNotFoundException e) { System.out.println(e.getMessage()); System.exit(1); } } }
  • For API details, see DetectFaces in the AWS SDK for Java 2.x API Reference.

The following code example shows how to use DetectLabels.

For more information, see Detecting labels in an image.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import software.amazon.awssdk.core.SdkBytes; import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.rekognition.RekognitionClient; import software.amazon.awssdk.services.rekognition.model.Image; import software.amazon.awssdk.services.rekognition.model.DetectLabelsRequest; import software.amazon.awssdk.services.rekognition.model.DetectLabelsResponse; import software.amazon.awssdk.services.rekognition.model.Label; import software.amazon.awssdk.services.rekognition.model.RekognitionException; import java.io.FileInputStream; import java.io.FileNotFoundException; import java.io.InputStream; import java.util.List; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class DetectLabels { public static void main(String[] args) { final String usage = """ Usage: <sourceImage> Where: sourceImage - The path to the image (for example, C:\\AWS\\pic1.png).\s """; if (args.length != 1) { System.out.println(usage); System.exit(1); } String sourceImage = args[0]; Region region = Region.US_EAST_1; RekognitionClient rekClient = RekognitionClient.builder() .region(region) .build(); detectImageLabels(rekClient, sourceImage); rekClient.close(); } public static void detectImageLabels(RekognitionClient rekClient, String sourceImage) { try { InputStream sourceStream = new FileInputStream(sourceImage); SdkBytes sourceBytes = SdkBytes.fromInputStream(sourceStream); // Create an Image object for the source image. Image souImage = Image.builder() .bytes(sourceBytes) .build(); DetectLabelsRequest detectLabelsRequest = DetectLabelsRequest.builder() .image(souImage) .maxLabels(10) .build(); DetectLabelsResponse labelsResponse = rekClient.detectLabels(detectLabelsRequest); List<Label> labels = labelsResponse.labels(); System.out.println("Detected labels for the given photo"); for (Label label : labels) { System.out.println(label.name() + ": " + label.confidence().toString()); } } catch (RekognitionException | FileNotFoundException e) { System.out.println(e.getMessage()); System.exit(1); } } }
  • For API details, see DetectLabels in the AWS SDK for Java 2.x API Reference.

The following code example shows how to use DetectModerationLabels.

For more information, see Detecting inappropriate images.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import software.amazon.awssdk.core.SdkBytes; import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.rekognition.RekognitionClient; import software.amazon.awssdk.services.rekognition.model.RekognitionException; import software.amazon.awssdk.services.rekognition.model.Image; import software.amazon.awssdk.services.rekognition.model.DetectModerationLabelsRequest; import software.amazon.awssdk.services.rekognition.model.DetectModerationLabelsResponse; import software.amazon.awssdk.services.rekognition.model.ModerationLabel; import java.io.FileInputStream; import java.io.FileNotFoundException; import java.io.InputStream; import java.util.List; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class DetectModerationLabels { public static void main(String[] args) { final String usage = """ Usage: <sourceImage> Where: sourceImage - The path to the image (for example, C:\\AWS\\pic1.png).\s """; if (args.length < 1) { System.out.println(usage); System.exit(1); } String sourceImage = args[0]; Region region = Region.US_EAST_1; RekognitionClient rekClient = RekognitionClient.builder() .region(region) .build(); detectModLabels(rekClient, sourceImage); rekClient.close(); } public static void detectModLabels(RekognitionClient rekClient, String sourceImage) { try { InputStream sourceStream = new FileInputStream(sourceImage); SdkBytes sourceBytes = SdkBytes.fromInputStream(sourceStream); Image souImage = Image.builder() .bytes(sourceBytes) .build(); DetectModerationLabelsRequest moderationLabelsRequest = DetectModerationLabelsRequest.builder() .image(souImage) .minConfidence(60F) .build(); DetectModerationLabelsResponse moderationLabelsResponse = rekClient .detectModerationLabels(moderationLabelsRequest); List<ModerationLabel> labels = moderationLabelsResponse.moderationLabels(); System.out.println("Detected labels for image"); for (ModerationLabel label : labels) { System.out.println("Label: " + label.name() + "\n Confidence: " + label.confidence().toString() + "%" + "\n Parent:" + label.parentName()); } } catch (RekognitionException | FileNotFoundException e) { e.printStackTrace(); System.exit(1); } } }

The following code example shows how to use DetectText.

For more information, see Detecting text in an image.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import software.amazon.awssdk.core.SdkBytes; import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.rekognition.RekognitionClient; import software.amazon.awssdk.services.rekognition.model.DetectTextRequest; import software.amazon.awssdk.services.rekognition.model.Image; import software.amazon.awssdk.services.rekognition.model.DetectTextResponse; import software.amazon.awssdk.services.rekognition.model.TextDetection; import software.amazon.awssdk.services.rekognition.model.RekognitionException; import java.io.FileInputStream; import java.io.FileNotFoundException; import java.io.InputStream; import java.util.List; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class DetectText { public static void main(String[] args) { final String usage = """ Usage: <sourceImage> Where: sourceImage - The path to the image that contains text (for example, C:\\AWS\\pic1.png).\s """; if (args.length != 1) { System.out.println(usage); System.exit(1); } String sourceImage = args[0]; Region region = Region.US_EAST_1; RekognitionClient rekClient = RekognitionClient.builder() .region(region) .build(); detectTextLabels(rekClient, sourceImage); rekClient.close(); } public static void detectTextLabels(RekognitionClient rekClient, String sourceImage) { try { InputStream sourceStream = new FileInputStream(sourceImage); SdkBytes sourceBytes = SdkBytes.fromInputStream(sourceStream); Image souImage = Image.builder() .bytes(sourceBytes) .build(); DetectTextRequest textRequest = DetectTextRequest.builder() .image(souImage) .build(); DetectTextResponse textResponse = rekClient.detectText(textRequest); List<TextDetection> textCollection = textResponse.textDetections(); System.out.println("Detected lines and words"); for (TextDetection text : textCollection) { System.out.println("Detected: " + text.detectedText()); System.out.println("Confidence: " + text.confidence().toString()); System.out.println("Id : " + text.id()); System.out.println("Parent Id: " + text.parentId()); System.out.println("Type: " + text.type()); System.out.println(); } } catch (RekognitionException | FileNotFoundException e) { System.out.println(e.getMessage()); System.exit(1); } } }
  • For API details, see DetectText in the AWS SDK for Java 2.x API Reference.

The following code example shows how to use IndexFaces.

For more information, see Adding faces to a collection.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import software.amazon.awssdk.core.SdkBytes; import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.rekognition.RekognitionClient; import software.amazon.awssdk.services.rekognition.model.IndexFacesResponse; import software.amazon.awssdk.services.rekognition.model.IndexFacesRequest; import software.amazon.awssdk.services.rekognition.model.Image; import software.amazon.awssdk.services.rekognition.model.QualityFilter; import software.amazon.awssdk.services.rekognition.model.Attribute; import software.amazon.awssdk.services.rekognition.model.FaceRecord; import software.amazon.awssdk.services.rekognition.model.UnindexedFace; import software.amazon.awssdk.services.rekognition.model.RekognitionException; import software.amazon.awssdk.services.rekognition.model.Reason; import java.io.FileInputStream; import java.io.FileNotFoundException; import java.io.InputStream; import java.util.List; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class AddFacesToCollection { public static void main(String[] args) { final String usage = """ Usage: <collectionId> <sourceImage> Where: collectionName - The name of the collection. sourceImage - The path to the image (for example, C:\\AWS\\pic1.png).\s """; if (args.length != 2) { System.out.println(usage); System.exit(1); } String collectionId = args[0]; String sourceImage = args[1]; Region region = Region.US_EAST_1; RekognitionClient rekClient = RekognitionClient.builder() .region(region) .build(); addToCollection(rekClient, collectionId, sourceImage); rekClient.close(); } public static void addToCollection(RekognitionClient rekClient, String collectionId, String sourceImage) { try { InputStream sourceStream = new FileInputStream(sourceImage); SdkBytes sourceBytes = SdkBytes.fromInputStream(sourceStream); Image souImage = Image.builder() .bytes(sourceBytes) .build(); IndexFacesRequest facesRequest = IndexFacesRequest.builder() .collectionId(collectionId) .image(souImage) .maxFaces(1) .qualityFilter(QualityFilter.AUTO) .detectionAttributes(Attribute.DEFAULT) .build(); IndexFacesResponse facesResponse = rekClient.indexFaces(facesRequest); System.out.println("Results for the image"); System.out.println("\n Faces indexed:"); List<FaceRecord> faceRecords = facesResponse.faceRecords(); for (FaceRecord faceRecord : faceRecords) { System.out.println(" Face ID: " + faceRecord.face().faceId()); System.out.println(" Location:" + faceRecord.faceDetail().boundingBox().toString()); } List<UnindexedFace> unindexedFaces = facesResponse.unindexedFaces(); System.out.println("Faces not indexed:"); for (UnindexedFace unindexedFace : unindexedFaces) { System.out.println(" Location:" + unindexedFace.faceDetail().boundingBox().toString()); System.out.println(" Reasons:"); for (Reason reason : unindexedFace.reasons()) { System.out.println("Reason: " + reason); } } } catch (RekognitionException | FileNotFoundException e) { System.out.println(e.getMessage()); System.exit(1); } } }
  • For API details, see IndexFaces in the AWS SDK for Java 2.x API Reference.

The following code example shows how to use ListCollections.

For more information, see Listing collections.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.rekognition.RekognitionClient; import software.amazon.awssdk.services.rekognition.model.ListCollectionsRequest; import software.amazon.awssdk.services.rekognition.model.ListCollectionsResponse; import software.amazon.awssdk.services.rekognition.model.RekognitionException; import java.util.List; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class ListCollections { public static void main(String[] args) { Region region = Region.US_EAST_1; RekognitionClient rekClient = RekognitionClient.builder() .region(region) .build(); System.out.println("Listing collections"); listAllCollections(rekClient); rekClient.close(); } public static void listAllCollections(RekognitionClient rekClient) { try { ListCollectionsRequest listCollectionsRequest = ListCollectionsRequest.builder() .maxResults(10) .build(); ListCollectionsResponse response = rekClient.listCollections(listCollectionsRequest); List<String> collectionIds = response.collectionIds(); for (String resultId : collectionIds) { System.out.println(resultId); } } catch (RekognitionException e) { System.out.println(e.getMessage()); System.exit(1); } } }
  • For API details, see ListCollections in the AWS SDK for Java 2.x API Reference.

The following code example shows how to use ListFaces.

For more information, see Listing faces in a collection.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.rekognition.RekognitionClient; import software.amazon.awssdk.services.rekognition.model.Face; import software.amazon.awssdk.services.rekognition.model.ListFacesRequest; import software.amazon.awssdk.services.rekognition.model.ListFacesResponse; import software.amazon.awssdk.services.rekognition.model.RekognitionException; import java.util.List; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class ListFacesInCollection { public static void main(String[] args) { final String usage = """ Usage: <collectionId> Where: collectionId - The name of the collection.\s """; if (args.length < 1) { System.out.println(usage); System.exit(1); } String collectionId = args[0]; Region region = Region.US_EAST_1; RekognitionClient rekClient = RekognitionClient.builder() .region(region) .build(); System.out.println("Faces in collection " + collectionId); listFacesCollection(rekClient, collectionId); rekClient.close(); } public static void listFacesCollection(RekognitionClient rekClient, String collectionId) { try { ListFacesRequest facesRequest = ListFacesRequest.builder() .collectionId(collectionId) .maxResults(10) .build(); ListFacesResponse facesResponse = rekClient.listFaces(facesRequest); List<Face> faces = facesResponse.faces(); for (Face face : faces) { System.out.println("Confidence level there is a face: " + face.confidence()); System.out.println("The face Id value is " + face.faceId()); } } catch (RekognitionException e) { System.out.println(e.getMessage()); System.exit(1); } } }
  • For API details, see ListFaces in the AWS SDK for Java 2.x API Reference.

The following code example shows how to use RecognizeCelebrities.

For more information, see Recognizing celebrities in an image.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.rekognition.RekognitionClient; import software.amazon.awssdk.core.SdkBytes; import java.io.FileInputStream; import java.io.FileNotFoundException; import java.io.InputStream; import java.util.List; import software.amazon.awssdk.services.rekognition.model.RecognizeCelebritiesRequest; import software.amazon.awssdk.services.rekognition.model.RecognizeCelebritiesResponse; import software.amazon.awssdk.services.rekognition.model.RekognitionException; import software.amazon.awssdk.services.rekognition.model.Image; import software.amazon.awssdk.services.rekognition.model.Celebrity; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class RecognizeCelebrities { public static void main(String[] args) { final String usage = """ Usage: <sourceImage> Where: sourceImage - The path to the image (for example, C:\\AWS\\pic1.png).\s """; if (args.length != 1) { System.out.println(usage); System.exit(1); } String sourceImage = args[0]; Region region = Region.US_EAST_1; RekognitionClient rekClient = RekognitionClient.builder() .region(region) .build(); System.out.println("Locating celebrities in " + sourceImage); recognizeAllCelebrities(rekClient, sourceImage); rekClient.close(); } public static void recognizeAllCelebrities(RekognitionClient rekClient, String sourceImage) { try { InputStream sourceStream = new FileInputStream(sourceImage); SdkBytes sourceBytes = SdkBytes.fromInputStream(sourceStream); Image souImage = Image.builder() .bytes(sourceBytes) .build(); RecognizeCelebritiesRequest request = RecognizeCelebritiesRequest.builder() .image(souImage) .build(); RecognizeCelebritiesResponse result = rekClient.recognizeCelebrities(request); List<Celebrity> celebs = result.celebrityFaces(); System.out.println(celebs.size() + " celebrity(s) were recognized.\n"); for (Celebrity celebrity : celebs) { System.out.println("Celebrity recognized: " + celebrity.name()); System.out.println("Celebrity ID: " + celebrity.id()); System.out.println("Further information (if available):"); for (String url : celebrity.urls()) { System.out.println(url); } System.out.println(); } System.out.println(result.unrecognizedFaces().size() + " face(s) were unrecognized."); } catch (RekognitionException | FileNotFoundException e) { System.out.println(e.getMessage()); System.exit(1); } } }

The following code example shows how to use SearchFacesByImage.

For more information, see Searching for a face (image).

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import software.amazon.awssdk.core.SdkBytes; import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.rekognition.RekognitionClient; import software.amazon.awssdk.services.rekognition.model.RekognitionException; import software.amazon.awssdk.services.rekognition.model.SearchFacesByImageRequest; import software.amazon.awssdk.services.rekognition.model.Image; import software.amazon.awssdk.services.rekognition.model.SearchFacesByImageResponse; import software.amazon.awssdk.services.rekognition.model.FaceMatch; import java.io.File; import java.io.FileInputStream; import java.io.FileNotFoundException; import java.io.InputStream; import java.util.List; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class SearchFaceMatchingImageCollection { public static void main(String[] args) { final String usage = """ Usage: <collectionId> <sourceImage> Where: collectionId - The id of the collection. \s sourceImage - The path to the image (for example, C:\\AWS\\pic1.png).\s """; if (args.length != 2) { System.out.println(usage); System.exit(1); } String collectionId = args[0]; String sourceImage = args[1]; Region region = Region.US_EAST_1; RekognitionClient rekClient = RekognitionClient.builder() .region(region) .build(); System.out.println("Searching for a face in a collections"); searchFaceInCollection(rekClient, collectionId, sourceImage); rekClient.close(); } public static void searchFaceInCollection(RekognitionClient rekClient, String collectionId, String sourceImage) { try { InputStream sourceStream = new FileInputStream(new File(sourceImage)); SdkBytes sourceBytes = SdkBytes.fromInputStream(sourceStream); Image souImage = Image.builder() .bytes(sourceBytes) .build(); SearchFacesByImageRequest facesByImageRequest = SearchFacesByImageRequest.builder() .image(souImage) .maxFaces(10) .faceMatchThreshold(70F) .collectionId(collectionId) .build(); SearchFacesByImageResponse imageResponse = rekClient.searchFacesByImage(facesByImageRequest); System.out.println("Faces matching in the collection"); List<FaceMatch> faceImageMatches = imageResponse.faceMatches(); for (FaceMatch face : faceImageMatches) { System.out.println("The similarity level is " + face.similarity()); System.out.println(); } } catch (RekognitionException | FileNotFoundException e) { System.out.println(e.getMessage()); System.exit(1); } } }
  • For API details, see SearchFacesByImage in the AWS SDK for Java 2.x API Reference.

The following code example shows how to use SearchFaces.

For more information, see Searching for a face (face ID).

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.rekognition.RekognitionClient; import software.amazon.awssdk.services.rekognition.model.SearchFacesRequest; import software.amazon.awssdk.services.rekognition.model.SearchFacesResponse; import software.amazon.awssdk.services.rekognition.model.FaceMatch; import software.amazon.awssdk.services.rekognition.model.RekognitionException; import java.util.List; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class SearchFaceMatchingIdCollection { public static void main(String[] args) { final String usage = """ Usage: <collectionId> <sourceImage> Where: collectionId - The id of the collection. \s sourceImage - The path to the image (for example, C:\\AWS\\pic1.png).\s """; if (args.length != 2) { System.out.println(usage); System.exit(1); } String collectionId = args[0]; String faceId = args[1]; Region region = Region.US_EAST_1; RekognitionClient rekClient = RekognitionClient.builder() .region(region) .build(); System.out.println("Searching for a face in a collections"); searchFacebyId(rekClient, collectionId, faceId); rekClient.close(); } public static void searchFacebyId(RekognitionClient rekClient, String collectionId, String faceId) { try { SearchFacesRequest searchFacesRequest = SearchFacesRequest.builder() .collectionId(collectionId) .faceId(faceId) .faceMatchThreshold(70F) .maxFaces(2) .build(); SearchFacesResponse imageResponse = rekClient.searchFaces(searchFacesRequest); System.out.println("Faces matching in the collection"); List<FaceMatch> faceImageMatches = imageResponse.faceMatches(); for (FaceMatch face : faceImageMatches) { System.out.println("The similarity level is " + face.similarity()); System.out.println(); } } catch (RekognitionException e) { System.out.println(e.getMessage()); System.exit(1); } } }

Scenarios

The following code example shows how to create a serverless application that lets users manage photos using labels.

SDK for Java 2.x

Shows how to develop a photo asset management application that detects labels in images using Amazon Rekognition and stores them for later retrieval.

For complete source code and instructions on how to set up and run, see the full example on GitHub; a minimal sketch of the central detect-and-store step also follows the list of services below.

For a deep dive into the origin of this example, see the post on AWS Community.

Services used in this example
  • API Gateway

  • DynamoDB

  • Lambda

  • Amazon Rekognition

  • Amazon S3

  • Amazon SNS
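
The full application wires these services together behind API Gateway and Lambda. As a rough illustration of the central step only, the following sketch detects labels for an image already stored in Amazon S3 and writes one item per label to a hypothetical DynamoDB table named Photo. The bucket, object key, table, and attribute names are illustrative assumptions rather than part of the original example, and the sketch additionally requires the DynamoDB SDK module alongside Rekognition.

import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.PutItemRequest;
import software.amazon.awssdk.services.rekognition.RekognitionClient;
import software.amazon.awssdk.services.rekognition.model.DetectLabelsRequest;
import software.amazon.awssdk.services.rekognition.model.Image;
import software.amazon.awssdk.services.rekognition.model.Label;
import software.amazon.awssdk.services.rekognition.model.S3Object;
import java.util.HashMap;
import java.util.Map;

/** Minimal sketch: detect labels for an S3-hosted photo and persist them for later lookup. */
public class PhotoLabelSketch {
    public static void main(String[] args) {
        Region region = Region.US_EAST_1;
        try (RekognitionClient rekClient = RekognitionClient.builder().region(region).build();
             DynamoDbClient ddb = DynamoDbClient.builder().region(region).build()) {

            String bucket = "my-photo-bucket"; // assumption: replace with your bucket
            String key = "photos/pic1.png";    // assumption: replace with your object key

            // Ask Rekognition to label the image directly from S3 (no local download needed).
            DetectLabelsRequest request = DetectLabelsRequest.builder()
                    .image(Image.builder()
                            .s3Object(S3Object.builder().bucket(bucket).name(key).build())
                            .build())
                    .maxLabels(10)
                    .minConfidence(80F)
                    .build();

            // Store one DynamoDB item per detected label so photos can be found by label later.
            for (Label label : rekClient.detectLabels(request).labels()) {
                Map<String, AttributeValue> item = new HashMap<>();
                item.put("Label", AttributeValue.builder().s(label.name()).build());
                item.put("PhotoKey", AttributeValue.builder().s(key).build());
                ddb.putItem(PutItemRequest.builder()
                        .tableName("Photo") // assumption: hypothetical table keyed by Label and PhotoKey
                        .item(item)
                        .build());
            }
        }
    }
}

Passing the S3 object reference to Rekognition avoids downloading the image into the calling function, and storing one item per label keeps later lookups by label a simple DynamoDB query.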

The following code example shows how to build an app that uses Amazon Rekognition to detect personal protective equipment (PPE) in images.

SDK for Java 2.x

Shows how to create an AWS Lambda function that detects images with personal protective equipment.

For complete source code and instructions on how to set up and run, see the full example on GitHub; a minimal sketch of the central detection call also follows the list of services below.

Services used in this example
  • DynamoDB

  • Amazon Rekognition

  • Amazon S3

  • Amazon SES
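
The Lambda-based example on GitHub also records results in DynamoDB and sends notifications through Amazon SES; the sketch below shows only the Rekognition call at its core, DetectProtectiveEquipment, run against a local image file. The file path, confidence threshold, and required equipment types are illustrative assumptions.

import software.amazon.awssdk.core.SdkBytes;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.rekognition.RekognitionClient;
import software.amazon.awssdk.services.rekognition.model.DetectProtectiveEquipmentRequest;
import software.amazon.awssdk.services.rekognition.model.DetectProtectiveEquipmentResponse;
import software.amazon.awssdk.services.rekognition.model.EquipmentDetection;
import software.amazon.awssdk.services.rekognition.model.Image;
import software.amazon.awssdk.services.rekognition.model.ProtectiveEquipmentBodyPart;
import software.amazon.awssdk.services.rekognition.model.ProtectiveEquipmentPerson;
import software.amazon.awssdk.services.rekognition.model.ProtectiveEquipmentSummarizationAttributes;
import software.amazon.awssdk.services.rekognition.model.ProtectiveEquipmentType;
import java.io.FileInputStream;
import java.io.FileNotFoundException;

/** Minimal sketch: report which detected persons wear face, hand, or head covers. */
public class PpeSketch {
    public static void main(String[] args) throws FileNotFoundException {
        String sourceImage = "C:\\AWS\\ppe.png"; // assumption: path to a local image
        try (RekognitionClient rekClient = RekognitionClient.builder().region(Region.US_EAST_1).build()) {
            Image image = Image.builder()
                    .bytes(SdkBytes.fromInputStream(new FileInputStream(sourceImage)))
                    .build();

            // Summarize detected persons against the equipment types the application requires.
            DetectProtectiveEquipmentRequest request = DetectProtectiveEquipmentRequest.builder()
                    .image(image)
                    .summarizationAttributes(ProtectiveEquipmentSummarizationAttributes.builder()
                            .minConfidence(80F)
                            .requiredEquipmentTypes(ProtectiveEquipmentType.FACE_COVER,
                                    ProtectiveEquipmentType.HAND_COVER,
                                    ProtectiveEquipmentType.HEAD_COVER)
                            .build())
                    .build();

            DetectProtectiveEquipmentResponse response = rekClient.detectProtectiveEquipment(request);
            for (ProtectiveEquipmentPerson person : response.persons()) {
                System.out.println("Person ID: " + person.id());
                for (ProtectiveEquipmentBodyPart bodyPart : person.bodyParts()) {
                    for (EquipmentDetection detection : bodyPart.equipmentDetections()) {
                        System.out.println("  " + bodyPart.nameAsString() + ": " + detection.typeAsString()
                                + " (confidence " + detection.confidence() + ")");
                    }
                }
            }
        }
    }
}

With summarization attributes set, the response also carries summary lists of person IDs with and without the required equipment, which an application could use to decide whether to trigger a notification.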

The following code example shows how to:

  • Start Amazon Rekognition jobs to detect elements like people, objects, and text in videos.

  • Check job status until the jobs finish.

  • Output the list of elements detected by each job.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Get celebrity recognition results from a video located in an Amazon S3 bucket.

import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.rekognition.RekognitionClient; import software.amazon.awssdk.services.rekognition.model.S3Object; import software.amazon.awssdk.services.rekognition.model.NotificationChannel; import software.amazon.awssdk.services.rekognition.model.Video; import software.amazon.awssdk.services.rekognition.model.StartCelebrityRecognitionResponse; import software.amazon.awssdk.services.rekognition.model.RekognitionException; import software.amazon.awssdk.services.rekognition.model.CelebrityRecognitionSortBy; import software.amazon.awssdk.services.rekognition.model.VideoMetadata; import software.amazon.awssdk.services.rekognition.model.CelebrityRecognition; import software.amazon.awssdk.services.rekognition.model.CelebrityDetail; import software.amazon.awssdk.services.rekognition.model.StartCelebrityRecognitionRequest; import software.amazon.awssdk.services.rekognition.model.GetCelebrityRecognitionRequest; import software.amazon.awssdk.services.rekognition.model.GetCelebrityRecognitionResponse; import java.util.List; /** * To run this code example, ensure that you perform the Prerequisites as stated * in the Amazon Rekognition Guide: * https://docs.aws.amazon.com/rekognition/latest/dg/video-analyzing-with-sqs.html * * Also, ensure that set up your development environment, including your * credentials. * * For information, see this documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class VideoCelebrityDetection { private static String startJobId = ""; public static void main(String[] args) { final String usage = """ Usage: <bucket> <video> <topicArn> <roleArn> Where: bucket - The name of the bucket in which the video is located (for example, (for example, myBucket).\s video - The name of video (for example, people.mp4).\s topicArn - The ARN of the Amazon Simple Notification Service (Amazon SNS) topic.\s roleArn - The ARN of the AWS Identity and Access Management (IAM) role to use.\s """; if (args.length != 4) { System.out.println(usage); System.exit(1); } String bucket = args[0]; String video = args[1]; String topicArn = args[2]; String roleArn = args[3]; Region region = Region.US_EAST_1; RekognitionClient rekClient = RekognitionClient.builder() .region(region) .build(); NotificationChannel channel = NotificationChannel.builder() .snsTopicArn(topicArn) .roleArn(roleArn) .build(); startCelebrityDetection(rekClient, channel, bucket, video); getCelebrityDetectionResults(rekClient); System.out.println("This example is done!"); rekClient.close(); } public static void startCelebrityDetection(RekognitionClient rekClient, NotificationChannel channel, String bucket, String video) { try { S3Object s3Obj = S3Object.builder() .bucket(bucket) .name(video) .build(); Video vidOb = Video.builder() .s3Object(s3Obj) .build(); StartCelebrityRecognitionRequest recognitionRequest = StartCelebrityRecognitionRequest.builder() .jobTag("Celebrities") .notificationChannel(channel) .video(vidOb) .build(); StartCelebrityRecognitionResponse startCelebrityRecognitionResult = rekClient .startCelebrityRecognition(recognitionRequest); startJobId = startCelebrityRecognitionResult.jobId(); } catch (RekognitionException e) { System.out.println(e.getMessage()); System.exit(1); } } public static void getCelebrityDetectionResults(RekognitionClient rekClient) { try { String paginationToken = null; GetCelebrityRecognitionResponse recognitionResponse = null; 
boolean finished = false; String status; int yy = 0; do { if (recognitionResponse != null) paginationToken = recognitionResponse.nextToken(); GetCelebrityRecognitionRequest recognitionRequest = GetCelebrityRecognitionRequest.builder() .jobId(startJobId) .nextToken(paginationToken) .sortBy(CelebrityRecognitionSortBy.TIMESTAMP) .maxResults(10) .build(); // Wait until the job succeeds while (!finished) { recognitionResponse = rekClient.getCelebrityRecognition(recognitionRequest); status = recognitionResponse.jobStatusAsString(); if (status.compareTo("SUCCEEDED") == 0) finished = true; else { System.out.println(yy + " status is: " + status); Thread.sleep(1000); } yy++; } finished = false; // Proceed when the job is done - otherwise VideoMetadata is null. VideoMetadata videoMetaData = recognitionResponse.videoMetadata(); System.out.println("Format: " + videoMetaData.format()); System.out.println("Codec: " + videoMetaData.codec()); System.out.println("Duration: " + videoMetaData.durationMillis()); System.out.println("FrameRate: " + videoMetaData.frameRate()); System.out.println("Job"); List<CelebrityRecognition> celebs = recognitionResponse.celebrities(); for (CelebrityRecognition celeb : celebs) { long seconds = celeb.timestamp() / 1000; System.out.print("Sec: " + seconds + " "); CelebrityDetail details = celeb.celebrity(); System.out.println("Name: " + details.name()); System.out.println("Id: " + details.id()); System.out.println(); } } while (recognitionResponse.nextToken() != null); } catch (RekognitionException | InterruptedException e) { System.out.println(e.getMessage()); System.exit(1); } } }

Detect labels in a video by using a label detection operation.

import com.fasterxml.jackson.core.JsonProcessingException; import com.fasterxml.jackson.databind.JsonMappingException; import com.fasterxml.jackson.databind.JsonNode; import com.fasterxml.jackson.databind.ObjectMapper; import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.rekognition.RekognitionClient; import software.amazon.awssdk.services.rekognition.model.StartLabelDetectionResponse; import software.amazon.awssdk.services.rekognition.model.NotificationChannel; import software.amazon.awssdk.services.rekognition.model.S3Object; import software.amazon.awssdk.services.rekognition.model.Video; import software.amazon.awssdk.services.rekognition.model.StartLabelDetectionRequest; import software.amazon.awssdk.services.rekognition.model.GetLabelDetectionRequest; import software.amazon.awssdk.services.rekognition.model.GetLabelDetectionResponse; import software.amazon.awssdk.services.rekognition.model.RekognitionException; import software.amazon.awssdk.services.rekognition.model.LabelDetectionSortBy; import software.amazon.awssdk.services.rekognition.model.VideoMetadata; import software.amazon.awssdk.services.rekognition.model.LabelDetection; import software.amazon.awssdk.services.rekognition.model.Label; import software.amazon.awssdk.services.rekognition.model.Instance; import software.amazon.awssdk.services.rekognition.model.Parent; import software.amazon.awssdk.services.sqs.SqsClient; import software.amazon.awssdk.services.sqs.model.Message; import software.amazon.awssdk.services.sqs.model.ReceiveMessageRequest; import software.amazon.awssdk.services.sqs.model.DeleteMessageRequest; import java.util.List; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class VideoDetect { private static String startJobId = ""; public static void main(String[] args) { final String usage = """ Usage: <bucket> <video> <queueUrl> <topicArn> <roleArn> Where: bucket - The name of the bucket in which the video is located (for example, (for example, myBucket).\s video - The name of the video (for example, people.mp4).\s queueUrl- The URL of a SQS queue.\s topicArn - The ARN of the Amazon Simple Notification Service (Amazon SNS) topic.\s roleArn - The ARN of the AWS Identity and Access Management (IAM) role to use.\s """; if (args.length != 5) { System.out.println(usage); System.exit(1); } String bucket = args[0]; String video = args[1]; String queueUrl = args[2]; String topicArn = args[3]; String roleArn = args[4]; Region region = Region.US_EAST_1; RekognitionClient rekClient = RekognitionClient.builder() .region(region) .build(); SqsClient sqs = SqsClient.builder() .region(Region.US_EAST_1) .build(); NotificationChannel channel = NotificationChannel.builder() .snsTopicArn(topicArn) .roleArn(roleArn) .build(); startLabels(rekClient, channel, bucket, video); getLabelJob(rekClient, sqs, queueUrl); System.out.println("This example is done!"); sqs.close(); rekClient.close(); } public static void startLabels(RekognitionClient rekClient, NotificationChannel channel, String bucket, String video) { try { S3Object s3Obj = S3Object.builder() .bucket(bucket) .name(video) .build(); Video vidOb = Video.builder() .s3Object(s3Obj) .build(); StartLabelDetectionRequest labelDetectionRequest = StartLabelDetectionRequest.builder() .jobTag("DetectingLabels") 
.notificationChannel(channel) .video(vidOb) .minConfidence(50F) .build(); StartLabelDetectionResponse labelDetectionResponse = rekClient.startLabelDetection(labelDetectionRequest); startJobId = labelDetectionResponse.jobId(); boolean ans = true; String status = ""; int yy = 0; while (ans) { GetLabelDetectionRequest detectionRequest = GetLabelDetectionRequest.builder() .jobId(startJobId) .maxResults(10) .build(); GetLabelDetectionResponse result = rekClient.getLabelDetection(detectionRequest); status = result.jobStatusAsString(); if (status.compareTo("SUCCEEDED") == 0) ans = false; else System.out.println(yy + " status is: " + status); Thread.sleep(1000); yy++; } System.out.println(startJobId + " status is: " + status); } catch (RekognitionException | InterruptedException e) { e.getMessage(); System.exit(1); } } public static void getLabelJob(RekognitionClient rekClient, SqsClient sqs, String queueUrl) { List<Message> messages; ReceiveMessageRequest messageRequest = ReceiveMessageRequest.builder() .queueUrl(queueUrl) .build(); try { messages = sqs.receiveMessage(messageRequest).messages(); if (!messages.isEmpty()) { for (Message message : messages) { String notification = message.body(); // Get the status and job id from the notification ObjectMapper mapper = new ObjectMapper(); JsonNode jsonMessageTree = mapper.readTree(notification); JsonNode messageBodyText = jsonMessageTree.get("Message"); ObjectMapper operationResultMapper = new ObjectMapper(); JsonNode jsonResultTree = operationResultMapper.readTree(messageBodyText.textValue()); JsonNode operationJobId = jsonResultTree.get("JobId"); JsonNode operationStatus = jsonResultTree.get("Status"); System.out.println("Job found in JSON is " + operationJobId); DeleteMessageRequest deleteMessageRequest = DeleteMessageRequest.builder() .queueUrl(queueUrl) .build(); String jobId = operationJobId.textValue(); if (startJobId.compareTo(jobId) == 0) { System.out.println("Job id: " + operationJobId); System.out.println("Status : " + operationStatus.toString()); if (operationStatus.asText().equals("SUCCEEDED")) getResultsLabels(rekClient); else System.out.println("Video analysis failed"); sqs.deleteMessage(deleteMessageRequest); } else { System.out.println("Job received was not job " + startJobId); sqs.deleteMessage(deleteMessageRequest); } } } } catch (RekognitionException e) { e.getMessage(); System.exit(1); } catch (JsonMappingException e) { e.printStackTrace(); } catch (JsonProcessingException e) { e.printStackTrace(); } } // Gets the job results by calling GetLabelDetection private static void getResultsLabels(RekognitionClient rekClient) { int maxResults = 10; String paginationToken = null; GetLabelDetectionResponse labelDetectionResult = null; try { do { if (labelDetectionResult != null) paginationToken = labelDetectionResult.nextToken(); GetLabelDetectionRequest labelDetectionRequest = GetLabelDetectionRequest.builder() .jobId(startJobId) .sortBy(LabelDetectionSortBy.TIMESTAMP) .maxResults(maxResults) .nextToken(paginationToken) .build(); labelDetectionResult = rekClient.getLabelDetection(labelDetectionRequest); VideoMetadata videoMetaData = labelDetectionResult.videoMetadata(); System.out.println("Format: " + videoMetaData.format()); System.out.println("Codec: " + videoMetaData.codec()); System.out.println("Duration: " + videoMetaData.durationMillis()); System.out.println("FrameRate: " + videoMetaData.frameRate()); List<LabelDetection> detectedLabels = labelDetectionResult.labels(); for (LabelDetection detectedLabel : detectedLabels) { long 
seconds = detectedLabel.timestamp(); Label label = detectedLabel.label(); System.out.println("Millisecond: " + seconds + " "); System.out.println(" Label:" + label.name()); System.out.println(" Confidence:" + detectedLabel.label().confidence().toString()); List<Instance> instances = label.instances(); System.out.println(" Instances of " + label.name()); if (instances.isEmpty()) { System.out.println(" " + "None"); } else { for (Instance instance : instances) { System.out.println(" Confidence: " + instance.confidence().toString()); System.out.println(" Bounding box: " + instance.boundingBox().toString()); } } System.out.println(" Parent labels for " + label.name() + ":"); List<Parent> parents = label.parents(); if (parents.isEmpty()) { System.out.println(" None"); } else { for (Parent parent : parents) { System.out.println(" " + parent.name()); } } System.out.println(); } } while (labelDetectionResult != null && labelDetectionResult.nextToken() != null); } catch (RekognitionException e) { e.getMessage(); System.exit(1); } } }

Detect faces in a video stored in an Amazon S3 bucket.


Detect inappropriate or offensive content in a video stored in an Amazon S3 bucket.

import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.rekognition.RekognitionClient; import software.amazon.awssdk.services.rekognition.model.NotificationChannel; import software.amazon.awssdk.services.rekognition.model.S3Object; import software.amazon.awssdk.services.rekognition.model.Video; import software.amazon.awssdk.services.rekognition.model.StartContentModerationRequest; import software.amazon.awssdk.services.rekognition.model.StartContentModerationResponse; import software.amazon.awssdk.services.rekognition.model.RekognitionException; import software.amazon.awssdk.services.rekognition.model.GetContentModerationResponse; import software.amazon.awssdk.services.rekognition.model.GetContentModerationRequest; import software.amazon.awssdk.services.rekognition.model.VideoMetadata; import software.amazon.awssdk.services.rekognition.model.ContentModerationDetection; import java.util.List; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class VideoDetectInappropriate { private static String startJobId = ""; public static void main(String[] args) { final String usage = """ Usage: <bucket> <video> <topicArn> <roleArn> Where: bucket - The name of the bucket in which the video is located (for example, (for example, myBucket).\s video - The name of video (for example, people.mp4).\s topicArn - The ARN of the Amazon Simple Notification Service (Amazon SNS) topic.\s roleArn - The ARN of the AWS Identity and Access Management (IAM) role to use.\s """; if (args.length != 4) { System.out.println(usage); System.exit(1); } String bucket = args[0]; String video = args[1]; String topicArn = args[2]; String roleArn = args[3]; Region region = Region.US_EAST_1; RekognitionClient rekClient = RekognitionClient.builder() .region(region) .build(); NotificationChannel channel = NotificationChannel.builder() .snsTopicArn(topicArn) .roleArn(roleArn) .build(); startModerationDetection(rekClient, channel, bucket, video); getModResults(rekClient); System.out.println("This example is done!"); rekClient.close(); } public static void startModerationDetection(RekognitionClient rekClient, NotificationChannel channel, String bucket, String video) { try { S3Object s3Obj = S3Object.builder() .bucket(bucket) .name(video) .build(); Video vidOb = Video.builder() .s3Object(s3Obj) .build(); StartContentModerationRequest modDetectionRequest = StartContentModerationRequest.builder() .jobTag("Moderation") .notificationChannel(channel) .video(vidOb) .build(); StartContentModerationResponse startModDetectionResult = rekClient .startContentModeration(modDetectionRequest); startJobId = startModDetectionResult.jobId(); } catch (RekognitionException e) { System.out.println(e.getMessage()); System.exit(1); } } public static void getModResults(RekognitionClient rekClient) { try { String paginationToken = null; GetContentModerationResponse modDetectionResponse = null; boolean finished = false; String status; int yy = 0; do { if (modDetectionResponse != null) paginationToken = modDetectionResponse.nextToken(); GetContentModerationRequest modRequest = GetContentModerationRequest.builder() .jobId(startJobId) .nextToken(paginationToken) .maxResults(10) .build(); // Wait until the job succeeds. 
while (!finished) { modDetectionResponse = rekClient.getContentModeration(modRequest); status = modDetectionResponse.jobStatusAsString(); if (status.compareTo("SUCCEEDED") == 0) finished = true; else { System.out.println(yy + " status is: " + status); Thread.sleep(1000); } yy++; } finished = false; // Proceed when the job is done - otherwise VideoMetadata is null. VideoMetadata videoMetaData = modDetectionResponse.videoMetadata(); System.out.println("Format: " + videoMetaData.format()); System.out.println("Codec: " + videoMetaData.codec()); System.out.println("Duration: " + videoMetaData.durationMillis()); System.out.println("FrameRate: " + videoMetaData.frameRate()); System.out.println("Job"); List<ContentModerationDetection> mods = modDetectionResponse.moderationLabels(); for (ContentModerationDetection mod : mods) { long seconds = mod.timestamp() / 1000; System.out.print("Mod label: " + seconds + " "); System.out.println(mod.moderationLabel().toString()); System.out.println(); } } while (modDetectionResponse != null && modDetectionResponse.nextToken() != null); } catch (RekognitionException | InterruptedException e) { System.out.println(e.getMessage()); System.exit(1); } } }

Detect technical cue segments and shot detection segments in a video stored in an Amazon S3 bucket.

import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.rekognition.RekognitionClient; import software.amazon.awssdk.services.rekognition.model.S3Object; import software.amazon.awssdk.services.rekognition.model.NotificationChannel; import software.amazon.awssdk.services.rekognition.model.Video; import software.amazon.awssdk.services.rekognition.model.StartShotDetectionFilter; import software.amazon.awssdk.services.rekognition.model.StartTechnicalCueDetectionFilter; import software.amazon.awssdk.services.rekognition.model.StartSegmentDetectionFilters; import software.amazon.awssdk.services.rekognition.model.StartSegmentDetectionRequest; import software.amazon.awssdk.services.rekognition.model.StartSegmentDetectionResponse; import software.amazon.awssdk.services.rekognition.model.RekognitionException; import software.amazon.awssdk.services.rekognition.model.GetSegmentDetectionResponse; import software.amazon.awssdk.services.rekognition.model.GetSegmentDetectionRequest; import software.amazon.awssdk.services.rekognition.model.VideoMetadata; import software.amazon.awssdk.services.rekognition.model.SegmentDetection; import software.amazon.awssdk.services.rekognition.model.TechnicalCueSegment; import software.amazon.awssdk.services.rekognition.model.ShotSegment; import software.amazon.awssdk.services.rekognition.model.SegmentType; import software.amazon.awssdk.services.sqs.SqsClient; import java.util.List; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class VideoDetectSegment { private static String startJobId = ""; public static void main(String[] args) { final String usage = """ Usage: <bucket> <video> <topicArn> <roleArn> Where: bucket - The name of the bucket in which the video is located (for example, (for example, myBucket).\s video - The name of video (for example, people.mp4).\s topicArn - The ARN of the Amazon Simple Notification Service (Amazon SNS) topic.\s roleArn - The ARN of the AWS Identity and Access Management (IAM) role to use.\s """; if (args.length != 4) { System.out.println(usage); System.exit(1); } String bucket = args[0]; String video = args[1]; String topicArn = args[2]; String roleArn = args[3]; Region region = Region.US_EAST_1; RekognitionClient rekClient = RekognitionClient.builder() .region(region) .build(); SqsClient sqs = SqsClient.builder() .region(Region.US_EAST_1) .build(); NotificationChannel channel = NotificationChannel.builder() .snsTopicArn(topicArn) .roleArn(roleArn) .build(); startSegmentDetection(rekClient, channel, bucket, video); getSegmentResults(rekClient); System.out.println("This example is done!"); sqs.close(); rekClient.close(); } public static void startSegmentDetection(RekognitionClient rekClient, NotificationChannel channel, String bucket, String video) { try { S3Object s3Obj = S3Object.builder() .bucket(bucket) .name(video) .build(); Video vidOb = Video.builder() .s3Object(s3Obj) .build(); StartShotDetectionFilter cueDetectionFilter = StartShotDetectionFilter.builder() .minSegmentConfidence(60F) .build(); StartTechnicalCueDetectionFilter technicalCueDetectionFilter = StartTechnicalCueDetectionFilter.builder() .minSegmentConfidence(60F) .build(); StartSegmentDetectionFilters filters = StartSegmentDetectionFilters.builder() .shotFilter(cueDetectionFilter) 
.technicalCueFilter(technicalCueDetectionFilter) .build(); StartSegmentDetectionRequest segDetectionRequest = StartSegmentDetectionRequest.builder() .jobTag("DetectingLabels") .notificationChannel(channel) .segmentTypes(SegmentType.TECHNICAL_CUE, SegmentType.SHOT) .video(vidOb) .filters(filters) .build(); StartSegmentDetectionResponse segDetectionResponse = rekClient.startSegmentDetection(segDetectionRequest); startJobId = segDetectionResponse.jobId(); } catch (RekognitionException e) { e.getMessage(); System.exit(1); } } public static void getSegmentResults(RekognitionClient rekClient) { try { String paginationToken = null; GetSegmentDetectionResponse segDetectionResponse = null; boolean finished = false; String status; int yy = 0; do { if (segDetectionResponse != null) paginationToken = segDetectionResponse.nextToken(); GetSegmentDetectionRequest recognitionRequest = GetSegmentDetectionRequest.builder() .jobId(startJobId) .nextToken(paginationToken) .maxResults(10) .build(); // Wait until the job succeeds. while (!finished) { segDetectionResponse = rekClient.getSegmentDetection(recognitionRequest); status = segDetectionResponse.jobStatusAsString(); if (status.compareTo("SUCCEEDED") == 0) finished = true; else { System.out.println(yy + " status is: " + status); Thread.sleep(1000); } yy++; } finished = false; // Proceed when the job is done - otherwise VideoMetadata is null. List<VideoMetadata> videoMetaData = segDetectionResponse.videoMetadata(); for (VideoMetadata metaData : videoMetaData) { System.out.println("Format: " + metaData.format()); System.out.println("Codec: " + metaData.codec()); System.out.println("Duration: " + metaData.durationMillis()); System.out.println("FrameRate: " + metaData.frameRate()); System.out.println("Job"); } List<SegmentDetection> detectedSegments = segDetectionResponse.segments(); for (SegmentDetection detectedSegment : detectedSegments) { String type = detectedSegment.type().toString(); if (type.contains(SegmentType.TECHNICAL_CUE.toString())) { System.out.println("Technical Cue"); TechnicalCueSegment segmentCue = detectedSegment.technicalCueSegment(); System.out.println("\tType: " + segmentCue.type()); System.out.println("\tConfidence: " + segmentCue.confidence().toString()); } if (type.contains(SegmentType.SHOT.toString())) { System.out.println("Shot"); ShotSegment segmentShot = detectedSegment.shotSegment(); System.out.println("\tIndex " + segmentShot.index()); System.out.println("\tConfidence: " + segmentShot.confidence().toString()); } long seconds = detectedSegment.durationMillis(); System.out.println("\tDuration : " + seconds + " milliseconds"); System.out.println("\tStart time code: " + detectedSegment.startTimecodeSMPTE()); System.out.println("\tEnd time code: " + detectedSegment.endTimecodeSMPTE()); System.out.println("\tDuration time code: " + detectedSegment.durationSMPTE()); System.out.println(); } } while (segDetectionResponse != null && segDetectionResponse.nextToken() != null); } catch (RekognitionException | InterruptedException e) { System.out.println(e.getMessage()); System.exit(1); } } }

Detect text in a video stored in an Amazon S3 bucket.

import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.rekognition.RekognitionClient; import software.amazon.awssdk.services.rekognition.model.S3Object; import software.amazon.awssdk.services.rekognition.model.NotificationChannel; import software.amazon.awssdk.services.rekognition.model.Video; import software.amazon.awssdk.services.rekognition.model.StartTextDetectionRequest; import software.amazon.awssdk.services.rekognition.model.StartTextDetectionResponse; import software.amazon.awssdk.services.rekognition.model.RekognitionException; import software.amazon.awssdk.services.rekognition.model.GetTextDetectionResponse; import software.amazon.awssdk.services.rekognition.model.GetTextDetectionRequest; import software.amazon.awssdk.services.rekognition.model.VideoMetadata; import software.amazon.awssdk.services.rekognition.model.TextDetectionResult; import java.util.List; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class VideoDetectText { private static String startJobId = ""; public static void main(String[] args) { final String usage = """ Usage: <bucket> <video> <topicArn> <roleArn> Where: bucket - The name of the bucket in which the video is located (for example, (for example, myBucket).\s video - The name of video (for example, people.mp4).\s topicArn - The ARN of the Amazon Simple Notification Service (Amazon SNS) topic.\s roleArn - The ARN of the AWS Identity and Access Management (IAM) role to use.\s """; if (args.length != 4) { System.out.println(usage); System.exit(1); } String bucket = args[0]; String video = args[1]; String topicArn = args[2]; String roleArn = args[3]; Region region = Region.US_EAST_1; RekognitionClient rekClient = RekognitionClient.builder() .region(region) .build(); NotificationChannel channel = NotificationChannel.builder() .snsTopicArn(topicArn) .roleArn(roleArn) .build(); startTextLabels(rekClient, channel, bucket, video); getTextResults(rekClient); System.out.println("This example is done!"); rekClient.close(); } public static void startTextLabels(RekognitionClient rekClient, NotificationChannel channel, String bucket, String video) { try { S3Object s3Obj = S3Object.builder() .bucket(bucket) .name(video) .build(); Video vidOb = Video.builder() .s3Object(s3Obj) .build(); StartTextDetectionRequest labelDetectionRequest = StartTextDetectionRequest.builder() .jobTag("DetectingLabels") .notificationChannel(channel) .video(vidOb) .build(); StartTextDetectionResponse labelDetectionResponse = rekClient.startTextDetection(labelDetectionRequest); startJobId = labelDetectionResponse.jobId(); } catch (RekognitionException e) { System.out.println(e.getMessage()); System.exit(1); } } public static void getTextResults(RekognitionClient rekClient) { try { String paginationToken = null; GetTextDetectionResponse textDetectionResponse = null; boolean finished = false; String status; int yy = 0; do { if (textDetectionResponse != null) paginationToken = textDetectionResponse.nextToken(); GetTextDetectionRequest recognitionRequest = GetTextDetectionRequest.builder() .jobId(startJobId) .nextToken(paginationToken) .maxResults(10) .build(); // Wait until the job succeeds. 
while (!finished) { textDetectionResponse = rekClient.getTextDetection(recognitionRequest); status = textDetectionResponse.jobStatusAsString(); if (status.compareTo("SUCCEEDED") == 0) finished = true; else { System.out.println(yy + " status is: " + status); Thread.sleep(1000); } yy++; } finished = false; // Proceed when the job is done - otherwise VideoMetadata is null. VideoMetadata videoMetaData = textDetectionResponse.videoMetadata(); System.out.println("Format: " + videoMetaData.format()); System.out.println("Codec: " + videoMetaData.codec()); System.out.println("Duration: " + videoMetaData.durationMillis()); System.out.println("FrameRate: " + videoMetaData.frameRate()); System.out.println("Job"); List<TextDetectionResult> labels = textDetectionResponse.textDetections(); for (TextDetectionResult detectedText : labels) { System.out.println("Confidence: " + detectedText.textDetection().confidence().toString()); System.out.println("Id : " + detectedText.textDetection().id()); System.out.println("Parent Id: " + detectedText.textDetection().parentId()); System.out.println("Type: " + detectedText.textDetection().type()); System.out.println("Text: " + detectedText.textDetection().detectedText()); System.out.println(); } } while (textDetectionResponse != null && textDetectionResponse.nextToken() != null); } catch (RekognitionException | InterruptedException e) { System.out.println(e.getMessage()); System.exit(1); } } }

Detect people in a video stored in an Amazon S3 bucket.

import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.rekognition.RekognitionClient; import software.amazon.awssdk.services.rekognition.model.S3Object; import software.amazon.awssdk.services.rekognition.model.NotificationChannel; import software.amazon.awssdk.services.rekognition.model.StartPersonTrackingRequest; import software.amazon.awssdk.services.rekognition.model.Video; import software.amazon.awssdk.services.rekognition.model.StartPersonTrackingResponse; import software.amazon.awssdk.services.rekognition.model.RekognitionException; import software.amazon.awssdk.services.rekognition.model.GetPersonTrackingResponse; import software.amazon.awssdk.services.rekognition.model.GetPersonTrackingRequest; import software.amazon.awssdk.services.rekognition.model.VideoMetadata; import software.amazon.awssdk.services.rekognition.model.PersonDetection; import java.util.List; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class VideoPersonDetection { private static String startJobId = ""; public static void main(String[] args) { final String usage = """ Usage: <bucket> <video> <topicArn> <roleArn> Where: bucket - The name of the bucket in which the video is located (for example, (for example, myBucket).\s video - The name of video (for example, people.mp4).\s topicArn - The ARN of the Amazon Simple Notification Service (Amazon SNS) topic.\s roleArn - The ARN of the AWS Identity and Access Management (IAM) role to use.\s """; if (args.length != 4) { System.out.println(usage); System.exit(1); } String bucket = args[0]; String video = args[1]; String topicArn = args[2]; String roleArn = args[3]; Region region = Region.US_EAST_1; RekognitionClient rekClient = RekognitionClient.builder() .region(region) .build(); NotificationChannel channel = NotificationChannel.builder() .snsTopicArn(topicArn) .roleArn(roleArn) .build(); startPersonLabels(rekClient, channel, bucket, video); getPersonDetectionResults(rekClient); System.out.println("This example is done!"); rekClient.close(); } public static void startPersonLabels(RekognitionClient rekClient, NotificationChannel channel, String bucket, String video) { try { S3Object s3Obj = S3Object.builder() .bucket(bucket) .name(video) .build(); Video vidOb = Video.builder() .s3Object(s3Obj) .build(); StartPersonTrackingRequest personTrackingRequest = StartPersonTrackingRequest.builder() .jobTag("DetectingLabels") .video(vidOb) .notificationChannel(channel) .build(); StartPersonTrackingResponse labelDetectionResponse = rekClient.startPersonTracking(personTrackingRequest); startJobId = labelDetectionResponse.jobId(); } catch (RekognitionException e) { System.out.println(e.getMessage()); System.exit(1); } } public static void getPersonDetectionResults(RekognitionClient rekClient) { try { String paginationToken = null; GetPersonTrackingResponse personTrackingResult = null; boolean finished = false; String status; int yy = 0; do { if (personTrackingResult != null) paginationToken = personTrackingResult.nextToken(); GetPersonTrackingRequest recognitionRequest = GetPersonTrackingRequest.builder() .jobId(startJobId) .nextToken(paginationToken) .maxResults(10) .build(); // Wait until the job succeeds while (!finished) { personTrackingResult = rekClient.getPersonTracking(recognitionRequest); status = 
personTrackingResult.jobStatusAsString(); if (status.compareTo("SUCCEEDED") == 0) finished = true; else { System.out.println(yy + " status is: " + status); Thread.sleep(1000); } yy++; } finished = false; // Proceed when the job is done - otherwise VideoMetadata is null. VideoMetadata videoMetaData = personTrackingResult.videoMetadata(); System.out.println("Format: " + videoMetaData.format()); System.out.println("Codec: " + videoMetaData.codec()); System.out.println("Duration: " + videoMetaData.durationMillis()); System.out.println("FrameRate: " + videoMetaData.frameRate()); System.out.println("Job"); List<PersonDetection> detectedPersons = personTrackingResult.persons(); for (PersonDetection detectedPerson : detectedPersons) { long seconds = detectedPerson.timestamp() / 1000; System.out.print("Sec: " + seconds + " "); System.out.println("Person Identifier: " + detectedPerson.person().index()); System.out.println(); } } while (personTrackingResult != null && personTrackingResult.nextToken() != null); } catch (RekognitionException | InterruptedException e) { System.out.println(e.getMessage()); System.exit(1); } } }

The following code example shows how to build an app that uses Amazon Rekognition to detect objects by category in images.

SDK for Java 2.x

Shows how to use the Amazon Rekognition Java API to create an app that identifies objects by category in images located in an Amazon Simple Storage Service (Amazon S3) bucket. The app sends the admin an email notification with the results, using Amazon Simple Email Service (Amazon SES). A minimal sketch of the core service calls follows the service list below.

For complete source code and instructions on how to set up and run, see the full example on GitHub.

Services used in this example
  • Amazon Rekognition

  • Amazon S3

  • Amazon SES
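
The complete application is available on GitHub; the following is only a minimal sketch of the two core service calls the scenario describes: detecting labels (object categories) in an image stored in Amazon S3 with Amazon Rekognition, and emailing a summary of the results with Amazon SES. The bucket, object key, and email addresses are hypothetical placeholders.

import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.rekognition.RekognitionClient;
import software.amazon.awssdk.services.rekognition.model.DetectLabelsRequest;
import software.amazon.awssdk.services.rekognition.model.DetectLabelsResponse;
import software.amazon.awssdk.services.rekognition.model.Image;
import software.amazon.awssdk.services.rekognition.model.Label;
import software.amazon.awssdk.services.rekognition.model.S3Object;
import software.amazon.awssdk.services.ses.SesClient;
import software.amazon.awssdk.services.ses.model.Body;
import software.amazon.awssdk.services.ses.model.Content;
import software.amazon.awssdk.services.ses.model.Destination;
import software.amazon.awssdk.services.ses.model.Message;
import software.amazon.awssdk.services.ses.model.SendEmailRequest;

public class PhotoAnalyzerSketch {
    public static void main(String[] args) {
        Region region = Region.US_EAST_1;
        RekognitionClient rekClient = RekognitionClient.builder().region(region).build();
        SesClient sesClient = SesClient.builder().region(region).build();

        // Hypothetical bucket, image key, and (SES-verified) email addresses.
        String bucket = "myBucket";
        String photoKey = "photo.jpg";
        String sender = "sender@example.com";
        String recipient = "admin@example.com";

        // Detect labels (object categories) in the image stored in Amazon S3.
        DetectLabelsRequest labelsRequest = DetectLabelsRequest.builder()
                .image(Image.builder()
                        .s3Object(S3Object.builder().bucket(bucket).name(photoKey).build())
                        .build())
                .maxLabels(10)
                .minConfidence(75F)
                .build();
        DetectLabelsResponse labelsResponse = rekClient.detectLabels(labelsRequest);

        // Build a plain-text report of the detected categories.
        StringBuilder report = new StringBuilder("Labels detected in " + photoKey + ":\n");
        for (Label label : labelsResponse.labels()) {
            report.append(label.name())
                  .append(" (").append(label.confidence()).append("%)\n");
        }

        // Email the report to the admin with Amazon SES.
        SendEmailRequest emailRequest = SendEmailRequest.builder()
                .source(sender)
                .destination(Destination.builder().toAddresses(recipient).build())
                .message(Message.builder()
                        .subject(Content.builder().data("Photo analysis results").build())
                        .body(Body.builder()
                                .text(Content.builder().data(report.toString()).build())
                                .build())
                        .build())
                .build();
        sesClient.sendEmail(emailRequest);

        sesClient.close();
        rekClient.close();
    }
}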

The following code example shows how to detect people and objects in a video with Amazon Rekognition.

SDK for Java 2.x

Shows how to use the Amazon Rekognition Java API to create an app that detects faces and objects in videos located in an Amazon Simple Storage Service (Amazon S3) bucket. The app sends the admin an email notification with the results, using Amazon Simple Email Service (Amazon SES). A minimal sketch of the asynchronous video-analysis calls follows the service list below.

For complete source code and instructions on how to set up and run, see the full example on GitHub.

Services used in this example
  • Amazon Rekognition

  • Amazon S3

  • Amazon SES
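
The complete application is available on GitHub; the sketch below only outlines the asynchronous pattern it relies on for video analysis: start a face-detection job on a video in Amazon S3, poll until the job finishes, and summarize the detected faces. The full app then emails the results with Amazon SES, as in the previous scenario. The bucket and video names are hypothetical placeholders.

import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.rekognition.RekognitionClient;
import software.amazon.awssdk.services.rekognition.model.FaceDetection;
import software.amazon.awssdk.services.rekognition.model.GetFaceDetectionRequest;
import software.amazon.awssdk.services.rekognition.model.GetFaceDetectionResponse;
import software.amazon.awssdk.services.rekognition.model.S3Object;
import software.amazon.awssdk.services.rekognition.model.StartFaceDetectionRequest;
import software.amazon.awssdk.services.rekognition.model.Video;

public class VideoFaceDetectionSketch {
    public static void main(String[] args) throws InterruptedException {
        RekognitionClient rekClient = RekognitionClient.builder()
                .region(Region.US_EAST_1)
                .build();

        // Hypothetical S3 location of the video to analyze.
        Video video = Video.builder()
                .s3Object(S3Object.builder().bucket("myBucket").name("people.mp4").build())
                .build();

        // Start the asynchronous face-detection job.
        String jobId = rekClient.startFaceDetection(StartFaceDetectionRequest.builder()
                .video(video)
                .build())
                .jobId();

        // Poll until the job is no longer in progress, then print a short summary.
        GetFaceDetectionResponse result;
        do {
            Thread.sleep(1000);
            result = rekClient.getFaceDetection(GetFaceDetectionRequest.builder()
                    .jobId(jobId)
                    .maxResults(10)
                    .build());
        } while (result.jobStatusAsString().equals("IN_PROGRESS"));

        for (FaceDetection face : result.faces()) {
            System.out.println("Face at " + face.timestamp() + " ms, confidence: "
                    + face.face().confidence());
        }
        rekClient.close();
    }
}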