
More AWS SDK examples are available in the AWS Doc SDK Examples GitHub repository.


Amazon Rekognition examples using SDK for Java 2.x

The following code examples show you how to perform actions and implement common scenarios by using the AWS SDK for Java 2.x with Amazon Rekognition.

Actions are code excerpts from larger programs and must be run in context. While actions show you how to call individual service functions, you can see actions in context in their related scenarios.

Scenarios are code examples that show you how to accomplish a specific task by calling multiple functions within the same service or combined with other AWS services.

Each example includes a link to the complete source code, where you can find instructions on how to set up and run the code in context.

Actions

The following code example shows how to use CompareFaces.

For more information, see Comparing faces in images.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.rekognition.RekognitionClient; import software.amazon.awssdk.services.rekognition.model.RekognitionException; import software.amazon.awssdk.services.rekognition.model.Image; import software.amazon.awssdk.services.rekognition.model.CompareFacesRequest; import software.amazon.awssdk.services.rekognition.model.CompareFacesResponse; import software.amazon.awssdk.services.rekognition.model.CompareFacesMatch; import software.amazon.awssdk.services.rekognition.model.ComparedFace; import software.amazon.awssdk.services.rekognition.model.BoundingBox; import software.amazon.awssdk.core.SdkBytes; import java.io.FileInputStream; import java.io.FileNotFoundException; import java.io.InputStream; import java.util.List; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class CompareFaces { public static void main(String[] args) { final String usage = """ Usage: <pathSource> <pathTarget> Where: pathSource - The path to the source image (for example, C:\\AWS\\pic1.png).\s pathTarget - The path to the target image (for example, C:\\AWS\\pic2.png).\s """; if (args.length != 2) { System.out.println(usage); System.exit(1); } Float similarityThreshold = 70F; String sourceImage = args[0]; String targetImage = args[1]; Region region = Region.US_EAST_1; RekognitionClient rekClient = RekognitionClient.builder() .region(region) .build(); compareTwoFaces(rekClient, similarityThreshold, sourceImage, targetImage); rekClient.close(); } public static void compareTwoFaces(RekognitionClient rekClient, Float similarityThreshold, String sourceImage, String targetImage) { try { InputStream sourceStream = new FileInputStream(sourceImage); InputStream tarStream = new FileInputStream(targetImage); SdkBytes sourceBytes = SdkBytes.fromInputStream(sourceStream); SdkBytes targetBytes = SdkBytes.fromInputStream(tarStream); // Create an Image object for the source image. Image souImage = Image.builder() .bytes(sourceBytes) .build(); Image tarImage = Image.builder() .bytes(targetBytes) .build(); CompareFacesRequest facesRequest = CompareFacesRequest.builder() .sourceImage(souImage) .targetImage(tarImage) .similarityThreshold(similarityThreshold) .build(); // Compare the two images. CompareFacesResponse compareFacesResult = rekClient.compareFaces(facesRequest); List<CompareFacesMatch> faceDetails = compareFacesResult.faceMatches(); for (CompareFacesMatch match : faceDetails) { ComparedFace face = match.face(); BoundingBox position = face.boundingBox(); System.out.println("Face at " + position.left().toString() + " " + position.top() + " matches with " + face.confidence().toString() + "% confidence."); } List<ComparedFace> uncompared = compareFacesResult.unmatchedFaces(); System.out.println("There was " + uncompared.size() + " face(s) that did not match"); System.out.println("Source image rotation: " + compareFacesResult.sourceImageOrientationCorrection()); System.out.println("target image rotation: " + compareFacesResult.targetImageOrientationCorrection()); } catch (RekognitionException | FileNotFoundException e) { System.out.println("Failed to load source image " + sourceImage); System.exit(1); } } }
  • For API details, see CompareFaces in AWS SDK for Java 2.x API Reference.

The following code example shows how to use CreateCollection.

For more information, see Creating a collection.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.rekognition.RekognitionClient; import software.amazon.awssdk.services.rekognition.model.CreateCollectionResponse; import software.amazon.awssdk.services.rekognition.model.CreateCollectionRequest; import software.amazon.awssdk.services.rekognition.model.RekognitionException; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class CreateCollection { public static void main(String[] args) { final String usage = """ Usage: <collectionName>\s Where: collectionName - The name of the collection.\s """; if (args.length != 1) { System.out.println(usage); System.exit(1); } String collectionId = args[0]; Region region = Region.US_EAST_1; RekognitionClient rekClient = RekognitionClient.builder() .region(region) .build(); System.out.println("Creating collection: " + collectionId); createMyCollection(rekClient, collectionId); rekClient.close(); } public static void createMyCollection(RekognitionClient rekClient, String collectionId) { try { CreateCollectionRequest collectionRequest = CreateCollectionRequest.builder() .collectionId(collectionId) .build(); CreateCollectionResponse collectionResponse = rekClient.createCollection(collectionRequest); System.out.println("CollectionArn: " + collectionResponse.collectionArn()); System.out.println("Status code: " + collectionResponse.statusCode().toString()); } catch (RekognitionException e) { System.out.println(e.getMessage()); System.exit(1); } } }
  • For API details, see CreateCollection in AWS SDK for Java 2.x API Reference.

The following code example shows how to use DeleteCollection.

For more information, see Deleting a collection.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.rekognition.RekognitionClient; import software.amazon.awssdk.services.rekognition.model.DeleteCollectionRequest; import software.amazon.awssdk.services.rekognition.model.DeleteCollectionResponse; import software.amazon.awssdk.services.rekognition.model.RekognitionException; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class DeleteCollection { public static void main(String[] args) { final String usage = """ Usage: <collectionId>\s Where: collectionId - The id of the collection to delete.\s """; if (args.length != 1) { System.out.println(usage); System.exit(1); } String collectionId = args[0]; Region region = Region.US_EAST_1; RekognitionClient rekClient = RekognitionClient.builder() .region(region) .build(); System.out.println("Deleting collection: " + collectionId); deleteMyCollection(rekClient, collectionId); rekClient.close(); } public static void deleteMyCollection(RekognitionClient rekClient, String collectionId) { try { DeleteCollectionRequest deleteCollectionRequest = DeleteCollectionRequest.builder() .collectionId(collectionId) .build(); DeleteCollectionResponse deleteCollectionResponse = rekClient.deleteCollection(deleteCollectionRequest); System.out.println(collectionId + ": " + deleteCollectionResponse.statusCode().toString()); } catch (RekognitionException e) { System.out.println(e.getMessage()); System.exit(1); } } }
  • For API details, see DeleteCollection in AWS SDK for Java 2.x API Reference.

The following code example shows how to use DeleteFaces.

For more information, see Deleting faces from a collection.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.rekognition.RekognitionClient;
import software.amazon.awssdk.services.rekognition.model.DeleteFacesRequest;
import software.amazon.awssdk.services.rekognition.model.RekognitionException;

/**
 * Before running this Java V2 code example, set up your development
 * environment, including your credentials.
 *
 * For more information, see the following documentation topic:
 *
 * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html
 */
public class DeleteFacesFromCollection {
    public static void main(String[] args) {
        final String usage = """

                Usage: <collectionId> <faceId>\s

                Where:
                    collectionId - The id of the collection from which faces are deleted.\s
                    faceId - The id of the face to delete.\s
                """;

        // The example requires both a collection id and a face id.
        if (args.length != 2) {
            System.out.println(usage);
            System.exit(1);
        }

        String collectionId = args[0];
        String faceId = args[1];
        Region region = Region.US_EAST_1;
        RekognitionClient rekClient = RekognitionClient.builder()
                .region(region)
                .build();

        System.out.println("Deleting face " + faceId + " from collection: " + collectionId);
        deleteFacesCollection(rekClient, collectionId, faceId);
        rekClient.close();
    }

    public static void deleteFacesCollection(RekognitionClient rekClient, String collectionId, String faceId) {
        try {
            DeleteFacesRequest deleteFacesRequest = DeleteFacesRequest.builder()
                    .collectionId(collectionId)
                    .faceIds(faceId)
                    .build();

            rekClient.deleteFaces(deleteFacesRequest);
            System.out.println("The face was deleted from the collection.");

        } catch (RekognitionException e) {
            System.out.println(e.getMessage());
            System.exit(1);
        }
    }
}
  • For API details, see DeleteFaces in AWS SDK for Java 2.x API Reference.

The following code example shows how to use DescribeCollection.

For more information, see Describing a collection.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.rekognition.RekognitionClient; import software.amazon.awssdk.services.rekognition.model.DescribeCollectionRequest; import software.amazon.awssdk.services.rekognition.model.DescribeCollectionResponse; import software.amazon.awssdk.services.rekognition.model.RekognitionException; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class DescribeCollection { public static void main(String[] args) { final String usage = """ Usage: <collectionName> Where: collectionName - The name of the Amazon Rekognition collection.\s """; if (args.length != 1) { System.out.println(usage); System.exit(1); } String collectionName = args[0]; Region region = Region.US_EAST_1; RekognitionClient rekClient = RekognitionClient.builder() .region(region) .build(); describeColl(rekClient, collectionName); rekClient.close(); } public static void describeColl(RekognitionClient rekClient, String collectionName) { try { DescribeCollectionRequest describeCollectionRequest = DescribeCollectionRequest.builder() .collectionId(collectionName) .build(); DescribeCollectionResponse describeCollectionResponse = rekClient .describeCollection(describeCollectionRequest); System.out.println("Collection Arn : " + describeCollectionResponse.collectionARN()); System.out.println("Created : " + describeCollectionResponse.creationTimestamp().toString()); } catch (RekognitionException e) { System.out.println(e.getMessage()); System.exit(1); } } }
  • For API details, see DescribeCollection in AWS SDK for Java 2.x API Reference.

The following code example shows how to use DetectFaces.

For more information, see Detecting faces in an image.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.rekognition.RekognitionClient; import software.amazon.awssdk.services.rekognition.model.RekognitionException; import software.amazon.awssdk.services.rekognition.model.DetectFacesRequest; import software.amazon.awssdk.services.rekognition.model.DetectFacesResponse; import software.amazon.awssdk.services.rekognition.model.Image; import software.amazon.awssdk.services.rekognition.model.Attribute; import software.amazon.awssdk.services.rekognition.model.FaceDetail; import software.amazon.awssdk.services.rekognition.model.AgeRange; import software.amazon.awssdk.core.SdkBytes; import java.io.FileInputStream; import java.io.FileNotFoundException; import java.io.InputStream; import java.util.List; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class DetectFaces { public static void main(String[] args) { final String usage = """ Usage: <sourceImage> Where: sourceImage - The path to the image (for example, C:\\AWS\\pic1.png).\s """; if (args.length != 1) { System.out.println(usage); System.exit(1); } String sourceImage = args[0]; Region region = Region.US_EAST_1; RekognitionClient rekClient = RekognitionClient.builder() .region(region) .build(); detectFacesinImage(rekClient, sourceImage); rekClient.close(); } public static void detectFacesinImage(RekognitionClient rekClient, String sourceImage) { try { InputStream sourceStream = new FileInputStream(sourceImage); SdkBytes sourceBytes = SdkBytes.fromInputStream(sourceStream); // Create an Image object for the source image. Image souImage = Image.builder() .bytes(sourceBytes) .build(); DetectFacesRequest facesRequest = DetectFacesRequest.builder() .attributes(Attribute.ALL) .image(souImage) .build(); DetectFacesResponse facesResponse = rekClient.detectFaces(facesRequest); List<FaceDetail> faceDetails = facesResponse.faceDetails(); for (FaceDetail face : faceDetails) { AgeRange ageRange = face.ageRange(); System.out.println("The detected face is estimated to be between " + ageRange.low().toString() + " and " + ageRange.high().toString() + " years old."); System.out.println("There is a smile : " + face.smile().value().toString()); } } catch (RekognitionException | FileNotFoundException e) { System.out.println(e.getMessage()); System.exit(1); } } }
  • For API details, see DetectFaces in AWS SDK for Java 2.x API Reference.

The following code example shows how to use DetectLabels.

For more information, see Detecting labels in an image.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import software.amazon.awssdk.core.SdkBytes; import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.rekognition.RekognitionClient; import software.amazon.awssdk.services.rekognition.model.Image; import software.amazon.awssdk.services.rekognition.model.DetectLabelsRequest; import software.amazon.awssdk.services.rekognition.model.DetectLabelsResponse; import software.amazon.awssdk.services.rekognition.model.Label; import software.amazon.awssdk.services.rekognition.model.RekognitionException; import java.io.FileInputStream; import java.io.FileNotFoundException; import java.io.InputStream; import java.util.List; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class DetectLabels { public static void main(String[] args) { final String usage = """ Usage: <sourceImage> Where: sourceImage - The path to the image (for example, C:\\AWS\\pic1.png).\s """; if (args.length != 1) { System.out.println(usage); System.exit(1); } String sourceImage = args[0]; Region region = Region.US_EAST_1; RekognitionClient rekClient = RekognitionClient.builder() .region(region) .build(); detectImageLabels(rekClient, sourceImage); rekClient.close(); } public static void detectImageLabels(RekognitionClient rekClient, String sourceImage) { try { InputStream sourceStream = new FileInputStream(sourceImage); SdkBytes sourceBytes = SdkBytes.fromInputStream(sourceStream); // Create an Image object for the source image. Image souImage = Image.builder() .bytes(sourceBytes) .build(); DetectLabelsRequest detectLabelsRequest = DetectLabelsRequest.builder() .image(souImage) .maxLabels(10) .build(); DetectLabelsResponse labelsResponse = rekClient.detectLabels(detectLabelsRequest); List<Label> labels = labelsResponse.labels(); System.out.println("Detected labels for the given photo"); for (Label label : labels) { System.out.println(label.name() + ": " + label.confidence().toString()); } } catch (RekognitionException | FileNotFoundException e) { System.out.println(e.getMessage()); System.exit(1); } } }
  • For API details, see DetectLabels in AWS SDK for Java 2.x API Reference.

The following code example shows how to use DetectModerationLabels.

For more information, see Detecting inappropriate images.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import software.amazon.awssdk.core.SdkBytes; import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.rekognition.RekognitionClient; import software.amazon.awssdk.services.rekognition.model.RekognitionException; import software.amazon.awssdk.services.rekognition.model.Image; import software.amazon.awssdk.services.rekognition.model.DetectModerationLabelsRequest; import software.amazon.awssdk.services.rekognition.model.DetectModerationLabelsResponse; import software.amazon.awssdk.services.rekognition.model.ModerationLabel; import java.io.FileInputStream; import java.io.FileNotFoundException; import java.io.InputStream; import java.util.List; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class DetectModerationLabels { public static void main(String[] args) { final String usage = """ Usage: <sourceImage> Where: sourceImage - The path to the image (for example, C:\\AWS\\pic1.png).\s """; if (args.length < 1) { System.out.println(usage); System.exit(1); } String sourceImage = args[0]; Region region = Region.US_EAST_1; RekognitionClient rekClient = RekognitionClient.builder() .region(region) .build(); detectModLabels(rekClient, sourceImage); rekClient.close(); } public static void detectModLabels(RekognitionClient rekClient, String sourceImage) { try { InputStream sourceStream = new FileInputStream(sourceImage); SdkBytes sourceBytes = SdkBytes.fromInputStream(sourceStream); Image souImage = Image.builder() .bytes(sourceBytes) .build(); DetectModerationLabelsRequest moderationLabelsRequest = DetectModerationLabelsRequest.builder() .image(souImage) .minConfidence(60F) .build(); DetectModerationLabelsResponse moderationLabelsResponse = rekClient .detectModerationLabels(moderationLabelsRequest); List<ModerationLabel> labels = moderationLabelsResponse.moderationLabels(); System.out.println("Detected labels for image"); for (ModerationLabel label : labels) { System.out.println("Label: " + label.name() + "\n Confidence: " + label.confidence().toString() + "%" + "\n Parent:" + label.parentName()); } } catch (RekognitionException | FileNotFoundException e) { e.printStackTrace(); System.exit(1); } } }

The following code example shows how to use DetectText.

For more information, see Detecting text in an image.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import software.amazon.awssdk.core.SdkBytes; import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.rekognition.RekognitionClient; import software.amazon.awssdk.services.rekognition.model.DetectTextRequest; import software.amazon.awssdk.services.rekognition.model.Image; import software.amazon.awssdk.services.rekognition.model.DetectTextResponse; import software.amazon.awssdk.services.rekognition.model.TextDetection; import software.amazon.awssdk.services.rekognition.model.RekognitionException; import java.io.FileInputStream; import java.io.FileNotFoundException; import java.io.InputStream; import java.util.List; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class DetectText { public static void main(String[] args) { final String usage = """ Usage: <sourceImage> Where: sourceImage - The path to the image that contains text (for example, C:\\AWS\\pic1.png).\s """; if (args.length != 1) { System.out.println(usage); System.exit(1); } String sourceImage = args[0]; Region region = Region.US_EAST_1; RekognitionClient rekClient = RekognitionClient.builder() .region(region) .build(); detectTextLabels(rekClient, sourceImage); rekClient.close(); } public static void detectTextLabels(RekognitionClient rekClient, String sourceImage) { try { InputStream sourceStream = new FileInputStream(sourceImage); SdkBytes sourceBytes = SdkBytes.fromInputStream(sourceStream); Image souImage = Image.builder() .bytes(sourceBytes) .build(); DetectTextRequest textRequest = DetectTextRequest.builder() .image(souImage) .build(); DetectTextResponse textResponse = rekClient.detectText(textRequest); List<TextDetection> textCollection = textResponse.textDetections(); System.out.println("Detected lines and words"); for (TextDetection text : textCollection) { System.out.println("Detected: " + text.detectedText()); System.out.println("Confidence: " + text.confidence().toString()); System.out.println("Id : " + text.id()); System.out.println("Parent Id: " + text.parentId()); System.out.println("Type: " + text.type()); System.out.println(); } } catch (RekognitionException | FileNotFoundException e) { System.out.println(e.getMessage()); System.exit(1); } } }
  • For API details, see DetectText in AWS SDK for Java 2.x API Reference.

The following code example shows how to use IndexFaces.

For more information, see Adding faces to a collection.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import software.amazon.awssdk.core.SdkBytes; import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.rekognition.RekognitionClient; import software.amazon.awssdk.services.rekognition.model.IndexFacesResponse; import software.amazon.awssdk.services.rekognition.model.IndexFacesRequest; import software.amazon.awssdk.services.rekognition.model.Image; import software.amazon.awssdk.services.rekognition.model.QualityFilter; import software.amazon.awssdk.services.rekognition.model.Attribute; import software.amazon.awssdk.services.rekognition.model.FaceRecord; import software.amazon.awssdk.services.rekognition.model.UnindexedFace; import software.amazon.awssdk.services.rekognition.model.RekognitionException; import software.amazon.awssdk.services.rekognition.model.Reason; import java.io.FileInputStream; import java.io.FileNotFoundException; import java.io.InputStream; import java.util.List; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class AddFacesToCollection { public static void main(String[] args) { final String usage = """ Usage: <collectionId> <sourceImage> Where: collectionName - The name of the collection. sourceImage - The path to the image (for example, C:\\AWS\\pic1.png).\s """; if (args.length != 2) { System.out.println(usage); System.exit(1); } String collectionId = args[0]; String sourceImage = args[1]; Region region = Region.US_EAST_1; RekognitionClient rekClient = RekognitionClient.builder() .region(region) .build(); addToCollection(rekClient, collectionId, sourceImage); rekClient.close(); } public static void addToCollection(RekognitionClient rekClient, String collectionId, String sourceImage) { try { InputStream sourceStream = new FileInputStream(sourceImage); SdkBytes sourceBytes = SdkBytes.fromInputStream(sourceStream); Image souImage = Image.builder() .bytes(sourceBytes) .build(); IndexFacesRequest facesRequest = IndexFacesRequest.builder() .collectionId(collectionId) .image(souImage) .maxFaces(1) .qualityFilter(QualityFilter.AUTO) .detectionAttributes(Attribute.DEFAULT) .build(); IndexFacesResponse facesResponse = rekClient.indexFaces(facesRequest); System.out.println("Results for the image"); System.out.println("\n Faces indexed:"); List<FaceRecord> faceRecords = facesResponse.faceRecords(); for (FaceRecord faceRecord : faceRecords) { System.out.println(" Face ID: " + faceRecord.face().faceId()); System.out.println(" Location:" + faceRecord.faceDetail().boundingBox().toString()); } List<UnindexedFace> unindexedFaces = facesResponse.unindexedFaces(); System.out.println("Faces not indexed:"); for (UnindexedFace unindexedFace : unindexedFaces) { System.out.println(" Location:" + unindexedFace.faceDetail().boundingBox().toString()); System.out.println(" Reasons:"); for (Reason reason : unindexedFace.reasons()) { System.out.println("Reason: " + reason); } } } catch (RekognitionException | FileNotFoundException e) { System.out.println(e.getMessage()); System.exit(1); } } }
  • For API details, see IndexFaces in AWS SDK for Java 2.x API Reference.

The following code example shows how to use ListCollections.

For more information, see Listing collections.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.rekognition.RekognitionClient; import software.amazon.awssdk.services.rekognition.model.ListCollectionsRequest; import software.amazon.awssdk.services.rekognition.model.ListCollectionsResponse; import software.amazon.awssdk.services.rekognition.model.RekognitionException; import java.util.List; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class ListCollections { public static void main(String[] args) { Region region = Region.US_EAST_1; RekognitionClient rekClient = RekognitionClient.builder() .region(region) .build(); System.out.println("Listing collections"); listAllCollections(rekClient); rekClient.close(); } public static void listAllCollections(RekognitionClient rekClient) { try { ListCollectionsRequest listCollectionsRequest = ListCollectionsRequest.builder() .maxResults(10) .build(); ListCollectionsResponse response = rekClient.listCollections(listCollectionsRequest); List<String> collectionIds = response.collectionIds(); for (String resultId : collectionIds) { System.out.println(resultId); } } catch (RekognitionException e) { System.out.println(e.getMessage()); System.exit(1); } } }
  • For API details, see ListCollections in AWS SDK for Java 2.x API Reference.

The following code example shows how to use ListFaces.

For more information, see Listing faces in a collection.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.rekognition.RekognitionClient; import software.amazon.awssdk.services.rekognition.model.Face; import software.amazon.awssdk.services.rekognition.model.ListFacesRequest; import software.amazon.awssdk.services.rekognition.model.ListFacesResponse; import software.amazon.awssdk.services.rekognition.model.RekognitionException; import java.util.List; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class ListFacesInCollection { public static void main(String[] args) { final String usage = """ Usage: <collectionId> Where: collectionId - The name of the collection.\s """; if (args.length < 1) { System.out.println(usage); System.exit(1); } String collectionId = args[0]; Region region = Region.US_EAST_1; RekognitionClient rekClient = RekognitionClient.builder() .region(region) .build(); System.out.println("Faces in collection " + collectionId); listFacesCollection(rekClient, collectionId); rekClient.close(); } public static void listFacesCollection(RekognitionClient rekClient, String collectionId) { try { ListFacesRequest facesRequest = ListFacesRequest.builder() .collectionId(collectionId) .maxResults(10) .build(); ListFacesResponse facesResponse = rekClient.listFaces(facesRequest); List<Face> faces = facesResponse.faces(); for (Face face : faces) { System.out.println("Confidence level there is a face: " + face.confidence()); System.out.println("The face Id value is " + face.faceId()); } } catch (RekognitionException e) { System.out.println(e.getMessage()); System.exit(1); } } }
  • For API details, see ListFaces in AWS SDK for Java 2.x API Reference.

The following code example shows how to use RecognizeCelebrities.

For more information, see Recognizing celebrities in an image.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.rekognition.RekognitionClient; import software.amazon.awssdk.core.SdkBytes; import java.io.FileInputStream; import java.io.FileNotFoundException; import java.io.InputStream; import java.util.List; import software.amazon.awssdk.services.rekognition.model.RecognizeCelebritiesRequest; import software.amazon.awssdk.services.rekognition.model.RecognizeCelebritiesResponse; import software.amazon.awssdk.services.rekognition.model.RekognitionException; import software.amazon.awssdk.services.rekognition.model.Image; import software.amazon.awssdk.services.rekognition.model.Celebrity; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class RecognizeCelebrities { public static void main(String[] args) { final String usage = """ Usage: <sourceImage> Where: sourceImage - The path to the image (for example, C:\\AWS\\pic1.png).\s """; if (args.length != 1) { System.out.println(usage); System.exit(1); } String sourceImage = args[0]; Region region = Region.US_EAST_1; RekognitionClient rekClient = RekognitionClient.builder() .region(region) .build(); System.out.println("Locating celebrities in " + sourceImage); recognizeAllCelebrities(rekClient, sourceImage); rekClient.close(); } public static void recognizeAllCelebrities(RekognitionClient rekClient, String sourceImage) { try { InputStream sourceStream = new FileInputStream(sourceImage); SdkBytes sourceBytes = SdkBytes.fromInputStream(sourceStream); Image souImage = Image.builder() .bytes(sourceBytes) .build(); RecognizeCelebritiesRequest request = RecognizeCelebritiesRequest.builder() .image(souImage) .build(); RecognizeCelebritiesResponse result = rekClient.recognizeCelebrities(request); List<Celebrity> celebs = result.celebrityFaces(); System.out.println(celebs.size() + " celebrity(s) were recognized.\n"); for (Celebrity celebrity : celebs) { System.out.println("Celebrity recognized: " + celebrity.name()); System.out.println("Celebrity ID: " + celebrity.id()); System.out.println("Further information (if available):"); for (String url : celebrity.urls()) { System.out.println(url); } System.out.println(); } System.out.println(result.unrecognizedFaces().size() + " face(s) were unrecognized."); } catch (RekognitionException | FileNotFoundException e) { System.out.println(e.getMessage()); System.exit(1); } } }
  • For API details, see RecognizeCelebrities in AWS SDK for Java 2.x API Reference.

The following code example shows how to use SearchFaces.

For more information, see Searching for a face (face ID).

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import software.amazon.awssdk.core.SdkBytes; import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.rekognition.RekognitionClient; import software.amazon.awssdk.services.rekognition.model.RekognitionException; import software.amazon.awssdk.services.rekognition.model.SearchFacesByImageRequest; import software.amazon.awssdk.services.rekognition.model.Image; import software.amazon.awssdk.services.rekognition.model.SearchFacesByImageResponse; import software.amazon.awssdk.services.rekognition.model.FaceMatch; import java.io.File; import java.io.FileInputStream; import java.io.FileNotFoundException; import java.io.InputStream; import java.util.List; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class SearchFaceMatchingImageCollection { public static void main(String[] args) { final String usage = """ Usage: <collectionId> <sourceImage> Where: collectionId - The id of the collection. \s sourceImage - The path to the image (for example, C:\\AWS\\pic1.png).\s """; if (args.length != 2) { System.out.println(usage); System.exit(1); } String collectionId = args[0]; String sourceImage = args[1]; Region region = Region.US_EAST_1; RekognitionClient rekClient = RekognitionClient.builder() .region(region) .build(); System.out.println("Searching for a face in a collections"); searchFaceInCollection(rekClient, collectionId, sourceImage); rekClient.close(); } public static void searchFaceInCollection(RekognitionClient rekClient, String collectionId, String sourceImage) { try { InputStream sourceStream = new FileInputStream(new File(sourceImage)); SdkBytes sourceBytes = SdkBytes.fromInputStream(sourceStream); Image souImage = Image.builder() .bytes(sourceBytes) .build(); SearchFacesByImageRequest facesByImageRequest = SearchFacesByImageRequest.builder() .image(souImage) .maxFaces(10) .faceMatchThreshold(70F) .collectionId(collectionId) .build(); SearchFacesByImageResponse imageResponse = rekClient.searchFacesByImage(facesByImageRequest); System.out.println("Faces matching in the collection"); List<FaceMatch> faceImageMatches = imageResponse.faceMatches(); for (FaceMatch face : faceImageMatches) { System.out.println("The similarity level is " + face.similarity()); System.out.println(); } } catch (RekognitionException | FileNotFoundException e) { System.out.println(e.getMessage()); System.exit(1); } } }
  • For API details, see SearchFaces in AWS SDK for Java 2.x API Reference.

The following code example shows how to use SearchFacesByImage.

For more information, see Searching for a face (image).

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.rekognition.RekognitionClient;
import software.amazon.awssdk.services.rekognition.model.SearchFacesRequest;
import software.amazon.awssdk.services.rekognition.model.SearchFacesResponse;
import software.amazon.awssdk.services.rekognition.model.FaceMatch;
import software.amazon.awssdk.services.rekognition.model.RekognitionException;
import java.util.List;

/**
 * Before running this Java V2 code example, set up your development
 * environment, including your credentials.
 *
 * For more information, see the following documentation topic:
 *
 * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html
 */
public class SearchFaceMatchingIdCollection {
    public static void main(String[] args) {
        final String usage = """

                Usage: <collectionId> <faceId>

                Where:
                    collectionId - The id of the collection.\s
                    faceId - The id of the face to search for.\s
                """;

        if (args.length != 2) {
            System.out.println(usage);
            System.exit(1);
        }

        String collectionId = args[0];
        String faceId = args[1];
        Region region = Region.US_EAST_1;
        RekognitionClient rekClient = RekognitionClient.builder()
                .region(region)
                .build();

        System.out.println("Searching for a face in a collection");
        searchFacebyId(rekClient, collectionId, faceId);
        rekClient.close();
    }

    public static void searchFacebyId(RekognitionClient rekClient, String collectionId, String faceId) {
        try {
            SearchFacesRequest searchFacesRequest = SearchFacesRequest.builder()
                    .collectionId(collectionId)
                    .faceId(faceId)
                    .faceMatchThreshold(70F)
                    .maxFaces(2)
                    .build();

            SearchFacesResponse imageResponse = rekClient.searchFaces(searchFacesRequest);
            System.out.println("Faces matching in the collection");
            List<FaceMatch> faceImageMatches = imageResponse.faceMatches();
            for (FaceMatch face : faceImageMatches) {
                System.out.println("The similarity level is " + face.similarity());
                System.out.println();
            }

        } catch (RekognitionException e) {
            System.out.println(e.getMessage());
            System.exit(1);
        }
    }
}
  • For API details, see SearchFacesByImage in AWS SDK for Java 2.x API Reference.

Scenarios

The following code example shows how to create a serverless application that lets users manage photos using labels.

SDK for Java 2.x

Shows how to develop a photo asset management application that detects labels in images by using Amazon Rekognition and stores them for later retrieval. A minimal sketch of the detect-and-store step follows the list of services below.

For complete source code and instructions on how to set up and run the example, see the full example on GitHub.

For a deep dive into the origin of this example, see the post on AWS Community.

Services used in this example
  • API Gateway

  • DynamoDB

  • Lambda

  • Amazon Rekognition

  • Amazon S3

  • Amazon SNS
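
As a rough illustration of the detect-and-store step mentioned above (not the sample's actual handler), the following sketch reads an image already uploaded to Amazon S3, asks Amazon Rekognition for labels, and writes one DynamoDB item per label. The bucket, object key, table name ("photo-labels"), and attribute names are placeholders introduced for this sketch.

import java.util.HashMap;
import java.util.Map;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.PutItemRequest;
import software.amazon.awssdk.services.rekognition.RekognitionClient;
import software.amazon.awssdk.services.rekognition.model.DetectLabelsRequest;
import software.amazon.awssdk.services.rekognition.model.Image;
import software.amazon.awssdk.services.rekognition.model.Label;
import software.amazon.awssdk.services.rekognition.model.S3Object;

/**
 * Hypothetical sketch: detect labels for a photo already stored in Amazon S3
 * and persist one DynamoDB item per label. The "photo-labels" table and the
 * "label"/"photo" attributes are illustrative, not part of the sample app.
 */
public class DetectAndStoreLabels {
    public static void storeLabels(RekognitionClient rekClient, DynamoDbClient ddbClient,
            String bucket, String key) {
        // Reference the photo by its S3 location instead of sending raw bytes.
        Image image = Image.builder()
                .s3Object(S3Object.builder().bucket(bucket).name(key).build())
                .build();

        DetectLabelsRequest request = DetectLabelsRequest.builder()
                .image(image)
                .maxLabels(10)
                .minConfidence(80F)
                .build();

        // Store each detected label so the photo can later be found by tag.
        for (Label label : rekClient.detectLabels(request).labels()) {
            Map<String, AttributeValue> item = new HashMap<>();
            item.put("label", AttributeValue.builder().s(label.name()).build());
            item.put("photo", AttributeValue.builder().s(key).build());

            ddbClient.putItem(PutItemRequest.builder()
                    .tableName("photo-labels")
                    .item(item)
                    .build());
        }
    }
}

In the full application, similar logic runs inside a Lambda function behind API Gateway, and the stored labels back the tag-based photo search.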

The following code example shows how to create an application that uses Amazon Rekognition to detect personal protective equipment (PPE) in images.

SDK for Java 2.x

Shows how to create an AWS Lambda function that detects images with personal protective equipment. A rough sketch of the underlying Rekognition call follows the list of services below.

For complete source code and instructions on how to set up and run the example, see the full example on GitHub.

Services used in this example
  • DynamoDB

  • Amazon Rekognition

  • Amazon S3

  • Amazon SES
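
The sketch below illustrates the Rekognition call this scenario relies on, outside of any Lambda handler: it checks an S3-hosted image for face and hand covers and prints what was detected. The bucket and key are placeholders, and the required equipment types and confidence threshold are example values, not the sample's actual configuration.

import software.amazon.awssdk.services.rekognition.RekognitionClient;
import software.amazon.awssdk.services.rekognition.model.DetectProtectiveEquipmentRequest;
import software.amazon.awssdk.services.rekognition.model.DetectProtectiveEquipmentResponse;
import software.amazon.awssdk.services.rekognition.model.EquipmentDetection;
import software.amazon.awssdk.services.rekognition.model.Image;
import software.amazon.awssdk.services.rekognition.model.ProtectiveEquipmentBodyPart;
import software.amazon.awssdk.services.rekognition.model.ProtectiveEquipmentPerson;
import software.amazon.awssdk.services.rekognition.model.ProtectiveEquipmentSummarizationAttributes;
import software.amazon.awssdk.services.rekognition.model.ProtectiveEquipmentType;
import software.amazon.awssdk.services.rekognition.model.S3Object;

/**
 * Hypothetical sketch: ask Rekognition which persons in an S3-hosted image
 * wear the required protective equipment. Bucket and key are placeholders.
 */
public class DetectPpeSketch {
    public static void detectPpe(RekognitionClient rekClient, String bucket, String key) {
        Image image = Image.builder()
                .s3Object(S3Object.builder().bucket(bucket).name(key).build())
                .build();

        // Summarize which persons are missing face or hand covers at >= 80% confidence.
        ProtectiveEquipmentSummarizationAttributes summary =
                ProtectiveEquipmentSummarizationAttributes.builder()
                        .minConfidence(80F)
                        .requiredEquipmentTypes(ProtectiveEquipmentType.FACE_COVER,
                                ProtectiveEquipmentType.HAND_COVER)
                        .build();

        DetectProtectiveEquipmentRequest request = DetectProtectiveEquipmentRequest.builder()
                .image(image)
                .summarizationAttributes(summary)
                .build();

        DetectProtectiveEquipmentResponse response = rekClient.detectProtectiveEquipment(request);
        for (ProtectiveEquipmentPerson person : response.persons()) {
            for (ProtectiveEquipmentBodyPart bodyPart : person.bodyParts()) {
                for (EquipmentDetection equipment : bodyPart.equipmentDetections()) {
                    System.out.println("Person " + person.id() + ": " + equipment.typeAsString()
                            + " on " + bodyPart.nameAsString()
                            + " (confidence " + equipment.confidence() + ")");
                }
            }
        }
    }
}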

The following code example shows how to:

  • Start Amazon Rekognition jobs to detect elements such as people, objects, and text in videos.

  • Poll job status until the jobs finish.

  • Get the list of elements detected by each job.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Get celebrity information from a video located in an Amazon S3 bucket.

import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.rekognition.RekognitionClient; import software.amazon.awssdk.services.rekognition.model.S3Object; import software.amazon.awssdk.services.rekognition.model.NotificationChannel; import software.amazon.awssdk.services.rekognition.model.Video; import software.amazon.awssdk.services.rekognition.model.StartCelebrityRecognitionResponse; import software.amazon.awssdk.services.rekognition.model.RekognitionException; import software.amazon.awssdk.services.rekognition.model.CelebrityRecognitionSortBy; import software.amazon.awssdk.services.rekognition.model.VideoMetadata; import software.amazon.awssdk.services.rekognition.model.CelebrityRecognition; import software.amazon.awssdk.services.rekognition.model.CelebrityDetail; import software.amazon.awssdk.services.rekognition.model.StartCelebrityRecognitionRequest; import software.amazon.awssdk.services.rekognition.model.GetCelebrityRecognitionRequest; import software.amazon.awssdk.services.rekognition.model.GetCelebrityRecognitionResponse; import java.util.List; /** * To run this code example, ensure that you perform the Prerequisites as stated * in the Amazon Rekognition Guide: * https://docs.aws.amazon.com/rekognition/latest/dg/video-analyzing-with-sqs.html * * Also, ensure that set up your development environment, including your * credentials. * * For information, see this documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class VideoCelebrityDetection { private static String startJobId = ""; public static void main(String[] args) { final String usage = """ Usage: <bucket> <video> <topicArn> <roleArn> Where: bucket - The name of the bucket in which the video is located (for example, (for example, myBucket).\s video - The name of video (for example, people.mp4).\s topicArn - The ARN of the Amazon Simple Notification Service (Amazon SNS) topic.\s roleArn - The ARN of the AWS Identity and Access Management (IAM) role to use.\s """; if (args.length != 4) { System.out.println(usage); System.exit(1); } String bucket = args[0]; String video = args[1]; String topicArn = args[2]; String roleArn = args[3]; Region region = Region.US_EAST_1; RekognitionClient rekClient = RekognitionClient.builder() .region(region) .build(); NotificationChannel channel = NotificationChannel.builder() .snsTopicArn(topicArn) .roleArn(roleArn) .build(); startCelebrityDetection(rekClient, channel, bucket, video); getCelebrityDetectionResults(rekClient); System.out.println("This example is done!"); rekClient.close(); } public static void startCelebrityDetection(RekognitionClient rekClient, NotificationChannel channel, String bucket, String video) { try { S3Object s3Obj = S3Object.builder() .bucket(bucket) .name(video) .build(); Video vidOb = Video.builder() .s3Object(s3Obj) .build(); StartCelebrityRecognitionRequest recognitionRequest = StartCelebrityRecognitionRequest.builder() .jobTag("Celebrities") .notificationChannel(channel) .video(vidOb) .build(); StartCelebrityRecognitionResponse startCelebrityRecognitionResult = rekClient .startCelebrityRecognition(recognitionRequest); startJobId = startCelebrityRecognitionResult.jobId(); } catch (RekognitionException e) { System.out.println(e.getMessage()); System.exit(1); } } public static void getCelebrityDetectionResults(RekognitionClient rekClient) { try { String paginationToken = null; GetCelebrityRecognitionResponse recognitionResponse = null; 
boolean finished = false; String status; int yy = 0; do { if (recognitionResponse != null) paginationToken = recognitionResponse.nextToken(); GetCelebrityRecognitionRequest recognitionRequest = GetCelebrityRecognitionRequest.builder() .jobId(startJobId) .nextToken(paginationToken) .sortBy(CelebrityRecognitionSortBy.TIMESTAMP) .maxResults(10) .build(); // Wait until the job succeeds while (!finished) { recognitionResponse = rekClient.getCelebrityRecognition(recognitionRequest); status = recognitionResponse.jobStatusAsString(); if (status.compareTo("SUCCEEDED") == 0) finished = true; else { System.out.println(yy + " status is: " + status); Thread.sleep(1000); } yy++; } finished = false; // Proceed when the job is done - otherwise VideoMetadata is null. VideoMetadata videoMetaData = recognitionResponse.videoMetadata(); System.out.println("Format: " + videoMetaData.format()); System.out.println("Codec: " + videoMetaData.codec()); System.out.println("Duration: " + videoMetaData.durationMillis()); System.out.println("FrameRate: " + videoMetaData.frameRate()); System.out.println("Job"); List<CelebrityRecognition> celebs = recognitionResponse.celebrities(); for (CelebrityRecognition celeb : celebs) { long seconds = celeb.timestamp() / 1000; System.out.print("Sec: " + seconds + " "); CelebrityDetail details = celeb.celebrity(); System.out.println("Name: " + details.name()); System.out.println("Id: " + details.id()); System.out.println(); } } while (recognitionResponse.nextToken() != null); } catch (RekognitionException | InterruptedException e) { System.out.println(e.getMessage()); System.exit(1); } } }

Detect labels in a video by using a label detection operation.

import com.fasterxml.jackson.core.JsonProcessingException; import com.fasterxml.jackson.databind.JsonMappingException; import com.fasterxml.jackson.databind.JsonNode; import com.fasterxml.jackson.databind.ObjectMapper; import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.rekognition.RekognitionClient; import software.amazon.awssdk.services.rekognition.model.StartLabelDetectionResponse; import software.amazon.awssdk.services.rekognition.model.NotificationChannel; import software.amazon.awssdk.services.rekognition.model.S3Object; import software.amazon.awssdk.services.rekognition.model.Video; import software.amazon.awssdk.services.rekognition.model.StartLabelDetectionRequest; import software.amazon.awssdk.services.rekognition.model.GetLabelDetectionRequest; import software.amazon.awssdk.services.rekognition.model.GetLabelDetectionResponse; import software.amazon.awssdk.services.rekognition.model.RekognitionException; import software.amazon.awssdk.services.rekognition.model.LabelDetectionSortBy; import software.amazon.awssdk.services.rekognition.model.VideoMetadata; import software.amazon.awssdk.services.rekognition.model.LabelDetection; import software.amazon.awssdk.services.rekognition.model.Label; import software.amazon.awssdk.services.rekognition.model.Instance; import software.amazon.awssdk.services.rekognition.model.Parent; import software.amazon.awssdk.services.sqs.SqsClient; import software.amazon.awssdk.services.sqs.model.Message; import software.amazon.awssdk.services.sqs.model.ReceiveMessageRequest; import software.amazon.awssdk.services.sqs.model.DeleteMessageRequest; import java.util.List; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class VideoDetect { private static String startJobId = ""; public static void main(String[] args) { final String usage = """ Usage: <bucket> <video> <queueUrl> <topicArn> <roleArn> Where: bucket - The name of the bucket in which the video is located (for example, (for example, myBucket).\s video - The name of the video (for example, people.mp4).\s queueUrl- The URL of a SQS queue.\s topicArn - The ARN of the Amazon Simple Notification Service (Amazon SNS) topic.\s roleArn - The ARN of the AWS Identity and Access Management (IAM) role to use.\s """; if (args.length != 5) { System.out.println(usage); System.exit(1); } String bucket = args[0]; String video = args[1]; String queueUrl = args[2]; String topicArn = args[3]; String roleArn = args[4]; Region region = Region.US_EAST_1; RekognitionClient rekClient = RekognitionClient.builder() .region(region) .build(); SqsClient sqs = SqsClient.builder() .region(Region.US_EAST_1) .build(); NotificationChannel channel = NotificationChannel.builder() .snsTopicArn(topicArn) .roleArn(roleArn) .build(); startLabels(rekClient, channel, bucket, video); getLabelJob(rekClient, sqs, queueUrl); System.out.println("This example is done!"); sqs.close(); rekClient.close(); } public static void startLabels(RekognitionClient rekClient, NotificationChannel channel, String bucket, String video) { try { S3Object s3Obj = S3Object.builder() .bucket(bucket) .name(video) .build(); Video vidOb = Video.builder() .s3Object(s3Obj) .build(); StartLabelDetectionRequest labelDetectionRequest = StartLabelDetectionRequest.builder() .jobTag("DetectingLabels") 
.notificationChannel(channel) .video(vidOb) .minConfidence(50F) .build(); StartLabelDetectionResponse labelDetectionResponse = rekClient.startLabelDetection(labelDetectionRequest); startJobId = labelDetectionResponse.jobId(); boolean ans = true; String status = ""; int yy = 0; while (ans) { GetLabelDetectionRequest detectionRequest = GetLabelDetectionRequest.builder() .jobId(startJobId) .maxResults(10) .build(); GetLabelDetectionResponse result = rekClient.getLabelDetection(detectionRequest); status = result.jobStatusAsString(); if (status.compareTo("SUCCEEDED") == 0) ans = false; else System.out.println(yy + " status is: " + status); Thread.sleep(1000); yy++; } System.out.println(startJobId + " status is: " + status); } catch (RekognitionException | InterruptedException e) { e.getMessage(); System.exit(1); } } public static void getLabelJob(RekognitionClient rekClient, SqsClient sqs, String queueUrl) { List<Message> messages; ReceiveMessageRequest messageRequest = ReceiveMessageRequest.builder() .queueUrl(queueUrl) .build(); try { messages = sqs.receiveMessage(messageRequest).messages(); if (!messages.isEmpty()) { for (Message message : messages) { String notification = message.body(); // Get the status and job id from the notification ObjectMapper mapper = new ObjectMapper(); JsonNode jsonMessageTree = mapper.readTree(notification); JsonNode messageBodyText = jsonMessageTree.get("Message"); ObjectMapper operationResultMapper = new ObjectMapper(); JsonNode jsonResultTree = operationResultMapper.readTree(messageBodyText.textValue()); JsonNode operationJobId = jsonResultTree.get("JobId"); JsonNode operationStatus = jsonResultTree.get("Status"); System.out.println("Job found in JSON is " + operationJobId); DeleteMessageRequest deleteMessageRequest = DeleteMessageRequest.builder() .queueUrl(queueUrl) .build(); String jobId = operationJobId.textValue(); if (startJobId.compareTo(jobId) == 0) { System.out.println("Job id: " + operationJobId); System.out.println("Status : " + operationStatus.toString()); if (operationStatus.asText().equals("SUCCEEDED")) getResultsLabels(rekClient); else System.out.println("Video analysis failed"); sqs.deleteMessage(deleteMessageRequest); } else { System.out.println("Job received was not job " + startJobId); sqs.deleteMessage(deleteMessageRequest); } } } } catch (RekognitionException e) { e.getMessage(); System.exit(1); } catch (JsonMappingException e) { e.printStackTrace(); } catch (JsonProcessingException e) { e.printStackTrace(); } } // Gets the job results by calling GetLabelDetection private static void getResultsLabels(RekognitionClient rekClient) { int maxResults = 10; String paginationToken = null; GetLabelDetectionResponse labelDetectionResult = null; try { do { if (labelDetectionResult != null) paginationToken = labelDetectionResult.nextToken(); GetLabelDetectionRequest labelDetectionRequest = GetLabelDetectionRequest.builder() .jobId(startJobId) .sortBy(LabelDetectionSortBy.TIMESTAMP) .maxResults(maxResults) .nextToken(paginationToken) .build(); labelDetectionResult = rekClient.getLabelDetection(labelDetectionRequest); VideoMetadata videoMetaData = labelDetectionResult.videoMetadata(); System.out.println("Format: " + videoMetaData.format()); System.out.println("Codec: " + videoMetaData.codec()); System.out.println("Duration: " + videoMetaData.durationMillis()); System.out.println("FrameRate: " + videoMetaData.frameRate()); List<LabelDetection> detectedLabels = labelDetectionResult.labels(); for (LabelDetection detectedLabel : detectedLabels) { long 
seconds = detectedLabel.timestamp(); Label label = detectedLabel.label(); System.out.println("Millisecond: " + seconds + " "); System.out.println(" Label:" + label.name()); System.out.println(" Confidence:" + detectedLabel.label().confidence().toString()); List<Instance> instances = label.instances(); System.out.println(" Instances of " + label.name()); if (instances.isEmpty()) { System.out.println(" " + "None"); } else { for (Instance instance : instances) { System.out.println(" Confidence: " + instance.confidence().toString()); System.out.println(" Bounding box: " + instance.boundingBox().toString()); } } System.out.println(" Parent labels for " + label.name() + ":"); List<Parent> parents = label.parents(); if (parents.isEmpty()) { System.out.println(" None"); } else { for (Parent parent : parents) { System.out.println(" " + parent.name()); } } System.out.println(); } } while (labelDetectionResult != null && labelDetectionResult.nextToken() != null); } catch (RekognitionException e) { e.getMessage(); System.exit(1); } } }

Detecte rostros en un vídeo almacenado en un bucket de Amazon S3.

import com.fasterxml.jackson.core.JsonProcessingException; import com.fasterxml.jackson.databind.JsonMappingException; import com.fasterxml.jackson.databind.JsonNode; import com.fasterxml.jackson.databind.ObjectMapper; import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.rekognition.RekognitionClient; import software.amazon.awssdk.services.rekognition.model.StartLabelDetectionResponse; import software.amazon.awssdk.services.rekognition.model.NotificationChannel; import software.amazon.awssdk.services.rekognition.model.S3Object; import software.amazon.awssdk.services.rekognition.model.Video; import software.amazon.awssdk.services.rekognition.model.StartLabelDetectionRequest; import software.amazon.awssdk.services.rekognition.model.GetLabelDetectionRequest; import software.amazon.awssdk.services.rekognition.model.GetLabelDetectionResponse; import software.amazon.awssdk.services.rekognition.model.RekognitionException; import software.amazon.awssdk.services.rekognition.model.LabelDetectionSortBy; import software.amazon.awssdk.services.rekognition.model.VideoMetadata; import software.amazon.awssdk.services.rekognition.model.LabelDetection; import software.amazon.awssdk.services.rekognition.model.Label; import software.amazon.awssdk.services.rekognition.model.Instance; import software.amazon.awssdk.services.rekognition.model.Parent; import software.amazon.awssdk.services.sqs.SqsClient; import software.amazon.awssdk.services.sqs.model.Message; import software.amazon.awssdk.services.sqs.model.ReceiveMessageRequest; import software.amazon.awssdk.services.sqs.model.DeleteMessageRequest; import java.util.List; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class VideoDetect { private static String startJobId = ""; public static void main(String[] args) { final String usage = """ Usage: <bucket> <video> <queueUrl> <topicArn> <roleArn> Where: bucket - The name of the bucket in which the video is located (for example, (for example, myBucket).\s video - The name of the video (for example, people.mp4).\s queueUrl- The URL of a SQS queue.\s topicArn - The ARN of the Amazon Simple Notification Service (Amazon SNS) topic.\s roleArn - The ARN of the AWS Identity and Access Management (IAM) role to use.\s """; if (args.length != 5) { System.out.println(usage); System.exit(1); } String bucket = args[0]; String video = args[1]; String queueUrl = args[2]; String topicArn = args[3]; String roleArn = args[4]; Region region = Region.US_EAST_1; RekognitionClient rekClient = RekognitionClient.builder() .region(region) .build(); SqsClient sqs = SqsClient.builder() .region(Region.US_EAST_1) .build(); NotificationChannel channel = NotificationChannel.builder() .snsTopicArn(topicArn) .roleArn(roleArn) .build(); startLabels(rekClient, channel, bucket, video); getLabelJob(rekClient, sqs, queueUrl); System.out.println("This example is done!"); sqs.close(); rekClient.close(); } public static void startLabels(RekognitionClient rekClient, NotificationChannel channel, String bucket, String video) { try { S3Object s3Obj = S3Object.builder() .bucket(bucket) .name(video) .build(); Video vidOb = Video.builder() .s3Object(s3Obj) .build(); StartLabelDetectionRequest labelDetectionRequest = StartLabelDetectionRequest.builder() .jobTag("DetectingLabels") 
.notificationChannel(channel) .video(vidOb) .minConfidence(50F) .build(); StartLabelDetectionResponse labelDetectionResponse = rekClient.startLabelDetection(labelDetectionRequest); startJobId = labelDetectionResponse.jobId(); boolean ans = true; String status = ""; int yy = 0; while (ans) { GetLabelDetectionRequest detectionRequest = GetLabelDetectionRequest.builder() .jobId(startJobId) .maxResults(10) .build(); GetLabelDetectionResponse result = rekClient.getLabelDetection(detectionRequest); status = result.jobStatusAsString(); if (status.compareTo("SUCCEEDED") == 0) ans = false; else System.out.println(yy + " status is: " + status); Thread.sleep(1000); yy++; } System.out.println(startJobId + " status is: " + status); } catch (RekognitionException | InterruptedException e) { e.getMessage(); System.exit(1); } } public static void getLabelJob(RekognitionClient rekClient, SqsClient sqs, String queueUrl) { List<Message> messages; ReceiveMessageRequest messageRequest = ReceiveMessageRequest.builder() .queueUrl(queueUrl) .build(); try { messages = sqs.receiveMessage(messageRequest).messages(); if (!messages.isEmpty()) { for (Message message : messages) { String notification = message.body(); // Get the status and job id from the notification ObjectMapper mapper = new ObjectMapper(); JsonNode jsonMessageTree = mapper.readTree(notification); JsonNode messageBodyText = jsonMessageTree.get("Message"); ObjectMapper operationResultMapper = new ObjectMapper(); JsonNode jsonResultTree = operationResultMapper.readTree(messageBodyText.textValue()); JsonNode operationJobId = jsonResultTree.get("JobId"); JsonNode operationStatus = jsonResultTree.get("Status"); System.out.println("Job found in JSON is " + operationJobId); DeleteMessageRequest deleteMessageRequest = DeleteMessageRequest.builder() .queueUrl(queueUrl) .build(); String jobId = operationJobId.textValue(); if (startJobId.compareTo(jobId) == 0) { System.out.println("Job id: " + operationJobId); System.out.println("Status : " + operationStatus.toString()); if (operationStatus.asText().equals("SUCCEEDED")) getResultsLabels(rekClient); else System.out.println("Video analysis failed"); sqs.deleteMessage(deleteMessageRequest); } else { System.out.println("Job received was not job " + startJobId); sqs.deleteMessage(deleteMessageRequest); } } } } catch (RekognitionException e) { e.getMessage(); System.exit(1); } catch (JsonMappingException e) { e.printStackTrace(); } catch (JsonProcessingException e) { e.printStackTrace(); } } // Gets the job results by calling GetLabelDetection private static void getResultsLabels(RekognitionClient rekClient) { int maxResults = 10; String paginationToken = null; GetLabelDetectionResponse labelDetectionResult = null; try { do { if (labelDetectionResult != null) paginationToken = labelDetectionResult.nextToken(); GetLabelDetectionRequest labelDetectionRequest = GetLabelDetectionRequest.builder() .jobId(startJobId) .sortBy(LabelDetectionSortBy.TIMESTAMP) .maxResults(maxResults) .nextToken(paginationToken) .build(); labelDetectionResult = rekClient.getLabelDetection(labelDetectionRequest); VideoMetadata videoMetaData = labelDetectionResult.videoMetadata(); System.out.println("Format: " + videoMetaData.format()); System.out.println("Codec: " + videoMetaData.codec()); System.out.println("Duration: " + videoMetaData.durationMillis()); System.out.println("FrameRate: " + videoMetaData.frameRate()); List<LabelDetection> detectedLabels = labelDetectionResult.labels(); for (LabelDetection detectedLabel : detectedLabels) { long 
seconds = detectedLabel.timestamp(); Label label = detectedLabel.label(); System.out.println("Millisecond: " + seconds + " "); System.out.println(" Label:" + label.name()); System.out.println(" Confidence:" + detectedLabel.label().confidence().toString()); List<Instance> instances = label.instances(); System.out.println(" Instances of " + label.name()); if (instances.isEmpty()) { System.out.println(" " + "None"); } else { for (Instance instance : instances) { System.out.println(" Confidence: " + instance.confidence().toString()); System.out.println(" Bounding box: " + instance.boundingBox().toString()); } } System.out.println(" Parent labels for " + label.name() + ":"); List<Parent> parents = label.parents(); if (parents.isEmpty()) { System.out.println(" None"); } else { for (Parent parent : parents) { System.out.println(" " + parent.name()); } } System.out.println(); } } while (labelDetectionResult != null && labelDetectionResult.nextToken() != null); } catch (RekognitionException e) { e.getMessage(); System.exit(1); } } }
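The video examples in this section take a queueUrl, topicArn, and roleArn as input but do not create those resources. Purely as an illustration, and not part of the official example, the following minimal sketch uses the SDK for Java 2.x Amazon SNS and Amazon SQS clients to create a topic and a queue and subscribe the queue to the topic. The resource names are placeholders, and the queue still needs an access policy (and the IAM role its permissions) that allows Amazon Rekognition Video to publish to the topic and the topic to deliver messages to the queue.

import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.sns.SnsClient;
import software.amazon.awssdk.services.sns.model.CreateTopicResponse;
import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.QueueAttributeName;

/**
 * Minimal sketch (not part of the official example): creates the Amazon SNS topic
 * and Amazon SQS queue that the video examples expect as their topicArn and
 * queueUrl arguments, and subscribes the queue to the topic.
 * The resource names used here are placeholders.
 */
public class CreateNotificationResources {
    public static void main(String[] args) {
        Region region = Region.US_EAST_1;
        try (SnsClient sns = SnsClient.builder().region(region).build();
             SqsClient sqs = SqsClient.builder().region(region).build()) {

            // Create the topic that Amazon Rekognition Video publishes job-completion messages to.
            CreateTopicResponse topic = sns.createTopic(b -> b.name("AmazonRekognitionVideoTopic"));
            String topicArn = topic.topicArn();

            // Create the queue that the examples poll for those messages.
            String queueUrl = sqs.createQueue(b -> b.queueName("AmazonRekognitionVideoQueue")).queueUrl();
            String queueArn = sqs.getQueueAttributes(b -> b
                            .queueUrl(queueUrl)
                            .attributeNames(QueueAttributeName.QUEUE_ARN))
                    .attributes()
                    .get(QueueAttributeName.QUEUE_ARN);

            // Subscribe the queue to the topic so completion notifications land in SQS.
            sns.subscribe(b -> b.topicArn(topicArn).protocol("sqs").endpoint(queueArn));

            System.out.println("topicArn: " + topicArn);
            System.out.println("queueUrl: " + queueUrl);
        }
    }
}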

Detect inappropriate or offensive content in a video stored in an Amazon S3 bucket.

import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.rekognition.RekognitionClient; import software.amazon.awssdk.services.rekognition.model.NotificationChannel; import software.amazon.awssdk.services.rekognition.model.S3Object; import software.amazon.awssdk.services.rekognition.model.Video; import software.amazon.awssdk.services.rekognition.model.StartContentModerationRequest; import software.amazon.awssdk.services.rekognition.model.StartContentModerationResponse; import software.amazon.awssdk.services.rekognition.model.RekognitionException; import software.amazon.awssdk.services.rekognition.model.GetContentModerationResponse; import software.amazon.awssdk.services.rekognition.model.GetContentModerationRequest; import software.amazon.awssdk.services.rekognition.model.VideoMetadata; import software.amazon.awssdk.services.rekognition.model.ContentModerationDetection; import java.util.List; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class VideoDetectInappropriate { private static String startJobId = ""; public static void main(String[] args) { final String usage = """ Usage: <bucket> <video> <topicArn> <roleArn> Where: bucket - The name of the bucket in which the video is located (for example, (for example, myBucket).\s video - The name of video (for example, people.mp4).\s topicArn - The ARN of the Amazon Simple Notification Service (Amazon SNS) topic.\s roleArn - The ARN of the AWS Identity and Access Management (IAM) role to use.\s """; if (args.length != 4) { System.out.println(usage); System.exit(1); } String bucket = args[0]; String video = args[1]; String topicArn = args[2]; String roleArn = args[3]; Region region = Region.US_EAST_1; RekognitionClient rekClient = RekognitionClient.builder() .region(region) .build(); NotificationChannel channel = NotificationChannel.builder() .snsTopicArn(topicArn) .roleArn(roleArn) .build(); startModerationDetection(rekClient, channel, bucket, video); getModResults(rekClient); System.out.println("This example is done!"); rekClient.close(); } public static void startModerationDetection(RekognitionClient rekClient, NotificationChannel channel, String bucket, String video) { try { S3Object s3Obj = S3Object.builder() .bucket(bucket) .name(video) .build(); Video vidOb = Video.builder() .s3Object(s3Obj) .build(); StartContentModerationRequest modDetectionRequest = StartContentModerationRequest.builder() .jobTag("Moderation") .notificationChannel(channel) .video(vidOb) .build(); StartContentModerationResponse startModDetectionResult = rekClient .startContentModeration(modDetectionRequest); startJobId = startModDetectionResult.jobId(); } catch (RekognitionException e) { System.out.println(e.getMessage()); System.exit(1); } } public static void getModResults(RekognitionClient rekClient) { try { String paginationToken = null; GetContentModerationResponse modDetectionResponse = null; boolean finished = false; String status; int yy = 0; do { if (modDetectionResponse != null) paginationToken = modDetectionResponse.nextToken(); GetContentModerationRequest modRequest = GetContentModerationRequest.builder() .jobId(startJobId) .nextToken(paginationToken) .maxResults(10) .build(); // Wait until the job succeeds. 
while (!finished) { modDetectionResponse = rekClient.getContentModeration(modRequest); status = modDetectionResponse.jobStatusAsString(); if (status.compareTo("SUCCEEDED") == 0) finished = true; else { System.out.println(yy + " status is: " + status); Thread.sleep(1000); } yy++; } finished = false; // Proceed when the job is done - otherwise VideoMetadata is null. VideoMetadata videoMetaData = modDetectionResponse.videoMetadata(); System.out.println("Format: " + videoMetaData.format()); System.out.println("Codec: " + videoMetaData.codec()); System.out.println("Duration: " + videoMetaData.durationMillis()); System.out.println("FrameRate: " + videoMetaData.frameRate()); System.out.println("Job"); List<ContentModerationDetection> mods = modDetectionResponse.moderationLabels(); for (ContentModerationDetection mod : mods) { long seconds = mod.timestamp() / 1000; System.out.print("Mod label: " + seconds + " "); System.out.println(mod.moderationLabel().toString()); System.out.println(); } } while (modDetectionResponse != null && modDetectionResponse.nextToken() != null); } catch (RekognitionException | InterruptedException e) { System.out.println(e.getMessage()); System.exit(1); } } }

Detect technical cue segments and shot detection segments in a video stored in an Amazon S3 bucket.

import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.rekognition.RekognitionClient; import software.amazon.awssdk.services.rekognition.model.S3Object; import software.amazon.awssdk.services.rekognition.model.NotificationChannel; import software.amazon.awssdk.services.rekognition.model.Video; import software.amazon.awssdk.services.rekognition.model.StartShotDetectionFilter; import software.amazon.awssdk.services.rekognition.model.StartTechnicalCueDetectionFilter; import software.amazon.awssdk.services.rekognition.model.StartSegmentDetectionFilters; import software.amazon.awssdk.services.rekognition.model.StartSegmentDetectionRequest; import software.amazon.awssdk.services.rekognition.model.StartSegmentDetectionResponse; import software.amazon.awssdk.services.rekognition.model.RekognitionException; import software.amazon.awssdk.services.rekognition.model.GetSegmentDetectionResponse; import software.amazon.awssdk.services.rekognition.model.GetSegmentDetectionRequest; import software.amazon.awssdk.services.rekognition.model.VideoMetadata; import software.amazon.awssdk.services.rekognition.model.SegmentDetection; import software.amazon.awssdk.services.rekognition.model.TechnicalCueSegment; import software.amazon.awssdk.services.rekognition.model.ShotSegment; import software.amazon.awssdk.services.rekognition.model.SegmentType; import software.amazon.awssdk.services.sqs.SqsClient; import java.util.List; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class VideoDetectSegment { private static String startJobId = ""; public static void main(String[] args) { final String usage = """ Usage: <bucket> <video> <topicArn> <roleArn> Where: bucket - The name of the bucket in which the video is located (for example, (for example, myBucket).\s video - The name of video (for example, people.mp4).\s topicArn - The ARN of the Amazon Simple Notification Service (Amazon SNS) topic.\s roleArn - The ARN of the AWS Identity and Access Management (IAM) role to use.\s """; if (args.length != 4) { System.out.println(usage); System.exit(1); } String bucket = args[0]; String video = args[1]; String topicArn = args[2]; String roleArn = args[3]; Region region = Region.US_EAST_1; RekognitionClient rekClient = RekognitionClient.builder() .region(region) .build(); SqsClient sqs = SqsClient.builder() .region(Region.US_EAST_1) .build(); NotificationChannel channel = NotificationChannel.builder() .snsTopicArn(topicArn) .roleArn(roleArn) .build(); startSegmentDetection(rekClient, channel, bucket, video); getSegmentResults(rekClient); System.out.println("This example is done!"); sqs.close(); rekClient.close(); } public static void startSegmentDetection(RekognitionClient rekClient, NotificationChannel channel, String bucket, String video) { try { S3Object s3Obj = S3Object.builder() .bucket(bucket) .name(video) .build(); Video vidOb = Video.builder() .s3Object(s3Obj) .build(); StartShotDetectionFilter cueDetectionFilter = StartShotDetectionFilter.builder() .minSegmentConfidence(60F) .build(); StartTechnicalCueDetectionFilter technicalCueDetectionFilter = StartTechnicalCueDetectionFilter.builder() .minSegmentConfidence(60F) .build(); StartSegmentDetectionFilters filters = StartSegmentDetectionFilters.builder() .shotFilter(cueDetectionFilter) 
.technicalCueFilter(technicalCueDetectionFilter) .build(); StartSegmentDetectionRequest segDetectionRequest = StartSegmentDetectionRequest.builder() .jobTag("DetectingLabels") .notificationChannel(channel) .segmentTypes(SegmentType.TECHNICAL_CUE, SegmentType.SHOT) .video(vidOb) .filters(filters) .build(); StartSegmentDetectionResponse segDetectionResponse = rekClient.startSegmentDetection(segDetectionRequest); startJobId = segDetectionResponse.jobId(); } catch (RekognitionException e) { e.getMessage(); System.exit(1); } } public static void getSegmentResults(RekognitionClient rekClient) { try { String paginationToken = null; GetSegmentDetectionResponse segDetectionResponse = null; boolean finished = false; String status; int yy = 0; do { if (segDetectionResponse != null) paginationToken = segDetectionResponse.nextToken(); GetSegmentDetectionRequest recognitionRequest = GetSegmentDetectionRequest.builder() .jobId(startJobId) .nextToken(paginationToken) .maxResults(10) .build(); // Wait until the job succeeds. while (!finished) { segDetectionResponse = rekClient.getSegmentDetection(recognitionRequest); status = segDetectionResponse.jobStatusAsString(); if (status.compareTo("SUCCEEDED") == 0) finished = true; else { System.out.println(yy + " status is: " + status); Thread.sleep(1000); } yy++; } finished = false; // Proceed when the job is done - otherwise VideoMetadata is null. List<VideoMetadata> videoMetaData = segDetectionResponse.videoMetadata(); for (VideoMetadata metaData : videoMetaData) { System.out.println("Format: " + metaData.format()); System.out.println("Codec: " + metaData.codec()); System.out.println("Duration: " + metaData.durationMillis()); System.out.println("FrameRate: " + metaData.frameRate()); System.out.println("Job"); } List<SegmentDetection> detectedSegments = segDetectionResponse.segments(); for (SegmentDetection detectedSegment : detectedSegments) { String type = detectedSegment.type().toString(); if (type.contains(SegmentType.TECHNICAL_CUE.toString())) { System.out.println("Technical Cue"); TechnicalCueSegment segmentCue = detectedSegment.technicalCueSegment(); System.out.println("\tType: " + segmentCue.type()); System.out.println("\tConfidence: " + segmentCue.confidence().toString()); } if (type.contains(SegmentType.SHOT.toString())) { System.out.println("Shot"); ShotSegment segmentShot = detectedSegment.shotSegment(); System.out.println("\tIndex " + segmentShot.index()); System.out.println("\tConfidence: " + segmentShot.confidence().toString()); } long seconds = detectedSegment.durationMillis(); System.out.println("\tDuration : " + seconds + " milliseconds"); System.out.println("\tStart time code: " + detectedSegment.startTimecodeSMPTE()); System.out.println("\tEnd time code: " + detectedSegment.endTimecodeSMPTE()); System.out.println("\tDuration time code: " + detectedSegment.durationSMPTE()); System.out.println(); } } while (segDetectionResponse != null && segDetectionResponse.nextToken() != null); } catch (RekognitionException | InterruptedException e) { System.out.println(e.getMessage()); System.exit(1); } } }

Detect text in a video stored in an Amazon S3 bucket.

import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.rekognition.RekognitionClient; import software.amazon.awssdk.services.rekognition.model.S3Object; import software.amazon.awssdk.services.rekognition.model.NotificationChannel; import software.amazon.awssdk.services.rekognition.model.Video; import software.amazon.awssdk.services.rekognition.model.StartTextDetectionRequest; import software.amazon.awssdk.services.rekognition.model.StartTextDetectionResponse; import software.amazon.awssdk.services.rekognition.model.RekognitionException; import software.amazon.awssdk.services.rekognition.model.GetTextDetectionResponse; import software.amazon.awssdk.services.rekognition.model.GetTextDetectionRequest; import software.amazon.awssdk.services.rekognition.model.VideoMetadata; import software.amazon.awssdk.services.rekognition.model.TextDetectionResult; import java.util.List; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class VideoDetectText { private static String startJobId = ""; public static void main(String[] args) { final String usage = """ Usage: <bucket> <video> <topicArn> <roleArn> Where: bucket - The name of the bucket in which the video is located (for example, (for example, myBucket).\s video - The name of video (for example, people.mp4).\s topicArn - The ARN of the Amazon Simple Notification Service (Amazon SNS) topic.\s roleArn - The ARN of the AWS Identity and Access Management (IAM) role to use.\s """; if (args.length != 4) { System.out.println(usage); System.exit(1); } String bucket = args[0]; String video = args[1]; String topicArn = args[2]; String roleArn = args[3]; Region region = Region.US_EAST_1; RekognitionClient rekClient = RekognitionClient.builder() .region(region) .build(); NotificationChannel channel = NotificationChannel.builder() .snsTopicArn(topicArn) .roleArn(roleArn) .build(); startTextLabels(rekClient, channel, bucket, video); getTextResults(rekClient); System.out.println("This example is done!"); rekClient.close(); } public static void startTextLabels(RekognitionClient rekClient, NotificationChannel channel, String bucket, String video) { try { S3Object s3Obj = S3Object.builder() .bucket(bucket) .name(video) .build(); Video vidOb = Video.builder() .s3Object(s3Obj) .build(); StartTextDetectionRequest labelDetectionRequest = StartTextDetectionRequest.builder() .jobTag("DetectingLabels") .notificationChannel(channel) .video(vidOb) .build(); StartTextDetectionResponse labelDetectionResponse = rekClient.startTextDetection(labelDetectionRequest); startJobId = labelDetectionResponse.jobId(); } catch (RekognitionException e) { System.out.println(e.getMessage()); System.exit(1); } } public static void getTextResults(RekognitionClient rekClient) { try { String paginationToken = null; GetTextDetectionResponse textDetectionResponse = null; boolean finished = false; String status; int yy = 0; do { if (textDetectionResponse != null) paginationToken = textDetectionResponse.nextToken(); GetTextDetectionRequest recognitionRequest = GetTextDetectionRequest.builder() .jobId(startJobId) .nextToken(paginationToken) .maxResults(10) .build(); // Wait until the job succeeds. 
while (!finished) { textDetectionResponse = rekClient.getTextDetection(recognitionRequest); status = textDetectionResponse.jobStatusAsString(); if (status.compareTo("SUCCEEDED") == 0) finished = true; else { System.out.println(yy + " status is: " + status); Thread.sleep(1000); } yy++; } finished = false; // Proceed when the job is done - otherwise VideoMetadata is null. VideoMetadata videoMetaData = textDetectionResponse.videoMetadata(); System.out.println("Format: " + videoMetaData.format()); System.out.println("Codec: " + videoMetaData.codec()); System.out.println("Duration: " + videoMetaData.durationMillis()); System.out.println("FrameRate: " + videoMetaData.frameRate()); System.out.println("Job"); List<TextDetectionResult> labels = textDetectionResponse.textDetections(); for (TextDetectionResult detectedText : labels) { System.out.println("Confidence: " + detectedText.textDetection().confidence().toString()); System.out.println("Id : " + detectedText.textDetection().id()); System.out.println("Parent Id: " + detectedText.textDetection().parentId()); System.out.println("Type: " + detectedText.textDetection().type()); System.out.println("Text: " + detectedText.textDetection().detectedText()); System.out.println(); } } while (textDetectionResponse != null && textDetectionResponse.nextToken() != null); } catch (RekognitionException | InterruptedException e) { System.out.println(e.getMessage()); System.exit(1); } } }

Detect people in a video stored in an Amazon S3 bucket.

import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.rekognition.RekognitionClient; import software.amazon.awssdk.services.rekognition.model.S3Object; import software.amazon.awssdk.services.rekognition.model.NotificationChannel; import software.amazon.awssdk.services.rekognition.model.StartPersonTrackingRequest; import software.amazon.awssdk.services.rekognition.model.Video; import software.amazon.awssdk.services.rekognition.model.StartPersonTrackingResponse; import software.amazon.awssdk.services.rekognition.model.RekognitionException; import software.amazon.awssdk.services.rekognition.model.GetPersonTrackingResponse; import software.amazon.awssdk.services.rekognition.model.GetPersonTrackingRequest; import software.amazon.awssdk.services.rekognition.model.VideoMetadata; import software.amazon.awssdk.services.rekognition.model.PersonDetection; import java.util.List; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class VideoPersonDetection { private static String startJobId = ""; public static void main(String[] args) { final String usage = """ Usage: <bucket> <video> <topicArn> <roleArn> Where: bucket - The name of the bucket in which the video is located (for example, (for example, myBucket).\s video - The name of video (for example, people.mp4).\s topicArn - The ARN of the Amazon Simple Notification Service (Amazon SNS) topic.\s roleArn - The ARN of the AWS Identity and Access Management (IAM) role to use.\s """; if (args.length != 4) { System.out.println(usage); System.exit(1); } String bucket = args[0]; String video = args[1]; String topicArn = args[2]; String roleArn = args[3]; Region region = Region.US_EAST_1; RekognitionClient rekClient = RekognitionClient.builder() .region(region) .build(); NotificationChannel channel = NotificationChannel.builder() .snsTopicArn(topicArn) .roleArn(roleArn) .build(); startPersonLabels(rekClient, channel, bucket, video); getPersonDetectionResults(rekClient); System.out.println("This example is done!"); rekClient.close(); } public static void startPersonLabels(RekognitionClient rekClient, NotificationChannel channel, String bucket, String video) { try { S3Object s3Obj = S3Object.builder() .bucket(bucket) .name(video) .build(); Video vidOb = Video.builder() .s3Object(s3Obj) .build(); StartPersonTrackingRequest personTrackingRequest = StartPersonTrackingRequest.builder() .jobTag("DetectingLabels") .video(vidOb) .notificationChannel(channel) .build(); StartPersonTrackingResponse labelDetectionResponse = rekClient.startPersonTracking(personTrackingRequest); startJobId = labelDetectionResponse.jobId(); } catch (RekognitionException e) { System.out.println(e.getMessage()); System.exit(1); } } public static void getPersonDetectionResults(RekognitionClient rekClient) { try { String paginationToken = null; GetPersonTrackingResponse personTrackingResult = null; boolean finished = false; String status; int yy = 0; do { if (personTrackingResult != null) paginationToken = personTrackingResult.nextToken(); GetPersonTrackingRequest recognitionRequest = GetPersonTrackingRequest.builder() .jobId(startJobId) .nextToken(paginationToken) .maxResults(10) .build(); // Wait until the job succeeds while (!finished) { personTrackingResult = rekClient.getPersonTracking(recognitionRequest); status = 
personTrackingResult.jobStatusAsString(); if (status.compareTo("SUCCEEDED") == 0) finished = true; else { System.out.println(yy + " status is: " + status); Thread.sleep(1000); } yy++; } finished = false; // Proceed when the job is done - otherwise VideoMetadata is null. VideoMetadata videoMetaData = personTrackingResult.videoMetadata(); System.out.println("Format: " + videoMetaData.format()); System.out.println("Codec: " + videoMetaData.codec()); System.out.println("Duration: " + videoMetaData.durationMillis()); System.out.println("FrameRate: " + videoMetaData.frameRate()); System.out.println("Job"); List<PersonDetection> detectedPersons = personTrackingResult.persons(); for (PersonDetection detectedPerson : detectedPersons) { long seconds = detectedPerson.timestamp() / 1000; System.out.print("Sec: " + seconds + " "); System.out.println("Person Identifier: " + detectedPerson.person().index()); System.out.println(); } } while (personTrackingResult != null && personTrackingResult.nextToken() != null); } catch (RekognitionException | InterruptedException e) { System.out.println(e.getMessage()); System.exit(1); } } }

The following code example shows how to build an application that uses Amazon Rekognition to detect objects by category in images.

SDK for Java 2.x

Shows how to use the Amazon Rekognition Java API to build an application that uses Amazon Rekognition to identify objects by category in images located in an Amazon Simple Storage Service (Amazon S3) bucket. The application sends the administrator an email notification with the results by using Amazon Simple Email Service (Amazon SES).

For complete source code and instructions on how to set up and run this example, see the full example on GitHub. A minimal sketch of the core calls appears after the list of services below.

Services used in this example
  • Amazon Rekognition

  • Amazon S3

  • Amazon SES
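The full application on GitHub includes a web front end and error handling; the following sketch, which is not the official example, shows only the two calls at its core with the SDK for Java 2.x: detecting labels in an image stored in Amazon S3 and emailing the result with Amazon SES. The bucket name, object key, and email addresses are placeholders, and the SES sender address must be verified in your account.

import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.rekognition.RekognitionClient;
import software.amazon.awssdk.services.rekognition.model.DetectLabelsResponse;
import software.amazon.awssdk.services.rekognition.model.Image;
import software.amazon.awssdk.services.rekognition.model.Label;
import software.amazon.awssdk.services.rekognition.model.S3Object;
import software.amazon.awssdk.services.ses.SesClient;
import java.util.stream.Collectors;

/**
 * Minimal sketch of the core of the photo analyzer application: detect labels
 * (object categories) in an image stored in Amazon S3, then email the result
 * with Amazon SES. Bucket, key, and addresses are placeholders.
 */
public class PhotoAnalyzerSketch {
    public static void main(String[] args) {
        Region region = Region.US_EAST_1;
        try (RekognitionClient rekClient = RekognitionClient.builder().region(region).build();
             SesClient sesClient = SesClient.builder().region(region).build()) {

            // Detect labels in an image located in an S3 bucket.
            Image image = Image.builder()
                    .s3Object(S3Object.builder().bucket("myBucket").name("photo.jpg").build())
                    .build();
            DetectLabelsResponse response = rekClient.detectLabels(r -> r
                    .image(image)
                    .maxLabels(10)
                    .minConfidence(75F));

            String report = response.labels().stream()
                    .map(Label::name)
                    .collect(Collectors.joining(", "));

            // Email the report to the administrator with Amazon SES.
            sesClient.sendEmail(r -> r
                    .source("sender@example.com")
                    .destination(d -> d.toAddresses("admin@example.com"))
                    .message(m -> m
                            .subject(c -> c.data("Photo analysis results"))
                            .body(b -> b.text(c -> c.data("Labels found: " + report)))));
        }
    }
}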

The following code example shows how to detect people and objects in a video with Amazon Rekognition.

SDK for Java 2.x

Shows how to use the Amazon Rekognition Java API to build an application that detects faces and objects in videos located in an Amazon Simple Storage Service (Amazon S3) bucket. The application sends the administrator an email notification with the results by using Amazon Simple Email Service (Amazon SES).

For complete source code and instructions on how to set up and run this example, see the full example on GitHub. A minimal sketch of the core flow appears after the list of services below.

Services used in this example
  • Amazon Rekognition

  • Amazon S3

  • Amazon SES
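As with the previous scenario, the complete application is on GitHub; the following sketch, which is not the official example, only illustrates the core flow with the SDK for Java 2.x: start an asynchronous face detection job on a video stored in Amazon S3, poll until the job finishes, and email a summary with Amazon SES. The bucket name, video key, and email addresses are placeholders, and the full application also detects objects and formats the results before sending them.

import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.rekognition.RekognitionClient;
import software.amazon.awssdk.services.rekognition.model.FaceDetection;
import software.amazon.awssdk.services.rekognition.model.GetFaceDetectionResponse;
import software.amazon.awssdk.services.rekognition.model.S3Object;
import software.amazon.awssdk.services.rekognition.model.StartFaceDetectionResponse;
import software.amazon.awssdk.services.rekognition.model.Video;
import software.amazon.awssdk.services.ses.SesClient;

/**
 * Minimal sketch of the core flow behind the video analyzer application: start an
 * asynchronous face detection job on a video in Amazon S3, poll until it finishes,
 * and email a summary with Amazon SES. Bucket, key, and addresses are placeholders.
 */
public class VideoAnalyzerSketch {
    public static void main(String[] args) throws InterruptedException {
        Region region = Region.US_EAST_1;
        try (RekognitionClient rekClient = RekognitionClient.builder().region(region).build();
             SesClient sesClient = SesClient.builder().region(region).build()) {

            Video video = Video.builder()
                    .s3Object(S3Object.builder().bucket("myBucket").name("people.mp4").build())
                    .build();

            // Start the asynchronous face detection job.
            StartFaceDetectionResponse start = rekClient.startFaceDetection(r -> r.video(video));
            String jobId = start.jobId();

            // Poll until the job is no longer in progress (SUCCEEDED or FAILED).
            GetFaceDetectionResponse result;
            do {
                Thread.sleep(5000);
                result = rekClient.getFaceDetection(r -> r.jobId(jobId));
            } while ("IN_PROGRESS".equals(result.jobStatusAsString()));

            StringBuilder summary = new StringBuilder("Faces detected:\n");
            for (FaceDetection face : result.faces()) {
                summary.append("At ").append(face.timestamp()).append(" ms, confidence ")
                        .append(face.face().confidence()).append("%\n");
            }

            // Email the summary to the administrator with Amazon SES.
            sesClient.sendEmail(r -> r
                    .source("sender@example.com")
                    .destination(d -> d.toAddresses("admin@example.com"))
                    .message(m -> m
                            .subject(c -> c.data("Video analysis results"))
                            .body(b -> b.text(c -> c.data(summary.toString())))));
        }
    }
}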