Amazon Rekognition examples using SDK for Java 2.x - AWS SDK Code Examples

There are more AWS SDK examples available in the AWS Doc SDK Examples GitHub repo.


Amazon Rekognition examples using SDK for Java 2.x

The following code examples show you how to perform actions and implement common scenarios by using the AWS SDK for Java 2.x with Amazon Rekognition.

Actions are code excerpts from larger programs and must be run in context. While actions show you how to call individual service functions, you can see actions in context in their related scenarios.

Scenarios are code examples that show you how to accomplish a specific task by calling multiple functions within the same service or combined with other AWS services.

Each example includes a link to the complete source code, where you can find instructions on how to set up and run the code in context.
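As a minimal, illustrative sketch (not taken from the examples themselves), the shared context that each action excerpt assumes is a RekognitionClient that is constructed before the call and closed afterward. The Region and the commented-out call are assumptions for illustration only.

import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.rekognition.RekognitionClient;

public class RekognitionClientSetup {
    public static void main(String[] args) {
        // Build the client once; credentials are resolved from the default provider chain.
        // The Region here is an assumption -- use the Region where your resources live.
        try (RekognitionClient rekClient = RekognitionClient.builder()
                .region(Region.US_EAST_1)
                .build()) {
            // Call one of the excerpted action methods here, for example:
            // ListCollections.listAllCollections(rekClient);
        }
    }
}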

Actions

The following code example shows how to use CompareFaces.

For more information, see Comparing faces in images.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.rekognition.RekognitionClient; import software.amazon.awssdk.services.rekognition.model.RekognitionException; import software.amazon.awssdk.services.rekognition.model.Image; import software.amazon.awssdk.services.rekognition.model.CompareFacesRequest; import software.amazon.awssdk.services.rekognition.model.CompareFacesResponse; import software.amazon.awssdk.services.rekognition.model.CompareFacesMatch; import software.amazon.awssdk.services.rekognition.model.ComparedFace; import software.amazon.awssdk.services.rekognition.model.BoundingBox; import software.amazon.awssdk.core.SdkBytes; import java.io.FileInputStream; import java.io.FileNotFoundException; import java.io.InputStream; import java.util.List; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class CompareFaces { public static void main(String[] args) { final String usage = """ Usage: <pathSource> <pathTarget> Where: pathSource - The path to the source image (for example, C:\\AWS\\pic1.png).\s pathTarget - The path to the target image (for example, C:\\AWS\\pic2.png).\s """; if (args.length != 2) { System.out.println(usage); System.exit(1); } Float similarityThreshold = 70F; String sourceImage = args[0]; String targetImage = args[1]; Region region = Region.US_EAST_1; RekognitionClient rekClient = RekognitionClient.builder() .region(region) .build(); compareTwoFaces(rekClient, similarityThreshold, sourceImage, targetImage); rekClient.close(); } public static void compareTwoFaces(RekognitionClient rekClient, Float similarityThreshold, String sourceImage, String targetImage) { try { InputStream sourceStream = new FileInputStream(sourceImage); InputStream tarStream = new FileInputStream(targetImage); SdkBytes sourceBytes = SdkBytes.fromInputStream(sourceStream); SdkBytes targetBytes = SdkBytes.fromInputStream(tarStream); // Create an Image object for the source image. Image souImage = Image.builder() .bytes(sourceBytes) .build(); Image tarImage = Image.builder() .bytes(targetBytes) .build(); CompareFacesRequest facesRequest = CompareFacesRequest.builder() .sourceImage(souImage) .targetImage(tarImage) .similarityThreshold(similarityThreshold) .build(); // Compare the two images. CompareFacesResponse compareFacesResult = rekClient.compareFaces(facesRequest); List<CompareFacesMatch> faceDetails = compareFacesResult.faceMatches(); for (CompareFacesMatch match : faceDetails) { ComparedFace face = match.face(); BoundingBox position = face.boundingBox(); System.out.println("Face at " + position.left().toString() + " " + position.top() + " matches with " + face.confidence().toString() + "% confidence."); } List<ComparedFace> uncompared = compareFacesResult.unmatchedFaces(); System.out.println("There was " + uncompared.size() + " face(s) that did not match"); System.out.println("Source image rotation: " + compareFacesResult.sourceImageOrientationCorrection()); System.out.println("target image rotation: " + compareFacesResult.targetImageOrientationCorrection()); } catch (RekognitionException | FileNotFoundException e) { System.out.println("Failed to load source image " + sourceImage); System.exit(1); } } }
  • For API details, see CompareFaces in AWS SDK for Java 2.x API Reference.

The following code example shows how to use CreateCollection.

For more information, see Creating a collection.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.rekognition.RekognitionClient;
import software.amazon.awssdk.services.rekognition.model.CreateCollectionResponse;
import software.amazon.awssdk.services.rekognition.model.CreateCollectionRequest;
import software.amazon.awssdk.services.rekognition.model.RekognitionException;

/**
 * Before running this Java V2 code example, set up your development
 * environment, including your credentials.
 *
 * For more information, see the following documentation topic:
 *
 * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html
 */
public class CreateCollection {
    public static void main(String[] args) {
        final String usage = """

                Usage: <collectionName>\s

                Where:
                    collectionName - The name of the collection.\s
                """;

        if (args.length != 1) {
            System.out.println(usage);
            System.exit(1);
        }

        String collectionId = args[0];
        Region region = Region.US_EAST_1;
        RekognitionClient rekClient = RekognitionClient.builder()
                .region(region)
                .build();

        System.out.println("Creating collection: " + collectionId);
        createMyCollection(rekClient, collectionId);
        rekClient.close();
    }

    public static void createMyCollection(RekognitionClient rekClient, String collectionId) {
        try {
            CreateCollectionRequest collectionRequest = CreateCollectionRequest.builder()
                    .collectionId(collectionId)
                    .build();

            CreateCollectionResponse collectionResponse = rekClient.createCollection(collectionRequest);
            System.out.println("CollectionArn: " + collectionResponse.collectionArn());
            System.out.println("Status code: " + collectionResponse.statusCode().toString());

        } catch (RekognitionException e) {
            System.out.println(e.getMessage());
            System.exit(1);
        }
    }
}
  • For API details, see CreateCollection in AWS SDK for Java 2.x API Reference.

The following code example shows how to use DeleteCollection.

For more information, see Deleting a collection.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.rekognition.RekognitionClient;
import software.amazon.awssdk.services.rekognition.model.DeleteCollectionRequest;
import software.amazon.awssdk.services.rekognition.model.DeleteCollectionResponse;
import software.amazon.awssdk.services.rekognition.model.RekognitionException;

/**
 * Before running this Java V2 code example, set up your development
 * environment, including your credentials.
 *
 * For more information, see the following documentation topic:
 *
 * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html
 */
public class DeleteCollection {
    public static void main(String[] args) {
        final String usage = """

                Usage: <collectionId>\s

                Where:
                    collectionId - The id of the collection to delete.\s
                """;

        if (args.length != 1) {
            System.out.println(usage);
            System.exit(1);
        }

        String collectionId = args[0];
        Region region = Region.US_EAST_1;
        RekognitionClient rekClient = RekognitionClient.builder()
                .region(region)
                .build();

        System.out.println("Deleting collection: " + collectionId);
        deleteMyCollection(rekClient, collectionId);
        rekClient.close();
    }

    public static void deleteMyCollection(RekognitionClient rekClient, String collectionId) {
        try {
            DeleteCollectionRequest deleteCollectionRequest = DeleteCollectionRequest.builder()
                    .collectionId(collectionId)
                    .build();

            DeleteCollectionResponse deleteCollectionResponse = rekClient.deleteCollection(deleteCollectionRequest);
            System.out.println(collectionId + ": " + deleteCollectionResponse.statusCode().toString());

        } catch (RekognitionException e) {
            System.out.println(e.getMessage());
            System.exit(1);
        }
    }
}
  • For API details, see DeleteCollection in AWS SDK for Java 2.x API Reference.

The following code example shows how to use DeleteFaces.

For more information, see Deleting faces from a collection.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.rekognition.RekognitionClient;
import software.amazon.awssdk.services.rekognition.model.DeleteFacesRequest;
import software.amazon.awssdk.services.rekognition.model.RekognitionException;

/**
 * Before running this Java V2 code example, set up your development
 * environment, including your credentials.
 *
 * For more information, see the following documentation topic:
 *
 * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html
 */
public class DeleteFacesFromCollection {
    public static void main(String[] args) {
        final String usage = """

                Usage: <collectionId> <faceId>\s

                Where:
                    collectionId - The id of the collection from which faces are deleted.\s
                    faceId - The id of the face to delete.\s
                """;

        // This example requires both a collection id and a face id.
        if (args.length != 2) {
            System.out.println(usage);
            System.exit(1);
        }

        String collectionId = args[0];
        String faceId = args[1];
        Region region = Region.US_EAST_1;
        RekognitionClient rekClient = RekognitionClient.builder()
                .region(region)
                .build();

        System.out.println("Deleting face " + faceId + " from collection: " + collectionId);
        deleteFacesCollection(rekClient, collectionId, faceId);
        rekClient.close();
    }

    public static void deleteFacesCollection(RekognitionClient rekClient,
            String collectionId,
            String faceId) {
        try {
            DeleteFacesRequest deleteFacesRequest = DeleteFacesRequest.builder()
                    .collectionId(collectionId)
                    .faceIds(faceId)
                    .build();

            rekClient.deleteFaces(deleteFacesRequest);
            System.out.println("The face was deleted from the collection.");

        } catch (RekognitionException e) {
            System.out.println(e.getMessage());
            System.exit(1);
        }
    }
}
  • For API details, see DeleteFaces in AWS SDK for Java 2.x API Reference.

The following code example shows how to use DescribeCollection.

For more information, see Describing a collection.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.rekognition.RekognitionClient; import software.amazon.awssdk.services.rekognition.model.DescribeCollectionRequest; import software.amazon.awssdk.services.rekognition.model.DescribeCollectionResponse; import software.amazon.awssdk.services.rekognition.model.RekognitionException; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class DescribeCollection { public static void main(String[] args) { final String usage = """ Usage: <collectionName> Where: collectionName - The name of the Amazon Rekognition collection.\s """; if (args.length != 1) { System.out.println(usage); System.exit(1); } String collectionName = args[0]; Region region = Region.US_EAST_1; RekognitionClient rekClient = RekognitionClient.builder() .region(region) .build(); describeColl(rekClient, collectionName); rekClient.close(); } public static void describeColl(RekognitionClient rekClient, String collectionName) { try { DescribeCollectionRequest describeCollectionRequest = DescribeCollectionRequest.builder() .collectionId(collectionName) .build(); DescribeCollectionResponse describeCollectionResponse = rekClient .describeCollection(describeCollectionRequest); System.out.println("Collection Arn : " + describeCollectionResponse.collectionARN()); System.out.println("Created : " + describeCollectionResponse.creationTimestamp().toString()); } catch (RekognitionException e) { System.out.println(e.getMessage()); System.exit(1); } } }
  • For API details, see DescribeCollection in AWS SDK for Java 2.x API Reference.

The following code example shows how to use DetectFaces.

For more information, see Detecting faces in an image.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.rekognition.RekognitionClient; import software.amazon.awssdk.services.rekognition.model.RekognitionException; import software.amazon.awssdk.services.rekognition.model.DetectFacesRequest; import software.amazon.awssdk.services.rekognition.model.DetectFacesResponse; import software.amazon.awssdk.services.rekognition.model.Image; import software.amazon.awssdk.services.rekognition.model.Attribute; import software.amazon.awssdk.services.rekognition.model.FaceDetail; import software.amazon.awssdk.services.rekognition.model.AgeRange; import software.amazon.awssdk.core.SdkBytes; import java.io.FileInputStream; import java.io.FileNotFoundException; import java.io.InputStream; import java.util.List; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class DetectFaces { public static void main(String[] args) { final String usage = """ Usage: <sourceImage> Where: sourceImage - The path to the image (for example, C:\\AWS\\pic1.png).\s """; if (args.length != 1) { System.out.println(usage); System.exit(1); } String sourceImage = args[0]; Region region = Region.US_EAST_1; RekognitionClient rekClient = RekognitionClient.builder() .region(region) .build(); detectFacesinImage(rekClient, sourceImage); rekClient.close(); } public static void detectFacesinImage(RekognitionClient rekClient, String sourceImage) { try { InputStream sourceStream = new FileInputStream(sourceImage); SdkBytes sourceBytes = SdkBytes.fromInputStream(sourceStream); // Create an Image object for the source image. Image souImage = Image.builder() .bytes(sourceBytes) .build(); DetectFacesRequest facesRequest = DetectFacesRequest.builder() .attributes(Attribute.ALL) .image(souImage) .build(); DetectFacesResponse facesResponse = rekClient.detectFaces(facesRequest); List<FaceDetail> faceDetails = facesResponse.faceDetails(); for (FaceDetail face : faceDetails) { AgeRange ageRange = face.ageRange(); System.out.println("The detected face is estimated to be between " + ageRange.low().toString() + " and " + ageRange.high().toString() + " years old."); System.out.println("There is a smile : " + face.smile().value().toString()); } } catch (RekognitionException | FileNotFoundException e) { System.out.println(e.getMessage()); System.exit(1); } } }
  • For API details, see DetectFaces in AWS SDK for Java 2.x API Reference.

The following code example shows how to use DetectLabels.

For more information, see Detecting labels in an image.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import software.amazon.awssdk.core.SdkBytes; import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.rekognition.RekognitionClient; import software.amazon.awssdk.services.rekognition.model.Image; import software.amazon.awssdk.services.rekognition.model.DetectLabelsRequest; import software.amazon.awssdk.services.rekognition.model.DetectLabelsResponse; import software.amazon.awssdk.services.rekognition.model.Label; import software.amazon.awssdk.services.rekognition.model.RekognitionException; import java.io.FileInputStream; import java.io.FileNotFoundException; import java.io.InputStream; import java.util.List; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class DetectLabels { public static void main(String[] args) { final String usage = """ Usage: <sourceImage> Where: sourceImage - The path to the image (for example, C:\\AWS\\pic1.png).\s """; if (args.length != 1) { System.out.println(usage); System.exit(1); } String sourceImage = args[0]; Region region = Region.US_EAST_1; RekognitionClient rekClient = RekognitionClient.builder() .region(region) .build(); detectImageLabels(rekClient, sourceImage); rekClient.close(); } public static void detectImageLabels(RekognitionClient rekClient, String sourceImage) { try { InputStream sourceStream = new FileInputStream(sourceImage); SdkBytes sourceBytes = SdkBytes.fromInputStream(sourceStream); // Create an Image object for the source image. Image souImage = Image.builder() .bytes(sourceBytes) .build(); DetectLabelsRequest detectLabelsRequest = DetectLabelsRequest.builder() .image(souImage) .maxLabels(10) .build(); DetectLabelsResponse labelsResponse = rekClient.detectLabels(detectLabelsRequest); List<Label> labels = labelsResponse.labels(); System.out.println("Detected labels for the given photo"); for (Label label : labels) { System.out.println(label.name() + ": " + label.confidence().toString()); } } catch (RekognitionException | FileNotFoundException e) { System.out.println(e.getMessage()); System.exit(1); } } }
  • For API details, see DetectLabels in AWS SDK for Java 2.x API Reference.

The following code example shows how to use DetectModerationLabels.

For more information, see Detecting inappropriate images.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import software.amazon.awssdk.core.SdkBytes; import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.rekognition.RekognitionClient; import software.amazon.awssdk.services.rekognition.model.RekognitionException; import software.amazon.awssdk.services.rekognition.model.Image; import software.amazon.awssdk.services.rekognition.model.DetectModerationLabelsRequest; import software.amazon.awssdk.services.rekognition.model.DetectModerationLabelsResponse; import software.amazon.awssdk.services.rekognition.model.ModerationLabel; import java.io.FileInputStream; import java.io.FileNotFoundException; import java.io.InputStream; import java.util.List; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class DetectModerationLabels { public static void main(String[] args) { final String usage = """ Usage: <sourceImage> Where: sourceImage - The path to the image (for example, C:\\AWS\\pic1.png).\s """; if (args.length < 1) { System.out.println(usage); System.exit(1); } String sourceImage = args[0]; Region region = Region.US_EAST_1; RekognitionClient rekClient = RekognitionClient.builder() .region(region) .build(); detectModLabels(rekClient, sourceImage); rekClient.close(); } public static void detectModLabels(RekognitionClient rekClient, String sourceImage) { try { InputStream sourceStream = new FileInputStream(sourceImage); SdkBytes sourceBytes = SdkBytes.fromInputStream(sourceStream); Image souImage = Image.builder() .bytes(sourceBytes) .build(); DetectModerationLabelsRequest moderationLabelsRequest = DetectModerationLabelsRequest.builder() .image(souImage) .minConfidence(60F) .build(); DetectModerationLabelsResponse moderationLabelsResponse = rekClient .detectModerationLabels(moderationLabelsRequest); List<ModerationLabel> labels = moderationLabelsResponse.moderationLabels(); System.out.println("Detected labels for image"); for (ModerationLabel label : labels) { System.out.println("Label: " + label.name() + "\n Confidence: " + label.confidence().toString() + "%" + "\n Parent:" + label.parentName()); } } catch (RekognitionException | FileNotFoundException e) { e.printStackTrace(); System.exit(1); } } }

The following code example shows how to use DetectText.

For more information, see Detecting text in an image.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import software.amazon.awssdk.core.SdkBytes; import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.rekognition.RekognitionClient; import software.amazon.awssdk.services.rekognition.model.DetectTextRequest; import software.amazon.awssdk.services.rekognition.model.Image; import software.amazon.awssdk.services.rekognition.model.DetectTextResponse; import software.amazon.awssdk.services.rekognition.model.TextDetection; import software.amazon.awssdk.services.rekognition.model.RekognitionException; import java.io.FileInputStream; import java.io.FileNotFoundException; import java.io.InputStream; import java.util.List; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class DetectText { public static void main(String[] args) { final String usage = """ Usage: <sourceImage> Where: sourceImage - The path to the image that contains text (for example, C:\\AWS\\pic1.png).\s """; if (args.length != 1) { System.out.println(usage); System.exit(1); } String sourceImage = args[0]; Region region = Region.US_EAST_1; RekognitionClient rekClient = RekognitionClient.builder() .region(region) .build(); detectTextLabels(rekClient, sourceImage); rekClient.close(); } public static void detectTextLabels(RekognitionClient rekClient, String sourceImage) { try { InputStream sourceStream = new FileInputStream(sourceImage); SdkBytes sourceBytes = SdkBytes.fromInputStream(sourceStream); Image souImage = Image.builder() .bytes(sourceBytes) .build(); DetectTextRequest textRequest = DetectTextRequest.builder() .image(souImage) .build(); DetectTextResponse textResponse = rekClient.detectText(textRequest); List<TextDetection> textCollection = textResponse.textDetections(); System.out.println("Detected lines and words"); for (TextDetection text : textCollection) { System.out.println("Detected: " + text.detectedText()); System.out.println("Confidence: " + text.confidence().toString()); System.out.println("Id : " + text.id()); System.out.println("Parent Id: " + text.parentId()); System.out.println("Type: " + text.type()); System.out.println(); } } catch (RekognitionException | FileNotFoundException e) { System.out.println(e.getMessage()); System.exit(1); } } }
  • For API details, see DetectText in AWS SDK for Java 2.x API Reference.

The following code example shows how to use IndexFaces.

For more information, see Adding faces to a collection.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import software.amazon.awssdk.core.SdkBytes; import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.rekognition.RekognitionClient; import software.amazon.awssdk.services.rekognition.model.IndexFacesResponse; import software.amazon.awssdk.services.rekognition.model.IndexFacesRequest; import software.amazon.awssdk.services.rekognition.model.Image; import software.amazon.awssdk.services.rekognition.model.QualityFilter; import software.amazon.awssdk.services.rekognition.model.Attribute; import software.amazon.awssdk.services.rekognition.model.FaceRecord; import software.amazon.awssdk.services.rekognition.model.UnindexedFace; import software.amazon.awssdk.services.rekognition.model.RekognitionException; import software.amazon.awssdk.services.rekognition.model.Reason; import java.io.FileInputStream; import java.io.FileNotFoundException; import java.io.InputStream; import java.util.List; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class AddFacesToCollection { public static void main(String[] args) { final String usage = """ Usage: <collectionId> <sourceImage> Where: collectionName - The name of the collection. sourceImage - The path to the image (for example, C:\\AWS\\pic1.png).\s """; if (args.length != 2) { System.out.println(usage); System.exit(1); } String collectionId = args[0]; String sourceImage = args[1]; Region region = Region.US_EAST_1; RekognitionClient rekClient = RekognitionClient.builder() .region(region) .build(); addToCollection(rekClient, collectionId, sourceImage); rekClient.close(); } public static void addToCollection(RekognitionClient rekClient, String collectionId, String sourceImage) { try { InputStream sourceStream = new FileInputStream(sourceImage); SdkBytes sourceBytes = SdkBytes.fromInputStream(sourceStream); Image souImage = Image.builder() .bytes(sourceBytes) .build(); IndexFacesRequest facesRequest = IndexFacesRequest.builder() .collectionId(collectionId) .image(souImage) .maxFaces(1) .qualityFilter(QualityFilter.AUTO) .detectionAttributes(Attribute.DEFAULT) .build(); IndexFacesResponse facesResponse = rekClient.indexFaces(facesRequest); System.out.println("Results for the image"); System.out.println("\n Faces indexed:"); List<FaceRecord> faceRecords = facesResponse.faceRecords(); for (FaceRecord faceRecord : faceRecords) { System.out.println(" Face ID: " + faceRecord.face().faceId()); System.out.println(" Location:" + faceRecord.faceDetail().boundingBox().toString()); } List<UnindexedFace> unindexedFaces = facesResponse.unindexedFaces(); System.out.println("Faces not indexed:"); for (UnindexedFace unindexedFace : unindexedFaces) { System.out.println(" Location:" + unindexedFace.faceDetail().boundingBox().toString()); System.out.println(" Reasons:"); for (Reason reason : unindexedFace.reasons()) { System.out.println("Reason: " + reason); } } } catch (RekognitionException | FileNotFoundException e) { System.out.println(e.getMessage()); System.exit(1); } } }
  • For API details, see IndexFaces in AWS SDK for Java 2.x API Reference.

The following code example shows how to use ListCollections.

For more information, see Listing collections.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.rekognition.RekognitionClient;
import software.amazon.awssdk.services.rekognition.model.ListCollectionsRequest;
import software.amazon.awssdk.services.rekognition.model.ListCollectionsResponse;
import software.amazon.awssdk.services.rekognition.model.RekognitionException;
import java.util.List;

/**
 * Before running this Java V2 code example, set up your development
 * environment, including your credentials.
 *
 * For more information, see the following documentation topic:
 *
 * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html
 */
public class ListCollections {
    public static void main(String[] args) {
        Region region = Region.US_EAST_1;
        RekognitionClient rekClient = RekognitionClient.builder()
                .region(region)
                .build();

        System.out.println("Listing collections");
        listAllCollections(rekClient);
        rekClient.close();
    }

    public static void listAllCollections(RekognitionClient rekClient) {
        try {
            ListCollectionsRequest listCollectionsRequest = ListCollectionsRequest.builder()
                    .maxResults(10)
                    .build();

            ListCollectionsResponse response = rekClient.listCollections(listCollectionsRequest);
            List<String> collectionIds = response.collectionIds();
            for (String resultId : collectionIds) {
                System.out.println(resultId);
            }

        } catch (RekognitionException e) {
            System.out.println(e.getMessage());
            System.exit(1);
        }
    }
}
  • For API details, see ListCollections in AWS SDK for Java 2.x API Reference.

The following code example shows how to use ListFaces.

For more information, see Listing faces in a collection.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.rekognition.RekognitionClient; import software.amazon.awssdk.services.rekognition.model.Face; import software.amazon.awssdk.services.rekognition.model.ListFacesRequest; import software.amazon.awssdk.services.rekognition.model.ListFacesResponse; import software.amazon.awssdk.services.rekognition.model.RekognitionException; import java.util.List; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class ListFacesInCollection { public static void main(String[] args) { final String usage = """ Usage: <collectionId> Where: collectionId - The name of the collection.\s """; if (args.length < 1) { System.out.println(usage); System.exit(1); } String collectionId = args[0]; Region region = Region.US_EAST_1; RekognitionClient rekClient = RekognitionClient.builder() .region(region) .build(); System.out.println("Faces in collection " + collectionId); listFacesCollection(rekClient, collectionId); rekClient.close(); } public static void listFacesCollection(RekognitionClient rekClient, String collectionId) { try { ListFacesRequest facesRequest = ListFacesRequest.builder() .collectionId(collectionId) .maxResults(10) .build(); ListFacesResponse facesResponse = rekClient.listFaces(facesRequest); List<Face> faces = facesResponse.faces(); for (Face face : faces) { System.out.println("Confidence level there is a face: " + face.confidence()); System.out.println("The face Id value is " + face.faceId()); } } catch (RekognitionException e) { System.out.println(e.getMessage()); System.exit(1); } } }
  • For API details, see ListFaces in AWS SDK for Java 2.x API Reference.

The following code example shows how to use RecognizeCelebrities.

For more information, see Recognizing celebrities.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.rekognition.RekognitionClient; import software.amazon.awssdk.core.SdkBytes; import java.io.FileInputStream; import java.io.FileNotFoundException; import java.io.InputStream; import java.util.List; import software.amazon.awssdk.services.rekognition.model.RecognizeCelebritiesRequest; import software.amazon.awssdk.services.rekognition.model.RecognizeCelebritiesResponse; import software.amazon.awssdk.services.rekognition.model.RekognitionException; import software.amazon.awssdk.services.rekognition.model.Image; import software.amazon.awssdk.services.rekognition.model.Celebrity; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class RecognizeCelebrities { public static void main(String[] args) { final String usage = """ Usage: <sourceImage> Where: sourceImage - The path to the image (for example, C:\\AWS\\pic1.png).\s """; if (args.length != 1) { System.out.println(usage); System.exit(1); } String sourceImage = args[0]; Region region = Region.US_EAST_1; RekognitionClient rekClient = RekognitionClient.builder() .region(region) .build(); System.out.println("Locating celebrities in " + sourceImage); recognizeAllCelebrities(rekClient, sourceImage); rekClient.close(); } public static void recognizeAllCelebrities(RekognitionClient rekClient, String sourceImage) { try { InputStream sourceStream = new FileInputStream(sourceImage); SdkBytes sourceBytes = SdkBytes.fromInputStream(sourceStream); Image souImage = Image.builder() .bytes(sourceBytes) .build(); RecognizeCelebritiesRequest request = RecognizeCelebritiesRequest.builder() .image(souImage) .build(); RecognizeCelebritiesResponse result = rekClient.recognizeCelebrities(request); List<Celebrity> celebs = result.celebrityFaces(); System.out.println(celebs.size() + " celebrity(s) were recognized.\n"); for (Celebrity celebrity : celebs) { System.out.println("Celebrity recognized: " + celebrity.name()); System.out.println("Celebrity ID: " + celebrity.id()); System.out.println("Further information (if available):"); for (String url : celebrity.urls()) { System.out.println(url); } System.out.println(); } System.out.println(result.unrecognizedFaces().size() + " face(s) were unrecognized."); } catch (RekognitionException | FileNotFoundException e) { System.out.println(e.getMessage()); System.exit(1); } } }

The following code example shows how to use SearchFacesByImage.

For more information, see Searching for a face (image).

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import software.amazon.awssdk.core.SdkBytes; import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.rekognition.RekognitionClient; import software.amazon.awssdk.services.rekognition.model.RekognitionException; import software.amazon.awssdk.services.rekognition.model.SearchFacesByImageRequest; import software.amazon.awssdk.services.rekognition.model.Image; import software.amazon.awssdk.services.rekognition.model.SearchFacesByImageResponse; import software.amazon.awssdk.services.rekognition.model.FaceMatch; import java.io.File; import java.io.FileInputStream; import java.io.FileNotFoundException; import java.io.InputStream; import java.util.List; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class SearchFaceMatchingImageCollection { public static void main(String[] args) { final String usage = """ Usage: <collectionId> <sourceImage> Where: collectionId - The id of the collection. \s sourceImage - The path to the image (for example, C:\\AWS\\pic1.png).\s """; if (args.length != 2) { System.out.println(usage); System.exit(1); } String collectionId = args[0]; String sourceImage = args[1]; Region region = Region.US_EAST_1; RekognitionClient rekClient = RekognitionClient.builder() .region(region) .build(); System.out.println("Searching for a face in a collections"); searchFaceInCollection(rekClient, collectionId, sourceImage); rekClient.close(); } public static void searchFaceInCollection(RekognitionClient rekClient, String collectionId, String sourceImage) { try { InputStream sourceStream = new FileInputStream(new File(sourceImage)); SdkBytes sourceBytes = SdkBytes.fromInputStream(sourceStream); Image souImage = Image.builder() .bytes(sourceBytes) .build(); SearchFacesByImageRequest facesByImageRequest = SearchFacesByImageRequest.builder() .image(souImage) .maxFaces(10) .faceMatchThreshold(70F) .collectionId(collectionId) .build(); SearchFacesByImageResponse imageResponse = rekClient.searchFacesByImage(facesByImageRequest); System.out.println("Faces matching in the collection"); List<FaceMatch> faceImageMatches = imageResponse.faceMatches(); for (FaceMatch face : faceImageMatches) { System.out.println("The similarity level is " + face.similarity()); System.out.println(); } } catch (RekognitionException | FileNotFoundException e) { System.out.println(e.getMessage()); System.exit(1); } } }
  • For API details, see SearchFacesByImage in AWS SDK for Java 2.x API Reference.

The following code example shows how to use SearchFaces.

For more information, see Searching for a face (face ID).

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.rekognition.RekognitionClient;
import software.amazon.awssdk.services.rekognition.model.SearchFacesRequest;
import software.amazon.awssdk.services.rekognition.model.SearchFacesResponse;
import software.amazon.awssdk.services.rekognition.model.FaceMatch;
import software.amazon.awssdk.services.rekognition.model.RekognitionException;
import java.util.List;

/**
 * Before running this Java V2 code example, set up your development
 * environment, including your credentials.
 *
 * For more information, see the following documentation topic:
 *
 * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html
 */
public class SearchFaceMatchingIdCollection {
    public static void main(String[] args) {
        final String usage = """

                Usage: <collectionId> <faceId>

                Where:
                    collectionId - The id of the collection. \s
                    faceId - The id of the face to search for in the collection.\s
                """;

        if (args.length != 2) {
            System.out.println(usage);
            System.exit(1);
        }

        String collectionId = args[0];
        String faceId = args[1];
        Region region = Region.US_EAST_1;
        RekognitionClient rekClient = RekognitionClient.builder()
                .region(region)
                .build();

        System.out.println("Searching for a face in a collection");
        searchFacebyId(rekClient, collectionId, faceId);
        rekClient.close();
    }

    public static void searchFacebyId(RekognitionClient rekClient, String collectionId, String faceId) {
        try {
            SearchFacesRequest searchFacesRequest = SearchFacesRequest.builder()
                    .collectionId(collectionId)
                    .faceId(faceId)
                    .faceMatchThreshold(70F)
                    .maxFaces(2)
                    .build();

            SearchFacesResponse imageResponse = rekClient.searchFaces(searchFacesRequest);
            System.out.println("Faces matching in the collection");
            List<FaceMatch> faceImageMatches = imageResponse.faceMatches();
            for (FaceMatch face : faceImageMatches) {
                System.out.println("The similarity level is " + face.similarity());
                System.out.println();
            }

        } catch (RekognitionException e) {
            System.out.println(e.getMessage());
            System.exit(1);
        }
    }
}
  • For API details, see SearchFaces in AWS SDK for Java 2.x API Reference.

Scenarios

The following code example shows how to create a serverless application that lets users manage photos using labels.

SDK for Java 2.x

Shows how to develop a photo asset management application that detects labels in images using Amazon Rekognition and stores them for later retrieval. A rough sketch of the core label-detection step appears after the service list below.

For complete source code and instructions on how to set up and run, see the full example on GitHub.

For a deep dive into the origin of this example, see the post on AWS Community.

Services used in this example
  • API Gateway

  • DynamoDB

  • Lambda

  • Amazon Rekognition

  • Amazon S3

  • Amazon SNS
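
The complete application is on GitHub. As a rough illustration only, the central step of the scenario (detect labels for an image in Amazon S3 and record them so photos can be found by label) might look like the sketch below. The bucket, object key, table name, and attribute names are assumptions for illustration, not the names used by the actual sample.

import java.util.Map;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.PutItemRequest;
import software.amazon.awssdk.services.rekognition.RekognitionClient;
import software.amazon.awssdk.services.rekognition.model.DetectLabelsRequest;
import software.amazon.awssdk.services.rekognition.model.Image;
import software.amazon.awssdk.services.rekognition.model.Label;
import software.amazon.awssdk.services.rekognition.model.S3Object;

public class LabelAndStoreSketch {
    public static void main(String[] args) {
        String bucket = "my-photo-bucket";   // assumption: the application's storage bucket
        String key = "photos/pic1.png";      // assumption: the uploaded photo's object key
        String table = "PhotoLabels";        // assumption: a table keyed by label and photo key

        Region region = Region.US_EAST_1;
        try (RekognitionClient rekClient = RekognitionClient.builder().region(region).build();
             DynamoDbClient ddb = DynamoDbClient.builder().region(region).build()) {

            // Detect labels directly from the object stored in Amazon S3.
            DetectLabelsRequest request = DetectLabelsRequest.builder()
                    .image(Image.builder()
                            .s3Object(S3Object.builder().bucket(bucket).name(key).build())
                            .build())
                    .maxLabels(10)
                    .build();

            // Store one item per detected label so photos can later be retrieved by label.
            for (Label label : rekClient.detectLabels(request).labels()) {
                ddb.putItem(PutItemRequest.builder()
                        .tableName(table)
                        .item(Map.of(
                                "label", AttributeValue.builder().s(label.name()).build(),
                                "photoKey", AttributeValue.builder().s(key).build()))
                        .build());
            }
        }
    }
}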

The following code example shows how to build an app that uses Amazon Rekognition to detect Personal Protective Equipment (PPE) in images.

SDK for Java 2.x

Shows how to create an AWS Lambda function that detects images with Personal Protective Equipment. A rough sketch of the underlying detection call appears after the service list below.

For complete source code and instructions on how to set up and run, see the full example on GitHub.

Services used in this example
  • DynamoDB

  • Amazon Rekognition

  • Amazon S3

  • Amazon SES
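
The complete Lambda-based example is on GitHub. As a standalone sketch of only the detection step (not the Lambda handler, storage, or notification parts of the sample), an image could be analyzed with the DetectProtectiveEquipment operation roughly as follows; the file path, required equipment types, and confidence threshold are assumptions.

import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.InputStream;
import software.amazon.awssdk.core.SdkBytes;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.rekognition.RekognitionClient;
import software.amazon.awssdk.services.rekognition.model.DetectProtectiveEquipmentRequest;
import software.amazon.awssdk.services.rekognition.model.DetectProtectiveEquipmentResponse;
import software.amazon.awssdk.services.rekognition.model.EquipmentDetection;
import software.amazon.awssdk.services.rekognition.model.Image;
import software.amazon.awssdk.services.rekognition.model.ProtectiveEquipmentBodyPart;
import software.amazon.awssdk.services.rekognition.model.ProtectiveEquipmentPerson;
import software.amazon.awssdk.services.rekognition.model.ProtectiveEquipmentSummarizationAttributes;
import software.amazon.awssdk.services.rekognition.model.ProtectiveEquipmentType;
import software.amazon.awssdk.services.rekognition.model.RekognitionException;

public class DetectPpeSketch {
    public static void main(String[] args) throws FileNotFoundException {
        String sourceImage = "C:\\AWS\\workers.png"; // assumption: path to a local image

        try (RekognitionClient rekClient = RekognitionClient.builder()
                .region(Region.US_EAST_1)
                .build()) {
            InputStream sourceStream = new FileInputStream(sourceImage);
            Image image = Image.builder()
                    .bytes(SdkBytes.fromInputStream(sourceStream))
                    .build();

            // Summarize people missing face or head covers above an assumed 80% confidence threshold.
            DetectProtectiveEquipmentRequest request = DetectProtectiveEquipmentRequest.builder()
                    .image(image)
                    .summarizationAttributes(ProtectiveEquipmentSummarizationAttributes.builder()
                            .minConfidence(80F)
                            .requiredEquipmentTypes(ProtectiveEquipmentType.FACE_COVER,
                                    ProtectiveEquipmentType.HEAD_COVER)
                            .build())
                    .build();

            // Print each detected person and the equipment found on each body part.
            DetectProtectiveEquipmentResponse response = rekClient.detectProtectiveEquipment(request);
            for (ProtectiveEquipmentPerson person : response.persons()) {
                System.out.println("Person " + person.id());
                for (ProtectiveEquipmentBodyPart bodyPart : person.bodyParts()) {
                    for (EquipmentDetection equipment : bodyPart.equipmentDetections()) {
                        System.out.println("  " + bodyPart.nameAsString() + ": " + equipment.typeAsString()
                                + " (confidence " + equipment.confidence() + ")");
                    }
                }
            }
        } catch (RekognitionException e) {
            System.out.println(e.getMessage());
        }
    }
}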

The following code example shows how to:

  • Start Amazon Rekognition jobs to detect elements such as people, objects, and text in videos.

  • Check job status until the jobs complete.

  • Output the list of elements detected by each job.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Get celebrity recognition results from a video located in an Amazon S3 bucket.

import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.rekognition.RekognitionClient; import software.amazon.awssdk.services.rekognition.model.S3Object; import software.amazon.awssdk.services.rekognition.model.NotificationChannel; import software.amazon.awssdk.services.rekognition.model.Video; import software.amazon.awssdk.services.rekognition.model.StartCelebrityRecognitionResponse; import software.amazon.awssdk.services.rekognition.model.RekognitionException; import software.amazon.awssdk.services.rekognition.model.CelebrityRecognitionSortBy; import software.amazon.awssdk.services.rekognition.model.VideoMetadata; import software.amazon.awssdk.services.rekognition.model.CelebrityRecognition; import software.amazon.awssdk.services.rekognition.model.CelebrityDetail; import software.amazon.awssdk.services.rekognition.model.StartCelebrityRecognitionRequest; import software.amazon.awssdk.services.rekognition.model.GetCelebrityRecognitionRequest; import software.amazon.awssdk.services.rekognition.model.GetCelebrityRecognitionResponse; import java.util.List; /** * To run this code example, ensure that you perform the Prerequisites as stated * in the Amazon Rekognition Guide: * https://docs.aws.amazon.com/rekognition/latest/dg/video-analyzing-with-sqs.html * * Also, ensure that set up your development environment, including your * credentials. * * For information, see this documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class VideoCelebrityDetection { private static String startJobId = ""; public static void main(String[] args) { final String usage = """ Usage: <bucket> <video> <topicArn> <roleArn> Where: bucket - The name of the bucket in which the video is located (for example, (for example, myBucket).\s video - The name of video (for example, people.mp4).\s topicArn - The ARN of the Amazon Simple Notification Service (Amazon SNS) topic.\s roleArn - The ARN of the AWS Identity and Access Management (IAM) role to use.\s """; if (args.length != 4) { System.out.println(usage); System.exit(1); } String bucket = args[0]; String video = args[1]; String topicArn = args[2]; String roleArn = args[3]; Region region = Region.US_EAST_1; RekognitionClient rekClient = RekognitionClient.builder() .region(region) .build(); NotificationChannel channel = NotificationChannel.builder() .snsTopicArn(topicArn) .roleArn(roleArn) .build(); startCelebrityDetection(rekClient, channel, bucket, video); getCelebrityDetectionResults(rekClient); System.out.println("This example is done!"); rekClient.close(); } public static void startCelebrityDetection(RekognitionClient rekClient, NotificationChannel channel, String bucket, String video) { try { S3Object s3Obj = S3Object.builder() .bucket(bucket) .name(video) .build(); Video vidOb = Video.builder() .s3Object(s3Obj) .build(); StartCelebrityRecognitionRequest recognitionRequest = StartCelebrityRecognitionRequest.builder() .jobTag("Celebrities") .notificationChannel(channel) .video(vidOb) .build(); StartCelebrityRecognitionResponse startCelebrityRecognitionResult = rekClient .startCelebrityRecognition(recognitionRequest); startJobId = startCelebrityRecognitionResult.jobId(); } catch (RekognitionException e) { System.out.println(e.getMessage()); System.exit(1); } } public static void getCelebrityDetectionResults(RekognitionClient rekClient) { try { String paginationToken = null; GetCelebrityRecognitionResponse recognitionResponse = null; 
boolean finished = false; String status; int yy = 0; do { if (recognitionResponse != null) paginationToken = recognitionResponse.nextToken(); GetCelebrityRecognitionRequest recognitionRequest = GetCelebrityRecognitionRequest.builder() .jobId(startJobId) .nextToken(paginationToken) .sortBy(CelebrityRecognitionSortBy.TIMESTAMP) .maxResults(10) .build(); // Wait until the job succeeds while (!finished) { recognitionResponse = rekClient.getCelebrityRecognition(recognitionRequest); status = recognitionResponse.jobStatusAsString(); if (status.compareTo("SUCCEEDED") == 0) finished = true; else { System.out.println(yy + " status is: " + status); Thread.sleep(1000); } yy++; } finished = false; // Proceed when the job is done - otherwise VideoMetadata is null. VideoMetadata videoMetaData = recognitionResponse.videoMetadata(); System.out.println("Format: " + videoMetaData.format()); System.out.println("Codec: " + videoMetaData.codec()); System.out.println("Duration: " + videoMetaData.durationMillis()); System.out.println("FrameRate: " + videoMetaData.frameRate()); System.out.println("Job"); List<CelebrityRecognition> celebs = recognitionResponse.celebrities(); for (CelebrityRecognition celeb : celebs) { long seconds = celeb.timestamp() / 1000; System.out.print("Sec: " + seconds + " "); CelebrityDetail details = celeb.celebrity(); System.out.println("Name: " + details.name()); System.out.println("Id: " + details.id()); System.out.println(); } } while (recognitionResponse.nextToken() != null); } catch (RekognitionException | InterruptedException e) { System.out.println(e.getMessage()); System.exit(1); } } }

Detect labels in a video by using the label detection operation.

import com.fasterxml.jackson.core.JsonProcessingException; import com.fasterxml.jackson.databind.JsonMappingException; import com.fasterxml.jackson.databind.JsonNode; import com.fasterxml.jackson.databind.ObjectMapper; import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.rekognition.RekognitionClient; import software.amazon.awssdk.services.rekognition.model.StartLabelDetectionResponse; import software.amazon.awssdk.services.rekognition.model.NotificationChannel; import software.amazon.awssdk.services.rekognition.model.S3Object; import software.amazon.awssdk.services.rekognition.model.Video; import software.amazon.awssdk.services.rekognition.model.StartLabelDetectionRequest; import software.amazon.awssdk.services.rekognition.model.GetLabelDetectionRequest; import software.amazon.awssdk.services.rekognition.model.GetLabelDetectionResponse; import software.amazon.awssdk.services.rekognition.model.RekognitionException; import software.amazon.awssdk.services.rekognition.model.LabelDetectionSortBy; import software.amazon.awssdk.services.rekognition.model.VideoMetadata; import software.amazon.awssdk.services.rekognition.model.LabelDetection; import software.amazon.awssdk.services.rekognition.model.Label; import software.amazon.awssdk.services.rekognition.model.Instance; import software.amazon.awssdk.services.rekognition.model.Parent; import software.amazon.awssdk.services.sqs.SqsClient; import software.amazon.awssdk.services.sqs.model.Message; import software.amazon.awssdk.services.sqs.model.ReceiveMessageRequest; import software.amazon.awssdk.services.sqs.model.DeleteMessageRequest; import java.util.List; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class VideoDetect { private static String startJobId = ""; public static void main(String[] args) { final String usage = """ Usage: <bucket> <video> <queueUrl> <topicArn> <roleArn> Where: bucket - The name of the bucket in which the video is located (for example, (for example, myBucket).\s video - The name of the video (for example, people.mp4).\s queueUrl- The URL of a SQS queue.\s topicArn - The ARN of the Amazon Simple Notification Service (Amazon SNS) topic.\s roleArn - The ARN of the AWS Identity and Access Management (IAM) role to use.\s """; if (args.length != 5) { System.out.println(usage); System.exit(1); } String bucket = args[0]; String video = args[1]; String queueUrl = args[2]; String topicArn = args[3]; String roleArn = args[4]; Region region = Region.US_EAST_1; RekognitionClient rekClient = RekognitionClient.builder() .region(region) .build(); SqsClient sqs = SqsClient.builder() .region(Region.US_EAST_1) .build(); NotificationChannel channel = NotificationChannel.builder() .snsTopicArn(topicArn) .roleArn(roleArn) .build(); startLabels(rekClient, channel, bucket, video); getLabelJob(rekClient, sqs, queueUrl); System.out.println("This example is done!"); sqs.close(); rekClient.close(); } public static void startLabels(RekognitionClient rekClient, NotificationChannel channel, String bucket, String video) { try { S3Object s3Obj = S3Object.builder() .bucket(bucket) .name(video) .build(); Video vidOb = Video.builder() .s3Object(s3Obj) .build(); StartLabelDetectionRequest labelDetectionRequest = StartLabelDetectionRequest.builder() .jobTag("DetectingLabels") 
.notificationChannel(channel) .video(vidOb) .minConfidence(50F) .build(); StartLabelDetectionResponse labelDetectionResponse = rekClient.startLabelDetection(labelDetectionRequest); startJobId = labelDetectionResponse.jobId(); boolean ans = true; String status = ""; int yy = 0; while (ans) { GetLabelDetectionRequest detectionRequest = GetLabelDetectionRequest.builder() .jobId(startJobId) .maxResults(10) .build(); GetLabelDetectionResponse result = rekClient.getLabelDetection(detectionRequest); status = result.jobStatusAsString(); if (status.compareTo("SUCCEEDED") == 0) ans = false; else System.out.println(yy + " status is: " + status); Thread.sleep(1000); yy++; } System.out.println(startJobId + " status is: " + status); } catch (RekognitionException | InterruptedException e) { e.getMessage(); System.exit(1); } } public static void getLabelJob(RekognitionClient rekClient, SqsClient sqs, String queueUrl) { List<Message> messages; ReceiveMessageRequest messageRequest = ReceiveMessageRequest.builder() .queueUrl(queueUrl) .build(); try { messages = sqs.receiveMessage(messageRequest).messages(); if (!messages.isEmpty()) { for (Message message : messages) { String notification = message.body(); // Get the status and job id from the notification ObjectMapper mapper = new ObjectMapper(); JsonNode jsonMessageTree = mapper.readTree(notification); JsonNode messageBodyText = jsonMessageTree.get("Message"); ObjectMapper operationResultMapper = new ObjectMapper(); JsonNode jsonResultTree = operationResultMapper.readTree(messageBodyText.textValue()); JsonNode operationJobId = jsonResultTree.get("JobId"); JsonNode operationStatus = jsonResultTree.get("Status"); System.out.println("Job found in JSON is " + operationJobId); DeleteMessageRequest deleteMessageRequest = DeleteMessageRequest.builder() .queueUrl(queueUrl) .build(); String jobId = operationJobId.textValue(); if (startJobId.compareTo(jobId) == 0) { System.out.println("Job id: " + operationJobId); System.out.println("Status : " + operationStatus.toString()); if (operationStatus.asText().equals("SUCCEEDED")) getResultsLabels(rekClient); else System.out.println("Video analysis failed"); sqs.deleteMessage(deleteMessageRequest); } else { System.out.println("Job received was not job " + startJobId); sqs.deleteMessage(deleteMessageRequest); } } } } catch (RekognitionException e) { e.getMessage(); System.exit(1); } catch (JsonMappingException e) { e.printStackTrace(); } catch (JsonProcessingException e) { e.printStackTrace(); } } // Gets the job results by calling GetLabelDetection private static void getResultsLabels(RekognitionClient rekClient) { int maxResults = 10; String paginationToken = null; GetLabelDetectionResponse labelDetectionResult = null; try { do { if (labelDetectionResult != null) paginationToken = labelDetectionResult.nextToken(); GetLabelDetectionRequest labelDetectionRequest = GetLabelDetectionRequest.builder() .jobId(startJobId) .sortBy(LabelDetectionSortBy.TIMESTAMP) .maxResults(maxResults) .nextToken(paginationToken) .build(); labelDetectionResult = rekClient.getLabelDetection(labelDetectionRequest); VideoMetadata videoMetaData = labelDetectionResult.videoMetadata(); System.out.println("Format: " + videoMetaData.format()); System.out.println("Codec: " + videoMetaData.codec()); System.out.println("Duration: " + videoMetaData.durationMillis()); System.out.println("FrameRate: " + videoMetaData.frameRate()); List<LabelDetection> detectedLabels = labelDetectionResult.labels(); for (LabelDetection detectedLabel : detectedLabels) { long 
seconds = detectedLabel.timestamp(); Label label = detectedLabel.label(); System.out.println("Millisecond: " + seconds + " "); System.out.println(" Label:" + label.name()); System.out.println(" Confidence:" + detectedLabel.label().confidence().toString()); List<Instance> instances = label.instances(); System.out.println(" Instances of " + label.name()); if (instances.isEmpty()) { System.out.println(" " + "None"); } else { for (Instance instance : instances) { System.out.println(" Confidence: " + instance.confidence().toString()); System.out.println(" Bounding box: " + instance.boundingBox().toString()); } } System.out.println(" Parent labels for " + label.name() + ":"); List<Parent> parents = label.parents(); if (parents.isEmpty()) { System.out.println(" None"); } else { for (Parent parent : parents) { System.out.println(" " + parent.name()); } } System.out.println(); } } while (labelDetectionResult != null && labelDetectionResult.nextToken() != null); } catch (RekognitionException e) { e.getMessage(); System.exit(1); } } }


Detect inappropriate or offensive content in a video stored in an Amazon S3 bucket.

import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.rekognition.RekognitionClient; import software.amazon.awssdk.services.rekognition.model.NotificationChannel; import software.amazon.awssdk.services.rekognition.model.S3Object; import software.amazon.awssdk.services.rekognition.model.Video; import software.amazon.awssdk.services.rekognition.model.StartContentModerationRequest; import software.amazon.awssdk.services.rekognition.model.StartContentModerationResponse; import software.amazon.awssdk.services.rekognition.model.RekognitionException; import software.amazon.awssdk.services.rekognition.model.GetContentModerationResponse; import software.amazon.awssdk.services.rekognition.model.GetContentModerationRequest; import software.amazon.awssdk.services.rekognition.model.VideoMetadata; import software.amazon.awssdk.services.rekognition.model.ContentModerationDetection; import java.util.List; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class VideoDetectInappropriate { private static String startJobId = ""; public static void main(String[] args) { final String usage = """ Usage: <bucket> <video> <topicArn> <roleArn> Where: bucket - The name of the bucket in which the video is located (for example, (for example, myBucket).\s video - The name of video (for example, people.mp4).\s topicArn - The ARN of the Amazon Simple Notification Service (Amazon SNS) topic.\s roleArn - The ARN of the AWS Identity and Access Management (IAM) role to use.\s """; if (args.length != 4) { System.out.println(usage); System.exit(1); } String bucket = args[0]; String video = args[1]; String topicArn = args[2]; String roleArn = args[3]; Region region = Region.US_EAST_1; RekognitionClient rekClient = RekognitionClient.builder() .region(region) .build(); NotificationChannel channel = NotificationChannel.builder() .snsTopicArn(topicArn) .roleArn(roleArn) .build(); startModerationDetection(rekClient, channel, bucket, video); getModResults(rekClient); System.out.println("This example is done!"); rekClient.close(); } public static void startModerationDetection(RekognitionClient rekClient, NotificationChannel channel, String bucket, String video) { try { S3Object s3Obj = S3Object.builder() .bucket(bucket) .name(video) .build(); Video vidOb = Video.builder() .s3Object(s3Obj) .build(); StartContentModerationRequest modDetectionRequest = StartContentModerationRequest.builder() .jobTag("Moderation") .notificationChannel(channel) .video(vidOb) .build(); StartContentModerationResponse startModDetectionResult = rekClient .startContentModeration(modDetectionRequest); startJobId = startModDetectionResult.jobId(); } catch (RekognitionException e) { System.out.println(e.getMessage()); System.exit(1); } } public static void getModResults(RekognitionClient rekClient) { try { String paginationToken = null; GetContentModerationResponse modDetectionResponse = null; boolean finished = false; String status; int yy = 0; do { if (modDetectionResponse != null) paginationToken = modDetectionResponse.nextToken(); GetContentModerationRequest modRequest = GetContentModerationRequest.builder() .jobId(startJobId) .nextToken(paginationToken) .maxResults(10) .build(); // Wait until the job succeeds. 
while (!finished) { modDetectionResponse = rekClient.getContentModeration(modRequest); status = modDetectionResponse.jobStatusAsString(); if (status.compareTo("SUCCEEDED") == 0) finished = true; else { System.out.println(yy + " status is: " + status); Thread.sleep(1000); } yy++; } finished = false; // Proceed when the job is done - otherwise VideoMetadata is null. VideoMetadata videoMetaData = modDetectionResponse.videoMetadata(); System.out.println("Format: " + videoMetaData.format()); System.out.println("Codec: " + videoMetaData.codec()); System.out.println("Duration: " + videoMetaData.durationMillis()); System.out.println("FrameRate: " + videoMetaData.frameRate()); System.out.println("Job"); List<ContentModerationDetection> mods = modDetectionResponse.moderationLabels(); for (ContentModerationDetection mod : mods) { long seconds = mod.timestamp() / 1000; System.out.print("Mod label: " + seconds + " "); System.out.println(mod.moderationLabel().toString()); System.out.println(); } } while (modDetectionResponse != null && modDetectionResponse.nextToken() != null); } catch (RekognitionException | InterruptedException e) { System.out.println(e.getMessage()); System.exit(1); } } }

Detect technical cue segments and shot detection segments in a video stored in an Amazon S3 bucket.

import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.rekognition.RekognitionClient; import software.amazon.awssdk.services.rekognition.model.S3Object; import software.amazon.awssdk.services.rekognition.model.NotificationChannel; import software.amazon.awssdk.services.rekognition.model.Video; import software.amazon.awssdk.services.rekognition.model.StartShotDetectionFilter; import software.amazon.awssdk.services.rekognition.model.StartTechnicalCueDetectionFilter; import software.amazon.awssdk.services.rekognition.model.StartSegmentDetectionFilters; import software.amazon.awssdk.services.rekognition.model.StartSegmentDetectionRequest; import software.amazon.awssdk.services.rekognition.model.StartSegmentDetectionResponse; import software.amazon.awssdk.services.rekognition.model.RekognitionException; import software.amazon.awssdk.services.rekognition.model.GetSegmentDetectionResponse; import software.amazon.awssdk.services.rekognition.model.GetSegmentDetectionRequest; import software.amazon.awssdk.services.rekognition.model.VideoMetadata; import software.amazon.awssdk.services.rekognition.model.SegmentDetection; import software.amazon.awssdk.services.rekognition.model.TechnicalCueSegment; import software.amazon.awssdk.services.rekognition.model.ShotSegment; import software.amazon.awssdk.services.rekognition.model.SegmentType; import software.amazon.awssdk.services.sqs.SqsClient; import java.util.List; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class VideoDetectSegment { private static String startJobId = ""; public static void main(String[] args) { final String usage = """ Usage: <bucket> <video> <topicArn> <roleArn> Where: bucket - The name of the bucket in which the video is located (for example, (for example, myBucket).\s video - The name of video (for example, people.mp4).\s topicArn - The ARN of the Amazon Simple Notification Service (Amazon SNS) topic.\s roleArn - The ARN of the AWS Identity and Access Management (IAM) role to use.\s """; if (args.length != 4) { System.out.println(usage); System.exit(1); } String bucket = args[0]; String video = args[1]; String topicArn = args[2]; String roleArn = args[3]; Region region = Region.US_EAST_1; RekognitionClient rekClient = RekognitionClient.builder() .region(region) .build(); SqsClient sqs = SqsClient.builder() .region(Region.US_EAST_1) .build(); NotificationChannel channel = NotificationChannel.builder() .snsTopicArn(topicArn) .roleArn(roleArn) .build(); startSegmentDetection(rekClient, channel, bucket, video); getSegmentResults(rekClient); System.out.println("This example is done!"); sqs.close(); rekClient.close(); } public static void startSegmentDetection(RekognitionClient rekClient, NotificationChannel channel, String bucket, String video) { try { S3Object s3Obj = S3Object.builder() .bucket(bucket) .name(video) .build(); Video vidOb = Video.builder() .s3Object(s3Obj) .build(); StartShotDetectionFilter cueDetectionFilter = StartShotDetectionFilter.builder() .minSegmentConfidence(60F) .build(); StartTechnicalCueDetectionFilter technicalCueDetectionFilter = StartTechnicalCueDetectionFilter.builder() .minSegmentConfidence(60F) .build(); StartSegmentDetectionFilters filters = StartSegmentDetectionFilters.builder() .shotFilter(cueDetectionFilter) 
.technicalCueFilter(technicalCueDetectionFilter) .build(); StartSegmentDetectionRequest segDetectionRequest = StartSegmentDetectionRequest.builder() .jobTag("DetectingLabels") .notificationChannel(channel) .segmentTypes(SegmentType.TECHNICAL_CUE, SegmentType.SHOT) .video(vidOb) .filters(filters) .build(); StartSegmentDetectionResponse segDetectionResponse = rekClient.startSegmentDetection(segDetectionRequest); startJobId = segDetectionResponse.jobId(); } catch (RekognitionException e) { e.getMessage(); System.exit(1); } } public static void getSegmentResults(RekognitionClient rekClient) { try { String paginationToken = null; GetSegmentDetectionResponse segDetectionResponse = null; boolean finished = false; String status; int yy = 0; do { if (segDetectionResponse != null) paginationToken = segDetectionResponse.nextToken(); GetSegmentDetectionRequest recognitionRequest = GetSegmentDetectionRequest.builder() .jobId(startJobId) .nextToken(paginationToken) .maxResults(10) .build(); // Wait until the job succeeds. while (!finished) { segDetectionResponse = rekClient.getSegmentDetection(recognitionRequest); status = segDetectionResponse.jobStatusAsString(); if (status.compareTo("SUCCEEDED") == 0) finished = true; else { System.out.println(yy + " status is: " + status); Thread.sleep(1000); } yy++; } finished = false; // Proceed when the job is done - otherwise VideoMetadata is null. List<VideoMetadata> videoMetaData = segDetectionResponse.videoMetadata(); for (VideoMetadata metaData : videoMetaData) { System.out.println("Format: " + metaData.format()); System.out.println("Codec: " + metaData.codec()); System.out.println("Duration: " + metaData.durationMillis()); System.out.println("FrameRate: " + metaData.frameRate()); System.out.println("Job"); } List<SegmentDetection> detectedSegments = segDetectionResponse.segments(); for (SegmentDetection detectedSegment : detectedSegments) { String type = detectedSegment.type().toString(); if (type.contains(SegmentType.TECHNICAL_CUE.toString())) { System.out.println("Technical Cue"); TechnicalCueSegment segmentCue = detectedSegment.technicalCueSegment(); System.out.println("\tType: " + segmentCue.type()); System.out.println("\tConfidence: " + segmentCue.confidence().toString()); } if (type.contains(SegmentType.SHOT.toString())) { System.out.println("Shot"); ShotSegment segmentShot = detectedSegment.shotSegment(); System.out.println("\tIndex " + segmentShot.index()); System.out.println("\tConfidence: " + segmentShot.confidence().toString()); } long seconds = detectedSegment.durationMillis(); System.out.println("\tDuration : " + seconds + " milliseconds"); System.out.println("\tStart time code: " + detectedSegment.startTimecodeSMPTE()); System.out.println("\tEnd time code: " + detectedSegment.endTimecodeSMPTE()); System.out.println("\tDuration time code: " + detectedSegment.durationSMPTE()); System.out.println(); } } while (segDetectionResponse != null && segDetectionResponse.nextToken() != null); } catch (RekognitionException | InterruptedException e) { System.out.println(e.getMessage()); System.exit(1); } } }

Detect text in a video stored in an Amazon S3 bucket.

import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.rekognition.RekognitionClient; import software.amazon.awssdk.services.rekognition.model.S3Object; import software.amazon.awssdk.services.rekognition.model.NotificationChannel; import software.amazon.awssdk.services.rekognition.model.Video; import software.amazon.awssdk.services.rekognition.model.StartTextDetectionRequest; import software.amazon.awssdk.services.rekognition.model.StartTextDetectionResponse; import software.amazon.awssdk.services.rekognition.model.RekognitionException; import software.amazon.awssdk.services.rekognition.model.GetTextDetectionResponse; import software.amazon.awssdk.services.rekognition.model.GetTextDetectionRequest; import software.amazon.awssdk.services.rekognition.model.VideoMetadata; import software.amazon.awssdk.services.rekognition.model.TextDetectionResult; import java.util.List; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class VideoDetectText { private static String startJobId = ""; public static void main(String[] args) { final String usage = """ Usage: <bucket> <video> <topicArn> <roleArn> Where: bucket - The name of the bucket in which the video is located (for example, (for example, myBucket).\s video - The name of video (for example, people.mp4).\s topicArn - The ARN of the Amazon Simple Notification Service (Amazon SNS) topic.\s roleArn - The ARN of the AWS Identity and Access Management (IAM) role to use.\s """; if (args.length != 4) { System.out.println(usage); System.exit(1); } String bucket = args[0]; String video = args[1]; String topicArn = args[2]; String roleArn = args[3]; Region region = Region.US_EAST_1; RekognitionClient rekClient = RekognitionClient.builder() .region(region) .build(); NotificationChannel channel = NotificationChannel.builder() .snsTopicArn(topicArn) .roleArn(roleArn) .build(); startTextLabels(rekClient, channel, bucket, video); getTextResults(rekClient); System.out.println("This example is done!"); rekClient.close(); } public static void startTextLabels(RekognitionClient rekClient, NotificationChannel channel, String bucket, String video) { try { S3Object s3Obj = S3Object.builder() .bucket(bucket) .name(video) .build(); Video vidOb = Video.builder() .s3Object(s3Obj) .build(); StartTextDetectionRequest labelDetectionRequest = StartTextDetectionRequest.builder() .jobTag("DetectingLabels") .notificationChannel(channel) .video(vidOb) .build(); StartTextDetectionResponse labelDetectionResponse = rekClient.startTextDetection(labelDetectionRequest); startJobId = labelDetectionResponse.jobId(); } catch (RekognitionException e) { System.out.println(e.getMessage()); System.exit(1); } } public static void getTextResults(RekognitionClient rekClient) { try { String paginationToken = null; GetTextDetectionResponse textDetectionResponse = null; boolean finished = false; String status; int yy = 0; do { if (textDetectionResponse != null) paginationToken = textDetectionResponse.nextToken(); GetTextDetectionRequest recognitionRequest = GetTextDetectionRequest.builder() .jobId(startJobId) .nextToken(paginationToken) .maxResults(10) .build(); // Wait until the job succeeds. 
while (!finished) { textDetectionResponse = rekClient.getTextDetection(recognitionRequest); status = textDetectionResponse.jobStatusAsString(); if (status.compareTo("SUCCEEDED") == 0) finished = true; else { System.out.println(yy + " status is: " + status); Thread.sleep(1000); } yy++; } finished = false; // Proceed when the job is done - otherwise VideoMetadata is null. VideoMetadata videoMetaData = textDetectionResponse.videoMetadata(); System.out.println("Format: " + videoMetaData.format()); System.out.println("Codec: " + videoMetaData.codec()); System.out.println("Duration: " + videoMetaData.durationMillis()); System.out.println("FrameRate: " + videoMetaData.frameRate()); System.out.println("Job"); List<TextDetectionResult> labels = textDetectionResponse.textDetections(); for (TextDetectionResult detectedText : labels) { System.out.println("Confidence: " + detectedText.textDetection().confidence().toString()); System.out.println("Id : " + detectedText.textDetection().id()); System.out.println("Parent Id: " + detectedText.textDetection().parentId()); System.out.println("Type: " + detectedText.textDetection().type()); System.out.println("Text: " + detectedText.textDetection().detectedText()); System.out.println(); } } while (textDetectionResponse != null && textDetectionResponse.nextToken() != null); } catch (RekognitionException | InterruptedException e) { System.out.println(e.getMessage()); System.exit(1); } } }

Detect people in a video stored in an Amazon S3 bucket.

import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.rekognition.RekognitionClient; import software.amazon.awssdk.services.rekognition.model.S3Object; import software.amazon.awssdk.services.rekognition.model.NotificationChannel; import software.amazon.awssdk.services.rekognition.model.StartPersonTrackingRequest; import software.amazon.awssdk.services.rekognition.model.Video; import software.amazon.awssdk.services.rekognition.model.StartPersonTrackingResponse; import software.amazon.awssdk.services.rekognition.model.RekognitionException; import software.amazon.awssdk.services.rekognition.model.GetPersonTrackingResponse; import software.amazon.awssdk.services.rekognition.model.GetPersonTrackingRequest; import software.amazon.awssdk.services.rekognition.model.VideoMetadata; import software.amazon.awssdk.services.rekognition.model.PersonDetection; import java.util.List; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class VideoPersonDetection { private static String startJobId = ""; public static void main(String[] args) { final String usage = """ Usage: <bucket> <video> <topicArn> <roleArn> Where: bucket - The name of the bucket in which the video is located (for example, (for example, myBucket).\s video - The name of video (for example, people.mp4).\s topicArn - The ARN of the Amazon Simple Notification Service (Amazon SNS) topic.\s roleArn - The ARN of the AWS Identity and Access Management (IAM) role to use.\s """; if (args.length != 4) { System.out.println(usage); System.exit(1); } String bucket = args[0]; String video = args[1]; String topicArn = args[2]; String roleArn = args[3]; Region region = Region.US_EAST_1; RekognitionClient rekClient = RekognitionClient.builder() .region(region) .build(); NotificationChannel channel = NotificationChannel.builder() .snsTopicArn(topicArn) .roleArn(roleArn) .build(); startPersonLabels(rekClient, channel, bucket, video); getPersonDetectionResults(rekClient); System.out.println("This example is done!"); rekClient.close(); } public static void startPersonLabels(RekognitionClient rekClient, NotificationChannel channel, String bucket, String video) { try { S3Object s3Obj = S3Object.builder() .bucket(bucket) .name(video) .build(); Video vidOb = Video.builder() .s3Object(s3Obj) .build(); StartPersonTrackingRequest personTrackingRequest = StartPersonTrackingRequest.builder() .jobTag("DetectingLabels") .video(vidOb) .notificationChannel(channel) .build(); StartPersonTrackingResponse labelDetectionResponse = rekClient.startPersonTracking(personTrackingRequest); startJobId = labelDetectionResponse.jobId(); } catch (RekognitionException e) { System.out.println(e.getMessage()); System.exit(1); } } public static void getPersonDetectionResults(RekognitionClient rekClient) { try { String paginationToken = null; GetPersonTrackingResponse personTrackingResult = null; boolean finished = false; String status; int yy = 0; do { if (personTrackingResult != null) paginationToken = personTrackingResult.nextToken(); GetPersonTrackingRequest recognitionRequest = GetPersonTrackingRequest.builder() .jobId(startJobId) .nextToken(paginationToken) .maxResults(10) .build(); // Wait until the job succeeds while (!finished) { personTrackingResult = rekClient.getPersonTracking(recognitionRequest); status = 
personTrackingResult.jobStatusAsString(); if (status.compareTo("SUCCEEDED") == 0) finished = true; else { System.out.println(yy + " status is: " + status); Thread.sleep(1000); } yy++; } finished = false; // Proceed when the job is done - otherwise VideoMetadata is null. VideoMetadata videoMetaData = personTrackingResult.videoMetadata(); System.out.println("Format: " + videoMetaData.format()); System.out.println("Codec: " + videoMetaData.codec()); System.out.println("Duration: " + videoMetaData.durationMillis()); System.out.println("FrameRate: " + videoMetaData.frameRate()); System.out.println("Job"); List<PersonDetection> detectedPersons = personTrackingResult.persons(); for (PersonDetection detectedPerson : detectedPersons) { long seconds = detectedPerson.timestamp() / 1000; System.out.print("Sec: " + seconds + " "); System.out.println("Person Identifier: " + detectedPerson.person().index()); System.out.println(); } } while (personTrackingResult != null && personTrackingResult.nextToken() != null); } catch (RekognitionException | InterruptedException e) { System.out.println(e.getMessage()); System.exit(1); } } }

The following code example shows how to build an app that uses Amazon Rekognition to detect objects by category in images.

SDK for Java 2.x

Shows how to use the Amazon Rekognition Java API to create an app that identifies objects by category in images located in an Amazon Simple Storage Service (Amazon S3) bucket. The app uses Amazon Simple Email Service (Amazon SES) to send an email notification with the results to an administrator. A minimal sketch of the core service calls follows the service list below.

For the complete source code and instructions on how to set up and run the example, see the full example on GitHub.

Services used in this example
  • Amazon Rekognition

  • Amazon S3

  • Amazon SES
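
The full example on GitHub is a complete application; the following is only a minimal sketch of how the two core calls could be combined, assuming hypothetical resource names (myBucket, photo.jpg) and SES-verified sender and recipient addresses. DetectLabels reads the image from the Amazon S3 bucket and returns the detected object categories, and Amazon SES emails the results to the administrator. Error handling and the rest of the app are omitted.

import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.rekognition.RekognitionClient;
import software.amazon.awssdk.services.rekognition.model.DetectLabelsRequest;
import software.amazon.awssdk.services.rekognition.model.Image;
import software.amazon.awssdk.services.rekognition.model.Label;
import software.amazon.awssdk.services.rekognition.model.S3Object;
import software.amazon.awssdk.services.ses.SesClient;
import software.amazon.awssdk.services.ses.model.Body;
import software.amazon.awssdk.services.ses.model.Content;
import software.amazon.awssdk.services.ses.model.Destination;
import software.amazon.awssdk.services.ses.model.Message;
import software.amazon.awssdk.services.ses.model.SendEmailRequest;
import java.util.List;
import java.util.stream.Collectors;

public class PhotoLabelReportSketch {
    public static void main(String[] args) {
        // Hypothetical placeholders - replace with your own resources.
        String bucket = "myBucket";
        String photoKey = "photo.jpg";
        String sender = "sender@example.com";    // must be an SES-verified identity
        String recipient = "admin@example.com";  // must also be verified while SES is in sandbox mode

        Region region = Region.US_EAST_1;
        try (RekognitionClient rekClient = RekognitionClient.builder().region(region).build();
             SesClient sesClient = SesClient.builder().region(region).build()) {

            // Detect labels (object categories) in the image stored in the Amazon S3 bucket.
            DetectLabelsRequest labelsRequest = DetectLabelsRequest.builder()
                    .image(Image.builder()
                            .s3Object(S3Object.builder().bucket(bucket).name(photoKey).build())
                            .build())
                    .maxLabels(10)
                    .minConfidence(75F)
                    .build();

            List<Label> labels = rekClient.detectLabels(labelsRequest).labels();
            String report = labels.stream()
                    .map(label -> label.name() + " (" + label.confidence() + "%)")
                    .collect(Collectors.joining("\n"));

            // Email the label report to the administrator with Amazon SES.
            SendEmailRequest emailRequest = SendEmailRequest.builder()
                    .source(sender)
                    .destination(Destination.builder().toAddresses(recipient).build())
                    .message(Message.builder()
                            .subject(Content.builder().data("Labels detected in " + photoKey).build())
                            .body(Body.builder().text(Content.builder().data(report).build()).build())
                            .build())
                    .build();
            sesClient.sendEmail(emailRequest);
            System.out.println("Report sent for " + photoKey);
        }
    }
}

Because DetectLabels can read the object directly from Amazon S3, the app never has to download the image itself; only the label names and confidence scores travel in the email.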

The following code example shows how to use Amazon Rekognition to detect people and objects in a video.

SDK for Java 2.x

Shows how to use the Amazon Rekognition Java API to create an app that detects faces and objects in videos located in an Amazon Simple Storage Service (Amazon S3) bucket. The app uses Amazon Simple Email Service (Amazon SES) to send an email notification with the results to an administrator. A minimal sketch of this flow follows the service list below.

For the complete source code and instructions on how to set up and run the example, see the full example on GitHub.

Services used in this example
  • Amazon Rekognition

  • Amazon S3

  • Amazon SES
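
The full example on GitHub is a complete application; the following is only a minimal sketch of the face detection half of the flow, assuming hypothetical resource names (myBucket, people.mp4) and SES-verified sender and recipient addresses. StartFaceDetection begins an asynchronous job on the video in Amazon S3, GetFaceDetection is polled for the results (rather than using an SNS notification channel as the earlier examples in this section do), and Amazon SES emails a summary. Object detection follows the same pattern with StartLabelDetection and GetLabelDetection, as shown in the label detection example earlier in this section.

import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.rekognition.RekognitionClient;
import software.amazon.awssdk.services.rekognition.model.FaceDetection;
import software.amazon.awssdk.services.rekognition.model.GetFaceDetectionRequest;
import software.amazon.awssdk.services.rekognition.model.GetFaceDetectionResponse;
import software.amazon.awssdk.services.rekognition.model.S3Object;
import software.amazon.awssdk.services.rekognition.model.StartFaceDetectionRequest;
import software.amazon.awssdk.services.rekognition.model.Video;
import software.amazon.awssdk.services.ses.SesClient;
import software.amazon.awssdk.services.ses.model.Body;
import software.amazon.awssdk.services.ses.model.Content;
import software.amazon.awssdk.services.ses.model.Destination;
import software.amazon.awssdk.services.ses.model.Message;
import software.amazon.awssdk.services.ses.model.SendEmailRequest;

public class VideoFaceReportSketch {
    public static void main(String[] args) throws InterruptedException {
        // Hypothetical placeholders - replace with your own resources.
        String bucket = "myBucket";
        String videoKey = "people.mp4";
        String sender = "sender@example.com";    // must be an SES-verified identity
        String recipient = "admin@example.com";

        Region region = Region.US_EAST_1;
        try (RekognitionClient rekClient = RekognitionClient.builder().region(region).build();
             SesClient sesClient = SesClient.builder().region(region).build()) {

            // Start an asynchronous face detection job on the video stored in Amazon S3.
            String jobId = rekClient.startFaceDetection(StartFaceDetectionRequest.builder()
                    .video(Video.builder()
                            .s3Object(S3Object.builder().bucket(bucket).name(videoKey).build())
                            .build())
                    .build())
                    .jobId();

            // Poll until the job leaves the IN_PROGRESS state. Only the first page of
            // results is read here; a full app would follow nextToken() to page through them.
            GetFaceDetectionResponse result;
            do {
                Thread.sleep(5000);
                result = rekClient.getFaceDetection(GetFaceDetectionRequest.builder()
                        .jobId(jobId)
                        .build());
            } while ("IN_PROGRESS".equals(result.jobStatusAsString()));

            // Summarize the detected faces and their timestamps.
            StringBuilder summary = new StringBuilder("Analysis of " + videoKey + ":\n");
            for (FaceDetection face : result.faces()) {
                summary.append("Face at ").append(face.timestamp()).append(" ms, confidence ")
                        .append(face.face().confidence()).append("%\n");
            }

            // Email the summary to the administrator with Amazon SES.
            sesClient.sendEmail(SendEmailRequest.builder()
                    .source(sender)
                    .destination(Destination.builder().toAddresses(recipient).build())
                    .message(Message.builder()
                            .subject(Content.builder().data("Video analysis results").build())
                            .body(Body.builder()
                                    .text(Content.builder().data(summary.toString()).build())
                                    .build())
                            .build())
                    .build());
            System.out.println("Report sent for " + videoKey);
        }
    }
}

Polling keeps the sketch self-contained; for longer videos, the SNS and SQS notification pattern used by the earlier video examples in this section avoids repeated GetFaceDetection calls while the job is still running.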