Detect information in videos using Amazon Rekognition and an AWS SDK - AWS SDK Code Examples

There are more AWS SDK examples available in the AWS Doc SDK Examples GitHub repository.


Detect information in videos using Amazon Rekognition and an AWS SDK

The following code examples show how to:

  • Start Amazon Rekognition jobs to detect elements such as people, objects, and text in a video.

  • Check the status of the jobs until they finish.

  • Output the list of elements that each job detected. (A condensed sketch of this start/poll/get flow appears right after this list.)
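
For orientation, here is a condensed sketch of that flow, shown with label detection and a simple polling loop. It is only a sketch: the bucket and object names are placeholders, pagination and error handling are omitted, and the full examples that follow additionally register an Amazon SNS notification channel and an IAM role.

// Minimal sketch of the start/poll/get pattern. Bucket and object names are placeholders.
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.rekognition.RekognitionClient;
import software.amazon.awssdk.services.rekognition.model.*;

public class StartPollGetSketch {
    public static void main(String[] args) throws InterruptedException {
        try (RekognitionClient rek = RekognitionClient.builder().region(Region.US_EAST_1).build()) {
            // 1. Start an asynchronous job on a video stored in Amazon S3.
            Video video = Video.builder()
                    .s3Object(S3Object.builder().bucket("myBucket").name("people.mp4").build())
                    .build();
            String jobId = rek.startLabelDetection(StartLabelDetectionRequest.builder()
                    .video(video)
                    .minConfidence(50F)
                    .build()).jobId();

            // 2. Poll the job status until it is no longer IN_PROGRESS.
            GetLabelDetectionRequest get = GetLabelDetectionRequest.builder().jobId(jobId).build();
            GetLabelDetectionResponse result = rek.getLabelDetection(get);
            while (result.jobStatus() == VideoJobStatus.IN_PROGRESS) {
                Thread.sleep(1000);
                result = rek.getLabelDetection(get);
            }

            // 3. Print the detected elements (first page of results only).
            for (LabelDetection detection : result.labels()) {
                System.out.println(detection.timestamp() + " ms: " + detection.label().name());
            }
        }
    }
}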

Java
SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Get celebrity results from a video located in an Amazon S3 bucket.

import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.rekognition.RekognitionClient;
import software.amazon.awssdk.services.rekognition.model.S3Object;
import software.amazon.awssdk.services.rekognition.model.NotificationChannel;
import software.amazon.awssdk.services.rekognition.model.Video;
import software.amazon.awssdk.services.rekognition.model.StartCelebrityRecognitionResponse;
import software.amazon.awssdk.services.rekognition.model.RekognitionException;
import software.amazon.awssdk.services.rekognition.model.CelebrityRecognitionSortBy;
import software.amazon.awssdk.services.rekognition.model.VideoMetadata;
import software.amazon.awssdk.services.rekognition.model.CelebrityRecognition;
import software.amazon.awssdk.services.rekognition.model.CelebrityDetail;
import software.amazon.awssdk.services.rekognition.model.StartCelebrityRecognitionRequest;
import software.amazon.awssdk.services.rekognition.model.GetCelebrityRecognitionRequest;
import software.amazon.awssdk.services.rekognition.model.GetCelebrityRecognitionResponse;
import java.util.List;

/**
 * To run this code example, ensure that you perform the Prerequisites as stated
 * in the Amazon Rekognition Guide:
 * https://docs.aws.amazon.com/rekognition/latest/dg/video-analyzing-with-sqs.html
 *
 * Also, ensure that you set up your development environment, including your
 * credentials.
 *
 * For information, see this documentation topic:
 *
 * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html
 */
public class VideoCelebrityDetection {
    private static String startJobId = "";

    public static void main(String[] args) {
        final String usage = """

                Usage:    <bucket> <video> <topicArn> <roleArn>

                Where:
                   bucket - The name of the bucket in which the video is located (for example, myBucket).\s
                   video - The name of the video (for example, people.mp4).\s
                   topicArn - The ARN of the Amazon Simple Notification Service (Amazon SNS) topic.\s
                   roleArn - The ARN of the AWS Identity and Access Management (IAM) role to use.\s
                """;

        if (args.length != 4) {
            System.out.println(usage);
            System.exit(1);
        }

        String bucket = args[0];
        String video = args[1];
        String topicArn = args[2];
        String roleArn = args[3];

        Region region = Region.US_EAST_1;
        RekognitionClient rekClient = RekognitionClient.builder()
                .region(region)
                .build();

        NotificationChannel channel = NotificationChannel.builder()
                .snsTopicArn(topicArn)
                .roleArn(roleArn)
                .build();

        startCelebrityDetection(rekClient, channel, bucket, video);
        getCelebrityDetectionResults(rekClient);
        System.out.println("This example is done!");
        rekClient.close();
    }

    public static void startCelebrityDetection(RekognitionClient rekClient,
            NotificationChannel channel,
            String bucket,
            String video) {
        try {
            S3Object s3Obj = S3Object.builder()
                    .bucket(bucket)
                    .name(video)
                    .build();

            Video vidOb = Video.builder()
                    .s3Object(s3Obj)
                    .build();

            StartCelebrityRecognitionRequest recognitionRequest = StartCelebrityRecognitionRequest.builder()
                    .jobTag("Celebrities")
                    .notificationChannel(channel)
                    .video(vidOb)
                    .build();

            StartCelebrityRecognitionResponse startCelebrityRecognitionResult = rekClient
                    .startCelebrityRecognition(recognitionRequest);
            startJobId = startCelebrityRecognitionResult.jobId();

        } catch (RekognitionException e) {
            System.out.println(e.getMessage());
            System.exit(1);
        }
    }

    public static void getCelebrityDetectionResults(RekognitionClient rekClient) {
        try {
            String paginationToken = null;
            GetCelebrityRecognitionResponse recognitionResponse = null;
            boolean finished = false;
            String status;
            int yy = 0;

            do {
                if (recognitionResponse != null)
                    paginationToken = recognitionResponse.nextToken();

                GetCelebrityRecognitionRequest recognitionRequest = GetCelebrityRecognitionRequest.builder()
                        .jobId(startJobId)
                        .nextToken(paginationToken)
                        .sortBy(CelebrityRecognitionSortBy.TIMESTAMP)
                        .maxResults(10)
                        .build();

                // Wait until the job succeeds.
                while (!finished) {
                    recognitionResponse = rekClient.getCelebrityRecognition(recognitionRequest);
                    status = recognitionResponse.jobStatusAsString();

                    if (status.compareTo("SUCCEEDED") == 0)
                        finished = true;
                    else {
                        System.out.println(yy + " status is: " + status);
                        Thread.sleep(1000);
                    }
                    yy++;
                }

                finished = false;

                // Proceed when the job is done - otherwise VideoMetadata is null.
                VideoMetadata videoMetaData = recognitionResponse.videoMetadata();
                System.out.println("Format: " + videoMetaData.format());
                System.out.println("Codec: " + videoMetaData.codec());
                System.out.println("Duration: " + videoMetaData.durationMillis());
                System.out.println("FrameRate: " + videoMetaData.frameRate());
                System.out.println("Job");

                List<CelebrityRecognition> celebs = recognitionResponse.celebrities();
                for (CelebrityRecognition celeb : celebs) {
                    long seconds = celeb.timestamp() / 1000;
                    System.out.print("Sec: " + seconds + " ");
                    CelebrityDetail details = celeb.celebrity();
                    System.out.println("Name: " + details.name());
                    System.out.println("Id: " + details.id());
                    System.out.println();
                }

            } while (recognitionResponse.nextToken() != null);

        } catch (RekognitionException | InterruptedException e) {
            System.out.println(e.getMessage());
            System.exit(1);
        }
    }
}

Detect labels in a video by using a label detection operation. This example polls an Amazon SQS queue for the Amazon SNS notification that reports job completion, then reads the JobId and Status fields from the JSON message body.

import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.JsonMappingException;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.rekognition.RekognitionClient;
import software.amazon.awssdk.services.rekognition.model.StartLabelDetectionResponse;
import software.amazon.awssdk.services.rekognition.model.NotificationChannel;
import software.amazon.awssdk.services.rekognition.model.S3Object;
import software.amazon.awssdk.services.rekognition.model.Video;
import software.amazon.awssdk.services.rekognition.model.StartLabelDetectionRequest;
import software.amazon.awssdk.services.rekognition.model.GetLabelDetectionRequest;
import software.amazon.awssdk.services.rekognition.model.GetLabelDetectionResponse;
import software.amazon.awssdk.services.rekognition.model.RekognitionException;
import software.amazon.awssdk.services.rekognition.model.LabelDetectionSortBy;
import software.amazon.awssdk.services.rekognition.model.VideoMetadata;
import software.amazon.awssdk.services.rekognition.model.LabelDetection;
import software.amazon.awssdk.services.rekognition.model.Label;
import software.amazon.awssdk.services.rekognition.model.Instance;
import software.amazon.awssdk.services.rekognition.model.Parent;
import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.Message;
import software.amazon.awssdk.services.sqs.model.ReceiveMessageRequest;
import software.amazon.awssdk.services.sqs.model.DeleteMessageRequest;
import java.util.List;

/**
 * Before running this Java V2 code example, set up your development
 * environment, including your credentials.
 *
 * For more information, see the following documentation topic:
 *
 * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html
 */
public class VideoDetect {
    private static String startJobId = "";

    public static void main(String[] args) {
        final String usage = """

                Usage:    <bucket> <video> <queueUrl> <topicArn> <roleArn>

                Where:
                   bucket - The name of the bucket in which the video is located (for example, myBucket).\s
                   video - The name of the video (for example, people.mp4).\s
                   queueUrl - The URL of an SQS queue.\s
                   topicArn - The ARN of the Amazon Simple Notification Service (Amazon SNS) topic.\s
                   roleArn - The ARN of the AWS Identity and Access Management (IAM) role to use.\s
                """;

        if (args.length != 5) {
            System.out.println(usage);
            System.exit(1);
        }

        String bucket = args[0];
        String video = args[1];
        String queueUrl = args[2];
        String topicArn = args[3];
        String roleArn = args[4];

        Region region = Region.US_EAST_1;
        RekognitionClient rekClient = RekognitionClient.builder()
                .region(region)
                .build();

        SqsClient sqs = SqsClient.builder()
                .region(Region.US_EAST_1)
                .build();

        NotificationChannel channel = NotificationChannel.builder()
                .snsTopicArn(topicArn)
                .roleArn(roleArn)
                .build();

        startLabels(rekClient, channel, bucket, video);
        getLabelJob(rekClient, sqs, queueUrl);
        System.out.println("This example is done!");
        sqs.close();
        rekClient.close();
    }

    public static void startLabels(RekognitionClient rekClient,
            NotificationChannel channel,
            String bucket,
            String video) {
        try {
            S3Object s3Obj = S3Object.builder()
                    .bucket(bucket)
                    .name(video)
                    .build();

            Video vidOb = Video.builder()
                    .s3Object(s3Obj)
                    .build();

            StartLabelDetectionRequest labelDetectionRequest = StartLabelDetectionRequest.builder()
                    .jobTag("DetectingLabels")
                    .notificationChannel(channel)
                    .video(vidOb)
                    .minConfidence(50F)
                    .build();

            StartLabelDetectionResponse labelDetectionResponse = rekClient.startLabelDetection(labelDetectionRequest);
            startJobId = labelDetectionResponse.jobId();

            boolean ans = true;
            String status = "";
            int yy = 0;
            while (ans) {
                GetLabelDetectionRequest detectionRequest = GetLabelDetectionRequest.builder()
                        .jobId(startJobId)
                        .maxResults(10)
                        .build();

                GetLabelDetectionResponse result = rekClient.getLabelDetection(detectionRequest);
                status = result.jobStatusAsString();

                if (status.compareTo("SUCCEEDED") == 0)
                    ans = false;
                else
                    System.out.println(yy + " status is: " + status);

                Thread.sleep(1000);
                yy++;
            }

            System.out.println(startJobId + " status is: " + status);

        } catch (RekognitionException | InterruptedException e) {
            System.out.println(e.getMessage());
            System.exit(1);
        }
    }

    public static void getLabelJob(RekognitionClient rekClient, SqsClient sqs, String queueUrl) {
        List<Message> messages;
        ReceiveMessageRequest messageRequest = ReceiveMessageRequest.builder()
                .queueUrl(queueUrl)
                .build();

        try {
            messages = sqs.receiveMessage(messageRequest).messages();
            if (!messages.isEmpty()) {
                for (Message message : messages) {
                    String notification = message.body();

                    // Get the status and job id from the notification.
                    ObjectMapper mapper = new ObjectMapper();
                    JsonNode jsonMessageTree = mapper.readTree(notification);
                    JsonNode messageBodyText = jsonMessageTree.get("Message");
                    ObjectMapper operationResultMapper = new ObjectMapper();
                    JsonNode jsonResultTree = operationResultMapper.readTree(messageBodyText.textValue());
                    JsonNode operationJobId = jsonResultTree.get("JobId");
                    JsonNode operationStatus = jsonResultTree.get("Status");
                    System.out.println("Job found in JSON is " + operationJobId);

                    // The receipt handle is required to delete the message from the queue.
                    DeleteMessageRequest deleteMessageRequest = DeleteMessageRequest.builder()
                            .queueUrl(queueUrl)
                            .receiptHandle(message.receiptHandle())
                            .build();

                    String jobId = operationJobId.textValue();
                    if (startJobId.compareTo(jobId) == 0) {
                        System.out.println("Job id: " + operationJobId);
                        System.out.println("Status : " + operationStatus.toString());

                        if (operationStatus.asText().equals("SUCCEEDED"))
                            getResultsLabels(rekClient);
                        else
                            System.out.println("Video analysis failed");

                        sqs.deleteMessage(deleteMessageRequest);
                    } else {
                        System.out.println("Job received was not job " + startJobId);
                        sqs.deleteMessage(deleteMessageRequest);
                    }
                }
            }
        } catch (RekognitionException e) {
            System.out.println(e.getMessage());
            System.exit(1);
        } catch (JsonMappingException e) {
            e.printStackTrace();
        } catch (JsonProcessingException e) {
            e.printStackTrace();
        }
    }

    // Gets the job results by calling GetLabelDetection.
    private static void getResultsLabels(RekognitionClient rekClient) {
        int maxResults = 10;
        String paginationToken = null;
        GetLabelDetectionResponse labelDetectionResult = null;

        try {
            do {
                if (labelDetectionResult != null)
                    paginationToken = labelDetectionResult.nextToken();

                GetLabelDetectionRequest labelDetectionRequest = GetLabelDetectionRequest.builder()
                        .jobId(startJobId)
                        .sortBy(LabelDetectionSortBy.TIMESTAMP)
                        .maxResults(maxResults)
                        .nextToken(paginationToken)
                        .build();

                labelDetectionResult = rekClient.getLabelDetection(labelDetectionRequest);
                VideoMetadata videoMetaData = labelDetectionResult.videoMetadata();
                System.out.println("Format: " + videoMetaData.format());
                System.out.println("Codec: " + videoMetaData.codec());
                System.out.println("Duration: " + videoMetaData.durationMillis());
                System.out.println("FrameRate: " + videoMetaData.frameRate());

                List<LabelDetection> detectedLabels = labelDetectionResult.labels();
                for (LabelDetection detectedLabel : detectedLabels) {
                    long millis = detectedLabel.timestamp();
                    Label label = detectedLabel.label();
                    System.out.println("Millisecond: " + millis + " ");
                    System.out.println("   Label:" + label.name());
                    System.out.println("   Confidence:" + detectedLabel.label().confidence().toString());

                    List<Instance> instances = label.instances();
                    System.out.println("   Instances of " + label.name());
                    if (instances.isEmpty()) {
                        System.out.println("      " + "None");
                    } else {
                        for (Instance instance : instances) {
                            System.out.println("      Confidence: " + instance.confidence().toString());
                            System.out.println("      Bounding box: " + instance.boundingBox().toString());
                        }
                    }

                    System.out.println("   Parent labels for " + label.name() + ":");
                    List<Parent> parents = label.parents();
                    if (parents.isEmpty()) {
                        System.out.println("      None");
                    } else {
                        for (Parent parent : parents) {
                            System.out.println("      " + parent.name());
                        }
                    }
                    System.out.println();
                }
            } while (labelDetectionResult != null && labelDetectionResult.nextToken() != null);

        } catch (RekognitionException e) {
            System.out.println(e.getMessage());
            System.exit(1);
        }
    }
}

Detect faces in a video stored in an Amazon S3 bucket.

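For the SDK for Java, face detection in a stored video follows the same pattern with the StartFaceDetection and GetFaceDetection operations. The following is only a minimal sketch of that flow (placeholder bucket, topic, and role values; no argument parsing or pagination), not the complete example from the code repository; the Kotlin example later in this topic shows the same two operations.

import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.rekognition.RekognitionClient;
import software.amazon.awssdk.services.rekognition.model.*;

/**
 * Minimal sketch: start a face detection job on a video in Amazon S3,
 * poll until it finishes, and print a few face attributes.
 * Bucket, video, topic, and role values are placeholders.
 */
public class VideoFaceDetectionSketch {
    public static void main(String[] args) throws InterruptedException {
        String bucket = "myBucket";                                                     // placeholder
        String video = "people.mp4";                                                    // placeholder
        String topicArn = "arn:aws:sns:us-east-1:111122223333:AmazonRekognitionTopic";  // placeholder
        String roleArn = "arn:aws:iam::111122223333:role/RekognitionVideoRole";         // placeholder

        try (RekognitionClient rekClient = RekognitionClient.builder().region(Region.US_EAST_1).build()) {
            NotificationChannel channel = NotificationChannel.builder()
                    .snsTopicArn(topicArn)
                    .roleArn(roleArn)
                    .build();

            Video vidOb = Video.builder()
                    .s3Object(S3Object.builder().bucket(bucket).name(video).build())
                    .build();

            // Start the asynchronous face detection job.
            String jobId = rekClient.startFaceDetection(StartFaceDetectionRequest.builder()
                    .jobTag("Faces")
                    .faceAttributes(FaceAttributes.ALL)
                    .notificationChannel(channel)
                    .video(vidOb)
                    .build()).jobId();

            // Poll until the job is no longer in progress.
            GetFaceDetectionRequest getRequest = GetFaceDetectionRequest.builder()
                    .jobId(jobId)
                    .maxResults(10)
                    .build();
            GetFaceDetectionResponse response = rekClient.getFaceDetection(getRequest);
            while (response.jobStatus() == VideoJobStatus.IN_PROGRESS) {
                Thread.sleep(1000);
                response = rekClient.getFaceDetection(getRequest);
            }

            // Print face details from the first page of results.
            VideoMetadata meta = response.videoMetadata();
            System.out.println("Format: " + meta.format() + ", duration: " + meta.durationMillis() + " ms");
            for (FaceDetection face : response.faces()) {
                System.out.println("Sec: " + face.timestamp() / 1000
                        + " Age range: " + face.face().ageRange()
                        + " Smile: " + face.face().smile());
            }
        }
    }
}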

Detect inappropriate or offensive content in a video stored in an Amazon S3 bucket.

import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.rekognition.RekognitionClient;
import software.amazon.awssdk.services.rekognition.model.NotificationChannel;
import software.amazon.awssdk.services.rekognition.model.S3Object;
import software.amazon.awssdk.services.rekognition.model.Video;
import software.amazon.awssdk.services.rekognition.model.StartContentModerationRequest;
import software.amazon.awssdk.services.rekognition.model.StartContentModerationResponse;
import software.amazon.awssdk.services.rekognition.model.RekognitionException;
import software.amazon.awssdk.services.rekognition.model.GetContentModerationResponse;
import software.amazon.awssdk.services.rekognition.model.GetContentModerationRequest;
import software.amazon.awssdk.services.rekognition.model.VideoMetadata;
import software.amazon.awssdk.services.rekognition.model.ContentModerationDetection;
import java.util.List;

/**
 * Before running this Java V2 code example, set up your development
 * environment, including your credentials.
 *
 * For more information, see the following documentation topic:
 *
 * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html
 */
public class VideoDetectInappropriate {
    private static String startJobId = "";

    public static void main(String[] args) {
        final String usage = """

                Usage:    <bucket> <video> <topicArn> <roleArn>

                Where:
                   bucket - The name of the bucket in which the video is located (for example, myBucket).\s
                   video - The name of the video (for example, people.mp4).\s
                   topicArn - The ARN of the Amazon Simple Notification Service (Amazon SNS) topic.\s
                   roleArn - The ARN of the AWS Identity and Access Management (IAM) role to use.\s
                """;

        if (args.length != 4) {
            System.out.println(usage);
            System.exit(1);
        }

        String bucket = args[0];
        String video = args[1];
        String topicArn = args[2];
        String roleArn = args[3];

        Region region = Region.US_EAST_1;
        RekognitionClient rekClient = RekognitionClient.builder()
                .region(region)
                .build();

        NotificationChannel channel = NotificationChannel.builder()
                .snsTopicArn(topicArn)
                .roleArn(roleArn)
                .build();

        startModerationDetection(rekClient, channel, bucket, video);
        getModResults(rekClient);
        System.out.println("This example is done!");
        rekClient.close();
    }

    public static void startModerationDetection(RekognitionClient rekClient,
            NotificationChannel channel,
            String bucket,
            String video) {
        try {
            S3Object s3Obj = S3Object.builder()
                    .bucket(bucket)
                    .name(video)
                    .build();

            Video vidOb = Video.builder()
                    .s3Object(s3Obj)
                    .build();

            StartContentModerationRequest modDetectionRequest = StartContentModerationRequest.builder()
                    .jobTag("Moderation")
                    .notificationChannel(channel)
                    .video(vidOb)
                    .build();

            StartContentModerationResponse startModDetectionResult = rekClient
                    .startContentModeration(modDetectionRequest);
            startJobId = startModDetectionResult.jobId();

        } catch (RekognitionException e) {
            System.out.println(e.getMessage());
            System.exit(1);
        }
    }

    public static void getModResults(RekognitionClient rekClient) {
        try {
            String paginationToken = null;
            GetContentModerationResponse modDetectionResponse = null;
            boolean finished = false;
            String status;
            int yy = 0;

            do {
                if (modDetectionResponse != null)
                    paginationToken = modDetectionResponse.nextToken();

                GetContentModerationRequest modRequest = GetContentModerationRequest.builder()
                        .jobId(startJobId)
                        .nextToken(paginationToken)
                        .maxResults(10)
                        .build();

                // Wait until the job succeeds.
                while (!finished) {
                    modDetectionResponse = rekClient.getContentModeration(modRequest);
                    status = modDetectionResponse.jobStatusAsString();

                    if (status.compareTo("SUCCEEDED") == 0)
                        finished = true;
                    else {
                        System.out.println(yy + " status is: " + status);
                        Thread.sleep(1000);
                    }
                    yy++;
                }

                finished = false;

                // Proceed when the job is done - otherwise VideoMetadata is null.
                VideoMetadata videoMetaData = modDetectionResponse.videoMetadata();
                System.out.println("Format: " + videoMetaData.format());
                System.out.println("Codec: " + videoMetaData.codec());
                System.out.println("Duration: " + videoMetaData.durationMillis());
                System.out.println("FrameRate: " + videoMetaData.frameRate());
                System.out.println("Job");

                List<ContentModerationDetection> mods = modDetectionResponse.moderationLabels();
                for (ContentModerationDetection mod : mods) {
                    long seconds = mod.timestamp() / 1000;
                    System.out.print("Mod label: " + seconds + " ");
                    System.out.println(mod.moderationLabel().toString());
                    System.out.println();
                }

            } while (modDetectionResponse != null && modDetectionResponse.nextToken() != null);

        } catch (RekognitionException | InterruptedException e) {
            System.out.println(e.getMessage());
            System.exit(1);
        }
    }
}

Detect technical cue segments and shot detection segments in a video stored in an Amazon S3 bucket.

import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.rekognition.RekognitionClient;
import software.amazon.awssdk.services.rekognition.model.S3Object;
import software.amazon.awssdk.services.rekognition.model.NotificationChannel;
import software.amazon.awssdk.services.rekognition.model.Video;
import software.amazon.awssdk.services.rekognition.model.StartShotDetectionFilter;
import software.amazon.awssdk.services.rekognition.model.StartTechnicalCueDetectionFilter;
import software.amazon.awssdk.services.rekognition.model.StartSegmentDetectionFilters;
import software.amazon.awssdk.services.rekognition.model.StartSegmentDetectionRequest;
import software.amazon.awssdk.services.rekognition.model.StartSegmentDetectionResponse;
import software.amazon.awssdk.services.rekognition.model.RekognitionException;
import software.amazon.awssdk.services.rekognition.model.GetSegmentDetectionResponse;
import software.amazon.awssdk.services.rekognition.model.GetSegmentDetectionRequest;
import software.amazon.awssdk.services.rekognition.model.VideoMetadata;
import software.amazon.awssdk.services.rekognition.model.SegmentDetection;
import software.amazon.awssdk.services.rekognition.model.TechnicalCueSegment;
import software.amazon.awssdk.services.rekognition.model.ShotSegment;
import software.amazon.awssdk.services.rekognition.model.SegmentType;
import software.amazon.awssdk.services.sqs.SqsClient;
import java.util.List;

/**
 * Before running this Java V2 code example, set up your development
 * environment, including your credentials.
 *
 * For more information, see the following documentation topic:
 *
 * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html
 */
public class VideoDetectSegment {
    private static String startJobId = "";

    public static void main(String[] args) {
        final String usage = """

                Usage:    <bucket> <video> <topicArn> <roleArn>

                Where:
                   bucket - The name of the bucket in which the video is located (for example, myBucket).\s
                   video - The name of the video (for example, people.mp4).\s
                   topicArn - The ARN of the Amazon Simple Notification Service (Amazon SNS) topic.\s
                   roleArn - The ARN of the AWS Identity and Access Management (IAM) role to use.\s
                """;

        if (args.length != 4) {
            System.out.println(usage);
            System.exit(1);
        }

        String bucket = args[0];
        String video = args[1];
        String topicArn = args[2];
        String roleArn = args[3];

        Region region = Region.US_EAST_1;
        RekognitionClient rekClient = RekognitionClient.builder()
                .region(region)
                .build();

        SqsClient sqs = SqsClient.builder()
                .region(Region.US_EAST_1)
                .build();

        NotificationChannel channel = NotificationChannel.builder()
                .snsTopicArn(topicArn)
                .roleArn(roleArn)
                .build();

        startSegmentDetection(rekClient, channel, bucket, video);
        getSegmentResults(rekClient);
        System.out.println("This example is done!");
        sqs.close();
        rekClient.close();
    }

    public static void startSegmentDetection(RekognitionClient rekClient,
            NotificationChannel channel,
            String bucket,
            String video) {
        try {
            S3Object s3Obj = S3Object.builder()
                    .bucket(bucket)
                    .name(video)
                    .build();

            Video vidOb = Video.builder()
                    .s3Object(s3Obj)
                    .build();

            StartShotDetectionFilter cueDetectionFilter = StartShotDetectionFilter.builder()
                    .minSegmentConfidence(60F)
                    .build();

            StartTechnicalCueDetectionFilter technicalCueDetectionFilter = StartTechnicalCueDetectionFilter.builder()
                    .minSegmentConfidence(60F)
                    .build();

            StartSegmentDetectionFilters filters = StartSegmentDetectionFilters.builder()
                    .shotFilter(cueDetectionFilter)
                    .technicalCueFilter(technicalCueDetectionFilter)
                    .build();

            StartSegmentDetectionRequest segDetectionRequest = StartSegmentDetectionRequest.builder()
                    .jobTag("DetectingLabels")
                    .notificationChannel(channel)
                    .segmentTypes(SegmentType.TECHNICAL_CUE, SegmentType.SHOT)
                    .video(vidOb)
                    .filters(filters)
                    .build();

            StartSegmentDetectionResponse segDetectionResponse = rekClient.startSegmentDetection(segDetectionRequest);
            startJobId = segDetectionResponse.jobId();

        } catch (RekognitionException e) {
            System.out.println(e.getMessage());
            System.exit(1);
        }
    }

    public static void getSegmentResults(RekognitionClient rekClient) {
        try {
            String paginationToken = null;
            GetSegmentDetectionResponse segDetectionResponse = null;
            boolean finished = false;
            String status;
            int yy = 0;

            do {
                if (segDetectionResponse != null)
                    paginationToken = segDetectionResponse.nextToken();

                GetSegmentDetectionRequest recognitionRequest = GetSegmentDetectionRequest.builder()
                        .jobId(startJobId)
                        .nextToken(paginationToken)
                        .maxResults(10)
                        .build();

                // Wait until the job succeeds.
                while (!finished) {
                    segDetectionResponse = rekClient.getSegmentDetection(recognitionRequest);
                    status = segDetectionResponse.jobStatusAsString();

                    if (status.compareTo("SUCCEEDED") == 0)
                        finished = true;
                    else {
                        System.out.println(yy + " status is: " + status);
                        Thread.sleep(1000);
                    }
                    yy++;
                }

                finished = false;

                // Proceed when the job is done - otherwise VideoMetadata is null.
                List<VideoMetadata> videoMetaData = segDetectionResponse.videoMetadata();
                for (VideoMetadata metaData : videoMetaData) {
                    System.out.println("Format: " + metaData.format());
                    System.out.println("Codec: " + metaData.codec());
                    System.out.println("Duration: " + metaData.durationMillis());
                    System.out.println("FrameRate: " + metaData.frameRate());
                    System.out.println("Job");
                }

                List<SegmentDetection> detectedSegments = segDetectionResponse.segments();
                for (SegmentDetection detectedSegment : detectedSegments) {
                    String type = detectedSegment.type().toString();

                    if (type.contains(SegmentType.TECHNICAL_CUE.toString())) {
                        System.out.println("Technical Cue");
                        TechnicalCueSegment segmentCue = detectedSegment.technicalCueSegment();
                        System.out.println("\tType: " + segmentCue.type());
                        System.out.println("\tConfidence: " + segmentCue.confidence().toString());
                    }

                    if (type.contains(SegmentType.SHOT.toString())) {
                        System.out.println("Shot");
                        ShotSegment segmentShot = detectedSegment.shotSegment();
                        System.out.println("\tIndex " + segmentShot.index());
                        System.out.println("\tConfidence: " + segmentShot.confidence().toString());
                    }

                    long seconds = detectedSegment.durationMillis();
                    System.out.println("\tDuration : " + seconds + " milliseconds");
                    System.out.println("\tStart time code: " + detectedSegment.startTimecodeSMPTE());
                    System.out.println("\tEnd time code: " + detectedSegment.endTimecodeSMPTE());
                    System.out.println("\tDuration time code: " + detectedSegment.durationSMPTE());
                    System.out.println();
                }

            } while (segDetectionResponse != null && segDetectionResponse.nextToken() != null);

        } catch (RekognitionException | InterruptedException e) {
            System.out.println(e.getMessage());
            System.exit(1);
        }
    }
}

Detect text in a video stored in an Amazon S3 bucket.

import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.rekognition.RekognitionClient;
import software.amazon.awssdk.services.rekognition.model.S3Object;
import software.amazon.awssdk.services.rekognition.model.NotificationChannel;
import software.amazon.awssdk.services.rekognition.model.Video;
import software.amazon.awssdk.services.rekognition.model.StartTextDetectionRequest;
import software.amazon.awssdk.services.rekognition.model.StartTextDetectionResponse;
import software.amazon.awssdk.services.rekognition.model.RekognitionException;
import software.amazon.awssdk.services.rekognition.model.GetTextDetectionResponse;
import software.amazon.awssdk.services.rekognition.model.GetTextDetectionRequest;
import software.amazon.awssdk.services.rekognition.model.VideoMetadata;
import software.amazon.awssdk.services.rekognition.model.TextDetectionResult;
import java.util.List;

/**
 * Before running this Java V2 code example, set up your development
 * environment, including your credentials.
 *
 * For more information, see the following documentation topic:
 *
 * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html
 */
public class VideoDetectText {
    private static String startJobId = "";

    public static void main(String[] args) {
        final String usage = """

                Usage:    <bucket> <video> <topicArn> <roleArn>

                Where:
                   bucket - The name of the bucket in which the video is located (for example, myBucket).\s
                   video - The name of the video (for example, people.mp4).\s
                   topicArn - The ARN of the Amazon Simple Notification Service (Amazon SNS) topic.\s
                   roleArn - The ARN of the AWS Identity and Access Management (IAM) role to use.\s
                """;

        if (args.length != 4) {
            System.out.println(usage);
            System.exit(1);
        }

        String bucket = args[0];
        String video = args[1];
        String topicArn = args[2];
        String roleArn = args[3];

        Region region = Region.US_EAST_1;
        RekognitionClient rekClient = RekognitionClient.builder()
                .region(region)
                .build();

        NotificationChannel channel = NotificationChannel.builder()
                .snsTopicArn(topicArn)
                .roleArn(roleArn)
                .build();

        startTextLabels(rekClient, channel, bucket, video);
        getTextResults(rekClient);
        System.out.println("This example is done!");
        rekClient.close();
    }

    public static void startTextLabels(RekognitionClient rekClient,
            NotificationChannel channel,
            String bucket,
            String video) {
        try {
            S3Object s3Obj = S3Object.builder()
                    .bucket(bucket)
                    .name(video)
                    .build();

            Video vidOb = Video.builder()
                    .s3Object(s3Obj)
                    .build();

            StartTextDetectionRequest labelDetectionRequest = StartTextDetectionRequest.builder()
                    .jobTag("DetectingLabels")
                    .notificationChannel(channel)
                    .video(vidOb)
                    .build();

            StartTextDetectionResponse labelDetectionResponse = rekClient.startTextDetection(labelDetectionRequest);
            startJobId = labelDetectionResponse.jobId();

        } catch (RekognitionException e) {
            System.out.println(e.getMessage());
            System.exit(1);
        }
    }

    public static void getTextResults(RekognitionClient rekClient) {
        try {
            String paginationToken = null;
            GetTextDetectionResponse textDetectionResponse = null;
            boolean finished = false;
            String status;
            int yy = 0;

            do {
                if (textDetectionResponse != null)
                    paginationToken = textDetectionResponse.nextToken();

                GetTextDetectionRequest recognitionRequest = GetTextDetectionRequest.builder()
                        .jobId(startJobId)
                        .nextToken(paginationToken)
                        .maxResults(10)
                        .build();

                // Wait until the job succeeds.
                while (!finished) {
                    textDetectionResponse = rekClient.getTextDetection(recognitionRequest);
                    status = textDetectionResponse.jobStatusAsString();

                    if (status.compareTo("SUCCEEDED") == 0)
                        finished = true;
                    else {
                        System.out.println(yy + " status is: " + status);
                        Thread.sleep(1000);
                    }
                    yy++;
                }

                finished = false;

                // Proceed when the job is done - otherwise VideoMetadata is null.
                VideoMetadata videoMetaData = textDetectionResponse.videoMetadata();
                System.out.println("Format: " + videoMetaData.format());
                System.out.println("Codec: " + videoMetaData.codec());
                System.out.println("Duration: " + videoMetaData.durationMillis());
                System.out.println("FrameRate: " + videoMetaData.frameRate());
                System.out.println("Job");

                List<TextDetectionResult> labels = textDetectionResponse.textDetections();
                for (TextDetectionResult detectedText : labels) {
                    System.out.println("Confidence: " + detectedText.textDetection().confidence().toString());
                    System.out.println("Id : " + detectedText.textDetection().id());
                    System.out.println("Parent Id: " + detectedText.textDetection().parentId());
                    System.out.println("Type: " + detectedText.textDetection().type());
                    System.out.println("Text: " + detectedText.textDetection().detectedText());
                    System.out.println();
                }

            } while (textDetectionResponse != null && textDetectionResponse.nextToken() != null);

        } catch (RekognitionException | InterruptedException e) {
            System.out.println(e.getMessage());
            System.exit(1);
        }
    }
}

Detect people in a video stored in an Amazon S3 bucket.

import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.rekognition.RekognitionClient;
import software.amazon.awssdk.services.rekognition.model.S3Object;
import software.amazon.awssdk.services.rekognition.model.NotificationChannel;
import software.amazon.awssdk.services.rekognition.model.StartPersonTrackingRequest;
import software.amazon.awssdk.services.rekognition.model.Video;
import software.amazon.awssdk.services.rekognition.model.StartPersonTrackingResponse;
import software.amazon.awssdk.services.rekognition.model.RekognitionException;
import software.amazon.awssdk.services.rekognition.model.GetPersonTrackingResponse;
import software.amazon.awssdk.services.rekognition.model.GetPersonTrackingRequest;
import software.amazon.awssdk.services.rekognition.model.VideoMetadata;
import software.amazon.awssdk.services.rekognition.model.PersonDetection;
import java.util.List;

/**
 * Before running this Java V2 code example, set up your development
 * environment, including your credentials.
 *
 * For more information, see the following documentation topic:
 *
 * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html
 */
public class VideoPersonDetection {
    private static String startJobId = "";

    public static void main(String[] args) {
        final String usage = """

                Usage:    <bucket> <video> <topicArn> <roleArn>

                Where:
                   bucket - The name of the bucket in which the video is located (for example, myBucket).\s
                   video - The name of the video (for example, people.mp4).\s
                   topicArn - The ARN of the Amazon Simple Notification Service (Amazon SNS) topic.\s
                   roleArn - The ARN of the AWS Identity and Access Management (IAM) role to use.\s
                """;

        if (args.length != 4) {
            System.out.println(usage);
            System.exit(1);
        }

        String bucket = args[0];
        String video = args[1];
        String topicArn = args[2];
        String roleArn = args[3];

        Region region = Region.US_EAST_1;
        RekognitionClient rekClient = RekognitionClient.builder()
                .region(region)
                .build();

        NotificationChannel channel = NotificationChannel.builder()
                .snsTopicArn(topicArn)
                .roleArn(roleArn)
                .build();

        startPersonLabels(rekClient, channel, bucket, video);
        getPersonDetectionResults(rekClient);
        System.out.println("This example is done!");
        rekClient.close();
    }

    public static void startPersonLabels(RekognitionClient rekClient,
            NotificationChannel channel,
            String bucket,
            String video) {
        try {
            S3Object s3Obj = S3Object.builder()
                    .bucket(bucket)
                    .name(video)
                    .build();

            Video vidOb = Video.builder()
                    .s3Object(s3Obj)
                    .build();

            StartPersonTrackingRequest personTrackingRequest = StartPersonTrackingRequest.builder()
                    .jobTag("DetectingLabels")
                    .video(vidOb)
                    .notificationChannel(channel)
                    .build();

            StartPersonTrackingResponse labelDetectionResponse = rekClient.startPersonTracking(personTrackingRequest);
            startJobId = labelDetectionResponse.jobId();

        } catch (RekognitionException e) {
            System.out.println(e.getMessage());
            System.exit(1);
        }
    }

    public static void getPersonDetectionResults(RekognitionClient rekClient) {
        try {
            String paginationToken = null;
            GetPersonTrackingResponse personTrackingResult = null;
            boolean finished = false;
            String status;
            int yy = 0;

            do {
                if (personTrackingResult != null)
                    paginationToken = personTrackingResult.nextToken();

                GetPersonTrackingRequest recognitionRequest = GetPersonTrackingRequest.builder()
                        .jobId(startJobId)
                        .nextToken(paginationToken)
                        .maxResults(10)
                        .build();

                // Wait until the job succeeds.
                while (!finished) {
                    personTrackingResult = rekClient.getPersonTracking(recognitionRequest);
                    status = personTrackingResult.jobStatusAsString();

                    if (status.compareTo("SUCCEEDED") == 0)
                        finished = true;
                    else {
                        System.out.println(yy + " status is: " + status);
                        Thread.sleep(1000);
                    }
                    yy++;
                }

                finished = false;

                // Proceed when the job is done - otherwise VideoMetadata is null.
                VideoMetadata videoMetaData = personTrackingResult.videoMetadata();
                System.out.println("Format: " + videoMetaData.format());
                System.out.println("Codec: " + videoMetaData.codec());
                System.out.println("Duration: " + videoMetaData.durationMillis());
                System.out.println("FrameRate: " + videoMetaData.frameRate());
                System.out.println("Job");

                List<PersonDetection> detectedPersons = personTrackingResult.persons();
                for (PersonDetection detectedPerson : detectedPersons) {
                    long seconds = detectedPerson.timestamp() / 1000;
                    System.out.print("Sec: " + seconds + " ");
                    System.out.println("Person Identifier: " + detectedPerson.person().index());
                    System.out.println();
                }

            } while (personTrackingResult != null && personTrackingResult.nextToken() != null);

        } catch (RekognitionException | InterruptedException e) {
            System.out.println(e.getMessage());
            System.exit(1);
        }
    }
}
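
All of the Java examples above assume that the Amazon SNS topic, the Amazon SQS queue (used by the label example), and the IAM role from the Amazon Rekognition video-analysis prerequisites already exist. As a rough sketch only, resources like these could be created with the SDK as follows; the names are placeholders, and the queue access policy and the IAM role that Amazon Rekognition assumes still have to be configured as described in the prerequisites guide linked in the code comments.

import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.sns.SnsClient;
import software.amazon.awssdk.services.sns.model.CreateTopicRequest;
import software.amazon.awssdk.services.sns.model.SubscribeRequest;
import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.CreateQueueRequest;
import software.amazon.awssdk.services.sqs.model.GetQueueAttributesRequest;
import software.amazon.awssdk.services.sqs.model.QueueAttributeName;

public class CreateNotificationResourcesSketch {
    public static void main(String[] args) {
        Region region = Region.US_EAST_1;
        try (SnsClient sns = SnsClient.builder().region(region).build();
             SqsClient sqs = SqsClient.builder().region(region).build()) {

            // Create the topic that Amazon Rekognition Video publishes the completion status to.
            String topicArn = sns.createTopic(CreateTopicRequest.builder()
                    .name("AmazonRekognitionVideoTopic")   // placeholder name
                    .build()).topicArn();

            // Create the queue that the label example polls for the completion message.
            String queueUrl = sqs.createQueue(CreateQueueRequest.builder()
                    .queueName("AmazonRekognitionVideoQueue")   // placeholder name
                    .build()).queueUrl();

            // Look up the queue ARN so the queue can be subscribed to the topic.
            String queueArn = sqs.getQueueAttributes(GetQueueAttributesRequest.builder()
                    .queueUrl(queueUrl)
                    .attributeNames(QueueAttributeName.QUEUE_ARN)
                    .build()).attributes().get(QueueAttributeName.QUEUE_ARN);

            // Subscribe the queue to the topic. A queue access policy that allows this topic
            // to send messages, and the IAM role passed as roleArn in the examples, still need
            // to be set up as described in the Amazon Rekognition prerequisites.
            sns.subscribe(SubscribeRequest.builder()
                    .topicArn(topicArn)
                    .protocol("sqs")
                    .endpoint(queueArn)
                    .build());

            System.out.println("Topic: " + topicArn + ", queue: " + queueUrl);
        }
    }
}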
Kotlin
SDK for Kotlin
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Detect faces in a video stored in an Amazon S3 bucket.

import aws.sdk.kotlin.services.rekognition.RekognitionClient
import aws.sdk.kotlin.services.rekognition.model.*
import kotlinx.coroutines.delay

// ID of the started job, shared by the two functions below.
private var startJobId = ""

suspend fun startFaceDetection(
    channelVal: NotificationChannel?,
    bucketVal: String,
    videoVal: String,
) {
    val s3Obj = S3Object {
        bucket = bucketVal
        name = videoVal
    }
    val vidOb = Video {
        s3Object = s3Obj
    }

    val request = StartFaceDetectionRequest {
        jobTag = "Faces"
        faceAttributes = FaceAttributes.All
        notificationChannel = channelVal
        video = vidOb
    }

    RekognitionClient { region = "us-east-1" }.use { rekClient ->
        val startLabelDetectionResult = rekClient.startFaceDetection(request)
        startJobId = startLabelDetectionResult.jobId.toString()
    }
}

suspend fun getFaceResults() {
    var finished = false
    var status: String
    var yy = 0

    RekognitionClient { region = "us-east-1" }.use { rekClient ->
        var response: GetFaceDetectionResponse? = null
        val recognitionRequest = GetFaceDetectionRequest {
            jobId = startJobId
            maxResults = 10
        }

        // Wait until the job succeeds.
        while (!finished) {
            response = rekClient.getFaceDetection(recognitionRequest)
            status = response.jobStatus.toString()
            if (status.compareTo("SUCCEEDED") == 0) {
                finished = true
            } else {
                println("$yy status is: $status")
                delay(1000)
            }
            yy++
        }

        // Proceed when the job is done - otherwise VideoMetadata is null.
        val videoMetaData = response?.videoMetadata
        println("Format: ${videoMetaData?.format}")
        println("Codec: ${videoMetaData?.codec}")
        println("Duration: ${videoMetaData?.durationMillis}")
        println("FrameRate: ${videoMetaData?.frameRate}")

        // Show face information.
        response?.faces?.forEach { face ->
            println("Age: ${face.face?.ageRange}")
            println("Face: ${face.face?.beard}")
            println("Eye glasses: ${face?.face?.eyeglasses}")
            println("Mustache: ${face.face?.mustache}")
            println("Smile: ${face.face?.smile}")
        }
    }
}

Detect inappropriate or offensive content in a video stored in an Amazon S3 bucket.

import aws.sdk.kotlin.services.rekognition.RekognitionClient
import aws.sdk.kotlin.services.rekognition.model.*
import kotlinx.coroutines.delay

// ID of the started job, shared by the two functions below.
private var startJobId = ""

suspend fun startModerationDetection(
    channel: NotificationChannel?,
    bucketVal: String?,
    videoVal: String?,
) {
    val s3Obj = S3Object {
        bucket = bucketVal
        name = videoVal
    }
    val vidOb = Video {
        s3Object = s3Obj
    }

    val request = StartContentModerationRequest {
        jobTag = "Moderation"
        notificationChannel = channel
        video = vidOb
    }

    RekognitionClient { region = "us-east-1" }.use { rekClient ->
        val startModDetectionResult = rekClient.startContentModeration(request)
        startJobId = startModDetectionResult.jobId.toString()
    }
}

suspend fun getModResults() {
    var finished = false
    var status: String
    var yy = 0

    RekognitionClient { region = "us-east-1" }.use { rekClient ->
        var modDetectionResponse: GetContentModerationResponse? = null
        val modRequest = GetContentModerationRequest {
            jobId = startJobId
            maxResults = 10
        }

        // Wait until the job succeeds.
        while (!finished) {
            modDetectionResponse = rekClient.getContentModeration(modRequest)
            status = modDetectionResponse.jobStatus.toString()
            if (status.compareTo("SUCCEEDED") == 0) {
                finished = true
            } else {
                println("$yy status is: $status")
                delay(1000)
            }
            yy++
        }

        // Proceed when the job is done - otherwise VideoMetadata is null.
        val videoMetaData = modDetectionResponse?.videoMetadata
        println("Format: ${videoMetaData?.format}")
        println("Codec: ${videoMetaData?.codec}")
        println("Duration: ${videoMetaData?.durationMillis}")
        println("FrameRate: ${videoMetaData?.frameRate}")

        modDetectionResponse?.moderationLabels?.forEach { mod ->
            val seconds: Long = mod.timestamp / 1000
            print("Mod label: $seconds ")
            println(mod.moderationLabel)
        }
    }
}