Amazon Rekognition examples using SDK for Python (Boto3) - AWS SDK Code Examples

There are more AWS SDK examples available in the AWS Doc SDK Examples GitHub repository.


Amazon Rekognition examples using SDK for Python (Boto3)

The following code examples show you how to perform actions and implement common scenarios by using the AWS SDK for Python (Boto3) with Amazon Rekognition.

Actions are code excerpts from larger programs and must be run in context. While actions show you how to call individual service functions, you can see actions in context in their related scenarios.

Scenarios are code examples that show you how to accomplish a specific task by calling multiple functions within a service or combined with other AWS services.

Each example includes a link to the complete source code, where you can find instructions on how to set up and run the code in context.
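All of the wrapper classes shown on this page take a Boto3 Amazon Rekognition client in their constructors. The following is a minimal setup sketch, not part of the official examples; the file name is a placeholder, and the RekognitionImage.from_file helper is defined in the scenario code later on this page.

import boto3

# Minimal setup sketch: create the Boto3 Rekognition client that the wrapper
# classes on this page expect. "photo.jpg" is a placeholder file name.
rekognition_client = boto3.client("rekognition")

# RekognitionImage.from_file (defined in the scenario code later on this page)
# reads the image bytes from disk and pairs them with the client.
image = RekognitionImage.from_file("photo.jpg", rekognition_client)
for face in image.detect_faces():
    print(face.to_dict())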

Actions

The following code example shows how to use CompareFaces.

For more information, see Comparing faces in images.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

class RekognitionImage:
    """
    Encapsulates an Amazon Rekognition image. This class is a thin wrapper
    around parts of the Boto3 Amazon Rekognition API.
    """

    def __init__(self, image, image_name, rekognition_client):
        """
        Initializes the image object.

        :param image: Data that defines the image, either the image bytes or
                      an Amazon S3 bucket and object key.
        :param image_name: The name of the image.
        :param rekognition_client: A Boto3 Rekognition client.
        """
        self.image = image
        self.image_name = image_name
        self.rekognition_client = rekognition_client

    def compare_faces(self, target_image, similarity):
        """
        Compares faces in the image with the largest face in the target image.

        :param target_image: The target image to compare against.
        :param similarity: Faces in the image must have a similarity value greater
                           than this value to be included in the results.
        :return: A tuple. The first element is the list of faces that match the
                 reference image. The second element is the list of faces that have
                 a similarity value below the specified threshold.
        """
        try:
            response = self.rekognition_client.compare_faces(
                SourceImage=self.image,
                TargetImage=target_image.image,
                SimilarityThreshold=similarity,
            )
            matches = [
                RekognitionFace(match["Face"]) for match in response["FaceMatches"]
            ]
            unmatches = [RekognitionFace(face) for face in response["UnmatchedFaces"]]
            logger.info(
                "Found %s matched faces and %s unmatched faces.",
                len(matches),
                len(unmatches),
            )
        except ClientError:
            logger.exception(
                "Couldn't match faces from %s to %s.",
                self.image_name,
                target_image.image_name,
            )
            raise
        else:
            return matches, unmatches
  • For API details, see CompareFaces in AWS SDK for Python (Boto3) API Reference.

The following code example shows how to use CreateCollection.

For more information, see Creating a collection.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

class RekognitionCollectionManager:
    """
    Encapsulates Amazon Rekognition collection management functions.
    This class is a thin wrapper around parts of the Boto3 Amazon Rekognition API.
    """

    def __init__(self, rekognition_client):
        """
        Initializes the collection manager object.

        :param rekognition_client: A Boto3 Rekognition client.
        """
        self.rekognition_client = rekognition_client

    def create_collection(self, collection_id):
        """
        Creates an empty collection.

        :param collection_id: Text that identifies the collection.
        :return: The newly created collection.
        """
        try:
            response = self.rekognition_client.create_collection(
                CollectionId=collection_id
            )
            response["CollectionId"] = collection_id
            collection = RekognitionCollection(response, self.rekognition_client)
            logger.info("Created collection %s.", collection_id)
        except ClientError:
            logger.exception("Couldn't create collection %s.", collection_id)
            raise
        else:
            return collection
  • For API details, see CreateCollection in AWS SDK for Python (Boto3) API Reference.

The following code example shows how to use DeleteCollection.

For more information, see Deleting a collection.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

class RekognitionCollection:
    """
    Encapsulates an Amazon Rekognition collection. This class is a thin wrapper
    around parts of the Boto3 Amazon Rekognition API.
    """

    def __init__(self, collection, rekognition_client):
        """
        Initializes a collection object.

        :param collection: Collection data in the format returned by a call to
                           create_collection.
        :param rekognition_client: A Boto3 Rekognition client.
        """
        self.collection_id = collection["CollectionId"]
        self.collection_arn, self.face_count, self.created = self._unpack_collection(
            collection
        )
        self.rekognition_client = rekognition_client

    @staticmethod
    def _unpack_collection(collection):
        """
        Unpacks optional parts of a collection that can be returned by
        describe_collection.

        :param collection: The collection data.
        :return: A tuple of the data in the collection.
        """
        return (
            collection.get("CollectionArn"),
            collection.get("FaceCount", 0),
            collection.get("CreationTimestamp"),
        )

    def delete_collection(self):
        """
        Deletes the collection.
        """
        try:
            self.rekognition_client.delete_collection(CollectionId=self.collection_id)
            logger.info("Deleted collection %s.", self.collection_id)
            self.collection_id = None
        except ClientError:
            logger.exception("Couldn't delete collection %s.", self.collection_id)
            raise
  • For API details, see DeleteCollection in AWS SDK for Python (Boto3) API Reference.

The following code example shows how to use DeleteFaces.

For more information, see Deleting faces from a collection.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

class RekognitionCollection:
    """
    Encapsulates an Amazon Rekognition collection. This class is a thin wrapper
    around parts of the Boto3 Amazon Rekognition API.
    """

    def __init__(self, collection, rekognition_client):
        """
        Initializes a collection object.

        :param collection: Collection data in the format returned by a call to
                           create_collection.
        :param rekognition_client: A Boto3 Rekognition client.
        """
        self.collection_id = collection["CollectionId"]
        self.collection_arn, self.face_count, self.created = self._unpack_collection(
            collection
        )
        self.rekognition_client = rekognition_client

    @staticmethod
    def _unpack_collection(collection):
        """
        Unpacks optional parts of a collection that can be returned by
        describe_collection.

        :param collection: The collection data.
        :return: A tuple of the data in the collection.
        """
        return (
            collection.get("CollectionArn"),
            collection.get("FaceCount", 0),
            collection.get("CreationTimestamp"),
        )

    def delete_faces(self, face_ids):
        """
        Deletes faces from the collection.

        :param face_ids: The list of IDs of faces to delete.
        :return: The list of IDs of faces that were deleted.
        """
        try:
            response = self.rekognition_client.delete_faces(
                CollectionId=self.collection_id, FaceIds=face_ids
            )
            deleted_ids = response["DeletedFaces"]
            logger.info(
                "Deleted %s faces from %s.", len(deleted_ids), self.collection_id
            )
        except ClientError:
            logger.exception("Couldn't delete faces from %s.", self.collection_id)
            raise
        else:
            return deleted_ids
  • For API details, see DeleteFaces in AWS SDK for Python (Boto3) API Reference.

The following code example shows how to use DescribeCollection.

For more information, see Describing a collection.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

class RekognitionCollection:
    """
    Encapsulates an Amazon Rekognition collection. This class is a thin wrapper
    around parts of the Boto3 Amazon Rekognition API.
    """

    def __init__(self, collection, rekognition_client):
        """
        Initializes a collection object.

        :param collection: Collection data in the format returned by a call to
                           create_collection.
        :param rekognition_client: A Boto3 Rekognition client.
        """
        self.collection_id = collection["CollectionId"]
        self.collection_arn, self.face_count, self.created = self._unpack_collection(
            collection
        )
        self.rekognition_client = rekognition_client

    @staticmethod
    def _unpack_collection(collection):
        """
        Unpacks optional parts of a collection that can be returned by
        describe_collection.

        :param collection: The collection data.
        :return: A tuple of the data in the collection.
        """
        return (
            collection.get("CollectionArn"),
            collection.get("FaceCount", 0),
            collection.get("CreationTimestamp"),
        )

    def describe_collection(self):
        """
        Gets data about the collection from the Amazon Rekognition service.

        :return: The collection rendered as a dict.
        """
        try:
            response = self.rekognition_client.describe_collection(
                CollectionId=self.collection_id
            )
            # Work around capitalization of Arn vs. ARN
            response["CollectionArn"] = response.get("CollectionARN")
            (
                self.collection_arn,
                self.face_count,
                self.created,
            ) = self._unpack_collection(response)
            logger.info("Got data for collection %s.", self.collection_id)
        except ClientError:
            logger.exception("Couldn't get data for collection %s.", self.collection_id)
            raise
        else:
            return self.to_dict()
  • For API details, see DescribeCollection in AWS SDK for Python (Boto3) API Reference.

The following code example shows how to use DetectFaces.

For more information, see Detecting faces in an image.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

class RekognitionImage:
    """
    Encapsulates an Amazon Rekognition image. This class is a thin wrapper
    around parts of the Boto3 Amazon Rekognition API.
    """

    def __init__(self, image, image_name, rekognition_client):
        """
        Initializes the image object.

        :param image: Data that defines the image, either the image bytes or
                      an Amazon S3 bucket and object key.
        :param image_name: The name of the image.
        :param rekognition_client: A Boto3 Rekognition client.
        """
        self.image = image
        self.image_name = image_name
        self.rekognition_client = rekognition_client

    def detect_faces(self):
        """
        Detects faces in the image.

        :return: The list of faces found in the image.
        """
        try:
            response = self.rekognition_client.detect_faces(
                Image=self.image, Attributes=["ALL"]
            )
            faces = [RekognitionFace(face) for face in response["FaceDetails"]]
            logger.info("Detected %s faces.", len(faces))
        except ClientError:
            logger.exception("Couldn't detect faces in %s.", self.image_name)
            raise
        else:
            return faces
  • For API details, see DetectFaces in AWS SDK for Python (Boto3) API Reference.

The following code example shows how to use DetectLabels.

For more information, see Detecting labels in an image.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

class RekognitionImage:
    """
    Encapsulates an Amazon Rekognition image. This class is a thin wrapper
    around parts of the Boto3 Amazon Rekognition API.
    """

    def __init__(self, image, image_name, rekognition_client):
        """
        Initializes the image object.

        :param image: Data that defines the image, either the image bytes or
                      an Amazon S3 bucket and object key.
        :param image_name: The name of the image.
        :param rekognition_client: A Boto3 Rekognition client.
        """
        self.image = image
        self.image_name = image_name
        self.rekognition_client = rekognition_client

    def detect_labels(self, max_labels):
        """
        Detects labels in the image. Labels are objects and people.

        :param max_labels: The maximum number of labels to return.
        :return: The list of labels detected in the image.
        """
        try:
            response = self.rekognition_client.detect_labels(
                Image=self.image, MaxLabels=max_labels
            )
            labels = [RekognitionLabel(label) for label in response["Labels"]]
            logger.info("Found %s labels in %s.", len(labels), self.image_name)
        except ClientError:
            logger.info("Couldn't detect labels in %s.", self.image_name)
            raise
        else:
            return labels
  • For API details, see DetectLabels in AWS SDK for Python (Boto3) API Reference.

The following code example shows how to use DetectModerationLabels.

For more information, see Detecting inappropriate images.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

class RekognitionImage:
    """
    Encapsulates an Amazon Rekognition image. This class is a thin wrapper
    around parts of the Boto3 Amazon Rekognition API.
    """

    def __init__(self, image, image_name, rekognition_client):
        """
        Initializes the image object.

        :param image: Data that defines the image, either the image bytes or
                      an Amazon S3 bucket and object key.
        :param image_name: The name of the image.
        :param rekognition_client: A Boto3 Rekognition client.
        """
        self.image = image
        self.image_name = image_name
        self.rekognition_client = rekognition_client

    def detect_moderation_labels(self):
        """
        Detects moderation labels in the image. Moderation labels identify content
        that may be inappropriate for some audiences.

        :return: The list of moderation labels found in the image.
        """
        try:
            response = self.rekognition_client.detect_moderation_labels(
                Image=self.image
            )
            labels = [
                RekognitionModerationLabel(label)
                for label in response["ModerationLabels"]
            ]
            logger.info(
                "Found %s moderation labels in %s.", len(labels), self.image_name
            )
        except ClientError:
            logger.exception(
                "Couldn't detect moderation labels in %s.", self.image_name
            )
            raise
        else:
            return labels
  • For API details, see DetectModerationLabels in AWS SDK for Python (Boto3) API Reference.

The following code example shows how to use DetectText.

For more information, see Detecting text in an image.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

class RekognitionImage:
    """
    Encapsulates an Amazon Rekognition image. This class is a thin wrapper
    around parts of the Boto3 Amazon Rekognition API.
    """

    def __init__(self, image, image_name, rekognition_client):
        """
        Initializes the image object.

        :param image: Data that defines the image, either the image bytes or
                      an Amazon S3 bucket and object key.
        :param image_name: The name of the image.
        :param rekognition_client: A Boto3 Rekognition client.
        """
        self.image = image
        self.image_name = image_name
        self.rekognition_client = rekognition_client

    def detect_text(self):
        """
        Detects text in the image.

        :return: The list of text elements found in the image.
        """
        try:
            response = self.rekognition_client.detect_text(Image=self.image)
            texts = [RekognitionText(text) for text in response["TextDetections"]]
            logger.info("Found %s texts in %s.", len(texts), self.image_name)
        except ClientError:
            logger.exception("Couldn't detect text in %s.", self.image_name)
            raise
        else:
            return texts
  • For API details, see DetectText in AWS SDK for Python (Boto3) API Reference.

The following code example shows how to use DisassociateFaces.

SDK for Python (Boto3)
from botocore.exceptions import ClientError
import boto3
import logging

logger = logging.getLogger(__name__)

session = boto3.Session(profile_name='profile-name')
client = session.client('rekognition')


def disassociate_faces(collection_id, user_id, face_ids):
    """
    Disassociates stored faces in the collection from the given user.

    :param collection_id: The ID of the collection where user and faces are stored.
    :param user_id: The ID of the user that we want to disassociate faces from.
    :param face_ids: The list of face IDs to be disassociated from the given user.
    :return: The response of the DisassociateFaces API.
    """
    logger.info(f'Disassociating faces from user: {user_id}, {face_ids}')
    try:
        response = client.disassociate_faces(
            CollectionId=collection_id,
            UserId=user_id,
            FaceIds=face_ids
        )
        print(f'- disassociated {len(response["DisassociatedFaces"])} faces')
    except ClientError:
        logger.exception("Failed to disassociate faces from the given user")
        raise
    else:
        print(response)
        return response


def main():
    face_ids = ["faceId1", "faceId2"]
    collection_id = "collection-id"
    user_id = "user-id"
    disassociate_faces(collection_id, user_id, face_ids)


if __name__ == "__main__":
    main()
  • For API details, see DisassociateFaces in AWS SDK for Python (Boto3) API Reference.

The following code example shows how to use IndexFaces.

For more information, see Adding faces to a collection.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

class RekognitionCollection:
    """
    Encapsulates an Amazon Rekognition collection. This class is a thin wrapper
    around parts of the Boto3 Amazon Rekognition API.
    """

    def __init__(self, collection, rekognition_client):
        """
        Initializes a collection object.

        :param collection: Collection data in the format returned by a call to
                           create_collection.
        :param rekognition_client: A Boto3 Rekognition client.
        """
        self.collection_id = collection["CollectionId"]
        self.collection_arn, self.face_count, self.created = self._unpack_collection(
            collection
        )
        self.rekognition_client = rekognition_client

    @staticmethod
    def _unpack_collection(collection):
        """
        Unpacks optional parts of a collection that can be returned by
        describe_collection.

        :param collection: The collection data.
        :return: A tuple of the data in the collection.
        """
        return (
            collection.get("CollectionArn"),
            collection.get("FaceCount", 0),
            collection.get("CreationTimestamp"),
        )

    def index_faces(self, image, max_faces):
        """
        Finds faces in the specified image, indexes them, and stores them in the
        collection.

        :param image: The image to index.
        :param max_faces: The maximum number of faces to index.
        :return: A tuple. The first element is a list of indexed faces.
                 The second element is a list of faces that couldn't be indexed.
        """
        try:
            response = self.rekognition_client.index_faces(
                CollectionId=self.collection_id,
                Image=image.image,
                ExternalImageId=image.image_name,
                MaxFaces=max_faces,
                DetectionAttributes=["ALL"],
            )
            indexed_faces = [
                RekognitionFace({**face["Face"], **face["FaceDetail"]})
                for face in response["FaceRecords"]
            ]
            unindexed_faces = [
                RekognitionFace(face["FaceDetail"])
                for face in response["UnindexedFaces"]
            ]
            logger.info(
                "Indexed %s faces in %s. Could not index %s faces.",
                len(indexed_faces),
                image.image_name,
                len(unindexed_faces),
            )
        except ClientError:
            logger.exception("Couldn't index faces in image %s.", image.image_name)
            raise
        else:
            return indexed_faces, unindexed_faces
  • For API details, see IndexFaces in AWS SDK for Python (Boto3) API Reference.

The following code example shows how to use ListCollections.

For more information, see Listing collections.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

class RekognitionCollectionManager:
    """
    Encapsulates Amazon Rekognition collection management functions.
    This class is a thin wrapper around parts of the Boto3 Amazon Rekognition API.
    """

    def __init__(self, rekognition_client):
        """
        Initializes the collection manager object.

        :param rekognition_client: A Boto3 Rekognition client.
        """
        self.rekognition_client = rekognition_client

    def list_collections(self, max_results):
        """
        Lists collections for the current account.

        :param max_results: The maximum number of collections to return.
        :return: The list of collections for the current account.
        """
        try:
            response = self.rekognition_client.list_collections(MaxResults=max_results)
            collections = [
                RekognitionCollection({"CollectionId": col_id}, self.rekognition_client)
                for col_id in response["CollectionIds"]
            ]
        except ClientError:
            logger.exception("Couldn't list collections.")
            raise
        else:
            return collections
  • For API details, see ListCollections in AWS SDK for Python (Boto3) API Reference.

The following code example shows how to use ListFaces.

For more information, see Listing faces in a collection.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

class RekognitionCollection:
    """
    Encapsulates an Amazon Rekognition collection. This class is a thin wrapper
    around parts of the Boto3 Amazon Rekognition API.
    """

    def __init__(self, collection, rekognition_client):
        """
        Initializes a collection object.

        :param collection: Collection data in the format returned by a call to
                           create_collection.
        :param rekognition_client: A Boto3 Rekognition client.
        """
        self.collection_id = collection["CollectionId"]
        self.collection_arn, self.face_count, self.created = self._unpack_collection(
            collection
        )
        self.rekognition_client = rekognition_client

    @staticmethod
    def _unpack_collection(collection):
        """
        Unpacks optional parts of a collection that can be returned by
        describe_collection.

        :param collection: The collection data.
        :return: A tuple of the data in the collection.
        """
        return (
            collection.get("CollectionArn"),
            collection.get("FaceCount", 0),
            collection.get("CreationTimestamp"),
        )

    def list_faces(self, max_results):
        """
        Lists the faces currently indexed in the collection.

        :param max_results: The maximum number of faces to return.
        :return: The list of faces in the collection.
        """
        try:
            response = self.rekognition_client.list_faces(
                CollectionId=self.collection_id, MaxResults=max_results
            )
            faces = [RekognitionFace(face) for face in response["Faces"]]
            logger.info(
                "Found %s faces in collection %s.", len(faces), self.collection_id
            )
        except ClientError:
            logger.exception(
                "Couldn't list faces in collection %s.", self.collection_id
            )
            raise
        else:
            return faces
  • For API details, see ListFaces in AWS SDK for Python (Boto3) API Reference.

The following code example shows how to use RecognizeCelebrities.

For more information, see Recognizing celebrities in an image.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

class RekognitionImage:
    """
    Encapsulates an Amazon Rekognition image. This class is a thin wrapper
    around parts of the Boto3 Amazon Rekognition API.
    """

    def __init__(self, image, image_name, rekognition_client):
        """
        Initializes the image object.

        :param image: Data that defines the image, either the image bytes or
                      an Amazon S3 bucket and object key.
        :param image_name: The name of the image.
        :param rekognition_client: A Boto3 Rekognition client.
        """
        self.image = image
        self.image_name = image_name
        self.rekognition_client = rekognition_client

    def recognize_celebrities(self):
        """
        Detects celebrities in the image.

        :return: A tuple. The first element is the list of celebrities found in
                 the image. The second element is the list of faces that were
                 detected but did not match any known celebrities.
        """
        try:
            response = self.rekognition_client.recognize_celebrities(Image=self.image)
            celebrities = [
                RekognitionCelebrity(celeb) for celeb in response["CelebrityFaces"]
            ]
            other_faces = [
                RekognitionFace(face) for face in response["UnrecognizedFaces"]
            ]
            logger.info(
                "Found %s celebrities and %s other faces in %s.",
                len(celebrities),
                len(other_faces),
                self.image_name,
            )
        except ClientError:
            logger.exception("Couldn't detect celebrities in %s.", self.image_name)
            raise
        else:
            return celebrities, other_faces
  • For API details, see RecognizeCelebrities in AWS SDK for Python (Boto3) API Reference.

The following code example shows how to use SearchFaces.

For more information, see Searching for a face (face ID).

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

class RekognitionCollection:
    """
    Encapsulates an Amazon Rekognition collection. This class is a thin wrapper
    around parts of the Boto3 Amazon Rekognition API.
    """

    def __init__(self, collection, rekognition_client):
        """
        Initializes a collection object.

        :param collection: Collection data in the format returned by a call to
                           create_collection.
        :param rekognition_client: A Boto3 Rekognition client.
        """
        self.collection_id = collection["CollectionId"]
        self.collection_arn, self.face_count, self.created = self._unpack_collection(
            collection
        )
        self.rekognition_client = rekognition_client

    @staticmethod
    def _unpack_collection(collection):
        """
        Unpacks optional parts of a collection that can be returned by
        describe_collection.

        :param collection: The collection data.
        :return: A tuple of the data in the collection.
        """
        return (
            collection.get("CollectionArn"),
            collection.get("FaceCount", 0),
            collection.get("CreationTimestamp"),
        )

    def search_faces(self, face_id, threshold, max_faces):
        """
        Searches for faces in the collection that match another face from the
        collection.

        :param face_id: The ID of the face in the collection to search for.
        :param threshold: The match confidence must be greater than this value
                          for a face to be included in the results.
        :param max_faces: The maximum number of faces to return.
        :return: The list of matching faces found in the collection. This list
                 does not contain the face specified by `face_id`.
        """
        try:
            response = self.rekognition_client.search_faces(
                CollectionId=self.collection_id,
                FaceId=face_id,
                FaceMatchThreshold=threshold,
                MaxFaces=max_faces,
            )
            faces = [RekognitionFace(face["Face"]) for face in response["FaceMatches"]]
            logger.info(
                "Found %s faces in %s that match %s.",
                len(faces),
                self.collection_id,
                face_id,
            )
        except ClientError:
            logger.exception(
                "Couldn't search for faces in %s that match %s.",
                self.collection_id,
                face_id,
            )
            raise
        else:
            return faces
  • For API details, see SearchFaces in AWS SDK for Python (Boto3) API Reference.

The following code example shows how to use SearchFacesByImage.

For more information, see Searching for a face (image).

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

class RekognitionCollection:
    """
    Encapsulates an Amazon Rekognition collection. This class is a thin wrapper
    around parts of the Boto3 Amazon Rekognition API.
    """

    def __init__(self, collection, rekognition_client):
        """
        Initializes a collection object.

        :param collection: Collection data in the format returned by a call to
                           create_collection.
        :param rekognition_client: A Boto3 Rekognition client.
        """
        self.collection_id = collection["CollectionId"]
        self.collection_arn, self.face_count, self.created = self._unpack_collection(
            collection
        )
        self.rekognition_client = rekognition_client

    @staticmethod
    def _unpack_collection(collection):
        """
        Unpacks optional parts of a collection that can be returned by
        describe_collection.

        :param collection: The collection data.
        :return: A tuple of the data in the collection.
        """
        return (
            collection.get("CollectionArn"),
            collection.get("FaceCount", 0),
            collection.get("CreationTimestamp"),
        )

    def search_faces_by_image(self, image, threshold, max_faces):
        """
        Searches for faces in the collection that match the largest face in the
        reference image.

        :param image: The image that contains the reference face to search for.
        :param threshold: The match confidence must be greater than this value
                          for a face to be included in the results.
        :param max_faces: The maximum number of faces to return.
        :return: A tuple. The first element is the face found in the reference
                 image. The second element is the list of matching faces found
                 in the collection.
        """
        try:
            response = self.rekognition_client.search_faces_by_image(
                CollectionId=self.collection_id,
                Image=image.image,
                FaceMatchThreshold=threshold,
                MaxFaces=max_faces,
            )
            image_face = RekognitionFace(
                {
                    "BoundingBox": response["SearchedFaceBoundingBox"],
                    "Confidence": response["SearchedFaceConfidence"],
                }
            )
            collection_faces = [
                RekognitionFace(face["Face"]) for face in response["FaceMatches"]
            ]
            logger.info(
                "Found %s faces in the collection that match the largest "
                "face in %s.",
                len(collection_faces),
                image.image_name,
            )
        except ClientError:
            logger.exception(
                "Couldn't search for faces in %s that match %s.",
                self.collection_id,
                image.image_name,
            )
            raise
        else:
            return image_face, collection_faces
  • For API details, see SearchFacesByImage in AWS SDK for Python (Boto3) API Reference.

Scenarios

The following code example shows how to:

  • Create an Amazon Rekognition collection.

  • Add images to the collection and detect faces in it.

  • Search the collection for faces that match a reference image.

  • Delete a collection.

For more information, see Searching faces in a collection.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Create classes that wrap Amazon Rekognition functions.

import logging from pprint import pprint import boto3 from botocore.exceptions import ClientError from rekognition_objects import RekognitionFace from rekognition_image_detection import RekognitionImage logger = logging.getLogger(__name__) class RekognitionImage: """ Encapsulates an Amazon Rekognition image. This class is a thin wrapper around parts of the Boto3 Amazon Rekognition API. """ def __init__(self, image, image_name, rekognition_client): """ Initializes the image object. :param image: Data that defines the image, either the image bytes or an Amazon S3 bucket and object key. :param image_name: The name of the image. :param rekognition_client: A Boto3 Rekognition client. """ self.image = image self.image_name = image_name self.rekognition_client = rekognition_client @classmethod def from_file(cls, image_file_name, rekognition_client, image_name=None): """ Creates a RekognitionImage object from a local file. :param image_file_name: The file name of the image. The file is opened and its bytes are read. :param rekognition_client: A Boto3 Rekognition client. :param image_name: The name of the image. If this is not specified, the file name is used as the image name. :return: The RekognitionImage object, initialized with image bytes from the file. """ with open(image_file_name, "rb") as img_file: image = {"Bytes": img_file.read()} name = image_file_name if image_name is None else image_name return cls(image, name, rekognition_client) class RekognitionCollectionManager: """ Encapsulates Amazon Rekognition collection management functions. This class is a thin wrapper around parts of the Boto3 Amazon Rekognition API. """ def __init__(self, rekognition_client): """ Initializes the collection manager object. :param rekognition_client: A Boto3 Rekognition client. """ self.rekognition_client = rekognition_client def create_collection(self, collection_id): """ Creates an empty collection. :param collection_id: Text that identifies the collection. :return: The newly created collection. """ try: response = self.rekognition_client.create_collection( CollectionId=collection_id ) response["CollectionId"] = collection_id collection = RekognitionCollection(response, self.rekognition_client) logger.info("Created collection %s.", collection_id) except ClientError: logger.exception("Couldn't create collection %s.", collection_id) raise else: return collection def list_collections(self, max_results): """ Lists collections for the current account. :param max_results: The maximum number of collections to return. :return: The list of collections for the current account. """ try: response = self.rekognition_client.list_collections(MaxResults=max_results) collections = [ RekognitionCollection({"CollectionId": col_id}, self.rekognition_client) for col_id in response["CollectionIds"] ] except ClientError: logger.exception("Couldn't list collections.") raise else: return collections class RekognitionCollection: """ Encapsulates an Amazon Rekognition collection. This class is a thin wrapper around parts of the Boto3 Amazon Rekognition API. """ def __init__(self, collection, rekognition_client): """ Initializes a collection object. :param collection: Collection data in the format returned by a call to create_collection. :param rekognition_client: A Boto3 Rekognition client. 
""" self.collection_id = collection["CollectionId"] self.collection_arn, self.face_count, self.created = self._unpack_collection( collection ) self.rekognition_client = rekognition_client @staticmethod def _unpack_collection(collection): """ Unpacks optional parts of a collection that can be returned by describe_collection. :param collection: The collection data. :return: A tuple of the data in the collection. """ return ( collection.get("CollectionArn"), collection.get("FaceCount", 0), collection.get("CreationTimestamp"), ) def to_dict(self): """ Renders parts of the collection data to a dict. :return: The collection data as a dict. """ rendering = { "collection_id": self.collection_id, "collection_arn": self.collection_arn, "face_count": self.face_count, "created": self.created, } return rendering def describe_collection(self): """ Gets data about the collection from the Amazon Rekognition service. :return: The collection rendered as a dict. """ try: response = self.rekognition_client.describe_collection( CollectionId=self.collection_id ) # Work around capitalization of Arn vs. ARN response["CollectionArn"] = response.get("CollectionARN") ( self.collection_arn, self.face_count, self.created, ) = self._unpack_collection(response) logger.info("Got data for collection %s.", self.collection_id) except ClientError: logger.exception("Couldn't get data for collection %s.", self.collection_id) raise else: return self.to_dict() def delete_collection(self): """ Deletes the collection. """ try: self.rekognition_client.delete_collection(CollectionId=self.collection_id) logger.info("Deleted collection %s.", self.collection_id) self.collection_id = None except ClientError: logger.exception("Couldn't delete collection %s.", self.collection_id) raise def index_faces(self, image, max_faces): """ Finds faces in the specified image, indexes them, and stores them in the collection. :param image: The image to index. :param max_faces: The maximum number of faces to index. :return: A tuple. The first element is a list of indexed faces. The second element is a list of faces that couldn't be indexed. """ try: response = self.rekognition_client.index_faces( CollectionId=self.collection_id, Image=image.image, ExternalImageId=image.image_name, MaxFaces=max_faces, DetectionAttributes=["ALL"], ) indexed_faces = [ RekognitionFace({**face["Face"], **face["FaceDetail"]}) for face in response["FaceRecords"] ] unindexed_faces = [ RekognitionFace(face["FaceDetail"]) for face in response["UnindexedFaces"] ] logger.info( "Indexed %s faces in %s. Could not index %s faces.", len(indexed_faces), image.image_name, len(unindexed_faces), ) except ClientError: logger.exception("Couldn't index faces in image %s.", image.image_name) raise else: return indexed_faces, unindexed_faces def list_faces(self, max_results): """ Lists the faces currently indexed in the collection. :param max_results: The maximum number of faces to return. :return: The list of faces in the collection. """ try: response = self.rekognition_client.list_faces( CollectionId=self.collection_id, MaxResults=max_results ) faces = [RekognitionFace(face) for face in response["Faces"]] logger.info( "Found %s faces in collection %s.", len(faces), self.collection_id ) except ClientError: logger.exception( "Couldn't list faces in collection %s.", self.collection_id ) raise else: return faces def search_faces(self, face_id, threshold, max_faces): """ Searches for faces in the collection that match another face from the collection. 
:param face_id: The ID of the face in the collection to search for. :param threshold: The match confidence must be greater than this value for a face to be included in the results. :param max_faces: The maximum number of faces to return. :return: The list of matching faces found in the collection. This list does not contain the face specified by `face_id`. """ try: response = self.rekognition_client.search_faces( CollectionId=self.collection_id, FaceId=face_id, FaceMatchThreshold=threshold, MaxFaces=max_faces, ) faces = [RekognitionFace(face["Face"]) for face in response["FaceMatches"]] logger.info( "Found %s faces in %s that match %s.", len(faces), self.collection_id, face_id, ) except ClientError: logger.exception( "Couldn't search for faces in %s that match %s.", self.collection_id, face_id, ) raise else: return faces def search_faces_by_image(self, image, threshold, max_faces): """ Searches for faces in the collection that match the largest face in the reference image. :param image: The image that contains the reference face to search for. :param threshold: The match confidence must be greater than this value for a face to be included in the results. :param max_faces: The maximum number of faces to return. :return: A tuple. The first element is the face found in the reference image. The second element is the list of matching faces found in the collection. """ try: response = self.rekognition_client.search_faces_by_image( CollectionId=self.collection_id, Image=image.image, FaceMatchThreshold=threshold, MaxFaces=max_faces, ) image_face = RekognitionFace( { "BoundingBox": response["SearchedFaceBoundingBox"], "Confidence": response["SearchedFaceConfidence"], } ) collection_faces = [ RekognitionFace(face["Face"]) for face in response["FaceMatches"] ] logger.info( "Found %s faces in the collection that match the largest " "face in %s.", len(collection_faces), image.image_name, ) except ClientError: logger.exception( "Couldn't search for faces in %s that match %s.", self.collection_id, image.image_name, ) raise else: return image_face, collection_faces class RekognitionFace: """Encapsulates an Amazon Rekognition face.""" def __init__(self, face, timestamp=None): """ Initializes the face object. :param face: Face data, in the format returned by Amazon Rekognition functions. :param timestamp: The time when the face was detected, if the face was detected in a video. """ self.bounding_box = face.get("BoundingBox") self.confidence = face.get("Confidence") self.landmarks = face.get("Landmarks") self.pose = face.get("Pose") self.quality = face.get("Quality") age_range = face.get("AgeRange") if age_range is not None: self.age_range = (age_range.get("Low"), age_range.get("High")) else: self.age_range = None self.smile = face.get("Smile", {}).get("Value") self.eyeglasses = face.get("Eyeglasses", {}).get("Value") self.sunglasses = face.get("Sunglasses", {}).get("Value") self.gender = face.get("Gender", {}).get("Value", None) self.beard = face.get("Beard", {}).get("Value") self.mustache = face.get("Mustache", {}).get("Value") self.eyes_open = face.get("EyesOpen", {}).get("Value") self.mouth_open = face.get("MouthOpen", {}).get("Value") self.emotions = [ emo.get("Type") for emo in face.get("Emotions", []) if emo.get("Confidence", 0) > 50 ] self.face_id = face.get("FaceId") self.image_id = face.get("ImageId") self.timestamp = timestamp def to_dict(self): """ Renders some of the face data to a dict. :return: A dict that contains the face data. 
""" rendering = {} if self.bounding_box is not None: rendering["bounding_box"] = self.bounding_box if self.age_range is not None: rendering["age"] = f"{self.age_range[0]} - {self.age_range[1]}" if self.gender is not None: rendering["gender"] = self.gender if self.emotions: rendering["emotions"] = self.emotions if self.face_id is not None: rendering["face_id"] = self.face_id if self.image_id is not None: rendering["image_id"] = self.image_id if self.timestamp is not None: rendering["timestamp"] = self.timestamp has = [] if self.smile: has.append("smile") if self.eyeglasses: has.append("eyeglasses") if self.sunglasses: has.append("sunglasses") if self.beard: has.append("beard") if self.mustache: has.append("mustache") if self.eyes_open: has.append("open eyes") if self.mouth_open: has.append("open mouth") if has: rendering["has"] = has return rendering

Use the wrapper classes to create a collection of faces from a set of images and then search the collection for faces.

def usage_demo():
    print("-" * 88)
    print("Welcome to the Amazon Rekognition face collection demo!")
    print("-" * 88)

    logging.basicConfig(level=logging.INFO, format="%(levelname)s: %(message)s")

    rekognition_client = boto3.client("rekognition")
    images = [
        RekognitionImage.from_file(
            ".media/pexels-agung-pandit-wiguna-1128316.jpg",
            rekognition_client,
            image_name="sitting",
        ),
        RekognitionImage.from_file(
            ".media/pexels-agung-pandit-wiguna-1128317.jpg",
            rekognition_client,
            image_name="hopping",
        ),
        RekognitionImage.from_file(
            ".media/pexels-agung-pandit-wiguna-1128318.jpg",
            rekognition_client,
            image_name="biking",
        ),
    ]

    collection_mgr = RekognitionCollectionManager(rekognition_client)
    collection = collection_mgr.create_collection("doc-example-collection-demo")
    print(f"Created collection {collection.collection_id}:")
    pprint(collection.describe_collection())

    print("Indexing faces from three images:")
    for image in images:
        collection.index_faces(image, 10)
    print("Listing faces in collection:")
    faces = collection.list_faces(10)
    for face in faces:
        pprint(face.to_dict())
    input("Press Enter to continue.")

    print(
        f"Searching for faces in the collection that match the first face in the "
        f"list (Face ID: {faces[0].face_id})."
    )
    found_faces = collection.search_faces(faces[0].face_id, 80, 10)
    print(f"Found {len(found_faces)} matching faces.")
    for face in found_faces:
        pprint(face.to_dict())
    input("Press Enter to continue.")

    print(
        f"Searching for faces in the collection that match the largest face in "
        f"{images[0].image_name}."
    )
    image_face, match_faces = collection.search_faces_by_image(images[0], 80, 10)
    print(f"The largest face in {images[0].image_name} is:")
    pprint(image_face.to_dict())
    print(f"Found {len(match_faces)} matching faces.")
    for face in match_faces:
        pprint(face.to_dict())
    input("Press Enter to continue.")

    collection.delete_collection()
    print("Thanks for watching!")
    print("-" * 88)

The following code example shows how to:

  • Detect elements in images by using Amazon Rekognition.

  • Display images and draw bounding boxes around detected elements.

For more information, see Displaying bounding boxes.

SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Create classes that wrap Amazon Rekognition functions.

import logging from pprint import pprint import boto3 from botocore.exceptions import ClientError import requests from rekognition_objects import ( RekognitionFace, RekognitionCelebrity, RekognitionLabel, RekognitionModerationLabel, RekognitionText, show_bounding_boxes, show_polygons, ) logger = logging.getLogger(__name__) class RekognitionImage: """ Encapsulates an Amazon Rekognition image. This class is a thin wrapper around parts of the Boto3 Amazon Rekognition API. """ def __init__(self, image, image_name, rekognition_client): """ Initializes the image object. :param image: Data that defines the image, either the image bytes or an Amazon S3 bucket and object key. :param image_name: The name of the image. :param rekognition_client: A Boto3 Rekognition client. """ self.image = image self.image_name = image_name self.rekognition_client = rekognition_client @classmethod def from_file(cls, image_file_name, rekognition_client, image_name=None): """ Creates a RekognitionImage object from a local file. :param image_file_name: The file name of the image. The file is opened and its bytes are read. :param rekognition_client: A Boto3 Rekognition client. :param image_name: The name of the image. If this is not specified, the file name is used as the image name. :return: The RekognitionImage object, initialized with image bytes from the file. """ with open(image_file_name, "rb") as img_file: image = {"Bytes": img_file.read()} name = image_file_name if image_name is None else image_name return cls(image, name, rekognition_client) @classmethod def from_bucket(cls, s3_object, rekognition_client): """ Creates a RekognitionImage object from an Amazon S3 object. :param s3_object: An Amazon S3 object that identifies the image. The image is not retrieved until needed for a later call. :param rekognition_client: A Boto3 Rekognition client. :return: The RekognitionImage object, initialized with Amazon S3 object data. """ image = {"S3Object": {"Bucket": s3_object.bucket_name, "Name": s3_object.key}} return cls(image, s3_object.key, rekognition_client) def detect_faces(self): """ Detects faces in the image. :return: The list of faces found in the image. """ try: response = self.rekognition_client.detect_faces( Image=self.image, Attributes=["ALL"] ) faces = [RekognitionFace(face) for face in response["FaceDetails"]] logger.info("Detected %s faces.", len(faces)) except ClientError: logger.exception("Couldn't detect faces in %s.", self.image_name) raise else: return faces def detect_labels(self, max_labels): """ Detects labels in the image. Labels are objects and people. :param max_labels: The maximum number of labels to return. :return: The list of labels detected in the image. """ try: response = self.rekognition_client.detect_labels( Image=self.image, MaxLabels=max_labels ) labels = [RekognitionLabel(label) for label in response["Labels"]] logger.info("Found %s labels in %s.", len(labels), self.image_name) except ClientError: logger.info("Couldn't detect labels in %s.", self.image_name) raise else: return labels def recognize_celebrities(self): """ Detects celebrities in the image. :return: A tuple. The first element is the list of celebrities found in the image. The second element is the list of faces that were detected but did not match any known celebrities. 
""" try: response = self.rekognition_client.recognize_celebrities(Image=self.image) celebrities = [ RekognitionCelebrity(celeb) for celeb in response["CelebrityFaces"] ] other_faces = [ RekognitionFace(face) for face in response["UnrecognizedFaces"] ] logger.info( "Found %s celebrities and %s other faces in %s.", len(celebrities), len(other_faces), self.image_name, ) except ClientError: logger.exception("Couldn't detect celebrities in %s.", self.image_name) raise else: return celebrities, other_faces def compare_faces(self, target_image, similarity): """ Compares faces in the image with the largest face in the target image. :param target_image: The target image to compare against. :param similarity: Faces in the image must have a similarity value greater than this value to be included in the results. :return: A tuple. The first element is the list of faces that match the reference image. The second element is the list of faces that have a similarity value below the specified threshold. """ try: response = self.rekognition_client.compare_faces( SourceImage=self.image, TargetImage=target_image.image, SimilarityThreshold=similarity, ) matches = [ RekognitionFace(match["Face"]) for match in response["FaceMatches"] ] unmatches = [RekognitionFace(face) for face in response["UnmatchedFaces"]] logger.info( "Found %s matched faces and %s unmatched faces.", len(matches), len(unmatches), ) except ClientError: logger.exception( "Couldn't match faces from %s to %s.", self.image_name, target_image.image_name, ) raise else: return matches, unmatches def detect_moderation_labels(self): """ Detects moderation labels in the image. Moderation labels identify content that may be inappropriate for some audiences. :return: The list of moderation labels found in the image. """ try: response = self.rekognition_client.detect_moderation_labels( Image=self.image ) labels = [ RekognitionModerationLabel(label) for label in response["ModerationLabels"] ] logger.info( "Found %s moderation labels in %s.", len(labels), self.image_name ) except ClientError: logger.exception( "Couldn't detect moderation labels in %s.", self.image_name ) raise else: return labels def detect_text(self): """ Detects text in the image. :return The list of text elements found in the image. """ try: response = self.rekognition_client.detect_text(Image=self.image) texts = [RekognitionText(text) for text in response["TextDetections"]] logger.info("Found %s texts in %s.", len(texts), self.image_name) except ClientError: logger.exception("Couldn't detect text in %s.", self.image_name) raise else: return texts

Create helper functions to draw bounding boxes and polygons.

import io
import logging

from PIL import Image, ImageDraw

logger = logging.getLogger(__name__)


def show_bounding_boxes(image_bytes, box_sets, colors):
    """
    Draws bounding boxes on an image and shows it with the default image viewer.

    :param image_bytes: The image to draw, as bytes.
    :param box_sets: A list of lists of bounding boxes to draw on the image.
    :param colors: A list of colors to use to draw the bounding boxes.
    """
    image = Image.open(io.BytesIO(image_bytes))
    draw = ImageDraw.Draw(image)
    for boxes, color in zip(box_sets, colors):
        for box in boxes:
            left = image.width * box["Left"]
            top = image.height * box["Top"]
            right = (image.width * box["Width"]) + left
            bottom = (image.height * box["Height"]) + top
            draw.rectangle([left, top, right, bottom], outline=color, width=3)
    image.show()


def show_polygons(image_bytes, polygons, color):
    """
    Draws polygons on an image and shows it with the default image viewer.

    :param image_bytes: The image to draw, as bytes.
    :param polygons: The list of polygons to draw on the image.
    :param color: The color to use to draw the polygons.
    """
    image = Image.open(io.BytesIO(image_bytes))
    draw = ImageDraw.Draw(image)
    for polygon in polygons:
        draw.polygon(
            [
                (image.width * point["X"], image.height * point["Y"])
                for point in polygon
            ],
            outline=color,
        )
    image.show()

Create classes to parse objects returned by Amazon Rekognition.

class RekognitionFace: """Encapsulates an Amazon Rekognition face.""" def __init__(self, face, timestamp=None): """ Initializes the face object. :param face: Face data, in the format returned by Amazon Rekognition functions. :param timestamp: The time when the face was detected, if the face was detected in a video. """ self.bounding_box = face.get("BoundingBox") self.confidence = face.get("Confidence") self.landmarks = face.get("Landmarks") self.pose = face.get("Pose") self.quality = face.get("Quality") age_range = face.get("AgeRange") if age_range is not None: self.age_range = (age_range.get("Low"), age_range.get("High")) else: self.age_range = None self.smile = face.get("Smile", {}).get("Value") self.eyeglasses = face.get("Eyeglasses", {}).get("Value") self.sunglasses = face.get("Sunglasses", {}).get("Value") self.gender = face.get("Gender", {}).get("Value", None) self.beard = face.get("Beard", {}).get("Value") self.mustache = face.get("Mustache", {}).get("Value") self.eyes_open = face.get("EyesOpen", {}).get("Value") self.mouth_open = face.get("MouthOpen", {}).get("Value") self.emotions = [ emo.get("Type") for emo in face.get("Emotions", []) if emo.get("Confidence", 0) > 50 ] self.face_id = face.get("FaceId") self.image_id = face.get("ImageId") self.timestamp = timestamp def to_dict(self): """ Renders some of the face data to a dict. :return: A dict that contains the face data. """ rendering = {} if self.bounding_box is not None: rendering["bounding_box"] = self.bounding_box if self.age_range is not None: rendering["age"] = f"{self.age_range[0]} - {self.age_range[1]}" if self.gender is not None: rendering["gender"] = self.gender if self.emotions: rendering["emotions"] = self.emotions if self.face_id is not None: rendering["face_id"] = self.face_id if self.image_id is not None: rendering["image_id"] = self.image_id if self.timestamp is not None: rendering["timestamp"] = self.timestamp has = [] if self.smile: has.append("smile") if self.eyeglasses: has.append("eyeglasses") if self.sunglasses: has.append("sunglasses") if self.beard: has.append("beard") if self.mustache: has.append("mustache") if self.eyes_open: has.append("open eyes") if self.mouth_open: has.append("open mouth") if has: rendering["has"] = has return rendering class RekognitionCelebrity: """Encapsulates an Amazon Rekognition celebrity.""" def __init__(self, celebrity, timestamp=None): """ Initializes the celebrity object. :param celebrity: Celebrity data, in the format returned by Amazon Rekognition functions. :param timestamp: The time when the celebrity was detected, if the celebrity was detected in a video. """ self.info_urls = celebrity.get("Urls") self.name = celebrity.get("Name") self.id = celebrity.get("Id") self.face = RekognitionFace(celebrity.get("Face")) self.confidence = celebrity.get("MatchConfidence") self.bounding_box = celebrity.get("BoundingBox") self.timestamp = timestamp def to_dict(self): """ Renders some of the celebrity data to a dict. :return: A dict that contains the celebrity data. """ rendering = self.face.to_dict() if self.name is not None: rendering["name"] = self.name if self.info_urls: rendering["info URLs"] = self.info_urls if self.timestamp is not None: rendering["timestamp"] = self.timestamp return rendering class RekognitionPerson: """Encapsulates an Amazon Rekognition person.""" def __init__(self, person, timestamp=None): """ Initializes the person object. :param person: Person data, in the format returned by Amazon Rekognition functions. 
:param timestamp: The time when the person was detected, if the person was detected in a video. """ self.index = person.get("Index") self.bounding_box = person.get("BoundingBox") face = person.get("Face") self.face = RekognitionFace(face) if face is not None else None self.timestamp = timestamp def to_dict(self): """ Renders some of the person data to a dict. :return: A dict that contains the person data. """ rendering = self.face.to_dict() if self.face is not None else {} if self.index is not None: rendering["index"] = self.index if self.bounding_box is not None: rendering["bounding_box"] = self.bounding_box if self.timestamp is not None: rendering["timestamp"] = self.timestamp return rendering class RekognitionLabel: """Encapsulates an Amazon Rekognition label.""" def __init__(self, label, timestamp=None): """ Initializes the label object. :param label: Label data, in the format returned by Amazon Rekognition functions. :param timestamp: The time when the label was detected, if the label was detected in a video. """ self.name = label.get("Name") self.confidence = label.get("Confidence") self.instances = label.get("Instances") self.parents = label.get("Parents") self.timestamp = timestamp def to_dict(self): """ Renders some of the label data to a dict. :return: A dict that contains the label data. """ rendering = {} if self.name is not None: rendering["name"] = self.name if self.timestamp is not None: rendering["timestamp"] = self.timestamp return rendering class RekognitionModerationLabel: """Encapsulates an Amazon Rekognition moderation label.""" def __init__(self, label, timestamp=None): """ Initializes the moderation label object. :param label: Label data, in the format returned by Amazon Rekognition functions. :param timestamp: The time when the moderation label was detected, if the label was detected in a video. """ self.name = label.get("Name") self.confidence = label.get("Confidence") self.parent_name = label.get("ParentName") self.timestamp = timestamp def to_dict(self): """ Renders some of the moderation label data to a dict. :return: A dict that contains the moderation label data. """ rendering = {} if self.name is not None: rendering["name"] = self.name if self.parent_name is not None: rendering["parent_name"] = self.parent_name if self.timestamp is not None: rendering["timestamp"] = self.timestamp return rendering class RekognitionText: """Encapsulates an Amazon Rekognition text element.""" def __init__(self, text_data): """ Initializes the text object. :param text_data: Text data, in the format returned by Amazon Rekognition functions. """ self.text = text_data.get("DetectedText") self.kind = text_data.get("Type") self.id = text_data.get("Id") self.parent_id = text_data.get("ParentId") self.confidence = text_data.get("Confidence") self.geometry = text_data.get("Geometry") def to_dict(self): """ Renders some of the text data to a dict. :return: A dict that contains the text data. """ rendering = {} if self.text is not None: rendering["text"] = self.text if self.kind is not None: rendering["kind"] = self.kind if self.geometry is not None: rendering["polygon"] = self.geometry.get("Polygon") return rendering

Use the wrapper classes to detect elements in images and display their bounding boxes. The images used in this example are available on GitHub along with instructions and code.

def usage_demo(): print("-" * 88) print("Welcome to the Amazon Rekognition image detection demo!") print("-" * 88) logging.basicConfig(level=logging.INFO, format="%(levelname)s: %(message)s") rekognition_client = boto3.client("rekognition") street_scene_file_name = ".media/pexels-kaique-rocha-109919.jpg" celebrity_file_name = ".media/pexels-pixabay-53370.jpg" one_girl_url = "https://dhei5unw3vrsx.cloudfront.net/images/source3_resized.jpg" three_girls_url = "https://dhei5unw3vrsx.cloudfront.net/images/target3_resized.jpg" swimwear_object = boto3.resource("s3").Object( "console-sample-images-pdx", "yoga_swimwear.jpg" ) book_file_name = ".media/pexels-christina-morillo-1181671.jpg" street_scene_image = RekognitionImage.from_file( street_scene_file_name, rekognition_client ) print(f"Detecting faces in {street_scene_image.image_name}...") faces = street_scene_image.detect_faces() print(f"Found {len(faces)} faces, here are the first three.") for face in faces[:3]: pprint(face.to_dict()) show_bounding_boxes( street_scene_image.image["Bytes"], [[face.bounding_box for face in faces]], ["aqua"], ) input("Press Enter to continue.") print(f"Detecting labels in {street_scene_image.image_name}...") labels = street_scene_image.detect_labels(100) print(f"Found {len(labels)} labels.") for label in labels: pprint(label.to_dict()) names = [] box_sets = [] colors = ["aqua", "red", "white", "blue", "yellow", "green"] for label in labels: if label.instances: names.append(label.name) box_sets.append([inst["BoundingBox"] for inst in label.instances]) print(f"Showing bounding boxes for {names} in {colors[:len(names)]}.") show_bounding_boxes( street_scene_image.image["Bytes"], box_sets, colors[: len(names)] ) input("Press Enter to continue.") celebrity_image = RekognitionImage.from_file( celebrity_file_name, rekognition_client ) print(f"Detecting celebrities in {celebrity_image.image_name}...") celebs, others = celebrity_image.recognize_celebrities() print(f"Found {len(celebs)} celebrities.") for celeb in celebs: pprint(celeb.to_dict()) show_bounding_boxes( celebrity_image.image["Bytes"], [[celeb.face.bounding_box for celeb in celebs]], ["aqua"], ) input("Press Enter to continue.") girl_image_response = requests.get(one_girl_url) girl_image = RekognitionImage( {"Bytes": girl_image_response.content}, "one-girl", rekognition_client ) group_image_response = requests.get(three_girls_url) group_image = RekognitionImage( {"Bytes": group_image_response.content}, "three-girls", rekognition_client ) print("Comparing reference face to group of faces...") matches, unmatches = girl_image.compare_faces(group_image, 80) print(f"Found {len(matches)} face matching the reference face.") show_bounding_boxes( group_image.image["Bytes"], [[match.bounding_box for match in matches]], ["aqua"], ) input("Press Enter to continue.") swimwear_image = RekognitionImage.from_bucket(swimwear_object, rekognition_client) print(f"Detecting suggestive content in {swimwear_object.key}...") labels = swimwear_image.detect_moderation_labels() print(f"Found {len(labels)} moderation labels.") for label in labels: pprint(label.to_dict()) input("Press Enter to continue.") book_image = RekognitionImage.from_file(book_file_name, rekognition_client) print(f"Detecting text in {book_image.image_name}...") texts = book_image.detect_text() print(f"Found {len(texts)} text instances. 
Here are the first seven:") for text in texts[:7]: pprint(text.to_dict()) show_polygons( book_image.image["Bytes"], [text.geometry["Polygon"] for text in texts], "aqua" ) print("Thanks for watching!") print("-" * 88)

The following code example shows how to build an app that uses Amazon Rekognition to detect objects by category in images.

SDK for Python (Boto3)

Shows you how to use the AWS SDK for Python (Boto3) to create a web application that lets you do the following:

  • Upload photos to an Amazon Simple Storage Service (Amazon S3) bucket.

  • Use Amazon Rekognition to analyze and label the photos.

  • Use Amazon Simple Email Service (Amazon SES) to send email reports of image analysis.

This example contains two main components: a webpage written in JavaScript that is built with React, and a REST service written in Python that is built with Flask-RESTful.

The React webpage lets you:

  • Display a list of images that are stored in your S3 bucket.

  • Upload images to your S3 bucket.

  • Display images and labels that identify items detected in the image.

  • Get a report of all images in your S3 bucket and send an email of the report.

The webpage calls the REST service. The service sends requests to AWS to perform the following actions:

  • Get and filter the list of images in your S3 bucket.

  • Upload photos to your S3 bucket.

  • Use Amazon Rekognition to analyze individual photos and get a list of labels that identify items detected in the photo.

  • Analyze all photos in your S3 bucket and use Amazon SES to email a report.

For complete source code and instructions on how to set up and run, see the full example on GitHub.
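The full application is only on GitHub; the following is a minimal, hypothetical sketch of how such a REST service might call Amazon Rekognition to label a photo stored in Amazon S3. The route, bucket name, and module layout are placeholder assumptions, not part of the actual example.

import boto3
from flask import Flask, jsonify

app = Flask(__name__)
rekognition = boto3.client("rekognition")
BUCKET = "doc-example-photo-bucket"  # placeholder bucket name


@app.route("/photos/<photo_key>/labels")
def get_photo_labels(photo_key):
    """Analyzes a photo in the S3 bucket and returns its labels as JSON."""
    response = rekognition.detect_labels(
        Image={"S3Object": {"Bucket": BUCKET, "Name": photo_key}},
        MaxLabels=10,
    )
    labels = [label["Name"] for label in response["Labels"]]
    return jsonify({"photo": photo_key, "labels": labels})


if __name__ == "__main__":
    app.run(debug=True)

The real example also lists and uploads S3 objects and sends the analysis report with Amazon SES; see the GitHub repository for that code.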

Services used in this example
  • Amazon Rekognition

  • Amazon S3

  • Amazon SES

The following code example shows how to detect people and objects in a video with Amazon Rekognition.

SDK for Python (Boto3)

Use Amazon Rekognition to detect faces, objects, and people in videos by starting asynchronous detection jobs. This example configures Amazon Rekognition to notify an Amazon Simple Notification Service (Amazon SNS) topic when jobs complete and subscribes an Amazon Simple Queue Service (Amazon SQS) queue to the topic. When the queue receives a message about a job, the job results are retrieved and output.

This example is best viewed on GitHub. For complete source code and instructions on how to set up and run, see the full example on GitHub.
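The complete example on GitHub handles the notification setup and result parsing; the following is only a minimal sketch of the underlying pattern, with placeholder bucket, topic, role, and queue values: start an asynchronous label detection job that publishes to an Amazon SNS topic, poll an Amazon SQS queue subscribed to that topic for the completion message, and then retrieve the results.

import json

import boto3

rekognition = boto3.client("rekognition")
sqs = boto3.client("sqs")

# Placeholder resource identifiers; the real example creates and cleans these up.
queue_url = "https://sqs.us-east-1.amazonaws.com/111122223333/doc-example-queue"

start_response = rekognition.start_label_detection(
    Video={"S3Object": {"Bucket": "doc-example-bucket", "Name": "video.mp4"}},
    NotificationChannel={
        "SnsTopicArn": "arn:aws:sns:us-east-1:111122223333:doc-example-topic",
        "RoleArn": "arn:aws:iam::111122223333:role/doc-example-role",
    },
)
job_id = start_response["JobId"]

# Poll the queue until the completion notification for this job arrives.
# This assumes standard (non-raw) SNS delivery, so the Rekognition notification
# is JSON nested inside the SNS envelope in the message body. A production
# version would also delete processed messages and check the job status.
while True:
    messages = sqs.receive_message(QueueUrl=queue_url, WaitTimeSeconds=20).get(
        "Messages", []
    )
    notifications = [json.loads(json.loads(msg["Body"])["Message"]) for msg in messages]
    if any(note["JobId"] == job_id for note in notifications):
        break

results = rekognition.get_label_detection(JobId=job_id)
for label in results["Labels"]:
    print(label["Timestamp"], label["Label"]["Name"])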

Services used in this example
  • Amazon Rekognition

  • Amazon SNS

  • Amazon SQS