More AWS SDK examples are available in the AWS Doc SDK Examples GitHub repo.
Use CompareFaces with an AWS SDK or CLI
The following code examples show how to use CompareFaces.
For more information, see Comparing faces in images.
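All of the examples below follow the same request and response shape: a source image, a target image, an optional similarity threshold, and a response that contains FaceMatches and UnmatchedFaces. As a quick orientation before the per-SDK examples, here is a minimal sketch using the SDK for Python (Boto3) with images stored in Amazon S3; the bucket name my-bucket is a placeholder, and the call assumes your AWS credentials and Region are already configured.

# Minimal sketch: compare faces in two images stored in Amazon S3.
# The bucket and object names are placeholders, not part of the examples below.
import boto3

rekognition = boto3.client("rekognition")

response = rekognition.compare_faces(
    SourceImage={"S3Object": {"Bucket": "my-bucket", "Name": "source.jpg"}},
    TargetImage={"S3Object": {"Bucket": "my-bucket", "Name": "target.jpg"}},
    SimilarityThreshold=70,
)

# Each match reports the face's bounding box and a similarity score.
for match in response["FaceMatches"]:
    box = match["Face"]["BoundingBox"]
    print(f"Face at {box['Left']} {box['Top']} matches with {match['Similarity']}% similarity.")

print(f"{len(response['UnmatchedFaces'])} face(s) did not match.")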
- .NET
- AWS SDK for .NET

Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.
using System;
using System.IO;
using System.Threading.Tasks;
using Amazon.Rekognition;
using Amazon.Rekognition.Model;

/// <summary>
/// Uses the Amazon Rekognition Service to compare faces in two images.
/// </summary>
public class CompareFaces
{
    public static async Task Main()
    {
        float similarityThreshold = 70F;
        string sourceImage = "source.jpg";
        string targetImage = "target.jpg";

        var rekognitionClient = new AmazonRekognitionClient();

        Amazon.Rekognition.Model.Image imageSource = new Amazon.Rekognition.Model.Image();

        try
        {
            using FileStream fs = new FileStream(sourceImage, FileMode.Open, FileAccess.Read);
            byte[] data = new byte[fs.Length];
            fs.Read(data, 0, (int)fs.Length);
            imageSource.Bytes = new MemoryStream(data);
        }
        catch (Exception)
        {
            Console.WriteLine($"Failed to load source image: {sourceImage}");
            return;
        }

        Amazon.Rekognition.Model.Image imageTarget = new Amazon.Rekognition.Model.Image();

        try
        {
            using FileStream fs = new FileStream(targetImage, FileMode.Open, FileAccess.Read);
            byte[] data = new byte[fs.Length];
            fs.Read(data, 0, (int)fs.Length);
            imageTarget.Bytes = new MemoryStream(data);
        }
        catch (Exception ex)
        {
            Console.WriteLine($"Failed to load target image: {targetImage}");
            Console.WriteLine(ex.Message);
            return;
        }

        var compareFacesRequest = new CompareFacesRequest
        {
            SourceImage = imageSource,
            TargetImage = imageTarget,
            SimilarityThreshold = similarityThreshold,
        };

        // Call the CompareFaces operation.
        var compareFacesResponse = await rekognitionClient.CompareFacesAsync(compareFacesRequest);

        // Display the results.
        compareFacesResponse.FaceMatches.ForEach(match =>
        {
            ComparedFace face = match.Face;
            BoundingBox position = face.BoundingBox;
            Console.WriteLine($"Face at {position.Left} {position.Top} matches with {match.Similarity}% confidence.");
        });

        Console.WriteLine($"Found {compareFacesResponse.UnmatchedFaces.Count} face(s) that did not match.");
    }
}
For API details, see CompareFaces in AWS SDK for .NET API Reference.
- CLI
- AWS CLI
To compare faces in two images

The following compare-faces command compares faces in two images stored in an Amazon S3 bucket.

aws rekognition compare-faces \
    --source-image '{"S3Object":{"Bucket":"MyImageS3Bucket","Name":"source.jpg"}}' \
    --target-image '{"S3Object":{"Bucket":"MyImageS3Bucket","Name":"target.jpg"}}'

Output:
{ "UnmatchedFaces": [], "FaceMatches": [ { "Face": { "BoundingBox": { "Width": 0.12368916720151901, "Top": 0.16007372736930847, "Left": 0.5901257991790771, "Height": 0.25140416622161865 }, "Confidence": 100.0, "Pose": { "Yaw": -3.7351467609405518, "Roll": -0.10309021919965744, "Pitch": 0.8637830018997192 }, "Quality": { "Sharpness": 95.51618957519531, "Brightness": 65.29893493652344 }, "Landmarks": [ { "Y": 0.26721030473709106, "X": 0.6204193830490112, "Type": "eyeLeft" }, { "Y": 0.26831310987472534, "X": 0.6776827573776245, "Type": "eyeRight" }, { "Y": 0.3514654338359833, "X": 0.6241428852081299, "Type": "mouthLeft" }, { "Y": 0.35258132219314575, "X": 0.6713621020317078, "Type": "mouthRight" }, { "Y": 0.3140771687030792, "X": 0.6428444981575012, "Type": "nose" } ] }, "Similarity": 100.0 } ], "SourceImageFace": { "BoundingBox": { "Width": 0.12368916720151901, "Top": 0.16007372736930847, "Left": 0.5901257991790771, "Height": 0.25140416622161865 }, "Confidence": 100.0 } }
For more information, see Comparing faces in images in the Amazon Rekognition Developer Guide.
For API details, see CompareFaces in AWS CLI Command Reference.
- Java
- SDK for Java 2.x

Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.rekognition.RekognitionClient;
import software.amazon.awssdk.services.rekognition.model.RekognitionException;
import software.amazon.awssdk.services.rekognition.model.Image;
import software.amazon.awssdk.services.rekognition.model.CompareFacesRequest;
import software.amazon.awssdk.services.rekognition.model.CompareFacesResponse;
import software.amazon.awssdk.services.rekognition.model.CompareFacesMatch;
import software.amazon.awssdk.services.rekognition.model.ComparedFace;
import software.amazon.awssdk.services.rekognition.model.BoundingBox;
import software.amazon.awssdk.core.SdkBytes;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.InputStream;
import java.util.List;

/**
 * Before running this Java V2 code example, set up your development
 * environment, including your credentials.
 *
 * For more information, see the following documentation topic:
 *
 * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html
 */
public class CompareFaces {
    public static void main(String[] args) {
        final String usage = """

                Usage: <pathSource> <pathTarget>

                Where:
                   pathSource - The path to the source image (for example, C:\\AWS\\pic1.png).\s
                   pathTarget - The path to the target image (for example, C:\\AWS\\pic2.png).\s
                """;

        if (args.length != 2) {
            System.out.println(usage);
            System.exit(1);
        }

        Float similarityThreshold = 70F;
        String sourceImage = args[0];
        String targetImage = args[1];
        Region region = Region.US_EAST_1;
        RekognitionClient rekClient = RekognitionClient.builder()
                .region(region)
                .build();

        compareTwoFaces(rekClient, similarityThreshold, sourceImage, targetImage);
        rekClient.close();
    }

    public static void compareTwoFaces(RekognitionClient rekClient, Float similarityThreshold, String sourceImage,
            String targetImage) {
        try {
            InputStream sourceStream = new FileInputStream(sourceImage);
            InputStream tarStream = new FileInputStream(targetImage);
            SdkBytes sourceBytes = SdkBytes.fromInputStream(sourceStream);
            SdkBytes targetBytes = SdkBytes.fromInputStream(tarStream);

            // Create an Image object for the source image.
            Image souImage = Image.builder()
                    .bytes(sourceBytes)
                    .build();

            Image tarImage = Image.builder()
                    .bytes(targetBytes)
                    .build();

            CompareFacesRequest facesRequest = CompareFacesRequest.builder()
                    .sourceImage(souImage)
                    .targetImage(tarImage)
                    .similarityThreshold(similarityThreshold)
                    .build();

            // Compare the two images.
            CompareFacesResponse compareFacesResult = rekClient.compareFaces(facesRequest);
            List<CompareFacesMatch> faceDetails = compareFacesResult.faceMatches();
            for (CompareFacesMatch match : faceDetails) {
                ComparedFace face = match.face();
                BoundingBox position = face.boundingBox();
                System.out.println("Face at " + position.left().toString()
                        + " " + position.top()
                        + " matches with " + face.confidence().toString()
                        + "% confidence.");
            }

            List<ComparedFace> uncompared = compareFacesResult.unmatchedFaces();
            System.out.println("There was " + uncompared.size() + " face(s) that did not match");
            System.out.println("Source image rotation: " + compareFacesResult.sourceImageOrientationCorrection());
            System.out.println("target image rotation: " + compareFacesResult.targetImageOrientationCorrection());

        } catch (RekognitionException | FileNotFoundException e) {
            System.out.println("Failed to load source image " + sourceImage);
            System.exit(1);
        }
    }
}
For API details, see CompareFaces in AWS SDK for Java 2.x API Reference.
- Kotlin
- SDK for Kotlin

Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.
// Imports added for completeness (AWS SDK for Kotlin packages).
import aws.sdk.kotlin.services.rekognition.RekognitionClient
import aws.sdk.kotlin.services.rekognition.model.CompareFacesMatch
import aws.sdk.kotlin.services.rekognition.model.CompareFacesRequest
import aws.sdk.kotlin.services.rekognition.model.Image
import java.io.File

suspend fun compareTwoFaces(
    similarityThresholdVal: Float,
    sourceImageVal: String,
    targetImageVal: String,
) {
    val sourceBytes = (File(sourceImageVal).readBytes())
    val targetBytes = (File(targetImageVal).readBytes())

    // Create an Image object for the source image.
    val souImage =
        Image {
            bytes = sourceBytes
        }

    val tarImage =
        Image {
            bytes = targetBytes
        }

    val facesRequest =
        CompareFacesRequest {
            sourceImage = souImage
            targetImage = tarImage
            similarityThreshold = similarityThresholdVal
        }

    RekognitionClient { region = "us-east-1" }.use { rekClient ->
        val compareFacesResult = rekClient.compareFaces(facesRequest)
        val faceDetails = compareFacesResult.faceMatches
        if (faceDetails != null) {
            for (match: CompareFacesMatch in faceDetails) {
                val face = match.face
                val position = face?.boundingBox
                if (position != null) {
                    println("Face at ${position.left} ${position.top} matches with ${face.confidence} % confidence.")
                }
            }
        }

        val uncompared = compareFacesResult.unmatchedFaces
        if (uncompared != null) {
            println("There was ${uncompared.size} face(s) that did not match")
        }

        println("Source image rotation: ${compareFacesResult.sourceImageOrientationCorrection}")
        println("target image rotation: ${compareFacesResult.targetImageOrientationCorrection}")
    }
}
For API details, see CompareFaces in AWS SDK for Kotlin API reference.
- Python
- SDK for Python (Boto3)

Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.
import logging

from botocore.exceptions import ClientError

# Note: RekognitionFace is a helper class defined elsewhere in the complete
# example in the AWS Code Examples Repository; it wraps the face data that
# Amazon Rekognition returns.

logger = logging.getLogger(__name__)


class RekognitionImage:
    """
    Encapsulates an Amazon Rekognition image. This class is a thin wrapper
    around parts of the Boto3 Amazon Rekognition API.
    """

    def __init__(self, image, image_name, rekognition_client):
        """
        Initializes the image object.

        :param image: Data that defines the image, either the image bytes or
                      an Amazon S3 bucket and object key.
        :param image_name: The name of the image.
        :param rekognition_client: A Boto3 Rekognition client.
        """
        self.image = image
        self.image_name = image_name
        self.rekognition_client = rekognition_client

    def compare_faces(self, target_image, similarity):
        """
        Compares faces in the image with the largest face in the target image.

        :param target_image: The target image to compare against.
        :param similarity: Faces in the image must have a similarity value greater
                           than this value to be included in the results.
        :return: A tuple. The first element is the list of faces that match the
                 reference image. The second element is the list of faces that have
                 a similarity value below the specified threshold.
        """
        try:
            response = self.rekognition_client.compare_faces(
                SourceImage=self.image,
                TargetImage=target_image.image,
                SimilarityThreshold=similarity,
            )
            matches = [
                RekognitionFace(match["Face"]) for match in response["FaceMatches"]
            ]
            unmatches = [RekognitionFace(face) for face in response["UnmatchedFaces"]]
            logger.info(
                "Found %s matched faces and %s unmatched faces.",
                len(matches),
                len(unmatches),
            )
        except ClientError:
            logger.exception(
                "Couldn't match faces from %s to %s.",
                self.image_name,
                target_image.image_name,
            )
            raise
        else:
            return matches, unmatches
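The class above is only the wrapper; the complete example in the repository also defines the RekognitionFace helper and the client setup. As a rough usage sketch under those assumptions (the local file names source.jpg and target.jpg are placeholders):

# Hypothetical usage of the RekognitionImage wrapper shown above.
import boto3

rekognition_client = boto3.client("rekognition")

# Wrap two local images as byte payloads (file names are placeholders).
with open("source.jpg", "rb") as f:
    source = RekognitionImage({"Bytes": f.read()}, "source.jpg", rekognition_client)
with open("target.jpg", "rb") as f:
    target = RekognitionImage({"Bytes": f.read()}, "target.jpg", rekognition_client)

# Compare faces with a 70% similarity threshold.
matches, unmatches = source.compare_faces(target, 70)
print(f"Found {len(matches)} matching face(s) and {len(unmatches)} unmatched face(s).")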
For API details, see CompareFaces in AWS SDK for Python (Boto3) API Reference.