With Amazon Rekognition, you can compare faces between two images by using the CompareFaces operation. This feature is useful for applications such as identity verification or photo matching.
CompareFaces compares a face in the source image with each face in the target image. Images are passed to CompareFaces as either:
- A base64-encoded representation of an image.
- Amazon S3 objects.
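As an illustration, the following sketch shows both ways of constructing the Image input with the AWS SDK for Java. The bucket and file names are placeholders, and when you supply raw bytes through the SDK it handles the base64 encoding for you.

package aws.example.rekognition.image;

import com.amazonaws.services.rekognition.model.Image;
import com.amazonaws.services.rekognition.model.S3Object;
import java.nio.ByteBuffer;
import java.nio.file.Files;
import java.nio.file.Paths;

public class ImageSources {
    public static void main(String[] args) throws Exception {
        // Option 1: raw image bytes from the local file system ("photo.jpg" is a
        // placeholder path). The SDK base64-encodes the bytes for you.
        ByteBuffer imageBytes = ByteBuffer.wrap(Files.readAllBytes(Paths.get("photo.jpg")));
        Image fromBytes = new Image().withBytes(imageBytes);

        // Option 2: a reference to an object already stored in Amazon S3
        // ("amzn-s3-demo-bucket" and "photo.jpg" are placeholder names).
        Image fromS3 = new Image()
                .withS3Object(new S3Object()
                        .withBucket("amzn-s3-demo-bucket")
                        .withName("photo.jpg"));
    }
}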
Face detection versus face comparison
Face comparison is different from face detection. Face detection (which uses DetectFaces) only identifies the presence and location of faces in an image or video. In contrast, face comparison involves comparing a detected face in a source image to faces in a target image to find matches.
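To make the distinction concrete, here is a minimal detection-only sketch using the AWS SDK for Java (the file name is a placeholder). It reports where faces are, but it returns no similarity information.

package aws.example.rekognition.image;

import com.amazonaws.services.rekognition.AmazonRekognition;
import com.amazonaws.services.rekognition.AmazonRekognitionClientBuilder;
import com.amazonaws.services.rekognition.model.DetectFacesRequest;
import com.amazonaws.services.rekognition.model.DetectFacesResult;
import com.amazonaws.services.rekognition.model.FaceDetail;
import com.amazonaws.services.rekognition.model.Image;
import java.nio.ByteBuffer;
import java.nio.file.Files;
import java.nio.file.Paths;

public class DetectFacesOnly {
    public static void main(String[] args) throws Exception {
        AmazonRekognition rekognitionClient = AmazonRekognitionClientBuilder.defaultClient();
        ByteBuffer imageBytes = ByteBuffer.wrap(Files.readAllBytes(Paths.get("photo.jpg")));

        DetectFacesRequest request = new DetectFacesRequest()
                .withImage(new Image().withBytes(imageBytes));
        DetectFacesResult result = rekognitionClient.detectFaces(request);

        // DetectFaces reports presence and location only; there is no similarity score.
        for (FaceDetail face : result.getFaceDetails()) {
            System.out.println("Face found at left=" + face.getBoundingBox().getLeft()
                    + ", top=" + face.getBoundingBox().getTop());
        }
    }
}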
Similarity thresholds
Use the SimilarityThreshold parameter to define the minimum confidence level for matches to be included in the response. By default, only faces with a similarity score greater than or equal to 80% are returned in the response.
Note
CompareFaces uses machine learning algorithms, which are probabilistic. A false negative is an incorrect prediction that a face in the target image has a low similarity confidence score when compared to the face in the source image. To reduce the probability of false negatives, we recommend that you compare the target image against multiple source images. If you plan to use CompareFaces to make a decision that impacts an individual's rights, privacy, or access to services, we recommend that you pass the result to a human for review and further validation before taking action.
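One way to apply the multiple-source-image recommendation is to call CompareFaces once per source image and keep the highest similarity score observed. The following sketch illustrates the idea; the file names and the 70% threshold are placeholder assumptions, not values prescribed by the service.

package aws.example.rekognition.image;

import com.amazonaws.services.rekognition.AmazonRekognition;
import com.amazonaws.services.rekognition.AmazonRekognitionClientBuilder;
import com.amazonaws.services.rekognition.model.CompareFacesMatch;
import com.amazonaws.services.rekognition.model.CompareFacesRequest;
import com.amazonaws.services.rekognition.model.CompareFacesResult;
import com.amazonaws.services.rekognition.model.Image;
import java.nio.ByteBuffer;
import java.nio.file.Files;
import java.nio.file.Paths;

public class CompareAgainstMultipleSources {
    public static void main(String[] args) throws Exception {
        AmazonRekognition rekognitionClient = AmazonRekognitionClientBuilder.defaultClient();
        // Placeholder file names: several photos of the same person, plus one target photo.
        String[] sourceImages = {"source1.jpg", "source2.jpg", "source3.jpg"};
        ByteBuffer targetBytes = ByteBuffer.wrap(Files.readAllBytes(Paths.get("target.jpg")));

        float bestSimilarity = 0F;
        for (String sourceImage : sourceImages) {
            ByteBuffer sourceBytes = ByteBuffer.wrap(Files.readAllBytes(Paths.get(sourceImage)));
            CompareFacesRequest request = new CompareFacesRequest()
                    .withSourceImage(new Image().withBytes(sourceBytes))
                    .withTargetImage(new Image().withBytes(targetBytes))
                    .withSimilarityThreshold(70F);
            CompareFacesResult result = rekognitionClient.compareFaces(request);
            // Keep the highest similarity seen across all source images.
            for (CompareFacesMatch match : result.getFaceMatches()) {
                bestSimilarity = Math.max(bestSimilarity, match.getSimilarity());
            }
        }
        System.out.println("Best similarity across source images: " + bestSimilarity + "%");
        // Per the note above, a human should review any consequential decision
        // that is based on this score.
    }
}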
The following code example demonstrates how to use the CompareFaces operation with the AWS SDK for Java. If you use the AWS CLI instead, you upload two JPEG images to your Amazon S3 bucket and specify the object key names. In this example, you load two files from the local file system and input them as image byte arrays.
To compare faces
- If you haven't already:
  - Create or update a user with AmazonRekognitionFullAccess and AmazonS3ReadOnlyAccess (AWS CLI example only) permissions. For more information, see Step 1: Set up an AWS account and create a User.
  - Install and configure the AWS CLI and the AWS SDKs. For more information, see Step 2: Set up the AWS CLI and AWS SDKs.
- Use the following example code to call the CompareFaces operation. This example displays information about matching faces in source and target images that are loaded from the local file system. Replace the values of sourceImage and targetImage with the path and file name of the source and target images.

//Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
//PDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.)
package aws.example.rekognition.image;

import com.amazonaws.services.rekognition.AmazonRekognition;
import com.amazonaws.services.rekognition.AmazonRekognitionClientBuilder;
import com.amazonaws.services.rekognition.model.Image;
import com.amazonaws.services.rekognition.model.BoundingBox;
import com.amazonaws.services.rekognition.model.CompareFacesMatch;
import com.amazonaws.services.rekognition.model.CompareFacesRequest;
import com.amazonaws.services.rekognition.model.CompareFacesResult;
import com.amazonaws.services.rekognition.model.ComparedFace;
import java.util.List;
import java.io.File;
import java.io.FileInputStream;
import java.io.InputStream;
import java.nio.ByteBuffer;
import com.amazonaws.util.IOUtils;

public class CompareFaces {

    public static void main(String[] args) throws Exception {
        Float similarityThreshold = 70F;
        String sourceImage = "source.jpg";
        String targetImage = "target.jpg";
        ByteBuffer sourceImageBytes = null;
        ByteBuffer targetImageBytes = null;

        AmazonRekognition rekognitionClient = AmazonRekognitionClientBuilder.defaultClient();

        // Load source and target images and create input parameters
        try (InputStream inputStream = new FileInputStream(new File(sourceImage))) {
            sourceImageBytes = ByteBuffer.wrap(IOUtils.toByteArray(inputStream));
        } catch (Exception e) {
            System.out.println("Failed to load source image " + sourceImage);
            System.exit(1);
        }
        try (InputStream inputStream = new FileInputStream(new File(targetImage))) {
            targetImageBytes = ByteBuffer.wrap(IOUtils.toByteArray(inputStream));
        } catch (Exception e) {
            System.out.println("Failed to load target image: " + targetImage);
            System.exit(1);
        }

        Image source = new Image().withBytes(sourceImageBytes);
        Image target = new Image().withBytes(targetImageBytes);

        CompareFacesRequest request = new CompareFacesRequest()
                .withSourceImage(source)
                .withTargetImage(target)
                .withSimilarityThreshold(similarityThreshold);

        // Call operation
        CompareFacesResult compareFacesResult = rekognitionClient.compareFaces(request);

        // Display results
        List<CompareFacesMatch> faceDetails = compareFacesResult.getFaceMatches();
        for (CompareFacesMatch match : faceDetails) {
            ComparedFace face = match.getFace();
            BoundingBox position = face.getBoundingBox();
            System.out.println("Face at " + position.getLeft().toString()
                    + " " + position.getTop()
                    + " matches with " + match.getSimilarity().toString()
                    + "% confidence.");
        }

        List<ComparedFace> uncompared = compareFacesResult.getUnmatchedFaces();
        System.out.println("There were " + uncompared.size() + " face(s) that did not match");
    }
}
CompareFaces operation request
The input to CompareFaces is an image. In this example, the source and target images are loaded from the local file system. The SimilarityThreshold input parameter specifies the minimum confidence that compared faces must match to be included in the response. For more information, see Working with images.
{
    "SourceImage": {
        "Bytes": "/9j/4AAQSk2Q==..."
    },
    "TargetImage": {
        "Bytes": "/9j/4O1Q==..."
    },
    "SimilarityThreshold": 70
}
CompareFaces operation response
The response includes:
- An array of face matches: A list of matched faces with similarity scores and metadata for each matching face. If multiple faces match, the FaceMatches array includes all of the face matches.
- Face match details: Each matched face also provides a bounding box, confidence value, landmark locations, and similarity score.
- A list of unmatched faces: The response also includes faces from the target image that didn't match the source image face, with a bounding box for each unmatched face.
- Source face information: Includes information about the face from the source image that was used for comparison, including the bounding box and confidence value.
The example shows that one face match was found in the target image. For that face match, the response provides a bounding box and a confidence value (the level of confidence that Amazon Rekognition has that the bounding box contains a face). The similarity score of 100 indicates how similar the faces are. The example also shows one face that Amazon Rekognition found in the target image that didn't match the face that was analyzed in the source image.
{
    "FaceMatches": [{
        "Face": {
            "BoundingBox": {
                "Width": 0.5521978139877319,
                "Top": 0.1203877404332161,
                "Left": 0.23626373708248138,
                "Height": 0.3126954436302185
            },
            "Confidence": 99.98751068115234,
            "Pose": {
                "Yaw": -82.36799621582031,
                "Roll": -62.13221740722656,
                "Pitch": 0.8652129173278809
            },
            "Quality": {
                "Sharpness": 99.99880981445312,
                "Brightness": 54.49755096435547
            },
            "Landmarks": [{
                "Y": 0.2996366024017334,
                "X": 0.41685718297958374,
                "Type": "eyeLeft"
            },
            {
                "Y": 0.2658946216106415,
                "X": 0.4414493441581726,
                "Type": "eyeRight"
            },
            {
                "Y": 0.3465650677680969,
                "X": 0.48636093735694885,
                "Type": "nose"
            },
            {
                "Y": 0.30935320258140564,
                "X": 0.6251809000968933,
                "Type": "mouthLeft"
            },
            {
                "Y": 0.26942989230155945,
                "X": 0.6454493403434753,
                "Type": "mouthRight"
            }]
        },
        "Similarity": 100.0
    }],
    "SourceImageOrientationCorrection": "ROTATE_90",
    "TargetImageOrientationCorrection": "ROTATE_90",
    "UnmatchedFaces": [{
        "BoundingBox": {
            "Width": 0.4890109896659851,
            "Top": 0.6566604375839233,
            "Left": 0.10989011079072952,
            "Height": 0.278298944234848
        },
        "Confidence": 99.99992370605469,
        "Pose": {
            "Yaw": 51.51519012451172,
            "Roll": -110.32493591308594,
            "Pitch": -2.322134017944336
        },
        "Quality": {
            "Sharpness": 99.99671173095703,
            "Brightness": 57.23163986206055
        },
        "Landmarks": [{
            "Y": 0.8288310766220093,
            "X": 0.3133862614631653,
            "Type": "eyeLeft"
        },
        {
            "Y": 0.7632885575294495,
            "X": 0.28091415762901306,
            "Type": "eyeRight"
        },
        {
            "Y": 0.7417283654212952,
            "X": 0.3631140887737274,
            "Type": "nose"
        },
        {
            "Y": 0.8081989884376526,
            "X": 0.48565614223480225,
            "Type": "mouthLeft"
        },
        {
            "Y": 0.7548204660415649,
            "X": 0.46090251207351685,
            "Type": "mouthRight"
        }]
    }],
    "SourceImageFace": {
        "BoundingBox": {
            "Width": 0.5521978139877319,
            "Top": 0.1203877404332161,
            "Left": 0.23626373708248138,
            "Height": 0.3126954436302185
        },
        "Confidence": 99.98751068115234
    }
}
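In addition to the FaceMatches and UnmatchedFaces arrays that the earlier example iterates over, the response carries the source face and orientation fields shown above. The following is a minimal sketch for reading them with the AWS SDK for Java; the helper class name is illustrative, and compareFacesResult is assumed to come from a compareFaces call like the one in the earlier example.

package aws.example.rekognition.image;

import com.amazonaws.services.rekognition.model.CompareFacesResult;
import com.amazonaws.services.rekognition.model.ComparedSourceImageFace;

public class ShowResponseDetails {
    // Prints the source-face and orientation fields from a CompareFaces response.
    public static void showDetails(CompareFacesResult compareFacesResult) {
        // The face in the source image that was used for the comparison.
        ComparedSourceImageFace sourceFace = compareFacesResult.getSourceImageFace();
        System.out.println("Source face confidence: " + sourceFace.getConfidence());
        System.out.println("Source face bounding box: " + sourceFace.getBoundingBox());

        // Orientation corrections that Amazon Rekognition applied, if any.
        System.out.println("Source orientation: "
                + compareFacesResult.getSourceImageOrientationCorrection());
        System.out.println("Target orientation: "
                + compareFacesResult.getTargetImageOrientationCorrection());
    }
}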