More AWS SDK examples are available in the AWS Doc SDK Examples GitHub repo.
Use DetectFaces with an AWS SDK or CLI

The following code examples show how to use DetectFaces.

For more information, see Detecting faces in an image.
- .NET
- AWS SDK for .NET
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon.Rekognition;
using Amazon.Rekognition.Model;

/// <summary>
/// Uses the Amazon Rekognition Service to detect faces within an image
/// stored in an Amazon Simple Storage Service (Amazon S3) bucket.
/// </summary>
public class DetectFaces
{
    public static async Task Main()
    {
        string photo = "input.jpg";
        string bucket = "amzn-s3-demo-bucket";

        var rekognitionClient = new AmazonRekognitionClient();

        var detectFacesRequest = new DetectFacesRequest()
        {
            Image = new Image()
            {
                S3Object = new S3Object()
                {
                    Name = photo,
                    Bucket = bucket,
                },
            },

            // Attributes can be "ALL" or "DEFAULT".
            // "DEFAULT": BoundingBox, Confidence, Landmarks, Pose, and Quality.
            // "ALL": See https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/Rekognition/TFaceDetail.html
            Attributes = new List<string>() { "ALL" },
        };

        try
        {
            DetectFacesResponse detectFacesResponse = await rekognitionClient.DetectFacesAsync(detectFacesRequest);
            bool hasAll = detectFacesRequest.Attributes.Contains("ALL");
            foreach (FaceDetail face in detectFacesResponse.FaceDetails)
            {
                Console.WriteLine($"BoundingBox: top={face.BoundingBox.Top} left={face.BoundingBox.Left} width={face.BoundingBox.Width} height={face.BoundingBox.Height}");
                Console.WriteLine($"Confidence: {face.Confidence}");
                Console.WriteLine($"Landmarks: {face.Landmarks.Count}");
                Console.WriteLine($"Pose: pitch={face.Pose.Pitch} roll={face.Pose.Roll} yaw={face.Pose.Yaw}");
                Console.WriteLine($"Brightness: {face.Quality.Brightness}\tSharpness: {face.Quality.Sharpness}");

                if (hasAll)
                {
                    Console.WriteLine($"Estimated age is between {face.AgeRange.Low} and {face.AgeRange.High} years old.");
                }
            }
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.Message);
        }
    }
}
Display bounding box information for all faces in an image.
using System;
using System.Collections.Generic;
using System.Drawing;
using System.IO;
using System.Threading.Tasks;
using Amazon.Rekognition;
using Amazon.Rekognition.Model;

/// <summary>
/// Uses the Amazon Rekognition Service to display the details of the
/// bounding boxes around the faces detected in an image.
/// </summary>
public class ImageOrientationBoundingBox
{
    public static async Task Main()
    {
        string photo = @"D:\Development\AWS-Examples\Rekognition\target.jpg"; // "photo.jpg";

        var rekognitionClient = new AmazonRekognitionClient();

        var image = new Amazon.Rekognition.Model.Image();
        try
        {
            using var fs = new FileStream(photo, FileMode.Open, FileAccess.Read);
            byte[] data = new byte[fs.Length];
            fs.Read(data, 0, (int)fs.Length);
            image.Bytes = new MemoryStream(data);
        }
        catch (Exception)
        {
            Console.WriteLine("Failed to load file " + photo);
            return;
        }

        int height;
        int width;

        // Used to extract original photo width/height.
        using (var imageBitmap = new Bitmap(photo))
        {
            height = imageBitmap.Height;
            width = imageBitmap.Width;
        }

        Console.WriteLine("Image Information:");
        Console.WriteLine(photo);
        Console.WriteLine("Image Height: " + height);
        Console.WriteLine("Image Width: " + width);

        try
        {
            var detectFacesRequest = new DetectFacesRequest()
            {
                Image = image,
                Attributes = new List<string>() { "ALL" },
            };

            DetectFacesResponse detectFacesResponse = await rekognitionClient.DetectFacesAsync(detectFacesRequest);
            detectFacesResponse.FaceDetails.ForEach(face =>
            {
                Console.WriteLine("Face:");
                ShowBoundingBoxPositions(
                    height,
                    width,
                    face.BoundingBox,
                    detectFacesResponse.OrientationCorrection);

                Console.WriteLine($"BoundingBox: top={face.BoundingBox.Top} left={face.BoundingBox.Left} width={face.BoundingBox.Width} height={face.BoundingBox.Height}");
                Console.WriteLine($"The detected face is estimated to be between {face.AgeRange.Low} and {face.AgeRange.High} years old.\n");
            });
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.Message);
        }
    }

    /// <summary>
    /// Display the bounding box information for an image.
    /// </summary>
    /// <param name="imageHeight">The height of the image.</param>
    /// <param name="imageWidth">The width of the image.</param>
    /// <param name="box">The bounding box for a face found within the image.</param>
    /// <param name="rotation">The rotation of the face's bounding box.</param>
    public static void ShowBoundingBoxPositions(int imageHeight, int imageWidth, BoundingBox box, string rotation)
    {
        float left;
        float top;

        if (rotation == null)
        {
            Console.WriteLine("No estimated orientation. Check Exif data.");
            return;
        }

        // Calculate face position based on image orientation.
        switch (rotation)
        {
            case "ROTATE_0":
                left = imageWidth * box.Left;
                top = imageHeight * box.Top;
                break;
            case "ROTATE_90":
                left = imageHeight * (1 - (box.Top + box.Height));
                top = imageWidth * box.Left;
                break;
            case "ROTATE_180":
                left = imageWidth - (imageWidth * (box.Left + box.Width));
                top = imageHeight * (1 - (box.Top + box.Height));
                break;
            case "ROTATE_270":
                left = imageHeight * box.Top;
                top = imageWidth * (1 - box.Left - box.Width);
                break;
            default:
                Console.WriteLine("No estimated orientation information. Check Exif data.");
                return;
        }

        // Display face location information.
        Console.WriteLine($"Left: {left}");
        Console.WriteLine($"Top: {top}");
        Console.WriteLine($"Face Width: {imageWidth * box.Width}");
        Console.WriteLine($"Face Height: {imageHeight * box.Height}");
    }
}
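Amazon Rekognition returns BoundingBox values as ratios of the overall image size, which is why ShowBoundingBoxPositions multiplies them by the pixel width and height. For example, a face with a Left of 0.59 and a Width of 0.12 in a 1,000-pixel-wide image starts roughly 590 pixels from the left edge and spans roughly 120 pixels.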
For API details, see DetectFaces in the AWS SDK for .NET API Reference.
- CLI
- AWS CLI
To detect faces in an image

The following detect-faces command detects faces in the specified image stored in an Amazon S3 bucket.

aws rekognition detect-faces \
    --image '{"S3Object":{"Bucket":"MyImageS3Bucket","Name":"MyFriend.jpg"}}' \
    --attributes "ALL"
Output:
{ "FaceDetails": [ { "Confidence": 100.0, "Eyeglasses": { "Confidence": 98.91107940673828, "Value": false }, "Sunglasses": { "Confidence": 99.7966537475586, "Value": false }, "Gender": { "Confidence": 99.56611633300781, "Value": "Male" }, "Landmarks": [ { "Y": 0.26721030473709106, "X": 0.6204193830490112, "Type": "eyeLeft" }, { "Y": 0.26831310987472534, "X": 0.6776827573776245, "Type": "eyeRight" }, { "Y": 0.3514654338359833, "X": 0.6241428852081299, "Type": "mouthLeft" }, { "Y": 0.35258132219314575, "X": 0.6713621020317078, "Type": "mouthRight" }, { "Y": 0.3140771687030792, "X": 0.6428444981575012, "Type": "nose" }, { "Y": 0.24662546813488007, "X": 0.6001564860343933, "Type": "leftEyeBrowLeft" }, { "Y": 0.24326619505882263, "X": 0.6303644776344299, "Type": "leftEyeBrowRight" }, { "Y": 0.23818562924861908, "X": 0.6146903038024902, "Type": "leftEyeBrowUp" }, { "Y": 0.24373626708984375, "X": 0.6640064716339111, "Type": "rightEyeBrowLeft" }, { "Y": 0.24877218902111053, "X": 0.7025929093360901, "Type": "rightEyeBrowRight" }, { "Y": 0.23938551545143127, "X": 0.6823262572288513, "Type": "rightEyeBrowUp" }, { "Y": 0.265746533870697, "X": 0.6112898588180542, "Type": "leftEyeLeft" }, { "Y": 0.2676128149032593, "X": 0.6317071914672852, "Type": "leftEyeRight" }, { "Y": 0.262735515832901, "X": 0.6201658248901367, "Type": "leftEyeUp" }, { "Y": 0.27025148272514343, "X": 0.6206279993057251, "Type": "leftEyeDown" }, { "Y": 0.268223375082016, "X": 0.6658390760421753, "Type": "rightEyeLeft" }, { "Y": 0.2672517001628876, "X": 0.687832236289978, "Type": "rightEyeRight" }, { "Y": 0.26383838057518005, "X": 0.6769183874130249, "Type": "rightEyeUp" }, { "Y": 0.27138751745224, "X": 0.676596462726593, "Type": "rightEyeDown" }, { "Y": 0.32283174991607666, "X": 0.6350004076957703, "Type": "noseLeft" }, { "Y": 0.3219289481639862, "X": 0.6567046642303467, "Type": "noseRight" }, { "Y": 0.3420318365097046, "X": 0.6450609564781189, "Type": "mouthUp" }, { "Y": 0.3664324879646301, "X": 0.6455618143081665, "Type": "mouthDown" }, { "Y": 0.26721030473709106, "X": 0.6204193830490112, "Type": "leftPupil" }, { "Y": 0.26831310987472534, "X": 0.6776827573776245, "Type": "rightPupil" }, { "Y": 0.26343393325805664, "X": 0.5946047306060791, "Type": "upperJawlineLeft" }, { "Y": 0.3543180525302887, "X": 0.6044883728027344, "Type": "midJawlineLeft" }, { "Y": 0.4084877669811249, "X": 0.6477024555206299, "Type": "chinBottom" }, { "Y": 0.3562754988670349, "X": 0.707981526851654, "Type": "midJawlineRight" }, { "Y": 0.26580461859703064, "X": 0.7234612107276917, "Type": "upperJawlineRight" } ], "Pose": { "Yaw": -3.7351467609405518, "Roll": -0.10309021919965744, "Pitch": 0.8637830018997192 }, "Emotions": [ { "Confidence": 8.74203109741211, "Type": "SURPRISED" }, { "Confidence": 2.501944065093994, "Type": "ANGRY" }, { "Confidence": 0.7378743290901184, "Type": "DISGUSTED" }, { "Confidence": 3.5296201705932617, "Type": "HAPPY" }, { "Confidence": 1.7162904739379883, "Type": "SAD" }, { "Confidence": 9.518536567687988, "Type": "CONFUSED" }, { "Confidence": 0.45474427938461304, "Type": "FEAR" }, { "Confidence": 72.79895782470703, "Type": "CALM" } ], "AgeRange": { "High": 48, "Low": 32 }, "EyesOpen": { "Confidence": 98.93987274169922, "Value": true }, "BoundingBox": { "Width": 0.12368916720151901, "Top": 0.16007372736930847, "Left": 0.5901257991790771, "Height": 0.25140416622161865 }, "Smile": { "Confidence": 93.4493179321289, "Value": false }, "MouthOpen": { "Confidence": 90.53053283691406, "Value": false }, "Quality": { "Sharpness": 
95.51618957519531, "Brightness": 65.29893493652344 }, "Mustache": { "Confidence": 89.85221099853516, "Value": false }, "Beard": { "Confidence": 86.1991195678711, "Value": true } } ] }
For more information, see Detecting faces in an image in the Amazon Rekognition Developer Guide.
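If you only need part of this output, the AWS CLI's global --query option filters the response with a JMESPath expression. As a sketch, the following variation of the same command (same placeholder bucket and image) prints only each face's estimated age range:

aws rekognition detect-faces \
    --image '{"S3Object":{"Bucket":"MyImageS3Bucket","Name":"MyFriend.jpg"}}' \
    --attributes "ALL" \
    --query 'FaceDetails[].AgeRange'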
For API details, see DetectFaces in the AWS CLI Command Reference.
- Java
- SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.rekognition.RekognitionClient;
import software.amazon.awssdk.services.rekognition.model.RekognitionException;
import software.amazon.awssdk.services.rekognition.model.DetectFacesRequest;
import software.amazon.awssdk.services.rekognition.model.DetectFacesResponse;
import software.amazon.awssdk.services.rekognition.model.Image;
import software.amazon.awssdk.services.rekognition.model.Attribute;
import software.amazon.awssdk.services.rekognition.model.FaceDetail;
import software.amazon.awssdk.services.rekognition.model.AgeRange;
import software.amazon.awssdk.core.SdkBytes;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.InputStream;
import java.util.List;

/**
 * Before running this Java V2 code example, set up your development
 * environment, including your credentials.
 *
 * For more information, see the following documentation topic:
 *
 * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html
 */
public class DetectFaces {
    public static void main(String[] args) {
        final String usage = """

                Usage: <sourceImage>

                Where:
                   sourceImage - The path to the image (for example, C:\\AWS\\pic1.png).\s
                """;

        if (args.length != 1) {
            System.out.println(usage);
            System.exit(1);
        }

        String sourceImage = args[0];
        Region region = Region.US_EAST_1;
        RekognitionClient rekClient = RekognitionClient.builder()
                .region(region)
                .build();

        detectFacesinImage(rekClient, sourceImage);
        rekClient.close();
    }

    public static void detectFacesinImage(RekognitionClient rekClient, String sourceImage) {
        try {
            InputStream sourceStream = new FileInputStream(sourceImage);
            SdkBytes sourceBytes = SdkBytes.fromInputStream(sourceStream);

            // Create an Image object for the source image.
            Image souImage = Image.builder()
                    .bytes(sourceBytes)
                    .build();

            DetectFacesRequest facesRequest = DetectFacesRequest.builder()
                    .attributes(Attribute.ALL)
                    .image(souImage)
                    .build();

            DetectFacesResponse facesResponse = rekClient.detectFaces(facesRequest);
            List<FaceDetail> faceDetails = facesResponse.faceDetails();
            for (FaceDetail face : faceDetails) {
                AgeRange ageRange = face.ageRange();
                System.out.println("The detected face is estimated to be between "
                        + ageRange.low().toString() + " and " + ageRange.high().toString() + " years old.");
                System.out.println("There is a smile : " + face.smile().value().toString());
            }

        } catch (RekognitionException | FileNotFoundException e) {
            System.out.println(e.getMessage());
            System.exit(1);
        }
    }
}
For API details, see DetectFaces in the AWS SDK for Java 2.x API Reference.
- Kotlin
- SDK for Kotlin
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.
import aws.sdk.kotlin.services.rekognition.RekognitionClient
import aws.sdk.kotlin.services.rekognition.model.Attribute
import aws.sdk.kotlin.services.rekognition.model.DetectFacesRequest
import aws.sdk.kotlin.services.rekognition.model.Image
import java.io.File

suspend fun detectFacesinImage(sourceImage: String) {
    // Read the source image into an Image object.
    val souImage = Image {
        bytes = (File(sourceImage).readBytes())
    }

    val request = DetectFacesRequest {
        attributes = listOf(Attribute.All)
        image = souImage
    }

    RekognitionClient { region = "us-east-1" }.use { rekClient ->
        val response = rekClient.detectFaces(request)
        response.faceDetails?.forEach { face ->
            val ageRange = face.ageRange
            println("The detected face is estimated to be between ${ageRange?.low} and ${ageRange?.high} years old.")
            println("There is a smile ${face.smile?.value}")
        }
    }
}
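The snippet above is only the suspend function. A minimal caller might look like the following sketch; the runBlocking bridge and the argument check are illustrative additions, not part of the original example, and assume the kotlinx-coroutines dependency is on the classpath.

import kotlinx.coroutines.runBlocking

fun main(args: Array<String>) = runBlocking {
    // Hypothetical entry point: pass the image path as the first argument.
    require(args.size == 1) { "Usage: <sourceImage>" }
    detectFacesinImage(args[0])
}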
For API details, see DetectFaces in the AWS SDK for Kotlin API reference.
- Python
- SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.
import logging

from botocore.exceptions import ClientError

logger = logging.getLogger(__name__)


class RekognitionImage:
    """
    Encapsulates an Amazon Rekognition image. This class is a thin wrapper
    around parts of the Boto3 Amazon Rekognition API.
    """

    def __init__(self, image, image_name, rekognition_client):
        """
        Initializes the image object.

        :param image: Data that defines the image, either the image bytes or
                      an Amazon S3 bucket and object key.
        :param image_name: The name of the image.
        :param rekognition_client: A Boto3 Rekognition client.
        """
        self.image = image
        self.image_name = image_name
        self.rekognition_client = rekognition_client

    def detect_faces(self):
        """
        Detects faces in the image.

        :return: The list of faces found in the image.
        """
        try:
            response = self.rekognition_client.detect_faces(
                Image=self.image, Attributes=["ALL"]
            )
            # RekognitionFace is a helper class defined elsewhere in the full
            # example; it wraps the face data that Amazon Rekognition returns.
            faces = [RekognitionFace(face) for face in response["FaceDetails"]]
            logger.info("Detected %s faces.", len(faces))
        except ClientError:
            logger.exception("Couldn't detect faces in %s.", self.image_name)
            raise
        else:
            return faces
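Because this class only wraps a client that you pass in, a short hypothetical usage sketch might look like the following; the bucket and object key are placeholders, and RekognitionFace comes from the companion module in the examples repository.

import boto3

# Create a Rekognition client and point the wrapper at an image in Amazon S3.
rekognition_client = boto3.client("rekognition")
image = {"S3Object": {"Bucket": "amzn-s3-demo-bucket", "Name": "input.jpg"}}
rek_image = RekognitionImage(image, "input.jpg", rekognition_client)
faces = rek_image.detect_faces()
print(f"Detected {len(faces)} face(s).")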
For API details, see DetectFaces in the AWS SDK for Python (Boto3) API Reference.