
Identify Faces and Emotions using Microsoft Face API

Facial recognition is one of the key elements you will find in modern designs and implementations. With the advancements in AI and machine learning algorithms, it keeps improving day by day, analyzing and recognizing people more accurately than ever before. Although we have seen facial detection and recognition scenes in Hollywood movies for years, it has only recently become a reality in general-purpose applications.

If you are a solution designer or architect working in the healthcare industry, you can implement a face detection and identification solution to detect patient faces using CCTV cameras, in order to minimize risks and incidents. You may be designing solutions for public-facing business processes in financial companies, banks, or public events. Even in corporate offices, you can implement a facial recognition solution to identify faces and map them to roles. There are thousands of use cases where a facial recognition system can be applied.



Microsoft Face API makes this job simpler and more accessible by providing a cloud-based service, built on some of the most advanced face recognition algorithms, which you can invoke to get accurate results.

Face API has two main functions:

Face Detection: Face API accurately detects up to 64 human faces in an image. The image can be specified as a byte stream or as a URL.
It detects the face location along with optional face-related attributes such as gender, age, head pose, facial hair, etc. For more details about the API for development, click here.
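As a minimal sketch of URL-based detection, the same detect endpoint can be called with a JSON body instead of raw bytes. The key, endpoint, and image URL below are placeholders, not real values; replace them with your own before calling the service:

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class DetectByUrl
{
    // Placeholder values - replace with your own key and region endpoint.
    const string subscriptionKey = "<your-subscription-key>";
    const string uriBase = "https://westcentralus.api.cognitive.microsoft.com/face/v1.0/detect";

    // Assemble the detect URI with the requested attribute list.
    public static string BuildDetectUri(string baseUri, string attributes)
    {
        return baseUri + "?returnFaceId=true&returnFaceAttributes=" + attributes;
    }

    static async Task Main()
    {
        string uri = BuildDetectUri(uriBase, "age,gender,emotion");
        Console.WriteLine("Request URI: " + uri);

        // Without a real key we stop here; the call below shows the request shape.
        if (subscriptionKey.StartsWith("<")) return;

        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", subscriptionKey);

            // For URL-based detection the body is JSON ({"url": ...}), not octet-stream.
            var body = new StringContent("{\"url\":\"https://example.com/photo.jpg\"}",
                                         Encoding.UTF8, "application/json");

            HttpResponseMessage response = await client.PostAsync(uri, body);
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}
```

The rest of this post uses the byte-stream variant, which suits locally stored images.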

Face Recognition
The API provides verification between two detected faces, or between one detected face and one person object. Other face recognition features include:

Finding similar faces: given a target face and a set of candidate faces, search for the candidates that look similar.

Face grouping: given one set of unknown faces, the face grouping API automatically divides them into several disjoint sets. Each set contains similar faces, and all faces in the same group can be considered as belonging to the same person.

Person identification: the Face API can detect and identify people against a people database (a person group, as above).

For further reference on the face recognition functions, click here.
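As a hedged sketch of the verification function, the Microsoft.ProjectOxford.Face NuGet package (installed below) exposes it roughly like this; the class and method names follow SDK version 1.3, the image URLs are placeholders, and a real subscription key is required for the calls to succeed:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.ProjectOxford.Face;

class VerifySketch
{
    // Placeholder - replace with your own subscription key.
    const string subscriptionKey = "<your-subscription-key>";

    static async Task Main()
    {
        if (subscriptionKey.StartsWith("<"))
        {
            Console.WriteLine("Set a real subscription key first.");
            return;
        }

        var client = new FaceServiceClient(subscriptionKey);

        // Detect one face in each image, then verify whether they belong to the same person.
        var facesA = await client.DetectAsync("https://example.com/person-a.jpg");
        var facesB = await client.DetectAsync("https://example.com/person-b.jpg");

        var result = await client.VerifyAsync(facesA[0].FaceId, facesB[0].FaceId);
        Console.WriteLine($"Same person: {result.IsIdentical} (confidence {result.Confidence})");
    }
}
```

The sample later in this post sticks to the raw REST endpoint instead of the SDK client, but both routes hit the same service.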

Below is the sample code I used to perform face detection and emotion identification.

Prerequisites

  1. Microsoft Face API Windows SDK

You can install the Face API Windows SDK from the NuGet package manager. This is going to be a C# console application, but you can select any other template as well. Once the project is open, open the NuGet Package Manager Console and type this command:

PM> Install-Package Microsoft.ProjectOxford.Face -Version 1.3.0


If you run into trouble installing the package, go here for help.

       2. Obtain a subscription key
You can get Face API subscription keys from here, which describes the Face API subscription in detail and how it works. For the direct link: click here.
Usually, once you log in with one of the options, you will get two keys and an HTTP endpoint which you can use to invoke the Face API calls.


The code is not much different from the one provided in the documentation. The only things I've changed are the subscription key, replaced with one of my keys, and the uriBase, replaced with my endpoint.


using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;

namespace CSHttpClientSample
{
    static class Program
    {
        // **********************************************
        // *** Update or verify the following values. ***
        // **********************************************

        // Replace the subscriptionKey string value with your valid subscription key.
        const string subscriptionKey = "13hc77781f7e4b19b5fcdd72a8df7156";

        // Replace or verify the region.
        //
        // You must use the same region in your REST API call as you used to obtain your subscription keys.
        // For example, if you obtained your subscription keys from the westus region, replace 
        // "westcentralus" in the URI below with "westus".
        //
        // NOTE: Free trial subscription keys are generated in the westcentralus region, so if you are using
        // a free trial subscription key, you should not need to change this region.
        const string uriBase = "https://westcentralus.api.cognitive.microsoft.com/face/v1.0/detect";


        static void Main()
        {
            // Get the path and filename to process from the user.
            Console.WriteLine("Detect faces:");
            Console.Write("Enter the path to an image with faces that you wish to analyze: ");
            string imageFilePath = Console.ReadLine();

            // Execute the REST API call.
            MakeAnalysisRequest(imageFilePath);

            Console.WriteLine("\nPlease wait a moment for the results to appear. Then, press Enter to exit...\n");
            Console.ReadLine();
        }


        /// <summary>
        /// Gets the analysis of the specified image file by using the Face REST API.
        /// </summary>
        /// <param name="imageFilePath">The image file.</param>
        static async void MakeAnalysisRequest(string imageFilePath)
        {
            HttpClient client = new HttpClient();

            // Request headers.
            client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", subscriptionKey);

            // Request parameters. A third optional parameter is "details".
            string requestParameters = "returnFaceId=true&returnFaceLandmarks=false&returnFaceAttributes=age,gender,headPose,smile,facialHair,glasses,emotion,hair,makeup,occlusion,accessories,blur,exposure,noise";

            // Assemble the URI for the REST API Call.
            string uri = uriBase + "?" + requestParameters;

            HttpResponseMessage response;

            // Request body. Posts a locally stored JPEG image.
            byte[] byteData = GetImageAsByteArray(imageFilePath);

            using (ByteArrayContent content = new ByteArrayContent(byteData))
            {
                // This example uses content type "application/octet-stream".
                // The other content types you can use are "application/json" and "multipart/form-data".
                content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");

                // Execute the REST API call.
                response = await client.PostAsync(uri, content);

                // Get the JSON response.
                string contentString = await response.Content.ReadAsStringAsync();

                // Display the JSON response.
                Console.WriteLine("\nResponse:\n");
                Console.WriteLine(JsonPrettyPrint(contentString));
            }
        }


        /// <summary>
        /// Returns the contents of the specified file as a byte array.
        /// </summary>
        /// <param name="imageFilePath">The image file to read.</param>
        /// <returns>The byte array of the image data.</returns>
        static byte[] GetImageAsByteArray(string imageFilePath)
        {
            // Dispose the stream and reader once the bytes are read.
            using (FileStream fileStream = new FileStream(imageFilePath, FileMode.Open, FileAccess.Read))
            using (BinaryReader binaryReader = new BinaryReader(fileStream))
            {
                return binaryReader.ReadBytes((int)fileStream.Length);
            }
        }


        /// <summary>
        /// Formats the given JSON string by adding line breaks and indents.
        /// </summary>
        /// <param name="json">The raw JSON string to format.</param>
        /// <returns>The formatted JSON string.</returns>
        static string JsonPrettyPrint(string json)
        {
            if (string.IsNullOrEmpty(json))
                return string.Empty;

            json = json.Replace(Environment.NewLine, "").Replace("\t", "");

            StringBuilder sb = new StringBuilder();
            bool quote = false;
            bool ignore = false;
            int offset = 0;
            int indentLength = 3;

            foreach (char ch in json)
            {
                switch (ch)
                {
                    case '"':
                        if (!ignore) quote = !quote;
                        break;
                    case '\'':
                        if (quote) ignore = !ignore;
                        break;
                }

                if (quote)
                    sb.Append(ch);
                else
                {
                    switch (ch)
                    {
                        case '{':
                        case '[':
                            sb.Append(ch);
                            sb.Append(Environment.NewLine);
                            sb.Append(new string(' ', ++offset * indentLength));
                            break;
                        case '}':
                        case ']':
                            sb.Append(Environment.NewLine);
                            sb.Append(new string(' ', --offset * indentLength));
                            sb.Append(ch);
                            break;
                        case ',':
                            sb.Append(ch);
                            sb.Append(Environment.NewLine);
                            sb.Append(new string(' ', offset * indentLength));
                            break;
                        case ':':
                            sb.Append(ch);
                            sb.Append(' ');
                            break;
                        default:
                            if (ch != ' ') sb.Append(ch);
                            break;
                    }
                }
            }

            return sb.ToString().Trim();
        }
    }
}




Once you build and run the code, the command console will prompt you to provide the file location of an image on your machine to analyze. I tried it with four different photos I downloaded, including one of myself, and checked the results against each input image.



(Input photos: Image 1, Image 2, Image 3, Image 4)




[
   {
      "faceId": "195a734f-56c3-4c6a-a97a-31ab60075eed",
      "faceRectangle": {
         "top": 125,
         "left": 76,
         "width": 200,
         "height": 200
      },
      "faceAttributes": {
         "smile": 0.531,
         "headPose": {
            "pitch": 0.0,
            "roll": 3.2,
            "yaw": -9.9
         },
         "gender": "male",
         "age": 31.7,
         "facialHair": {
            "moustache": 0.6,
            "beard": 0.5,
            "sideburns": 0.6
         },
         "glasses": "NoGlasses",
         "emotion": {
            "anger": 0.0,
            "contempt": 0.145,
            "disgust": 0.0,
            "fear": 0.0,
            "happiness": 0.531,
            "neutral": 0.321,
            "sadness": 0.003,
            "surprise": 0.0
         },
         "blur": {
            "blurLevel": "low",
            "value": 0.0
         },
         "exposure": {
            "exposureLevel": "overExposure",
            "value": 0.96
         },
         "noise": {
            "noiseLevel": "low",
            "value": 0.0
         },
         "makeup": {
            "eyeMakeup": true,
            "lipMakeup": false
         },
         "accessories": [

         ],
         "occlusion": {
            "foreheadOccluded": false,
            "eyeOccluded": false,
            "mouthOccluded": false
         },
         "hair": {
            "bald": 0.04,
            "invisible": false,
            "hairColor": [
               {
                  "color": "black",
                  "confidence": 1.0
               },
               {
                  "color": "other",
                  "confidence": 0.64
               },
               {
                  "color": "brown",
                  "confidence": 0.21
               },
               {
                  "color": "red",
                  "confidence": 0.11
               },
               {
                  "color": "gray",
                  "confidence": 0.11
               },
               {
                  "color": "blond",
                  "confidence": 0.03
               }
            ]
         }
      }
   }
]
[
   {
      "faceId": "cb1a411b-f79e-4836-8098-5abc68bfad4f",
      "faceRectangle": {
         "top": 107,
         "left": 213,
         "width": 300,
         "height": 300
      },
      "faceAttributes": {
         "smile": 0.0,
         "headPose": {
            "pitch": 0.0,
            "roll": -1.0,
            "yaw": 1.4
         },
         "gender": "male",
         "age": 46.4,
         "facialHair": {
            "moustache": 0.5,
            "beard": 0.6,
            "sideburns": 0.5
         },
         "glasses": "NoGlasses",
         "emotion": {
            "anger": 1.0,
            "contempt": 0.0,
            "disgust": 0.0,
            "fear": 0.0,
            "happiness": 0.0,
            "neutral": 0.0,
            "sadness": 0.0,
            "surprise": 0.0
         },
         "blur": {
            "blurLevel": "low",
            "value": 0.0
         },
         "exposure": {
            "exposureLevel": "goodExposure",
            "value": 0.54
         },
         "noise": {
            "noiseLevel": "medium",
            "value": 0.45
         },
         "makeup": {
            "eyeMakeup": false,
            "lipMakeup": false
         },
         "accessories": [

         ],
         "occlusion": {
            "foreheadOccluded": false,
            "eyeOccluded": false,
            "mouthOccluded": false
         },
         "hair": {
            "bald": 0.01,
            "invisible": false,
            "hairColor": [
               {
                  "color": "brown",
                  "confidence": 0.99
               },
               {
                  "color": "black",
                  "confidence": 0.99
               },
               {
                  "color": "other",
                  "confidence": 0.46
               },
               {
                  "color": "gray",
                  "confidence": 0.17
               },
               {
                  "color": "red",
                  "confidence": 0.03
               },
               {
                  "color": "blond",
                  "confidence": 0.02
               }
            ]
         }
      }
   }
]
[
   {
      "faceId": "2cfb5ecd-9f95-4141-9550-75888360c861",
      "faceRectangle": {
         "top": 197,
         "left": 290,
         "width": 400,
         "height": 400
      },
      "faceAttributes": {
         "smile": 0.0,
         "headPose": {
            "pitch": 0.0,
            "roll": -2.8,
            "yaw": -4.4
         },
         "gender": "female",
         "age": 30.5,
         "facialHair": {
            "moustache": 0.0,
            "beard": 0.0,
            "sideburns": 0.0
         },
         "glasses": "NoGlasses",
         "emotion": {
            "anger": 0.0,
            "contempt": 0.0,
            "disgust": 0.0,
            "fear": 0.0,
            "happiness": 0.0,
            "neutral": 0.0,
            "sadness": 0.0,
            "surprise": 1.0
         },
         "blur": {
            "blurLevel": "low",
            "value": 0.0
         },
         "exposure": {
            "exposureLevel": "goodExposure",
            "value": 0.66
         },
         "noise": {
            "noiseLevel": "low",
            "value": 0.0
         },
         "makeup": {
            "eyeMakeup": true,
            "lipMakeup": false
         },
         "accessories": [

         ],
         "occlusion": {
            "foreheadOccluded": false,
            "eyeOccluded": false,
            "mouthOccluded": true
         },
         "hair": {
            "bald": 0.01,
            "invisible": false,
            "hairColor": [
               {
                  "color": "brown",
                  "confidence": 0.98
               },
               {
                  "color": "blond",
                  "confidence": 0.65
               },
               {
                  "color": "other",
                  "confidence": 0.5
               },
               {
                  "color": "gray",
                  "confidence": 0.39
               },
               {
                  "color": "black",
                  "confidence": 0.22
               },
               {
                  "color": "red",
                  "confidence": 0.05
               }
            ]
         }
      }
   }
]
[
   {
      "faceId": "f9da6ee3-aaf4-440d-8d1a-751298978d8e",
      "faceRectangle": {
         "top": 129,
         "left": 311,
         "width": 222,
         "height": 222
      },
      "faceAttributes": {
         "smile": 1.0,
         "headPose": {
            "pitch": 0.0,
            "roll": 5.7,
            "yaw": 2.5
         },
         "gender": "female",
         "age": 28.3,
         "facialHair": {
            "moustache": 0.0,
            "beard": 0.0,
            "sideburns": 0.0
         },
         "glasses": "ReadingGlasses",
         "emotion": {
            "anger": 0.0,
            "contempt": 0.0,
            "disgust": 0.0,
            "fear": 0.0,
            "happiness": 1.0,
            "neutral": 0.0,
            "sadness": 0.0,
            "surprise": 0.0
         },
         "blur": {
            "blurLevel": "low",
            "value": 0.0
         },
         "exposure": {
            "exposureLevel": "goodExposure",
            "value": 0.58
         },
         "noise": {
            "noiseLevel": "low",
            "value": 0.0
         },
         "makeup": {
            "eyeMakeup": true,
            "lipMakeup": true
         },
         "accessories": [
            {
               "type": "glasses",
               "confidence": 0.99
            }
         ],
         "occlusion": {
            "foreheadOccluded": false,
            "eyeOccluded": false,
            "mouthOccluded": false
         },
         "hair": {
            "bald": 0.12,
            "invisible": false,
            "hairColor": [
               {
                  "color": "black",
                  "confidence": 0.95
               },
               {
                  "color": "brown",
                  "confidence": 0.86
               },
               {
                  "color": "other",
                  "confidence": 0.62
               },
               {
                  "color": "red",
                  "confidence": 0.23
               },
               {
                  "color": "blond",
                  "confidence": 0.15
               },
               {
                  "color": "gray",
                  "confidence": 0.09
               }
            ]
         }
      }
   }
]

The API detected the facial expression and the gender properly in my image. In the second image, it accurately detects Hugh's emotion as anger. Similarly, in the third image it detects the emotion as surprise, along with the other optional attributes. In the fourth image, it detects that she is wearing glasses and that she is happy.

This is just the beginning with the Face API, but there are a lot of hidden gems we can work with to build more advanced applications, especially grouping people and accurately identifying individuals within a group photo. That doesn't mean the code we started with cannot be applied to real-world scenarios.

Yes, it can. There are a bunch of applications possible even with this code. For example: identifying whether employees, or students in a class, are happy or not; tracking a person's behavior over time; and measuring customer satisfaction in a bank are just a few of them 😊
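For applications like those, a small helper can pick the dominant emotion out of a detect response such as the ones shown above. A minimal sketch, using System.Text.Json (built into .NET Core 3.0 and later; on .NET Framework, Newtonsoft.Json could be used similarly):

```csharp
using System;
using System.Linq;
using System.Text.Json;

class DominantEmotion
{
    // Returns the highest-scoring emotion of the first face in a detect response.
    public static string FromDetectJson(string json)
    {
        using (JsonDocument doc = JsonDocument.Parse(json))
        {
            JsonElement emotion = doc.RootElement[0]
                .GetProperty("faceAttributes")
                .GetProperty("emotion");

            // Scan the emotion scores and keep the name with the largest value.
            return emotion.EnumerateObject()
                          .OrderByDescending(p => p.Value.GetDouble())
                          .First()
                          .Name;
        }
    }

    static void Main()
    {
        // A trimmed-down response in the same shape as the samples above.
        string json = "[{\"faceAttributes\":{\"emotion\":" +
                      "{\"anger\":0.0,\"happiness\":0.531,\"neutral\":0.321,\"sadness\":0.003}}}]";
        Console.WriteLine(FromDetectJson(json));   // happiness
    }
}
```

Feeding the contentString from MakeAnalysisRequest into a helper like this, instead of just pretty-printing it, is the obvious first step toward those scenarios.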

Reference: MSDN

