Sunday, March 10, 2019

How does Facial Recognition System Work? Explained - SingleWindowTech


Facial Recognition System - Definition

In recent years, face recognition has attracted much attention, and its research has rapidly expanded among not only engineers but also neuroscientists, since it has many potential applications in computer vision, communication, and automatic access control systems.
In particular, face detection is an important first step of automatic face recognition. Face detection is not straightforward, however, because image appearance varies widely with pose (frontal, non-frontal), occlusion, image orientation, illumination, and facial expression.


Face detection is one of the visual tasks that humans can do effortlessly. In computer vision terms, however, this task is not easy. A general statement of the problem can be defined as follows: given a still or video image, detect and localize an unknown number (if any) of faces. The solution involves segmentation, extraction, and verification of faces, and possibly facial features, from an uncontrolled background. As a visual front-end processor, a face detection system should also be able to achieve the task regardless of illumination, orientation, and camera distance.


Facial Recognition System - History

Many would say that the father of facial recognition was Woodrow Wilson Bledsoe. Working in the 1960s, Bledsoe developed a system that could classify photos of faces by hand using what’s known as a RAND tablet, a device that people could use to input horizontal and vertical coordinates on a grid using a stylus that emitted electromagnetic pulses. The system could be used to manually record the coordinate locations of various facial features including the eyes, nose, hairline and mouth.
  • 1988:  Sirovich and Kirby began applying linear algebra to the problem of facial recognition. Sirovich and Kirby were able to show that feature analysis on a collection of facial images could form a set of basic features. They were also able to show that fewer than one hundred values were needed to accurately code a normalized face image.
  • 1991:  Turk and Pentland expanded upon the Eigenface approach by discovering how to detect faces within images. This led to the first instances of automatic face recognition. Their approach was constrained by technological and environmental factors, but it was a significant breakthrough in proving the feasibility of automatic facial recognition.
  • The Defense Advanced Research Projects Agency (DARPA) and the National Institute of Standards and Technology rolled out the Face Recognition Technology (FERET) program beginning in the 1990s in order to encourage the commercial face recognition market. The project involved creating a database of facial images. The database was updated in 2003 to include high-resolution 24-bit color versions of images. 
  • 2001: At Super Bowl XXXV, law enforcement officials used facial recognition in a major test of the technology. While officials reported that several “petty criminals” were detected, overall the test was seen as a failure. False positives and backlash from critics proved that face recognition wasn’t quite ready for prime time.

How Does Facial Recognition System Work?

Every face has numerous distinguishable landmarks, the different peaks and valleys that make up facial features. FaceIt defines these landmarks as nodal points. Each human face has approximately 80 nodal points. Some of the measurements the software takes are:
  • Distance between eyes
  • Width of Nose
  • Depth of eye sockets
  • Shape of cheekbones
  • Length of Jaw Line
These nodal points are measured to create a numerical code, called a faceprint, that represents the face in the database.
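The measure-and-pack idea above can be sketched in a few lines. The landmark coordinates, the chosen measurements, and the normalization below are all illustrative assumptions, not the actual FaceIt algorithm:

```python
import math

# Hypothetical nodal-point coordinates (x, y), as a detector might report them.
LANDMARKS = {
    "left_eye":  (30.0, 40.0),
    "right_eye": (70.0, 40.0),
    "nose_tip":  (50.0, 60.0),
    "jaw_left":  (20.0, 90.0),
    "jaw_right": (80.0, 90.0),
}

def dist(a, b):
    """Euclidean distance between two landmark points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def faceprint(lm):
    """Pack a few nodal-point measurements into a numeric vector.

    Real systems measure ~80 nodal points; this sketch uses three
    stand-ins (eye distance, jaw width, an eye-to-nose distance).
    All measurements are divided by the eye distance so the code
    does not change when the face is photographed at another scale.
    """
    eye_dist = dist(lm["left_eye"], lm["right_eye"])
    return [
        eye_dist / eye_dist,                              # always 1.0 (reference)
        dist(lm["jaw_left"], lm["jaw_right"]) / eye_dist,  # jaw width
        dist(lm["nose_tip"], lm["left_eye"]) / eye_dist,   # eye-to-nose
    ]

print(faceprint(LANDMARKS))
```

Two faceprints built this way can then be compared with an ordinary vector distance.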

In the past, facial recognition software has relied on a 2D image to compare or identify another 2D image from the database. To be effective and accurate, the image captured needed to be of a face that was looking almost directly at the camera, with little variance of light or facial expression from the image in the database. This created quite a problem.

In most instances the images were not taken in a controlled environment. Even the smallest changes in light or orientation could reduce the effectiveness of the system, so captured images often could not be matched to any face in the database, leading to a high rate of failure.


Functions:

The main function of this step is to determine:
  1. Whether human faces appear in a given image
  2. Where these faces are located

The expected outputs of this step are patches containing each face in the input image. To make the subsequent face recognition system more robust and easier to design, face alignment is performed to normalize the scales and orientations of these patches. Besides serving as pre-processing for face recognition, face detection can be used for region-of-interest detection, retargeting, video and image classification, etc.
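The detection step above can be sketched as a sliding-window scan: slide a fixed-size window across the image and keep the windows a classifier accepts as faces. Real detectors use trained classifiers (e.g. boosted cascades or neural networks); the `is_face` rule below is a deliberately crude stand-in:

```python
# Toy sliding-window face detector. The classifier is a stand-in:
# a real system learns its decision rule from labeled face patches.

def is_face(patch):
    # Stand-in classifier: accept a patch whose mean intensity falls
    # in a mid-range band. Illustrative only.
    mean = sum(sum(row) for row in patch) / (len(patch) * len(patch[0]))
    return 80 <= mean <= 180

def detect_faces(image, win=2, stride=1):
    """Return (row, col) top-left corners of windows classified as faces."""
    hits = []
    for r in range(0, len(image) - win + 1, stride):
        for c in range(0, len(image[0]) - win + 1, stride):
            patch = [row[c:c + win] for row in image[r:r + win]]
            if is_face(patch):
                hits.append((r, c))
    return hits

# 4x4 grayscale image: a bright 2x2 block top-left, dark elsewhere.
img = [
    [120, 120, 10, 10],
    [120, 120, 10, 10],
    [ 10,  10, 10, 10],
    [ 10,  10, 10, 10],
]
print(detect_faces(img))  # -> [(0, 0)]
```

Production systems additionally rescan the image at multiple scales and merge overlapping hits, which this sketch omits.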

Feature Extraction:

After the face detection step, human-face patches are extracted from images. Directly using these patches for face recognition has several disadvantages. First, each patch usually contains over 1,000 pixels, too many to build a robust recognition system on directly. Second, face patches may be taken from different camera alignments, with different facial expressions and illumination, and may suffer from occlusion and clutter. To overcome these drawbacks, feature extraction is performed for information packing, dimension reduction, salience extraction, and noise cleaning. After this step, a face patch is usually transformed into a vector with fixed dimension, or into a set of fiducial points and their corresponding locations.
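One of the simplest ways to illustrate the dimension-reduction idea above is average pooling: summarize each small block of pixels by its mean, turning thousands of pixels into a short fixed-length vector. The block size and sample patch are illustrative; real systems use far richer features (e.g. eigenfaces or learned embeddings):

```python
# Sketch of feature extraction as dimension reduction: mean-pool a
# face patch over coarse blocks to get a short, fixed-length vector.

def extract_features(patch, block=2):
    """Mean-pool `patch` over block x block cells -> flat feature vector."""
    h, w = len(patch), len(patch[0])
    feats = []
    for r in range(0, h, block):
        for c in range(0, w, block):
            cell = [patch[i][j]
                    for i in range(r, min(r + block, h))
                    for j in range(c, min(c + block, w))]
            feats.append(sum(cell) / len(cell))
    return feats

patch = [
    [10, 12, 200, 202],
    [14, 16, 198, 200],
    [50, 52,  90,  92],
    [54, 56,  94,  96],
]
print(extract_features(patch))  # 16 pixels -> 4 features
```

Averaging also suppresses pixel-level noise, which is one reason pooling doubles as the "noise cleaning" mentioned above.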
  • In order to achieve automatic recognition, a face database must be built. For each person, several images are taken and their features are extracted and stored in the database. When an input face image comes in, we perform face detection and feature extraction, and compare its features to each face class stored in the database.
  • There are two general applications of face recognition: identification and verification. In face identification, given a face image, the system tells who the person is (or the most probable identity); in face verification, given a face image and a claimed identity, the system tells whether the claim is true or false.
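The identification/verification distinction above can be made concrete with feature vectors. The database entries, probe vector, and acceptance threshold below are illustrative assumptions:

```python
import math

# Hypothetical enrolled database: one feature vector per person.
DB = {
    "alice": [0.9, 0.2, 0.4],
    "bob":   [0.1, 0.8, 0.5],
}

def distance(u, v):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def identify(query):
    """Identification: return the most probable identity in the database."""
    return min(DB, key=lambda name: distance(query, DB[name]))

def verify(query, claimed, threshold=0.3):
    """Verification: accept or reject a claimed identity."""
    return distance(query, DB[claimed]) <= threshold

probe = [0.85, 0.25, 0.38]     # features extracted from an input face
print(identify(probe))          # -> alice
print(verify(probe, "alice"))   # -> True
print(verify(probe, "bob"))     # -> False
```

Note the asymmetry: identification compares against every enrolled class, while verification makes a single thresholded comparison, which is why verification is cheaper and generally more accurate.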

Design Issues:

When designing a face detection and facial recognition system, in addition to considering the aspects from psychophysics and neuroscience and the factors of human appearance variations, there are still some design issues to be taken into account.
  • First, the execution speed of the system determines whether it can serve online and handle large amounts of data. Some earlier methods could accurately detect human faces and determine their identities using complicated algorithms, but required anywhere from a few seconds to a few minutes per input image, and so could not be used in practical applications.
  • Second, the training data size is another important issue in algorithm design. Clearly, the more data included, the more information we can exploit and the better the performance we can achieve. In practice, however, the database size is usually limited by the difficulty of data acquisition and by privacy concerns. Under limited data, the algorithm should not only capture information from the training data but also incorporate prior knowledge, or try to predict and interpolate the missing and unseen data. Finally, how to bring these algorithms into uncontrolled conditions remains an unsolved problem.


Facial Recognition System - Applications:

  1. Payment: It doesn’t take a genius to work out why businesses want payments to be easy. Online shopping and contactless cards are just two examples that demonstrate the seamlessness of postmodern purchases. With FaceTech, however, customers wouldn’t even need their cards. In 2016, MasterCard launched a new selfie pay app called MasterCard Identity Check. Customers open the app to confirm a payment using their camera, and that’s that.
  2. Access & Security: As well as verifying a payment, facial biometrics can be integrated with physical devices and objects. Instead of using passcodes, mobile phones and other consumer electronics will be accessed via owners’ facial features. Apple, Samsung and Xiaomi Corp. have all installed FaceTech in their phones.
  3. Criminal Identification: If FaceTech can be used to keep unauthorised people out of facilities, it can surely also be used to help put offenders firmly inside them. This is exactly what the US Federal Bureau of Investigation is attempting to do by using a machine learning algorithm to identify suspects from their driver’s licenses.
  4. Advertisement: The ability to collect and collate masses of personal data has given marketers and advertisers the chance to get closer than ever to their target markets. FaceTech could do much the same, by allowing companies to recognise certain demographics – for instance, if the customer is a male between the ages of 12 and 21, the screen might show an ad for the latest FIFA game.
  5. Healthcare: Instead of recognising an individual via FaceTech, medical professionals could identify illnesses by looking at a patient’s features. This would alleviate the ongoing strain on medical centres by slashing waiting lists and streamlining the appointment process.
  6. Find Missing Persons: Face recognition can be used to find missing children and victims of human trafficking. As long as missing individuals are added to a database, law enforcement can become alerted as soon as they are recognized by face recognition—be it an airport, retail store or other public space.
  7. School Threats: Face recognition surveillance systems can instantly identify when expelled students, dangerous parents, drug dealers or other individuals that pose a threat to school safety enter school grounds.
  8. Validating Identity: It seems likely that face scans will eventually replace ATM cards completely. But in the meantime, face recognition can be used to make sure that individuals using ATM cards are who they say they are.
With a predicted worth of $15 billion by 2025, biometrics is an industry worth watching. It’s clear that facial biometrics are a helpful tool for finance, law enforcement, advertising and healthcare, as well as a solution to hacking and identity theft.
 

Apple Face Recognition - The Face ID


This article would not be complete without discussing Apple's facial recognition system, known as Face ID.

Face ID is a facial recognition system designed and developed by Apple Inc. for the iPhone and iPad Pro. The system allows biometric authentication for unlocking a device, making payments, and accessing sensitive data, as well as providing detailed facial expression tracking for Animoji and other features. 

 
Initially released in November 2017 with the iPhone X, it has since been updated and introduced to all new iPhone and iPad Pro models.

The Face ID hardware consists of a sensor with two modules; one projects a grid of small infrared dots onto a user's face, and another module reads the resulting pattern and generates a 3D facial map. This map is compared with the registered face using a secure subsystem, and the user is authenticated if the two faces match sufficiently. 

The system can recognize faces with glasses, clothing, makeup, and facial hair, and adapts to changes in appearance over time.
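The match-and-adapt loop described above can be sketched as follows: compare the captured 3D map to the enrolled template, authenticate if they are sufficiently close, and slowly blend successful captures into the template so the system tracks gradual appearance changes. The depth values, threshold, and blending rule are illustrative assumptions, not Apple's actual algorithm:

```python
# Toy match-and-adapt loop for a depth-map template. All numbers are
# illustrative; Face ID's real matching runs in a secure subsystem.

def mean_abs_diff(a, b):
    """Average absolute difference between two equal-length depth maps."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def authenticate(template, capture, threshold=0.05, blend=0.1):
    """Return (accepted, updated_template).

    On success, blend the new capture into the template so the
    enrolled face adapts to gradual changes in appearance.
    """
    if mean_abs_diff(template, capture) <= threshold:
        updated = [(1 - blend) * t + blend * c
                   for t, c in zip(template, capture)]
        return True, updated
    return False, template

enrolled = [0.50, 0.42, 0.61, 0.55]            # hypothetical enrolled map
ok, enrolled = authenticate(enrolled, [0.51, 0.41, 0.60, 0.56])
print(ok)   # close match -> True
ok2, _ = authenticate(enrolled, [0.90, 0.10, 0.20, 0.95])
print(ok2)  # very different face -> False
```

Blending only on successful matches is what lets the template drift with slow changes (a growing beard) while a sudden, large mismatch (a different person) is still rejected.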
