Ever wondered why the tag suggestions on Facebook are so accurate?
Well, it is due to one of the most efficient face detection tools in use today. Your favorite social media giant Facebook recognizes human faces with a curiosity-raising machine learning detector called ‘DeepFace’, developed by the Facebook AI Research group in California.
The algorithm that runs DeepFace
The DeepFace tool is built on a face verification algorithm that uses artificial intelligence (AI) techniques, specifically neural network models. A neural network is a collection of trained artificial neurons that analyze data in a way loosely inspired by the human brain. These techniques try to mimic the abilities of human nerve cells in order to make machines intelligent.
Let’s understand it better.
- The input
- The process
- The output
- Training data
- The result
Researchers scanned photos ‘in the wild’ (low-quality, unedited photos) containing large amounts of complex data, such as body parts, clothes, hairstyles, etc. This helped the tool reach a high degree of accuracy. The tool detects faces on the basis of human facial features (eyebrows, nose, lips, etc.)
In modern face recognition, the process completes in 4 basic steps:
- Detect the face in the image
- Align the face to a standard pose
- Represent the face as a feature vector
- Classify or verify the identity
As Facebook utilizes an advanced version of this approach, its steps are a bit more mature and elaborate. By adding a 3D transformation and a piecewise affine transformation to the procedure, the algorithm delivers more accurate results. For details, refer to the ‘Algorithm’s steps’ section of this article. You’ll find it after a bit of scrolling.
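The four-stage pipeline described above can be sketched in code. Everything here is an illustrative stub invented for this article, not Facebook’s actual implementation:

```python
# A minimal sketch of the four-stage face recognition pipeline:
# detect -> align -> represent -> verify. All functions are toy stubs.

def detect(image):
    # Stage 1: find the face region in the image (here: the whole image).
    return {"box": (0, 0, 152, 152), "pixels": image}

def align(face):
    # Stage 2: warp the face to a canonical frontal pose.
    # (DeepFace uses a 3D model and piecewise affine warping here.)
    return face["pixels"]

def represent(aligned_face):
    # Stage 3: run a network to get a fixed-length feature vector.
    # Here we just fabricate a toy 4-dimensional embedding.
    return [sum(aligned_face) % 1.0] * 4

def verify(emb_a, emb_b, threshold=0.6):
    # Stage 4: decide whether two embeddings belong to the same person
    # by thresholding their Euclidean distance.
    dist = sum((a - b) ** 2 for a, b in zip(emb_a, emb_b)) ** 0.5
    return dist < threshold

img1 = [0.1, 0.5, 0.9]
img2 = [0.1, 0.5, 0.9]
same = verify(represent(align(detect(img1))),
              represent(align(detect(img2))))
print(same)  # two identical toy images verify as the same person
```

The point of the sketch is the structure: each stage feeds the next, and only the final stage makes a same-or-different decision.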
The final result is a face representation derived from a 9-layer deep neural network. This network has more than 120 million parameters, many of them in locally connected layers. In contrast to standard convolutional layers, these layers do not share weights across locations.
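The difference between weight sharing and locally connected layers can be shown with a toy 1-D example. The numbers and kernels below are made up for illustration and have nothing to do with DeepFace’s real weights:

```python
# Toy 1-D illustration: convolution (shared weights) vs. a locally
# connected layer (a separate kernel per position).

inputs = [1.0, 2.0, 3.0, 4.0]

# Convolutional layer: ONE kernel reused at every position.
kernel = [0.5, -0.5]
conv_out = [kernel[0] * inputs[i] + kernel[1] * inputs[i + 1]
            for i in range(len(inputs) - 1)]

# Locally connected layer: a DIFFERENT kernel at every position,
# which is why such layers need far more parameters.
local_kernels = [[0.5, -0.5], [1.0, 0.0], [0.0, 1.0]]
local_out = [k[0] * inputs[i] + k[1] * inputs[i + 1]
             for i, k in enumerate(local_kernels)]

print(conv_out)   # [-0.5, -0.5, -0.5]
print(local_out)  # [-0.5, 2.0, 4.0]
```

With three positions, the convolution uses 2 weights in total while the locally connected layer uses 6; on full-size face images this gap is what pushes the parameter count past 120 million.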
Any AI or deep learning system needs enough training data so that it can ‘learn’. With a vast user base, Facebook has plenty of images to experiment with. The team used more than 4 million facial images of more than 4,000 people for this purpose. After training, the algorithm can recognize faces at near-human accuracy levels.
Facebook can detect whether two images represent the same person or not, irrespective of lighting, camera angle, and facial make-up. Surprisingly, the algorithm achieves 97.35 percent accuracy, almost equal to the human-level accuracy of 97.53 percent.
Algorithm’s steps: How does Facebook’s face detection algorithm work?
The advanced DeepFace algorithm works as follows:
- Storing the images after upload
- Detecting facial features with a neural network
- Recursive checking and 68-landmark testing
- Encoding and mapping
When we upload images to the system, the DeepFace tool scans each image for a human face. Even if you upload two photos of the same person taken from different angles, they look the same to a human but not to a computer: a naive algorithm would treat them as two different people.
Then the ‘matching’ process starts. Face identification is carried out by sending a signal through the synapses of the nine-layer deep neural network, with its roughly 120 million parameters, which has been trained on a wide collection of human faces.
On receiving the signal, the artificial neurons pass activations along their connections. Each chain of connections between neurons can be thought of as a route; these routes pass the signal ahead, identifying the image step by step based on its facial features.
The network matches the scanned data against all existing images, passing the signal in one direction if the image already exists or matches a known face, and in another direction if it does not match any face.
In the first case, the signal continues along the route of the existing face and, after landmark testing, the image is stored in the existing album. In the second case, the signal diverts its route based on the most likely connections, for example whether the image has big eyes or small eyes.
Suppose the image has big eyes: the signal travels towards all big-eyed images. It can then identify whether the eyes are black or brown. Suppose they are black; further distinctions follow, such as the distance between the eyes, the color of the eyebrows, and the shape of the lips and nose.
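The “routing by features” idea above amounts to repeatedly narrowing the set of candidate matches. A toy sketch, with an invented attribute database purely for illustration:

```python
# Toy sketch of routing by facial features: each attribute check
# narrows the candidate set, like a signal choosing a branch.
# The attributes and database are invented for illustration.

database = [
    {"name": "A", "eye_size": "big",   "eye_color": "black"},
    {"name": "B", "eye_size": "big",   "eye_color": "brown"},
    {"name": "C", "eye_size": "small", "eye_color": "black"},
]

def route(query, candidates):
    # Keep only candidates that agree with the query on each feature,
    # mimicking how the signal "travels towards" matching faces.
    for feature in ("eye_size", "eye_color"):
        candidates = [c for c in candidates if c[feature] == query[feature]]
    return candidates

query = {"eye_size": "big", "eye_color": "black"}
matches = [c["name"] for c in route(query, database)]
print(matches)  # ['A']
```

A real network does not branch on hand-named attributes like these; the learned features play an analogous filtering role.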
By following these recurring identification routes, the DeepFace tool performs 68-landmark testing: every human face has 68 specific facial points that together form a pattern.
The tool checks this pattern by matching the test image against other images of the same person and against images of new people. It then encodes the correct image. After encoding, it looks up information such as the name associated with that encoding and saves the image in the corresponding album.
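Landmark-pattern matching can be illustrated by comparing two sets of 68 points. The coordinates below are synthetic; real systems obtain them from a landmark detector (dlib’s 68-point shape predictor is a well-known example):

```python
# Compare two synthetic 68-point landmark patterns: a jittered copy
# of the same face should be closer than an unrelated face.

import random

random.seed(0)
face_a = [(random.random(), random.random()) for _ in range(68)]
# A slightly jittered copy of the same face.
face_b = [(x + 0.01, y - 0.01) for x, y in face_a]
# An unrelated face.
face_c = [(random.random(), random.random()) for _ in range(68)]

def pattern_distance(p, q):
    # Mean Euclidean distance between corresponding landmarks.
    return sum(((px - qx) ** 2 + (py - qy) ** 2) ** 0.5
               for (px, py), (qx, qy) in zip(p, q)) / len(p)

same_face = pattern_distance(face_a, face_b)
diff_face = pattern_distance(face_a, face_c)
print(same_face < diff_face)  # the jittered copy is the closer match
```

In practice the landmark pattern feeds the alignment step, and the final same-or-different decision is made on the deep network’s embedding, not on raw landmark distances.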
Is it efficient?
Yes. This ML detector has amazing accuracy!
Face recognition – good or bad?
Setting aside the researchers’ success and our amusement, it is time to ask whether this is an ethical practice.
The major concern daunting users is data privacy. Face recognition is undoubtedly useful and could act as a revolutionary first step towards many innovative applications. But what makes people furious is this: how can a company read their faces and tell the world about their identity without their permission? Why is an organization so interested in knowing people by their faces, whether or not the person concerned is willing?
These questions are legitimate, and that is why Facebook isn’t openly celebrating the success of DeepFace. You may have heard its spokesperson describe it as just an academic project and not a typical Facebook product.
The current use of DeepFace will remain controversial, as most of us are not willing to reveal our identity to everyone on the internet. Still, there are a few other applications of face recognition that could be intriguing:
- In home security systems, to help security appliances decide whether a person is authorized to enter a building or home.
- In the investigation of crimes.
- To help blind or visually impaired people ‘see’ with artificial assistance.
- In robots, so that they can recognize their team members and distinguish among them.
- In academic research.
So the answer to ‘is face recognition good or bad?’ probably depends on your perspective on the technology. Tell us what you think of it!