With the introduction of Google Play Services 7.8, you can not only detect multiple faces, but also detect facial landmarks such as the eyes, nose, cheeks, and ears in just a few steps. You can also detect facial states, such as which eye is open or closed and whether the person is smiling.
Face detection can help us build some really smart applications.
Isn’t it nice to have such features in the native Android libraries? This API is powerful enough to detect and track faces even when they are at different angles.
In this blog post, you are going to learn how to detect (not track) multiple faces and their facial landmarks in an image.
Before moving further, we assume that you have:
1. Android Studio IDE
2. The latest Android SDK
3. Google Play Services SDK 7.8 or higher
4. A real Android device or a configured emulator in the IDE
Here we go…
Create an Android project and add the Play Services dependency to your build.gradle file:
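For reference, the dependency might look like the following (the version number is an assumption; use the latest available). Note that from version 7.8 you can depend on just the Vision module instead of the whole Play Services library:

```groovy
dependencies {
    // Granular Play Services dependency for the Vision (face detection) API
    compile 'com.google.android.gms:play-services-vision:7.8.0'
}
```

You also declare the face detection dependency in AndroidManifest.xml so the native library is downloaded ahead of first use:

```xml
<meta-data
    android:name="com.google.android.gms.vision.DEPENDENCIES"
    android:value="face" />
```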
Checking the detector.isOperational() status:
Since we have declared the face detection dependency in the manifest file, the native face detection library will be downloaded before it is used. Usually this is done by the installer before the app runs for the first time. On a first run, however, the device may not yet have the library ready, and you must handle that situation in your code. The detector automatically becomes operational once the library download has completed on the device. The detector's isOperational() method can be used to check whether the required native library is currently available:
```java
if (!detector.isOperational()) {
    Toast.makeText(this, "Face detection service is not ready", Toast.LENGTH_SHORT).show();
    return;
}
```
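For context, the detector checked above might be built along these lines. This is a sketch, not the only valid configuration: tracking is disabled because we detect faces in a still image, and the landmark and classification options are enabled so eyes, nose, cheeks, ears, and smiling/eye-open states are reported.

```java
// Sketch: building a FaceDetector for still-image detection.
// Tracking is disabled since we detect (not track) faces in a single image.
FaceDetector detector = new FaceDetector.Builder(getApplicationContext())
        .setTrackingEnabled(false)
        .setLandmarkType(FaceDetector.ALL_LANDMARKS)              // eyes, nose, cheeks, ears, mouth
        .setClassificationType(FaceDetector.ALL_CLASSIFICATIONS)  // eyes open, smiling
        .build();
```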
Detect all faces in the loaded image:
Each detected face is assigned a unique ID and is returned as an instance of the Face class. Since it's possible to detect multiple faces in one image, the result is returned in a SparseArray&lt;Face&gt; object.
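Putting it together, detection on an already-loaded Bitmap might look like the sketch below. The variable names (myBitmap, detector) are assumptions; the Bitmap is wrapped in a Frame, and the resulting SparseArray is walked with valueAt() since face IDs are not necessarily contiguous indices:

```java
// Sketch: run face detection on a loaded Bitmap (myBitmap is assumed).
Frame frame = new Frame.Builder().setBitmap(myBitmap).build();
SparseArray<Face> faces = detector.detect(frame);

for (int i = 0; i < faces.size(); i++) {
    Face face = faces.valueAt(i);  // access by index, not by face ID
    Log.d("FaceDetection", "Face id " + face.getId()
            + ", smiling probability: " + face.getIsSmilingProbability()
            + ", left eye open probability: " + face.getIsLeftEyeOpenProbability());
}

detector.release();  // free native detector resources when done
```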