With the introduction of Google Play Services 7.8, you can not only detect multiple faces but also detect facial features such as eyes, nose, cheeks, and ears, all in a few steps. You can even detect facial states: exactly which eye is open or closed, and whether the person is smiling or not.

Face detection can help us build some really smart applications.

Isn’t it nice to have such features in the native Android libraries? This API is powerful enough to detect and track faces even when they are at different angles.

In this blog post, you are going to learn how to detect (not track) multiple faces and their facial features in an image.

Before moving further, we assume that you have:

1. Android Studio IDE
2. The latest Android SDK
3. Google Play Services SDK 7.8 or higher
4. A real Android device, or an emulator configured in the IDE

An image before and after face detection. (Image credit: http://littledrun-k.tumblr.com/post/29367045597/)

Here we go…

Create an Android project and add the Play Services dependency to your build.gradle file:

compile 'com.google.android.gms:play-services-vision:7.8+'

Add the face detection API dependency to your AndroidManifest.xml file:

<meta-data
            android:name="com.google.android.gms.vision.DEPENDENCIES"
            android:value="face" />

This ensures that the library is available for face detection.

Load the image:

For the sake of simplicity, we load the image from the res/drawable folder:

BitmapFactory.Options opts = new BitmapFactory.Options();
opts.inMutable = true;
mImage = BitmapFactory
    .decodeResource(getResources(), R.drawable.bitmap, opts);
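
The image does not have to come from resources; BitmapFactory can decode from other sources as well. Below is a minimal sketch of loading from a file instead, assuming the path points to an existing image (the path is a placeholder, not part of this project):

BitmapFactory.Options fileOpts = new BitmapFactory.Options();
fileOpts.inMutable = true;
//Placeholder path; substitute the location of your own image file
Bitmap fromFile = BitmapFactory.decodeFile("/sdcard/Pictures/face.jpg", fileOpts);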


Initialize the FaceDetector and Frame objects:

FaceDetector detector = new FaceDetector.Builder(this)
    .setMode(FaceDetector.ACCURATE_MODE)
    .setLandmarkType(FaceDetector.ALL_LANDMARKS)
    .setClassificationType(FaceDetector.ALL_CLASSIFICATIONS)
    .setTrackingEnabled(false)
    .build();

//Add the image to a Frame object
Frame frame = new Frame.Builder().setBitmap(mImage).build();
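
A side note: if your source bitmap is rotated (a camera capture, for example), the Frame builder lets you record the rotation so the detector can compensate. A minimal sketch, assuming a 90-degree rotation:

Frame rotatedFrame = new Frame.Builder()
    .setBitmap(mImage)
    .setRotation(Frame.ROTATION_90) //also ROTATION_0, ROTATION_180, ROTATION_270
    .build();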


Check the detector.isOperational() status:
Since we declared the dependency in the manifest file, the native face detection library should be downloaded before it is first used; usually the installer takes care of this before the app runs for the first time. On a first run, however, the device may not have the library ready yet, and you must handle that situation in your code. The detector automatically becomes operational once the library download has completed on the device. The detector’s isOperational() method tells you whether the required native library is currently available:

if (!detector.isOperational()) {
    Toast.makeText(this,
        "Face detection service is not ready", Toast.LENGTH_SHORT).show();
    return;
}
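
If the detector stays non-operational, the usual cause is that the native library has not finished downloading. One concrete check, borrowed from Google’s Mobile Vision sample, is whether the device is low on storage, which blocks the download; a sketch of that check (optional for this demo):

if (!detector.isOperational()) {
    //ACTION_DEVICE_STORAGE_LOW is a sticky broadcast, so registering a null
    //receiver simply tells us whether the condition is currently active
    IntentFilter lowStorageFilter = new IntentFilter(Intent.ACTION_DEVICE_STORAGE_LOW);
    boolean hasLowStorage = registerReceiver(null, lowStorageFilter) != null;
    if (hasLowStorage) {
        Toast.makeText(this, "Low storage: the face library cannot download",
            Toast.LENGTH_LONG).show();
    }
}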


Detect all faces in the loaded image:
Each detected face is given a unique id and is returned as an object of the Face class. Since it’s possible to detect multiple faces in one image, the result is returned in a SparseArray object:

SparseArray<Face> faceArray = detector.detect(frame);
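
Note that detect() runs synchronously and can take a noticeable amount of time in ACCURATE_MODE, so for large images consider calling it off the UI thread. A minimal sketch, assuming detector and frame are final locals or fields (the original snippets do not show this):

new Thread(new Runnable() {
    @Override
    public void run() {
        //Keep the blocking detect() call off the UI thread
        SparseArray<Face> faces = detector.detect(frame);
        Log.d("FaceDetection", "Detected " + faces.size() + " face(s)");
    }
}).start();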


Write utility methods for drawing around faces:

    //This method draws a rectangle
    private void drawRectangle(Canvas canvas, PointF point, float width, 
            float height){
        Paint paint = new Paint();
        paint.setColor(Color.RED);
        paint.setStrokeWidth(5);
        paint.setStyle(Paint.Style.STROKE);

        float x1 = point.x;
        float y1 = point.y;
        float x2 = x1 + width;
        float y2 = y1 + height;

        RectF rect = new RectF(x1, y1, x2, y2);
        canvas.drawRect(rect, paint);
    }


    //This method draws a point (a small hollow circle)
    private void drawPoint(Canvas canvas, PointF point){
        Paint paint = new Paint();
        paint.setColor(Color.RED);
        paint.setStrokeWidth(8);
        paint.setStyle(Paint.Style.STROKE);

        float x = point.x;
        float y = point.y;

        canvas.drawCircle(x, y, 1, paint);
    }


Draw on the detected faces and facial landmarks:
Now we pass all the detected faces to a method that draws a rectangle around each face and a point on each facial landmark.

    private Bitmap drawOnFace(SparseArray<Face> faceArray){
        Bitmap outBitmap = Bitmap.createBitmap(mImage.getWidth(), mImage.getHeight(), Bitmap.Config.RGB_565);
        Canvas canvas = new Canvas(outBitmap);
        canvas.drawBitmap(mImage, 0, 0, null);

        for(int i=0; i < faceArray.size(); i++){
            Face face = faceArray.valueAt(i);

            //Drawing rectangle on each face
            drawRectangle(canvas, face.getPosition(), face.getWidth(), face.getHeight());

            //Drawing a point on each facial landmark; every landmark type
            //is drawn the same way, so the cases fall through
            for(Landmark landmark : face.getLandmarks()) {
                switch (landmark.getType()){
                    case Landmark.LEFT_EYE:
                    case Landmark.RIGHT_EYE:
                    case Landmark.BOTTOM_MOUTH:
                    case Landmark.LEFT_MOUTH:
                    case Landmark.RIGHT_MOUTH:
                    case Landmark.NOSE_BASE:
                    case Landmark.LEFT_CHEEK:
                    case Landmark.RIGHT_CHEEK:
                    case Landmark.LEFT_EAR:
                    case Landmark.LEFT_EAR_TIP:
                    case Landmark.RIGHT_EAR:
                    case Landmark.RIGHT_EAR_TIP:
                        drawPoint(canvas, landmark.getPosition());
                        break;
                }
            }
            
            //Other useful details that may be of interest
            Log.d("FaceDetection", "FaceId:" + face.getId()
                    + " Smiling:" + face.getIsSmilingProbability()
                    + " LeftEyeOpen:" + face.getIsLeftEyeOpenProbability()
                    + " RightEyeOpen:" + face.getIsRightEyeOpenProbability());
        }

        return outBitmap;
    }
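
To show the result on screen, hand the returned bitmap to an ImageView. A minimal usage sketch; R.id.imageView is a placeholder id for a view in your layout, not something defined in this post:

ImageView imageView = (ImageView) findViewById(R.id.imageView);
imageView.setImageBitmap(drawOnFace(faceArray));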


Release the FaceDetector object:
Since the FaceDetector uses native resources to perform detection, it is necessary to release the instance when we don’t need it anymore:

detector.release();
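
If you keep the detector as an Activity field rather than a local variable, onDestroy() is a reasonable place to release it. A sketch, assuming a field named mDetector (a name of our choosing, not from the original code):

@Override
protected void onDestroy() {
    super.onDestroy();
    if (mDetector != null) {
        mDetector.release(); //free the native resources held by the detector
    }
}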


You have now learned how to detect faces and facial features in a still image. You can also figure out the state of a face, such as whether it’s smiling or winking.

You can use this technology to enhance the user experience of your image and video apps. It’s also possible to perform specific operations when the people in an image are smiling or winking.
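
For example, you can threshold the classification probabilities logged earlier. A sketch, assuming face is one of the detected Face objects; the 0.7f and 0.3f cutoffs are arbitrary choices, and Face.UNCOMPUTED_PROBABILITY (-1) means a value could not be computed:

float smile = face.getIsSmilingProbability();
float leftEye = face.getIsLeftEyeOpenProbability();
float rightEye = face.getIsRightEyeOpenProbability();

//Treat a face as smiling only when the classifier is reasonably confident
boolean isSmiling = smile != Face.UNCOMPUTED_PROBABILITY && smile > 0.7f;

//A wink is one eye clearly open while the other is clearly closed
boolean isWinking =
    (leftEye > 0.7f && rightEye != Face.UNCOMPUTED_PROBABILITY && rightEye < 0.3f)
    || (rightEye > 0.7f && leftEye != Face.UNCOMPUTED_PROBABILITY && leftEye < 0.3f);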

Even more becomes possible when you use face tracking, which I will cover in a future post.

You can download the complete sample code from the following link:

Face Detection Sample Code