Face Anti-spoofing via Motion Magnification and Multifeature Videolet Aggregation

For robust face biometrics, a reliable anti-spoofing approach has become an essential prerequisite against attacks. While spoofing attacks are possible with any biometric modality, face spoofing attacks are relatively easy to execute, which makes facial biometrics especially vulnerable. This paper presents a new framework for spoofing detection in face videos using motion magnification and multifeature evidence aggregation in a windowed fashion. Micro- and macro-facial expressions commonly exhibited by subjects are first magnified using Eulerian motion magnification. Next, two feature extraction algorithms, a configuration of local binary patterns and motion estimation using histograms of oriented optical flow, are used to encode texture and motion (liveness) properties, respectively. Multifeature windowed videolet aggregation of these two orthogonal features, coupled with support vector machine classification, provides robustness to different attacks. The proposed approach is evaluated and compared with existing algorithms on the publicly available Print Attack, Replay Attack, and CASIA-FASD databases. The proposed algorithm yields state-of-the-art performance and robust generalizability with low computational complexity.
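The feature-aggregation stage described above can be sketched as follows. This is an illustrative, numpy-only approximation under several stated assumptions: the texture descriptor is a basic 8-neighbour LBP histogram (the paper uses a specific LBP configuration), the motion descriptor bins gradient orientations of frame differences as a crude stand-in for histograms of oriented optical flow, the window size of 5 frames is arbitrary, and the final SVM classification step is omitted. All function names here are hypothetical, not from the authors' code.

```python
import numpy as np

def lbp_histogram(img):
    """8-neighbour local binary pattern histogram (256 bins) of a grayscale image."""
    c = img[1:-1, 1:-1]
    codes = np.zeros_like(c, dtype=np.uint8)
    # Offsets of the 8 neighbours, clockwise from top-left.
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        n = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= (n >= c).astype(np.uint8) << bit
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

def motion_histogram(prev, curr, bins=8):
    """Magnitude-weighted histogram of gradient orientations of the frame
    difference. A crude proxy for HOOF, which bins dense optical-flow
    vectors by orientation, weighted by flow magnitude."""
    diff = curr.astype(float) - prev.astype(float)
    gy, gx = np.gradient(diff)
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)                      # orientations in [-pi, pi]
    hist, _ = np.histogram(ang, bins=bins, range=(-np.pi, np.pi), weights=mag)
    s = hist.sum()
    return hist / s if s > 0 else hist

def videolet_features(frames, window=5):
    """Aggregate texture and motion histograms over non-overlapping windows
    (videolets); each row is the feature vector for one videolet."""
    feats = []
    for start in range(0, len(frames) - window + 1, window):
        w = frames[start:start + window]
        tex = np.mean([lbp_histogram(f) for f in w], axis=0)
        mot = np.mean([motion_histogram(w[i], w[i + 1])
                       for i in range(window - 1)], axis=0)
        feats.append(np.concatenate([tex, mot]))  # 256 + 8 = 264 dims
    return np.array(feats)

# Synthetic 10-frame grayscale clip in place of a magnified face video.
rng = np.random.default_rng(0)
frames = [rng.integers(0, 256, (32, 32), dtype=np.uint8) for _ in range(10)]
F = videolet_features(frames, window=5)
print(F.shape)  # one 264-d row per videolet
```

In the full pipeline, each row of `F` would be scored by a trained SVM, and per-videolet decisions aggregated over the video.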

Fig. 1. Sample frames at regular intervals of a subject from the Replay Attack dataset. Frames in (a) are original, while (b) contains frames from the corresponding motion-magnified video. (c) illustrates the heat map of the mean of absolute differences of corresponding frames. The activity in the eye regions represents the blinking motion of the eyes.
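The heat map in Fig. 1(c) can be reproduced by averaging the per-pixel absolute differences between each original frame and its motion-magnified counterpart. A minimal sketch, assuming both sequences are aligned grayscale arrays of equal size (the function name is illustrative):

```python
import numpy as np

def mad_heatmap(orig_frames, mag_frames):
    """Per-pixel mean of absolute differences between corresponding
    original and motion-magnified frames."""
    diffs = [np.abs(m.astype(float) - o.astype(float))
             for o, m in zip(orig_frames, mag_frames)]
    return np.mean(diffs, axis=0)

# Toy example: "magnification" here is just a constant brightness offset.
rng = np.random.default_rng(1)
orig = [rng.integers(0, 256, (4, 4), dtype=np.uint8) for _ in range(3)]
mag = [np.clip(f.astype(int) + 5, 0, 255).astype(np.uint8) for f in orig]
hm = mad_heatmap(orig, mag)  # high values mark regions the magnification amplified
```

On real data, high values in `hm` concentrate where magnified motion (e.g. eye blinks) differs most from the original video.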

The videos available here help visualize the effect of motion magnification.

(Best viewed using a Google Chrome browser)


Print Attack

Replay Attack


Videos of subjects with various motion magnification parameters can be found here. The effect of motion magnification on the CASIA-FASD dataset can be observed in these videos here.

More resources will be made available soon.