
A robust fingerprint presentation attack detection method against unseen attacks through adversarial learning.

2020 International Conference of the Biometrics Special Interest Group (BIOSIG)

Abstract
Fingerprint presentation attack detection (PAD) methods achieve impressive performance in the current literature. However, the fingerprint PAD generalisation problem remains an open challenge, requiring methods able to cope with sophisticated and unseen attacks as eventual intruders become more capable. This work addresses the problem by applying a regularisation technique based on adversarial training and representation learning, specifically designed to improve the model's PAD generalisation capacity to unseen attacks. In the adopted approach, the model jointly learns the representation and the classifier from the data, while explicitly imposing invariance in the high-level representations with respect to the type of attack for a robust PAD. The adversarial training methodology is evaluated in two different scenarios: i) a handcrafted feature extraction method combined with a Multilayer Perceptron (MLP); and ii) an end-to-end solution using a Convolutional Neural Network (CNN). The experimental results demonstrate that the adopted regularisation strategies equip the neural networks with increased PAD robustness. The adversarial approach particularly improves the CNN models' capacity for attack detection in the unseen-attack scenario, showing remarkably improved APCER error rates compared to state-of-the-art methods under similar conditions.
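The abstract describes imposing attack-type invariance on the learned representation via adversarial training. A common mechanism for this kind of invariance is a gradient-reversal layer between the shared features and an auxiliary attack-type head. The sketch below illustrates that mechanism only; the paper's exact architecture is not given here, and all names, shapes, and the scaling factor `lam` are illustrative assumptions.

```python
import numpy as np

def grad_reversal(grad, lam=1.0):
    """Gradient reversal: identity in the forward pass; during
    backpropagation the incoming gradient is negated and scaled by lam,
    so the shared features are trained to confuse the attack-type head."""
    return -lam * grad

# Toy update of shared representation parameters (illustrative, not the
# paper's model).
rng = np.random.default_rng(0)
d = 4                               # feature dimension (assumed)
W = rng.normal(size=d)              # shared representation parameters
g_pad = rng.normal(size=d)          # gradient from the bona fide vs. attack head
g_attack = rng.normal(size=d)       # gradient from the attack-type head
lam = 0.5                           # reversal strength (assumed)

# The shared parameters follow the PAD objective, while the reversed
# attack-type gradient removes attack-type information from the features.
g_shared = g_pad + grad_reversal(g_attack, lam)
W = W - 0.01 * g_shared
```

In a full implementation the two head gradients would come from backpropagating a PAD loss and an attack-type classification loss through the same feature extractor; the reversal is what turns the attack-type head into an adversary rather than a helper.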
Key words
unseen attacks