Self-supervised-based multimodal fusion for active biometric verification on mobile devices

Ouadjer, Youcef; Galdi, Chiara; Berrani, Sid-Ahmed; Adnane, Mourad; Dugelay, Jean-Luc
ICPRAM 2024, 13th International Conference on Pattern Recognition Applications and Methods, 24-26 February 2024, Rome, Italy

This paper focuses on the fusion of multimodal data for effective active biometric verification on mobile devices. Our proposed Multimodal Fusion (MMFusion) framework combines hand movement data and touch screen interactions. Unlike conventional approaches that rely on annotated unimodal data to train deep neural networks, our method leverages contrastive self-supervised learning to extract powerful feature representations and to cope with the lack of labeled training data. The fusion is performed at the feature level, by combining information from hand movement data (collected with background sensors such as the accelerometer, gyroscope, and magnetometer) and touch screen logs. Following the self-supervised learning protocol, MMFusion is pre-trained to capture similarities between hand movement sensor data and touch screen logs, attracting similar pairs and repelling dissimilar ones. Extensive evaluations demonstrate its high performance on user verification across diverse tasks compared to unimodal alternatives trained with the SimCLR framework. Moreover, experiments in semi-supervised scenarios show that MMFusion achieves the best trade-off between sensitivity and specificity.
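The abstract describes the pre-training objective only at a high level (attracting paired hand-movement and touch-screen embeddings while repelling mismatched ones). A minimal PyTorch sketch of one plausible cross-modal contrastive objective, a symmetric InfoNCE/NT-Xent variant in the SimCLR family mentioned above, is given below; the function name, temperature value, and embedding shapes are illustrative assumptions, not the authors' exact implementation.

import torch
import torch.nn.functional as F

def cross_modal_nt_xent(z_motion, z_touch, temperature=0.1):
    # z_motion: (N, D) embeddings from a hand-movement encoder (hypothetical)
    # z_touch:  (N, D) embeddings from a touch-screen encoder (hypothetical)
    # Matching rows form positive pairs; all other pairings are negatives.
    z_motion = F.normalize(z_motion, dim=1)
    z_touch = F.normalize(z_touch, dim=1)
    logits = z_motion @ z_touch.t() / temperature  # (N, N) cosine similarities
    targets = torch.arange(z_motion.size(0), device=z_motion.device)
    # Pull the diagonal (paired samples) together and push the
    # off-diagonal entries apart, symmetrized over both modalities.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

In such a setup, feature-level fusion would then combine the two pre-trained embeddings (e.g., by concatenation or projection) before the verification stage; the paper's exact fusion operator, encoders, and hyperparameters are not specified in this record.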

Type:
Conference
City:
Rome
Date:
2024-02-24
Department:
Digital Security
Eurecom Ref:
7607
Copyright:
© 2024 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.

PERMALINK: https://www.eurecom.fr/publication/7607