One embedding to predict them all: Visible and thermal universal face representations for soft biometric estimation via vision transformers

Mirabet-Herranz, Nélida; Galdi, Chiara; Dugelay, Jean-Luc
BIOMETRICS 2024, 19th IEEE Computer Society Workshop on Biometrics, in conjunction with CVPR 2024 (IEEE/CVF Conference on Computer Vision and Pattern Recognition), 17-21 June 2024, Seattle, USA

Human faces encode a vast amount of information, including not only features uniquely distinctive of the individual but also demographic attributes such as a person's age, gender, and weight. Such attributes are referred to as soft biometrics: physical, behavioral, or adhered human characteristics that are classifiable into predefined, human-compliant categories. As the saying goes, 'one look is worth a thousand words'. Vision Transformers have emerged as a powerful deep learning architecture that achieves accurate classification on a range of computer vision tasks, but these models have not yet been applied to soft biometrics. In this work, we propose the Bidirectional Encoder Face representation from image Transformers (BEFiT), a model that leverages multi-head attention mechanisms to capture local and global features and produce a multi-purpose face embedding. This single embedding enables the estimation of different demographics without re-training the model for each soft-biometric trait, ensuring high efficiency without compromising accuracy. Our approach explores the use of both visible and thermal images to obtain powerful face embeddings in different light spectra. We demonstrate that BEFiT embeddings capture the information essential for gender, age, and weight estimation, surpassing the performance of dedicated deep learning architectures designed to estimate a single soft-biometric trait. The code of the BEFiT implementation is publicly available.
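
The core idea of a single shared embedding feeding several soft-biometric estimators can be sketched as follows. This is a minimal illustration, not the authors' released BEFiT code: the ViT backbone (a generic model from the timm library), the embedding size, and the per-trait head definitions are all assumptions made for the example.

# Sketch: one frozen face embedding reused by lightweight per-trait heads.
# The backbone, embedding size, and heads are illustrative assumptions,
# not the released BEFiT implementation.
import torch
import torch.nn as nn
import timm


class SoftBiometricHeads(nn.Module):
    """Lightweight per-trait estimators on top of a shared face embedding."""

    def __init__(self, embed_dim: int):
        super().__init__()
        self.gender = nn.Linear(embed_dim, 2)   # binary classification (logits)
        self.age = nn.Linear(embed_dim, 1)      # regression (years)
        self.weight = nn.Linear(embed_dim, 1)   # regression (kg)

    def forward(self, emb: torch.Tensor) -> dict:
        return {
            "gender": self.gender(emb),
            "age": self.age(emb).squeeze(-1),
            "weight": self.weight(emb).squeeze(-1),
        }


# Hypothetical backbone: any ViT mapping a face crop to a single embedding.
backbone = timm.create_model("vit_base_patch16_224", pretrained=False, num_classes=0)
backbone.eval()  # frozen: the embedding is computed once and reused per trait

heads = SoftBiometricHeads(embed_dim=backbone.num_features)

faces = torch.randn(4, 3, 224, 224)  # batch of (visible or thermal) face crops
with torch.no_grad():
    embedding = backbone(faces)      # one multi-purpose embedding per face

predictions = heads(embedding)       # gender logits, age and weight estimates

Computing the embedding once and attaching only small per-trait heads is what makes the approach described in the abstract efficient: adding a new soft-biometric trait requires training a new head, not re-training the backbone.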

Type:
Conference
City:
Seattle
Date:
2024-06-17
Department:
Digital Security
Eurecom Ref:
7672

PERMALINK: https://www.eurecom.fr/publication/7672