1.
IEEE Trans Image Process; 32: 6155-6167, 2023.
Article in English | MEDLINE | ID: mdl-37938958

ABSTRACT

Facial age estimation has received considerable attention for its diverse application scenarios. Most existing studies treat every sample equally and aim to reduce the average estimation error over the entire dataset, which can be summarized as General Age Estimation. However, because of the long-tailed distribution prevalent in such datasets, treating all samples equally inevitably biases the model toward the head classes (usually adults, who account for the majority of samples). Driven by this, some works suggest that each class should be treated equally to improve performance on the tail classes (those with a minority of samples), which can be summarized as Long-tailed Age Estimation. However, Long-tailed Age Estimation usually faces a performance trade-off: improvement on the tail classes comes at the expense of the head classes. In this paper, our goal is to design a unified framework that performs well on both tasks, killing two birds with one stone. To this end, we propose a simple, effective, and flexible training paradigm named GLAE, which is two-fold. First, we propose Feature Rearrangement (FR) and Pixel-level Auxiliary learning (PA) for better feature utilization to improve overall age estimation performance. Second, we propose Adaptive Routing (AR) to select the appropriate classifier, improving performance on the tail classes while maintaining the head classes. Moreover, we introduce a new metric, named Class-wise Mean Absolute Error (CMAE), to evaluate the performance of all classes equally. GLAE provides a surprising improvement on Morph II, reaching the lowest MAE and CMAE of 1.14 and 1.27 years, respectively. Compared with the previous best method, MAE drops by up to 34%, an unprecedented improvement, and for the first time MAE approaches one year. Extensive experiments on other age benchmark datasets, including CACD, MIVIA, and Chalearn LAP 2015, also indicate that GLAE significantly outperforms state-of-the-art approaches.
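The abstract does not give a formula for CMAE, but its name and stated purpose (evaluating all classes equally) suggest the unweighted mean of per-age-class MAEs. The sketch below contrasts that reading with the standard sample-weighted MAE on a toy long-tailed label set; the function names and example values are illustrative assumptions, not code from the paper.

```python
import numpy as np

def mae(y_true, y_pred):
    """Standard MAE: averages absolute errors over all samples,
    so well-populated (head) ages dominate the score."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.mean(np.abs(y_true - y_pred)))

def cmae(y_true, y_pred):
    """Class-wise MAE (assumed definition): compute MAE separately for
    each age present in the labels, then average those per-class MAEs
    so every age contributes equally regardless of sample count."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    per_class = [np.mean(np.abs(y_pred[y_true == age] - age))
                 for age in np.unique(y_true)]
    return float(np.mean(per_class))

# Toy long-tailed example: most samples are age 30 (head class),
# with single samples at ages 5 and 80 (tail classes).
y_true = [30, 30, 30, 30, 30, 30, 5, 80]
y_pred = [30, 31, 29, 30, 30, 31, 9, 74]
print(mae(y_true, y_pred))   # sample-weighted; dominated by age 30
print(cmae(y_true, y_pred))  # each age class weighted equally
```

Under this reading, large tail-class errors that barely move MAE show up clearly in CMAE, which matches the abstract's motivation for the new metric.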


Subject(s)
Benchmarking, Learning, Humans, Infant, Adult