Results 1 - 20 of 1,012
1.
Sensors (Basel) ; 24(9)2024 Apr 24.
Article in English | MEDLINE | ID: mdl-38732790

ABSTRACT

With the development of biometric identification technology, finger vein identification has received increasingly widespread attention for its security, efficiency, and stability. However, because of the limited performance of current standard finger vein image acquisition devices and the complex internal structure of the finger, the acquired images are often heavily degraded and lose their texture characteristics. This makes the topology of the finger veins inconspicuous or even difficult to distinguish, greatly affecting identification accuracy. Therefore, this paper proposes a finger vein image recovery and enhancement algorithm based on atmospheric scattering theory. Firstly, to normalize the local over-bright and over-dark regions of finger vein images within a certain threshold, this paper improves the Gamma transform method to correct and measure the gray values of a given image. Then, we reconstruct the image based on atmospheric scattering theory and design a pixel mutation filter to segment the venous and non-venous contact zones. Finally, the degraded finger vein images are recovered and enhanced by global gray value normalization. Experiments on the SDUMLA-HMT and ZJ-UVM datasets show that the proposed method effectively recovers and enhances degraded finger vein images. The proposed restoration and enhancement algorithm performs well in finger vein recognition using traditional methods, machine learning, and deep learning, and the recognition accuracy on the processed images is improved by more than 10% compared to the original images.
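As a rough illustration of the gray-level steps described above (not the authors' exact algorithm), the following Python sketch applies an image-dependent gamma correction followed by global gray-value normalization; the function names and the gamma range are illustrative assumptions.

```python
import numpy as np

def adaptive_gamma_correct(img, low=0.75, high=1.5):
    """Map local brightness into a bounded range via a gamma curve.

    The exponent is chosen per image from its mean gray level, so
    over-bright images are darkened and over-dark images brightened.
    """
    x = img.astype(np.float64) / 255.0
    mean = x.mean()
    # brighter-than-mid images get gamma > 1, darker images < 1
    gamma = low + (high - low) * mean
    return x ** gamma

def normalize_gray(img):
    """Stretch gray values to the full [0, 255] range."""
    x = img.astype(np.float64)
    lo, hi = x.min(), x.max()
    if hi - lo < 1e-9:
        return np.zeros_like(x, dtype=np.uint8)
    return ((x - lo) / (hi - lo) * 255.0).astype(np.uint8)

# Example: enhance a synthetic 8-bit vein image
vein = (np.random.rand(64, 128) * 255).astype(np.uint8)
enhanced = normalize_gray(adaptive_gamma_correct(vein) * 255.0)
```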


Subject(s)
Algorithms, Fingers, Image Processing, Computer-Assisted, Veins, Humans, Fingers/blood supply, Fingers/diagnostic imaging, Veins/diagnostic imaging, Image Processing, Computer-Assisted/methods, Biometric Identification/methods, Atmosphere
2.
Sensors (Basel) ; 24(9)2024 Apr 25.
Article in English | MEDLINE | ID: mdl-38732856

ABSTRACT

Biometric authentication plays a vital role in various everyday applications with increasing demands for reliability and security. However, the use of real biometric data for research raises privacy concerns and data scarcity issues. A promising approach using synthetic biometric data to address the resulting unbalanced representation and bias, as well as the limited availability of diverse datasets for the development and evaluation of biometric systems, has emerged. Methods for a parameterized generation of highly realistic synthetic data are emerging and the necessary quality metrics to prove that synthetic data can compare to real data are open research tasks. The generation of 3D synthetic face data using game engines' capabilities of generating varied realistic virtual characters is explored as a possible alternative for generating synthetic face data while maintaining reproducibility and ground truth, as opposed to other creation methods. While synthetic data offer several benefits, including improved resilience against data privacy concerns, the limitations and challenges associated with their usage are addressed. Our work shows concurrent behavior in comparing semi-synthetic data as a digital representation of a real identity with their real datasets. Despite slight asymmetrical performance in comparison with a larger database of real samples, a promising performance in face data authentication is shown, which lays the foundation for further investigations with digital avatars and the creation and analysis of fully synthetic data. Future directions for improving synthetic biometric data generation and their impact on advancing biometrics research are discussed.


Subject(s)
Face, Video Games, Humans, Face/anatomy & histology, Face/physiology, Biometry/methods, Biometric Identification/methods, Imaging, Three-Dimensional/methods, Male, Female, Algorithms, Reproducibility of Results
3.
Forensic Sci Int ; 359: 111993, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38704925

ABSTRACT

There are numerous anatomical and anthropometrical standards that can be utilised for craniofacial analysis and identification. These standards originate from a wide variety of sources, such as orthodontic, maxillofacial, surgical, anatomical, anthropological and forensic literature, and numerous media have been employed to collect data from living and deceased subjects. With the development of clinical imaging and the enhanced technology associated with this field, multiple methods of data collection have become accessible, including Computed Tomography, Cone-Beam Computed Tomography, Magnetic Resonance Imaging, Radiographs, Three-dimensional Scanning, Photogrammetry and Ultrasound, alongside the more traditional in vivo methods, such as palpation and direct measurement, and cadaveric human dissection. Practitioners often struggle to identify the most appropriate standards and research results are frequently inconsistent adding to the confusion. This paper aims to clarify how practitioners can choose optimal standards, which standards are the most reliable and when to apply these standards for craniofacial identification. This paper describes the advantages and disadvantages of each mode of data collection and collates published research to review standards across different populations for each facial feature. This paper does not aim to be a practical instruction paper; since this field encompasses a wide range of 2D and 3D approaches (e.g., clay sculpture, sketch, automated, computer-modelling), the implementation of these standards is left to the individual practitioner.


Subject(s)
Forensic Anthropology, Humans, Forensic Anthropology/methods, Reproducibility of Results, Face/diagnostic imaging, Face/anatomy & histology, Imaging, Three-Dimensional, Skull/diagnostic imaging, Skull/anatomy & histology, Cephalometry/standards, Biometric Identification/methods
4.
Sci Rep ; 14(1): 10871, 2024 05 13.
Article in English | MEDLINE | ID: mdl-38740777

ABSTRACT

Reinforcement of the Internet of Medical Things (IoMT) network security has become extremely significant as these networks enable both patients and healthcare providers to communicate with each other by exchanging medical signals, data, and vital reports in a safe way. To ensure the safe transmission of sensitive information, robust and secure access mechanisms are paramount. Vulnerabilities in these networks, particularly at the access points, could expose patients to significant risks. Among the possible security measures, biometric authentication is becoming a more feasible choice, with a focus on leveraging regularly-monitored biomedical signals like Electrocardiogram (ECG) signals due to their unique characteristics. A notable challenge within all biometric authentication systems is the risk of losing original biometric traits if hackers successfully compromise the biometric template storage space. Current research endorses replacement of the original biometrics used in access control with cancellable templates. These are produced using encryption or non-invertible transformation, which improves security by enabling the biometric templates to be changed in case unwanted access is detected. This study presents a comprehensive framework for ECG-based recognition with cancellable templates. This framework may be used for accessing IoMT networks. An innovative methodology is introduced through non-invertible modification of ECG signals using blind signal separation and lightweight encryption. The basic idea here depends on the assumption that if the ECG signal and an auxiliary audio signal for the same person are subjected to a separation algorithm, the algorithm will yield two uncorrelated components through the minimization of a correlation cost function. Hence, the obtained outputs from the separation algorithm will be distorted versions of the ECG as well as the audio signals. The distorted versions of the ECG signals can be treated with a lightweight encryption stage and used as cancellable templates. Security enhancement is achieved through the utilization of the lightweight encryption stage based on a user-specific pattern and XOR operation, thereby reducing the processing burden associated with conventional encryption methods. The proposed framework's efficacy is demonstrated through its application on the ECG-ID and MIT-BIH datasets, yielding promising results. The experimental evaluation reveals an Equal Error Rate (EER) of 0.134 on the ECG-ID dataset and 0.4 on the MIT-BIH dataset, alongside an exceptionally large Area under the Receiver Operating Characteristic curve (AROC) of 99.96% for both datasets. These results underscore the framework's potential in securing IoMT networks through cancellable biometrics, offering a hybrid security model that combines the strengths of non-invertible transformations and lightweight encryption.
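The core idea, blind separation of an ECG/audio pair followed by lightweight XOR encryption with a user-specific key, can be sketched as below. FastICA stands in for the paper's correlation-minimizing separation algorithm, and the signals and key are synthetic placeholders.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)

# Stand-ins for a person's ECG segment and an auxiliary audio segment
ecg = rng.normal(size=2000)
audio = rng.normal(size=2000)

# Blind source separation of the two-channel mixture yields distorted,
# mutually uncorrelated versions of the inputs (FastICA is used here as a
# stand-in for the paper's correlation-minimizing separation algorithm).
mixture = np.column_stack([ecg, audio])          # shape (n_samples, 2)
components = FastICA(n_components=2, random_state=0).fit_transform(mixture)
distorted_ecg = components[:, 0]

# Quantize to bytes and apply a lightweight XOR cipher with a user-specific key
quantized = np.interp(distorted_ecg,
                      (distorted_ecg.min(), distorted_ecg.max()),
                      (0, 255)).astype(np.uint8)
user_key = rng.integers(0, 256, size=quantized.size, dtype=np.uint8)
cancellable_template = np.bitwise_xor(quantized, user_key)

# Re-issuing a template only requires a new key or auxiliary signal,
# so a stolen template can be revoked without exposing the raw ECG.
```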


Subject(s)
Computer Security, Electrocardiography, Internet of Things, Electrocardiography/methods, Humans, Algorithms, Signal Processing, Computer-Assisted, Biometric Identification/methods
5.
PLoS One ; 19(4): e0301971, 2024.
Article in English | MEDLINE | ID: mdl-38648227

ABSTRACT

This work, in a pioneering approach, attempts to build a biometric system that works purely based on the fluid mechanics governing exhaled breath. We test the hypothesis that the structure of turbulence in exhaled human breath can be exploited to build biometric algorithms. This work relies on the idea that the extrathoracic airway is unique for every individual, making the exhaled breath a biomarker. Methods including a classical multi-dimensional hypothesis-testing approach and machine learning models are employed in building user authentication algorithms, namely user confirmation and user identification. A user confirmation algorithm tries to verify whether a user is the person they claim to be. A user identification algorithm tries to identify a user's identity with no prior information available. A dataset of exhaled breath time series samples from 94 human subjects was used to evaluate the performance of these algorithms. The user confirmation algorithms performed exceedingly well for the given dataset, with a true confirmation rate of over 97%. The machine learning based algorithm achieved a good true confirmation rate, reiterating our understanding of why machine learning based algorithms typically outperform classical hypothesis test based algorithms. The user identification algorithm performs reasonably well with the provided dataset, with over 50% of the users identified as being within two possible suspects. We show surprisingly unique turbulent signatures in the exhaled breath that have not been discovered before. In addition to discussions on a novel biometric system, we make arguments to utilise this idea as a tool to gain insights into the morphometric variation of the extrathoracic airway across individuals. Such tools are expected to have future potential in the area of personalised medicine.
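A minimal sketch of the two authentication tasks, assuming breath-derived feature vectors and a generic classifier (the features, classifier choice, and threshold are placeholders, not the paper's models):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

# Placeholder features extracted from exhaled-breath time series
# (e.g. turbulence statistics); 94 users x 10 samples x 16 features.
X = rng.normal(size=(94 * 10, 16))
y = np.repeat(np.arange(94), 10)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

def confirm_user(sample, claimed_id, threshold=0.5):
    """User confirmation: accept the claim only if the model assigns the
    claimed identity enough probability mass."""
    proba = clf.predict_proba(sample.reshape(1, -1))[0]
    return proba[list(clf.classes_).index(claimed_id)] >= threshold

def identify_user(sample, top_k=2):
    """User identification: return the top-k most likely identities."""
    proba = clf.predict_proba(sample.reshape(1, -1))[0]
    return clf.classes_[np.argsort(proba)[::-1][:top_k]]
```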


Subject(s)
Algorithms, Breath Tests, Exhalation, Machine Learning, Humans, Exhalation/physiology, Breath Tests/methods, Biometric Identification/methods
6.
Sensors (Basel) ; 24(8)2024 Apr 09.
Article in English | MEDLINE | ID: mdl-38676006

ABSTRACT

Due to their user-friendliness and reliability, biometric systems have taken a central role in everyday digital identity management for all kinds of private, financial and governmental applications with increasing security requirements. A central security aspect of unsupervised biometric authentication systems is the presentation attack detection (PAD) mechanism, which defines the robustness to fake or altered biometric features. Artifacts like photos, artificial fingers, face masks and fake iris contact lenses are a general security threat for all biometric modalities. The Biometric Evaluation Center of the Institute of Safety and Security Research (ISF) at the University of Applied Sciences Bonn-Rhein-Sieg has specialized in the development of a near-infrared (NIR)-based contact-less detection technology that can distinguish between human skin and most artifact materials. This technology is highly adaptable and has already been successfully integrated into fingerprint scanners, face recognition devices and hand vein scanners. In this work, we introduce a cutting-edge, miniaturized near-infrared presentation attack detection (NIR-PAD) device. It includes an innovative signal processing chain and an integrated distance measurement feature to boost both reliability and resilience. We detail the device's modular configuration and conceptual decisions, highlighting its suitability as a versatile platform for sensor fusion and seamless integration into future biometric systems. This paper elucidates the technological foundations and conceptual framework of the NIR-PAD reference platform, alongside an exploration of its potential applications and prospective enhancements.


Subject(s)
Biometric Identification, Humans, Biometric Identification/methods, Skin/diagnostic imaging, Biometry/methods, Computer Security, Reproducibility of Results, Infrared Rays, Spectroscopy, Near-Infrared/methods, Dermatoglyphics, Signal Processing, Computer-Assisted
7.
Sheng Wu Yi Xue Gong Cheng Xue Za Zhi ; 41(2): 272-280, 2024 Apr 25.
Article in Chinese | MEDLINE | ID: mdl-38686407

ABSTRACT

The existing one-time identity authentication technology cannot continuously guarantee the legitimacy of a user's identity during the whole human-computer interaction session and often requires the active cooperation of users, which seriously limits its usability. This study proposes a new non-contact identity recognition technology based on cardiac micro-motion detection using ultra-wideband (UWB) bio-radar. After the multi-point micro-motion echoes in the range dimension of the human heart surface area were continuously detected by the UWB bio-radar, two-dimensional principal component analysis (2D-PCA) was used to extract compressed features of the two-dimensional image matrix, namely the distance channel-heartbeat sampling point (DC-HBP) matrix, in each accurately segmented heartbeat cycle for identity recognition. In the practical measurement experiment, based on the proposed multi-range-bin & 2D-PCA feature scheme along with two conventional reference feature schemes, three typical classifiers were selected as representatives to perform heartbeat-based identification under two states, normal breathing and breath holding. The results showed that the proposed multi-range-bin & 2D-PCA feature scheme achieved the best recognition performance. Compared with the optimal range-bin & whole heartbeat feature scheme, the proposed scheme achieved an overall average recognition accuracy 6.16% higher (normal respiration: 6.84%; breath holding: 5.48%). Compared with the multi-range-bin & whole heartbeat feature scheme, the overall average accuracy increase was 27.42% (normal respiration: 28.63%; breath holding: 26.21%). This study is expected to provide a new method of undisturbed, all-weather, non-contact and continuous identification for authentication.
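The 2D-PCA step, projecting each DC-HBP matrix onto the leading eigenvectors of the image scatter matrix without flattening, can be sketched as follows; the matrix sizes and data are illustrative assumptions.

```python
import numpy as np

def two_d_pca(matrices, n_components=4):
    """Two-dimensional PCA: project each 2-D matrix onto the leading
    eigenvectors of the image scatter matrix (computed without flattening)."""
    A = np.asarray(matrices, dtype=np.float64)       # (N, m, n)
    mean = A.mean(axis=0)
    centered = A - mean
    # image scatter matrix, shape (n, n)
    G = np.einsum('kmi,kmj->ij', centered, centered) / A.shape[0]
    eigvals, eigvecs = np.linalg.eigh(G)
    proj = eigvecs[:, ::-1][:, :n_components]        # top eigenvectors
    return A @ proj, proj                            # features (N, m, d)

# Toy DC-HBP matrices: 50 heartbeat cycles, 8 range bins x 60 sampling points
beats = np.random.default_rng(2).normal(size=(50, 8, 60))
features, projection = two_d_pca(beats, n_components=4)
```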


Subject(s)
Heart, Principal Component Analysis, Humans, Heart/physiology, Algorithms, Heart Rate, Signal Processing, Computer-Assisted, Motion, Biometric Identification/methods, Respiration
8.
Math Biosci Eng ; 21(2): 3129-3145, 2024 Feb 01.
Article in English | MEDLINE | ID: mdl-38454722

ABSTRACT

Biometric authentication prevents losses from identity misuse in the artificial intelligence (AI) era. The fusion method integrates palmprint and palm vein features, leveraging their stability and security, and enhances counterfeiting prevention and overall system efficiency through multimodal correlations. However, most existing multimodal palmprint and palm vein feature extraction methods extract feature information from each modality independently, ignoring the contribution that correlations between same-class samples from different modalities make to recognition performance. In this study, we addressed the aforementioned issues by proposing a feature-level joint learning fusion approach for palmprint and palm vein recognition based on modal correlations. The method employs a sparse unsupervised projection algorithm with a "purification matrix" constraint to enhance consistency in intra-modal features. This minimizes data reconstruction errors, eliminating noise and extracting compact and discriminative representations. Subsequently, the partial least squares algorithm extracts high grayscale variance and category correlation subspaces from each modality. A weighted sum is then utilized to dynamically optimize the contribution of each modality for effective classification recognition. Experimental evaluations conducted on five multimodal databases, composed of six unimodal databases including the Chinese Academy of Sciences multispectral palmprint and palm vein databases, yielded equal error rates (EER) of 0.0173%, 0.0192%, 0.0059%, 0.0010%, and 0.0008%. Compared to some classical methods for palmprint and palm vein fusion recognition, the algorithm significantly improves recognition performance. The algorithm is suitable for identity recognition in scenarios with high security requirements and holds practical value.
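A simplified sketch of the per-modality supervised projection and weighted-sum score fusion is shown below, with sklearn's PLSRegression standing in for the paper's partial least squares step and the sparse "purification matrix" stage omitted; all data and dimensions are placeholders.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(3)
n_classes, n_per_class = 10, 8
y = np.repeat(np.arange(n_classes), n_per_class)
Y = np.eye(n_classes)[y]                      # one-hot class indicators

# Placeholder palmprint / palm-vein feature vectors for the same samples
X_print = rng.normal(size=(y.size, 120))
X_vein = rng.normal(size=(y.size, 120))

# Supervised PLS projection per modality (category-correlated subspace)
pls_print = PLSRegression(n_components=8).fit(X_print, Y)
pls_vein = PLSRegression(n_components=8).fit(X_vein, Y)
Z_print, Z_vein = pls_print.transform(X_print), pls_vein.transform(X_vein)

def match_score(z, gallery):
    """Negative distance to each enrolled class centroid."""
    centroids = np.stack([gallery[y == c].mean(axis=0) for c in range(n_classes)])
    return -np.linalg.norm(z - centroids, axis=1)

def fused_identity(zp, zv, w_print=0.5, w_vein=0.5):
    """Weighted-sum score fusion of the two modalities."""
    scores = w_print * match_score(zp, Z_print) + w_vein * match_score(zv, Z_vein)
    return int(np.argmax(scores))

pred = fused_identity(Z_print[0], Z_vein[0])
```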


Subject(s)
Artificial Intelligence, Biometric Identification, Biometric Identification/methods, Algorithms, Hand/anatomy & histology, Learning
9.
Sensors (Basel) ; 24(4)2024 Feb 09.
Article in English | MEDLINE | ID: mdl-38400290

ABSTRACT

FV (finger vein) identification is a biometric identification technology that extracts the features of FV images for identity authentication. To address the limitations of CNN-based FV identification, particularly the challenge of small receptive fields and the difficulty in capturing long-range dependencies, an FV identification method named Let-Net (large kernel and attention mechanism network) is introduced, which combines local and global information. Firstly, Let-Net employs large kernels to capture a broader spectrum of spatial contextual information, utilizing deep convolution in conjunction with residual connections to curtail the volume of model parameters. Subsequently, an integrated attention mechanism is applied to augment information flow within the channel and spatial dimensions, effectively modeling global information for the extraction of crucial FV features. The experimental results on nine public datasets show that Let-Net has excellent identification performance, with the EER and accuracy on the FV_USM dataset reaching 0.04% and 99.77%, respectively. Let-Net has only 0.89 M parameters and 0.25 G FLOPs, which means that the time cost of training and inference is low and the model is easy to deploy and integrate into various applications.
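An illustrative PyTorch block (not the published Let-Net) combining a large depthwise kernel, a residual connection, and light channel/spatial attention might look like this:

```python
import torch
import torch.nn as nn

class LargeKernelAttentionBlock(nn.Module):
    """Illustrative block: a large depthwise kernel widens the receptive
    field cheaply, a residual connection keeps optimization stable, and a
    light channel/spatial attention gate models global context."""
    def __init__(self, channels: int, kernel_size: int = 13):
        super().__init__()
        self.depthwise = nn.Conv2d(channels, channels, kernel_size,
                                   padding=kernel_size // 2, groups=channels)
        self.pointwise = nn.Conv2d(channels, channels, 1)
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, 1),
            nn.Sigmoid(),
        )
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(channels, 1, 7, padding=3),
            nn.Sigmoid(),
        )
        self.act = nn.GELU()

    def forward(self, x):
        y = self.act(self.pointwise(self.depthwise(x)))
        y = y * self.channel_gate(y)      # channel attention
        y = y * self.spatial_gate(y)      # spatial attention
        return x + y                      # residual connection

block = LargeKernelAttentionBlock(channels=32)
out = block(torch.randn(1, 32, 64, 128))  # shape preserved: (1, 32, 64, 128)
```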


Subject(s)
Biometric Identification, Extremities, Problem Solving, Technology, Veins/diagnostic imaging, Image Processing, Computer-Assisted
10.
PLoS One ; 19(2): e0291084, 2024.
Article in English | MEDLINE | ID: mdl-38358992

ABSTRACT

In the field of data security, biometric security is a significant emerging concern. Building a multimodal biometrics system with enhanced accuracy and detection rate for smart environments is still a significant challenge. The fusion of an electrocardiogram (ECG) signal with a fingerprint is an effective multimodal recognition system. In this work, unimodal and multimodal biometric systems using a Convolutional Neural Network (CNN) are developed and compared with traditional methods using different levels of fusion of fingerprint and ECG signals. This study is concerned with evaluating the effectiveness of the proposed parallel and sequential multimodal biometric systems with various feature extraction and classification methods. Additionally, the performance of unimodal ECG and fingerprint biometrics utilizing deep learning and traditional classification techniques is examined. The suggested biometric systems were evaluated utilizing the ECG (MIT-BIH) and fingerprint (FVC2004) databases. Additional tests are conducted to examine the suggested models with: 1) a virtual dataset without augmentation (ODB) and 2) a virtual dataset with augmentation (VDB). The findings show that the parallel multimodal system achieved an optimum Area Under the ROC Curve (AUC) of 0.96 and the sequential multimodal system achieved 0.99, compared with unimodal AUCs of 0.87 and 0.99 for the fingerprint and ECG biometrics, respectively. The overall performance of the proposed multimodal biometrics outperformed unimodal biometrics using CNN. Moreover, the performance of the suggested CNN model for the ECG signal and the sequential multimodal system based on a neural network outperformed other systems. Lastly, the performance of the proposed systems is compared with previously existing works.


Subject(s)
Biometric Identification, Deep Learning, Biometric Identification/methods, Biometry/methods, Neural Networks, Computer, Electrocardiography/methods
11.
IEEE Trans Image Process ; 33: 1588-1599, 2024.
Article in English | MEDLINE | ID: mdl-38358875

ABSTRACT

Owing to the development of deep networks and abundant data, automatic face recognition (FR) has quickly reached human-level capacity in the past few years. However, the FR problem is not perfectly solved in cases of large poses and uncontrolled occlusions. In this paper, we propose a novel bypass enhanced representation learning (BERL) method to improve face recognition under unconstrained scenarios. The proposed method integrates self-supervised learning and supervised learning by attaching two auxiliary bypasses, a 3D reconstruction bypass and a blind inpainting bypass, to assist robust feature learning for face recognition. Among them, the 3D reconstruction bypass enforces the face recognition network to encode pose-independent 3D facial information, which enhances the robustness to various poses. The blind inpainting bypass enforces the face recognition network to capture more facial context information for face inpainting, which enhances the robustness to occlusions. The whole framework is trained in an end-to-end manner with the two self-supervised tasks above and the classic supervised face identification task. During inference, the two auxiliary bypasses can be detached from the face recognition network, avoiding any additional computational overhead. Extensive experimental results on various face recognition benchmarks show that, without any cost of extra annotations and computations, our method outperforms state-of-the-art methods. Moreover, the learnt representations also generalize well to other face-related downstream tasks such as facial attribute recognition with limited labeled data.
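A schematic PyTorch sketch of the training-time architecture, with two detachable bypass heads standing in for the 3D-reconstruction and blind-inpainting branches (layer sizes and losses are simplified assumptions, not the published BERL model):

```python
import torch
import torch.nn as nn

class FaceNetWithBypasses(nn.Module):
    """Recognition backbone with two auxiliary bypass heads. The bypasses
    are only used during training; inference keeps just the backbone and
    identity head, so no extra compute is added at deployment."""
    def __init__(self, embed_dim=256, n_ids=1000):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, embed_dim))
        self.id_head = nn.Linear(embed_dim, n_ids)
        self.recon_head = nn.Linear(embed_dim, 3 * 32 * 32)    # bypass 1
        self.inpaint_head = nn.Linear(embed_dim, 3 * 32 * 32)  # bypass 2

    def forward(self, x, train_bypasses=True):
        z = self.backbone(x)
        logits = self.id_head(z)
        if not train_bypasses:
            return logits
        return logits, self.recon_head(z), self.inpaint_head(z)

model = FaceNetWithBypasses()
x = torch.randn(4, 3, 32, 32)
labels = torch.randint(0, 1000, (4,))
logits, recon, inpaint = model(x)
target = x.flatten(1)
loss = (nn.functional.cross_entropy(logits, labels)
        + nn.functional.mse_loss(recon, target)       # reconstruction bypass
        + nn.functional.mse_loss(inpaint, target))    # inpainting bypass
loss.backward()
```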


Subject(s)
Biometric Identification, Facial Recognition, Humans, Biometric Identification/methods, Face/diagnostic imaging, Face/anatomy & histology, Databases, Factual, Benchmarking
12.
Animal ; 18(3): 101079, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38377806

ABSTRACT

Biometrics methods, which currently identify humans, can potentially identify dairy cows. Given that animal movements cannot be easily controlled, identification accuracy and system robustness are challenging when deploying an animal biometrics recognition system on a real farm. Our proposed method performs multiple-cow face detection and face classification from videos by adjusting recent state-of-the-art deep-learning methods. As part of this study, a system was designed and installed at four meters above a feeding zone at the Volcani Institute's dairy farm. Two datasets were acquired and annotated, one for facial detection and the second for facial classification of 77 cows. We achieved for facial detection a mean average precision (at Intersection over Union of 0.5) of 97.8% using the YOLOv5 algorithm, and facial classification accuracy of 96.3% using a Vision-Transformer model with a unique loss-function borrowed from human facial recognition. Our combined system can process video frames with 10 cows' faces, localize their faces, and correctly classify their identities in less than 20 ms per frame. Thus, up to 50 frames per second video files can be processed with our system in real-time at a dairy farm. Our method efficiently performs real-time facial detection and recognition on multiple cow faces using deep neural networks, achieving a high precision in real-time operation. These qualities can make the proposed system a valuable tool for an automatic biometric cow recognition on farms.


Subject(s)
Biometric Identification, Facial Recognition, Female, Cattle, Humans, Animals, Farms, Biometric Identification/methods, Neural Networks, Computer, Algorithms, Dairying/methods
13.
Neural Netw ; 169: 532-541, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37948971

ABSTRACT

The proposed method, Enhancement, Integration, and Expansion, aims to activate the representation of detailed features for occluded person re-identification. Region and context are two important and complementary features, and integrating them in an occluded environment can effectively improve the robustness of the model. Firstly, a self-enhancement module is designed. Based on the constructed multi-stream architecture, rich and meaningful feature interference is introduced in the feature extraction stage to enhance the model's ability to perceive noise. Next, a collaborative integration module similar to cascading cross-attention is proposed. By studying the intrinsic interaction patterns of regional and contextual features, it adaptively fuses features across streams and enhances the diverse and complete representation of internal information. The module is not only robust to complex occlusions, but also mitigates the feature interference problem due to similar appearances or scenes. Finally, a matching expansion module that enhances feature discriminability and completeness is proposed, providing more stable and accurate features for recognition. Comparisons with state-of-the-art methods on occluded and holistic datasets demonstrate the advantages of the proposed method, and the effectiveness of each module is confirmed by extensive ablation studies.


Subject(s)
Biometric Identification, Neural Networks, Computer, Humans
14.
Neural Netw ; 170: 1-17, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37972453

ABSTRACT

Biometrics has received increasing attention in recent years and has been extensively studied. Biometrics can use physical and behavioural differences that are unique to individuals to recognize and identify them. Today, biometric information is used in many areas such as computer vision systems, entrance systems, security and recognition. In this study, a new biometrics database containing silhouette, thermal face and skeletal data based on the distances between the joints was created to be used in behavioural and physical biometrics studies. The use of many cameras in previous studies increases both the processing load and the material cost. This study aimed both to increase recognition performance and to reduce material costs by adding thermal face data, in addition to soft and behavioural biometrics, with the optimum camera. The presented dataset was created to support both motion recognition and person identification. Various data loss scenarios and multi-biometrics approaches based on data fusion were tested on the created datasets and the results are reported comparatively. In addition, the correlation coefficient of motion frames method for obtaining energy images from silhouette data was tested on this dataset and yielded high-accuracy results for both motion and person recognition.


Subject(s)
Biometric Identification, Biometry, Humans, Biometry/methods, Artificial Intelligence, Databases, Factual, Biometric Identification/methods
15.
Analyst ; 149(2): 350-356, 2024 Jan 15.
Article in English | MEDLINE | ID: mdl-38018892

ABSTRACT

This study aims at proof of concept that constant monitoring of the concentrations of metabolites in three individuals' sweat over time can differentiate one from another at any given time, providing investigators and analysts with increased ability and means to individualize this bountiful biological sample. A technique was developed to collect and extract authentic sweat samples from three female volunteers for the analysis of lactate, urea, and L-alanine levels. These samples were collected 21 times over a 40-day period and quantified using a series of bioaffinity-based enzymatic assays with UV-vis spectrophotometric detection. Sweat samples were simultaneously dried, derivatized, and analyzed by a GC-MS technique for comparison. Both UV-vis and GC-MS analysis methods provided a statistically significant MANOVA result, demonstrating that the sum of the three metabolites could differentiate each individual at any given day of the time interval. Expanding upon previous studies, this experiment aims to establish a method of metabolite monitoring as opposed to single-point analyses for application to biometric identification from the skin surface.
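A minimal sketch of the statistical step, a one-way MANOVA testing whether the three metabolites jointly separate subjects, using statsmodels and synthetic placeholder data:

```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(4)

# Placeholder longitudinal data: 3 subjects x 21 collection days,
# with lactate, urea and L-alanine levels per sweat sample.
rows = []
for subject in ("A", "B", "C"):
    offset = {"A": 0.0, "B": 0.5, "C": 1.0}[subject]
    for _ in range(21):
        rows.append({"subject": subject,
                     "lactate": rng.normal(10 + offset, 1.0),
                     "urea": rng.normal(5 + offset, 0.5),
                     "alanine": rng.normal(1 + offset, 0.2)})
df = pd.DataFrame(rows)

# One-way MANOVA: do the three metabolites jointly differentiate the subjects?
result = MANOVA.from_formula("lactate + urea + alanine ~ subject", data=df)
print(result.mv_test())
```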


Subject(s)
Biometric Identification, Sweat, Humans, Female, Gas Chromatography-Mass Spectrometry, Sweat/metabolism, Lactic Acid, Multivariate Analysis
16.
Article in English | MEDLINE | ID: mdl-38082835

ABSTRACT

Newborn face recognition is a meaningful application for obstetrics in the hospital, as it enhances security measures against infant swapping and abduction through authentication protocols. Due to limited newborn face datasets, this topic has not been thoroughly studied. We conducted a clinical trial to create a dataset, named NEWBORN200, that collects face images from 200 newborns within an hour after birth. To the best of our knowledge, this is the largest newborn face dataset collected in the hospital for this application. The dataset was used to evaluate the four latest ResNet-based deep models for newborn face recognition, including ArcFace, CurricularFace, MagFace, and AdaFace. The experimental results show that AdaFace has the best performance, obtaining 55.24% verification accuracy at a 0.1% false accept rate in the open set, while achieving 78.76% rank-1 identification accuracy in a closed set. This demonstrates the feasibility of using deep learning for newborn face recognition and indicates that a promising direction for improvement is robustness to varying postures.


Subject(s)
Biometric Identification, Facial Recognition, Humans, Infant, Infant, Newborn, Benchmarking, Biometric Identification/methods, Databases, Factual, Face
17.
Article in English | MEDLINE | ID: mdl-38083079

ABSTRACT

Electrocardiograms (ECGs) have the inherent property of being intrinsic and dynamic and are shown to be unique among individuals, making them promising as a biometric trait. Although many ECG biometric recognition approaches have demonstrated accurate recognition results in small enrollment sets, they can suffer from performance degradation when many subjects are enrolled. This study proposes an ECG biometric identification system based on locality-sensitive hashing (LSH) that can accommodate a large number of registrants while maintaining satisfactory identification accuracy. By incorporating the concept of LSH, the identity of an unknown subject can be recognized without performing vector comparisons for all registered subjects. Moreover, a kernel density estimator-based method is used to exclude unregistered subjects. The ECGs of 285 subjects from the PTB dataset were used to evaluate the proposed scheme's performance. Experimental results demonstrated an IR and EER of 99% and 4%, respectively, when Nen/Nid = 15/3.
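A single-table sketch of the LSH idea using random-hyperplane hashing (the hash family, bit count, and data are illustrative assumptions; the paper's kernel-density rejection step is omitted):

```python
import numpy as np

rng = np.random.default_rng(5)
dim, n_bits = 64, 16

# Random hyperplanes define the locality-sensitive hash: nearby vectors
# tend to fall on the same side of each plane and share hash bits.
planes = rng.normal(size=(n_bits, dim))

def lsh_key(vec):
    """Sign pattern of the projections, packed into a hashable tuple."""
    return tuple((planes @ vec > 0).astype(int))

# Enrollment: bucket each registrant's ECG feature vector by its hash key
gallery = {i: rng.normal(size=dim) for i in range(285)}
buckets = {}
for ident, vec in gallery.items():
    buckets.setdefault(lsh_key(vec), []).append(ident)

# Identification: compare only against candidates in the probe's bucket
# (a practical system would use several hash tables to reduce misses)
probe_id = 42
probe = gallery[probe_id] + rng.normal(scale=0.05, size=dim)  # noisy re-acquisition
candidates = buckets.get(lsh_key(probe), [])
best = min(candidates, key=lambda i: np.linalg.norm(gallery[i] - probe), default=None)
```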


Subject(s)
Algorithms, Biometric Identification, Humans, Electrocardiography, Phenotype, Recognition, Psychology
18.
Article in English | MEDLINE | ID: mdl-38082655

ABSTRACT

Recently, electromyography (EMG) has been established as a promising new biometric trait that provides a unique dual-mode security: biometrics and knowledge. For authentication that is used daily and long-term by general consumers, the wrist is a suitable location, as sensing there could easily be integrated into the existing form factor of smartwatches and fitness trackers. However, current EMG-based biometrics still follow the historical path of powered prosthetics research, where EMG signals were usually recorded from forearm positions. Moreover, the robustness of EMG processing algorithms across multiple days is still an open problem that needs to be addressed before long-term reliable use. This study investigates the difference in authentication performance between wrist and forearm EMG signals, in a within-day analysis and two cross-day analyses. Our open dataset (GRABMyo), which contains forearm and wrist EMG data collected from 43 participants over three widely separated days (Days 1, 8, and 29), was used to examine this difference. The results showed that wrist EMG signals yielded within-day Equal Error Rates (EERs) at least comparable to those of forearm EMG signals. In cross-day analysis, the EER of the wrist EMG signals was higher than that of forearm signals. In general, the low median EER (<0.1) of wrist EMG in cumulative cross-day analysis demonstrates the promise of using wrist EMG signals for authentication in long-term applications.
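The Equal Error Rate reported throughout can be computed from genuine and impostor score distributions as in the following sketch (the scores here are synthetic placeholders):

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(6)

# Placeholder similarity scores from wrist-EMG comparisons:
# genuine pairs should score higher than impostor pairs.
genuine = rng.normal(loc=0.8, scale=0.1, size=500)
impostor = rng.normal(loc=0.4, scale=0.1, size=500)

scores = np.concatenate([genuine, impostor])
labels = np.concatenate([np.ones_like(genuine), np.zeros_like(impostor)])

fpr, tpr, thresholds = roc_curve(labels, scores)
fnr = 1 - tpr
# EER: operating point where false accepts and false rejects balance
idx = np.nanargmin(np.abs(fpr - fnr))
eer = (fpr[idx] + fnr[idx]) / 2
print(f"EER = {eer:.3f} at threshold {thresholds[idx]:.3f}")
```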


Subject(s)
Biometric Identification, Wrist, Humans, Forearm, Electromyography/methods, Wrist Joint
19.
Sensors (Basel) ; 23(24)2023 Dec 08.
Article in English | MEDLINE | ID: mdl-38139551

ABSTRACT

This research work focuses on a Near-Infrared (NIR) finger-image-based multimodal biometric system based on Finger Texture and Finger Vein biometrics. The individual results of the biometric characteristics are fused using a fuzzy system, and the final identification result is obtained. Experiments are performed on three different databases, i.e., the Near-Infrared Hand Images (NIRHI), Hong Kong Polytechnic University (HKPU) and University of Twente Finger Vein Pattern (UTFVP) databases. First, the Finger Texture biometric employs an efficient texture feature extraction algorithm, i.e., Local Binary Pattern. Then, the classification is performed using a Support Vector Machine, a proven machine learning classification algorithm. Second, transfer learning of pre-trained convolutional neural networks (CNNs) is performed for the Finger Vein biometric, employing two approaches. The three selected CNNs are AlexNet, VGG16 and VGG19. In Approach 1, before feeding the images for the training of the CNN, the necessary preprocessing of NIR images is performed. In Approach 2, before the pre-processing step, image intensity optimization is also employed to regularize the image intensity. NIRHI outperforms HKPU and UTFVP for both modalities of focus, in a unimodal setup as well as in a multimodal one. The proposed multimodal biometric system demonstrates a better overall identification accuracy of 99.62%, compared with 99.51% and 99.50% reported for recent state-of-the-art systems.
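A minimal sketch of the Finger Texture branch, local binary pattern histograms fed to an SVM, assuming the standard uniform LBP operator and synthetic placeholder images (not the paper's preprocessing or databases):

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(image, n_points=8, radius=1):
    """Uniform LBP codes summarized as a normalized histogram."""
    codes = local_binary_pattern(image, n_points, radius, method="uniform")
    n_bins = n_points + 2                 # uniform patterns + one non-uniform bin
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist

rng = np.random.default_rng(7)
# Placeholder NIR finger-texture images: 20 subjects x 6 samples, 64x128 pixels
X, y = [], []
for subject in range(20):
    base = rng.random((64, 128))
    for _ in range(6):
        sample = np.clip(base + rng.normal(scale=0.05, size=base.shape), 0, 1)
        X.append(lbp_histogram(sample))
        y.append(subject)

clf = SVC(kernel="rbf", C=10).fit(X[::2], y[::2])   # enroll on half the samples
accuracy = clf.score(X[1::2], y[1::2])              # test on the held-out half
```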


Subject(s)
Biometric Identification, Fingers, Humans, Fingers/diagnostic imaging, Fingers/blood supply, Biometric Identification/methods, Biometry/methods, Hand/diagnostic imaging, Neural Networks, Computer
20.
Sensors (Basel) ; 23(24)2023 Dec 15.
Article in English | MEDLINE | ID: mdl-38139689

ABSTRACT

With the rapid development of multimedia technology, personnel verification systems have become increasingly important in the security field and in identity verification. However, unimodal verification systems have performance bottlenecks in complex scenarios, thus triggering the need for multimodal feature fusion methods. The main problem with audio-visual multimodal feature fusion is how to effectively integrate information from different modalities to improve the accuracy and robustness of the system for individual identity verification. In this paper, we focus on how to improve multimodal person verification systems and how to combine audio and visual features. In this study, we use pretrained models to extract the embeddings from each modality and then perform fusion model experiments based on these embeddings. The baseline approach in this paper involves taking the fused feature and passing it through a fully connected (FC) layer. Building upon this baseline, we propose three fusion models based on attentional mechanisms: attention, gated, and inter-attention. These fusion models are trained on the VoxCeleb1 development set and tested on the evaluation sets of the VoxCeleb1, NIST SRE19, and CNC-AV datasets. On the VoxCeleb1 dataset, the best system performance achieved in this study was an equal error rate (EER) of 0.23% and a minimum detection cost function (minDCF) of 0.011. On the evaluation set of NIST SRE19, the EER was 2.60% and the minDCF was 0.283. On the evaluation set of the CNC-AV set, the EER was 11.30% and the minDCF was 0.443. These experimental results strongly demonstrate that the proposed fusion method can significantly improve the performance of multimodal person verification systems.
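A compact PyTorch sketch of attention-weighted fusion of pretrained audio and visual embeddings, in the spirit of the attention variant (dimensions, gating, and head are illustrative assumptions, not the paper's exact models):

```python
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Attention-weighted fusion of pretrained speaker (audio) and face
    (visual) embeddings: a small gate scores each modality, the fused vector
    is their weighted sum, and a verification head produces a logit."""
    def __init__(self, dim=256):
        super().__init__()
        self.gate = nn.Linear(dim, 1)    # per-modality attention score
        self.head = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, audio_emb, visual_emb):
        stacked = torch.stack([audio_emb, visual_emb], dim=1)   # (B, 2, dim)
        weights = torch.softmax(self.gate(stacked), dim=1)      # (B, 2, 1)
        fused = (weights * stacked).sum(dim=1)                  # (B, dim)
        return self.head(fused)                                 # verification logit

model = AttentionFusion(dim=256)
audio = torch.randn(8, 256)    # e.g. embeddings from a pretrained speaker model
visual = torch.randn(8, 256)   # e.g. embeddings from a pretrained face model
logits = model(audio, visual)  # (8, 1) same/different-person scores
```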


Subject(s)
Biometric Identification, Information Technology, Humans