Results 1 - 20 of 705
1.
Forensic Sci Int ; 359: 111993, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38704925

ABSTRACT

There are numerous anatomical and anthropometrical standards that can be utilised for craniofacial analysis and identification. These standards originate from a wide variety of sources, such as the orthodontic, maxillofacial, surgical, anatomical, anthropological and forensic literature, and numerous media have been employed to collect data from living and deceased subjects. With the development of clinical imaging and the enhanced technology associated with this field, multiple methods of data collection have become accessible, including Computed Tomography, Cone-Beam Computed Tomography, Magnetic Resonance Imaging, Radiographs, Three-dimensional Scanning, Photogrammetry and Ultrasound, alongside the more traditional in vivo methods, such as palpation and direct measurement, and cadaveric human dissection. Practitioners often struggle to identify the most appropriate standards, and research results are frequently inconsistent, adding to the confusion. This paper aims to clarify how practitioners can choose optimal standards, which standards are the most reliable, and when to apply these standards for craniofacial identification. It describes the advantages and disadvantages of each mode of data collection and collates published research to review standards across different populations for each facial feature. This paper does not aim to be a practical instruction paper; since this field encompasses a wide range of 2D and 3D approaches (e.g., clay sculpture, sketch, automated, computer-modelling), the implementation of these standards is left to the individual practitioner.


Subject(s)
Forensic Anthropology , Humans , Forensic Anthropology/methods , Reproducibility of Results , Face/diagnostic imaging , Face/anatomy & histology , Imaging, Three-Dimensional , Skull/diagnostic imaging , Skull/anatomy & histology , Cephalometry/standards , Biometric Identification/methods
2.
Sci Rep ; 14(1): 10871, 2024 05 13.
Article in English | MEDLINE | ID: mdl-38740777

ABSTRACT

Reinforcing the security of Internet of Medical Things (IoMT) networks has become extremely significant, as these networks enable patients and healthcare providers to communicate with each other by exchanging medical signals, data, and vital reports in a safe way. To ensure the safe transmission of sensitive information, robust and secure access mechanisms are paramount. Vulnerabilities in these networks, particularly at the access points, could expose patients to significant risks. Among the possible security measures, biometric authentication is becoming a more feasible choice, with a focus on leveraging regularly monitored biomedical signals such as Electrocardiogram (ECG) signals due to their unique characteristics. A notable challenge within all biometric authentication systems is the risk of losing the original biometric traits if hackers successfully compromise the biometric template storage. Current research endorses replacing the original biometrics used in access control with cancellable templates. These are produced using encryption or non-invertible transformation, which improves security by enabling the biometric templates to be changed in case unwanted access is detected. This study presents a comprehensive framework for ECG-based recognition with cancellable templates, which may be used for accessing IoMT networks. An innovative methodology is introduced through non-invertible modification of ECG signals using blind signal separation and lightweight encryption. The basic idea depends on the assumption that if the ECG signal and an auxiliary audio signal from the same person are subjected to a separation algorithm, the algorithm will yield two uncorrelated components through the minimization of a correlation cost function. Hence, the outputs of the separation algorithm will be distorted versions of the ECG and audio signals. The distorted versions of the ECG signals can be treated with a lightweight encryption stage and used as cancellable templates. Security enhancement is achieved through a lightweight encryption stage based on a user-specific pattern and an XOR operation, thereby reducing the processing burden associated with conventional encryption methods. The proposed framework's efficacy is demonstrated through its application on the ECG-ID and MIT-BIH datasets, yielding promising results. The experimental evaluation reveals an Equal Error Rate (EER) of 0.134 on the ECG-ID dataset and 0.4 on the MIT-BIH dataset, alongside an exceptionally large Area under the Receiver Operating Characteristic curve (AROC) of 99.96% for both datasets. These results underscore the framework's potential in securing IoMT networks through cancellable biometrics, offering a hybrid security model that combines the strengths of non-invertible transformations and lightweight encryption.
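As a rough illustration of the idea (not the paper's implementation), the sketch below decorrelates an ECG recording and an auxiliary audio signal via simple whitening and then XORs the quantised, distorted ECG with a user-specific key to form a cancellable template; the signal lengths, quantisation and key handling are assumptions.

```python
import numpy as np

def decorrelate(ecg, audio):
    """Whiten a 2-channel mixture of ECG and audio so the outputs are
    uncorrelated; each output is a distorted blend of both inputs."""
    x = np.vstack([ecg, audio])                          # shape (2, n_samples)
    x = x - x.mean(axis=1, keepdims=True)
    cov = np.cov(x)
    eigvals, eigvecs = np.linalg.eigh(cov)
    whitening = eigvecs @ np.diag(1.0 / np.sqrt(eigvals + 1e-12)) @ eigvecs.T
    return whitening @ x                                 # two uncorrelated components

def cancellable_template(distorted, user_key):
    """Quantise the distorted ECG component to bytes and XOR it with a
    user-specific key (lightweight-encryption stand-in)."""
    q = np.interp(distorted, (distorted.min(), distorted.max()), (0, 255)).astype(np.uint8)
    key = np.resize(np.frombuffer(user_key, dtype=np.uint8), q.shape)
    return np.bitwise_xor(q, key)

rng = np.random.default_rng(0)
ecg = np.sin(np.linspace(0, 20 * np.pi, 2000)) + 0.05 * rng.standard_normal(2000)
audio = rng.standard_normal(2000)
components = decorrelate(ecg, audio)
template = cancellable_template(components[0], b"user-specific-pattern")
print(template[:16])
```

Revoking a compromised template then only requires changing the user key or the auxiliary signal, which is the point of a cancellable design.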


Subject(s)
Computer Security , Electrocardiography , Internet of Things , Electrocardiography/methods , Humans , Algorithms , Signal Processing, Computer-Assisted , Biometric Identification/methods
3.
Sensors (Basel) ; 24(9)2024 Apr 24.
Article in English | MEDLINE | ID: mdl-38732790

ABSTRACT

With the development of biometric identification technology, finger vein identification has received increasingly widespread attention for its security, efficiency, and stability. However, because of the limited performance of current standard finger vein image acquisition devices and the complex internal structure of the finger, the acquired images are often heavily degraded and have lost their texture characteristics. This makes the topology of the finger veins inconspicuous or even difficult to distinguish, greatly affecting identification accuracy. Therefore, this paper proposes a finger vein image recovery and enhancement algorithm based on atmospheric scattering theory. Firstly, to normalize the local over-bright and over-dark regions of finger vein images within a certain threshold, an improved Gamma transform is used to correct and measure the gray values of a given image. Then, we reconstruct the image based on atmospheric scattering theory and design a pixel mutation filter to segment the venous and non-venous contact zones. Finally, the degraded finger vein images are recovered and enhanced by global gray value normalization. Experiments on the SDUMLA-HMT and ZJ-UVM datasets show that the proposed method effectively recovers and enhances degraded finger vein images. The proposed restoration and enhancement algorithm performs well in finger vein recognition using traditional methods, machine learning, and deep learning, and the recognition accuracy of the processed images is improved by more than 10% compared with the original images.
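For orientation only, here is a minimal sketch of the two building blocks named above: gamma correction and restoration with the atmospheric scattering model I = J·t + A·(1 − t). The airlight estimate, transmission formula and parameter values are assumptions, not the paper's algorithm.

```python
import numpy as np

def gamma_correct(img, gamma=0.8):
    """Map over-bright / over-dark grey levels toward the mid-range."""
    norm = img.astype(np.float64) / 255.0
    return np.power(norm, gamma)

def scattering_restore(img, omega=0.9, t_min=0.1):
    """Restore a degraded image with the atmospheric scattering model
    I = J*t + A*(1 - t), solving J = (I - A)/t + A."""
    a = np.mean(np.sort(img.ravel())[-max(1, img.size // 100):])   # airlight from brightest 1%
    t = np.clip(1.0 - omega * img / max(a, 1e-6), t_min, 1.0)      # crude transmission estimate
    return np.clip((img - a) / t + a, 0.0, 1.0)

rng = np.random.default_rng(1)
degraded = rng.random((64, 64)) * 255        # stand-in degraded finger vein image
restored = scattering_restore(gamma_correct(degraded))
print(restored.min(), restored.max())
```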


Subject(s)
Algorithms , Fingers , Image Processing, Computer-Assisted , Veins , Humans , Fingers/blood supply , Fingers/diagnostic imaging , Veins/diagnostic imaging , Image Processing, Computer-Assisted/methods , Biometric Identification/methods , Atmosphere
4.
Sensors (Basel) ; 24(9)2024 Apr 25.
Article in English | MEDLINE | ID: mdl-38732856

ABSTRACT

Biometric authentication plays a vital role in various everyday applications, with increasing demands for reliability and security. However, the use of real biometric data for research raises privacy concerns and data scarcity issues. A promising approach has emerged that uses synthetic biometric data to address the resulting unbalanced representation and bias, as well as the limited availability of diverse datasets for the development and evaluation of biometric systems. Methods for the parameterized generation of highly realistic synthetic data are emerging, and the quality metrics needed to prove that synthetic data are comparable to real data remain open research tasks. The generation of 3D synthetic face data using game engines' capability of generating varied, realistic virtual characters is explored as a possible alternative for creating synthetic face data while maintaining reproducibility and ground truth, as opposed to other creation methods. While synthetic data offer several benefits, including improved resilience against data privacy concerns, the limitations and challenges associated with their usage are also addressed. Our work shows concurrent behavior when comparing semi-synthetic data, as a digital representation of a real identity, with the corresponding real datasets. Despite slightly asymmetrical performance in comparison with a larger database of real samples, promising performance in face data authentication is shown, which lays the foundation for further investigations with digital avatars and the creation and analysis of fully synthetic data. Future directions for improving synthetic biometric data generation and their impact on advancing biometrics research are discussed.


Subject(s)
Face , Video Games , Humans , Face/anatomy & histology , Face/physiology , Biometry/methods , Biometric Identification/methods , Imaging, Three-Dimensional/methods , Male , Female , Algorithms , Reproducibility of Results
5.
Sensors (Basel) ; 24(8)2024 Apr 09.
Article in English | MEDLINE | ID: mdl-38676006

ABSTRACT

Due to their user-friendliness and reliability, biometric systems have taken a central role in everyday digital identity management for all kinds of private, financial and governmental applications with increasing security requirements. A central security aspect of unsupervised biometric authentication systems is the presentation attack detection (PAD) mechanism, which defines the robustness to fake or altered biometric features. Artifacts like photos, artificial fingers, face masks and fake iris contact lenses are a general security threat for all biometric modalities. The Biometric Evaluation Center of the Institute of Safety and Security Research (ISF) at the University of Applied Sciences Bonn-Rhein-Sieg has specialized in the development of a near-infrared (NIR)-based contact-less detection technology that can distinguish between human skin and most artifact materials. This technology is highly adaptable and has already been successfully integrated into fingerprint scanners, face recognition devices and hand vein scanners. In this work, we introduce a cutting-edge, miniaturized near-infrared presentation attack detection (NIR-PAD) device. It includes an innovative signal processing chain and an integrated distance measurement feature to boost both reliability and resilience. We detail the device's modular configuration and conceptual decisions, highlighting its suitability as a versatile platform for sensor fusion and seamless integration into future biometric systems. This paper elucidates the technological foundations and conceptual framework of the NIR-PAD reference platform, alongside an exploration of its potential applications and prospective enhancements.


Subject(s)
Biometric Identification , Humans , Biometric Identification/methods , Skin/diagnostic imaging , Biometry/methods , Computer Security , Reproducibility of Results , Infrared Rays , Spectroscopy, Near-Infrared/methods , Dermatoglyphics , Signal Processing, Computer-Assisted
6.
PLoS One ; 19(4): e0301971, 2024.
Article in English | MEDLINE | ID: mdl-38648227

ABSTRACT

This work, in a pioneering approach, attempts to build a biometric system that works purely on the basis of the fluid mechanics governing exhaled breath. We test the hypothesis that the structure of turbulence in exhaled human breath can be exploited to build biometric algorithms. This work relies on the idea that the extrathoracic airway is unique for every individual, making the exhaled breath a biomarker. Methods including a classical multi-dimensional hypothesis-testing approach and machine learning models are employed to build user authentication algorithms, namely user confirmation and user identification. A user confirmation algorithm tries to verify whether a user is the person they claim to be, whereas a user identification algorithm tries to determine a user's identity with no prior information available. A dataset of exhaled breath time series samples from 94 human subjects was used to evaluate the performance of these algorithms. The user confirmation algorithms performed exceedingly well on the given dataset, with a true confirmation rate of over 97%. The machine-learning-based algorithm achieved a good true confirmation rate, reiterating our understanding of why machine-learning-based algorithms typically outperform classical hypothesis-test-based algorithms. The user identification algorithm performs reasonably well on the provided dataset, with over 50% of the users identified as being within two possible suspects. We show surprisingly unique turbulent signatures in the exhaled breath that have not been discovered before. In addition to discussions of a novel biometric system, we make arguments for utilising this idea as a tool to gain insights into the morphometric variation of the extrathoracic airway across individuals. Such tools are expected to have future potential in the area of personalised medicine.
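To make the distinction between confirmation and identification concrete, here is a toy user-confirmation sketch (not the paper's method): it enrols a template from breath-turbulence feature vectors and accepts a claimed identity when the probe's Mahalanobis distance falls below a threshold. The feature dimensionality and threshold are assumptions.

```python
import numpy as np

def enroll(samples):
    """Per-user template: mean and inverse covariance of turbulence features
    extracted from enrolment breath recordings."""
    mu = samples.mean(axis=0)
    cov = np.cov(samples, rowvar=False) + 1e-6 * np.eye(samples.shape[1])
    return mu, np.linalg.inv(cov)

def confirm(template, probe, threshold=3.0):
    """User confirmation: accept the claimed identity if the probe's
    Mahalanobis distance to the template is below a threshold."""
    mu, cov_inv = template
    d = np.sqrt((probe - mu) @ cov_inv @ (probe - mu))
    return d < threshold

rng = np.random.default_rng(2)
enrol_feats = rng.normal(0.0, 1.0, size=(30, 8))     # stand-in turbulence features
template = enroll(enrol_feats)
print(confirm(template, enrol_feats.mean(axis=0)))   # genuine claim -> True
print(confirm(template, rng.normal(5.0, 1.0, 8)))    # impostor claim -> likely False
```

Identification would instead compare one probe against every enrolled template and return the closest identities, which is why it is the harder task.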


Subject(s)
Algorithms , Breath Tests , Exhalation , Machine Learning , Humans , Exhalation/physiology , Breath Tests/methods , Biometric Identification/methods
7.
Sheng Wu Yi Xue Gong Cheng Xue Za Zhi ; 41(2): 272-280, 2024 Apr 25.
Article in Chinese | MEDLINE | ID: mdl-38686407

ABSTRACT

Existing one-time identity authentication technologies cannot continuously guarantee the legitimacy of a user's identity throughout a human-computer interaction session and often require the active cooperation of users, which seriously limits their usability. This study proposes a new non-contact identity recognition technology based on cardiac micro-motion detection using ultra-wideband (UWB) bio-radar. After the multi-point micro-motion echoes in the range dimension of the human heart surface area were continuously detected by the UWB bio-radar, two-dimensional principal component analysis (2D-PCA) was exploited to extract compressed features from the two-dimensional image matrix, namely the distance channel-heartbeat sampling point (DC-HBP) matrix, in each accurately segmented heartbeat cycle for identity recognition. In a practical measurement experiment, three typical classifiers were selected as representatives to conduct heartbeat identification under two states, normal breathing and breath holding, based on the proposed multi-range-bin & 2D-PCA feature scheme along with two conventional reference feature schemes. The results showed that the multi-range-bin & 2D-PCA feature scheme proposed in this paper achieved the best recognition performance. Compared with the optimal range-bin & overall heartbeat feature scheme, the proposed scheme achieved an overall average recognition accuracy 6.16% higher (normal respiration: 6.84%; breath holding: 5.48%). Compared with the multi-distance-unit & whole heartbeat feature scheme, the overall average accuracy increase was 27.42% (normal respiration: 28.63%; breath holding: 26.21%) for the proposed scheme. This study is expected to provide a new method of unobtrusive, all-weather, non-contact and continuous identification for authentication.
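Purely as an illustration of the 2D-PCA compression step described above (the radar processing and beat segmentation are out of scope), the sketch below learns a column-space projection from a stack of matrices shaped like DC-HBP matrices; the matrix sizes and number of components are assumptions.

```python
import numpy as np

def two_d_pca(matrices, n_components=4):
    """2D-PCA: learn a projection from a stack of DC-HBP-like matrices
    (distance channel x heartbeat sampling point) without flattening them."""
    a = np.stack(matrices).astype(np.float64)                 # (n, rows, cols)
    mean = a.mean(axis=0)
    centred = a - mean
    g = np.einsum('nij,nik->jk', centred, centred) / len(a)   # column covariance matrix
    eigvals, eigvecs = np.linalg.eigh(g)
    proj = eigvecs[:, ::-1][:, :n_components]                 # top eigenvectors
    return mean, proj

def compress(matrix, mean, proj):
    """Compressed 2D-PCA feature of one heartbeat cycle."""
    return (matrix - mean) @ proj

rng = np.random.default_rng(3)
cycles = [rng.random((16, 40)) for _ in range(50)]            # 50 segmented beat cycles
mean, proj = two_d_pca(cycles)
print(compress(cycles[0], mean, proj).shape)                  # (16, 4) feature matrix
```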


Subject(s)
Heart , Principal Component Analysis , Humans , Heart/physiology , Algorithms , Heart Rate , Signal Processing, Computer-Assisted , Motion , Biometric Identification/methods , Respiration
8.
Math Biosci Eng ; 21(2): 3129-3145, 2024 Feb 01.
Article in English | MEDLINE | ID: mdl-38454722

ABSTRACT

Biometric authentication prevents losses from identity misuse in the artificial intelligence (AI) era. The fusion method integrates palmprint and palm vein features, leveraging their stability and security, and enhances counterfeiting prevention and overall system efficiency through multimodal correlations. However, most existing multimodal palmprint and palm vein feature extraction methods extract feature information independently from each modality, ignoring the importance of correlations between samples of different modalities within a class for improving recognition performance. In this study, we addressed these issues by proposing a feature-level joint learning fusion approach for palmprint and palm vein recognition based on modal correlations. The method employs a sparse unsupervised projection algorithm with a "purification matrix" constraint to enhance consistency in intra-modal features. This minimizes data reconstruction errors, eliminating noise and extracting compact and discriminative representations. Subsequently, the partial least squares algorithm extracts subspaces with high grayscale variance and high category correlation from each modality. A weighted sum is then utilized to dynamically optimize the contribution of each modality for effective classification and recognition. Experimental evaluations conducted on five multimodal databases, composed of six unimodal databases including the Chinese Academy of Sciences multispectral palmprint and palm vein databases, yielded equal error rates (EER) of 0.0173%, 0.0192%, 0.0059%, 0.0010%, and 0.0008%. Compared with some classical methods for palmprint and palm vein fusion recognition, the algorithm significantly improves recognition performance. The algorithm is suitable for identity recognition in scenarios with high security requirements and holds practical value.
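As a loose analogue of the last two stages only (not the authors' algorithm), the sketch below projects each modality into a label-correlated subspace with partial least squares and fuses per-modality similarity scores with a weighted sum; the toy features, subspace size and weights are assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(4)
n_subjects, n_samples, dim = 5, 20, 64
labels = np.repeat(np.arange(n_subjects), n_samples // n_subjects)
one_hot = np.eye(n_subjects)[labels]
palmprint = rng.normal(labels[:, None], 1.0, size=(n_samples, dim))   # toy modality 1
palmvein = rng.normal(labels[:, None], 1.5, size=(n_samples, dim))    # toy modality 2

def pls_subspace(features, targets, n_components=4):
    """Project one modality into a label-correlated subspace with PLS."""
    return PLSRegression(n_components=n_components).fit(features, targets)

def score(pls, gallery, gallery_labels, probe):
    """Cosine similarity of the probe to each enrolled class centroid."""
    g, p = pls.transform(gallery), pls.transform(probe[None, :])[0]
    scores = []
    for c in np.unique(gallery_labels):
        centroid = g[gallery_labels == c].mean(axis=0)
        scores.append(p @ centroid / (np.linalg.norm(p) * np.linalg.norm(centroid) + 1e-12))
    return np.array(scores)

pls_print = pls_subspace(palmprint, one_hot)
pls_vein = pls_subspace(palmvein, one_hot)
w_print, w_vein = 0.6, 0.4                                    # assumed modality weights
fused = w_print * score(pls_print, palmprint, labels, palmprint[0]) \
      + w_vein * score(pls_vein, palmvein, labels, palmvein[0])
print("predicted identity:", fused.argmax())                  # expected: 0
```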


Subject(s)
Artificial Intelligence , Biometric Identification , Biometric Identification/methods , Algorithms , Hand/anatomy & histology , Learning
9.
Animal ; 18(3): 101079, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38377806

ABSTRACT

Biometric methods, which currently identify humans, can potentially identify dairy cows. Given that animal movements cannot be easily controlled, identification accuracy and system robustness are challenging when deploying an animal biometric recognition system on a real farm. Our proposed method performs multiple-cow face detection and face classification from videos by adapting recent state-of-the-art deep-learning methods. As part of this study, a system was designed and installed four meters above a feeding zone at the Volcani Institute's dairy farm. Two datasets were acquired and annotated, one for facial detection and the second for facial classification of 77 cows. For facial detection we achieved a mean average precision (at an Intersection over Union of 0.5) of 97.8% using the YOLOv5 algorithm, and a facial classification accuracy of 96.3% using a Vision Transformer model with a unique loss function borrowed from human facial recognition. Our combined system can process video frames with 10 cows' faces, localize their faces, and correctly classify their identities in less than 20 ms per frame. Thus, video files of up to 50 frames per second can be processed by our system in real time at a dairy farm. Our method efficiently performs real-time facial detection and recognition on multiple cow faces using deep neural networks, achieving high precision in real-time operation. These qualities can make the proposed system a valuable tool for automatic biometric cow recognition on farms.


Subject(s)
Biometric Identification , Facial Recognition , Female , Cattle , Humans , Animals , Farms , Biometric Identification/methods , Neural Networks, Computer , Algorithms , Dairying/methods
10.
IEEE Trans Image Process ; 33: 1588-1599, 2024.
Article in English | MEDLINE | ID: mdl-38358875

ABSTRACT

Owing to the development of deep networks and abundant data, automatic face recognition (FR) has quickly reached human-level capacity in the past few years. However, the FR problem is not perfectly solved in cases of large poses and uncontrolled occlusions. In this paper, we propose a novel bypass enhanced representation learning (BERL) method to improve face recognition under unconstrained scenarios. The proposed method integrates self-supervised learning and supervised learning by attaching two auxiliary bypasses, a 3D reconstruction bypass and a blind inpainting bypass, to assist robust feature learning for face recognition. Among them, the 3D reconstruction bypass enforces the face recognition network to encode pose-independent 3D facial information, which enhances robustness to various poses. The blind inpainting bypass enforces the face recognition network to capture more facial context information for face inpainting, which enhances robustness to occlusions. The whole framework is trained in an end-to-end manner with the two self-supervised tasks above and the classic supervised face identification task. During inference, the two auxiliary bypasses can be detached from the face recognition network, avoiding any additional computational overhead. Extensive experimental results on various face recognition benchmarks show that, without any cost of extra annotations or computations, our method outperforms state-of-the-art methods. Moreover, the learnt representations also generalize well to other face-related downstream tasks, such as facial attribute recognition with limited labeled data.


Subject(s)
Biometric Identification , Facial Recognition , Humans , Biometric Identification/methods , Face/diagnostic imaging , Face/anatomy & histology , Databases, Factual , Benchmarking
11.
PLoS One ; 19(2): e0291084, 2024.
Article in English | MEDLINE | ID: mdl-38358992

ABSTRACT

In the field of data security, biometric security is a significant emerging concern. Building a multimodal biometric system with enhanced accuracy and detection rate for smart environments is still a significant challenge. The fusion of an electrocardiogram (ECG) signal with a fingerprint is an effective multimodal recognition system. In this work, unimodal and multimodal biometric systems using a Convolutional Neural Network (CNN) are developed and compared with traditional methods using different levels of fusion of fingerprint and ECG signals. This study evaluates the effectiveness of the proposed parallel and sequential multimodal biometric systems with various feature extraction and classification methods. Additionally, the performance of unimodal ECG and fingerprint biometrics utilizing deep learning and traditional classification techniques is examined. The suggested biometric systems were evaluated using the ECG (MIT-BIH) and fingerprint (FVC2004) databases. Additional tests were conducted to examine the suggested models with: 1) a virtual dataset without augmentation (ODB) and 2) a virtual dataset with augmentation (VDB). The findings show that the parallel multimodal system achieved an optimum Area Under the ROC Curve (AUC) of 0.96 and the sequential multimodal system achieved an AUC of 0.99, compared with the unimodal biometrics, which achieved AUCs of 0.87 and 0.99 for the fingerprint and ECG biometrics, respectively. The overall performance of the proposed multimodal biometrics outperformed that of the unimodal biometrics using CNN. Moreover, the suggested CNN model for the ECG signal and the neural-network-based sequential multimodal system outperformed the other systems. Lastly, the performance of the proposed systems is compared with previously existing works.
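The sketch below shows only the evaluation idea: parallel (score-level) fusion of two matchers and an AUC comparison on synthetic scores. The score distributions and equal weights are assumptions, not results from the paper.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
n = 200
labels = rng.integers(0, 2, n)                    # 1 = genuine attempt, 0 = impostor
ecg_scores = np.clip(labels * 0.7 + rng.normal(0.2, 0.2, n), 0, 1)     # stand-in matcher outputs
finger_scores = np.clip(labels * 0.5 + rng.normal(0.3, 0.25, n), 0, 1)

fused = 0.5 * ecg_scores + 0.5 * finger_scores    # parallel (score-level) fusion
for name, s in [("ECG", ecg_scores), ("fingerprint", finger_scores), ("fused", fused)]:
    print(f"{name} AUC: {roc_auc_score(labels, s):.3f}")
```

A sequential design would instead apply the second matcher only when the first one is inconclusive, trading accuracy against computation.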


Subject(s)
Biometric Identification , Deep Learning , Biometric Identification/methods , Biometry/methods , Neural Networks, Computer , Electrocardiography/methods
12.
Neural Netw ; 170: 1-17, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37972453

ABSTRACT

Biometrics is a field that has gained importance in recent years and has been extensively studied. Biometrics can use physical and behavioural differences that are unique to individuals to recognize and identify them. Today, biometric information is used in many areas such as computer vision systems, entrance systems, security and recognition. In this study, a new biometrics database containing silhouette, thermal face and skeletal data based on the distance between the joints was created to be used in behavioural and physical biometrics studies. The use of many cameras in previous studies increases both the processing load and the material cost. This study aimed both to increase recognition performance and to reduce material costs by adding thermal face data, in addition to soft and behavioural biometrics, with the optimum camera. The presented dataset was created to suit both motion recognition and person identification. Various data-loss scenarios and multi-biometrics approaches based on data fusion were tested on the created datasets, and the results are given comparatively. In addition, the correlation coefficient of motion frames method for obtaining energy images from silhouette data was tested on this dataset and yielded high-accuracy results for both motion and person recognition.


Subject(s)
Biometric Identification , Biometry , Humans , Biometry/methods , Artificial Intelligence , Databases, Factual , Biometric Identification/methods
13.
Article in English | MEDLINE | ID: mdl-38082835

ABSTRACT

Newborn face recognition is a meaningful application for obstetrics in the hospital, as it enhances security measures against infant swapping and abduction through authentication protocols. Due to limited newborn face datasets, this topic has not been thoroughly studied. We conducted a clinical trial to create a dataset, NEWBORN200, that collects face images from 200 newborns within an hour after birth. To the best of our knowledge, this is the largest newborn face dataset collected in a hospital for this application. The dataset was used to evaluate four recent ResNet-based deep models for newborn face recognition: ArcFace, CurricularFace, MagFace, and AdaFace. The experimental results show that AdaFace has the best performance, obtaining 55.24% verification accuracy at a 0.1% false accept rate in the open-set setting, while achieving 78.76% rank-1 identification accuracy in the closed-set setting. This demonstrates the feasibility of using deep learning for newborn face recognition and indicates that a direction for improvement could be robustness to varying postures.


Subject(s)
Biometric Identification , Facial Recognition , Humans , Infant , Infant, Newborn , Benchmarking , Biometric Identification/methods , Databases, Factual , Face
14.
Sensors (Basel) ; 23(24)2023 Dec 08.
Article in English | MEDLINE | ID: mdl-38139551

ABSTRACT

This research work focuses on a multimodal biometric system based on Near-Infra-Red (NIR) finger images, using Finger Texture and Finger Vein biometrics. The individual results of the two biometric characteristics are fused using a fuzzy system to obtain the final identification result. Experiments are performed on three different databases, i.e., the Near-Infra-Red Hand Images (NIRHI), Hong Kong Polytechnic University (HKPU) and University of Twente Finger Vein Pattern (UTFVP) databases. First, the Finger Texture biometric employs an efficient texture feature extraction algorithm, i.e., the Linear Binary Pattern, and classification is performed using a Support Vector Machine, a proven machine learning classification algorithm. Second, transfer learning of pre-trained convolutional neural networks (CNNs) is performed for the Finger Vein biometric, employing two approaches. The three selected CNNs are AlexNet, VGG16 and VGG19. In Approach 1, the necessary preprocessing of the NIR images is performed before the images are fed into the CNN for training. In Approach 2, image intensity optimization is additionally employed before the preprocessing step to regularize the image intensity. NIRHI outperforms HKPU and UTFVP for both modalities of focus, in a unimodal setup as well as in a multimodal one. The proposed multimodal biometric system demonstrates a better overall identification accuracy of 99.62%, compared with the 99.51% and 99.50% reported by recent state-of-the-art systems.
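For the texture branch only, here is a minimal sketch of the feature-then-classifier pattern described above, assuming the "Linear Binary Pattern" referred to is the standard Local Binary Pattern; the stand-in images, LBP parameters and train/test split are assumptions.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(image, points=8, radius=1):
    """Uniform LBP histogram of a (stand-in) NIR finger-texture image."""
    lbp = local_binary_pattern(image, points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
    return hist

rng = np.random.default_rng(6)
stripes = np.sin(np.linspace(0, 12 * np.pi, 64))[None, :].repeat(64, axis=0)  # ridge-like texture
images, labels = [], []
for i in range(40):
    noise = 0.3 * rng.standard_normal((64, 64))
    images.append(noise + (stripes if i % 2 else 0.0))   # class 1: textured, class 0: noise only
    labels.append(i % 2)
features = np.array([lbp_histogram(img) for img in images])

clf = SVC(kernel="linear").fit(features[:30], labels[:30])
print("hold-out accuracy:", clf.score(features[30:], labels[30:]))
```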


Subject(s)
Biometric Identification , Fingers , Humans , Fingers/diagnostic imaging , Fingers/blood supply , Biometric Identification/methods , Biometry/methods , Hand/diagnostic imaging , Neural Networks, Computer
15.
Sensors (Basel) ; 23(22)2023 Nov 14.
Article in English | MEDLINE | ID: mdl-38005564

ABSTRACT

(1) Background: The ability to recognize identities is an essential component of security. Electrocardiogram (ECG) signals have gained popularity for identity recognition because of their universal, unique, stable, and measurable characteristics. To ensure accurate identification from ECG signals, this paper proposes an approach involving mixed feature sampling, sparse representation, and recognition. (2) Methods: This paper introduces a new method of identifying individuals through their ECG signals. This technique combines the extraction of fixed ECG features and specific frequency features to improve accuracy in ECG identity recognition. The approach uses the wavelet transform to extract frequency bands that contain personal information features from the ECG signals. These bands are reconstructed, and single R-peak localization determines the ECG window. The signals are segmented and standardized based on the located windows. A sparse dictionary is created using the standardized ECG signals, and the KSVD (K-Orthogonal Matching Pursuit) algorithm is employed to project the ECG target signals into a sparse vector-matrix representation. To extract the final representation of the target signals for identification, the sparse coefficient vectors in the signals are max-pooled. For recognition, the co-dimensional bundle search method is used. (3) Results: This paper utilizes the publicly available European ST-T database. Specifically, ECG signals from 20, 50 and 70 subjects were selected, each with 30 testing segments, and the proposed method achieved recognition rates of 99.14%, 99.09%, and 99.05%, respectively. (4) Conclusion: The experiments indicate that the proposed method can accurately capture, represent and identify ECG signals.
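To illustrate the sparse-representation step in isolation (the wavelet processing, R-peak detection and bundle-search classifier are omitted), the sketch below learns a small dictionary from toy R-peak-aligned beats with scikit-learn's dictionary learning rather than the paper's KSVD, codes them with OMP, and max-pools the coefficients; beat shapes, dictionary size and sparsity level are assumptions.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(7)
# Stand-in for standardized, R-peak-aligned ECG windows (one row per heartbeat).
beats = np.sin(np.linspace(0, 2 * np.pi, 120))[None, :] * rng.normal(1.0, 0.1, (60, 1)) \
        + 0.05 * rng.standard_normal((60, 120))

# Learn a sparse dictionary from enrolment beats; represent probes with OMP.
dico = DictionaryLearning(n_components=16, transform_algorithm="omp",
                          transform_n_nonzero_coefs=5, random_state=0)
codes = dico.fit_transform(beats)

# Max-pool the sparse coefficient vectors into one compact identity descriptor.
descriptor = np.abs(codes).max(axis=0)
print(descriptor.shape)        # (16,)
```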


Subject(s)
Biometric Identification , Humans , Biometric Identification/methods , Algorithms , Electrocardiography/methods , Wavelet Analysis , Databases, Factual
16.
Sensors (Basel) ; 23(19)2023 Sep 30.
Article in English | MEDLINE | ID: mdl-37837025

ABSTRACT

The advent of Social Behavioral Biometrics (SBB) in the realm of person identification has underscored the importance of understanding unique patterns of social interactions and communication. This paper introduces a novel multimodal SBB system that integrates human micro-expressions from text, an emerging biometric trait, with other established SBB traits in order to enhance online user identification performance. In addition to human micro-expression, the proposed method extracts five other original SBB traits for a comprehensive representation of an individual's social behavioral characteristics. After an independent person identification score is obtained from each SBB trait, a rank-level fusion that leverages the weighted Borda count is employed to fuse the scores from all the traits and obtain the final identification score. The proposed method is evaluated on a benchmark dataset of 250 Twitter users, and the results indicate that incorporating human micro-expression with existing SBB traits can substantially boost overall online user identification performance, with an accuracy of 73.87% and a recall score of 74%. Furthermore, the proposed method outperforms state-of-the-art SBB systems.
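A minimal sketch of weighted Borda-count rank-level fusion as it is commonly defined; the traits, candidate counts and weights below are invented for illustration and are not taken from the paper.

```python
import numpy as np

def weighted_borda(rankings, weights):
    """Rank-level fusion with a weighted Borda count.

    rankings: list of arrays, one per SBB trait; rankings[t][i] is the rank
    (0 = best) that trait t assigns to candidate identity i.
    weights:  per-trait weights reflecting each trait's reliability.
    """
    n_candidates = len(rankings[0])
    scores = np.zeros(n_candidates)
    for rank, w in zip(rankings, weights):
        scores += w * (n_candidates - 1 - np.asarray(rank))   # weighted Borda points
    return scores.argsort()[::-1]                             # fused ranking, best first

# Three hypothetical traits ranking four candidate identities (0 = best match).
micro_expr = [0, 2, 1, 3]
reply_net = [1, 0, 2, 3]
vocabulary = [0, 1, 3, 2]
fused = weighted_borda([micro_expr, reply_net, vocabulary], weights=[0.5, 0.3, 0.2])
print("identified user:", fused[0])   # -> 0
```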


Subject(s)
Biometric Identification , Humans , Biometric Identification/methods , Biometry , Communication
17.
IEEE Trans Image Process ; 32: 5652-5663, 2023.
Article in English | MEDLINE | ID: mdl-37824317

ABSTRACT

Face recognition has achieved remarkable success owing to the development of deep learning. However, most existing face recognition models perform poorly against pose variations. We argue that this is primarily caused by pose-based long-tailed data: an imbalanced distribution of training samples between profile faces and near-frontal faces. Additionally, self-occlusion and nonlinear warping of facial textures caused by large pose variations also increase the difficulty of learning discriminative features for profile faces. In this study, we propose a novel framework called the Symmetrical Siamese Network (SSN), which can simultaneously overcome the limitation of pose-based long-tailed data and learn pose-invariant features. Specifically, two sub-modules are proposed in the SSN, i.e., the Feature-Consistence Learning sub-Net (FCLN) and the Identity-Consistence Learning sub-Net (ICLN). For the FCLN, the inputs are all face images in the training dataset. Inspired by contrastive learning, we simulate pose variations of faces and constrain the model to focus on the consistent areas between the original face image and its corresponding virtual-pose face images. For the ICLN, only profile images are used as inputs, and we propose to adopt an Identity Consistence Loss to minimize the intra-class feature variation across different poses. The collaborative learning of the two sub-modules guarantees that the network parameters are updated with relatively equal probability for near-frontal face images and profile images, so that the pose-based long-tailed problem can be effectively addressed. The proposed SSN shows comparable results to state-of-the-art methods on several public datasets. In this study, LightCNN is selected as the backbone of the SSN, and other existing popular networks can also be used in our framework for pose-robust face recognition.
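To make the ICLN objective concrete, here is a toy sketch of an intra-class variation penalty of the kind described (per-identity spread of embeddings across poses); the embedding dimensions and data are invented, and the paper's actual loss may differ.

```python
import numpy as np

def identity_consistence_loss(features, labels):
    """Intra-class variation across poses: for each identity, penalise the
    spread of its embeddings around their per-identity mean."""
    classes = np.unique(labels)
    loss = 0.0
    for c in classes:
        f = features[labels == c]
        loss += np.mean(np.sum((f - f.mean(axis=0)) ** 2, axis=1))
    return loss / len(classes)

rng = np.random.default_rng(10)
labels = np.repeat(np.arange(4), 6)                       # 4 identities, 6 poses each
tight = rng.normal(labels[:, None], 0.1, (24, 32))        # pose-invariant embeddings
loose = rng.normal(labels[:, None], 1.0, (24, 32))        # pose-sensitive embeddings
print(identity_consistence_loss(tight, labels) < identity_consistence_loss(loose, labels))  # True
```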


Subject(s)
Biometric Identification , Facial Recognition , Algorithms , Biometric Identification/methods , Face/diagnostic imaging , Face/anatomy & histology , Databases, Factual
18.
Comput Intell Neurosci ; 2023: 6443786, 2023.
Article in English | MEDLINE | ID: mdl-37469627

ABSTRACT

The need for information security and the adoption of relevant regulations is becoming an overwhelming demand worldwide. As an efficient solution, hybrid multimodal biometric systems utilize fusion to combine multiple biometric traits and sources, improving recognition accuracy, providing higher security assurance, and coping with the limitations of uni-biometric systems. In this paper, three strategies for feature-level deep fusion of five biometric traits (face, both irises, and two fingerprints) derived from three sources of evidence are proposed and compared. In the first two proposed methodologies, each feature vector is mapped separately from the feature space into a reproducing kernel Hilbert space (RKHS) by selecting an appropriate reproducing kernel. In this higher-dimensional space, where nonlinear relations become linear, dimensionality reduction algorithms (KPCA, KLDA) and quaternion-based algorithms (KQPCA, KQPCA) are used for the fusion of the feature vectors. In the third methodology, the fusion of feature spaces based on deep learning is performed by combining feature vectors in deep, fully connected layers. The experimental results on six databases for the proposed hybrid multibiometric system clearly show that the multimodal template obtained from the deep fusion of feature spaces is secure against spoofing attacks and makes the system robust, while the low dimensionality of the fused vector can be used to increase the accuracy of the hybrid multimodal biometric system to 100%, a significant improvement compared with uni-biometric and other multimodal systems.
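As a rough illustration of the RKHS-mapping idea in the first two strategies (not the authors' exact algorithms, and using kernel PCA only), the sketch below projects each trait's features into a kernel space and concatenates the reduced vectors into a fused template; trait dimensions, kernel choice and component counts are assumptions.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(11)
n = 50
face = rng.standard_normal((n, 60))           # stand-in feature vectors per trait
iris = rng.standard_normal((n, 40))
fingerprint = rng.standard_normal((n, 30))

def to_rkhs(features, n_components=8):
    """Map one trait's features into an RKHS and reduce dimensionality (KPCA)."""
    return KernelPCA(n_components=n_components, kernel="rbf").fit_transform(features)

# Feature-level fusion: concatenate the kernel-space projections of each trait.
fused = np.hstack([to_rkhs(face), to_rkhs(iris), to_rkhs(fingerprint)])
print(fused.shape)                            # (50, 24) low-dimensional fused template
```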


Subject(s)
Biometric Identification , Biometry , Algorithms , Databases, Factual , Recognition, Psychology , Biometric Identification/methods
19.
PLoS One ; 18(6): e0287349, 2023.
Article in English | MEDLINE | ID: mdl-37363919

ABSTRACT

Biometric technology is becoming increasingly prevalent in several vital applications that substitute for traditional password and token authentication mechanisms. Recognition accuracy and computational cost are two important aspects to be considered when designing biometric authentication systems. Thermal imaging has been proven to capture a unique thermal signature for a person and thus has been used in thermal face recognition. However, the literature has not thoroughly analysed the impact of feature selection on the accuracy and computational cost of face recognition, which is an important aspect for limited-resource applications such as IoT. Also, the literature has not thoroughly evaluated the performance metrics of the proposed methods/solutions, which are needed for the optimal configuration of biometric authentication systems. This paper proposes a thermal face-based biometric authentication system. The proposed system comprises five phases: a) capturing the user's face with a thermal camera, b) segmenting the face region and excluding the background with an optimized superpixel-based segmentation technique to extract the region of interest (ROI) of the face, c) feature extraction using the wavelet and curvelet transforms, d) feature selection employing the bio-inspired optimization algorithms grey wolf optimizer (GWO), particle swarm optimization (PSO) and genetic algorithm (GA), and e) classification (user identification) performed using the classifiers random forest (RF), k-nearest neighbour (KNN), and naive Bayes (NB). On the public Terravic Facial IR dataset, the proposed system was evaluated using the following metrics: accuracy, precision, recall, F-measure, and receiver operating characteristic (ROC) area. The results showed that curvelet features selected with the GWO and classified with random forest could authenticate users from thermal images with performance up to 99.5%, which is 10% better than the results for wavelet features while using 5% fewer features. In addition, statistical analysis showed the significance of our proposed model. Compared with related works, our system proved to be a better thermal face authentication model with a minimal set of features, making it computationally friendly.
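To show how a wrapper-style feature selection of the kind used in phase d) can work with the classifier in phase e), here is a heavily simplified binary grey-wolf-optimizer sketch around a random forest; the synthetic data, wolf count, iteration budget and 0.5 binarization threshold are all assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(8)
n_samples, n_features = 120, 40
X = rng.standard_normal((n_samples, n_features))
y = (X[:, :5].sum(axis=1) > 0).astype(int)           # only the first 5 features matter

def fitness(mask):
    """Wrapper fitness: cross-validated accuracy of RF on the selected features."""
    if mask.sum() == 0:
        return 0.0
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

# Simplified binary grey wolf optimizer: positions in [0, 1], thresholded to masks.
n_wolves, n_iter = 8, 10
pos = rng.random((n_wolves, n_features))
for it in range(n_iter):
    scores = np.array([fitness(p > 0.5) for p in pos])
    alpha, beta, delta = pos[scores.argsort()[::-1][:3]]   # three best wolves lead the pack
    a = 2 - 2 * it / n_iter                                # exploration factor decays to 0
    for i in range(n_wolves):
        moves = []
        for leader in (alpha, beta, delta):
            r1, r2 = rng.random(n_features), rng.random(n_features)
            A, C = 2 * a * r1 - a, 2 * r2
            moves.append(leader - A * np.abs(C * leader - pos[i]))
        pos[i] = np.clip(np.mean(moves, axis=0), 0, 1)

best_mask = pos[np.argmax([fitness(p > 0.5) for p in pos])] > 0.5
print("selected features:", np.flatnonzero(best_mask))
```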


Subject(s)
Biometric Identification , Facial Recognition , Bayes Theorem , Biometric Identification/methods , Algorithms , Biometry
20.
PLoS One ; 18(5): e0286215, 2023.
Article in English | MEDLINE | ID: mdl-37228099

ABSTRACT

Most existing secure biometric authentication schemes are server-centric, and users must fully trust the server to store, process, and manage their biometric data. As a result, users' biometric data could be leaked by outside attackers or by the service provider itself. This paper first constructs the EDZKP protocol based on the inner product, which proves whether a secret value is the Euclidean distance between secret vectors. Then, combined with the Cuproof protocol, we propose a novel user-centric biometric authentication scheme called BAZKP. In this scheme, all biometric data remain encrypted during the authentication phase, so the server never sees them directly. Meanwhile, the server can determine by calculation whether the Euclidean distance between two secret vectors is within a pre-defined threshold. Security analysis shows that BAZKP satisfies completeness, soundness, and zero-knowledge. Based on BAZKP, we propose a privacy-preserving biometric authentication system, and its evaluation demonstrates that it provides reliable and secure authentication.
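For context only, the sketch below evaluates in the clear the relation the scheme proves in zero knowledge: whether the Euclidean distance between an enrolled and a probe feature vector is within a threshold, written in the inner-product form used by EDZKP. The vector length and threshold are assumptions, and no cryptography is shown.

```python
import numpy as np

def within_threshold(enrolled, probe, threshold):
    """Plaintext version of the relation proved in zero knowledge: is the
    Euclidean distance between two biometric feature vectors below a
    pre-defined threshold? The real scheme checks this on encrypted data."""
    # ||u - v||^2 = <u,u> + <v,v> - 2<u,v>, i.e. inner products only.
    d2 = enrolled @ enrolled + probe @ probe - 2 * (enrolled @ probe)
    return d2 <= threshold ** 2

rng = np.random.default_rng(9)
enrolled = rng.standard_normal(128)
genuine = enrolled + 0.05 * rng.standard_normal(128)
impostor = rng.standard_normal(128)
print(within_threshold(enrolled, genuine, threshold=2.0))    # True
print(within_threshold(enrolled, impostor, threshold=2.0))   # likely False
```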


Subject(s)
Biometric Identification , Telemedicine , Privacy , Algorithms , Computer Security , Biometric Identification/methods , Biometry , Confidentiality