ABSTRACT
In recent years, the advancement of generative techniques, particularly generative adversarial networks (GANs), has opened new possibilities for generating synthetic biometric data in different modalities, including, among others, images of irises, fingerprints, or faces in different representations. This study presents the process of generating synthetic images of human irises using the recent StyleGAN3 model. The novelty of this work lies in producing generated content in both the Cartesian and the polar coordinate representation; the polar representation is typically used in iris recognition pipelines, such as the foundational approach proposed by John Daugman, but has hitherto not been used in generative AI experiments. The main objective of this study was to conduct a qualitative analysis of the synthetic samples and to evaluate their iris texture density and suitability for meaningful feature extraction. In total, 1327 unique irises were generated, and experiments carried out with the well-known open-source OSIRIS iris recognition software and its counterpart worldcoin-openiris, newly published at the end of 2023, demonstrate that (1) no "identity leak" from the training set was observed, and (2) the generated irises carried enough unique textural information to be successfully differentiated both from one another and from real, authentic iris samples. The results of our research demonstrate the promising potential of synthetic iris data generation as a valuable tool for augmenting training datasets and improving the overall performance of iris recognition systems. By exploring the synthetic data in both Cartesian and polar representations, we aim to understand the benefits and limitations of each approach and their implications for biometric applications. The findings suggest that synthetic iris data can significantly contribute to the advancement of iris recognition technology, enhancing its accuracy and robustness in real-world scenarios by greatly expanding the possibilities for gathering large and diversified training datasets.
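The pipeline above relies on converting irises between Cartesian and polar representations. As a minimal illustrative sketch (not the authors' code), a Daugman-style rubber-sheet mapping can be written as below, assuming the pupil and iris boundaries have already been estimated as concentric circles; the boundary parameters and grid resolutions are hypothetical, and real pipelines additionally handle non-concentric boundaries and occlusion masks.

```python
import numpy as np
import cv2

def rubber_sheet_normalize(image, pupil_xy, pupil_r, iris_r,
                           radial_res=64, angular_res=512):
    """Map the annular iris region to a fixed-size polar (rubber-sheet) strip.

    Assumes concentric circular boundaries, a simplification of Daugman's
    model (which allows non-concentric pupil and iris circles)."""
    cx, cy = pupil_xy
    thetas = np.linspace(0, 2 * np.pi, angular_res, endpoint=False)
    radii = np.linspace(0.0, 1.0, radial_res)[:, None]          # (radial_res, 1)
    r_grid = pupil_r + radii * (iris_r - pupil_r)               # interpolate radially
    map_x = (cx + r_grid * np.cos(thetas)).astype(np.float32)   # (radial_res, angular_res)
    map_y = (cy + r_grid * np.sin(thetas)).astype(np.float32)
    return cv2.remap(image, map_x, map_y, interpolation=cv2.INTER_LINEAR)

# Hypothetical usage with pre-computed boundary estimates:
# eye = cv2.imread("eye.png", cv2.IMREAD_GRAYSCALE)
# strip = rubber_sheet_normalize(eye, pupil_xy=(320, 240), pupil_r=40, iris_r=110)
```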
Subjects
Biometry; Iris; Humans; Recognition, Psychology; Software; Technology
ABSTRACT
Compression encodes digital data so that it occupies less storage and requires less network bandwidth for transmission, which is currently an imperative need for iris recognition systems because of the large amounts of data involved. Deep neural networks trained as image auto-encoders have recently emerged as a promising direction for advancing the state of the art in image compression, yet the ability of these schemes to preserve unique biometric traits has been questioned when they are used within the corresponding recognition systems. For the first time, we thoroughly investigate the compression effectiveness of DSSLIC, a deep-learning-based image compression model specifically well suited for iris data compression, along with an additional deep-learning-based lossy image compression technique. In particular, we relate full-reference image quality, measured in terms of the Multi-Scale Structural Similarity Index (MS-SSIM) and Local Feature Based Visual Security (LFBVS), as well as no-reference image quality, measured in terms of the Blind Reference-less Image Spatial Quality Evaluator (BRISQUE), to the recognition scores obtained by a set of concrete recognition systems. We further compare the performance of the DSSLIC model against several state-of-the-art (non-learning-based) lossy image compression techniques, including the ISO standards JPEG2000 and JPEG, the HEVC (H.265)-derived BPG, HEVC, VVC, and AV1, to determine the compression algorithm best suited for this purpose. The experimental results show superior compression and promising recognition performance of the model over all other techniques on different iris databases.
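To make the quality-versus-rate relationship concrete, the following sketch compresses an iris image at several JPEG qualities and scores each result with single-scale SSIM from scikit-image; this is only a stand-in for the MS-SSIM, LFBVS, and BRISQUE measurements used in the study, and the quality values are arbitrary.

```python
import io
import numpy as np
from PIL import Image
from skimage.metrics import structural_similarity

def jpeg_quality_curve(gray_iris, qualities=(90, 70, 50, 30, 10)):
    """Compress an 8-bit grayscale iris image at several JPEG qualities and
    report bits-per-pixel versus SSIM against the original.

    Single-scale SSIM is used here as a simple stand-in for MS-SSIM."""
    results = []
    src = Image.fromarray(gray_iris)
    for q in qualities:
        buf = io.BytesIO()
        src.save(buf, format="JPEG", quality=q)
        bpp = 8.0 * buf.tell() / gray_iris.size      # bits per pixel
        buf.seek(0)
        decoded = np.array(Image.open(buf))
        score = structural_similarity(gray_iris, decoded, data_range=255)
        results.append((q, bpp, score))
    return results

# Hypothetical usage:
# iris = np.array(Image.open("iris.png").convert("L"))
# for q, bpp, ssim in jpeg_quality_curve(iris):
#     print(f"quality={q:3d}  {bpp:.2f} bpp  SSIM={ssim:.3f}")
```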
Subjects
Data Compression; Algorithms; Data Compression/methods; Databases, Factual; Image Processing, Computer-Assisted; Iris; Neural Networks, Computer
ABSTRACT
Biometric recognition technology has been widely used in various fields of society. Iris recognition, as a stable and convenient biometric technology, has been widely adopted in security applications. However, iris images collected in actual non-cooperative environments contain various kinds of noise. Although mainstream deep-learning-based iris recognition methods achieve good recognition accuracy, they tend to do so at the cost of increased model complexity. Moreover, what an actual optical system collects is the original, non-normalized iris image, and mainstream deep-learning-based iris recognition schemes do not consider the iris localization stage. To address these problems, this paper proposes an effective iris recognition scheme consisting of an iris localization stage and an iris verification stage. For iris localization, we use a parallel Hough circle transform to extract the inner circle of the iris and the Daugman algorithm to extract the outer circle; for iris verification, we develop a new lightweight convolutional neural network whose architecture consists of a deep residual network module and a residual pooling layer introduced to effectively improve verification accuracy. Iris localization experiments were conducted on 400 iris images collected in a non-cooperative environment. Compared with processing on a central processing unit, the graphics-processing-unit implementation increased the speed by 26, 32, 36, and 21 times on the 4 iris datasets, respectively, while effective iris localization accuracy was maintained. Furthermore, we chose four representative iris datasets collected in non-cooperative environments for the iris verification experiments. The experimental results demonstrated that the network structure achieves high-precision iris verification with fewer parameters, with equal error rates of 1.08%, 1.01%, 1.71%, and 1.11% on the 4 test databases, respectively.
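For the inner-boundary stage, a Hough circle transform as available in OpenCV can be sketched as follows; this illustrates the general technique rather than the paper's parallel implementation, and the radius bounds and accumulator thresholds are assumed values that would need tuning per sensor and dataset.

```python
import cv2
import numpy as np

def detect_pupil(gray_eye):
    """Locate the pupil (inner iris boundary) with a Hough circle transform.

    Returns ((cx, cy), r) or None; parameter values below are illustrative."""
    blurred = cv2.medianBlur(gray_eye, 7)                 # suppress specular noise
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.5,
                               minDist=gray_eye.shape[0],  # expect a single pupil
                               param1=120, param2=30,
                               minRadius=20, maxRadius=80)
    if circles is None:
        return None
    cx, cy, r = np.round(circles[0, 0]).astype(int)
    return (cx, cy), r

# Hypothetical usage:
# eye = cv2.imread("eye.png", cv2.IMREAD_GRAYSCALE)
# pupil = detect_pupil(eye)
```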
Subjects
Biometric Identification; Deep Learning; Biometric Identification/methods; Algorithms; Neural Networks, Computer; Iris/anatomy & histology
ABSTRACT
Iris localization in non-cooperative environments is challenging and essential for accurate iris recognition. Motivated by traditional iris-localization algorithms and the robustness of the YOLO model, we propose a novel iris-localization algorithm. First, we design a novel iris detector with a modified you-only-look-once v4 (YOLO v4) model, from which we approximate the position of the pupil center. Then, we use a modified integro-differential operator to precisely locate the inner and outer iris boundaries. Experimental results show that iris-detection accuracy reaches 99.83% with the modified YOLO v4 model, which is higher than that of the original YOLO v4 model. The accuracy in locating the inner and outer iris boundaries without glasses reaches 97.72% at a short distance and 98.32% at a long distance; with glasses, the localization accuracy reaches 93.91% and 84%, respectively. These results are much higher than those of the traditional Daugman algorithm. Extensive experiments conducted on multiple datasets demonstrate the effectiveness and robustness of our method for iris localization in non-cooperative environments.
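The refinement step can be illustrated with the classical (unmodified) integro-differential operator, searching radii around a fixed center estimate; the sketch below is a simplification that assumes the center supplied by the detector is accurate and uses nearest-neighbour sampling for brevity.

```python
import numpy as np

def integro_differential(gray, center, r_min, r_max, n_theta=360):
    """Return the radius around a fixed center that maximizes Daugman's
    integro-differential response: the smoothed derivative, over radius,
    of the mean intensity along circular contours."""
    cx, cy = center
    thetas = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    radii = np.arange(r_min, r_max)
    xs = cx + np.outer(radii, np.cos(thetas))
    ys = cy + np.outer(radii, np.sin(thetas))
    h, w = gray.shape
    xs = np.clip(np.round(xs).astype(int), 0, w - 1)   # nearest-neighbour sampling
    ys = np.clip(np.round(ys).astype(int), 0, h - 1)
    ring_means = gray[ys, xs].mean(axis=1).astype(float)
    grad = np.abs(np.diff(ring_means))                  # derivative over radius
    kernel = np.array([1.0, 4.0, 6.0, 4.0, 1.0])
    grad = np.convolve(grad, kernel / kernel.sum(), mode="same")  # Gaussian-like smoothing
    return int(radii[np.argmax(grad) + 1])

# Hypothetical usage with a pupil-center estimate from the detector:
# r_outer = integro_differential(eye, center=(320, 240), r_min=80, r_max=160)
```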
Subjects
Algorithms; Iris; Pupil
ABSTRACT
BACKGROUND: A partnership between the University of Antwerp and the University of Kinshasa implemented the EBOVAC3 clinical trial with an Ebola vaccine regimen administered to health care provider participants in Tshuapa Province, Democratic Republic of the Congo. This randomized controlled trial was part of an Ebola outbreak preparedness initiative financed through the Innovative Medicines Initiative-European Union. The EBOVAC3 clinical trial used iris scan technology to identify all health care provider participants enrolled in the vaccine trial, to ensure that the right participant received the right vaccine at the right visit. OBJECTIVE: We aimed to assess the acceptability, accuracy, and feasibility of iris scan technology as an identification method within a population of health care provider participants in a vaccine trial in a remote setting. METHODS: We used a mixed methods study. Acceptability was assessed prior to the trial through 12 focus group discussions (FGDs) and again at enrollment. Feasibility and accuracy were studied using a longitudinal trial design in which iris scanning was compared with the unique study ID card for identifying health care provider participants at enrollment and at their follow-up visits. RESULTS: During the FGDs, health care provider participants were mainly concerned that the iris scan technology could cause physical problems to their eyes or expose them to spiritual problems through sorcery. Nevertheless, 99% (85/86; 95% CI 97.1-100.0) of health care provider participants in the FGDs agreed to be identified by iris scan, and at enrollment, 99.0% (692/699; 95% CI 98.2-99.7) accepted identification by iris scan. Iris scan technology correctly identified 93.1% (636/683; 95% CI 91.2-95.0) of the participants returning for scheduled follow-up visits. The iris scanning operation lasted 2 minutes or less for 96.0% (656/683; 95% CI 94.6-97.5) of participants, and 1 attempt was enough to identify the majority of study participants (475/683, 69.5%; 95% CI 66.1-73.0). CONCLUSIONS: Iris scans are highly acceptable as an identification tool for health care provider participants in a clinical trial in a remote setting. Their operationalization during the trial demonstrated a level of accuracy high enough to reliably identify individuals. Iris scanning was found to be feasible in clinical trials but requires a trained operator to reduce the duration and the number of attempts needed to identify a participant. TRIAL REGISTRATION: ClinicalTrials.gov NCT04186000; https://clinicaltrials.gov/ct2/show/NCT04186000.
Subjects
Ebola Vaccines; Ebola Virus Disease; Adult; Biometry; Democratic Republic of the Congo; Ebola Virus Disease/prevention & control; Humans; Iris
ABSTRACT
Recently, deep learning approaches, especially convolutional neural networks (CNNs), have attracted extensive attention in iris recognition. Although CNN-based approaches provide automatic feature extraction and achieve outstanding performance, they usually require more training samples and higher computational complexity than the classic methods. This work focuses on training a novel condensed 2-channel (2-ch) CNN with few training samples for efficient and accurate iris identification and verification. A multi-branch CNN with three well-designed online augmentation schemes and radial attention layers is first proposed as a high-performance basic iris classifier. Then, both branch pruning and channel pruning are achieved by analyzing the weight distribution of the model. Finally, fast fine-tuning is optionally applied, which can significantly improve the performance of the pruned CNN while alleviating the computational burden. In addition, we investigate the encoding ability of the 2-ch CNN and propose an efficient iris recognition scheme suitable for large-database application scenarios. Moreover, the gradient-based analysis results indicate that the proposed algorithm is robust to various image contaminations. We comprehensively evaluated our algorithm on three publicly available iris databases, and the results proved satisfactory for real-time iris recognition.
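The 2-channel idea feeds the two normalized iris images to be compared as the two input channels of one CNN that outputs a match score. A minimal PyTorch sketch of that input convention is shown below; the layer sizes are illustrative and do not reproduce the paper's condensed multi-branch architecture, pruning, or radial attention.

```python
import torch
import torch.nn as nn

class TwoChannelVerifier(nn.Module):
    """Toy 2-channel verifier: a pair of normalized iris images is stacked
    as two input channels and mapped to a single match logit."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)

    def forward(self, img_a, img_b):
        pair = torch.cat([img_a, img_b], dim=1)   # (N, 2, H, W)
        return self.head(self.features(pair).flatten(1))

# Hypothetical usage on 64x512 normalized iris strips:
# model = TwoChannelVerifier()
# logit = model(torch.rand(1, 1, 64, 512), torch.rand(1, 1, 64, 512))
# match_prob = torch.sigmoid(logit)
```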
Subjects
Algorithms; Neural Networks, Computer; Attention; Iris/diagnostic imaging; Recognition, Psychology
ABSTRACT
In this work, we present an eye-image acquisition device that can be used as an image-acquisition front end in compact, low-cost, and easy-to-integrate products for smart-city access control applications based on iris recognition. We discuss the advantages and disadvantages of iris recognition compared with fingerprint or face recognition, outline the main drawbacks of existing commercial solutions, and propose a concept design for door-mounted access control systems based on iris recognition technology. Our eye-image acquisition device was built around a low-cost camera module. Integrated infrared distance measurement was used for active image focusing, and FPGA image processing was used for raw-RGB-to-grayscale demosaicing and passive image focusing. The integrated visible-light illumination meets the IEC 62471 photobiological safety standard. We present results on the operation of the distance-measurement subsystem and the image-focusing subsystem, examples of images of an artificial toy eye acquired under different illumination conditions, and the calculation of illumination exposure hazards. We managed to acquire a sharp image of an artificial toy eye 22 mm in diameter from an approximate distance of 10 cm, with 400 pixels across the iris diameter, an average acquisition time of 1 s, and illumination below hazardous exposure levels.
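Passive image focusing is commonly driven by a sharpness score computed on the image itself. The sketch below uses the variance-of-Laplacian measure as one such score; whether the device's FPGA pipeline uses this exact measure is an assumption, so this only illustrates the general idea.

```python
import cv2

def focus_measure(gray) -> float:
    """Variance of the Laplacian: higher means sharper. A common passive
    focus metric; using this exact measure here is an assumption."""
    return float(cv2.Laplacian(gray, cv2.CV_64F).var())

def pick_sharpest(frames):
    """Return the frame with the highest focus score from a burst."""
    return max(frames, key=focus_measure)

# Hypothetical usage:
# frames = [cv2.imread(p, cv2.IMREAD_GRAYSCALE) for p in ("f0.png", "f1.png")]
# best = pick_sharpest(frames)
```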
Subjects
Algorithms; Iris; Image Processing, Computer-Assisted; Light; Lighting
ABSTRACT
Iris segmentation plays an important role in iris recognition systems, as correct segmentation is a prerequisite for accurate recognition. However, the efficiency and robustness of traditional iris segmentation methods are severely challenged in non-cooperative environments by unfavorable factors such as occlusion, blur, low resolution, off-axis gaze, motion, and specular reflections, all of which seriously reduce segmentation accuracy. In this paper, we present a novel iris segmentation algorithm that localizes the outer and inner boundaries of the iris image. We propose a neural network model called "Interleaved Residual U-Net" (IRUNet) for semantic segmentation and iris mask synthesis. K-means clustering is applied to select a set of saliency points in order to recover the outer boundary of the iris, whereas the inner boundary is recovered by selecting another set of saliency points on the inner side of the mask. Experimental results demonstrate that the proposed iris segmentation algorithm achieves mean IoU values of 98.9% and 97.7% for inner and outer boundary estimation, respectively, outperforming existing approaches on the challenging CASIA-Iris-Thousand database.
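Once a binary iris mask is predicted, a circular outer boundary can be recovered from its contour points. The sketch below uses a least-squares (Kasa) circle fit on the largest mask contour; the K-means saliency-point selection described in the paper is omitted for brevity, so this is an illustration of the boundary-recovery idea only.

```python
import numpy as np
import cv2

def fit_circle(points):
    """Least-squares (Kasa) circle fit to Nx2 (x, y) points."""
    x, y = points[:, 0].astype(float), points[:, 1].astype(float)
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x ** 2 + y ** 2
    cx, cy, c = np.linalg.lstsq(A, b, rcond=None)[0]
    r = np.sqrt(c + cx ** 2 + cy ** 2)
    return (cx, cy), r

def outer_boundary_from_mask(mask):
    """Fit a circle to the largest contour of a binary iris mask (uint8, 0/255)."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    largest = max(contours, key=cv2.contourArea).reshape(-1, 2)
    return fit_circle(largest)

# Hypothetical usage on a predicted segmentation mask:
# (cx, cy), r = outer_boundary_from_mask(predicted_mask)
```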
Subjects
Algorithms; Image Processing, Computer-Assisted; Iris; Databases, Factual; Iris/diagnostic imaging; Neural Networks, Computer
ABSTRACT
With the increasing demand for information security and security regulations all over the world, biometric recognition technology has been widely used in everyday life. In this regard, multimodal biometrics has gained interest and become popular due to its ability to overcome a number of significant limitations of unimodal biometric systems. In this paper, a new multimodal biometric human identification system is proposed, based on a deep learning algorithm that recognizes humans using the iris, face, and finger-vein biometric modalities. The system is built on convolutional neural networks (CNNs) that extract features and classify images with a softmax classifier. To develop the system, three CNN models were combined: one for the iris, one for the face, and one for the finger vein. Each CNN model was built on the well-known pretrained VGG-16 model, the Adam optimization method was applied, and categorical cross-entropy was used as the loss function. Techniques to avoid overfitting, such as image augmentation and dropout, were applied. To fuse the CNN models, different fusion approaches were employed to explore their influence on recognition performance; specifically, feature-level and score-level fusion were applied. The performance of the proposed system was empirically evaluated through several experiments on the SDUMLA-HMT multimodal biometrics dataset. The obtained results demonstrated that using three biometric traits in a biometric identification system yields better results than using one or two traits. The results also showed that our approach comfortably outperformed other state-of-the-art methods, achieving an accuracy of 99.39% with the feature-level fusion approach and an accuracy of 100% with different score-level fusion methods.
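Score-level fusion of the three modality classifiers can be illustrated as a weighted sum of their softmax outputs, as in the sketch below; equal weights are an assumption, and the feature-level fusion also evaluated in the paper is not shown.

```python
import numpy as np

def score_level_fusion(scores_by_modality, weights=None):
    """Fuse per-modality class-probability vectors by a weighted sum.

    `scores_by_modality` is a list of softmax outputs of shape (n_classes,),
    e.g. from the iris, face, and finger-vein CNNs; equal weights by default."""
    scores = np.stack(scores_by_modality)              # (n_modalities, n_classes)
    if weights is None:
        weights = np.full(len(scores), 1.0 / len(scores))
    fused = np.average(scores, axis=0, weights=weights)
    return int(np.argmax(fused)), fused

# Hypothetical usage with three per-modality classifiers:
# identity, fused = score_level_fusion([iris_probs, face_probs, vein_probs])
```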
Subjects
Biometric Identification; Deep Learning; Face/anatomy & histology; Fingers/blood supply; Iris/anatomy & histology; Algorithms; Humans; Neural Networks, Computer
ABSTRACT
In this study, we employed a dual-band spectral imaging system to capture iridal images from subjects wearing cosmetic contact lenses. By using independent component analysis (ICA) to separate the individual spectral primitives, we successfully distinguished the natural iris texture from the cosmetic contact lens (CCL) pattern and restored the genuine iris patterns from the CCL-polluted image. Based on a proof-of-concept database containing 200 test image pairs from 20 CCL-wearing subjects, the recognition performance, measured as the false rejection rate (FRR), was improved from FRR = 10.52% to FRR = 0.57% with the proposed ICA anti-spoofing scheme.
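The unmixing step can be illustrated with FastICA from scikit-learn applied to two co-registered band images, treating each pixel as a two-channel mixture; this is a simplified stand-in for the paper's dual-band scheme, and deciding which recovered component is the genuine iris texture is left out.

```python
import numpy as np
from sklearn.decomposition import FastICA

def separate_components(band1, band2):
    """Unmix two co-registered grayscale band images with FastICA.

    Each pixel is treated as a 2-channel mixture. Returns two component
    images; choosing which one is the natural iris texture (e.g., by texture
    statistics) is a separate step not shown here."""
    h, w = band1.shape
    mixed = np.column_stack([band1.ravel(), band2.ravel()]).astype(float)
    ica = FastICA(n_components=2, whiten="unit-variance", random_state=0)
    sources = ica.fit_transform(mixed)                  # (h*w, 2)
    return sources[:, 0].reshape(h, w), sources[:, 1].reshape(h, w)

# Hypothetical usage with two registered spectral-band captures:
# comp_a, comp_b = separate_components(band_a_img, band_b_img)
```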
Subjects
Iris; Algorithms; Cosmetics; Databases, Factual
ABSTRACT
Among biometric recognition systems based on the fingerprint, finger vein, or face, the iris recognition system has proven effective at achieving a high recognition accuracy and security level. However, several recent studies have indicated that an iris recognition system can be fooled by presentation attack images recaptured from high-quality printed images or by contact lenses with printed iris patterns. This potential threat can reduce the security level of an iris recognition system. In this study, we propose a new presentation attack detection (PAD) method for iris recognition systems (iPAD) using near-infrared (NIR) camera images. To detect presentation attack images, we first localize the iris region of the input image using circular edge detection (CED). Based on the localization result, we extract image features using deep-learning-based and handcrafted methods. The input iris images are then classified into real and presentation attack categories using support vector machines (SVMs). Through extensive experiments with two public datasets, we show that our proposed method effectively addresses the iris presentation attack detection problem and produces detection accuracy superior to that of previous studies.
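The final classification stage can be sketched as an SVM trained on the concatenation of deep and handcrafted feature vectors; the feature extractors are assumed to exist upstream, and the kernel and hyperparameters below are illustrative rather than the paper's settings.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def train_pad_classifier(deep_feats, handcrafted_feats, labels):
    """Fit an SVM on concatenated deep + handcrafted features.

    deep_feats: (n_samples, d1), handcrafted_feats: (n_samples, d2),
    labels: 1 = real iris, 0 = presentation attack. The RBF kernel and
    default hyperparameters are assumptions."""
    X = np.hstack([deep_feats, handcrafted_feats])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
    clf.fit(X, labels)
    return clf

# Hypothetical usage with upstream feature extractors:
# clf = train_pad_classifier(cnn_features, handcrafted_features, y)
# p_real = clf.predict_proba(np.hstack([cnn_query, handcrafted_query]))[:, 1]
```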
ABSTRACT
This paper presents a single-bit CMOS image sensor (CIS) that uses a data processing technique with an edge detection block for simple iris segmentation. To recognize the iris image, an image sensor conventionally captures high-resolution image data in digital code, extracts the iris data, and then compares it with a reference image through a recognition algorithm. In this case, however, the frame rate decreases by the time required to convert multi-bit digital data through the analog-to-digital converter (ADC) in the CIS. To reduce the overall processing time as well as the power consumption, we propose a data processing technique with an exclusive-OR (XOR) logic gate that obtains single-bit, edge-detected image data instead of multi-bit image data from the ADC. In addition, we propose a logarithmic counter to efficiently measure the single-bit image data that can be applied to the iris recognition algorithm. The effective area of the proposed single-bit image sensor (174 × 144 pixels) is 2.84 mm² in a 0.18 µm 1-poly 4-metal CMOS image sensor process. The power consumption of the proposed single-bit CIS is 2.8 mW with a 3.3 V supply voltage and a maximum frame rate of 520 frames/s. The error rate of the ADC is 0.24 least significant bit (LSB) on an 8-bit ADC basis at a 50 MHz sampling frequency.
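A software analogue of the XOR edge-detection idea is sketched below: the image is binarized and each pixel is XORed with its neighbours, so every output pixel is a single bit marking an intensity transition. This only illustrates the data-reduction principle, not the sensor circuit, and the threshold is an assumption.

```python
import numpy as np

def xor_edge_map(gray, threshold: int = 128):
    """Single-bit edge map: binarize the image, then XOR each pixel with its
    right and lower neighbours, yielding 1-bit edge data instead of
    multi-bit intensities (an illustration of the principle only)."""
    bits = (gray >= threshold).astype(np.uint8)
    edge_h = bits[:, :-1] ^ bits[:, 1:]          # horizontal transitions
    edge_v = bits[:-1, :] ^ bits[1:, :]          # vertical transitions
    edges = np.zeros_like(bits)
    edges[:, :-1] |= edge_h
    edges[:-1, :] |= edge_v
    return edges                                  # values in {0, 1}

# Hypothetical usage:
# edges = xor_edge_map(eye_gray)
```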
Subjects
Algorithms; Humans; Iris
ABSTRACT
Iris recognition systems have been used in high-security applications because of their high recognition rate and the distinctiveness of iris patterns. However, as reported by recent studies, an iris recognition system can be fooled by artificial iris patterns, reducing its security level. The accuracy of previous presentation attack detection research is limited because it used only features extracted from the global iris region. To overcome this problem, we propose a new presentation attack detection method for iris recognition that combines features extracted from both local and global iris regions, using convolutional neural networks and support vector machines based on a near-infrared (NIR) light camera sensor. The detection results from the two kinds of image features are fused using two methods, feature-level and score-level fusion, to enhance the detection ability of each kind of feature. Through extensive experiments using two popular public datasets (LivDet-Iris-2017 Warsaw and Notre Dame Contact Lens Detection 2015) and their fusion, we validate the efficiency of our proposed method, which yields smaller detection errors than those reported in previous studies.
Subjects
Deep Learning; Infrared Rays; Iris/anatomy & histology; Photography/instrumentation; Humans; Neural Networks, Computer; Support Vector Machine
ABSTRACT
The recent advancements in computer vision have opened new horizons for deploying biometric recognition algorithms in mobile and handheld devices. Similarly, accurate iris recognition is now much needed in unconstrained scenarios. In these environments, the acquired iris image can exhibit occlusion, low resolution, blur, unusual glint, ghost effects, and off-angle views, and the prevailing segmentation algorithms cannot cope with these constraints. In addition, when near-infrared (NIR) illumination is unavailable, iris segmentation in visible light becomes challenging because of visible-light noise. Deep learning with convolutional neural networks (CNNs) has brought considerable breakthroughs in various applications. To address iris segmentation in challenging situations with visible-light and near-infrared camera sensors, this paper proposes a densely connected fully convolutional network (IrisDenseNet), which can determine the true iris boundary even in inferior-quality images by exploiting better information gradient flow between the dense blocks. In the experiments, five datasets from visible-light and NIR environments were used. For the visible-light environment, the Noisy Iris Challenge Evaluation Part II (NICE-II, selected from the UBIRIS.v2 database) and Mobile Iris Challenge Evaluation (MICHE-I) datasets were used. For the NIR environment, the Institute of Automation, Chinese Academy of Sciences (CASIA) v4.0 Interval, CASIA v4.0 Distance, and IIT Delhi v1.0 iris datasets were used. Experimental results showed the optimal segmentation of the proposed IrisDenseNet and its excellent performance over existing algorithms on all five datasets.
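The dense connectivity that IrisDenseNet builds on can be illustrated with a minimal dense block in PyTorch, where each layer receives the concatenation of the block input and all previous layers' outputs; the channel counts and depth below are illustrative, not the published architecture.

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Minimal dense block: every layer takes the concatenation of the input
    and all previous layers' outputs (growth_rate new channels per layer)."""
    def __init__(self, in_channels: int, growth_rate: int = 16, n_layers: int = 4):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
                nn.Conv2d(channels, growth_rate, kernel_size=3, padding=1),
            ))
            channels += growth_rate
        self.out_channels = channels

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)

# Hypothetical usage inside a segmentation encoder:
# block = DenseBlock(in_channels=32)
# y = block(torch.rand(1, 32, 64, 64))   # -> (1, block.out_channels, 64, 64)
```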
ABSTRACT
For many practical applications of image sensors, how to extend the depth of field (DoF) is an important research topic; if successfully implemented, it could benefit various applications, from photography to biometrics. In this work, we examine the feasibility and practicality of a well-known extended-DoF (EDoF) technique, wavefront coding, by building a real-time long-range iris recognition system and performing large-scale iris recognition. The keys to successful long-range iris recognition are a long DoF and image-quality invariance across object distances, requirements strict and harsh enough to test the practicality and feasibility of EDoF-empowered image sensors. Besides image sensor modification, we also explored the possibility of varying the enrollment/testing pairs. With 512 iris images from 32 Asian subjects as the database, a 400-mm focal length, and f/6.3 optics over a 3 m working distance, our results show that a sophisticated coding design scheme plus homogeneous enrollment/testing setups can effectively overcome the blurring caused by phase modulation and omit Wiener-based restoration. In our experiments, based on 3328 iris images in total, the EDoF factor reaches 3.71 times that of the original system without a loss of recognition accuracy.
Subjects
Biometry/methods; Algorithms; Asian People; Humans; Image Processing, Computer-Assisted/methods; Iris/physiology; Photography/methods
ABSTRACT
Eye examination plays an important role when living individuals are forensically investigated. The iris colour, retinal scans and other biometric features may be used for identification purposes while visual impairments may have legal implications in employment, driving and accidents. Ocular manifestations provide clues regarding substance abuse, poisoning and toxicity, and evidence of trauma, abuse or disease can be revealed along with psychological traits and lifestyle. Thus, the eye is a valuable tool in forensic investigations of living subjects, providing identifying characteristics along with health information. This review focuses on the medico-legal aspects of the eye's contribution when the living are subjected to forensic examination.
ABSTRACT
Biometric systems have gained attention as a more secure alternative to traditional authentication methods. However, these systems are not without technical limitations. This paper presents a hybrid approach that combines edge detection and segmentation techniques to enhance the security of cloud systems. The proposed method uses iris recognition as a biometric paradigm, taking advantage of the unique patterns of the iris. We performed feature extraction and classification using the Hamming distance (HD) and convolutional neural networks (CNNs). We validated the experimental findings on several datasets, namely MMU, IITD, and CASIA Iris Interval V4. Comparing the proposed method's results with previous research, we obtained recognition rates of 99.50% on MMU using the CNN, 97.18% on IITD using the CNN, and 95.07% on CASIA using the HD. These results indicate that the proposed method outperforms other classifiers used in previous research, showcasing its effectiveness in improving cloud security services.
ABSTRACT
The performance of iris recognition systems is frequently affected by input image quality, which in turn is vulnerable to less-than-optimal conditions due to illumination, environments, and subject characteristics (e.g., distance, movement, face/body visibility, blinking, etc.). VASIR (Video-based Automatic System for Iris Recognition) is a state-of-the-art NIST-developed iris recognition software platform designed to systematically address these vulnerabilities. We developed VASIR as a research tool that will not only provide a reference (to assess the relative performance of alternative algorithms) for the biometrics community, but will also advance (via this new emerging iris recognition paradigm) NIST's measurement mission. VASIR is designed to accommodate both ideal (e.g., classical still images) and less-than-ideal images (e.g., face-visible videos). VASIR has three primary modules: 1) Image Acquisition, 2) Video Processing, and 3) Iris Recognition. Each module consists of several sub-components that have been optimized by use of rigorous orthogonal experiment design and analysis techniques. We evaluated VASIR performance using the MBGC (Multiple Biometric Grand Challenge) NIR (Near-Infrared) face-visible video dataset and the ICE (Iris Challenge Evaluation) 2005 still-based dataset. The results showed that even though VASIR was primarily developed and optimized for the less-constrained video case, it still achieved high verification rates for the traditional still-image case. For this reason, VASIR may be used as an effective baseline for the biometrics community to evaluate their algorithm performance, and thus serves as a valuable research platform.
ABSTRACT
Purpose: To study the effect of pupil dilation on a biometric iris recognition (BIR) system for personal authentication and identification. Methods: A prospective, non-randomized, single-center cohort study was conducted on patients who reported for a routine eye check-up from November 2017 to November 2019 (2 years). An iris scanning device "IRITECH-MK2120U" was used to initially enroll the undilated eyes. Baseline scans were taken after matching with the enrolled database. All eyes were topically dilated and matched again with the enrolled database. The Hamming distance (a measure of disagreement between two iris codes) and recognition status were recorded from the device output, and eyes were evaluated by slit-lamp ophthalmoscopy with special emphasis on pupil shape, size, and texture. Results: All 321 enrolled eyes matched after topical dilation. The pupil size had a significant effect on Hamming distance with a P value <0.05. There were no false matches. A correct recognition rate of 100% was obtained after dilation. No loss of iris texture or pupil shape was observed after dilation. Conclusion: A BIR system is a reliable method for identification and personal authentication after pupil dilation. Topically dilated pupils are not a cause for non-recognition of iris scans.
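The Hamming distance used here to score agreement between two iris codes can be sketched as follows; the masking convention and the decision threshold in the usage example are illustrative assumptions, not the device's settings.

```python
import numpy as np

def hamming_distance(code_a, code_b, mask_a=None, mask_b=None) -> float:
    """Fraction of disagreeing bits between two boolean iris codes, counted
    only where both codes are valid (unmasked)."""
    code_a, code_b = np.asarray(code_a, bool), np.asarray(code_b, bool)
    valid = np.ones_like(code_a, dtype=bool)
    if mask_a is not None:
        valid &= np.asarray(mask_a, bool)
    if mask_b is not None:
        valid &= np.asarray(mask_b, bool)
    disagreements = np.logical_xor(code_a, code_b) & valid
    return disagreements.sum() / max(valid.sum(), 1)

# Hypothetical usage: codes from the pre- and post-dilation scans of one eye.
# hd = hamming_distance(code_before, code_after)
# is_match = hd < 0.32    # illustrative decision threshold
```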
Subjects
Biometry; Pupil; Humans; Cohort Studies; Prospective Studies; Biometry/methods; Iris
ABSTRACT
The iris has been proven to be one of the most stable and accurate biometrics. It has been widely used in recognition systems to determine the identity of individuals who attempt to access secured or restricted areas (e.g., airports, ATMs, data centers). An iris recognition (IR) technique for identity authentication/verification is proposed in this research. Iris image pre-processing, which includes iris segmentation, normalization, and enhancement, is followed by feature extraction and matching. First, the iris image is segmented using the Hough transform technique, and Daugman's rubber sheet model is then used to normalize the segmented iris area. After enhancement (such as histogram equalization), Gabor wavelets and the Discrete Wavelet Transform are used to precisely extract the prominent characteristics. A multiclass Support Vector Machine (SVM) is used to assess the similarity of the images. The suggested method is evaluated using the IITD iris dataset, one of the most widely used iris datasets. A benefit of the suggested method is that it reduces the number of features per image to only 88. Experiments revealed that the proposed method collects a moderate quantity of useful features and outperforms other methods; its recognition accuracy on the test data was 98.92%.
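The Gabor-wavelet stage can be illustrated with OpenCV's built-in Gabor kernels applied to the normalized iris strip, followed by simple block pooling; the kernel parameters and pooling grid below are illustrative assumptions and do not reproduce the paper's 88-feature representation.

```python
import cv2
import numpy as np

def gabor_features(norm_iris, n_orientations: int = 4):
    """Filter a normalized (rubber-sheet) iris strip with a small bank of
    Gabor kernels and pool the responses into a compact feature vector.

    Assumes the strip height and width are divisible by 8 (e.g., 64x512);
    kernel size, wavelength, and pooling grid are illustrative choices."""
    feats = []
    for k in range(n_orientations):
        theta = k * np.pi / n_orientations
        kernel = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=theta,
                                    lambd=10.0, gamma=0.5, psi=0)
        response = cv2.filter2D(norm_iris.astype(np.float32), cv2.CV_32F, kernel)
        h, w = response.shape
        blocks = response.reshape(8, h // 8, 8, w // 8)
        feats.append(np.abs(blocks).mean(axis=(1, 3)).ravel())   # 8x8 pooled grid
    return np.concatenate(feats)

# Hypothetical usage on a 64x512 normalized strip, later classified by an SVM:
# vec = gabor_features(strip)    # length 4 * 64 = 256 in this sketch
```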