Results 1-20 of 84
1.
Alzheimers Dement ; 20(4): 3074-3079, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38324244

ABSTRACT

This perspective outlines the Artificial Intelligence and Technology Collaboratories (AITC) at Johns Hopkins University, University of Pennsylvania, and University of Massachusetts, highlighting their roles in developing AI-based technologies for older adult care, particularly targeting Alzheimer's disease (AD). These National Institute on Aging (NIA) centers foster collaboration among clinicians, gerontologists, ethicists, business professionals, and engineers to create AI solutions. Key activities include identifying technology needs, stakeholder engagement, training, mentoring, data integration, and navigating ethical challenges. The objective is to apply these innovations effectively in real-world scenarios, including rural settings. In addition, the AITC focuses on developing best practices for AI application in the care of older adults, facilitating pilot studies, and addressing ethical concerns related to technology development for older adults with cognitive impairment, with the ultimate aim of improving the lives of older adults and their caregivers.

HIGHLIGHTS: Addressing the complex needs of older adults with Alzheimer's disease (AD) requires a comprehensive approach that integrates medical and social support. Current gaps in training, techniques, tools, and expertise hinder uniform access across communities and health care settings. Artificial intelligence (AI) and digital technologies hold promise for transforming care for this demographic. Yet transitioning these innovations from concept to marketable products presents significant challenges, often stalling promising advancements in the developmental phase. The Artificial Intelligence and Technology Collaboratories (AITC) program, funded by the National Institute on Aging (NIA), presents a viable model. These Collaboratories foster the development and implementation of AI methods and technologies through projects aimed at improving care for older Americans, particularly those with AD, and promote the sharing of best practices in AI and technology integration.

Why Does This Matter? The National Institute on Aging (NIA) Artificial Intelligence and Technology Collaboratories (AITC) program's mission is to accelerate the adoption of artificial intelligence (AI) and new technologies for the betterment of older adults, especially those with dementia. By bridging scientific and technological expertise, fostering clinical and industry partnerships, and enhancing the sharing of best practices, this program can significantly improve the health and quality of life for older adults with Alzheimer's disease (AD).


Subject(s)
Alzheimer Disease, Isothiocyanates, United States, Humans, Aged, Alzheimer Disease/therapy, Artificial Intelligence, Geroscience, Quality of Life, Technology
2.
Proc Natl Acad Sci U S A ; 115(24): 6171-6176, 2018 06 12.
Article in English | MEDLINE | ID: mdl-29844174

ABSTRACT

Achieving the upper limits of face identification accuracy in forensic applications can minimize errors that have profound social and personal consequences. Although forensic examiners identify faces in these applications, systematic tests of their accuracy are rare. How can we achieve the most accurate face identification: using people and/or machines working alone or in collaboration? In a comprehensive comparison of face identification by humans and computers, we found that forensic facial examiners, facial reviewers, and superrecognizers were more accurate than fingerprint examiners and students on a challenging face identification test. Individual performance on the test varied widely. On the same test, four deep convolutional neural networks (DCNNs), developed between 2015 and 2017, identified faces within the range of human accuracy. Accuracy of the algorithms increased steadily over time, with the most recent DCNN scoring above the median of the forensic facial examiners. Using crowd-sourcing methods, we fused the judgments of multiple forensic facial examiners by averaging their rating-based identity judgments. Accuracy was substantially better for fused judgments than for individuals working alone. Fusion also served to stabilize performance, boosting the scores of lower-performing individuals and decreasing variability. Single forensic facial examiners fused with the best algorithm were more accurate than the combination of two examiners. Therefore, collaboration among humans and between humans and machines offers tangible benefits to face identification accuracy in important applications. These results offer an evidence-based roadmap for achieving the most accurate face identification possible.
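The fusion step described above, averaging rating-based identity judgments across examiners, can be illustrated with a minimal sketch on synthetic data. The toy examiner model, rating scale, and all numbers below are invented for illustration; the study used real examiner ratings on face pairs.

```python
import numpy as np

# Hypothetical rating-based identity judgments: each examiner rates each face
# pair on a signed scale (negative = different persons, positive = same person).
rng = np.random.default_rng(0)
true_same = np.repeat([1, 0], 50)  # 50 same-person and 50 different-person pairs

def examiner(skill):
    # Noisy ratings centered on +skill for same pairs, -skill for different pairs.
    return np.where(true_same == 1, 1.0, -1.0) * skill + rng.normal(0, 1.0, true_same.size)

ratings = np.stack([examiner(s) for s in (0.8, 1.0, 1.2)])  # three examiners
fused = ratings.mean(axis=0)                                # fusion = average of ratings

def accuracy(scores):
    # A rating above zero is taken as a "same person" judgment.
    return ((scores > 0) == (true_same == 1)).mean()

for i, r in enumerate(ratings):
    print(f"examiner {i}: {accuracy(r):.2f}")
print(f"fused:      {accuracy(fused):.2f}")
```

Averaging suppresses the independent noise of individual examiners, which is why fused judgments score higher and vary less than individuals working alone.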


Subject(s)
Algorithms, Biometric Identification/methods, Face/anatomy & histology, Forensic Sciences/methods, Humans, Machine Learning, Reproducibility of Results
3.
J Opt Soc Am A Opt Image Sci Vis ; 31(5): 1090-103, 2014 May 01.
Article in English | MEDLINE | ID: mdl-24979642

ABSTRACT

In recent years, sparse representation and dictionary-learning-based methods have emerged as powerful tools for efficiently processing data in nontraditional ways. A particular area of promise for these theories is face recognition. In this paper, we review the role of sparse representation and dictionary learning for efficient face identification and verification. Recent face recognition algorithms from still images, videos, and ambiguously labeled imagery are reviewed. In particular, discriminative dictionary learning algorithms as well as methods based on weakly supervised learning and domain adaptation are summarized. Some of the compelling challenges and issues that confront research in face recognition using sparse representations and dictionary learning are outlined.
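The identification idea behind sparse-representation classifiers can be sketched in a few lines. This toy version substitutes least-squares, class-wise reconstruction residuals for the l1-regularized sparse coding used in the actual algorithms, and all data are synthetic; it only illustrates the "classify by which dictionary reconstructs the probe best" principle.

```python
import numpy as np

# Toy stand-in for sparse-representation classification (SRC): each subject's
# training images form a dictionary; a probe is assigned to the subject whose
# dictionary reconstructs it with the lowest residual.
rng = np.random.default_rng(1)
dim, per_class = 20, 8
subjects = {c: rng.normal(size=(dim, 1)) @ np.ones((1, per_class))
               + 0.1 * rng.normal(size=(dim, per_class))
            for c in range(3)}            # each subject spans a noisy 1-D subspace

def identify(y):
    residuals = {}
    for c, D in subjects.items():
        coeffs, *_ = np.linalg.lstsq(D, y, rcond=None)   # least-squares coding
        residuals[c] = np.linalg.norm(y - D @ coeffs)    # reconstruction residual
    return min(residuals, key=residuals.get)

probe = subjects[2][:, 0] + 0.05 * rng.normal(size=dim)  # noisy probe of subject 2
print(identify(probe))
```

In the reviewed methods the coding step solves a sparsity-promoting optimization over a joint (often learned, discriminative) dictionary, which is what gives robustness to occlusion and corruption.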

4.
Diabetes Metab Syndr ; 17(3): 102732, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36867973

ABSTRACT

AIMS: Although obesity is associated with chronic disease, a large section of the population with high BMI does not have an increased risk of metabolic disease. Increased visceral adiposity and sarcopenia are also risk factors for metabolic disease in people with normal BMI. Artificial intelligence (AI) techniques can help assess and analyze body composition parameters for predicting cardiometabolic health. The purpose of this study was to systematically explore the literature on AI techniques for body composition assessment and observe general trends. METHODS: We searched the following databases: Embase, Web of Science, and PubMed. There were 354 search results in total. After removing duplicates, irrelevant studies, and reviews (303 in total), 51 studies were included in the systematic review. RESULTS: AI techniques have been studied for body composition analysis in the context of diabetes mellitus, hypertension, cancer, and many specialized diseases. Imaging techniques employed for AI methods include CT (computerized tomography), MRI (magnetic resonance imaging), ultrasonography, plethysmography, and EKG (electrocardiogram). Automatic segmentation of body composition by deep learning with convolutional networks has helped determine and quantify muscle mass. Limitations include heterogeneity of study populations, inherent bias in sampling, and lack of generalizability. Different bias mitigation strategies should be evaluated to address these problems and improve the applicability of AI to body composition analysis. CONCLUSIONS: AI-assisted measurement of body composition might assist in improved cardiovascular risk stratification when applied in the appropriate clinical context.


Subject(s)
Artificial Intelligence, Hypertension, Humans, Body Composition, Electrocardiography, Heart Disease Risk Factors
5.
IEEE Trans Pattern Anal Mach Intell ; 45(11): 13054-13067, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37335791

ABSTRACT

Adversarial training (AT) is considered one of the most reliable defenses against adversarial attacks. However, models trained with AT sacrifice standard accuracy and do not generalize well to unseen attacks. Recent works show generalization improvement with adversarial samples under unseen threat models, such as the on-manifold threat model or the neural perceptual threat model. However, the former requires exact manifold information while the latter requires algorithmic relaxation. Motivated by these considerations, we propose a novel threat model called the Joint Space Threat Model (JSTM), which exploits the underlying manifold information with Normalizing Flow, ensuring that the exact manifold assumption holds. Under JSTM, we develop novel adversarial attacks and defenses. Specifically, we propose the Robust Mixup strategy, in which we maximize the adversity of the interpolated images to gain robustness and prevent overfitting. Our experiments show that Interpolated Joint Space Adversarial Training (IJSAT) achieves good performance in standard accuracy, robustness, and generalization. IJSAT is also flexible: it can be used as a data augmentation method to improve standard accuracy, and it can be combined with many existing AT approaches to improve robustness. We demonstrate the effectiveness of our approach on three benchmark datasets: CIFAR-10/100, OM-ImageNet, and CIFAR-10-C.
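The interpolation at the heart of mixup-style augmentation is simple to sketch. The Robust Mixup strategy additionally maximizes the adversity of the interpolated images; that adversarial step is omitted here, so this is only the plain mixup baseline on toy arrays.

```python
import numpy as np

# Plain mixup: convexly combine two inputs and their one-hot labels with a
# Beta-distributed weight. (Robust Mixup would further perturb x adversarially.)
rng = np.random.default_rng(0)

def mixup(x1, y1, x2, y2, alpha=1.0):
    lam = rng.beta(alpha, alpha)     # interpolation weight ~ Beta(alpha, alpha)
    x = lam * x1 + (1 - lam) * x2    # interpolated image
    y = lam * y1 + (1 - lam) * y2    # interpolated (soft) label
    return x, y

x_mix, y_mix = mixup(np.zeros((4, 4)), np.array([1.0, 0.0]),
                     np.ones((4, 4)),  np.array([0.0, 1.0]))
print(y_mix)   # soft label; its entries still sum to 1
```

Training on such interpolated pairs regularizes the model toward linear behavior between examples, which is the overfitting-prevention effect the abstract refers to.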

6.
Trends Cogn Sci ; 26(2): 174-187, 2022 02.
Article in English | MEDLINE | ID: mdl-34955426

ABSTRACT

Deep learning (DL) is being successfully applied across multiple domains, yet these models learn in a most artificial way: they require large quantities of labeled data to grasp even simple concepts. Thus, the main bottleneck is often access to supervised data. Here, we highlight a trend in a potential solution to this challenge: synthetic data. Synthetic data are becoming accessible due to progress in rendering pipelines, generative adversarial models, and fusion models. Moreover, advancements in domain adaptation techniques help close the statistical gap between synthetic and real data. Paradoxically, this artificial solution is also likely to enable more natural learning, as seen in biological systems, including continual, multimodal, and embodied learning. Complementary to this, simulators and deep neural networks (DNNs) will also have a critical role in providing insight into the cognitive and neural functioning of biological systems. We also review the strengths of, and opportunities and novel challenges associated with, synthetic data.


Subject(s)
Deep Learning, Humans, Neural Networks, Computer
7.
Appl Opt ; 50(10): 1425-33, 2011 Apr 01.
Article in English | MEDLINE | ID: mdl-21460910

ABSTRACT

We present an automatic target recognition algorithm using the recently developed theory of sparse representations and compressive sensing. We show how sparsity can be helpful for efficient utilization of data for target recognition. We verify the efficacy of the proposed algorithm in terms of the recognition rate and confusion matrices on the well known Comanche (Boeing-Sikorsky, USA) forward-looking IR data set consisting of ten different military targets at different orientations.

8.
IEEE Trans Pattern Anal Mach Intell ; 43(6): 1914-1927, 2021 06.
Article in English | MEDLINE | ID: mdl-31804929

ABSTRACT

In this article, we propose a novel object detection algorithm named "Deep Regionlets" that integrates deep neural networks and a conventional detection schema for accurate generic object detection. Motivated by the effectiveness of regionlets for modeling object deformations and multiple aspect ratios, we incorporate regionlets into an end-to-end trainable deep learning framework. The deep regionlets framework consists of a region selection network and a deep regionlet learning module. Specifically, given a detection bounding box proposal, the region selection network provides guidance on where to select the sub-regions from which features are learned. An object proposal typically contains 3 to 16 sub-regions. The regionlet learning module focuses on local feature selection and transformation to alleviate the effects of appearance variations. To this end, we first realize non-rectangular region selection within the detection framework to accommodate variations in object appearance. Moreover, we design a "gating network" within the regionlet learning module to enable instance-dependent soft feature selection and pooling. The Deep Regionlets framework is trained end-to-end without additional effort. We present ablation studies and extensive experiments on the PASCAL VOC dataset and the Microsoft COCO dataset. The proposed method yields competitive performance over state-of-the-art algorithms, such as RetinaNet and Mask R-CNN, even without additional segmentation labels.
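The "gating network" idea, instance-dependent soft feature selection followed by pooling, can be sketched abstractly. The shapes, the sigmoid gate, and the weighted-average pooling below are illustrative assumptions; the paper's module operates on learned regionlet features inside a detection network.

```python
import numpy as np

# Hypothetical sketch of soft gating over sub-region features: each sub-region
# gets a data-dependent gate in (0, 1), and gated features are pooled.
rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

features = rng.normal(size=(9, 16))      # 9 sub-regions x 16-d features (toy sizes)
W = 0.1 * rng.normal(size=(16,))         # gate parameters (learned in practice)
gates = sigmoid(features @ W)            # one soft gate per sub-region
pooled = (gates[:, None] * features).sum(axis=0) / gates.sum()  # weighted pooling
print(pooled.shape)
```

Because the gates depend on the features themselves, different instances emphasize different sub-regions, which is how soft selection accommodates appearance variation.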

9.
IEEE Trans Pattern Anal Mach Intell ; 31(5): 884-99, 2009 May.
Article in English | MEDLINE | ID: mdl-19299862

ABSTRACT

We present a nonstationary stochastic filtering framework for the task of albedo estimation from a single image. There are several approaches in the literature for albedo estimation, but few include the errors in estimates of surface normals and light source direction to improve the albedo estimate. The proposed approach effectively utilizes the error statistics of surface normals and illumination direction for robust estimation of albedo, for images illuminated by single and multiple light sources. The albedo estimate obtained is subsequently used to generate albedo-free normalized images for recovering the shape of an object. Traditional Shape-from-Shading (SFS) approaches often assume constant/piecewise constant albedo and known light source direction to recover the underlying shape. Using the estimated albedo, the general problem of estimating the shape of an object with varying albedo map and unknown illumination source is reduced to one that can be handled by traditional SFS approaches. Experimental results are provided to show the effectiveness of the approach and its application to illumination-invariant matching and shape recovery. The estimated albedo maps are compared with the ground truth. The maps are used as illumination-invariant signatures for the task of face recognition across illumination variations. The recognition results obtained compare well with the current state-of-the-art approaches. Impressive shape recovery results are obtained using images downloaded from the Web with little control over imaging conditions. The recovered shapes are also used to synthesize novel views under novel illumination conditions.
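Under the Lambertian model the image intensity is I = rho (n . s), so a naive per-pixel albedo estimate simply divides intensity by shading. The sketch below shows only that naive inversion on synthetic data; the paper's contribution, weighting the estimate by the error statistics of the surface normals and illumination direction, is omitted.

```python
import numpy as np

# Naive Lambertian albedo recovery on a synthetic flat surface.
rng = np.random.default_rng(0)
h, w = 8, 8
normals = np.dstack([np.zeros((h, w)), np.zeros((h, w)), np.ones((h, w))])  # flat
light = np.array([0.3, 0.2, 0.93])
light = light / np.linalg.norm(light)        # unit light-source direction

true_albedo = rng.uniform(0.2, 0.9, (h, w))
shading = np.clip(normals @ light, 1e-6, None)   # n . s per pixel
image = true_albedo * shading                    # Lambertian image formation

est_albedo = image / shading                     # naive inversion
print(np.abs(est_albedo - true_albedo).max())
```

With noisy normals and an estimated light direction this division amplifies errors, which is why the paper filters the estimate using the error statistics instead of inverting directly.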


Subject(s)
Algorithms, Artificial Intelligence, Image Interpretation, Computer-Assisted/methods, Imaging, Three-Dimensional/methods, Lighting/methods, Pattern Recognition, Automated/methods, Photometry/methods, Humans, Image Enhancement/methods, Reproducibility of Results, Sensitivity and Specificity
10.
IEEE Trans Image Process ; 18(9): 2114-26, 2009 Sep.
Article in English | MEDLINE | ID: mdl-19423441

ABSTRACT

We present a completely automatic algorithm for initializing and tracking the articulated motion of humans using image sequences obtained from multiple cameras. A detailed articulated human body model composed of sixteen rigid segments that allows both translation and rotation at joints is used. Voxel data of the subject obtained from the images is segmented into the different articulated chains using Laplacian Eigenmaps. The segmented chains are registered in a subset of the frames using a single-frame registration technique and subsequently used to initialize the pose in the sequence. A temporal registration method is proposed to identify the partially segmented or unregistered articulated chains in the remaining frames in the sequence. The proposed tracker uses motion cues such as pixel displacement as well as 2-D and 3-D shape cues such as silhouettes, motion residue, and skeleton curves. The tracking algorithm consists of a predictor that uses motion cues and a corrector that uses shape cues. The use of complementary cues in the tracking alleviates the twin problems of drift and convergence to local minima. The use of multiple cameras also allows us to deal with the problems due to self-occlusion and kinematic singularity. We present tracking results on sequences with different kinds of motion to illustrate the effectiveness of our approach. The pose of the subject is correctly tracked for the duration of the sequence as can be verified by inspection.


Subject(s)
Algorithms, Image Processing, Computer-Assisted/methods, Models, Anatomic, Models, Biological, Movement/physiology, Humans, Posture/physiology, Skeleton, Video Recording/methods
11.
IEEE Trans Image Process ; 18(4): 889-902, 2009 Apr.
Article in English | MEDLINE | ID: mdl-19278925

ABSTRACT

A general transform, called the geometric transform (GeT), that models the appearance inside a closed contour is proposed. The proposed GeT is a functional of an image intensity function and a region indicator function derived from a closed contour. It can be designed to combine the shape and appearance information at different resolutions and to generate models invariant to deformation, articulation, or occlusion. By choosing appropriate functionals and region indicator functions, the GeT unifies Radon transform, trace transform, and a class of image warpings. By varying the region indicator and the types of features used for appearance modeling, five novel types of GeTs are introduced and applied to fingerprinting the appearance inside a contour. They include the GeTs based on a level set, shape matching, feature curves, and the GeT invariant to occlusion, and a multiresolution GeT (MRGeT). Applications of GeT to pedestrian identity recognition, human body part segmentation, and image synthesis are illustrated. The proposed approach produces promising results when applied to fingerprinting the appearance of a human and body parts despite the presence of nonrigid deformations and articulated motion.
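One of the transforms the GeT unifies, the Radon transform, integrates the image along parallel lines, restricted here by a region indicator. The discrete sketch below uses only the two axis-aligned projection angles (0 and 90 degrees), where line integrals reduce to column and row sums; the image and region are toy data.

```python
import numpy as np

# Minimal discrete Radon-style projections of a region-masked image.
img = np.zeros((5, 5))
img[1:4, 2] = 1.0            # a short vertical bar
region = img > 0             # region indicator (stands in for a closed contour)

proj_0 = (img * region).sum(axis=0)    # integrate along columns (0 degrees)
proj_90 = (img * region).sum(axis=1)   # integrate along rows (90 degrees)
print(proj_0.tolist(), proj_90.tolist())
```

A full Radon transform would sweep the projection angle continuously; the GeT further generalizes the integrand and the region indicator to fingerprint appearance inside the contour.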

12.
IEEE Trans Image Process ; 18(6): 1326-39, 2009 Jun.
Article in English | MEDLINE | ID: mdl-19398409

ABSTRACT

Pattern recognition in video is a challenging task because of the multitude of spatio-temporal variations that occur in different videos capturing the exact same event. While traditional pattern-theoretic approaches account for the spatial changes that occur due to lighting and pose, very little has been done to address the effect of temporal rate changes in the executions of an event. In this paper, we provide a systematic model-based approach to learn the nature of such temporal variations (time warps) while simultaneously allowing for the spatial variations in the descriptors. We illustrate our approach for the problem of action recognition and provide experimental justification for the importance of accounting for rate variations in action recognition. The model is composed of a nominal activity trajectory and a function space capturing the probability distribution of activity-specific time warping transformations. We use the square-root parameterization of time warps to derive geodesics, distance measures, and probability distributions on the space of time warping functions. We then design a Bayesian algorithm which treats the execution rate function as a nuisance variable and integrates it out using Monte Carlo sampling, to generate estimates of class posteriors. This approach allows us to learn the space of time warps for each activity while simultaneously capturing other intra- and interclass variations. Next, we discuss a special case of this approach which assumes a uniform distribution on the space of time warping functions and show how computationally efficient inference algorithms may be derived for this special case. We discuss the relative advantages and disadvantages of both approaches and show their efficacy using experiments on gait-based person identification and activity recognition.
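The square-root parameterization mentioned above maps a time warp gamma: [0,1] -> [0,1] to psi = sqrt(gamma'), so that the integral of psi^2 equals gamma(1) - gamma(0) = 1 and every warp lands on the unit sphere in L^2, where geodesics and distances are straightforward. A numerical check on one example warp (gamma(t) = t^2, chosen arbitrarily):

```python
import numpy as np

# Verify numerically that the square-root slope function of a time warp has
# unit L2 norm: integral of psi^2 dt = integral of gamma' dt = 1.
t = np.linspace(0.0, 1.0, 1001)
gamma = t ** 2                          # a valid warp: increasing, gamma(0)=0, gamma(1)=1
psi = np.sqrt(np.gradient(gamma, t))    # square-root slope function psi = sqrt(gamma')

integral = float(np.sum(psi[:-1] ** 2 * np.diff(t)))   # left Riemann sum
print(integral)
```

This spherical geometry is what lets the paper derive geodesics, distance measures, and probability distributions on the space of time warping functions.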


Subject(s)
Algorithms, Models, Statistical, Movement/physiology, Pattern Recognition, Automated/methods, Anthropometry, Bayes Theorem, Gait/physiology, Humans, Monte Carlo Method, Video Recording
14.
IEEE Trans Pattern Anal Mach Intell ; 41(1): 121-135, 2019 01.
Article in English | MEDLINE | ID: mdl-29990235

ABSTRACT

We present an algorithm for simultaneous face detection, landmark localization, pose estimation, and gender recognition using deep convolutional neural networks (CNNs). The proposed method, called HyperFace, fuses the intermediate layers of a deep CNN using a separate CNN, followed by a multi-task learning algorithm that operates on the fused features. It exploits the synergy among the tasks, which boosts their individual performances. Additionally, we propose two variants of HyperFace: (1) HyperFace-ResNet, which builds on the ResNet-101 model and achieves significant improvement in performance, and (2) Fast-HyperFace, which uses a high-recall fast face detector for generating region proposals to improve the speed of the algorithm. Extensive experiments show that the proposed models are able to capture both global and local information in faces and perform significantly better than many competitive algorithms on each of these four tasks.


Subject(s)
Deep Learning, Face/diagnostic imaging, Image Processing, Computer-Assisted/methods, Pattern Recognition, Automated/methods, Posture/physiology, Algorithms, Female, Gender Identity, Humans, Male
16.
IEEE Trans Pattern Anal Mach Intell ; 30(10): 1771-85, 2008 Oct.
Article in English | MEDLINE | ID: mdl-18703830

ABSTRACT

We propose a general approach using Laplacian Eigenmaps and a graphical model of the human body to segment 3D voxel data of humans into different articulated chains. In the bottom-up stage, the voxels are transformed into a high-dimensional (6D or less) Laplacian Eigenspace (LE) of the voxel neighborhood graph. We show that LE is effective at mapping voxels on long articulated chains to nodes on smooth 1D curves that can be easily discriminated, and prove these properties using representative graphs. We fit 1D splines to voxels belonging to different articulated chains such as the limbs, head and trunk, and we determine the boundary between splines by thresholding the spline fit error, which is high at junctions. A top-down probabilistic approach is then used to register the segmented chains, utilizing both their mutual connectivity and their individual properties such as length and thickness. Our approach enables us to deal with complex poses such as those where the limbs form loops. We use the segmentation results to automatically estimate the human body models. Although we use human subjects in our experiments, the method is fairly general and can be applied to voxel-based registration of any articulated object, which is composed of long chains. We present results on real and synthetic data that illustrate the usefulness of this approach.
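The key property exploited above, that the Laplacian Eigenmap sends an articulated chain to a smooth 1-D curve, can be checked on the simplest chain: a path graph. The embedding below uses the unnormalized graph Laplacian on a hand-built 10-node chain (toy data; the paper works with the voxel neighborhood graph in up to 6 dimensions).

```python
import numpy as np

# Laplacian Eigenmap of a 10-node path graph: the first nontrivial eigenvector
# (the Fiedler vector) orders the nodes monotonically along the chain.
n = 10
A = np.zeros((n, n))
for i in range(n - 1):                   # chain: node i connected to node i+1
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(axis=1)) - A           # unnormalized graph Laplacian

vals, vecs = np.linalg.eigh(L)           # eigenvalues in ascending order
embedding = vecs[:, 1]                   # Fiedler vector = 1st embedding coordinate
print(np.all(np.diff(embedding) > 0) or np.all(np.diff(embedding) < 0))
```

Because each limb maps to such a monotone 1-D curve in eigenspace, fitting 1-D splines and thresholding the fit error at junctions separates the articulated chains.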


Subject(s)
Image Enhancement/methods, Image Interpretation, Computer-Assisted/methods, Imaging, Three-Dimensional/methods, Joints/anatomy & histology, Models, Anatomic, Pattern Recognition, Automated/methods, Whole Body Imaging/methods, Algorithms, Artificial Intelligence, Computer Simulation, Humans, Reproducibility of Results, Sensitivity and Specificity
17.
IEEE Trans Pattern Anal Mach Intell ; 30(3): 463-76, 2008 Mar.
Article in English | MEDLINE | ID: mdl-18195440

ABSTRACT

Behavior analysis of social insects has gained momentum in recent years and has led to advances in fields such as control systems and flight navigation. Manual labeling of insect motions, required for analyzing insect behavior, demands a significant investment of time and effort. In this paper, we propose general principles that enable simultaneous automatic tracking and behavior analysis, with applications to tracking bees and recognizing specific behaviors they exhibit. The state space for tracking is defined by the position, orientation, and current behavior of the insect being tracked. The position and orientation are parametrized using a shape model, while the behavior is explicitly modeled using a three-tier hierarchical motion model. The first tier (dynamics) models the local motions exhibited, and the models built in this tier act as a vocabulary for behavior modeling. The second tier is a Markov motion model built on top of the local motion vocabulary, which serves as the behavior model. The third tier of the hierarchy models the switching between behaviors, and this is also modeled as a Markov model. We address issues in learning the three-tier behavioral model, discriminating between models, and detecting and modeling abnormal behaviors. Another important aspect of this work is that it leads to joint tracking and behavior analysis instead of the traditional track-then-recognize approach. We apply these principles to tracking bees in a hive while they execute the waggle dance and the round dance.
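The top tier of the hierarchy, switching between behaviors as a Markov chain, is easy to sketch. The behavior names and transition probabilities below are invented for illustration; in the paper they are learned from tracked bee motion.

```python
import numpy as np

# Toy Markov model of behavior switching (third tier of the hierarchy).
rng = np.random.default_rng(0)
behaviors = ["waggle", "round", "idle"]
P = np.array([[0.8, 0.1, 0.1],       # row i: transition probabilities out of
              [0.1, 0.8, 0.1],       # behavior i (each row sums to 1)
              [0.2, 0.2, 0.6]])

state = 0
sequence = []
for _ in range(100):                 # simulate 100 frames of behavior labels
    sequence.append(behaviors[state])
    state = rng.choice(3, p=P[state])
print(len(sequence))
```

In the full model, each behavior state in turn emits local motions from the second-tier Markov motion model, so tracking and behavior recognition are inferred jointly rather than sequentially.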


Subject(s)
Animal Communication, Artificial Intelligence, Bees/physiology, Dancing/physiology, Image Interpretation, Computer-Assisted/methods, Pattern Recognition, Automated/methods, Social Behavior, Algorithms, Animals, Bees/anatomy & histology, Female, Image Enhancement/methods, Imaging, Three-Dimensional/methods, Information Storage and Retrieval/methods, Male, Reproducibility of Results, Sensitivity and Specificity
18.
IEEE Trans Image Process ; 17(5): 737-48, 2008 May.
Article in English | MEDLINE | ID: mdl-18390378

ABSTRACT

In this paper, we analyze the computational challenges in implementing particle filtering, especially as applied to video sequences. Particle filtering is a technique for filtering nonlinear dynamical systems driven by non-Gaussian noise processes. It has found widespread application in detection, navigation, and tracking problems. Although particle filtering methods generally yield improved results, it is difficult to achieve real-time performance. In this paper, we analyze the computational drawbacks of traditional particle filtering algorithms and present a method for implementing the particle filter using the Independent Metropolis-Hastings sampler, which is highly amenable to pipelined implementations and parallelization. We analyze implementations of the proposed algorithm and, in particular, concentrate on implementations that have minimum processing times. It is shown that the design parameters for the fastest implementation can be chosen by solving a set of convex programs. The proposed computational methodology was verified using a cluster of PCs for the application of visual tracking. We demonstrate a linear speed-up of the algorithm using the methodology proposed in the paper.
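A minimal bootstrap particle filter shows the predict/weight/resample loop whose resampling step is the parallelization bottleneck the paper targets. This sketch tracks a 1-D random walk observed in Gaussian noise with plain multinomial resampling; the paper's contribution is replacing that step with an Independent Metropolis-Hastings sampler, which is not shown here.

```python
import numpy as np

# Bootstrap particle filter for a 1-D random-walk state with Gaussian observations.
rng = np.random.default_rng(0)
T, N = 50, 500
true_x = np.cumsum(rng.normal(0, 0.5, T))    # hidden state trajectory
obs = true_x + rng.normal(0, 1.0, T)         # noisy observations

particles = rng.normal(0, 1.0, N)
estimates = []
for y in obs:
    particles = particles + rng.normal(0, 0.5, N)     # predict: propagate dynamics
    weights = np.exp(-0.5 * (y - particles) ** 2)     # weight: observation likelihood
    weights /= weights.sum()
    estimates.append(np.dot(weights, particles))      # posterior-mean estimate
    idx = rng.choice(N, size=N, p=weights)            # multinomial resampling
    particles = particles[idx]

rmse = np.sqrt(np.mean((np.array(estimates) - true_x) ** 2))
print(rmse)
```

The resampling draw depends on the full normalized weight vector, which serializes the loop; sampler variants like Independent Metropolis-Hastings relax that dependency and enable pipelining.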


Subject(s)
Algorithms, Data Compression/methods, Image Enhancement/methods, Image Interpretation, Computer-Assisted/methods, Pattern Recognition, Automated/methods, Signal Processing, Computer-Assisted, Video Recording/methods, Artificial Intelligence, Computer Simulation, Models, Statistical, Reproducibility of Results, Sensitivity and Specificity
19.
IEEE Trans Image Process ; 27(4): 2022-2037, 2018 Apr.
Article in English | MEDLINE | ID: mdl-29989985

ABSTRACT

The use of multiple features has been shown to be an effective strategy for visual tracking because of their complementary contributions to appearance modeling. The key problem is how to learn a fused representation from multiple features for appearance modeling. Different features extracted from the same object should share some commonalities in their representations while each feature should also have some feature-specific representation patterns which reflect its complementarity in appearance modeling. Different from existing multi-feature sparse trackers which only consider the commonalities among the sparsity patterns of multiple features, this paper proposes a novel multiple sparse representation framework for visual tracking which jointly exploits the shared and feature-specific properties of different features by decomposing multiple sparsity patterns. Moreover, we introduce a novel online multiple metric learning to efficiently and adaptively incorporate the appearance proximity constraint, which ensures that the learned commonalities of multiple features are more representative. Experimental results on tracking benchmark videos and other challenging videos demonstrate the effectiveness of the proposed tracker.

20.
IEEE Trans Pattern Anal Mach Intell ; 40(7): 1653-1667, 2018 07.
Article in English | MEDLINE | ID: mdl-28692963

ABSTRACT

Learning a classifier from ambiguously labeled face images is challenging because training images are not always explicitly labeled. For instance, the face images of two persons in a news photo are not explicitly labeled by their names in the caption. We propose a Matrix Completion for Ambiguity Resolution (MCar) method for predicting the actual labels from ambiguously labeled images. This step is followed by learning a standard supervised classifier from the disambiguated labels to classify new images. To prevent the majority labels from dominating the result of MCar, we generalize MCar to a weighted MCar (WMCar) that handles label imbalance. Since WMCar outputs a soft labeling vector of reduced ambiguity for each instance, we can iteratively refine it by feeding it back as the input to WMCar. Nevertheless, such an iterative implementation can be affected by noisy soft labeling vectors, and thus performance may degrade. Our proposed Iterative Candidate Elimination (ICE) procedure makes iterative ambiguity resolution possible by gradually eliminating a portion of the least likely candidates in ambiguously labeled faces. We further extend MCar to incorporate labeling constraints among instances when such prior knowledge is available. Compared to existing methods, our approach demonstrates improvements on several ambiguously labeled datasets.
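The candidate-elimination idea can be sketched on one toy instance: carry a soft label vector over the candidate set and repeatedly drop the least likely remaining candidate. This is only the elimination loop; in MCar/ICE the soft vectors themselves come from (weighted) matrix completion and are re-estimated between eliminations.

```python
import numpy as np

# Iteratively eliminate the least likely candidate label of one instance.
def eliminate(soft, candidates):
    # soft: probability per class; candidates: labels still in the ambiguous set
    if len(candidates) <= 1:
        return candidates
    worst = min(candidates, key=lambda c: soft[c])   # least likely candidate
    return [c for c in candidates if c != worst]

soft = np.array([0.5, 0.1, 0.4])    # toy soft labeling vector over 3 classes
cands = [0, 1, 2]                   # initial ambiguous candidate set
while len(cands) > 1:
    cands = eliminate(soft, cands)
print(cands)
```

Eliminating gradually, rather than committing to the argmax at once, is what keeps noisy soft labeling vectors from derailing the iterative refinement.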


Subject(s)
Face/anatomy & histology, Image Processing, Computer-Assisted/methods, Machine Learning, Pattern Recognition, Automated/methods, Algorithms, Biometric Identification, Databases, Factual, Humans