1.
Phys Med ; 125: 104505, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39208517

ABSTRACT

PURPOSE: The purpose of this study is to develop an automated deep learning method for the reliable and precise quantification of left ventricle structure and function from echocardiogram videos, eliminating the need to identify end-systolic and end-diastolic frames. This addresses the variability and potential inaccuracies associated with manual quantification, aiming to improve the diagnosis and management of cardiovascular conditions. METHODS: A single, fully automated multitask network, the EchoFused Network (EFNet), is introduced that simultaneously addresses both the left ventricle segmentation and ejection fraction estimation tasks through cross-module fusion. Our proposed approach uses semi-supervised learning to estimate the ejection fraction from the entire cardiac cycle, yielding more dependable estimates and obviating the need to identify specific frames. To facilitate joint optimization, the losses from the task-specific modules are combined using a normalization technique that places them on a comparable scale. RESULTS: Assessment of the proposed model on a publicly available dataset, EchoNet-Dynamic, shows significant performance improvement, achieving an MAE of 4.35% for ejection fraction estimation and DSC values of 0.9309 (end-diastolic) and 0.9135 (end-systolic) for left ventricle segmentation. CONCLUSIONS: The study demonstrates the efficacy of EFNet, a multitask deep learning network, in simultaneously quantifying left ventricle structure and function through cross-module fusion on echocardiogram data.
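A minimal sketch of one way such task losses could be normalized onto a comparable scale is given below; the class name, momentum value, and running-magnitude scheme are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: combining a segmentation loss and an ejection-fraction
# regression loss on a comparable scale, in the spirit of the normalization
# described in the abstract. Each loss is rescaled by a running estimate of
# its own magnitude before the two are summed.
class NormalizedMultitaskLoss(nn.Module):
    def __init__(self, momentum: float = 0.99):
        super().__init__()
        self.momentum = momentum
        # Running magnitudes for each task loss (segmentation, regression).
        self.register_buffer("seg_scale", torch.tensor(1.0))
        self.register_buffer("ef_scale", torch.tensor(1.0))

    def forward(self, seg_loss: torch.Tensor, ef_loss: torch.Tensor) -> torch.Tensor:
        # Update the running magnitudes without tracking gradients.
        with torch.no_grad():
            self.seg_scale.mul_(self.momentum).add_((1 - self.momentum) * seg_loss.detach())
            self.ef_scale.mul_(self.momentum).add_((1 - self.momentum) * ef_loss.detach())
        # Rescale each loss so both contribute on a comparable scale.
        return seg_loss / (self.seg_scale + 1e-8) + ef_loss / (self.ef_scale + 1e-8)
```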


Subjects
Deep Learning , Echocardiography , Heart Ventricles , Image Processing, Computer-Assisted , Heart Ventricles/diagnostic imaging , Humans , Image Processing, Computer-Assisted/methods , Ventricular Function, Left
2.
Diagnostics (Basel) ; 13(13)2023 Jun 24.
Article in English | MEDLINE | ID: mdl-37443550

ABSTRACT

Echocardiography is one of the imaging modalities most often used to assess heart anatomy and function. Left ventricle ejection fraction (LVEF) is an important clinical variable assessed from echocardiography via the measurement of left ventricle (LV) parameters. Significant inter-observer and intra-observer variability arises when cardiologists quantify LVEF from large volumes of echocardiography data. Machine learning algorithms can analyze such extensive datasets and identify intricate patterns of cardiac structure and function that even highly skilled observers might overlook, paving the way for computer-assisted diagnostics in this field. In this study, LV segmentation is performed on echocardiogram data, followed by feature extraction from the left ventricle based on clinical methods. The extracted features are then analyzed using both neural networks and traditional machine learning algorithms to estimate the LVEF. The results indicate that applying machine learning techniques to the features extracted from the left ventricle yields higher accuracy than Simpson's method for estimating the LVEF. The evaluations are performed on a publicly available echocardiogram dataset, EchoNet-Dynamic. The best results are obtained when DeepLab, a convolutional neural network architecture, is used for LV segmentation together with Long Short-Term Memory (LSTM) networks for LVEF regression, yielding a Dice similarity coefficient of 0.92 and a mean absolute error of 5.736%.
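For context, the clinical baseline the study compares against is Simpson's method of disks; a small illustrative sketch follows, using the single-plane simplification and made-up measurements rather than data from the paper.

```python
import numpy as np

# Illustrative sketch (not from the paper): single-plane Simpson's method of
# disks. The LV cavity is sliced into N disks along its long axis, each disk's
# volume is computed from its measured diameter, and LVEF is derived from the
# end-diastolic and end-systolic volumes.

def simpson_volume(diameters: np.ndarray, long_axis_length: float) -> float:
    """Approximate LV volume from per-disk diameters (same length units)."""
    n = len(diameters)
    disk_height = long_axis_length / n
    return float(np.sum(np.pi * (diameters / 2.0) ** 2 * disk_height))

def ejection_fraction(edv: float, esv: float) -> float:
    """LVEF (%) from end-diastolic and end-systolic volumes."""
    return 100.0 * (edv - esv) / edv

# Example with made-up measurements (cm):
ed_diameters = np.array([3.2, 4.1, 4.5, 4.4, 3.9, 3.0, 1.8])
es_diameters = np.array([2.4, 3.0, 3.3, 3.2, 2.8, 2.1, 1.2])
edv = simpson_volume(ed_diameters, long_axis_length=8.5)
esv = simpson_volume(es_diameters, long_axis_length=7.8)
print(f"LVEF ~= {ejection_fraction(edv, esv):.1f}%")
```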

3.
IEEE Trans Cybern ; 51(9): 4515-4527, 2021 Sep.
Article in English | MEDLINE | ID: mdl-31880579

ABSTRACT

This article presents an efficient fingerprint identification system that applies an initial classification for search-space reduction, followed by minutiae neighbor-based feature encoding and matching. Current state-of-the-art fingerprint classification methods use a deep convolutional neural network (DCNN) to assign a confidence to the classification prediction, and based on this prediction the input fingerprint is matched only against the subset of the database belonging to the predicted class. For DCNNs, as the architectures deepen, the deeper layers learn more abstract information from the input images, which results in higher prediction accuracies. The downside is that DCNNs are data hungry and require large amounts of annotated (labeled) data to learn generalized network parameters for the deeper layers. In this article, a shallow multifeature view CNN (SMV-CNN) fingerprint classifier is proposed that extracts: 1) fine-grained features from the input image and 2) abstract features from explicitly derived representations obtained from the input image. The multifeature views are fed to a fully connected neural network (NN) to compute a global classification prediction. The classification results show that the SMV-CNN improves on a baseline single-grayscale-view CNN by 2.8% on an open-source database. Moreover, compared with the state-of-the-art residual network (ResNet-50) image classification model, the proposed method performs comparably while being less complex and more efficient to train. Classification-based fingerprint identification reduces the search space by over 50% without degrading identification accuracy.
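A minimal sketch of the multifeature-view idea follows, assuming a small convolutional branch per input view whose pooled features are concatenated into a fully connected classifier; the branch sizes, number of views, and five-class output are illustrative assumptions, not the SMV-CNN's actual architecture.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: one small convolutional branch per view (e.g. the
# grayscale image plus derived representations such as an orientation or
# enhancement map), with branch outputs concatenated and classified by a
# fully connected network.
class ViewBranch(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling -> 32-d feature per view
        )

    def forward(self, x):
        return self.net(x).flatten(1)

class ShallowMultiViewCNN(nn.Module):
    def __init__(self, num_views: int = 3, num_classes: int = 5):
        super().__init__()
        self.branches = nn.ModuleList([ViewBranch() for _ in range(num_views)])
        self.classifier = nn.Sequential(
            nn.Linear(32 * num_views, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, views):
        # `views` is a list of tensors, one per view, each of shape (B, 1, H, W).
        feats = torch.cat([b(v) for b, v in zip(self.branches, views)], dim=1)
        return self.classifier(feats)
```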


Subjects
Neural Networks, Computer
4.
J Forensic Sci ; 63(6): 1727-1749, 2018 Nov.
Article in English | MEDLINE | ID: mdl-29684935

ABSTRACT

Face recognition aims to establish the identity of a person based on facial characteristics, whereas age group estimation is the automatic assignment of an individual's age range based on facial features. Recognizing age-separated face images remains a challenging research problem because of the complex aging processes involving different types of facial tissue: skin, fat, muscle, and bone. Certain holistic and local facial features are used to recognize age-separated face images; however, most existing methods recognize face images without incorporating the knowledge learned from age group estimation. In this paper, we propose an age-assisted face recognition approach to handle aging variations. Inspired by the observation that facial asymmetry is an age-dependent intrinsic facial feature, we first use asymmetric facial dimensions to estimate the age group of a given face image. Deeply learned asymmetric facial features are then extracted for face recognition using a deep convolutional neural network (dCNN). Finally, we integrate the knowledge learned from age group estimation into the face recognition algorithm using the same dCNN. This integration yields a significant improvement in overall performance compared to using the face recognition algorithm alone. Experimental results on two large facial aging datasets, MORPH and FERET, show that the proposed age-assisted face recognition approach outperforms several existing state-of-the-art methods.
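As an illustration of how an asymmetry feature might be computed, a short sketch follows; the landmark pairing and midline reflection are assumptions for illustration and not the paper's actual asymmetric facial dimensions.

```python
import numpy as np

# Illustrative sketch (assumed, not the paper's measure): a simple facial
# asymmetry feature from 2D landmarks. Left-side landmarks are reflected
# about a vertical midline and the asymmetry score is the mean distance
# between each reflected left landmark and its right-side counterpart.
# Features of this kind could feed an age-group classifier.

def asymmetry_score(left_pts: np.ndarray, right_pts: np.ndarray,
                    midline_x: float) -> float:
    """left_pts, right_pts: (N, 2) arrays of corresponding landmark pairs."""
    mirrored_left = left_pts.copy()
    mirrored_left[:, 0] = 2.0 * midline_x - left_pts[:, 0]  # reflect x about midline
    return float(np.mean(np.linalg.norm(mirrored_left - right_pts, axis=1)))

# Example with made-up landmark coordinates (pixels):
left = np.array([[110.0, 200.0], [120.0, 250.0], [105.0, 300.0]])
right = np.array([[190.0, 202.0], [181.0, 251.0], [196.0, 303.0]])
print(f"asymmetry ~= {asymmetry_score(left, right, midline_x=150.0):.2f} px")
```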


Subjects
Aging/physiology , Facial Asymmetry/physiopathology , Adolescent , Adult , Aged , Algorithms , Anatomic Landmarks , Child , Databases, Factual , Female , Forensic Sciences , Humans , Male , Middle Aged , Models, Statistical , Neural Networks, Computer , Young Adult
5.
PLoS One ; 8(2): e56510, 2013.
Article in English | MEDLINE | ID: mdl-23451054

ABSTRACT

Face recognition has emerged as the fastest growing biometric technology and has expanded considerably in recent years. Many new algorithms and commercial systems have been proposed and developed, most of which use Principal Component Analysis (PCA) as the basis of their techniques. Researchers comparing these algorithms have reported differing and even conflicting results. The purpose of this study is to provide an independent comparative analysis, considering both performance and computational complexity, of six appearance-based face recognition algorithms, namely PCA, 2DPCA, A2DPCA, (2D)²PCA, LPP, and 2DLPP, under equal working conditions. The study was motivated by the lack of an unbiased, comprehensive comparative analysis of recent subspace methods with diverse distance metric combinations. For comparison with other studies, the FERET, ORL, and YALE databases are used, with evaluation criteria following the FERET evaluations, which closely simulate real-life scenarios. Results are compared with previous studies and anomalies are reported. An important contribution of this study is that it identifies the conditions under which each of the algorithms under consideration performs best.
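To make the contrast between the vectorized and matrix-based subspace methods concrete, a minimal sketch of PCA versus 2DPCA projection bases follows; the image sizes and random stand-in data are assumptions for illustration only.

```python
import numpy as np

# Minimal sketch contrasting two of the compared subspace methods: classical
# PCA vectorizes each face image before computing the covariance, while 2DPCA
# works directly on image matrices, so its covariance is only (width x width).

def pca_projection(images: np.ndarray, k: int) -> np.ndarray:
    """images: (n, h, w). Returns top-k basis vectors of the vectorized covariance."""
    n, h, w = images.shape
    flat = images.reshape(n, h * w)
    flat = flat - flat.mean(axis=0)
    _, _, vt = np.linalg.svd(flat, full_matrices=False)
    return vt[:k]                        # (k, h*w) projection basis

def two_d_pca_projection(images: np.ndarray, k: int) -> np.ndarray:
    """2DPCA: top-k eigenvectors of G = mean((A - mean)^T (A - mean))."""
    centered = images - images.mean(axis=0)
    g = np.mean(np.einsum('nij,nik->njk', centered, centered), axis=0)  # (w, w)
    eigvals, eigvecs = np.linalg.eigh(g)
    return eigvecs[:, ::-1][:, :k]       # (w, k), columns sorted by eigenvalue

# Example with random stand-in data (20 images of size 32x32):
faces = np.random.rand(20, 32, 32)
basis_pca = pca_projection(faces, k=10)          # features: image_vec @ basis_pca.T
basis_2dpca = two_d_pca_projection(faces, k=10)  # features: image @ basis_2dpca
```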


Subjects
Biometry/methods , Face , Principal Component Analysis/methods , Algorithms , Pattern Recognition, Automated