Results 1 - 5 of 5
1.
Sci Rep ; 14(1): 19070, 2024 Aug 17.
Article in English | MEDLINE | ID: mdl-39154133

ABSTRACT

Independent component analysis (ICA) and dictionary learning (DL) are the most successful blind source separation (BSS) methods for functional magnetic resonance imaging (fMRI) data analysis. However, both may suffer performance degradation (ICA to a greater extent and DL to a lesser one) from anomalous observations in the recovered time courses (TCs) and from high overlap among the spatial maps (SMs). This paper addressed both problems using a novel three-layered sparse DL (TLSDL) algorithm that incorporated prior information into the dictionary update process and recovered full-rank, outlier-free TCs from highly corrupted measurements. The associated sequential DL model factorized each subject's data into a multi-subject (MS) dictionary and MS sparse code while imposing a low-rank and sparse matrix decomposition restriction on the dictionary matrix. The model is solved through three layers of feature extraction and component estimation. The first and second layers captured brain regions with low and moderate spatial overlap, respectively. The third layer, which segregated regions with significant spatial overlap, solved a sequence of vector decomposition problems using the proximal alternating linearized minimization (PALM) method and enforced the decomposition restriction using the alternating directions method (ALM). It learned outlier-free dynamics that integrate spatiotemporal diversities across brains and external information. TLSDL differs from existing DL methods owing to its unique optimization model, which incorporates prior knowledge, subject-wise/multi-subject representation matrices, and outlier handling. The TLSDL algorithm was compared with existing dictionary learning algorithms on experimental and synthetic fMRI datasets to verify its performance. Overall, the mean correlation value was found to be 26% higher for TLSDL than for the state-of-the-art subject-wise sequential DL (swsDL) technique.
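For orientation, the snippet below is not TLSDL itself: it only sketches the generic sparse dictionary-learning alternation (a proximal, ISTA-style sparse-code update followed by a least-squares dictionary update) that methods of this family build on. The matrix shapes, the l1 penalty, and all parameter values are illustrative assumptions, not the paper's settings.

import numpy as np

def soft_threshold(X, lam):
    # Proximal operator of the l1 norm (elementwise shrinkage).
    return np.sign(X) * np.maximum(np.abs(X) - lam, 0.0)

def sparse_dictionary_learning(Y, n_atoms=20, lam=0.1, n_iter=50, seed=0):
    # Alternate a proximal sparse-code update with a least-squares dictionary
    # update so that Y (time x voxels) ~ D @ S, where the columns of D act as
    # candidate time courses and the rows of S as spatial maps.
    rng = np.random.default_rng(seed)
    n_time, n_voxels = Y.shape
    D = rng.standard_normal((n_time, n_atoms))
    D /= np.linalg.norm(D, axis=0, keepdims=True)
    S = np.zeros((n_atoms, n_voxels))
    for _ in range(n_iter):
        # Sparse-code step: one proximal-gradient update on S.
        L = np.linalg.norm(D.T @ D, 2) + 1e-12   # Lipschitz constant of the gradient
        S = soft_threshold(S - (D.T @ (D @ S - Y)) / L, lam / L)
        # Dictionary step: least-squares fit, then renormalise the atoms.
        D = Y @ np.linalg.pinv(S)
        D /= np.linalg.norm(D, axis=0, keepdims=True) + 1e-12
    return D, S

# Toy usage on synthetic data (120 time points x 200 voxels).
Y = np.random.default_rng(1).standard_normal((120, 200))
D, S = sparse_dictionary_learning(Y)
print(D.shape, S.shape)   # (120, 20) (20, 200)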


Subjects
Algorithms , Brain , Magnetic Resonance Imaging , Humans , Magnetic Resonance Imaging/methods , Brain/diagnostic imaging , Brain/physiology , Image Processing, Computer-Assisted/methods , Brain Mapping/methods , Nerve Net/physiology , Nerve Net/diagnostic imaging , Machine Learning
2.
PLoS One ; 19(7): e0304757, 2024.
Article in English | MEDLINE | ID: mdl-38990817

ABSTRACT

Recent advancements in AI, driven by big data technologies, have reshaped various industries, with a strong focus on data-driven approaches. This has resulted in remarkable progress in fields like computer vision, e-commerce, cybersecurity, and healthcare, primarily fueled by the integration of machine learning and deep learning models. Notably, the intersection of oncology and computer science has given rise to Computer-Aided Diagnosis (CAD) systems, offering vital tools to aid medical professionals in tumor detection, classification, recurrence tracking, and prognosis prediction. Breast cancer, a significant global health concern, is particularly prevalent in Asia due to diverse factors like lifestyle, genetics, environmental exposures, and healthcare accessibility. Early detection through mammography screening is critical, but the accuracy of mammograms can vary with factors like breast composition and tumor characteristics, leading to potential misdiagnoses. To address this, an innovative CAD system leveraging deep learning and computer vision techniques was introduced. This system enhances breast cancer diagnosis by independently identifying and categorizing breast lesions, segmenting mass lesions, and classifying them based on pathology. Thorough validation on the Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM) demonstrated the CAD system's exceptional performance, with a 99% success rate in detecting and classifying breast masses. Detection accuracy was 98.5%, segmentation of breast masses into separate groups for examination reached approximately 95.39%, and the final classification phase yielded an overall accuracy of 99.16%. The integrated framework is proposed as a potential improvement over current deep learning techniques, despite challenges related to its high number of trainable parameters. Ultimately, this recommended framework offers valuable support to researchers and physicians in breast cancer diagnosis by harnessing cutting-edge AI and image processing technologies, extending recent advances in deep learning to the medical domain.
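As a rough illustration of the classification stage such a CAD pipeline ends with, the sketch below defines a small PyTorch CNN that scores a mammogram patch as benign or malignant. It is not the paper's architecture: the layer sizes, patch size, optimizer, and dummy labels are assumptions made purely for illustration.

import torch
import torch.nn as nn

class MassPatchClassifier(nn.Module):
    # Small CNN mapping a grayscale mammogram patch to a benign/malignant score
    # (a stand-in for the classification head of a CAD system, not the paper's model).
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):
        z = self.features(x).flatten(1)
        return self.head(z)

# One dummy training step on random 128x128 patches.
model = MassPatchClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
x = torch.randn(8, 1, 128, 128)    # batch of synthetic patches
y = torch.randint(0, 2, (8,))      # dummy pathology labels (0 = benign, 1 = malignant)
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
print(float(loss))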


Subjects
Breast Neoplasms , Deep Learning , Diagnosis, Computer-Assisted , Mammography , Humans , Breast Neoplasms/diagnosis , Breast Neoplasms/diagnostic imaging , Breast Neoplasms/classification , Female , Mammography/methods , Diagnosis, Computer-Assisted/methods , Early Detection of Cancer/methods
3.
Diagnostics (Basel) ; 13(3)2023 Jan 20.
Article in English | MEDLINE | ID: mdl-36766498

ABSTRACT

Diabetic Retinopathy (DR) is the most common complication of diabetes and affects the retina. It is the leading cause of blindness globally, and early detection can protect patients from losing their sight. However, the early detection of Diabetic Retinopathy is a difficult task that requires clinical experts' interpretation of fundus images. In this study, a deep learning model was trained and validated on a private dataset and tested in real time at the Sindh Institute of Ophthalmology & Visual Sciences (SIOVS). The intelligent model first evaluated the quality of the test images and then classified them as DR-Positive or DR-Negative. The results were subsequently reviewed by clinical experts to assess the model's performance. A total of 398 patients, 232 male and 166 female, were screened over five weeks. The model achieved 93.72% accuracy, 97.30% sensitivity, and 92.90% specificity on the test data as labelled for Diabetic Retinopathy by clinical experts.
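The accuracy, sensitivity, and specificity figures quoted above follow directly from the confusion matrix of the expert-labelled test set. A minimal sketch of that computation is shown below; the labels used here are invented for illustration and are not the SIOVS screening data.

import numpy as np
from sklearn.metrics import confusion_matrix

def screening_metrics(y_true, y_pred):
    # Accuracy, sensitivity, and specificity for a binary
    # DR-Positive (1) vs DR-Negative (0) screening run.
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # recall on DR-Positive cases
    specificity = tn / (tn + fp)   # recall on DR-Negative cases
    return accuracy, sensitivity, specificity

# Illustrative labels only (not the study's data).
y_true = np.array([1, 1, 0, 0, 1, 0, 0, 1, 0, 0])
y_pred = np.array([1, 1, 0, 0, 0, 0, 0, 1, 0, 1])
print(screening_metrics(y_true, y_pred))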

4.
Sensors (Basel) ; 22(12)2022 Jun 11.
Article in English | MEDLINE | ID: mdl-35746208

ABSTRACT

The convolutional neural network (CNN) has become a powerful tool in machine learning (ML) that is used to solve complex problems such as image recognition, natural language processing, and video analysis. Notably, the idea of exploring convolutional neural network architectures has gained substantial attention and popularity. This study scrutinizes and compares several CNN architectures, LeNet, AlexNet, VGG16, ResNet-50, and Inception-V1, for the detection of lung cancer using the publicly available LUNA16 dataset. Furthermore, multiple performance optimizers, root mean square propagation (RMSProp), adaptive moment estimation (Adam), and stochastic gradient descent (SGD), were applied in this comparative study. The performance of each CNN architecture was measured in terms of accuracy, specificity, sensitivity, positive predictive value, false omission rate, negative predictive value, and F1 score. The experimental results showed that the AlexNet architecture with the SGD optimizer achieved the highest validation accuracy for CT lung cancer detection, with an accuracy of 97.42%, a misclassification rate of 2.58%, 97.58% sensitivity, 97.25% specificity, 97.58% positive predictive value, 97.25% negative predictive value, a false omission rate of 2.75%, and an F1 score of 97.58%. AlexNet with the SGD optimizer thus outperformed the other state-of-the-art CNN architectures.
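As a rough illustration of the comparison setup, the sketch below instantiates one of the listed architectures and one of the three optimizers using PyTorch/torchvision. The learning rate, binary head, input size, and dummy data are assumptions, the study's actual preprocessing and training schedule are not reproduced, and torchvision >= 0.13 is assumed for the weights=None argument.

import torch
import torch.nn as nn
from torchvision import models

def build_model(name, n_classes=2):
    # One of the compared backbones with a binary (nodule vs. non-nodule) head;
    # the head size and the untrained weights are illustrative choices.
    if name == "alexnet":
        m = models.alexnet(weights=None)
        m.classifier[6] = nn.Linear(m.classifier[6].in_features, n_classes)
    elif name == "vgg16":
        m = models.vgg16(weights=None)
        m.classifier[6] = nn.Linear(m.classifier[6].in_features, n_classes)
    elif name == "resnet50":
        m = models.resnet50(weights=None)
        m.fc = nn.Linear(m.fc.in_features, n_classes)
    else:
        raise ValueError(name)
    return m

def build_optimizer(name, params, lr=1e-3):
    # The three optimizers compared in the study.
    if name == "sgd":
        return torch.optim.SGD(params, lr=lr, momentum=0.9)
    if name == "adam":
        return torch.optim.Adam(params, lr=lr)
    if name == "rmsprop":
        return torch.optim.RMSprop(params, lr=lr)
    raise ValueError(name)

# One dummy training step: AlexNet + SGD on random "CT patch" tensors.
model = build_model("alexnet")
optimizer = build_optimizer("sgd", model.parameters())
x = torch.randn(4, 3, 224, 224)    # CT slices replicated to 3 channels, resized to 224x224
y = torch.randint(0, 2, (4,))      # dummy labels: 0 = non-nodule, 1 = nodule
loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()
optimizer.step()
print(float(loss))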


Subjects
Lung Neoplasms , Neural Networks, Computer , Humans , Lung Neoplasms/diagnosis , Machine Learning , Tomography, X-Ray Computed
5.
ScientificWorldJournal ; 2014: 672630, 2014.
Article in English | MEDLINE | ID: mdl-24967437

ABSTRACT

Face recognition has gained considerable importance in today's technological world, and face recognition applications are increasingly widespread. Most existing work uses frontal face images for classification; however, these techniques fail when applied to real-world face images. The proposed technique effectively extracts the prominent facial features. Because many of the extracted features are redundant and do not contribute to representing the face, a computationally efficient algorithm is used to select the more discriminative features. The selected features are then passed to the classification step, where several classifiers are ensembled to enhance the recognition accuracy, since a single classifier is unable to achieve high accuracy on its own. Experiments are performed on standard face database images, and the results are compared with existing techniques.
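The feature-selection-plus-ensemble pipeline outlined above can be sketched with standard scikit-learn components. The sketch below uses synthetic feature vectors in place of extracted facial features, and an arbitrary choice of base classifiers (SVM, k-NN, decision tree); these are assumptions for illustration rather than the authors' actual setup.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import VotingClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Stand-in "facial feature" vectors; a real pipeline would extract these from
# face images (the dataset and the feature extractor are assumptions here).
X, y = make_classification(n_samples=400, n_features=200, n_informative=30,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Discard redundant features, then vote across heterogeneous classifiers.
ensemble = make_pipeline(
    SelectKBest(f_classif, k=30),
    VotingClassifier(
        estimators=[("svm", SVC(probability=True, random_state=0)),
                    ("knn", KNeighborsClassifier(n_neighbors=5)),
                    ("tree", DecisionTreeClassifier(random_state=0))],
        voting="soft"),
)
ensemble.fit(X_tr, y_tr)
print("ensemble accuracy:", accuracy_score(y_te, ensemble.predict(X_te)))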


Subjects
Artificial Intelligence , Face , Pattern Recognition, Automated , Facial Expression , Female , Humans , Male , Reproducibility of Results