Results 1 - 20 of 59
1.
Brief Bioinform; 24(1), 2023 Jan 19.
Article in English | MEDLINE | ID: mdl-36577448

ABSTRACT

With the improvement of single-cell measurement techniques, there is a growing awareness that individual differences exist among cells, and protein expression distribution can vary across cells in the same tissue or cell line. Pinpointing protein subcellular locations in single cells is crucial for mapping the functional specificity of proteins and studying related diseases. Currently, research on single-cell protein localization is still in its infancy, and most studies and databases do not annotate proteins at the cell level. For example, in the Human Protein Atlas database, an immunofluorescence image stained for a particular protein shows multiple cells, but the subcellular location annotation is for the whole image, ignoring intercellular differences. In this study, we used large-scale immunofluorescence images and image-level subcellular locations to develop a deep-learning-based pipeline that could accurately recognize protein localizations in single cells. The pipeline consisted of two deep learning models, i.e., an image-based model and a cell-based model. The former used a multi-instance learning framework to comprehensively model protein distribution across the multiple cells in each image, and could give both image-level and cell-level predictions. The latter first used clustering and heuristic algorithms to assign pseudo-labels of subcellular locations to the segmented cell images, and then used the pseudo-labels to train a classification model. Finally, the image-based model was fused with the cell-based model at the decision level to obtain the final ensemble model for single-cell prediction. Our experimental results showed that the ensemble model achieved higher accuracy and robustness on independent test sets than state-of-the-art methods.
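The fusion step is described only at the decision level; a minimal sketch of one plausible realization, assuming a simple weighted average of the two models' per-cell probabilities (the weight alpha, array shapes, and function name are illustrative assumptions, not the authors' code):

```python
import numpy as np

def fuse_predictions(p_image_model, p_cell_model, alpha=0.5):
    """Decision-level fusion of two multi-label classifiers.

    p_image_model, p_cell_model: (n_cells, n_locations) arrays of
    per-cell subcellular-location probabilities from the image-based
    and cell-based models; alpha weights the image-based model.
    """
    return alpha * np.asarray(p_image_model) + (1.0 - alpha) * np.asarray(p_cell_model)

# Toy example: 3 segmented cells, 4 candidate subcellular locations.
rng = np.random.default_rng(0)
fused = fuse_predictions(rng.random((3, 4)), rng.random((3, 4)), alpha=0.6)
print(fused.round(2))
print(fused > 0.5)  # multi-label decision per cell
```

A learned fusion (e.g., a small logistic layer over both models' outputs) would follow the same interface.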


Subjects
Deep Learning, Humans, Proteins/metabolism, Algorithms, Cell Line, Immunofluorescence
2.
Mod Pathol; 37(1): 100373, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37925056

ABSTRACT

The current flow cytometric analysis of blood and bone marrow samples for diagnosis of acute myeloid leukemia (AML) relies heavily on manual intervention in the processing and analysis steps, introducing significant subjectivity into resulting diagnoses and necessitating highly trained personnel. Furthermore, concurrent molecular characterization via cytogenetics and targeted sequencing can take multiple days, delaying patient diagnosis and treatment. Attention-based multi-instance learning models (ABMILMs) are deep learning models that make accurate predictions and generate interpretable insights regarding the classification of a sample from individual events/cells; nonetheless, these models have yet to be applied to flow cytometry data. In this study, we developed a computational pipeline using ABMILMs for the automated diagnosis of AML cases based exclusively on flow cytometric data. Analysis of 1820 flow cytometry samples shows that this pipeline provides accurate diagnoses of acute leukemia (area under the receiver operating characteristic curve [AUROC] 0.961) and accurately differentiates AML vs B- and T-lymphoblastic leukemia (AUROC 0.965). Models for prediction of 9 cytogenetic aberrancies and 32 pathogenic variants in AML provide accurate predictions, particularly for t(15;17)(PML::RARA) (AUROC 0.929), t(8;21)(RUNX1::RUNX1T1) (AUROC 0.814), and NPM1 variants (AUROC 0.807). Finally, we demonstrate how these models generate interpretable insights into which individual flow cytometric events and markers deliver optimal diagnostic utility, providing hematopathologists with a data visualization tool for improved data interpretation, as well as novel biological associations between flow cytometric marker expression and cytogenetic/molecular variants in AML. Our study is the first to illustrate the feasibility of using deep learning-based analysis of flow cytometric data for automated AML diagnosis and molecular characterization.
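The abstract does not spell out the ABMILM internals; in the commonly used attention-based MIL formulation (Ilse et al., 2018), each event's embedding is weighted by a learned attention score and summed into a sample-level representation, and the per-event weights are exactly the interpretability signal mentioned above. A sketch under that assumption (all dimensions and names are illustrative):

```python
import torch
import torch.nn as nn

class AttentionMILPooling(nn.Module):
    """Attention-based MIL pooling over per-event embeddings."""
    def __init__(self, in_dim, attn_dim=64):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Linear(in_dim, attn_dim), nn.Tanh(), nn.Linear(attn_dim, 1)
        )

    def forward(self, h):
        # h: (n_events, in_dim), one embedding per flow-cytometry event
        a = torch.softmax(self.attn(h), dim=0)  # (n_events, 1) weights
        return (a * h).sum(dim=0), a            # bag embedding + weights

# Toy sample: 500 events with 10 markers, embedded to 32 dimensions.
embed, pool, head = nn.Linear(10, 32), AttentionMILPooling(32), nn.Linear(32, 1)
bag, weights = pool(embed(torch.randn(500, 10)))
logit = head(bag)  # sample-level score (e.g., AML vs. non-AML)
```

Inspecting `weights` identifies which individual events drove the prediction, which is how such models can surface diagnostically useful events and markers.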


Subjects
Deep Learning, Acute Myeloid Leukemia, Humans, Flow Cytometry/methods, Acute Myeloid Leukemia/diagnosis, Acute Myeloid Leukemia/genetics, Acute Myeloid Leukemia/metabolism, Acute Disease, Cytogenetics
3.
Am J Otolaryngol; 45(4): 104342, 2024.
Article in English | MEDLINE | ID: mdl-38703609

ABSTRACT

OBJECTIVE: To develop a multi-instance learning (MIL) based, artificial intelligence (AI)-assisted diagnosis model that uses laryngoscopic images to differentiate benign and malignant vocal fold leukoplakia (VFL). METHODS: The AI system was developed, trained, and validated on 5362 images of 551 patients from three hospitals. An automated region-of-interest (ROI) segmentation algorithm was used to construct image-level features. MIL was used to fuse image-level results into patient-level features, and the extracted features were then modeled by seven machine learning algorithms. Finally, we evaluated the image-level and patient-level results. Additionally, 50 videos of VFL were prospectively gathered to assess the system's real-time diagnostic capabilities. A human-machine comparison database was also constructed to compare the diagnostic performance of otolaryngologists with and without AI assistance. RESULTS: In the internal and external validation sets, the maximum area under the curve (AUC) for image-level segmentation models was 0.775 (95% CI 0.740-0.811) and 0.720 (95% CI 0.684-0.756), respectively. Using a MIL-based fusion strategy, the AUC at the patient level increased to 0.869 (95% CI 0.798-0.940) and 0.851 (95% CI 0.756-0.945). For real-time video diagnosis, the maximum AUC at the patient level reached 0.850 (95% CI 0.743-0.957). With AI assistance, the AUC improved from 0.720 (95% CI 0.682-0.755) to 0.808 (95% CI 0.775-0.839) for senior otolaryngologists and from 0.647 (95% CI 0.608-0.686) to 0.807 (95% CI 0.773-0.837) for junior otolaryngologists. CONCLUSIONS: The MIL-based AI-assisted diagnosis system can significantly improve the diagnostic performance of otolaryngologists for VFL and help them make appropriate clinical decisions.
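The fusion operator that lifts image-level scores to the patient level is not specified in the abstract; one plausible MIL-style choice is a top-k mean over per-image probabilities, sketched below (k, the threshold, and the function name are assumptions):

```python
import numpy as np

def patient_level_score(image_probs, k=3):
    """Fuse per-image malignancy probabilities into one patient score.

    Averaging the top-k most suspicious laryngoscopic images is a
    middle ground between noisy max-pooling and diluted mean-pooling.
    """
    ranked = np.sort(np.asarray(image_probs))[::-1]
    return float(ranked[: min(k, len(ranked))].mean())

score = patient_level_score([0.1, 0.2, 0.9, 0.85, 0.3])
print(score)        # ~0.683
print(score > 0.5)  # patient-level malignant/benign call
```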


Subjects
Artificial Intelligence, Laryngoscopy, Leukoplakia, Vocal Cords, Humans, Vocal Cords/diagnostic imaging, Vocal Cords/pathology, Laryngoscopy/methods, Male, Leukoplakia/diagnosis, Leukoplakia/pathology, Female, Middle Aged, Aged, Computer-Assisted Diagnosis/methods, Machine Learning, Differential Diagnosis, Adult, Algorithms, Laryngeal Neoplasms/diagnosis, Laryngeal Neoplasms/pathology, Laryngeal Neoplasms/diagnostic imaging
4.
Int J Mol Sci; 23(19), 2022 Sep 22.
Article in English | MEDLINE | ID: mdl-36232434

ABSTRACT

The prediction of the strengths of drug-target interactions, also called drug-target binding affinities (DTA), plays a fundamental role in facilitating drug discovery, where the goal is to find prospective drug candidates. With the increase in the number of known drug-protein interactions, machine learning techniques, especially deep learning methods, have become applicable to drug-target interaction discovery because they significantly reduce the required experimental workload. In this paper, we present a natural formulation of the DTA prediction problem as an instance of multi-instance learning. We address the problem in three stages: first organizing the given drug and target sequences into instances via a private-public mechanism, then predicting scores for all instances in the same bag, and finally combining all the predicted scores into the output prediction. A comprehensive evaluation demonstrates that the proposed method outperforms other state-of-the-art methods on three benchmark datasets.
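The final combination step is described only as "combining all the predicted scores"; the two standard MIL reductions are mean- and max-pooling over instance scores, shown below as a hedged sketch (the score values and function name are illustrative):

```python
import numpy as np

def bag_affinity(instance_scores, mode="mean"):
    """Combine per-instance affinity predictions into one DTA value.

    Each instance pairs a drug sub-sequence with a target sub-sequence;
    the bag represents the whole drug-target pair. 'mean' assumes every
    instance contributes; 'max' assumes one dominant binding region.
    """
    scores = np.asarray(instance_scores)
    return float(scores.max() if mode == "max" else scores.mean())

scores = [6.2, 7.1, 5.9, 6.8]  # predicted pKd-like values per instance
print(bag_affinity(scores), bag_affinity(scores, "max"))  # 6.5 7.1
```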


Assuntos
Algoritmos , Aprendizado de Máquina , Desenvolvimento de Medicamentos , Descoberta de Drogas , Proteínas
5.
Entropy (Basel); 25(1), 2022 Dec 23.
Article in English | MEDLINE | ID: mdl-36673169

ABSTRACT

The aim of this study is to develop a new approach that can correctly predict the outcome of electronic sports (eSports) matches using machine learning methods. Previous research has emphasized player-centric prediction and has used standard (single-instance) classification techniques. However, a team-centric classification is required, since team cooperation is essential for completing game missions and achieving final success. To bridge this gap, we propose a new approach called Multi-Objective Multi-Instance Learning (MOMIL). It is the first study to apply the multi-instance learning technique to win prediction in eSports. The proposed approach jointly considers the objectives of the players in a team to capture relationships between players during classification. In this study, entropy was used as the measure of impurity (uncertainty) of the training dataset when building decision trees for classification. The experiments carried out on a publicly available eSports dataset show that the proposed multi-objective multi-instance classification approach outperforms the standard classification approach in terms of accuracy. Unlike previous studies, we built the models on season-based data. Our approach is up to 95% accurate for win prediction in eSports and achieved higher performance than the state-of-the-art methods tested on the same dataset.
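The entropy impurity referred to above is the standard Shannon entropy of a node's label distribution; a minimal self-contained example:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a label multiset; 0 means a pure node."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

# A tree node holding 6 wins and 2 losses is fairly impure:
print(entropy(["win"] * 6 + ["loss"] * 2))  # 0.811...
print(entropy(["win"] * 8))                 # -0.0 (pure node)
```

A decision tree chooses the split that most reduces this quantity, i.e., the split with maximum information gain.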

6.
Knowl Based Syst; 252: 109278, 2022 Sep 27.
Article in English | MEDLINE | ID: mdl-35783000

ABSTRACT

Coronavirus Disease 2019 (COVID-19) still shows a global pandemic trend. Detecting infected individuals and analyzing their status can provide patients with proper healthcare while protecting the uninfected population. Chest CT (computed tomography) is an effective tool for COVID-19 screening, as it displays detailed pathology-related information. Convolutional neural networks (CNNs) have become the mainstream methods for automated COVID-19 diagnosis and lung CT image segmentation. However, most previous works treat automated diagnosis and image segmentation as two independent tasks, with some focusing on lung-field segmentation and others on single-lesion segmentation. Moreover, a lack of clinical explainability is a common problem for CNN-based methods. In this context, we develop a multi-task learning framework in which COVID-19 diagnosis and multi-lesion recognition (segmentation of CT images) are achieved simultaneously. The core of the proposed framework is an explainable multi-instance multi-task network. The network learns task-related features adaptively with learnable weights, and gives explainable diagnosis results by presenting local CT images with lesions as additional evidence. Severity assessment of COVID-19 and lesion quantification are then performed to analyze patient status. Extensive experimental results on real-world datasets show that the proposed framework outperforms all compared approaches for COVID-19 diagnosis and multi-lesion segmentation.

7.
Appl Intell (Dordr); 52(12): 13902-13915, 2022.
Article in English | MEDLINE | ID: mdl-35250175

ABSTRACT

Although using single-instance learning methods to solve multi-instance problems has achieved excellent performance in many tasks, the reasons for this success still lack a rigorous theoretical explanation. In particular, the potential relation between the number of causal factors (also called causal instances) in a bag and model performance is not transparent. The goal of our study is to use the causal relationship between instances and bags to enhance the interpretability of multi-instance learning. First, we provide a lower bound on the number of instances required to determine the causal factors in a real multi-instance learning task. Then, we provide a lower bound on the single-instance learning loss function when the testing instances and training instances follow the same distribution, and extend this conclusion to the situation where the distribution changes. Thus, theoretically, we demonstrate that the number of causal factors in the bag is an important parameter affecting model performance when single-instance learning methods are used to solve multi-instance learning problems. Finally, we experimentally validate our theoretical analysis on a specific classification task.

8.
Neuroimage; 244: 118586, 2021 Dec 01.
Article in English | MEDLINE | ID: mdl-34563678

ABSTRACT

Mild cognitive impairment (MCI) conversion prediction, i.e., identifying MCI patients at high risk of converting to Alzheimer's disease (AD), is essential for preventing or slowing the progression of AD. Although previous studies have shown that fusing multi-modal data can effectively improve prediction accuracy, their applications are largely restricted by the limited availability or high cost of multi-modal data. Building an effective prediction model using only magnetic resonance imaging (MRI) remains a challenging research topic. In this work, we propose a multi-modal multi-instance distillation scheme, which aims to distill the knowledge learned from multi-modal data into an MRI-based network for MCI conversion prediction. In contrast to existing distillation algorithms, the proposed multi-instance probabilities demonstrate a superior capability of representing the complicated atrophy distributions and can guide the MRI-based network to better explore the input MRI. To the best of our knowledge, this is the first study that attempts to improve an MRI-based prediction model by leveraging extra supervision distilled from multi-modal information. Experiments demonstrate the advantage of our framework, suggesting its potential in data-limited clinical settings.
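The multi-instance probability construction is the paper's contribution and is not given in the abstract; the generic knowledge-distillation recipe it builds on pairs a hard-label loss with a soft-target loss from the multi-modal teacher. A sketch under that assumption (the temperature, weight beta, and all names are illustrative; the teacher distribution is assumed pre-softened):

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_probs, labels,
                      temperature=2.0, beta=0.5):
    """Hard-label cross-entropy plus KL to the teacher's soft targets.

    The student sees MRI only; teacher_probs play the role of the
    supervision distilled from multi-modal data.
    """
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        teacher_probs, reduction="batchmean",
    ) * temperature ** 2
    return (1 - beta) * hard + beta * soft

student = torch.randn(8, 2)                          # MRI-only outputs
teacher = torch.softmax(torch.randn(8, 2) / 2.0, 1)  # soft targets
labels = torch.randint(0, 2, (8,))
print(distillation_loss(student, teacher, labels))
```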


Assuntos
Doença de Alzheimer/diagnóstico por imagem , Imageamento por Ressonância Magnética/métodos , Idoso , Idoso de 80 Anos ou mais , Algoritmos , Atrofia , Encéfalo/patologia , Disfunção Cognitiva/diagnóstico por imagem , Feminino , Humanos , Conhecimento , Aprendizagem , Masculino , Pessoa de Meia-Idade , Probabilidade
9.
J Magn Reson Imaging; 54(3): 818-829, 2021 Sep.
Article in English | MEDLINE | ID: mdl-33891778

ABSTRACT

BACKGROUND: Due to random motion of fetuses and maternal respirations, image quality of fetal brain MRIs varies considerably. To address this issue, visual inspection of the images is performed during the acquisition phase and after 3D reconstruction, and the images are re-acquired if they are deemed to be of insufficient quality. However, this process is time-consuming and subjective. Multi-instance (MI) deep learning methods (DLMs) may perform this task automatically. PURPOSE: To propose an MI count-based DLM (MI-CB-DLM), an MI vote-based DLM (MI-VB-DLM), and an MI feature-embedding DLM (MI-FE-DLM) for automatic assessment of 3D fetal-brain MR image quality. To quantify the influence of fetal gestational age (GA) on DLM performance. STUDY TYPE: Retrospective. SUBJECTS: Two hundred and seventy-one MR exams from 211 fetuses (mean GA ± SD = 30.9 ± 5.5 weeks). FIELD STRENGTH/SEQUENCE: T2-weighted single-shot fast spin-echo acquired at 1.5 T. ASSESSMENT: The T2-weighted images were reconstructed in 3D. Then, two fetal neuroradiologists, a clinical neuroscientist, and a fetal MRI technician independently labeled the reconstructed images as 1 or 0 based on image quality (1 = high; 0 = low). These labels were fused and served as ground truth. The proposed DLMs were trained and evaluated using three repeated 10-fold cross-validations (training and validation sets of 244 and 27 scans). To quantify GA influence, this variable was included as an input of the DLMs. STATISTICAL TESTS: DLM performance was evaluated using precision, recall, F-score, accuracy, and AUC values. RESULTS: Precision, recall, F-score, accuracy, and AUC averaged over the three cross-validations were 0.85 ± 0.01, 0.85 ± 0.01, 0.85 ± 0.01, 0.85 ± 0.01, and 0.93 ± 0.01 for MI-CB-DLM (without GA); 0.75 ± 0.03, 0.75 ± 0.03, 0.75 ± 0.03, 0.75 ± 0.03, and 0.81 ± 0.03 for MI-VB-DLM (without GA); 0.81 ± 0.01, 0.81 ± 0.01, 0.81 ± 0.01, 0.81 ± 0.01, and 0.89 ± 0.01 for MI-FE-DLM (without GA); and 0.86 ± 0.01, 0.86 ± 0.01, 0.86 ± 0.01, 0.86 ± 0.01, and 0.93 ± 0.01 for MI-CB-DLM with GA. DATA CONCLUSION: MI-CB-DLM performed better than the other DLMs. Including GA as an input of MI-CB-DLM improved its performance. MI-CB-DLM may potentially be used to objectively and rapidly assess fetal MR image quality. EVIDENCE LEVEL: 4. TECHNICAL EFFICACY: Stage 3.
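Count-based and vote-based MI aggregation are named but not defined in the abstract; plausibly they reduce per-slice quality predictions to a volume-level label as below (the thresholds, the direction of the labels, and the function names are assumptions, not the authors' code):

```python
import numpy as np

def count_based(slice_probs, threshold=0.5, min_bad=3):
    """Volume flagged low quality if at least min_bad slices look bad."""
    return int((np.asarray(slice_probs) > threshold).sum() >= min_bad)

def vote_based(slice_probs, threshold=0.5):
    """Majority vote over per-slice quality decisions."""
    return int((np.asarray(slice_probs) > threshold).mean() > 0.5)

probs = [0.9, 0.8, 0.7, 0.2, 0.1]  # per-slice P(low quality)
print(count_based(probs), vote_based(probs))  # 1 1
```

A feature-embedding variant would instead pool per-slice feature vectors before a single volume-level classifier.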


Assuntos
Aprendizado Profundo , Encéfalo/diagnóstico por imagem , Feto/diagnóstico por imagem , Humanos , Imageamento por Ressonância Magnética , Estudos Retrospectivos
10.
Pattern Recognit; 113: 107828, 2021 May.
Article in English | MEDLINE | ID: mdl-33495661

ABSTRACT

Understanding chest CT imaging of the coronavirus disease 2019 (COVID-19) will help detect infections early and assess the disease progression. Especially, automated severity assessment of COVID-19 in CT images plays an essential role in identifying cases that are in great need of intensive clinical care. However, it is often challenging to accurately assess the severity of this disease in CT images, due to variable infection regions in the lungs, similar imaging biomarkers, and large inter-case variations. To this end, we propose a synergistic learning framework for automated severity assessment of COVID-19 in 3D CT images, by jointly performing lung lobe segmentation and multi-instance classification. Considering that only a few infection regions in a CT image are related to the severity assessment, we first represent each input image by a bag that contains a set of 2D image patches (with each cropped from a specific slice). A multi-task multi-instance deep network (called M2UNet) is then developed to assess the severity of COVID-19 patients and also segment the lung lobe simultaneously. Our M2UNet consists of a patch-level encoder, a segmentation sub-network for lung lobe segmentation, and a classification sub-network for severity assessment (with a unique hierarchical multi-instance learning strategy). Here, the context information provided by segmentation can be implicitly employed to improve the performance of severity assessment. Extensive experiments were performed on a real COVID-19 CT image dataset consisting of 666 chest CT images, with results suggesting the effectiveness of our proposed method compared to several state-of-the-art methods.
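As a rough sketch of the bag-construction step (in the paper the patches are cropped from specific slices; the regular grid below is a simplification with assumed patch size and stride):

```python
import numpy as np

def volume_to_bag(ct_volume, patch=64, stride=64):
    """Represent a 3D CT volume as a bag of 2D patches.

    ct_volume: (n_slices, H, W) array. Returns an
    (n_instances, patch, patch) array cropped on a regular grid.
    """
    s, h, w = ct_volume.shape
    return np.stack([
        ct_volume[z, y:y + patch, x:x + patch]
        for z in range(s)
        for y in range(0, h - patch + 1, stride)
        for x in range(0, w - patch + 1, stride)
    ])

bag = volume_to_bag(np.zeros((10, 256, 256)))
print(bag.shape)  # (160, 64, 64): 10 slices x 16 patches each
```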

11.
Sensors (Basel); 21(20), 2021 Oct 12.
Article in English | MEDLINE | ID: mdl-34695987

ABSTRACT

In smart buildings, many different systems work in coordination to accomplish their tasks. In this process, the sensors associated with these systems collect large amounts of data generated in a streaming fashion, which is prone to concept drift. Such data are heterogeneous due to the wide range of sensors collecting information about different characteristics of the monitored systems. All of this makes the monitoring task very challenging, and traditional clustering algorithms are not well equipped to address these challenges. In this work, we study the use of the MV Multi-Instance Clustering algorithm for multi-view analysis and mining of smart building systems' sensor data. We demonstrate how this algorithm can be used to perform contextual as well as integrated analysis of the systems, and we examine and discuss various scenarios in which it can analyze the data generated by the systems of a smart building. In addition, we show how the extracted knowledge can be visualized to detect trends in the systems' behavior and how it can aid domain experts in the systems' maintenance. In the experiments conducted, the proposed approach successfully detected deviating behaviors known to have occurred previously and also identified some new deviations during the monitored period. The results indicate that the proposed algorithm can be used for monitoring, analyzing, and detecting deviating behaviors of systems in the smart building domain.


Assuntos
Análise de Dados , Eletrocardiografia , Algoritmos , Análise por Conglomerados , Monitorização Fisiológica
12.
Sensors (Basel); 18(3), 2018 Mar 05.
Article in English | MEDLINE | ID: mdl-29510547

ABSTRACT

The diverse density (DD) algorithm was proposed to handle the problem of low classification accuracy when training samples contain interference such as mixed pixels. The DD algorithm can learn a feature vector from training bags, which comprise instances (pixels). However, the feature vector learned by the DD algorithm cannot always effectively represent one type of ground cover. To handle this problem, an instance space-based diverse density (ISBDD) model that employs a novel training strategy is proposed in this paper. In the ISBDD model, DD values of each pixel are computed instead of learning a feature vector, and as a result, the pixel can be classified according to its DD values. Airborne hyperspectral data collected by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor and the Push-broom Hyperspectral Imager (PHI) are applied to evaluate the performance of the proposed model. Results show that the overall classification accuracy of ISBDD model on the AVIRIS and PHI images is up to 97.65% and 89.02%, respectively, while the kappa coefficient is up to 0.97 and 0.88, respectively.
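For context, the classical diverse density quantity that ISBDD builds on is a noisy-or model: a concept point scores highly if it is close to some instance of every positive bag and far from all instances of every negative bag. A minimal sketch, assuming the usual Gaussian instance model and unit scaling:

```python
import numpy as np

def instance_prob(x, t, scale=1.0):
    """Pr(concept t | instance x) under a Gaussian-like model."""
    return np.exp(-scale * np.sum((x - t) ** 2))

def diverse_density(t, pos_bags, neg_bags, scale=1.0):
    """Noisy-or diverse density of a candidate concept point t.

    pos_bags / neg_bags: lists of (n_instances, n_features) arrays.
    """
    dd = 1.0
    for bag in pos_bags:
        dd *= 1.0 - np.prod([1.0 - instance_prob(x, t, scale) for x in bag])
    for bag in neg_bags:
        dd *= np.prod([1.0 - instance_prob(x, t, scale) for x in bag])
    return dd

pos = [np.array([[0.1, 0.0], [0.9, 0.8]])]  # bag containing a mixed pixel
neg = [np.array([[0.9, 0.9]])]
print(diverse_density(np.array([0.1, 0.0]), pos, neg))  # high: ~0.77
```

ISBDD differs in that it evaluates DD values per pixel in instance space rather than optimizing a single concept point.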

13.
Ophthalmol Sci; 4(3): 100428, 2024.
Article in English | MEDLINE | ID: mdl-38284101

ABSTRACT

Purpose: Nascent geographic atrophy (nGA) refers to specific features seen on OCT B-scans, which are strongly associated with the future development of geographic atrophy (GA). This study sought to develop a deep learning model to screen OCT B-scans for nGA that warrant further manual review (an artificial intelligence [AI]-assisted approach), and to determine the extent of reduction in OCT B-scan load requiring manual review while maintaining near-perfect nGA detection performance. Design: Development and evaluation of a deep learning model. Participants: One thousand eight hundred and eighty-four OCT volume scans (49 B-scans per volume) without neovascular age-related macular degeneration from 280 eyes of 140 participants with bilateral large drusen at baseline, seen at 6-monthly intervals up to a 36-month period (from which 40 eyes developed nGA). Methods: OCT volume and B-scans were labeled for the presence of nGA. Their presence at the volume scan level provided the ground truth for training a deep learning model to identify OCT B-scans that potentially showed nGA requiring manual review. Using a threshold that provided a sensitivity of 0.99, the B-scans identified were assigned the ground truth label with the AI-assisted approach. The performance of this approach for detecting nGA across all visits, or at the visit of nGA onset, was evaluated using fivefold cross-validation. Main Outcome Measures: Sensitivity for detecting nGA, and proportion of OCT B-scans requiring manual review. Results: The AI-assisted approach (utilizing outputs from the deep learning model to guide manual review) had a sensitivity of 0.97 (95% confidence interval [CI] = 0.93-1.00) and 0.95 (95% CI = 0.87-1.00) for detecting nGA across all visits and at the visit of nGA onset, respectively, when requiring manual review of only 2.7% and 1.9% of selected OCT B-scans, respectively. Conclusions: A deep learning model could be used to enable near-perfect detection of nGA onset while reducing the number of OCT B-scans requiring manual review by over 50-fold. This AI-assisted approach shows promise for substantially reducing the current burden of manual review of OCT B-scans to detect this crucial feature that portends future development of GA. Financial Disclosures: Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
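Calibrating the screening threshold for a target sensitivity of 0.99 can be done directly from the positive-class score distribution on a validation fold; a hedged sketch of that step (synthetic scores, illustrative names):

```python
import numpy as np

def threshold_for_sensitivity(scores, labels, target=0.99):
    """Largest threshold t such that P(score >= t | positive) >= target."""
    pos = np.sort(np.asarray(scores)[np.asarray(labels) == 1])
    n_keep = int(np.ceil(target * len(pos)))  # positives kept above t
    return pos[len(pos) - n_keep]

rng = np.random.default_rng(1)
scores = np.concatenate([rng.uniform(0.4, 1.0, 100),   # nGA B-scans
                         rng.uniform(0.0, 0.6, 900)])  # other B-scans
labels = np.array([1] * 100 + [0] * 900)
t = threshold_for_sensitivity(scores, labels)
print(t, (scores >= t).mean())  # threshold, fraction sent for review
```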

14.
Med Image Anal; 94: 103124, 2024 May.
Article in English | MEDLINE | ID: mdl-38428271

ABSTRACT

Analyzing high-resolution whole slide images (WSIs) with regard to information across multiple scales poses a significant challenge in digital pathology. Multi-instance learning (MIL) is a common solution for working with high-resolution images by classifying bags of objects (i.e., sets of smaller image patches). However, such processing is typically performed at a single scale (e.g., 20× magnification) of WSIs, disregarding the vital inter-scale information that is key to diagnoses by human pathologists. In this study, we propose a novel cross-scale MIL algorithm to explicitly aggregate inter-scale relationships into a single MIL network for pathological image diagnosis. The contribution of this paper is three-fold: (1) a novel cross-scale MIL (CS-MIL) algorithm that integrates multi-scale information and inter-scale relationships is proposed; (2) a toy dataset with scale-specific morphological features is created and released to examine and visualize differential cross-scale attention; (3) superior performance on both in-house and public datasets is demonstrated by our simple cross-scale MIL strategy. The official implementation is publicly available at https://github.com/hrlblab/CS-MIL.


Assuntos
Algoritmos , Diagnóstico por Imagem , Humanos
15.
Med Image Anal; 97: 103251, 2024 Oct.
Article in English | MEDLINE | ID: mdl-38954942

ABSTRACT

Accurate histopathological subtype prediction is clinically significant for cancer diagnosis and tumor microenvironment analysis. However, it is a challenging task due to (1) the instance-level discrimination required for histopathological images, (2) low inter-class and large intra-class variance among histopathological images in shape and chromatin texture, and (3) heterogeneous feature distributions across images. In this paper, we formulate subtype prediction as fine-grained representation learning and propose a novel multi-instance selective transformer (MIST) framework that achieves accurate histopathological subtype prediction. The proposed MIST designs an effective selective self-attention mechanism with multi-instance learning (MIL) and a vision transformer (ViT) to adaptively identify informative instances for fine-grained representation. Innovatively, the MIST entrusts each instance with a different contribution to the bag representation based on its interactions with other instances and bags. Specifically, a SiT module with selective multi-head self-attention (S-MSA) is designed to identify representative instances by modeling instance-to-instance interactions, while a MIFD module with an information bottleneck learns the discriminative fine-grained representation of histopathological images by modeling instance-to-bag interactions with the selected instances. Extensive experiments on five clinical benchmarks demonstrate that the MIST achieves accurate histopathological subtype prediction, obtaining state-of-the-art performance with an accuracy of 0.936. The MIST shows great potential for fine-grained medical image analysis, such as histopathological subtype prediction, in clinical applications.


Assuntos
Algoritmos , Humanos , Neoplasias/diagnóstico por imagem , Neoplasias/patologia , Interpretação de Imagem Assistida por Computador/métodos , Aprendizado de Máquina , Microambiente Tumoral
16.
Front Oncol; 14: 1362850, 2024.
Article in English | MEDLINE | ID: mdl-39267824

ABSTRACT

Introduction: Early detection of pancreatic cancer continues to be a challenge due to the difficulty of accurately identifying specific signs or symptoms that might correlate with its onset. Unlike breast, colon, or prostate cancer, where screening tests are often useful in identifying cancerous development, there are no such tests to diagnose pancreatic cancers. As a result, most pancreatic cancers are diagnosed at an advanced stage, where treatment options, whether systemic therapy, radiation, or surgical interventions, offer limited efficacy. Methods: A two-stage, weakly supervised, deep learning-based model is proposed to identify pancreatic tumors using computed tomography (CT) images from Henry Ford Health (HFH) and the publicly available Memorial Sloan Kettering Cancer Center (MSKCC) data sets. In the first stage, the nnU-Net supervised segmentation model, trained on the MSKCC repository of 281 patient image sets with established pancreatic tumors, was used to crop an area at the location of the pancreas. In the second stage, a multi-instance learning-based, weakly supervised classification model was applied to the cropped pancreas region to separate pancreatic tumors from normal-appearing pancreas. The model was trained, tested, and validated on images obtained from an HFH repository with 463 cases and 2,882 controls. Results: The proposed two-stage deep learning model offers an accuracy of 0.907 ± 0.01, sensitivity of 0.905 ± 0.01, specificity of 0.908 ± 0.02, and AUC (ROC) of 0.903 ± 0.01. The two-stage framework can automatically differentiate pancreatic tumor from non-tumor pancreas with improved accuracy on the HFH dataset. Discussion: The proposed two-stage deep learning architecture shows significantly enhanced performance for predicting the presence of a tumor in the pancreas using CT images compared with other studies reported in the literature.

17.
Comput Methods Programs Biomed; 250: 108164, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38718709

ABSTRACT

BACKGROUND AND OBJECTIVE: Current automatic electrocardiogram (ECG) diagnostic systems can provide classification outcomes but often lack explanations for these results, a limitation that hampers their application in clinical diagnosis. Without manual labeling of large ECG datasets, previous supervised learning methods could not highlight abnormal segments accurately enough for clinical application. METHOD: In this study, we present a multi-instance learning framework called MA-MIL, which uses a multi-layer, multi-instance structure aggregated step by step at different scales. We evaluated our method using the public MIT-BIH dataset and our private dataset. RESULTS: The results show that our model performed well in both ECG classification and abnormal segment detection at the heartbeat and sub-heartbeat levels, with accuracy and F1 scores of 0.987 and 0.986 for ECG classification and 0.968 and 0.949 for heartbeat-level abnormality detection, respectively. Compared to visualization methods, the IoU values of MA-MIL improved by at least 17% and at most 31% across all categories. CONCLUSIONS: MA-MIL can accurately locate abnormal ECG segments, offering more trustworthy results for clinical application.


Subjects
Algorithms, Electrocardiography, Supervised Machine Learning, Electrocardiography/methods, Humans, Heart Rate, Factual Databases, Computer-Assisted Signal Processing
18.
Comput Biol Med; 174: 108461, 2024 May.
Article in English | MEDLINE | ID: mdl-38626509

ABSTRACT

BACKGROUND: Positron emission tomography (PET) is extensively employed for diagnosing and staging various tumors, including liver cancer, lung cancer, and lymphoma. Accurate subtype classification of tumors plays a crucial role in formulating effective treatment plans for patients. Notably, lymphoma comprises subtypes such as diffuse large B-cell lymphoma and Hodgkin's lymphoma, while lung cancer encompasses adenocarcinoma, small cell carcinoma, and squamous cell carcinoma. Similarly, liver cancer consists of subtypes such as cholangiocarcinoma and hepatocellular carcinoma. Consequently, the subtype classification of tumors based on PET images holds immense clinical significance. However, in clinical practice, the number of cases available for each subtype is often limited and imbalanced, so the primary challenge lies in achieving precise subtype classification using a small dataset. METHOD: This paper presents a novel approach for tumor subtype classification in small datasets using RA-DL (Radiomics-DeepLearning) attention. To address the limited sample size, Support Vector Machines (SVM) are employed as the classifier for tumor subtypes instead of deep learning methods. Emphasizing the importance of texture information in tumor subtype recognition, radiomics features are extracted from the tumor regions during the feature extraction stage and compressed using an autoencoder to reduce redundancy. In addition to radiomics features, deep features are also extracted from the tumors to leverage the feature extraction capabilities of deep learning. In contrast to existing methods, our proposed approach utilizes the RA-DL-Attention mechanism to guide the deep network in extracting complementary deep features that enhance the expressive capacity of the final features while minimizing redundancy. To address the challenges of limited and imbalanced data, our method avoids using classification labels during deep feature extraction and instead incorporates 2D region-of-interest (ROI) segmentation and image reconstruction as auxiliary tasks. Subsequently, all lesion features of a single patient are aggregated into a feature vector using a multi-instance aggregation layer. RESULT: Validation experiments were conducted on three PET datasets, specifically the liver cancer, lung cancer, and lymphoma datasets. For lung cancer, our proposed method achieved Area Under the Curve (AUC) values of 0.82, 0.84, and 0.83 for the three-class task. For the binary classification task on lymphoma, our method achieved AUC values of 0.95 and 0.75, and for the binary classification task on liver tumors, AUC values of 0.84 and 0.86. CONCLUSION: The experimental results clearly indicate that our proposed method significantly outperforms alternative approaches. Through the extraction of complementary radiomics features and deep features, our method achieves a substantial improvement in tumor subtype classification performance using small PET datasets.


Assuntos
Tomografia por Emissão de Pósitrons , Máquina de Vetores de Suporte , Humanos , Tomografia por Emissão de Pósitrons/métodos , Neoplasias/diagnóstico por imagem , Neoplasias/classificação , Bases de Dados Factuais , Aprendizado Profundo , Interpretação de Imagem Assistida por Computador/métodos , Neoplasias Hepáticas/diagnóstico por imagem , Neoplasias Hepáticas/classificação , Neoplasias Pulmonares/diagnóstico por imagem , Neoplasias Pulmonares/classificação , Radiômica
19.
Brain Res; 1842: 149103, 2024 Nov 01.
Article in English | MEDLINE | ID: mdl-38955250

ABSTRACT

Amyloid PET scans help identify beta-amyloid deposition in different brain regions. The purpose of this study is to develop a deep learning model that can automate the task of estimating amyloid deposition in different regions of the brain using only the PET scan, without the corresponding MRI scan. 2647 18F-Florbetapir PET scans were collected from the Alzheimer's Disease Neuroimaging Initiative (ADNI), acquired at multiple centers over a period of time. A deep learning model based on multi-instance learning and attention is proposed; it is trained and validated using 80% of the scans, and the remaining 20% are used for testing. Model performance is evaluated using Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE). The proposed model is further tested on an external dataset of 1413 18F-Florbetapir PET scans from the Anti-Amyloid Treatment in Asymptomatic Alzheimer's (A4) study. The proposed model achieves an MAE of 0.0243 and an RMSE of 0.0320 for the summary Standardized Uptake Value Ratio (SUVR) based on the composite reference region for the ADNI test set. When tested on the A4-study dataset, it achieves an MAE of 0.038 and an RMSE of 0.0495 for the summary SUVR based on the composite region. These errors are lower than those of existing models. A graphical user interface based on the proposed model is developed, in which predictions are made by selecting 18F-Florbetapir PET scan files.
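For reference, the two reported error metrics are the standard mean absolute error and root mean squared error, computed here on toy SUVR values (the numbers are illustrative, not from the study):

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error."""
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

def rmse(y_true, y_pred):
    """Root mean squared error."""
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

true_suvr = np.array([1.10, 1.35, 0.98, 1.22])  # illustrative ground truth
pred_suvr = np.array([1.12, 1.31, 1.00, 1.26])
print(mae(true_suvr, pred_suvr), rmse(true_suvr, pred_suvr))  # 0.03 ~0.0316
```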


Assuntos
Doença de Alzheimer , Encéfalo , Disfunção Cognitiva , Tomografia por Emissão de Pósitrons , Humanos , Tomografia por Emissão de Pósitrons/métodos , Disfunção Cognitiva/diagnóstico por imagem , Disfunção Cognitiva/metabolismo , Encéfalo/metabolismo , Encéfalo/diagnóstico por imagem , Doença de Alzheimer/diagnóstico por imagem , Doença de Alzheimer/metabolismo , Idoso , Masculino , Feminino , Peptídeos beta-Amiloides/metabolismo , Neuroimagem/métodos , Aprendizado Profundo , Idoso de 80 Anos ou mais , Imageamento por Ressonância Magnética/métodos , Etilenoglicóis , Compostos de Anilina , Amiloide/metabolismo
20.
Int J Neural Syst; 34(9): 2450049, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39010725

ABSTRACT

Abnormal behavior recognition is an important technology used to detect and identify activities or events that deviate from normal behavior patterns. It has wide applications in fields such as network security, financial fraud detection, and video surveillance. In recent years, deep convolutional networks (ConvNets) have been widely applied in abnormal behavior recognition algorithms and have achieved significant results. However, existing abnormal behavior detection algorithms mainly focus on improving accuracy and have not explored real-time recognition, which is crucial for quickly identifying abnormal behavior in public places and improving urban public safety. Therefore, this paper proposes an abnormal behavior recognition algorithm based on three-dimensional (3D) dense connections. The proposed algorithm uses a multi-instance learning strategy to classify various types of abnormal behavior, and employs dense connection modules and soft-threshold attention mechanisms to reduce the model's parameter count and enhance computational efficiency. Finally, redundant information in the sequence is reduced through attention allocation to mitigate its negative impact on recognition results. Experimental verification shows that our method achieves a recognition accuracy of 95.61% on the UCF-Crime dataset, and comparative experiments demonstrate strong performance in both recognition accuracy and speed.


Subjects
Neural Networks (Computer), Humans, Automated Pattern Recognition/methods, Deep Learning, Algorithms, Crime, Behavior/physiology