1.
Eur J Radiol ; 180: 111712, 2024 Aug 28.
Article in English | MEDLINE | ID: mdl-39222565

ABSTRACT

BACKGROUND: Brain metastases (BMs) represent a severe neurological complication stemming from cancers of various origins. Accurately distinguishing the pathological subtypes of brain metastatic tumors from lung cancer (LC) is a highly challenging clinical task, and the utility of 2.5-dimensional (2.5D) deep learning (DL) in distinguishing these subtypes is yet to be determined. METHODS: A total of 250 patients were included in this retrospective study, divided in a 7:3 ratio into a training set (N=175) and a testing set (N=75). We devised a method to assemble a series of two-dimensional (2D) images by extracting slices adjacent to a central slice in both the superior-inferior and anterior-posterior directions to form a 2.5D dataset. Multi-instance learning (MIL) is a weakly supervised learning method that organizes training instances into "bags" and provides labels for entire bags, with the goal of learning a classifier from the labeled positive and negative bags that predicts the class of an unknown bag. We employed MIL to construct a comprehensive 2.5D feature set, and used the single central slice as input for constructing the 2D model. DL features were extracted from these slices using a pre-trained ResNet101. All feature sets were fed into a support vector machine (SVM) for evaluation. The diagnostic performance of the classification models was evaluated using five-fold cross-validation, with accuracy and area under the curve (AUC) calculated for analysis. RESULTS: The best performance was obtained with the 2.5D DL model, which achieved a micro-AUC of 0.868 (95% confidence interval [CI], 0.817-0.919) and an accuracy of 0.836 in the test cohort. The 2D model achieved a micro-AUC of 0.836 (95% CI, 0.778-0.894) and an accuracy of 0.827 in the test cohort. CONCLUSIONS: The proposed 2.5D DL model is feasible and effective for identifying pathological subtypes of BMs from lung cancer.
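The 2.5D assembly step lends itself to a short sketch. Below is a minimal illustration of stacking slices adjacent to a central slice along the superior-inferior and anterior-posterior directions to form a bag of 2D instances for MIL; the function name, neighborhood size `k`, and axis conventions are assumptions for illustration, not taken from the paper.

```python
# A minimal sketch of assembling a 2.5D bag of 2D instances from a 3D volume,
# assuming axis 0 is superior-inferior and axis 1 is anterior-posterior.
import numpy as np

def extract_2p5d(volume: np.ndarray, center: tuple[int, int], k: int = 2) -> list[np.ndarray]:
    """Collect 2k+1 axial and 2k+1 coronal slices around a central slice pair."""
    si, ap = center
    axial = [volume[i, :, :] for i in range(si - k, si + k + 1)]    # SI neighbors
    coronal = [volume[:, j, :] for j in range(ap - k, ap + k + 1)]  # AP neighbors
    # Each 2D slice becomes one instance in the patient's "bag".
    return axial + coronal

volume = np.random.rand(64, 64, 64).astype(np.float32)
bag = extract_2p5d(volume, center=(32, 32), k=2)
print(len(bag), bag[0].shape)  # 10 instances of shape (64, 64)
```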

2.
Front Genet ; 15: 1381851, 2024.
Article in English | MEDLINE | ID: mdl-39211737

ABSTRACT

Patients with a target gene mutation frequently derive significant clinical benefit from targeted therapy. However, differences in mutation abundance among patients result in varying survival benefits, even among patients with the same target gene mutations, and there is currently a lack of rational, interpretable models to assess the risk of treatment failure. In this study, we investigated the underlying coupled factors contributing to variations in medication sensitivity and established a statistically interpretable framework, named SAFE-MIL, for risk estimation. We first constructed an effectiveness label for each patient by exploring the optimal grouping of patients' positive judgment values, and sampled patients into 600 and 1,000 groups, respectively, based on multi-instance learning (MIL). A novel, interpretable loss function was further designed for this framework based on the Hosmer-Lemeshow test. By integrating multi-instance learning with the Hosmer-Lemeshow test, SAFE-MIL can accurately estimate the risk of drug treatment failure across diverse patient cohorts while simultaneously providing the optimal threshold for risk stratification. In a comprehensive case study involving 457 non-small cell lung cancer patients with EGFR mutations treated with EGFR tyrosine kinase inhibitors, SAFE-MIL outperformed traditional regression methods with higher accuracy and accurately assessed patients' risk stratification, underscoring its ability to capture inter-patient variability in risk while providing statistical interpretability. SAFE-MIL can effectively guide clinical decision-making regarding drug use in targeted therapy and offers an interpretable computational framework for other patient stratification problems, enhancing the precision of risk assessment in personalized medicine. The source code for SAFE-MIL is available at https://github.com/Nevermore233/SAFE-MIL.
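The abstract names the Hosmer-Lemeshow test as the basis of SAFE-MIL's loss but does not give the loss itself; the sketch below only computes the classical Hosmer-Lemeshow statistic that underlies it, with decile-style grouping assumed.

```python
# A minimal sketch of the Hosmer-Lemeshow calibration statistic; the
# differentiable loss built on it in SAFE-MIL is not specified in the abstract.
import numpy as np

def hosmer_lemeshow(y_true: np.ndarray, y_prob: np.ndarray, groups: int = 10) -> float:
    """Chi-square-like calibration statistic over probability deciles."""
    order = np.argsort(y_prob)
    chunks = np.array_split(order, groups)      # decile-style grouping
    stat = 0.0
    for idx in chunks:
        n = len(idx)
        observed = y_true[idx].sum()            # observed positives in group
        expected = y_prob[idx].sum()            # expected positives in group
        if 0 < expected < n:                    # guard against division by zero
            stat += (observed - expected) ** 2 / (expected * (1 - expected / n))
    return stat  # smaller means better-calibrated predictions

rng = np.random.default_rng(0)
p = rng.uniform(size=1000)
y = (rng.uniform(size=1000) < p).astype(int)    # well-calibrated synthetic labels
print(hosmer_lemeshow(y, p))
```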

3.
Front Transplant ; 3: 1305468, 2024.
Article in English | MEDLINE | ID: mdl-38993786

ABSTRACT

Two common obstacles limiting the performance of data-driven algorithms in digital histopathology classification tasks are the lack of expert annotations and the narrow diversity of datasets. Multi-instance learning (MIL) can address the former for the analysis of whole slide images (WSI), but its performance is often inferior to full supervision. We show that including weak annotations can significantly enhance the effectiveness of MIL while keeping the approach scalable. An analysis framework was developed to process periodic acid-Schiff (PAS) and Sirius Red (SR) slides of renal biopsies. The workflow segments tissues into coarse tissue classes. Handcrafted and deep features were extracted from these tissues and combined using a soft attention model to predict several slide-level labels: delayed graft function (DGF), acute tubular injury (ATI), and Remuzzi grade components. A tissue segmentation quality metric was also developed to reduce the adverse impact of poorly segmented instances. The soft attention model was trained using 5-fold cross-validation on a mixed dataset and tested on the QUOD dataset containing n = 373 PAS and n = 195 SR biopsies. The average ROC-AUC over the different prediction tasks was 0.598 ± 0.011, significantly higher than using only ResNet50 (0.545 ± 0.012), only handcrafted features (0.542 ± 0.011), and the state-of-the-art baseline (0.532 ± 0.012). In conjunction with soft attention, weighting tissues by segmentation quality led to further improvement (AUC = 0.618 ± 0.010). Using an intuitive visualisation scheme, we show that our approach may also support clinical decision making, as it allows pinpointing the individual tissues relevant to the predictions.
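A soft attention model weighted by a segmentation-quality metric can be sketched as follows; combining the quality score with the attention logits in log space is one plausible reading of the abstract, and all names are illustrative.

```python
# A hedged sketch of soft attention pooling over tissue instances, with each
# instance's attention additionally weighted by a segmentation-quality score.
import torch
import torch.nn as nn

class QualityWeightedAttention(nn.Module):
    def __init__(self, dim: int, hidden: int = 128):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def forward(self, feats: torch.Tensor, quality: torch.Tensor) -> torch.Tensor:
        # feats: (n_instances, dim); quality: (n_instances,) in [0, 1]
        logits = self.score(feats).squeeze(-1)
        logits = logits + torch.log(quality.clamp_min(1e-6))  # down-weight poor segmentations
        attn = torch.softmax(logits, dim=0)
        return (attn.unsqueeze(-1) * feats).sum(dim=0)        # slide-level embedding

pool = QualityWeightedAttention(dim=64)
slide = pool(torch.randn(12, 64), torch.rand(12))
print(slide.shape)  # torch.Size([64])
```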

4.
Int J Neural Syst ; 34(9): 2450049, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39010725

ABSTRACT

Abnormal behavior recognition is used to detect and identify activities or events that deviate from normal behavior patterns, with wide applications in fields such as network security, financial fraud detection, and video surveillance. In recent years, deep convolutional networks (ConvNets) have been widely applied in abnormal behavior recognition and have achieved significant results. However, existing abnormal behavior detection algorithms mainly focus on improving accuracy and have not explored the real-time aspect of recognition, which is crucial for quickly identifying abnormal behavior in public places and improving urban public safety. This paper therefore proposes an abnormal behavior recognition algorithm based on three-dimensional (3D) dense connections. The proposed algorithm uses a multi-instance learning strategy to classify various types of abnormal behaviors, and employs dense connection modules and soft-threshold attention mechanisms to reduce the model's parameter count and enhance computational efficiency. Finally, redundant information in the sequence is reduced through attention allocation to mitigate its negative impact on recognition results. Experiments show that our method achieves a recognition accuracy of 95.61% on the UCF-Crime dataset, and comparative experiments demonstrate strong performance in terms of both recognition accuracy and speed.
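The soft-threshold attention mechanism named above can be sketched as a learned per-channel shrinkage block. The design below follows the deep residual shrinkage network formulation; treating it as this paper's exact module is an assumption.

```python
# A hedged sketch of soft-threshold attention: small activations (likely
# redundant information) are shrunk toward zero using a threshold learned
# from globally pooled features.
import torch
import torch.nn as nn

class SoftThresholdAttention(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels), nn.ReLU(),
            nn.Linear(channels, channels), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time, height, width) feature map from a 3D ConvNet
        avg = x.abs().mean(dim=(2, 3, 4))       # (batch, channels) pooled magnitudes
        tau = avg * self.fc(avg)                # learned per-channel threshold
        tau = tau.view(*tau.shape, 1, 1, 1)
        return torch.sign(x) * torch.clamp(x.abs() - tau, min=0)  # soft thresholding

block = SoftThresholdAttention(16)
print(block(torch.randn(2, 16, 4, 8, 8)).shape)
```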


Subject(s)
Neural Networks, Computer; Humans; Pattern Recognition, Automated/methods; Deep Learning; Algorithms; Crime; Behavior/physiology
5.
Neural Netw ; 179: 106518, 2024 Jul 14.
Article in English | MEDLINE | ID: mdl-39068680

ABSTRACT

Graph convolutional networks (GCNs), an emerging class of neural networks, have shown great success in prognostics and health management because they can not only extract node features but also mine the relationships between nodes in graph data. However, most existing GCN-based methods are still limited by graph quality, variable working conditions, and limited data, making it difficult for them to achieve remarkable performance. This paper therefore proposes a two-stage importance-aware subgraph convolutional network based on multi-source sensors, named I2SGCN, to address these limitations. In real-world scenarios, the diagnostic performance of most existing GCNs is bounded by graph quality, because high-quality graphs are hard to obtain from a single sensor; we therefore leveraged multi-source sensors to construct graphs that contain more fault-related information about the mechanical equipment. We also observed that unsupervised domain adaptation (UDA) methods typically use a single stage to achieve cross-domain fault diagnosis and ignore more refined feature extraction, which can leave the representations contained in the features inadequate. Hence, we propose a two-stage fault diagnosis within the overall framework to achieve UDA. In the first stage, multiple-instance learning is adopted to obtain the importance factor of each sensor for preliminary fault diagnosis. In the second stage, I2SGCN performs refined cross-domain fault diagnosis. Moreover, deficient and limited data may cause label bias and biased training, reducing the generalization capacity of the method. We therefore constructed a feature-based graph and an importance-based graph to jointly mine more effective relationships, and presented a subgraph learning strategy that not only enriches sufficient and complementary features but also regularizes training. Comprehensive experiments on four case studies demonstrate the effectiveness and superiority of the proposed method for cross-domain fault diagnosis, outperforming state-of-the-art methods.

6.
Brain Res ; 1842: 149103, 2024 Nov 01.
Article in English | MEDLINE | ID: mdl-38955250

ABSTRACT

Amyloid PET scans help identify beta-amyloid deposition in different brain regions. The purpose of this study was to develop a deep learning model that automates the task of quantifying amyloid deposition in different regions of the brain using only the PET scan, without the corresponding MRI scan. A total of 2647 18F-Florbetapir PET scans were collected from the Alzheimer's Disease Neuroimaging Initiative (ADNI) across multiple centres. A deep learning model based on multi-instance learning and attention is proposed, trained and validated on 80% of the scans, with the remaining 20% used for testing. Performance was measured using mean absolute error (MAE) and root mean squared error (RMSE). The model was further tested on an external dataset of 1413 18F-Florbetapir PET scans from the Anti-Amyloid Treatment in Asymptomatic Alzheimer's (A4) study. The proposed model achieves an MAE of 0.0243 and an RMSE of 0.0320 for the summary standardized uptake value ratio (SUVR) based on the composite reference region on the ADNI test set, and an MAE of 0.038 and an RMSE of 0.0495 for the summary SUVR based on the composite region on the A4-study dataset. These results show that the proposed model yields lower MAE and RMSE than existing models. A graphical user interface was developed in which predictions are made by selecting 18F-Florbetapir PET scan files.
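An attention-based MIL regressor that pools slice-level features into a scan-level summary SUVR can be sketched as follows; the feature dimension, pooling form, and training objective shown are assumptions consistent with the abstract's description.

```python
# A hedged sketch of attention-MIL regression: slices are instances, the
# attention-pooled bag feature predicts the scan's summary SUVR.
import torch
import torch.nn as nn

class AttentionMILRegressor(nn.Module):
    def __init__(self, dim: int = 512, hidden: int = 128):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))
        self.head = nn.Linear(dim, 1)   # regress summary SUVR

    def forward(self, slices: torch.Tensor) -> torch.Tensor:
        # slices: (n_slices, dim) features extracted from one PET scan
        weights = torch.softmax(self.attn(slices), dim=0)  # (n_slices, 1)
        scan_feat = (weights * slices).sum(dim=0)          # attention pooling
        return self.head(scan_feat).squeeze(-1)

model = AttentionMILRegressor()
suvr = model(torch.randn(96, 512))                  # e.g., 96 axial slices
loss = nn.L1Loss()(suvr, torch.tensor(1.1))         # MAE objective, matching the reported metric
```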


Subject(s)
Alzheimer Disease; Brain; Cognitive Dysfunction; Positron-Emission Tomography; Humans; Positron-Emission Tomography/methods; Cognitive Dysfunction/diagnostic imaging; Cognitive Dysfunction/metabolism; Brain/metabolism; Brain/diagnostic imaging; Alzheimer Disease/diagnostic imaging; Alzheimer Disease/metabolism; Aged; Male; Female; Amyloid beta-Peptides/metabolism; Neuroimaging/methods; Deep Learning; Aged, 80 and over; Magnetic Resonance Imaging/methods; Ethylene Glycols; Aniline Compounds; Amyloid/metabolism
7.
Med Image Anal ; 97: 103251, 2024 Oct.
Article in English | MEDLINE | ID: mdl-38954942

ABSTRACT

Accurate histopathological subtype prediction is clinically significant for cancer diagnosis and tumor microenvironment analysis. However, it is a challenging task due to (1) the instance-level discrimination required for histopathological images, (2) the low inter-class and large intra-class variance of histopathological images in shape and chromatin texture, and (3) the heterogeneous feature distribution across images. In this paper, we formulate subtype prediction as fine-grained representation learning and propose a novel multi-instance selective transformer (MIST) framework that achieves accurate histopathological subtype prediction. MIST designs an effective selective self-attention mechanism combining multi-instance learning (MIL) and vision transformers (ViT) to adaptively identify informative instances for fine-grained representation. Innovatively, MIST entrusts each instance with a different contribution to the bag representation based on its interactions with other instances and with the bag. Specifically, a SiT module with selective multi-head self-attention (S-MSA) identifies representative instances by modeling instance-to-instance interactions, while a MIFD module with an information bottleneck learns the discriminative fine-grained representation of histopathological images by modeling instance-to-bag interactions with the selected instances. Substantial experiments on five clinical benchmarks demonstrate that MIST achieves accurate histopathological subtype prediction and obtains state-of-the-art performance with an accuracy of 0.936. MIST shows great potential for fine-grained medical image analysis, such as histopathological subtype prediction, in clinical applications.
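One plausible reading of the S-MSA idea is instance self-attention restricted to each instance's top-k most similar peers, sketched below; the real MIST design may differ in detail.

```python
# A minimal sketch of selective self-attention over instances: each instance
# attends only to its top-k most similar instances.
import torch

def selective_self_attention(x: torch.Tensor, k: int = 8) -> torch.Tensor:
    # x: (n_instances, dim)
    scores = x @ x.t() / x.shape[-1] ** 0.5            # (n, n) similarity logits
    kth = scores.topk(k, dim=-1).values[:, -1:]        # k-th largest per row
    scores = scores.masked_fill(scores < kth, float("-inf"))  # drop weak links
    attn = torch.softmax(scores, dim=-1)
    return attn @ x                                    # refined instance features

out = selective_self_attention(torch.randn(100, 64), k=8)
print(out.shape)  # torch.Size([100, 64])
```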


Subject(s)
Algorithms; Humans; Neoplasms/diagnostic imaging; Neoplasms/pathology; Image Interpretation, Computer-Assisted/methods; Machine Learning; Tumor Microenvironment
8.
Am J Otolaryngol ; 45(4): 104342, 2024.
Article in English | MEDLINE | ID: mdl-38703609

ABSTRACT

OBJECTIVE: To develop a multi-instance learning (MIL) based, artificial intelligence (AI)-assisted diagnosis model that differentiates benign from malignant vocal fold leukoplakia (VFL) using laryngoscopic images. METHODS: The AI system was developed, trained, and validated on 5362 images from 551 patients at three hospitals. An automated region-of-interest (ROI) segmentation algorithm was used to construct image-level features. MIL was used to fuse image-level results into patient-level features, which were then modeled by seven machine learning algorithms. We evaluated both the image-level and patient-level results. Additionally, 50 VFL videos were prospectively gathered to assess the system's real-time diagnostic capabilities, and a human-machine comparison database was constructed to compare the diagnostic performance of otolaryngologists with and without AI assistance. RESULTS: In the internal and external validation sets, the maximum area under the curve (AUC) of the image-level segmentation models was 0.775 (95% CI 0.740-0.811) and 0.720 (95% CI 0.684-0.756), respectively. With the MIL-based fusion strategy, the patient-level AUC increased to 0.869 (95% CI 0.798-0.940) and 0.851 (95% CI 0.756-0.945). For real-time video diagnosis, the maximum patient-level AUC reached 0.850 (95% CI 0.743-0.957). With AI assistance, the AUC improved from 0.720 (95% CI 0.682-0.755) to 0.808 (95% CI 0.775-0.839) for senior otolaryngologists and from 0.647 (95% CI 0.608-0.686) to 0.807 (95% CI 0.773-0.837) for junior otolaryngologists. CONCLUSIONS: The MIL-based AI-assisted diagnosis system can significantly improve otolaryngologists' diagnostic performance for VFL and help inform proper clinical decisions.
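The MIL fusion step, going from per-image probabilities to patient-level features for a downstream classifier, can be sketched as follows; the pooling statistics and the choice of an SVM as one of the seven candidate algorithms are assumptions.

```python
# A hedged sketch of fusing a variable-length bag of image-level malignancy
# probabilities into fixed patient-level features for a classifier.
import numpy as np
from sklearn.svm import SVC

def patient_features(image_probs: np.ndarray) -> np.ndarray:
    """Summarize one patient's bag of per-image probabilities."""
    return np.array([
        image_probs.max(),           # most suspicious image
        image_probs.mean(),          # overall burden
        np.quantile(image_probs, 0.75),
        (image_probs > 0.5).mean(),  # fraction of suspicious images
    ])

rng = np.random.default_rng(0)
bags = [rng.uniform(size=rng.integers(3, 15)) for _ in range(40)]
X = np.stack([patient_features(b) for b in bags])
y = rng.integers(0, 2, size=40)               # benign vs malignant labels
clf = SVC(probability=True).fit(X, y)         # one plausible patient-level model
print(clf.predict_proba(X[:2]))
```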


Subject(s)
Artificial Intelligence; Laryngoscopy; Leukoplakia; Vocal Cords; Humans; Vocal Cords/diagnostic imaging; Vocal Cords/pathology; Laryngoscopy/methods; Male; Leukoplakia/diagnosis; Leukoplakia/pathology; Female; Middle Aged; Aged; Diagnosis, Computer-Assisted/methods; Machine Learning; Diagnosis, Differential; Adult; Algorithms; Laryngeal Neoplasms/diagnosis; Laryngeal Neoplasms/pathology; Laryngeal Neoplasms/diagnostic imaging
9.
Laryngoscope ; 2024 May 27.
Article in English | MEDLINE | ID: mdl-38801129

ABSTRACT

OBJECTIVES: Vocal fold leukoplakia (VFL) is a precancerous lesion of laryngeal cancer, and its endoscopic diagnosis poses challenges. We aimed to develop an artificial intelligence (AI) model using white light imaging (WLI) and narrow-band imaging (NBI) to distinguish benign from malignant VFL. METHODS: A total of 7057 images from 426 patients were used for model development and internal validation, and 1617 images from two other hospitals were used for external validation. Models based on the WLI and NBI modalities were built using deep learning combined with a multi-instance learning (MIL) approach. Fifty prospectively collected videos were used to evaluate real-time model performance, and a human-machine comparison involving 100 patients and 12 laryngologists assessed the model's real-world effectiveness. RESULTS: The model achieved the highest area under the receiver operating characteristic curve (AUC) values of 0.868 and 0.884 in the internal and external validation sets, respectively. The AUC in the video validation set was 0.825 (95% CI: 0.704-0.946). In the human-machine comparison, AI significantly improved the AUC and accuracy of all laryngologists (p < 0.05); with AI assistance, the diagnostic ability and consistency of all laryngologists improved. CONCLUSIONS: Our multicenter study developed an effective AI model using MIL and the fusion of WLI and NBI images for VFL diagnosis, particularly aiding junior laryngologists. However, further optimization and validation are necessary to fully assess its potential impact in clinical settings. LEVEL OF EVIDENCE: 3 Laryngoscope, 2024.

10.
Comput Methods Programs Biomed ; 250: 108164, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38718709

ABSTRACT

BACKGROUND AND OBJECTIVE: Current automatic electrocardiogram (ECG) diagnostic systems provide classification outcomes but often lack explanations for those results, which hampers their application in clinical diagnosis. Without manual labeling of large ECG datasets, previous supervised learning methods could not highlight abnormal segments accurately enough for clinical use. METHOD: In this study, we present a multi-instance learning framework called MA-MIL, which uses a multi-layer, multi-instance structure aggregated step by step at different scales. We evaluated our method on the public MIT-BIH dataset and our private dataset. RESULTS: Our model performed well in both ECG classification and heartbeat-level and sub-heartbeat-level abnormal segment detection, with accuracy and F1 scores of 0.987 and 0.986 for ECG classification and 0.968 and 0.949 for heartbeat-level abnormality detection, respectively. Compared with visualization methods, the IoU values of MA-MIL improved by at least 17% and at most 31% across all categories. CONCLUSIONS: MA-MIL can accurately locate abnormal ECG segments, offering more trustworthy results for clinical application.
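A two-level version of the step-by-step aggregation described above can be sketched as follows: sub-heartbeat segments are pooled into heartbeat embeddings, which are pooled into a recording-level prediction; the attention pooling at each level is an assumption.

```python
# A hedged sketch of multi-layer MIL aggregation for ECG: segment -> heartbeat
# -> recording, with the intermediate attention weights localizing abnormality.
import torch
import torch.nn as nn

class AttnPool(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, x: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
        w = torch.softmax(self.score(x), dim=0)   # instance weights
        return (w * x).sum(dim=0), w              # pooled feature + weights

dim = 32
segment_pool, beat_pool = AttnPool(dim), AttnPool(dim)
classifier = nn.Linear(dim, 2)

# 8 heartbeats, each split into 6 sub-heartbeat segments of dim-d features.
beats = [torch.randn(6, dim) for _ in range(8)]
beat_feats, seg_weights = zip(*(segment_pool(b) for b in beats))
ecg_feat, beat_weights = beat_pool(torch.stack(beat_feats))
logits = classifier(ecg_feat)
# seg_weights / beat_weights point to abnormal segments without segment labels.
```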


Subject(s)
Algorithms; Electrocardiography; Supervised Machine Learning; Electrocardiography/methods; Humans; Heart Rate; Databases, Factual; Signal Processing, Computer-Assisted
11.
Comput Biol Med ; 174: 108461, 2024 May.
Article in English | MEDLINE | ID: mdl-38626509

ABSTRACT

BACKGROUND: Positron emission tomography (PET) is extensively employed for diagnosing and staging various tumors, including liver cancer, lung cancer, and lymphoma, and accurate subtype classification plays a crucial role in formulating effective treatment plans. Lymphoma comprises subtypes such as diffuse large B-cell lymphoma and Hodgkin's lymphoma; lung cancer encompasses adenocarcinoma, small cell carcinoma, and squamous cell carcinoma; and liver cancer includes cholangiocarcinoma and hepatocellular carcinoma. Subtype classification of tumors based on PET images therefore holds immense clinical significance. In clinical practice, however, the number of cases available for each subtype is often limited and imbalanced, so the primary challenge lies in achieving precise subtype classification from a small dataset. METHOD: This paper presents a novel approach to tumor subtype classification on small datasets using RA-DL (Radiomics-DeepLearning) attention. To address the limited sample size, support vector machines (SVM) are employed as the subtype classifier instead of deep learning methods. Given the importance of texture information in subtype recognition, radiomics features are extracted from the tumor regions during the feature extraction stage and compressed with an autoencoder to reduce redundancy. Deep features are also extracted from the tumors to leverage the feature extraction capabilities of deep learning. In contrast to existing methods, our approach uses the RA-DL attention mechanism to guide the deep network toward complementary deep features that enhance the expressive capacity of the final features while minimizing redundancy. To cope with limited and imbalanced data, our method avoids using classification labels during deep feature extraction and instead incorporates 2D region-of-interest (ROI) segmentation and image reconstruction as auxiliary tasks. All lesion features of a single patient are then aggregated into one feature vector by a multi-instance aggregation layer. RESULT: Validation experiments were conducted on three PET datasets covering liver cancer, lung cancer, and lymphoma. For lung cancer, the proposed method achieved area under the curve (AUC) values of 0.82, 0.84, and 0.83 on the three-class task. For the binary lymphoma task, it achieved AUC values of 0.95 and 0.75, and for the binary liver tumor task, AUC values of 0.84 and 0.86. CONCLUSION: The experimental results clearly indicate that our proposed method significantly outperforms alternative approaches. By extracting complementary radiomics and deep features, it substantially improves tumor subtype classification performance on small PET datasets.
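The final stages, aggregating each patient's lesion features with a multi-instance aggregation layer and classifying with an SVM, can be sketched as follows; mean pooling and the feature dimensions are assumptions.

```python
# A minimal sketch: per-lesion radiomics and deep features are concatenated,
# aggregated across a patient's lesions (mean pooling assumed), and classified.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)

def patient_vector(radiomics: np.ndarray, deep: np.ndarray) -> np.ndarray:
    # radiomics: (n_lesions, 16) autoencoder-compressed; deep: (n_lesions, 64)
    lesion_feats = np.concatenate([radiomics, deep], axis=1)
    return lesion_feats.mean(axis=0)          # multi-instance aggregation layer

X = np.stack([
    patient_vector(rng.normal(size=(n, 16)), rng.normal(size=(n, 64)))
    for n in rng.integers(1, 6, size=30)      # 1-5 lesions per patient
])
y = rng.integers(0, 3, size=30)               # e.g., three lung-cancer subtypes
clf = SVC(decision_function_shape="ovr").fit(X, y)
print(clf.predict(X[:3]))
```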


Subject(s)
Positron-Emission Tomography; Support Vector Machine; Humans; Positron-Emission Tomography/methods; Neoplasms/diagnostic imaging; Neoplasms/classification; Databases, Factual; Deep Learning; Image Interpretation, Computer-Assisted/methods; Liver Neoplasms/diagnostic imaging; Liver Neoplasms/classification; Lung Neoplasms/diagnostic imaging; Lung Neoplasms/classification; Radiomics
12.
Nan Fang Yi Ke Da Xue Xue Bao ; 44(3): 585-593, 2024 Mar 20.
Article in Chinese | MEDLINE | ID: mdl-38597451

ABSTRACT

OBJECTIVE: To develop a multi-modal deep learning method for automatic classification of immune-mediated glomerular diseases based on images from optical microscopy (OM), immunofluorescence microscopy (IM), and transmission electron microscopy (TEM). METHODS: We retrospectively collected pathological images from 273 patients and constructed a multi-modal multi-instance model for classifying 3 immune-mediated glomerular diseases: immunoglobulin A nephropathy (IgAN), membranous nephropathy (MN), and lupus nephritis (LN). The model adopts an instance-level multi-instance learning (I-MIL) method to select TEM images for multi-modal feature fusion with the OM and IM images of the same patient. By comparing this model with unimodal and bimodal models, we explored different combinations of the 3 modalities and the optimal methods for modal feature fusion. RESULTS: The multi-modal multi-instance model combining OM, IM, and TEM images achieved a disease classification accuracy of (88.34±2.12)%, superior to the optimal unimodal model [(87.08±4.25)%] and the optimal bimodal model [(87.92±3.06)%]. CONCLUSION: This multi-modal multi-instance model based on OM, IM, and TEM images can automatically classify immune-mediated glomerular diseases with good accuracy.
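The I-MIL selection step can be sketched as scoring TEM instances and fusing the top-scoring ones with the patient's OM and IM features; selection by top-k score is an assumption, as the abstract does not specify the selection rule.

```python
# A hedged sketch of instance-level selection followed by multi-modal fusion.
import torch
import torch.nn as nn

scorer = nn.Linear(256, 1)                    # instance-level relevance score

def fuse(tem: torch.Tensor, om: torch.Tensor, im: torch.Tensor, k: int = 4) -> torch.Tensor:
    # tem: (n_tem_images, 256) instance features; om, im: (256,) modality features
    idx = scorer(tem).squeeze(-1).topk(min(k, tem.shape[0])).indices
    tem_feat = tem[idx].mean(dim=0)           # summary of selected TEM instances
    return torch.cat([om, im, tem_feat])      # joint feature for the classifier

print(fuse(torch.randn(9, 256), torch.randn(256), torch.randn(256)).shape)
```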


Subject(s)
Glomerulonephritis, IGA; Levamisole/analogs & derivatives; Humans; Retrospective Studies; Microscopy, Fluorescence; Microscopy, Electron, Transmission
13.
Med Image Anal ; 94: 103124, 2024 May.
Article in English | MEDLINE | ID: mdl-38428271

ABSTRACT

Analyzing high-resolution whole slide images (WSIs) across multiple scales poses a significant challenge in digital pathology. Multi-instance learning (MIL) is a common solution for working with high-resolution images, classifying bags of objects (i.e., sets of smaller image patches). However, such processing is typically performed at a single scale (e.g., 20× magnification) of the WSI, disregarding the vital inter-scale information that is key to diagnoses by human pathologists. In this study, we propose a novel cross-scale MIL algorithm that explicitly aggregates inter-scale relationships into a single MIL network for pathological image diagnosis. The contribution of this paper is three-fold: (1) a novel cross-scale MIL (CS-MIL) algorithm that integrates multi-scale information and inter-scale relationships; (2) a toy dataset with scale-specific morphological features, created and released to examine and visualize differential cross-scale attention; and (3) superior performance on both in-house and public datasets demonstrated by our simple cross-scale MIL strategy. The official implementation is publicly available at https://github.com/hrlblab/CS-MIL.
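The gist of cross-scale aggregation can be sketched with per-patch scale attention followed by bag-level instance attention; this is a simplified reading, and the official implementation at the URL above is authoritative.

```python
# A hedged sketch of cross-scale MIL: the same region's features at several
# magnifications are fused with scale attention before bag-level pooling.
import torch
import torch.nn as nn

class CrossScalePool(nn.Module):
    def __init__(self, dim: int, n_scales: int):
        super().__init__()
        self.scale_attn = nn.Linear(dim, 1)   # weighs each scale per patch
        self.inst_attn = nn.Linear(dim, 1)    # weighs each patch in the bag

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (n_patches, n_scales, dim), same region at each magnification
        s = torch.softmax(self.scale_attn(feats), dim=1)
        patch = (s * feats).sum(dim=1)                 # cross-scale fusion
        w = torch.softmax(self.inst_attn(patch), dim=0)
        return (w * patch).sum(dim=0)                  # WSI-level embedding

pool = CrossScalePool(dim=128, n_scales=3)             # e.g., 5x, 10x, 20x
print(pool(torch.randn(50, 3, 128)).shape)
```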


Subject(s)
Algorithms; Diagnostic Imaging; Humans
14.
Front Immunol ; 15: 1345586, 2024.
Article in English | MEDLINE | ID: mdl-38515756

ABSTRACT

Introduction: T cell receptor (TCR) repertoires provide valuable insights into complex human diseases, including cancers, and recent advances in immune sequencing technology have significantly improved our understanding of the TCR repertoire. Several computational methods have been devised to identify cancer-associated TCRs and enable cancer detection from TCR sequencing data. However, existing methods are often limited by their inadequate consideration of the correlations among TCRs within a repertoire, hindering the identification of crucial TCRs, and the sparse distribution of cancer-associated TCRs makes accurate prediction challenging. Methods: To address these issues, we present DeepLION2, an innovative deep multi-instance contrastive learning framework designed to enhance cancer-associated TCR prediction. DeepLION2 leverages content-based sparse self-attention, focusing on the top k related TCRs for each TCR, to effectively model inter-TCR correlations, and adopts a contrastive learning strategy to bootstrap parameter updates of the attention matrix, preventing the model from fixating on non-cancer-associated TCRs. Results: Extensive experiments on diverse patient cohorts encompassing over ten cancer types demonstrated that DeepLION2 significantly outperforms current state-of-the-art methods in accuracy, sensitivity, specificity, Matthews correlation coefficient, and area under the curve (AUC). Notably, DeepLION2 achieved AUC values of 0.933, 0.880, and 0.763 on thyroid, lung, and gastrointestinal cancer cohorts, respectively. It also effectively identified cancer-associated TCRs and their key motifs, highlighting the amino acids that play a crucial role in TCR-peptide binding. Conclusion: These results underscore DeepLION2's potential for enhancing cancer detection and facilitating personalized cancer immunotherapy. DeepLION2 is publicly available on GitHub at https://github.com/Bioinformatics7181/DeepLION2 for academic use only.


Subject(s)
Neoplasms; Receptors, Antigen, T-Cell; Humans; Peptides; Immunotherapy; Neoplasms/genetics
15.
Sci Rep ; 14(1): 3109, 2024 02 07.
Article in English | MEDLINE | ID: mdl-38326410

ABSTRACT

Small-field-of-view reconstruction CT images (sFOV-CT) increase the pixel density across airway structures and reduce partial volume effects. Multi-instance learning (MIL) is a weakly supervised machine learning method that can automatically assess image quality. The aim of this study was to evaluate the disparities between conventional CT (c-CT) and sFOV-CT images using an MIL-based lung nodule system and assessments from radiologists. 112 patients who underwent chest CT between July 2021 and March 2022 were retrospectively enrolled. After the c-CT examinations, small-field-of-view sFOV-CT images were reconstructed. Two radiologists analyzed all c-CT and sFOV-CT images, recording features such as location, nodule type, size, CT values, and shape signs. An MIL-based lung nodule system then objectively analyzed the c-CT (c-MIL) and sFOV-CT (sFOV-MIL) images to explore their differences. The signal-to-noise ratio of the lungs (SNR-lung) and the contrast-to-noise ratio of the nodules (CNR-nodule) were calculated to evaluate CT image quality from another perspective. In the radiologists' subjective evaluation, only the minimal CT value differed significantly between c-CT and sFOV-CT (p = 0.019). By contrast, most features extracted by the MIL system (all with p < 0.05), except for nodule type, location, volume, mean CT value, and vacuole sign (p = 0.056-1.000), differed significantly between c-MIL and sFOV-MIL. The SNR-lung did not differ significantly between c-CT and sFOV-CT, whereas the CNR-nodule did (p = 0.007), with the CNR of sFOV-CT being higher than that of c-CT. In detecting differences between c-CT and sFOV-CT, features extracted by the MIL system showed more statistical differences than those evaluated by radiologists. The image quality of the two CT reconstructions differed, and the CNR-nodule of sFOV-CT was higher than that of c-CT.
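The abstract does not give formulas for SNR-lung and CNR-nodule, so the sketch below uses the conventional definitions, with the ROI choices and HU values as illustrative assumptions.

```python
# A hedged sketch of the two image-quality metrics compared in the study.
import numpy as np

def snr_lung(lung_hu: np.ndarray) -> float:
    """Signal-to-noise ratio over a lung ROI (|mean| / standard deviation)."""
    return abs(lung_hu.mean()) / lung_hu.std()

def cnr_nodule(nodule_hu: np.ndarray, background_hu: np.ndarray) -> float:
    """Contrast-to-noise ratio of a nodule against the surrounding lung."""
    return abs(nodule_hu.mean() - background_hu.mean()) / background_hu.std()

rng = np.random.default_rng(0)
lung = rng.normal(-850, 40, size=5000)   # HU values sampled from a lung ROI
nodule = rng.normal(-50, 30, size=300)   # HU values inside a solid nodule
print(snr_lung(lung), cnr_nodule(nodule, lung))
```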


Subject(s)
Lung Neoplasms; Radiographic Image Interpretation, Computer-Assisted; Humans; Retrospective Studies; Radiographic Image Interpretation, Computer-Assisted/methods; Tomography, X-Ray Computed/methods; Lung; Lung Neoplasms/diagnostic imaging; Radiation Dosage; Algorithms
16.
Ophthalmol Sci ; 4(3): 100428, 2024.
Article in English | MEDLINE | ID: mdl-38284101

ABSTRACT

Purpose: Nascent geographic atrophy (nGA) refers to specific features seen on OCT B-scans that are strongly associated with the future development of geographic atrophy (GA). This study sought to develop a deep learning model that screens OCT B-scans for nGA warranting further manual review (an artificial intelligence [AI]-assisted approach), and to determine how far the manual-review load of OCT B-scans can be reduced while maintaining near-perfect nGA detection. Design: Development and evaluation of a deep learning model. Participants: One thousand eight hundred and eighty-four OCT volume scans (49 B-scans per volume) without neovascular age-related macular degeneration, from 280 eyes of 140 participants with bilateral large drusen at baseline, seen at 6-monthly intervals over up to 36 months (during which 40 eyes developed nGA). Methods: OCT volumes and B-scans were labeled for the presence of nGA. Their presence at the volume-scan level provided the ground truth for training a deep learning model to identify OCT B-scans potentially showing nGA and requiring manual review. Using a threshold that provided a sensitivity of 0.99, the identified B-scans were assigned the ground-truth label under the AI-assisted approach. The performance of this approach for detecting nGA across all visits, or at the visit of nGA onset, was evaluated using fivefold cross-validation. Main Outcome Measures: Sensitivity for detecting nGA, and the proportion of OCT B-scans requiring manual review. Results: The AI-assisted approach (using the deep learning model's outputs to guide manual review) had a sensitivity of 0.97 (95% confidence interval [CI] = 0.93-1.00) and 0.95 (95% CI = 0.87-1.00) for detecting nGA across all visits and at the visit of nGA onset, respectively, while requiring manual review of only 2.7% and 1.9% of selected OCT B-scans, respectively. Conclusions: A deep learning model could enable near-perfect detection of nGA onset while reducing the number of OCT B-scans requiring manual review by over 50-fold. This AI-assisted approach shows promise for substantially reducing the current burden of manual review of OCT B-scans to detect this crucial feature that portends the future development of GA. Financial Disclosures: Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
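The triage rule, choosing a decision threshold that preserves a sensitivity of 0.99 and then measuring how many B-scans still need manual review, can be sketched as follows; the score distributions are synthetic and all names are illustrative.

```python
# A minimal sketch of sensitivity-constrained threshold selection for an
# AI-assisted review workflow.
import numpy as np

def threshold_for_sensitivity(y_true: np.ndarray, scores: np.ndarray,
                              target: float = 0.99) -> float:
    pos_scores = np.sort(scores[y_true == 1])
    # Largest threshold that still flags >= target of the positives.
    cutoff_index = int(np.floor((1 - target) * len(pos_scores)))
    return pos_scores[cutoff_index]

rng = np.random.default_rng(0)
y = (rng.uniform(size=20000) < 0.01).astype(int)        # nGA B-scans are rare
s = np.where(y == 1, rng.beta(5, 2, 20000), rng.beta(2, 8, 20000))
t = threshold_for_sensitivity(y, s)
flagged = s >= t
print(f"review {flagged.mean():.1%} of B-scans, "
      f"sensitivity {(flagged & (y == 1)).sum() / y.sum():.3f}")
```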

17.
Mod Pathol ; 37(1): 100373, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37925056

ABSTRACT

Current flow cytometric analysis of blood and bone marrow samples for the diagnosis of acute myeloid leukemia (AML) relies heavily on manual intervention in both the processing and analysis steps, introducing significant subjectivity into the resulting diagnoses and necessitating highly trained personnel. Furthermore, concurrent molecular characterization via cytogenetics and targeted sequencing can take multiple days, delaying patient diagnosis and treatment. Attention-based multi-instance learning models (ABMILMs) are deep learning models that make accurate predictions and generate interpretable insights into the classification of a sample from its individual events/cells; nonetheless, these models had not previously been applied to flow cytometry data. In this study, we developed a computational pipeline using ABMILMs for the automated diagnosis of AML based exclusively on flow cytometric data. Analysis of 1820 flow cytometry samples shows that this pipeline provides accurate diagnoses of acute leukemia (area under the receiver operating characteristic curve [AUROC] 0.961) and accurately differentiates AML from B- and T-lymphoblastic leukemia (AUROC 0.965). Models predicting 9 cytogenetic aberrancies and 32 pathogenic variants in AML perform accurately, particularly for t(15;17)(PML::RARA) (AUROC 0.929), t(8;21)(RUNX1::RUNX1T1) (AUROC 0.814), and NPM1 variants (AUROC 0.807). Finally, we demonstrate how these models generate interpretable insights into which individual flow cytometric events and markers deliver optimal diagnostic utility, providing hematopathologists with a data visualization tool for improved data interpretation, as well as novel biological associations between flow cytometric marker expression and cytogenetic/molecular variants in AML. Our study is the first to illustrate the feasibility of deep learning-based analysis of flow cytometric data for automated AML diagnosis and molecular characterization.
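An ABMILM over flow cytometry events can be sketched with gated attention pooling in the style of Ilse et al. (2018), where per-event attention weights indicate which cells drove the diagnosis; treating this as the paper's exact architecture is an assumption.

```python
# A hedged sketch of attention-based MIL over flow cytometry events: each
# event (cell) is an instance, and attention weights expose informative events.
import torch
import torch.nn as nn

class GatedAttentionMIL(nn.Module):
    def __init__(self, n_markers: int = 10, dim: int = 64):
        super().__init__()
        self.embed = nn.Sequential(nn.Linear(n_markers, dim), nn.ReLU())
        self.v = nn.Sequential(nn.Linear(dim, dim), nn.Tanh())
        self.u = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())
        self.w = nn.Linear(dim, 1)
        self.cls = nn.Linear(dim, 1)

    def forward(self, events: torch.Tensor):
        # events: (n_events, n_markers) fluorescence intensities
        h = self.embed(events)
        a = torch.softmax(self.w(self.v(h) * self.u(h)), dim=0)  # gated attention
        bag = (a * h).sum(dim=0)
        return self.cls(bag), a          # diagnosis logit + per-event weights

model = GatedAttentionMIL()
logit, weights = model(torch.randn(5000, 10))
print(weights.topk(5, dim=0).indices.squeeze())   # most informative events
```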


Subject(s)
Deep Learning; Leukemia, Myeloid, Acute; Humans; Flow Cytometry/methods; Leukemia, Myeloid, Acute/diagnosis; Leukemia, Myeloid, Acute/genetics; Leukemia, Myeloid, Acute/metabolism; Acute Disease; Cytogenetics
18.
Comput Methods Programs Biomed ; 244: 107936, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38016392

ABSTRACT

BACKGROUND AND OBJECTIVE: Esophageal cancer is a serious disease with a high prevalence in East Asia, and histopathology tissue analysis is the gold standard for diagnosing it. In recent years, there has been a shift towards digitizing histopathological images into whole slide images (WSIs) and progressively integrating them into cancer diagnostics. However, the gigapixel size of WSIs presents significant storage and processing challenges, and WSIs often lack localized annotations. To address this, multi-instance learning (MIL), a weakly supervised approach, has been introduced for WSI classification; applying MIL to WSI analysis can reduce pathologists' workload by facilitating the generation of localized annotations. Nevertheless, its effectiveness is hindered by the traditional simple aggregation operation and by the domain shift resulting from the prevalent use of convolutional feature extractors pretrained on ImageNet. METHODS: We propose a MIL-based framework for WSI analysis and cancer classification. We also employ self-supervised learning, which obviates the need for manual annotation and is versatile across tasks, to pretrain the feature extractors. This enhances the extraction of representative features from esophageal WSIs for MIL, ensuring more robust and accurate performance. RESULTS: We built a comprehensive dataset of whole esophageal slide images and conducted extensive experiments on it. Our framework outperforms existing methods, achieving an accuracy of 93.07% and an AUC (area under the curve) of 95.31%, demonstrating the efficiency of the proposed MIL framework and the pretraining process. CONCLUSION: This work proposes an effective MIL method for classifying WSIs of esophageal cancer. The promising results indicate that our cancer classification framework holds great potential for promoting automatic analysis of whole esophageal slide images.
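The abstract does not name the self-supervised objective used for pretraining, so the sketch below uses a SimCLR-style NT-Xent contrastive loss purely for illustration.

```python
# A hedged sketch of contrastive self-supervised pretraining for a patch
# feature extractor: two augmentations of the same patch form a positive pair.
import torch
import torch.nn.functional as F

def nt_xent(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    # z1, z2: (batch, dim) embeddings of two augmented views of the same patches
    n = z1.shape[0]
    z = F.normalize(torch.cat([z1, z2]), dim=1)
    sim = z @ z.t() / tau
    mask = torch.eye(2 * n, dtype=torch.bool)
    sim = sim.masked_fill(mask, float("-inf"))        # exclude self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)              # pull positive pairs together

z1, z2 = torch.randn(32, 128), torch.randn(32, 128)
print(nt_xent(z1, z2).item())
```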


Subject(s)
Esophageal Neoplasms; Humans; Esophageal Neoplasms/diagnostic imaging; Electric Power Supplies; Image Processing, Computer-Assisted; Workload
19.
Comput Methods Programs Biomed ; 242: 107789, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37722310

ABSTRACT

BACKGROUND AND OBJECTIVES: The pathological diagnosis of renal cell carcinoma is crucial for treatment. Multi-instance learning is commonly used for whole-slide image classification of renal cell carcinoma, but it is mainly based on the assumption of independent and identical distribution, which is inconsistent with the need to consider the correlations between different instances during diagnosis. Furthermore, the high resource consumption of pathology images remains an urgent problem. We therefore propose a new multi-instance learning method to address these issues. METHODS: We propose a hybrid multi-instance learning model based on the Transformer and the Graph Attention Network, called TGMIL, to classify whole-slide images of renal cell carcinoma without pixel-level annotation or region-of-interest extraction. Our approach has three steps. First, we designed MMFP, a feature pyramid built from multiple low magnifications of the whole-slide image; it lets the model incorporate richer information while reducing memory consumption and training time compared with using the highest magnification. Second, TGMIL amalgamates the capabilities of the Transformer and the Graph Attention Network, adeptly addressing the loss of instance-level contextual and spatial information. Within the Graph Attention Network stream, a simple and efficient approach employing max pooling and mean pooling yields the graph adjacency matrix without extra memory consumption. Finally, the outputs of the two TGMIL streams are aggregated to classify renal cell carcinoma. RESULTS: On TCGA-RCC, a public validation set for renal cell carcinoma, the area under the receiver operating characteristic (ROC) curve (AUC) and accuracy of TGMIL were 0.98±0.0015 and 0.9191±0.0062, respectively. The model also showed remarkable proficiency on a private validation set of renal cell carcinoma pathology images, attaining an AUC of 0.9386±0.0162 and an accuracy of 0.9197±0.0124. Furthermore, on CAMELYON16, a public breast cancer whole-slide image test dataset, the model showed good classification performance with an accuracy of 0.8792. CONCLUSIONS: TGMIL models the diagnostic process of pathologists and shows good classification performance on multiple datasets, while the MMFP module efficiently diminishes resource requirements, offering a novel angle for computational pathology.


Subject(s)
Carcinoma, Renal Cell; Kidney Neoplasms; Humans; Carcinoma, Renal Cell/diagnostic imaging; Learning; Electric Power Supplies; ROC Curve; Kidney Neoplasms/diagnostic imaging
20.
Health Inf Sci Syst ; 11(1): 39, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37649855

ABSTRACT

Behavioral ratings based on clinical observation remain the gold standard for screening, diagnosing, and assessing outcomes in Tourette syndrome. Detecting tic symptoms plays an important role in patient treatment and evaluation, and accurate tic identification is key to clinical diagnosis and evaluation. In this study, we propose a tic action detection method based on facial video features for tic and control groups. After facial ROI extraction, a 3D convolutional neural network is used to learn video feature representations, and a multi-instance learning anomaly detection strategy is integrated to construct the tic action analysis and discrimination framework. Applied to our video dataset, the framework achieved an average tic detection accuracy of 91.02%, a precision of 77.07%, and a recall of 78.78%, and the post-processed tic score curve showed how a patient's tics change over time. The individual-level detection results indicate that our method can effectively detect tic actions in videos of Tourette patients without fine-grained labeling, which is significant for the long-term evaluation of patients with Tourette syndrome.
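The MIL anomaly-detection strategy can be sketched with the ranking objective common in video anomaly detection, pushing the highest clip score in a tic video above the highest score in a control video; treating this as the paper's exact loss is an assumption.

```python
# A hedged sketch of a MIL ranking loss: a video is a bag of clips, and only
# the video-level (bag) label is needed for training.
import torch

def mil_ranking_loss(tic_scores: torch.Tensor, control_scores: torch.Tensor,
                     margin: float = 1.0) -> torch.Tensor:
    # *_scores: (n_clips,) per-clip anomaly scores from the 3D CNN
    return torch.clamp(margin - tic_scores.max() + control_scores.max(), min=0)

loss = mil_ranking_loss(torch.rand(30, requires_grad=True),
                        torch.rand(30, requires_grad=True))
loss.backward()
# At inference, the per-clip scores form the tic score curve over time.
```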
