Results 1 - 20 of 50
1.
J Imaging Inform Med ; 2024 Apr 08.
Article in English | MEDLINE | ID: mdl-38587770

ABSTRACT

Uptake segmentation and classification on PSMA PET/CT are important for automating whole-body tumor burden determinations. We developed and evaluated an automated deep learning (DL)-based framework that segments and classifies uptake on PSMA PET/CT. We identified 193 [18F]DCFPyL PET/CT scans of patients with biochemically recurrent prostate cancer from two institutions, including 137 scans for training and internal testing and 56 scans from another institution for external testing. Two radiologists segmented and labelled foci as suspicious or non-suspicious for malignancy. A DL-based segmentation framework was developed with two independent CNNs. Anatomical prior guidance was applied to make the DL framework focus on PSMA-avid lesions. Segmentation performance was evaluated by Dice, IoU, precision, and recall. The classification model was constructed with a multi-modal decision fusion framework and evaluated by accuracy, AUC, F1 score, precision, and recall. Automatic segmentation of suspicious lesions improved under prior guidance, with mean Dice, IoU, precision, and recall of 0.700, 0.566, 0.809, and 0.660 on the internal test set and 0.680, 0.548, 0.749, and 0.740 on the external test set. Our multi-modal decision fusion framework outperformed single-modal and multi-modal CNNs, with accuracy, AUC, F1 score, precision, and recall of 0.764, 0.863, 0.844, 0.841, and 0.847 in distinguishing suspicious from non-suspicious foci on the internal test set and 0.796, 0.851, 0.865, 0.814, and 0.923 on the external test set. DL-based lesion segmentation on PSMA PET is facilitated through our anatomical prior guidance strategy. Our classification framework differentiates suspicious foci from those not suspicious for cancer with good accuracy.
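The segmentation metrics used above (Dice, IoU, precision, recall) all derive from the same overlap counts between predicted and reference masks. A minimal illustrative sketch; the function name and toy masks are ours, not the paper's code:

```python
import numpy as np

def overlap_metrics(pred, truth):
    """Dice, IoU, precision, and recall from two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()   # true positives
    fp = np.logical_and(pred, ~truth).sum()  # false positives
    fn = np.logical_and(~pred, truth).sum()  # false negatives
    return {
        "dice": 2 * tp / (2 * tp + fp + fn),
        "iou": tp / (tp + fp + fn),
        "precision": tp / (tp + fp),
        "recall": tp / (tp + fn),
    }

# Toy 2x3 masks: 2 true positives, 1 false positive, 1 false negative
pred = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1]])
m = overlap_metrics(pred, truth)
```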

2.
Article in English | MEDLINE | ID: mdl-38568767

ABSTRACT

Health disparities among marginalized populations with lower socioeconomic status significantly impact the fairness and effectiveness of healthcare delivery. The increasing integration of artificial intelligence (AI) into healthcare presents an opportunity to address these inequalities, provided that AI models are free from bias. This paper addresses the bias challenges posed by population disparities within healthcare systems, which arise in both the presentation of data and the development of algorithms and lead to inequitable medical implementation for conditions such as pulmonary embolism (PE) prognosis. In this study, we explore the diversity of biases in healthcare systems, highlighting the need for a holistic framework that reduces bias through complementary aggregation. By leveraging de-biasing deep survival prediction models, we propose a framework that disentangles identifiable information from images, text reports, and clinical variables to mitigate potential biases within multimodal datasets. Our study offers several advantages over traditional clinical-based survival prediction methods, including richer survival-related characteristics and bias-complementary predicted results. By improving the robustness of survival analysis through this framework, we aim to benefit patients, clinicians, and researchers by improving fairness and accuracy in healthcare AI systems. The code is available at https://github.com/zzs95/fairPE-SA.

3.
J Imaging Inform Med ; 2024 Mar 21.
Article in English | MEDLINE | ID: mdl-38514595

ABSTRACT

Deep learning models have demonstrated great potential in medical imaging but are limited by the expensive, large volume of annotations required. To address this, we compared different active learning strategies by training models on subsets of the most informative images from real-world clinical brain tumor segmentation datasets, and we propose a framework that minimizes the data needed while maintaining performance. A total of 638 multi-institutional brain tumor magnetic resonance imaging scans were used to train three-dimensional U-Net models and compare active learning strategies. Uncertainty estimation techniques, including Bayesian estimation with dropout, bootstrapping, and margin sampling, were compared to random query. Strategies to avoid annotating similar images were also considered. We determined the minimum data necessary to achieve performance equivalent to the model trained on the full dataset (α = 0.05). Bayesian approximation with dropout at training and testing achieved performance equivalent to that of the full-data model (target) with around 30% of the training data needed by random query to achieve target performance (p = 0.018). Annotation redundancy restriction techniques reduced the training data needed by random query to achieve target performance by 20%. We investigated various active learning strategies to minimize the annotation burden for three-dimensional brain tumor segmentation. Dropout uncertainty estimation achieved target performance with the least annotated data.
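The dropout-based uncertainty querying idea can be sketched as follows; the stochastic predictions are simulated with random numbers here, whereas a real pipeline would obtain them from repeated dropout-enabled forward passes of the segmentation network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated Monte Carlo dropout output: T stochastic forward passes giving a
# foreground probability for each of N unlabeled candidate images.
T, N = 20, 100
probs = rng.uniform(size=(T, N))

# Score each image by the variance of its predictions across passes:
# high variance = high model uncertainty = most informative to annotate.
uncertainty = probs.var(axis=0)

# Query the k most uncertain images for expert annotation.
k = 10
query_idx = np.argsort(uncertainty)[::-1][:k]
```

Redundancy-restriction strategies would additionally filter `query_idx` so that near-duplicate images are not annotated twice.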

4.
Breast Cancer ; 31(3): 529-535, 2024 May.
Article in English | MEDLINE | ID: mdl-38351366

ABSTRACT

This rapid communication highlights the correlations between digital pathology whole slide imaging (WSI) and radiomic magnetic resonance imaging (MRI) features in triple-negative breast cancer (TNBC) patients. The study included 12 patients who underwent both core needle biopsy and MRI to evaluate pathologic complete response (pCR). Higher collagenous values in the pathology data correlated with more homogeneity, whereas higher tumor expression values correlated with less homogeneity in the appearance of tumors on MRI by size zone non-uniformity normalized (SZNN). Higher myxoid values correlated with less similarity of gray-level non-uniformity (GLN) in tumor regions on MRI, while higher immune values in WSIs correlated with a more joint distribution of smaller-size zones by small area low gray-level emphasis (SALGE) in the tumor regions. Pathologic complete response was associated with collagen, tumor, and myxoid expression in WSI and with GLN and SZNN among the radiomic features. The correlations of WSI and radiomic features may further our understanding of the TNBC tumoral microenvironment (TME) and could be used in the future to better tailor the use of neoadjuvant chemotherapy (NAC). This communication focuses on the post-NAC MRI features correlated with pCR and their association with WSI features from core needle biopsies.


Subjects
Magnetic Resonance Imaging , Triple Negative Breast Neoplasms , Humans , Triple Negative Breast Neoplasms/diagnostic imaging , Triple Negative Breast Neoplasms/pathology , Female , Magnetic Resonance Imaging/methods , Biopsy, Large-Core Needle/methods , Middle Aged , Adult , Aged , Tumor Microenvironment , Neoadjuvant Therapy/methods , Pathologic Complete Response , Radiomics
5.
Cereb Cortex ; 34(2)2024 01 31.
Article in English | MEDLINE | ID: mdl-38300184

ABSTRACT

The T1-weighted image is a widely collected sequence in various neuroimaging datasets, but it is rarely used to construct individual-level brain networks. In this study, a novel individualized radiomics-based structural similarity network was proposed from T1 images. Specifically, voxel-based morphometry was used to obtain preprocessed gray matter images, radiomic features were then extracted from each region of interest in the Brainnetome atlas, and the individualized radiomics-based structural similarity network was finally built from the correlations of radiomic features between every pair of regions of interest. Next, the network characteristics of the individualized radiomics-based structural similarity network were assessed, including graph theory attributes, test-retest reliability, and individual identification ability (fingerprinting). Finally, two representative applications, mild cognitive impairment subtype discrimination and fluid intelligence prediction, were exemplified and compared with other networks on large open-source datasets. The results revealed that the individualized radiomics-based structural similarity network displays remarkable network characteristics and performs well in mild cognitive impairment subtype discrimination and fluid intelligence prediction. In summary, the individualized radiomics-based structural similarity network provides a distinctive, reliable, and informative individualized structural brain network, which can be combined with other networks such as resting-state functional connectivity for various phenotypic and clinical applications.
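The pairwise construction described above reduces to correlating radiomic feature vectors between ROIs; a toy numpy sketch with made-up dimensions (the Brainnetome atlas actually has 246 regions):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical radiomic feature matrix: one feature vector per atlas ROI
# (4 ROIs x 8 features here for brevity).
n_roi, n_feat = 4, 8
features = rng.normal(size=(n_roi, n_feat))

# Individualized structural similarity network: Pearson correlation of the
# radiomic feature vectors between every pair of ROIs, giving a symmetric
# ROI-by-ROI adjacency matrix for one subject.
network = np.corrcoef(features)
```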


Subjects
Brain , Radiomics , Reproducibility of Results , Brain/diagnostic imaging , Gray Matter/diagnostic imaging , Neuroimaging
6.
Med Phys ; 51(3): 2007-2019, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37643447

ABSTRACT

BACKGROUND: Diagnosis and treatment management for head and neck squamous cell carcinoma (HNSCC) is guided by routine diagnostic head and neck computed tomography (CT) scans to identify tumor and lymph node features. Extracapsular extension (ECE) is a strong predictor of survival outcomes in patients with HNSCC. It is essential to detect the occurrence of ECE as it changes staging and treatment planning for patients. Current clinical ECE detection relies on visual identification and pathologic confirmation conducted by clinicians. However, manual annotation of the lymph node region is a required data preprocessing step in most current machine learning-based ECE diagnosis studies. PURPOSE: In this paper, we propose a Gradient Mapping Guided Explainable Network (GMGENet) framework to perform ECE identification automatically without requiring annotated lymph node region information. METHODS: The gradient-weighted class activation mapping (Grad-CAM) technique is applied to guide the deep learning algorithm to focus on regions that are highly related to ECE. The proposed framework includes an extractor and a classifier. In a joint training process, informative volumes of interest (VOIs) are extracted by the extractor without labeled lymph node region information, and the classifier learns to classify the extracted VOIs as ECE positive or negative. RESULTS: The proposed models were trained and tested using cross-validation. GMGENet achieved a test accuracy and area under the curve (AUC) of 92.2% and 89.3%, respectively. GMGENetV2 achieved 90.3% accuracy and 91.7% AUC in the test. The results were compared with different existing models and further confirmed and explained by generating ECE probability heatmaps via the Grad-CAM technique. The presence or absence of ECE was analyzed and correlated with ground-truth histopathological findings.
CONCLUSIONS: The proposed deep network can learn meaningful patterns to identify ECE without lymph node contours being provided. The introduced ECE heatmaps will contribute to the clinical implementation of the proposed model and reveal unknown features to radiologists. The outcome of this study is expected to promote the implementation of explainable artificial intelligence-assisted ECE detection.
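Grad-CAM itself is a simple computation once a convolutional layer's activations and the gradients of the target class score are in hand; a framework-free sketch (array shapes and values are illustrative, not taken from GMGENet):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heatmap from a conv layer's activations (K, H, W) and the
    gradients of the target class score w.r.t. those activations."""
    # Channel weights: global-average-pool the gradients over space.
    weights = gradients.mean(axis=(1, 2))                       # shape (K,)
    # Weighted sum of activation maps, then ReLU.
    cam = np.maximum(np.tensordot(weights, activations, axes=1), 0.0)
    # Normalize to [0, 1] for overlay on the input image.
    if cam.max() > 0:
        cam /= cam.max()
    return cam

# Two 3x3 activation maps of ones, with constant gradients of 2.0 and 1.0
acts = np.ones((2, 3, 3))
grads = np.stack([np.full((3, 3), 2.0), np.ones((3, 3))])
heatmap = grad_cam(acts, grads)
```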


Subjects
Extranodal Extension , Head and Neck Neoplasms , Humans , Squamous Cell Carcinoma of Head and Neck , Extranodal Extension/pathology , Artificial Intelligence , Head and Neck Neoplasms/diagnostic imaging , Head and Neck Neoplasms/pathology , Lymph Nodes/diagnostic imaging , Lymph Nodes/pathology , Tomography, X-Ray Computed , Neural Networks, Computer
7.
IEEE J Biomed Health Inform ; 28(2): 929-940, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37930923

ABSTRACT

Semi-supervised learning methods have been explored to mitigate the scarcity of pixel-level annotation in medical image segmentation tasks. Consistency learning, the mainstream approach in semi-supervised training, suffers from low efficiency and poor stability due to inaccurate supervision and insufficient feature representation. Prototypical learning is one potential and plausible way to handle this problem owing to the feature aggregation inherent in prototype calculation. However, previous works have not fully studied how to enhance supervision quality and feature representation using prototypical learning under the semi-supervised condition. To address this issue, we propose an implicit-explicit alignment (IEPAlign) framework to foster semi-supervised consistency training. Specifically, we develop an implicit prototype alignment method based on dynamic multiple prototypes computed on the fly. We then design a multiple-prediction voting strategy for reliable unlabeled mask generation and prototype calculation to improve supervision quality. Afterward, to boost the intra-class consistency and inter-class separability of pixel-wise features in semi-supervised segmentation, we construct a region-aware hierarchical prototype alignment, which transmits information from labeled to unlabeled data and from certain to uncertain regions. We evaluate IEPAlign on three medical image segmentation tasks. Extensive experimental results demonstrate that the proposed method outperforms other popular semi-supervised segmentation methods and achieves performance comparable with fully-supervised training methods.


Subjects
Supervised Machine Learning , Voting , Image Processing, Computer-Assisted
8.
Radiology ; 309(2): e222891, 2023 11.
Article in English | MEDLINE | ID: mdl-37934098

ABSTRACT

Interventional oncology is a rapidly growing field with advances in minimally invasive image-guided local-regional treatments for hepatocellular carcinoma (HCC), including transarterial chemoembolization, transarterial radioembolization, and thermal ablation. However, current standardized clinical staging systems for HCC are limited in their ability to optimize patient selection for treatment as they rely primarily on serum markers and radiologist-defined imaging features. Given the variation in treatment responses, an updated scoring system that includes multidimensional aspects of the disease, including quantitative imaging features, serum markers, and functional biomarkers, is needed to optimally triage patients. With the vast amounts of numerical medical record data and imaging features, researchers have turned to image-based methods, such as radiomics and artificial intelligence (AI), to automatically extract and process multidimensional data from images. The synthesis of these data can provide clinically relevant results to guide personalized treatment plans and optimize resource utilization. Machine learning (ML) is a branch of AI in which a model learns from training data and makes effective predictions by teaching itself. This review article outlines the basics of ML and provides a comprehensive overview of its potential value in the prediction of treatment response in patients with HCC after minimally invasive image-guided therapy.


Subjects
Carcinoma, Hepatocellular , Chemoembolization, Therapeutic , Liver Neoplasms , Humans , Artificial Intelligence , Machine Learning , Biomarkers
9.
Article in English | MEDLINE | ID: mdl-37790880

ABSTRACT

We develop deep clustering survival machines to simultaneously predict survival information and characterize data heterogeneity that is not typically modeled by conventional survival analysis methods. By generatively modeling the timing information of survival data with a mixture of parametric distributions, referred to as expert distributions, our method discriminatively learns instance-specific weights of the expert distributions based on each instance's features, so that each instance's survival information can be characterized by a weighted combination of the learned expert distributions. Extensive experiments on both real and synthetic datasets demonstrate that our method obtains promising clustering results and competitive time-to-event prediction performance.
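The weighted-combination idea can be made concrete with a tiny numpy sketch: each "expert" is a parametric Weibull distribution, and an instance's survival curve is a softmax-weighted mixture of the experts' survival functions. All parameter values below are made up; in the actual method the weights would be produced by a network from the instance's features:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def mixture_survival(t, shapes, scales, logits):
    """S(t) as a weighted mixture of Weibull expert survival functions.
    logits stand in for a feature-conditioned network output."""
    w = softmax(np.asarray(logits, dtype=float))
    shapes = np.asarray(shapes, dtype=float)
    scales = np.asarray(scales, dtype=float)
    s = np.exp(-np.power(t / scales, shapes))   # Weibull: exp(-(t/scale)^shape)
    return float(w @ s)

# Toy two-expert mixture evaluated at t = 0 and t = 5
s0 = mixture_survival(0.0, shapes=[1.5, 0.8], scales=[10.0, 3.0], logits=[0.2, -0.1])
s5 = mixture_survival(5.0, shapes=[1.5, 0.8], scales=[10.0, 3.0], logits=[0.2, -0.1])
```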

10.
Eur J Radiol ; 168: 111136, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37832194

ABSTRACT

PURPOSE: This study aimed to develop and evaluate a deep learning-based radiomics model to predict the histological risk categorization of thymic epithelial tumors (TETs), which can be highly informative for patient treatment planning and prognostic assessment. METHOD: A total of 681 patients with TETs from three independent hospitals were included and separated into a derivation cohort and an external test cohort. Handcrafted and deep learning features were extracted from preoperative contrast-enhanced CT images and selected to build three radiomics signatures (radiomics signature [Rad_Sig], deep learning signature [DL_Sig], and deep learning radiomics signature [DLR_Sig]) to predict the risk categorization of TETs. A deep learning-based radiomic nomogram (DLRN) was then constructed to visualize the classification evaluation. The performance of the predictive models was compared using receiver operating characteristic and decision curve analysis (DCA). RESULTS: Among the three radiomics signatures, DLR_Sig demonstrated optimum performance, with an AUC of 0.883 for the derivation cohort and 0.749 for the external test cohort. Combining DLR_Sig with age and gender, the DLRN exhibited optimum performance among all radiomics models, with an AUC of 0.965, accuracy of 0.911, sensitivity of 0.921, and specificity of 0.902 in the derivation cohort, and an AUC of 0.786, accuracy of 0.774, sensitivity of 0.778, and specificity of 0.771 in the external test cohort. The DCA showed that the DLRN had greater clinical benefit than the other radiomics signatures. CONCLUSIONS: Our study developed and validated a DLRN to accurately predict the risk categorization of TETs, which has the potential to facilitate individualized treatment and improve patient prognosis evaluation.


Subjects
Deep Learning , Neoplasms, Glandular and Epithelial , Thymus Neoplasms , Humans , Nomograms , Neoplasms, Glandular and Epithelial/diagnostic imaging , Thymus Neoplasms/diagnostic imaging , Retrospective Studies
11.
Neural Netw ; 166: 487-500, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37574622

ABSTRACT

Reconstructing visual experience from brain responses measured by functional magnetic resonance imaging (fMRI) is a challenging yet important research topic in brain decoding, especially because visually similar stimuli, such as faces, have proved more difficult to decode. Although face attributes are known to be key to face recognition, most existing methods generally ignore how to decode facial attributes more precisely in perceived face reconstruction, which often leads to indistinguishable reconstructed faces. To solve this problem, we propose a novel neural decoding framework called VSPnet (voxel2style2pixel) that establishes hierarchical encoding and decoding networks with disentangled latent representations as media, so as to recover visual stimuli more elaborately. We also design a hierarchical visual encoder (named HVE) to pre-extract features containing both high-level semantic knowledge and low-level visual details from stimuli. The proposed VSPnet consists of two networks: a multi-branch cognitive encoder and a style-based image generator. The encoder network is constructed from multiple linear regression branches that map brain signals to the latent space provided by the pre-extracted visual features and obtain representations containing hierarchical information consistent with the corresponding stimuli. The generator network, inspired by StyleGAN, untangles the complexity of fMRI representations and generates images. The HVE network is composed of a standard feature pyramid over a ResNet backbone. Extensive experimental results on the latest public datasets demonstrate that the reconstruction accuracy of our proposed method outperforms the state-of-the-art approaches and that the identifiability of different reconstructed faces is greatly improved. In particular, we achieve feature editing for several facial attributes in the fMRI domain based on the multiview (i.e., visual stimuli and evoked fMRI) latent representations.


Subjects
Brain , Recognition, Psychology , Brain/diagnostic imaging , Brain/physiology , Brain Mapping/methods , Magnetic Resonance Imaging/methods , Multivariate Analysis
12.
Front Radiol ; 3: 928639, 2023.
Article in English | MEDLINE | ID: mdl-37492388

ABSTRACT

Breast cancer is a leading cause of death for women globally. A characteristic of breast cancer is its ability to metastasize to distant regions of the body, which it achieves by first spreading to the axillary lymph nodes. Traditional diagnosis of axillary lymph node metastasis involves an invasive technique that leads to potential clinical complications for breast cancer patients. The rise of artificial intelligence in the medical imaging field has led to the creation of innovative deep learning models that can predict the metastatic status of axillary lymph nodes noninvasively, sparing patients unnecessary biopsies and dissections. In this review, we discuss the success of various deep learning artificial intelligence models across multiple imaging modalities in predicting axillary lymph node metastasis.

13.
J Digit Imaging ; 36(5): 2075-2087, 2023 10.
Article in English | MEDLINE | ID: mdl-37340197

ABSTRACT

Deep convolutional neural networks (DCNNs) have shown promise in brain tumor segmentation from multi-modal MRI sequences, accommodating heterogeneity in tumor shape and appearance. The fusion of multiple MRI sequences allows networks to explore complementary tumor information for segmentation. However, developing a network that maintains clinical relevance when certain MRI sequence(s) are unavailable or unusual poses a significant challenge. While one solution is to train multiple models with different MRI sequence combinations, it is impractical to train a model for every possible sequence combination. In this paper, we propose a DCNN-based brain tumor segmentation framework incorporating a novel sequence dropout technique in which networks are trained to be robust to missing MRI sequences while employing all other available sequences. Experiments were performed on the RSNA-ASNR-MICCAI BraTS 2021 Challenge dataset. When all MRI sequences were available, there were no significant differences in the performance of the model with and without dropout for enhancing tumor (ET), tumor core (TC), and whole tumor (WT) (p-values 1.000, 1.000, and 0.799, respectively), demonstrating that the addition of dropout improves robustness without hindering overall performance. When key sequences were unavailable, the network with sequence dropout performed significantly better. For example, when tested on only the T1, T2, and FLAIR sequences together, DSC for ET, TC, and WT increased from 0.143 to 0.486, 0.431 to 0.680, and 0.854 to 0.901, respectively. Sequence dropout represents a relatively simple yet effective approach for brain tumor segmentation with missing MRI sequences.
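The sequence dropout idea amounts to zeroing whole input channels at training time; a schematic numpy version (the shapes and the keep-at-least-one rule are our illustration, not the paper's exact implementation):

```python
import numpy as np

rng = np.random.default_rng(7)

def sequence_dropout(volumes, p=0.5):
    """Randomly zero out whole MRI sequences (leading channel axis) so the
    network learns to segment with missing sequences. Never drops all."""
    n_seq = volumes.shape[0]
    keep = rng.random(n_seq) >= p
    if not keep.any():
        keep[rng.integers(n_seq)] = True    # guarantee at least one input
    return volumes * keep[:, None, None, None]

# 4 sequences (e.g. T1, T1ce, T2, FLAIR), tiny 8^3 volumes for illustration
x = rng.normal(size=(4, 8, 8, 8))
x_dropped = sequence_dropout(x)
```

At inference, a missing sequence is simply fed as a zero channel, matching what the network saw during training.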


Subjects
Brain Neoplasms , Image Processing, Computer-Assisted , Humans , Image Processing, Computer-Assisted/methods , Brain Neoplasms/diagnostic imaging , Brain Neoplasms/pathology , Neural Networks, Computer , Magnetic Resonance Imaging/methods
14.
IEEE J Biomed Health Inform ; 27(8): 4052-4061, 2023 08.
Article in English | MEDLINE | ID: mdl-37204947

ABSTRACT

Segmentation of the liver from CT scans is essential in computer-aided liver disease diagnosis and treatment. However, 2D CNNs ignore the 3D context, and 3D CNNs suffer from numerous learnable parameters and high computational cost. To overcome these limitations, we propose an Attentive Context-Enhanced Network (AC-E Network) consisting of 1) an attentive context encoding module (ACEM) that can be integrated into the 2D backbone to extract 3D context without a sharp increase in the number of learnable parameters; and 2) a dual segmentation branch with a complementary loss that makes the network attend to both the liver region and its boundary, yielding a segmented liver surface with high accuracy. Extensive experiments on the LiTS and 3D-IRCADb datasets demonstrate that our method outperforms existing approaches and is competitive with the state-of-the-art 2D-3D hybrid method in balancing segmentation precision and the number of model parameters.


Subjects
Abdomen , Liver Neoplasms , Humans , Tomography, X-Ray Computed/methods , Diagnosis, Computer-Assisted , Image Processing, Computer-Assisted/methods
15.
Phys Med Biol ; 68(9)2023 04 25.
Article in English | MEDLINE | ID: mdl-37019119

ABSTRACT

Objective. Radiation therapy for head and neck (H&N) cancer relies on accurate segmentation of the primary tumor. A robust, accurate, and automated gross tumor volume segmentation method is warranted for H&N cancer therapeutic management. The purpose of this study is to develop a novel deep learning segmentation model for H&N cancer based on independent and combined CT and FDG-PET modalities. Approach. In this study, we developed a robust deep learning-based model leveraging information from both CT and PET. We implemented a 3D U-Net architecture with 5 levels of encoding and decoding, computing model loss through deep supervision. We used a channel dropout technique to emulate different combinations of input modalities. This technique prevents potential performance issues when only one modality is available, increasing model robustness. We implemented ensemble modeling by combining two types of convolutions with differing receptive fields, conventional and dilated, to improve capture of both fine details and global information. Main Results. Our proposed methods yielded promising results, with a Dice similarity coefficient (DSC) of 0.802 when deployed on combined CT and PET, a DSC of 0.610 when deployed on CT, and a DSC of 0.750 when deployed on PET. Significance. Application of a channel dropout method allowed a single model to achieve high performance when deployed on either single-modality images (CT or PET) or combined-modality images (CT and PET). The presented segmentation techniques are clinically relevant to applications where images from a certain modality might not always be available.


Subjects
Deep Learning , Head and Neck Neoplasms , Humans , Positron Emission Tomography Computed Tomography/methods , Tomography, X-Ray Computed , Neural Networks, Computer , Image Processing, Computer-Assisted/methods
16.
Med Phys ; 50(8): 4993-5001, 2023 Aug.
Article in English | MEDLINE | ID: mdl-36780152

ABSTRACT

BACKGROUND: Hematologic toxicity (HT) is a common adverse tissue reaction during radiotherapy for rectal cancer patients, which may lead to negative effects such as reduced therapeutic effect, prolonged treatment period, and increased treatment cost. Therefore, predicting the occurrence of HT before radiotherapy is necessary but still challenging. PURPOSE: This study proposes a hybrid machine learning model to predict symptomatic radiation HT in rectal cancer patients using combined demographic, clinical, dosimetric, and Radiomics features, and ascertains the most effective regions of interest (ROIs) in CT images and predictive feature sets. METHODS: A discovery dataset of 240 rectal cancer patients, including 145 patients with HT symptoms, and a validation dataset of 96 patients (63 with HT) with a different dose prescription were retrospectively enrolled. Eight ROIs were contoured on patient CT images to derive Radiomics features, which were then, respectively, combined with the demographic, clinical, and dosimetric features to classify patients with HT symptoms. Moreover, survival analysis was performed on patients at risk of HT in order to understand HT progression. RESULTS: The classification models in the bone marrow and femoral head ROIs exhibited relatively high accuracies (accuracy = 0.765 and 0.725) in the discovery dataset as well as comparable performance in the validation dataset (accuracy = 0.758 and 0.714). When the two ROIs were combined, model performance was best in both the discovery and validation datasets (accuracy = 0.843 and 0.802). In the survival analysis, only the bone marrow ROI achieved statistically significant performance in assessing HT risk (C-index = 0.658, P = 0.03). Most of the discriminative features were Radiomics features; only gender and the mean dose in Irradvolume were otherwise involved in HT.
CONCLUSION: The results reflect that the Radiomics features of bone marrow are significantly correlated with HT occurrence and progression in rectal cancer. The proposed Radiomics-based model may help the early detection of radiotherapy-induced HT in rectal cancer patients and thus improve clinical outcomes in the future.
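The C-index reported for the bone marrow ROI measures how often the predicted risk ordering matches the observed event ordering; a plain-Python sketch of Harrell's C-index (the toy inputs are illustrative):

```python
import numpy as np

def concordance_index(times, events, risks):
    """Harrell's C-index: fraction of comparable pairs in which the
    higher-risk subject experiences the event earlier. Ties count half.
    times: follow-up times; events: 1 if event observed, 0 if censored;
    risks: predicted risk scores (higher = worse prognosis)."""
    times, events, risks = map(np.asarray, (times, events, risks))
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        if not events[i]:
            continue                  # censored subjects cannot anchor a pair
        for j in range(n):
            if times[j] > times[i]:   # j outlived i's event -> comparable pair
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable

# Perfectly ordered toy cohort: highest risk fails first
c = concordance_index(times=[2, 4, 6], events=[1, 1, 1], risks=[0.9, 0.5, 0.1])
```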


Subjects
Radiation Injuries , Rectal Neoplasms , Humans , Retrospective Studies , Early Detection of Cancer , Rectum , Rectal Neoplasms/diagnostic imaging , Rectal Neoplasms/radiotherapy , Radiation Injuries/diagnostic imaging , Radiation Injuries/etiology
17.
IEEE Trans Image Process ; 32: 1897-1910, 2023.
Article in English | MEDLINE | ID: mdl-36417725

ABSTRACT

Camouflaged object detection, which aims to detect/segment objects that blend in with their surroundings, remains challenging for deep models due to the intrinsic similarities between foreground objects and background surroundings. Ideally, an effective model should be capable of finding valuable clues in the given scene and integrating them into a joint learning framework to co-enhance the representation. Inspired by this observation, we propose a novel Mutual Graph Learning (MGL) model by shifting the conventional perspective of mutual learning from regular grids to the graph domain. Specifically, MGL decouples an image into two task-specific feature maps, one for finding the rough location of the target and the other for capturing its accurate boundary details, and fully exploits their mutual benefits by recurrently reasoning about their high-order relations through graphs. Note that our method differs from most mutual learning models, which handle all between-task interactions with a shared function; to increase information interactions, MGL is built with typed functions for dealing with different complementary relations. To overcome the accuracy loss caused by interpolation to higher resolution and the computational redundancy resulting from recurrent learning, the S-MGL is equipped with a multi-source attention contextual recovery module, called R-MGL_v2, which uses the pixel feature information iteratively. Experiments on challenging datasets, including CHAMELEON, CAMO, COD10K, and NC4K, demonstrate the effectiveness of our MGL, with superior performance to existing state-of-the-art methods. The code can be found at https://github.com/fanyang587/MGL.

18.
IEEE Trans Neural Netw Learn Syst ; 34(5): 2633-2646, 2023 May.
Article in English | MEDLINE | ID: mdl-34520365

ABSTRACT

Scene parsing, or semantic segmentation, aims at labeling all pixels in an image with the predefined categories of things and stuff. Learning a robust representation for each pixel is crucial for this task. Existing state-of-the-art (SOTA) algorithms employ deep neural networks to learn (discover) the representations needed for parsing from raw data. Nevertheless, these networks discover desired features or representations only from the given image (content), ignoring more generic knowledge contained in the dataset. To overcome this deficiency, we make the first attempt to explore the meaningful supportive knowledge, including general visual concepts (i.e., the generic representations for objects and stuff) and their relations from the whole dataset to enhance the underlying representations of a specific scene for better scene parsing. Specifically, we propose a novel supportive knowledge mining module (SKMM) and a knowledge augmentation operator (KAO), which can be easily plugged into modern scene parsing networks. By taking image-specific content and dataset-level supportive knowledge into full consideration, the resulting model, called knowledge augmented neural network (KANN), can better understand the given scene and provide greater representational power. Experiments are conducted on three challenging scene parsing and semantic segmentation datasets: Cityscapes, Pascal-Context, and ADE20K. The results show that our KANN is effective and achieves better results than all existing SOTA methods.

19.
J Stroke Cerebrovasc Dis; 31(11): 106753, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36115105

ABSTRACT

OBJECTIVES: In this study, we developed a deep learning pipeline that detects large vessel occlusion (LVO) and predicts functional outcome from computed tomography angiography (CTA) images to improve the management of LVO patients. METHODS: A series identifier picked out 8650 LVO-protocoled studies from 2015 to 2019 at Rhode Island Hospital with an identified thin axial series, which served as the data pool. Data were annotated into two classes: 1021 LVOs and 7629 normal. The Inception-V1 I3D architecture was applied for LVO detection. For outcome prediction, 323 patients undergoing thrombectomy were selected. A 3D convolutional neural network (CNN) was used for outcome prediction (30-day mRS), with CTA volumes and embedded pre-treatment variables as inputs. RESULTS: For the LVO-detection model, CTAs from 8650 patients (median age 68 years, interquartile range (IQR): 58-81; 3934 females) were analyzed. The cross-validated AUC for LVO vs. not was 0.74 (95% CI: 0.72-0.75). For the mRS classification model, CTAs from 323 patients (median age 75 years, IQR: 63-84; 164 females) were analyzed. The algorithm achieved a test AUC of 0.82 (95% CI: 0.79-0.84), sensitivity of 89%, and specificity of 66%. The two models were then integrated with hospital infrastructure, where CTA was collected in real time and processed by the model. If an LVO was detected, interventionists were notified and provided with the predicted clinical outcome. CONCLUSION: 3D CNNs based on CTA were effective in detecting LVOs and predicting short-term prognosis after LVO mechanical thrombectomy. An end-to-end AI platform allows users to receive immediate prognosis predictions and facilitates the clinical workflow.


Subjects
Brain Ischemia, Stroke, Female, Humans, Aged, Artificial Intelligence, Thrombectomy/adverse effects, Computed Tomography Angiography/methods, Middle Cerebral Artery, Retrospective Studies
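The outcome model above fuses CTA imaging features with embedded pre-treatment variables. A minimal late-fusion sketch follows; the feature dimension, the listed clinical variables, and the random linear head are all hypothetical stand-ins for the trained 3D CNN and classifier:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(2)
img_feat = rng.standard_normal(128)    # pooled 3D-CNN embedding of the CTA volume
clinical = np.array([68.0, 1.0, 0.0])  # hypothetical pre-treatment variables
clinical_norm = (clinical - clinical.mean()) / (clinical.std() + 1e-8)

# Late fusion: concatenate embedded clinical variables with imaging features
fused = np.concatenate([img_feat, clinical_norm])

# Linear head standing in for the trained classifier: P(favorable 30-day mRS)
w = 0.05 * rng.standard_normal(fused.shape[0])
p_favorable = sigmoid(fused @ w)
```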
20.
EBioMedicine; 82: 104127, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35810561

ABSTRACT

BACKGROUND: Pre-treatment FDG-PET/CT scans were analyzed with machine learning to predict progression of lung malignancies and overall survival (OS). METHODS: A retrospective review across three institutions identified patients with a pre-procedure FDG-PET/CT and an associated malignancy diagnosis. Lesions were segmented both manually and automatically, and convolutional neural networks (CNNs) were trained using FDG-PET/CT inputs to predict malignancy progression. Performance was evaluated using area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, and specificity. Image features were extracted from the CNNs and by radiomics feature extraction, and random survival forests (RSFs) were constructed to predict OS. The concordance index (C-index) and integrated Brier score (IBS) were used to evaluate OS prediction. FINDINGS: 1168 nodules (n=965 patients) were identified; 792 nodules had progression and 376 were progression-free. The most common malignancies were adenocarcinoma (n=740) and squamous cell carcinoma (n=179). For progression risk, the PET+CT ensemble model with manual segmentation (accuracy=0.790, AUC=0.876) performed similarly to the CT-only model (accuracy=0.723, AUC=0.888) and better than the PET-only model (accuracy=0.664, AUC=0.669). For OS prediction with deep learning features, the PET+CT+clinical RSF ensemble model (C-index=0.737) performed similarly to the CT-only model (C-index=0.730) and better than the PET-only (C-index=0.595) and clinical-only (C-index=0.595) models. RSF models constructed with radiomics features had performance comparable to those with CNN features. INTERPRETATION: CNNs trained using pre-treatment FDG-PET/CT performed well in predicting lung malignancy progression and OS. OS prediction performance with CNN features was comparable to that of a radiomics approach. These prognostic models could inform treatment options and improve patient care. FUNDING: NIH NHLBI training grant (5T35HL094308-12, John Sollee).


Subjects
Lung Neoplasms, Positron Emission Tomography Computed Tomography, Fluorodeoxyglucose F18, Humans, Lung Neoplasms/diagnostic imaging, Lung Neoplasms/therapy, Machine Learning, Positron Emission Tomography Computed Tomography/methods, Positron-Emission Tomography
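The concordance index used to evaluate the OS models above can be computed as follows. This is a minimal sketch with toy data (real evaluations typically use library implementations such as those in lifelines or scikit-survival, which also handle ties and censoring more carefully):

```python
import numpy as np

def concordance_index(times, events, risks):
    """C-index: fraction of comparable patient pairs whose predicted risks
    are ordered consistently with their survival times. A pair (i, j) is
    comparable when the earlier time belongs to an observed (uncensored)
    event."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i] == 1:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0   # higher risk died earlier: correct
                elif risks[i] == risks[j]:
                    concordant += 0.5   # tied prediction: half credit
    return concordant / comparable

times = np.array([5.0, 10.0, 12.0, 20.0])   # follow-up, months
events = np.array([1, 1, 0, 1])             # 1 = death observed, 0 = censored
risks = np.array([0.9, 0.6, 0.5, 0.1])      # model-predicted risk scores
print(concordance_index(times, events, risks))  # 1.0: pairs perfectly ordered
```

A C-index of 0.5 corresponds to random ordering, so the paper's 0.737 for the PET+CT+clinical ensemble indicates a substantially better-than-chance risk ranking.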