1.
Res Sq ; 2023 Sep 11.
Article in English | MEDLINE | ID: mdl-37790451

ABSTRACT

We report domain knowledge-based rules for assigning voxels in brain multiparametric MRI (mpMRI) to distinct tissue types based on their appearance on Apparent Diffusion Coefficient of water (ADC) maps, unenhanced and contrast-enhanced T1-weighted, T2-weighted, and Fluid-Attenuated Inversion Recovery (FLAIR) images. The development dataset comprised mpMRI of 18 participants with preoperative high-grade glioma (HGG), recurrent HGG (rHGG), and brain metastases. External validation was performed on mpMRI of 235 HGG participants in the BraTS 2020 training dataset. The treatment dataset comprised serial mpMRI of 32 participants (231 scan dates in total) in a clinical trial of immunoradiotherapy in rHGG (NCT02313272). Pixel intensity-based rules for segmenting contrast-enhancing tumor (CE), hemorrhage, Fluid, non-enhancing tumor (Edema1), and leukoaraiosis (Edema2) were identified on calibrated, co-registered mpMRI in the development dataset. On validation, rule-based CE and High FLAIR (Edema1 + Edema2) volumes were significantly correlated with ground-truth volumes of enhancing tumor (R = 0.85; p < 0.001) and peritumoral edema (R = 0.87; p < 0.001), respectively. In the treatment dataset, a model combining time-on-treatment with rule-based volumes of CE and intratumoral Fluid was 82.5% accurate in predicting progression within 30 days of the scan date. An explainable decision tree applied to brain mpMRI yields validated, consistent, intratumoral tissue-type volumes suitable for quantitative response assessment in clinical trials of rHGG.
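As a reading aid, here is a minimal Python sketch of what an explainable, intensity-rule decision tree over calibrated, co-registered mpMRI channels could look like. The thresholds, rule ordering, and helper name below are illustrative placeholders, not the paper's published rules, which are not given in the abstract.

```python
import numpy as np

# Minimal sketch of a rule-based voxel classifier in the spirit of the abstract.
# The cutoffs below are placeholders on calibrated, co-registered intensities;
# the actual rules come from the paper and are not reported here.

def label_voxels(adc, t1, t1ce, t2, flair,
                 enh_thresh=1.2, adc_fluid=2.0e-3, flair_high=1.3):
    """Assign each voxel to a tissue type using simple intensity rules.

    All inputs are co-registered 3D arrays; t1ce/t1 are assumed calibrated so
    that their ratio reflects contrast enhancement.
    """
    labels = np.zeros(adc.shape, dtype=np.uint8)     # 0 = background / normal
    enhancement = t1ce / np.maximum(t1, 1e-6)        # contrast uptake ratio

    fluid = adc > adc_fluid                          # very high diffusivity -> Fluid
    ce = (enhancement > enh_thresh) & ~fluid         # contrast-enhancing tumor
    hemorrhage = (t1 > np.percentile(t1, 99)) & ~ce  # intrinsically T1-bright, no uptake
    high_flair = (flair > flair_high) & ~ce & ~fluid & ~hemorrhage

    labels[fluid] = 1        # Fluid
    labels[ce] = 2           # CE (contrast-enhancing tumor)
    labels[hemorrhage] = 3   # hemorrhage
    labels[high_flair] = 4   # High FLAIR (Edema1 + Edema2); separating Edema1
                             # from Edema2 would need further ADC/T2 rules
    return labels
```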

2.
Sci Rep ; 11(1): 3785, 2021 02 15.
Article in English | MEDLINE | ID: mdl-33589715

ABSTRACT

Sarcomatoid differentiation in renal cell carcinoma (sRCC) is associated with a poor prognosis, necessitating more aggressive management than RCC without sarcomatoid components (nsRCC). Since suspected RCC tumors are not routinely biopsied for histologic evaluation, there is a clinical need for a non-invasive method to detect sarcomatoid differentiation preoperatively. We used an unsupervised self-organizing map (SOM) and a supervised Learning Vector Quantizer (LVQ) to classify RCC tumors on T2-weighted, non-contrast T1-weighted fat-saturated, contrast-enhanced arterial-phase T1-weighted fat-saturated, and contrast-enhanced venous-phase T1-weighted fat-saturated MRI images. The SOM was trained on 8 nsRCC and 8 sRCC tumors and used to compute an Activation Map for each training, validation (3 nsRCC and 3 sRCC), and test (5 nsRCC and 5 sRCC) tumor. The LVQ classifier was trained and optimized on Activation Maps from the 22 training and validation tumors and tested on Activation Maps of the 10 unseen test tumors. In this preliminary study, the SOM-LVQ model achieved a hold-out test accuracy of 70% in identifying sarcomatoid differentiation in RCC on standard multiparametric MRI (mpMRI) images. We have demonstrated a combined SOM-LVQ machine learning approach that is suitable for analysis of limited mpMRI datasets for the task of differential diagnosis.
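A hedged sketch of the SOM-to-Activation-Map-to-LVQ pipeline follows. The abstract does not name an implementation, so the `minisom` package is assumed for the SOM, the Activation Map is taken to be per-node hit counts of a tumor's voxel feature vectors, and the LVQ is a hand-rolled LVQ1 with one prototype per class.

```python
import numpy as np
from minisom import MiniSom  # assumption: the abstract names no SOM library

# Each tumor is a set of 4-dimensional voxel feature vectors
# (T2, non-contrast T1-fs, arterial T1-fs, venous T1-fs).

def activation_map(som, voxels, grid=(8, 8)):
    """Count how many of a tumor's voxels map to each SOM node (flattened)."""
    hits = np.zeros(grid)
    for v in voxels:
        i, j = som.winner(v)
        hits[i, j] += 1
    return (hits / hits.sum()).ravel()   # normalize so tumor size does not dominate

def train_lvq1(maps, labels, n_epochs=50, lr=0.05, seed=0):
    """Minimal LVQ1: prototypes pulled toward same-class maps, pushed from others."""
    rng = np.random.default_rng(seed)
    classes = np.unique(labels)
    protos = np.array([maps[labels == c][0] for c in classes], dtype=float)
    for _ in range(n_epochs):
        for idx in rng.permutation(len(maps)):
            x, y = maps[idx], labels[idx]
            k = np.argmin(np.linalg.norm(protos - x, axis=1))
            sign = 1.0 if classes[k] == y else -1.0
            protos[k] += sign * lr * (x - protos[k])
    return classes, protos

def predict_lvq(classes, protos, x):
    return classes[np.argmin(np.linalg.norm(protos - x, axis=1))]

# Usage sketch (hypothetical data and shapes):
# som = MiniSom(8, 8, 4, sigma=1.5, learning_rate=0.5, random_seed=0)
# som.train_random(pooled_training_voxels, 5000)
# train_maps = np.array([activation_map(som, v) for v in training_tumor_voxels])
# classes, protos = train_lvq1(train_maps, np.array(training_labels))
# pred = predict_lvq(classes, protos, activation_map(som, test_tumor_voxels))
```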


Subjects
Carcinoma, Renal Cell/diagnosis, Cell Differentiation/genetics, Diagnosis, Differential, Kidney Neoplasms/diagnosis, Algorithms, Carcinoma, Renal Cell/diagnostic imaging, Carcinoma, Renal Cell/pathology, Female, Humans, Kidney Neoplasms/diagnostic imaging, Kidney Neoplasms/pathology, Machine Learning, Male, Multiparametric Magnetic Resonance Imaging
3.
Cancer Med ; 7(12): 6340-6356, 2018 12.
Article in English | MEDLINE | ID: mdl-30507033

ABSTRACT

BACKGROUND: Current lung cancer screening guidelines raised the threshold for a positive scan to a longest diameter of 6 mm. We extracted radiomic features from baseline and follow-up screens and performed size-specific analyses to predict lung cancer incidence using three nodule size classes (<6 mm [small], 6-16 mm [intermediate], and ≥16 mm [large]). METHODS: We extracted 219 features from baseline (T0) nodules and 219 delta features, defined as the change from T0 to the first follow-up (T1). Nodules were identified for 160 incidence cases diagnosed with lung cancer at T1 or the second follow-up screen (T2) and for 307 nodule-positive controls with three consecutive positive screens that were not diagnosed as lung cancer. Cases and controls were split into training and test cohorts, and classifier models were used to identify the most predictive features. RESULTS: The final models showed modest improvements for combined baseline and delta features compared with baseline features alone. The AUROCs for small and intermediate nodules were 0.83 (95% CI 0.76-0.90) and 0.76 (95% CI 0.71-0.81), respectively, for baseline-only radiomic features, and 0.84 (95% CI 0.77-0.90) and 0.84 (95% CI 0.80-0.88), respectively, for baseline plus delta features. When intermediate and large nodules were combined, the AUROC was 0.80 (95% CI 0.76-0.84) for baseline-only features versus 0.86 (95% CI 0.83-0.89) for baseline plus delta features. CONCLUSIONS: We found modest improvements in predicting lung cancer incidence by combining baseline and delta radiomics. Radiomics could be used to improve current size-based screening guidelines.
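A minimal sketch of the baseline-plus-delta design follows. The abstract does not name the classifier models, so logistic regression stands in, and the DataFrame layout (hypothetical `t0`, `t1`, `label`, `size_class` columns indexed by nodule) is an assumption.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# t0 / t1: hypothetical DataFrames of 219 radiomic features per nodule, indexed by
# nodule id, plus 'label' (1 = incidence case, 0 = nodule-positive control) and
# 'size_class' ('small', 'intermediate', 'large') columns on t0.

def build_features(t0: pd.DataFrame, t1: pd.DataFrame) -> pd.DataFrame:
    feats = [c for c in t0.columns if c not in ("label", "size_class")]
    delta = (t1[feats] - t0[feats]).add_suffix("_delta")   # change from T0 to T1
    return pd.concat([t0[feats], delta, t0[["label", "size_class"]]], axis=1)

def auc_for_size_class(df: pd.DataFrame, size_class: str) -> float:
    """Train/test split within one size class and report the hold-out AUROC."""
    sub = df[df["size_class"] == size_class]
    X = sub.drop(columns=["label", "size_class"]).values
    y = sub["label"].values
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=0)
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=5000))
    clf.fit(X_tr, y_tr)
    return roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
```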


Subjects
Early Detection of Cancer, Lung Neoplasms/diagnostic imaging, Mass Screening, Aged, Case-Control Studies, Female, Humans, Incidence, Lung Neoplasms/epidemiology, Male, Middle Aged, Radiography
4.
J Med Imaging (Bellingham) ; 5(1): 011021, 2018 Jan.
Article in English | MEDLINE | ID: mdl-29594181

ABSTRACT

Lung cancer has a high incidence and mortality rate. Early detection and diagnosis of lung cancer are best achieved with low-dose computed tomography (CT). Classical radiomics features extracted from lung CT images have been shown to predict cancer incidence and prognosis. With the advancement of deep learning and convolutional neural networks (CNNs), deep features can be extracted from lung CTs for prognosis prediction and diagnosis. Because the number of available images in the medical field is limited, transfer learning can be helpful. Using subsets of participants from the National Lung Screening Trial (NLST), we applied a transfer learning approach to differentiate lung cancer nodules from positive controls. We experimented with three different pretrained CNNs for extracting deep features and used five different classifiers. Experiments were also conducted with deep features from different color channels of a pretrained CNN. Selected deep features were combined with radiomics features, and a CNN was also designed and trained. Combinations of features from pretrained CNNs, from CNNs trained on NLST data, and from classical radiomics were used to build classifiers. The best accuracy (76.79%) was obtained using feature combinations. An area under the receiver operating characteristic curve of 0.87 was obtained using a CNN trained on an augmented NLST data cohort.
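A minimal sketch of the transfer-learning feature-extraction step follows. The pretrained networks used in the paper (including vgg-f) are not in torchvision, so an ImageNet-pretrained VGG16 is used as a stand-in; the CT windowing, patch layout, and downstream random forest are assumptions for illustration.

```python
import numpy as np
import torch
from torchvision import models, transforms
from sklearn.ensemble import RandomForestClassifier

# Stand-in deep-feature extractor: ImageNet-pretrained VGG16 (assumption; the
# paper's vgg-f is not available in torchvision).
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
vgg.eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),                                  # HWC [0,1] -> CHW tensor
    transforms.Resize((224, 224), antialias=True),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def deep_features(patch_hu: np.ndarray) -> np.ndarray:
    """Window a 2D CT nodule patch, replicate to 3 channels, pool VGG16 conv features."""
    windowed = np.clip((patch_hu + 1000) / 1400, 0, 1).astype(np.float32)  # rough lung window
    rgb = np.stack([windowed] * 3, axis=-1)                 # the CNN expects 3 channels
    x = preprocess(rgb).unsqueeze(0)                        # 1 x 3 x 224 x 224
    fmap = vgg.features(x)                                  # 1 x 512 x 7 x 7
    return torch.flatten(torch.nn.functional.adaptive_avg_pool2d(fmap, 1), 1).numpy()[0]

# Combining deep and classical radiomics features before classification
# (X_patches, X_radiomics, y are hypothetical arrays for an NLST subset):
# X_deep = np.vstack([deep_features(p) for p in X_patches])
# X = np.hstack([X_deep, X_radiomics])
# clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
```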

5.
Tomography ; 2(4): 388-395, 2016 Dec.
Article in English | MEDLINE | ID: mdl-28066809

ABSTRACT

Lung cancer is the most common cause of cancer-related death in the USA. It can be detected and diagnosed using computed tomography images. For an automated classifier, identifying predictive features from medical images is a key concern. Deep feature extraction using pretrained convolutional neural networks (CNNs) has recently been applied successfully in some image domains. Here, we applied a pretrained CNN to extract deep features from 40 contrast-enhanced computed tomography images of non-small cell adenocarcinoma of the lung, combined the deep features with traditional image features, and trained classifiers to predict short- and long-term survivors. We experimented with several pretrained CNNs and several feature selection strategies. The best previously reported accuracy using traditional quantitative features was 77.5% (area under the curve [AUC], 0.712), achieved by a decision tree classifier. The best reported accuracy from transfer learning and deep features was 77.5% (AUC, 0.713), also with a decision tree classifier. When deep neural network features were combined with traditional quantitative features, we obtained an accuracy of 90% (AUC, 0.935) using the 5 best post-rectified-linear-unit features extracted from a vgg-f pretrained CNN and the 5 best traditional features. The best results were achieved with the symmetric uncertainty feature-ranking algorithm followed by a random forests classifier.
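A minimal sketch of the best-performing pipeline named above, symmetric uncertainty feature ranking followed by a random forest, is shown below. Symmetric uncertainty is 2*I(X;Y)/(H(X)+H(Y)); the quantile discretization and bin count used here are assumptions, since the abstract does not specify them.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import mutual_info_score

def entropy_bits(x):
    """Shannon entropy (bits) of a discrete label array."""
    _, counts = np.unique(x, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def symmetric_uncertainty(feature, y, n_bins=5):
    """SU = 2*I(X;Y) / (H(X) + H(Y)), with the continuous feature quantile-binned."""
    edges = np.quantile(feature, np.linspace(0, 1, n_bins + 1)[1:-1])
    xd = np.digitize(feature, edges)                 # roughly equal-frequency bins
    mi = mutual_info_score(xd, y) / np.log(2)        # sklearn returns nats; convert to bits
    hx, hy = entropy_bits(xd), entropy_bits(y)
    return 0.0 if hx + hy == 0 else 2.0 * mi / (hx + hy)

def rank_and_fit(X, y, top_k=5):
    """Rank features by symmetric uncertainty, keep the top_k, fit a random forest."""
    su = np.array([symmetric_uncertainty(X[:, j], y) for j in range(X.shape[1])])
    keep = np.argsort(su)[::-1][:top_k]
    clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X[:, keep], y)
    return keep, clf
```

In use, `rank_and_fit` would be called separately on the deep-feature matrix, the traditional-feature matrix, or their column-wise concatenation, mirroring the feature combinations compared in the abstract.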
