Results 1 - 20 of 110
1.
Semin Ultrasound CT MR ; 45(2): 152-160, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38403128

ABSTRACT

The emergence of artificial intelligence (AI) in radiology elicits both excitement and uncertainty. AI holds promise for improving radiology across clinical practice, education, and research. Yet AI systems are trained on select datasets that can contain bias and inaccuracies. Radiologists must understand these limitations and engage with AI developers at every step of the process - from algorithm initiation and design to development and implementation - to maximize the benefit and minimize the harm that this technology can enable.


Subjects
Artificial Intelligence, Radiology, Humans, Radiology/methods, Algorithms, Diagnostic Imaging/methods
2.
J Med Imaging (Bellingham) ; 10(4): 044006, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37564098

ABSTRACT

Purpose: We aim to evaluate the performance of radiomic biopsy (RB), best-fit bounding box (BB), and a deep-learning-based segmentation method called no-new-U-Net (nnU-Net), compared to the standard full manual (FM) segmentation method for predicting benign and malignant lung nodules using a computed tomography (CT) radiomic machine learning model. Materials and Methods: A total of 188 CT scans of lung nodules from 2 institutions were used for our study. One radiologist identified and delineated all 188 lung nodules, whereas a second radiologist segmented a subset (n=20) of these nodules. Both radiologists employed FM and RB segmentation methods. BB segmentations were generated computationally from the FM segmentations. The nnU-Net, a deep-learning-based segmentation method, performed automatic nodule detection and segmentation. The time radiologists took to perform segmentations was recorded. Radiomic features were extracted from each segmentation method, and models to predict benign and malignant lung nodules were developed. The Kruskal-Wallis and DeLong tests were used to compare segmentation times and areas under the curve (AUC), respectively. Results: For the delineation of the FM, RB, and BB segmentations, the two radiologists required a median time (IQR) of 113 (54 to 251.5), 21 (9.25 to 38), and 16 (12 to 64.25) s, respectively (p=0.04). In dataset 1, the mean AUC (95% CI) of the FM, RB, BB, and nnU-Net model were 0.964 (0.96 to 0.968), 0.985 (0.983 to 0.987), 0.961 (0.956 to 0.965), and 0.878 (0.869 to 0.888). In dataset 2, the mean AUC (95% CI) of the FM, RB, BB, and nnU-Net model were 0.717 (0.705 to 0.729), 0.919 (0.913 to 0.924), 0.699 (0.687 to 0.711), and 0.644 (0.632 to 0.657). Conclusion: Radiomic biopsy-based models outperformed FM and BB models in prediction of benign and malignant lung nodules in two independent datasets while deep-learning segmentation-based models performed similarly to FM and BB. RB could be a more efficient segmentation method, but further validation is needed.
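In outline, the comparison above amounts to building one radiomics model per segmentation method and comparing cross-validated AUCs. The sketch below is illustrative only: it uses scikit-learn with random feature matrices standing in for features extracted from the FM, RB, and BB segmentations, not the authors' code or data.

```python
# Illustrative sketch: compare radiomics models built from different
# segmentation methods (FM, RB, BB) by cross-validated ROC AUC.
# Feature matrices and labels are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
n_nodules = 188
labels = rng.integers(0, 2, n_nodules)            # 0 = benign, 1 = malignant
feature_sets = {
    "FM": rng.normal(size=(n_nodules, 100)),      # full manual segmentation features
    "RB": rng.normal(size=(n_nodules, 100)),      # radiomic biopsy features
    "BB": rng.normal(size=(n_nodules, 100)),      # bounding-box features
}

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for name, X in feature_sets.items():
    clf = RandomForestClassifier(n_estimators=500, random_state=0)
    aucs = cross_val_score(clf, X, labels, cv=cv, scoring="roc_auc")
    print(f"{name}: mean AUC = {aucs.mean():.3f} (+/- {aucs.std():.3f})")
```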

3.
Heliyon ; 9(7): e17934, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37483733

ABSTRACT

In response to the unprecedented global healthcare crisis of the COVID-19 pandemic, the scientific community has joined forces to tackle the challenges and prepare for future pandemics. Multiple modalities of data have been investigated to understand the nature of COVID-19. In this paper, MIDRC investigators present an overview of the state-of-the-art development of multimodal machine learning for COVID-19 and model assessment considerations for future studies. We begin with a discussion of the lessons learned from radiogenomic studies for cancer diagnosis. We then summarize the multi-modality COVID-19 data investigated in the literature including symptoms and other clinical data, laboratory tests, imaging, pathology, physiology, and other omics data. Publicly available multimodal COVID-19 data provided by MIDRC and other sources are summarized. After an overview of machine learning developments using multimodal data for COVID-19, we present our perspectives on the future development of multimodal machine learning models for COVID-19.

4.
medRxiv ; 2023 Mar 02.
Article in English | MEDLINE | ID: mdl-36909593

ABSTRACT

Lung cancer is the leading cause of cancer mortality in the U.S. The effectiveness of standard treatments, including surgery, chemotherapy, and radiotherapy, depends on several factors, such as cancer type and stage, and survival is much worse for later stages. The National Lung Screening Trial (NLST) established that patients screened using low-dose computed tomography (CT) had a 15 to 20 percent lower risk of dying from lung cancer than patients screened using chest X-rays. While CT excelled at detecting small early-stage malignant nodules, a large proportion of patients (>25%) screened positive, and only a small fraction (<10%) of these positive screens actually had or developed cancer in the subsequent years. We developed a model to distinguish between high- and low-risk patients among the positive screens, noninvasively predicting the likelihood of having or developing lung cancer at the current time point or in subsequent years based on current and previous CT imaging data. However, most of the nodules in NLST are very small, and nodule segmentations or even precise locations are unavailable. Our model comprises two stages: the first stage is a neural network model trained on the Lung Image Database Consortium (LIDC-IDRI) cohort that detects nodules and assigns them malignancy scores. The second stage is a boosted tree that outputs a cancer probability for a patient based on the nodule information (location and malignancy score) predicted by the first stage. Our model, built on a subset of the NLST cohort (n = 1138), shows excellent performance, achieving an area under the receiver operating characteristic curve (ROC AUC) of 0.85 when predicting based on CT images from all three time points available in the NLST dataset.
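As a rough illustration of the second stage described above, a boosted tree can map per-patient summaries of the first-stage nodule detections (for example, the top malignancy score, nodule count, and a location statistic) to a cancer probability. The feature names and data below are invented placeholders, not the authors' inputs or results.

```python
# Hedged sketch of a second-stage boosted tree that turns first-stage nodule
# detections into a per-patient cancer probability. Features are invented
# placeholders for the detector's outputs (malignancy scores, locations).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n_patients = 1138
X = np.column_stack([
    rng.uniform(0, 1, n_patients),     # top_malignancy: highest nodule score
    rng.integers(0, 6, n_patients),    # n_nodules: detected nodule count
    rng.uniform(-1, 1, n_patients),    # mean_z: mean axial location of nodules
])
y = rng.integers(0, 2, n_patients)     # 1 = had or developed lung cancer

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
model = GradientBoostingClassifier(n_estimators=300, learning_rate=0.05)
model.fit(X_tr, y_tr)
print("test AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```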

5.
J Med Imaging (Bellingham) ; 9(6): 066001, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36388142

ABSTRACT

Purpose: We developed a model integrating multimodal quantitative imaging features from tumor and nontumor regions, qualitative features, and clinical data to improve the risk stratification of patients with resectable non-small cell lung cancer (NSCLC). Approach: We retrospectively analyzed 135 patients [mean age, 69 years (range, 43 to 87); 100 male and 35 female] with NSCLC who underwent upfront surgical resection between 2008 and 2012. The tumor and peritumoral regions on both preoperative CT and FDG PET-CT and the vertebral bodies L3 to L5 on FDG PET were segmented to assess the tumor and bone marrow uptake, respectively. Radiomic features were extracted and combined with clinical and CT qualitative features. A random survival forest model was developed using the top-performing features to predict the time to recurrence/progression in the training cohort (n = 101), validated in the testing cohort (n = 34) using the concordance index, and compared with a stage-only model. Patients were stratified into high and low risk of recurrence/progression using Kaplan-Meier analysis. Results: The model, consisting of stage, three wavelet texture features, and three wavelet first-order features, achieved a concordance of 0.78 and 0.76 in the training and testing cohorts, respectively, significantly outperforming the baseline stage-only model results of 0.67 (p < 0.005) and 0.60 (p = 0.008), respectively. Patients at high and low risk of recurrence/progression were significantly stratified in both the training (p < 0.005) and the testing (p = 0.03) cohorts. Conclusions: Our radiomic model, consisting of stage and tumor, peritumoral, and bone marrow features from CT and FDG PET-CT, significantly stratified patients into low and high risk of recurrence/progression.
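A random survival forest of the kind used here can be fit with scikit-survival, as in the minimal sketch below; the feature matrices, follow-up times, and event indicators are synthetic placeholders rather than the study's data, and the split sizes simply mirror the reported cohorts.

```python
# Minimal random-survival-forest sketch (scikit-survival), assuming a
# hypothetical radiomics feature matrix and follow-up data.
import numpy as np
from sksurv.ensemble import RandomSurvivalForest
from sksurv.util import Surv
from sksurv.metrics import concordance_index_censored

rng = np.random.default_rng(2)
X_train, X_test = rng.normal(size=(101, 7)), rng.normal(size=(34, 7))
t_train, t_test = rng.uniform(1, 60, 101), rng.uniform(1, 60, 34)      # months
e_train = rng.integers(0, 2, 101).astype(bool)                         # event observed?
e_test = rng.integers(0, 2, 34).astype(bool)

y_train = Surv.from_arrays(event=e_train, time=t_train)
rsf = RandomSurvivalForest(n_estimators=500, min_samples_leaf=5, random_state=2)
rsf.fit(X_train, y_train)

risk = rsf.predict(X_test)                       # higher = earlier recurrence
c_index = concordance_index_censored(e_test, t_test, risk)[0]
print("test concordance:", round(c_index, 3))

# Stratify patients into high/low risk at the median risk score
high_risk = risk > np.median(risk)
```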

6.
Commun Med (Lond) ; 2: 133, 2022.
Article in English | MEDLINE | ID: mdl-36310650

ABSTRACT

An increasing array of tools is being developed using artificial intelligence (AI) and machine learning (ML) for cancer imaging. The development of an optimal tool requires multidisciplinary engagement to ensure that the appropriate use case is met, as well as to undertake robust development and testing prior to its adoption into healthcare systems. This multidisciplinary review highlights key developments in the field. We discuss the challenges and opportunities of AI and ML in cancer imaging; considerations for the development of algorithms into tools that can be widely used and disseminated; and the development of the ecosystem needed to promote growth of AI and ML in cancer imaging.

7.
Nat Commun ; 13(1): 4128, 2022 07 15.
Article in English | MEDLINE | ID: mdl-35840566

ABSTRACT

International challenges have become the de facto standard for comparative assessment of image analysis algorithms. Although segmentation is the most widely investigated medical image processing task, the various challenges have been organized to focus only on specific clinical tasks. We organized the Medical Segmentation Decathlon (MSD), a biomedical image analysis challenge in which algorithms compete across a multitude of both tasks and modalities, to investigate the hypothesis that a method capable of performing well on multiple tasks will generalize well to a previously unseen task and potentially outperform a custom-designed solution. The MSD results confirmed this hypothesis; moreover, the MSD winner continued to generalize well to a wide range of other clinical problems over the following two years. Three main conclusions can be drawn from this study: (1) state-of-the-art image segmentation algorithms generalize well when retrained on unseen tasks; (2) consistent algorithmic performance across multiple tasks is a strong surrogate of algorithmic generalizability; (3) the training of accurate AI segmentation models is now commoditized and accessible to scientists who are not versed in AI model training.


Subjects
Algorithms, Computer-Assisted Image Processing, Computer-Assisted Image Processing/methods
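Challenges such as the MSD rank methods largely by volumetric overlap between predicted and reference masks. The sketch below computes a Dice coefficient per task and averages it across tasks; the masks are random placeholders, not challenge data.

```python
# Sketch: Dice overlap, the standard segmentation metric used to rank
# challenge entries, aggregated across several tasks. Masks are random
# placeholders.
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

rng = np.random.default_rng(3)
tasks = ["liver", "prostate", "lung"]
scores = {}
for task in tasks:
    pred = rng.integers(0, 2, (64, 64, 64)).astype(bool)
    truth = rng.integers(0, 2, (64, 64, 64)).astype(bool)
    scores[task] = dice(pred, truth)

print(scores)
print("mean Dice across tasks:", np.mean(list(scores.values())))
```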
8.
Neuro Oncol ; 24(4): 601-609, 2022 04 01.
Article in English | MEDLINE | ID: mdl-34487172

ABSTRACT

BACKGROUND: Non-invasive differentiation between schwannomas and neurofibromas is important for appropriate management, preoperative counseling, and surgical planning, but has proven difficult using conventional imaging. The objective of this study was to develop and evaluate machine learning approaches for differentiating peripheral schwannomas from neurofibromas. METHODS: We assembled a cohort of schwannomas and neurofibromas from 3 independent institutions and extracted high-dimensional radiomic features from gadolinium-enhanced, T1-weighted MRI using the PyRadiomics package on the Quantitative Imaging Feature Pipeline. Age, sex, neurogenetic syndrome, spontaneous pain, and motor deficit were recorded. We evaluated the performance of 6 radiomics-based classifier models with and without clinical features and compared model performance against human expert evaluators. RESULTS: One hundred and seven schwannomas and 59 neurofibromas were included. The primary models included both clinical and imaging data. The accuracy of the human evaluators (0.765) did not significantly exceed the no-information rate (NIR), whereas the Support Vector Machine (0.929), Logistic Regression (0.929), and Random Forest (0.905) classifiers exceeded the NIR. Using the method of DeLong, the AUCs of the Logistic Regression (AUC = 0.923) and K Nearest Neighbor (AUC = 0.923) classifiers were significantly greater than that of the human evaluators (AUC = 0.766; p = 0.041). CONCLUSIONS: The radiomics-based classifiers developed here proved to be more accurate and had a higher AUC on the ROC curve than expert human evaluators. This demonstrates that radiomics using routine MRI sequences and clinical features can aid in the differentiation of peripheral schwannomas and neurofibromas.


Subjects
Neurilemmoma, Neurofibroma, Humans, Machine Learning, Magnetic Resonance Imaging/methods, Neurilemmoma/diagnostic imaging, Neurofibroma/diagnostic imaging, Retrospective Studies
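The feature-extraction step above relies on the PyRadiomics package. The hedged sketch below shows the basic extraction call (with placeholder file paths) and a single SVM radiomics classifier run on synthetic features so the pipeline executes end to end; the actual study compared six classifier types and ran extraction through the Quantitative Imaging Feature Pipeline.

```python
# Hedged sketch: PyRadiomics feature extraction (real API call, placeholder
# paths) and a simple SVM radiomics classifier on synthetic features.
import numpy as np
from radiomics import featureextractor
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

extractor = featureextractor.RadiomicsFeatureExtractor()

def extract_case(image_path: str, mask_path: str) -> list:
    """Numeric radiomic feature values for one gadolinium-enhanced T1 lesion."""
    result = extractor.execute(image_path, mask_path)
    return [v for k, v in result.items() if not k.startswith("diagnostics_")]

# In the study, X would be built by calling extract_case on each lesion;
# here random numbers stand in so the classifier step runs end to end.
rng = np.random.default_rng(12)
X = rng.normal(size=(166, 100))          # 107 schwannomas + 59 neurofibromas
y = np.array([1] * 107 + [0] * 59)       # 1 = schwannoma, 0 = neurofibroma

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
print("cross-validated AUC:",
      round(cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean(), 2))
```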
9.
J Med Imaging (Bellingham) ; 8(5): 054501, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34514033

ABSTRACT

Purpose: To differentiate oncocytoma and chromophobe renal cell carcinoma (RCC) using radiomics features computed from spherical samples of image regions of interest, "radiomic biopsies" (RBs). Approach: In a retrospective cohort study of 102 CT cases [68 males (67%), 34 females (33%); mean age ± SD, 63 ± 12 years], 42 oncocytomas (41%) and 60 chromophobes (59%) were pathology confirmed. A board-certified radiologist performed two RB rounds. From each RB round, we computed radiomics features and compared the performance of random forest and AdaBoost binary classifiers trained on the features. To control for overfitting, we performed 10 rounds of 70%-to-30% train-test splits with feature selection, cross-validation, and hyperparameter optimization on each split. We evaluated performance with the test ROC AUC. We tested models on data from the other RB round and compared the results with same-round testing using the DeLong test. We clustered important features for each round and measured agreement with a bootstrapped adjusted Rand index. Results: Our best classifiers achieved an average AUC of 0.71 ± 0.024. We found no evidence of an effect of RB round (p = 1). We also found no evidence of a decrease in model performance when models were tested on the other RB round (p = 0.85). Feature clustering produced seven clusters in each RB round with high agreement (Rand index = 0.981 ± 0.002, p < 0.00001). Conclusions: A consistent radiomic signature can be derived from RBs and could help distinguish oncocytoma and chromophobe RCC.
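The overfitting controls described above (feature selection, cross-validation, and hyperparameter optimization confined to each 70/30 training split) map naturally onto a scikit-learn Pipeline wrapped in a grid search. The sketch below uses synthetic features and a deliberately small parameter grid; it illustrates the pattern, not the study's configuration.

```python
# Sketch of leakage-free evaluation: feature selection and hyperparameter
# tuning live inside a Pipeline that is refit on each 70/30 split.
# Synthetic data stand in for the radiomic-biopsy features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
X, y = rng.normal(size=(102, 120)), rng.integers(0, 2, 102)

aucs = []
for split in range(10):                                # 10 repeated splits
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=split)
    pipe = Pipeline([
        ("select", SelectKBest(f_classif)),
        ("rf", RandomForestClassifier(random_state=split)),
    ])
    grid = GridSearchCV(
        pipe,
        {"select__k": [10, 20], "rf__n_estimators": [200, 500]},
        cv=5, scoring="roc_auc")
    grid.fit(X_tr, y_tr)
    aucs.append(roc_auc_score(y_te, grid.predict_proba(X_te)[:, 1]))

print(f"test AUC: {np.mean(aucs):.2f} +/- {np.std(aucs):.3f}")
```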

10.
JCO Clin Cancer Inform ; 5: 746-757, 2021 06.
Article in English | MEDLINE | ID: mdl-34264747

ABSTRACT

PURPOSE: Small-cell lung cancer (SCLC) is the deadliest form of lung cancer, partly because of its short doubling time. Delays in imaging identification and diagnosis of nodules create a risk for stage migration. The purpose of our study was to determine whether a machine learning radiomics model can detect SCLC on computed tomography (CT) among all nodules at least 1 cm in size. MATERIALS AND METHODS: Computed tomography scans from a single institution were selected and resampled to 1 × 1 × 1 mm. Studies were divided into SCLC and other scans comprising benign, adenocarcinoma, and squamous cell carcinoma, and were segregated into group A (noncontrast scans) and group B (contrast-enhanced scans). Four machine learning classification models - support vector classifier, random forest (RF), XGBoost, and logistic regression - were used to generate radiomic models from 59 quantitative first-order and texture PyRadiomics features compliant with the Image Biomarker Standardization Initiative that were found to be robust between two segmenters; minimum Redundancy Maximum Relevance (mRMR) feature selection was applied within each leave-one-out cross-validation fold to avoid overfitting. Performance was evaluated using the receiver operating characteristic curve. A final model was created using the RF classifier and aggregate mRMR to determine feature importance. RESULTS: A total of 103 studies were included in the analysis. The area under the receiver operating characteristic curve for RF, support vector classifier, XGBoost, and logistic regression was 0.81, 0.77, 0.84, and 0.84 in group A and 0.88, 0.87, 0.85, and 0.81 in group B, respectively. Nine radiomic features in group A and 14 radiomic features in group B were predictive of SCLC. Six radiomic features overlapped between groups A and B. CONCLUSION: A machine learning radiomics model may help differentiate SCLC from other lung lesions.


Subjects
Lung Neoplasms, X-Ray Computed Tomography, Humans, Lung Neoplasms/diagnostic imaging, Machine Learning, ROC Curve, Retrospective Studies
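The key overfitting control in this study is that feature selection happens inside each leave-one-out fold. The sketch below shows that nesting pattern with scikit-learn, using mutual-information ranking as a simple stand-in for mRMR and synthetic features in place of the 59 PyRadiomics features.

```python
# Sketch: feature selection nested inside leave-one-out cross-validation so
# the held-out scan never influences which features are kept. Mutual
# information is a simple stand-in for mRMR; data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.pipeline import Pipeline
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
X = rng.normal(size=(103, 59))          # 59 radiomic features per study
y = rng.integers(0, 2, 103)             # 1 = SCLC, 0 = other lesion

pipe = Pipeline([
    ("select", SelectKBest(mutual_info_classif, k=10)),
    ("rf", RandomForestClassifier(n_estimators=200, random_state=5)),
])
proba = cross_val_predict(pipe, X, y, cv=LeaveOneOut(), method="predict_proba")
print("LOOCV AUC:", round(roc_auc_score(y, proba[:, 1]), 2))
```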
11.
IEEE Trans Med Imaging ; 40(12): 3748-3761, 2021 12.
Article in English | MEDLINE | ID: mdl-34264825

ABSTRACT

Lung cancer is by far the leading cause of cancer death in the US. Recent studies have demonstrated the effectiveness of screening using low-dose CT (LDCT) in reducing lung cancer-related mortality. While lung nodules are detected with a high rate of sensitivity, this exam has a low specificity rate, and it is still difficult to separate benign and malignant lesions. The ISBI 2018 Lung Nodule Malignancy Prediction Challenge, developed by a team from the Quantitative Imaging Network of the National Cancer Institute, focused on the prediction of lung nodule malignancy from two sequential LDCT screening exams using automated (non-manual) algorithms. We curated a cohort of 100 subjects who participated in the National Lung Screening Trial and had established pathological diagnoses. Data from 30 subjects were randomly selected for training, and the remainder were used for testing. Participants were evaluated based on the area under the receiver operating characteristic curve (AUC) of nodule-wise malignancy scores generated by their algorithms on the test set. The challenge had 17 participants, with 11 teams submitting reports with the method description mandated by the challenge rules. Participants used quantitative methods, with reported test AUCs ranging from 0.698 to 0.913. The top five contestants used deep learning approaches, reporting AUCs between 0.87 and 0.91. The teams' predictors did not differ significantly from each other or from a volume-change estimate (p = .05 with Bonferroni-Holm correction).


Subjects
Lung Neoplasms, Solitary Pulmonary Nodule, Algorithms, Humans, Lung, Lung Neoplasms/diagnostic imaging, ROC Curve, Solitary Pulmonary Nodule/diagnostic imaging, X-Ray Computed Tomography
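Challenge entries were ranked by nodule-wise AUC, and the closing comparison turns on whether AUC differences are meaningful. The sketch below attaches a simple percentile-bootstrap confidence interval to an AUC; the scores and labels are synthetic, not challenge submissions.

```python
# Sketch: nodule-wise AUC with a percentile bootstrap confidence interval,
# the kind of uncertainty estimate needed before declaring one challenge
# entry better than another. Scores and labels are synthetic.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(6)
y_true = rng.integers(0, 2, 70)                 # test-set malignancy labels
scores = np.clip(y_true * 0.3 + rng.normal(0.5, 0.25, 70), 0, 1)

point = roc_auc_score(y_true, scores)
boot = []
for _ in range(2000):
    idx = rng.integers(0, len(y_true), len(y_true))
    if len(np.unique(y_true[idx])) < 2:         # need both classes to score
        continue
    boot.append(roc_auc_score(y_true[idx], scores[idx]))

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"AUC = {point:.3f} (95% bootstrap CI {lo:.3f}-{hi:.3f})")
```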
12.
Neurosurgery ; 89(3): 509-517, 2021 08 16.
Article in English | MEDLINE | ID: mdl-34131749

ABSTRACT

BACKGROUND: Clinicoradiologic differentiation between benign and malignant peripheral nerve sheath tumors (PNSTs) has important management implications. OBJECTIVE: To develop and evaluate machine-learning approaches to differentiate benign from malignant PNSTs. METHODS: We identified PNSTs treated at 3 institutions and extracted high-dimensional radiomics features from gadolinium-enhanced, T1-weighted magnetic resonance imaging (MRI) sequences. Training and test sets were selected randomly in a 70:30 ratio. A total of 900 image features were automatically extracted using the PyRadiomics package from the Quantitative Imaging Feature Pipeline. Clinical data including age, sex, neurogenetic syndrome presence, spontaneous pain, and motor deficit were also incorporated. Features were selected using sparse regression analysis, and retained features were further refined by gradient boost modeling to optimize the area under the curve (AUC) for diagnosis. We evaluated the performance of radiomics-based classifiers with and without clinical features and compared performance against human readers. RESULTS: A total of 95 malignant and 171 benign PNSTs were included. The final classifier model included 21 imaging and clinical features. Sensitivity, specificity, and AUC of 0.676, 0.882, and 0.845, respectively, were achieved on the test set. Using imaging and clinical features, human experts collectively achieved sensitivity, specificity, and AUC of 0.786, 0.431, and 0.624, respectively. The AUC of the classifier was statistically better than that of the expert humans (P = .002). The expert humans were not statistically better than the no-information rate, whereas the classifier was (P = .001). CONCLUSION: Radiomics-based machine learning using routine MRI sequences and clinical features can aid in the evaluation of PNSTs. Further improvement may be achieved by incorporating additional imaging sequences and clinical variables into future models.


Subjects
Nerve Sheath Neoplasms, Neurofibrosarcoma, Humans, Machine Learning, Magnetic Resonance Imaging, Nerve Sheath Neoplasms/diagnostic imaging, Retrospective Studies
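The modeling strategy described above (sparse regression to prune the 900-plus features, then gradient boosting on the retained set) might be sketched as below. The feature matrix, labels, and every setting are placeholders chosen for illustration, not the authors' pipeline.

```python
# Sketch: sparse (L1) logistic regression used as a feature filter, followed
# by a gradient-boosted classifier on the retained radiomic + clinical
# features. Data and settings are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.feature_selection import SelectFromModel
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(7)
X = rng.normal(size=(266, 905))          # ~900 imaging + a few clinical features
y = rng.integers(0, 2, 266)              # 1 = malignant PNST, 0 = benign

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=7)
model = Pipeline([
    ("scale", StandardScaler()),
    # keep the 50 largest-magnitude L1 coefficients as the "sparse" filter
    ("sparse", SelectFromModel(
        LogisticRegression(penalty="l1", solver="liblinear", C=0.1),
        threshold=-np.inf, max_features=50)),
    ("gbm", GradientBoostingClassifier(n_estimators=300, learning_rate=0.05)),
])
model.fit(X_tr, y_tr)
print("test AUC:", round(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]), 3))
```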
13.
Neurooncol Adv ; 3(1): vdab042, 2021.
Article in English | MEDLINE | ID: mdl-33977272

ABSTRACT

BACKGROUND: Diffuse intrinsic pontine gliomas (DIPGs) are lethal pediatric brain tumors. Presently, MRI is the mainstay of disease diagnosis and surveillance. We identify clinically significant computational features from MRI and create a prognostic machine learning model. METHODS: We isolated tumor volumes of T1 post-contrast (T1) and T2-weighted (T2) MRIs from 177 treatment-naïve DIPG patients from an international cohort for model training and testing. The Quantitative Image Feature Pipeline and PyRadiomics were used for feature extraction. Ten-fold cross-validation of least absolute shrinkage and selection operator (LASSO) Cox regression selected optimal features to predict overall survival in the training dataset, and the resulting model was tested in the independent testing dataset. We analyzed model performance using clinical variables (age at diagnosis and sex) only, radiomics only, and radiomics plus clinical variables. RESULTS: All selected features were intensity- and texture-based features computed on the wavelet-filtered images (three T1 gray-level co-occurrence matrix (GLCM) texture features, one T2 GLCM texture feature, and the T2 first-order mean). This multivariable Cox model demonstrated a concordance of 0.68 (95% CI: 0.61-0.74) in the training dataset, significantly outperforming the clinical-only model (C = 0.57 [95% CI: 0.49-0.64]). Adding clinical features to radiomics slightly improved performance (C = 0.70 [95% CI: 0.64-0.77]). The combined radiomics and clinical model was validated in the independent testing dataset (C = 0.59 [95% CI: 0.51-0.67], Noether's test P = .02). CONCLUSIONS: In this international study, we demonstrate the use of radiomic signatures to create a machine learning model for DIPG prognostication. Standardized, quantitative approaches that objectively measure DIPG changes, including computational MRI evaluation, could offer new approaches to assessing tumor phenotype and serve a future role in optimizing clinical trial eligibility and tumor surveillance.
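The LASSO Cox selection step could be sketched with scikit-survival's penalized Cox model as below; the wavelet-feature matrix and survival data are synthetic, and the cross-validated choice of penalty is simplified to picking a point on the regularization path.

```python
# Sketch: LASSO-penalized Cox regression (scikit-survival) to select a small
# set of radiomic features predictive of overall survival. Data are synthetic.
import numpy as np
from sksurv.linear_model import CoxnetSurvivalAnalysis
from sksurv.util import Surv

rng = np.random.default_rng(8)
X = rng.normal(size=(124, 300))                    # wavelet radiomic features
time = rng.uniform(1, 36, 124)                     # months of follow-up
event = rng.integers(0, 2, 124).astype(bool)
y = Surv.from_arrays(event=event, time=time)

lasso_cox = CoxnetSurvivalAnalysis(l1_ratio=1.0, alpha_min_ratio=0.01)
lasso_cox.fit(X, y)

# Pick the strongest penalty that still keeps a handful of features; in the
# study this choice would come from 10-fold cross-validation instead.
coefs = lasso_cox.coef_                            # shape: (n_features, n_alphas)
n_selected = (coefs != 0).sum(axis=0)
alpha_idx = int(np.argmax(n_selected >= 5))
selected = np.flatnonzero(coefs[:, alpha_idx])
print("selected feature indices:", selected)
```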

14.
Front Cardiovasc Med ; 7: 591368, 2020.
Article in English | MEDLINE | ID: mdl-33240940

ABSTRACT

Cardiovascular magnetic resonance (CMR) radiomics is a novel technique for advanced cardiac image phenotyping by analyzing multiple quantifiers of shape and tissue texture. In this paper, we assess, in the largest sample published to date, the performance of CMR radiomics models for identifying changes in cardiac structure and tissue texture due to cardiovascular risk factors. We evaluated five risk factor groups from the first 5,065 UK Biobank participants: hypertension (n = 1,394), diabetes (n = 243), high cholesterol (n = 779), current smoker (n = 320), and previous smoker (n = 1,394). Each group was randomly matched with an equal number of healthy comparators (without known cardiovascular disease or risk factors). Radiomics analysis was applied to short-axis images of the left and right ventricles at end-diastole and end-systole, yielding a total of 684 features per study. Sequential forward feature selection in combination with machine learning (ML) algorithms (support vector machine, random forest, and logistic regression) was used to build radiomics signatures for each specific risk group. We evaluated the degree of separation achieved by the identified radiomics signatures using the area under the receiver operating characteristic (ROC) curve (AUC) and statistical testing. Logistic regression with L1-regularization was the optimal ML model. Compared to conventional imaging indices, radiomics signatures improved the discrimination of risk factor vs. healthy subgroups as assessed by AUC [diabetes: 0.80 vs. 0.70, hypertension: 0.72 vs. 0.69, high cholesterol: 0.71 vs. 0.65, current smoker: 0.68 vs. 0.65, previous smoker: 0.63 vs. 0.60]. Furthermore, we considered the clinical interpretation of risk-specific radiomics signatures. For hypertensive individuals and previous smokers, the surface area to volume ratio was smaller in the risk factor vs. healthy subjects, perhaps reflecting a pattern of global concentric hypertrophy in these conditions. In the diabetes subgroup, the most discriminatory radiomics feature was the median intensity of the myocardium at end-systole, which suggests a global alteration at the myocardial tissue level. This study confirms the feasibility and potential of CMR radiomics for deeper image phenotyping of cardiovascular health and disease. We demonstrate that such analysis may have utility beyond conventional CMR metrics for improved detection and understanding of the early effects of cardiovascular risk factors on cardiac structure and tissue.
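The winning combination reported above (sequential forward feature selection wrapped around an L1-regularized logistic regression) can be sketched with scikit-learn. The feature matrix and labels below are synthetic, and the feature count is reduced from 684 to keep the sketch fast.

```python
# Sketch: sequential forward feature selection wrapped around an
# L1-regularized logistic regression, the pattern used to build each
# risk-factor signature. Data are synthetic; 100 features stand in for
# the 684 CMR radiomics features to keep the sketch fast.
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(9)
X = rng.normal(size=(486, 100))
y = rng.integers(0, 2, 486)              # 1 = risk factor, 0 = matched healthy

base = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
model = Pipeline([
    ("scale", StandardScaler()),
    ("sfs", SequentialFeatureSelector(base, n_features_to_select=5,
                                      direction="forward", cv=5)),
    ("clf", base),
])
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
print("cross-validated AUC:", round(auc, 2))
```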

15.
Arch Plast Surg ; 47(5): 428-434, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32971594

ABSTRACT

BACKGROUND: Three-dimensional (3D) model printing improves visualization of anatomical structures in space compared to two-dimensional (2D) data and creates an exact model of the surgical site that can be used for reference during surgery. There is limited evidence on the effects of using 3D models in microsurgical reconstruction on improving clinical outcomes. METHODS: A retrospective review was performed of patients undergoing reconstructive breast microsurgery procedures from 2017 to 2019 who received computed tomography angiography (CTA) scans only or with 3D models for preoperative surgical planning. The preoperative decision to perform a deep inferior epigastric perforator (DIEP) versus a muscle-sparing transverse rectus abdominis myocutaneous (MS-TRAM) flap, whether that decision changed during flap harvest, and postoperative complications were tracked according to the preoperative imaging used. In addition, we describe three example cases showing direct application of a 3D mold as an accurate model to guide intraoperative dissection in complex microsurgical reconstruction. RESULTS: Fifty-eight abdominal-based breast free flaps performed using conventional CTA were compared with a matched cohort of 58 breast free flaps performed with a 3D printed model. There was no flap loss in either group. There was a significant reduction in flap harvest time with use of the 3D model (CTA vs. 3D, 117.7±14.2 minutes vs. 109.8±11.6 minutes; P=0.001). In addition, the preoperative decision on the type of flap harvested did not change in any case in the 3D print group (0%), compared with a 24.1% change in the conventional CTA group. CONCLUSIONS: Use of a 3D printed model improves the accuracy of preoperative planning and reduces flap harvest time, with similar postoperative complications, in complex microsurgical reconstruction.

16.
Tomography ; 6(2): 111-117, 2020 06.
Article in English | MEDLINE | ID: mdl-32548287

ABSTRACT

Several institutions have developed image feature extraction software to compute quantitative descriptors of medical images for radiomics analyses. With radiomics increasingly proposed for use in research and clinical contexts, new techniques are necessary for standardizing and replicating radiomics findings across software implementations. We have developed a software toolkit for the creation of 3D digital reference objects with customizable size, shape, intensity, texture, and margin sharpness values. Using user-supplied input parameters, these objects are defined mathematically as continuous functions, discretized, and then saved as DICOM objects. Here, we present the definition of these objects, parameterized derivations of a subset of their radiomics values, computer code for object generation, example use cases, and a user-downloadable sample collection used for the examples cited in this paper.


Subjects
Computer-Assisted Image Processing, Radiometry, Software, Radiometry/standards, Reference Standards
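As a rough illustration of the idea (not the published toolkit's code), the sketch below defines a spherical digital reference object as a continuous function of radius, with a sigmoid edge controlling margin sharpness, and discretizes it onto a voxel grid; writing the volume out as DICOM is left as a comment.

```python
# Illustrative sketch (not the published toolkit): a spherical digital
# reference object defined as a continuous function of radius, with a
# sigmoid edge controlling margin sharpness, discretized to a voxel grid.
import numpy as np

def spherical_dro(shape=(64, 64, 64), radius=20.0, inside=200.0,
                  outside=0.0, edge_sharpness=2.0):
    """Return a 3D array: `inside` intensity within the sphere, `outside`
    beyond it, with a sigmoid transition whose width shrinks as
    edge_sharpness grows."""
    center = (np.array(shape) - 1) / 2.0
    zz, yy, xx = np.meshgrid(*[np.arange(s) for s in shape], indexing="ij")
    r = np.sqrt((zz - center[0])**2 + (yy - center[1])**2 + (xx - center[2])**2)
    blend = 1.0 / (1.0 + np.exp(edge_sharpness * (r - radius)))  # 1 inside, 0 outside
    return outside + (inside - outside) * blend

dro = spherical_dro()
print(dro.shape, round(float(dro.max()), 1), round(float(dro.mean()), 1))
# The array could then be written out slice by slice as DICOM (e.g., with pydicom).
```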
17.
Radiol Imaging Cancer ; 2(3): e190062, 2020 05 29.
Article in English | MEDLINE | ID: mdl-32550600

ABSTRACT

Purpose: To evaluate interreader agreement in annotating semantic features on preoperative CT images to predict microvascular invasion (MVI) in patients with hepatocellular carcinoma (HCC). Materials and Methods: Preoperative, contrast material-enhanced triphasic CT studies from 89 patients (median age, 64 years; age range, 36-85 years; 70 men) who underwent hepatic resection between 2008 and 2017 for a solitary HCC were reviewed. Three radiologists annotated CT images obtained during the arterial and portal venous phases, independently and in consensus, with features associated with MVI reported by other investigators. The assessed factors were the presence or absence of discrete internal arteries, hypoattenuating halo, tumor-liver difference, peritumoral enhancement, and tumor margin. Testing also included previously proposed MVI signatures: radiogenomic venous invasion (RVI) and two-trait predictor of venous invasion (TTPVI), using single-reader and consensus annotations. Cohen (two-reader) and Fleiss (three-reader) κ and the bootstrap method were used to analyze interreader agreement and differences in model performance, respectively. Results: Of the HCCs assessed, 32.6% (29 of 89) had MVI at histopathologic examination. Two-reader agreement, as assessed by pairwise Cohen κ statistics, varied as a function of feature and imaging phase, ranging from 0.02 to 0.6; three-reader Fleiss κ varied from -0.17 to 0.56. For RVI and TTPVI, the best single-reader performance had sensitivity and specificity of 52% and 77% and 67% and 74%, respectively. In consensus, the sensitivity and specificity for the RVI and TTPVI signatures were 59% and 67% and 70% and 62%, respectively. Conclusion: Interreader variability in semantic feature annotation remains a challenge and affects the reproducibility of predictive models for preoperative detection of MVI in HCC. Supplemental material is available for this article. © RSNA, 2020.


Subjects
Hepatocellular Carcinoma, Liver Neoplasms, Neoplasm Invasiveness/diagnostic imaging, Adult, Aged, Aged 80 and over, Hepatocellular Carcinoma/diagnostic imaging, Hepatocellular Carcinoma/surgery, Female, Humans, Liver Neoplasms/diagnostic imaging, Liver Neoplasms/surgery, Male, Middle Aged, Observer Variation, Reproducibility of Results, Semantics, X-Ray Computed Tomography
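The pairwise Cohen κ and three-reader Fleiss κ statistics reported above can be computed with scikit-learn and statsmodels, as in the hedged sketch below; the reader annotations are synthetic stand-ins for one binary semantic feature.

```python
# Sketch: two-reader Cohen kappa and three-reader Fleiss kappa for one
# binary semantic feature (e.g., presence of a hypoattenuating halo).
# Reader annotations are synthetic.
import numpy as np
from sklearn.metrics import cohen_kappa_score
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

rng = np.random.default_rng(10)
n_cases = 89
reader1 = rng.integers(0, 2, n_cases)
reader2 = np.where(rng.random(n_cases) < 0.8, reader1, 1 - reader1)  # ~80% agreement
reader3 = np.where(rng.random(n_cases) < 0.7, reader1, 1 - reader1)

print("Cohen kappa (reader 1 vs 2):",
      round(cohen_kappa_score(reader1, reader2), 2))

ratings = np.column_stack([reader1, reader2, reader3])   # cases x readers
table, _ = aggregate_raters(ratings)                     # cases x category counts
print("Fleiss kappa (3 readers):", round(fleiss_kappa(table), 2))
```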
18.
Radiology ; 295(2): 328-338, 2020 05.
Article in English | MEDLINE | ID: mdl-32154773

ABSTRACT

Background Radiomic features may quantify characteristics present in medical imaging. However, the lack of standardized definitions and validated reference values has hampered clinical use. Purpose To standardize a set of 174 radiomic features. Materials and Methods Radiomic features were assessed in three phases. In phase I, 487 features were derived from the basic set of 174 features. Twenty-five research teams with unique radiomics software implementations computed feature values directly from a digital phantom, without any additional image processing. In phase II, 15 teams computed values for 1347 derived features using a CT image of a patient with lung cancer and predefined image processing configurations. In both phases, consensus among the teams on the validity of tentative reference values was measured through the frequency of the modal value and classified as follows: less than three matches, weak; three to five matches, moderate; six to nine matches, strong; 10 or more matches, very strong. In the final phase (phase III), a public data set of multimodality images (CT, fluorine 18 fluorodeoxyglucose PET, and T1-weighted MRI) from 51 patients with soft-tissue sarcoma was used to prospectively assess the reproducibility of standardized features. Results Consensus on reference values was initially weak for 232 of 302 features (76.8%) at phase I and 703 of 1075 features (65.4%) at phase II. At the final iteration, weak consensus remained for only two of 487 features (0.4%) at phase I and 19 of 1347 features (1.4%) at phase II. Strong or better consensus was achieved for 463 of 487 features (95.1%) at phase I and 1220 of 1347 features (90.6%) at phase II. Overall, 169 of 174 features were standardized in the first two phases. In the final validation phase (phase III), most of the 169 standardized features could be excellently reproduced (166 with CT; 164 with PET; and 164 with MRI). Conclusion A set of 169 radiomics features was standardized, which enabled verification and calibration of different radiomics software. © RSNA, 2020. Online supplemental material is available for this article. See also the editorial by Kuhl and Truhn in this issue.


Subjects
Biomarkers/analysis, Computer-Assisted Image Processing/standards, Software, Calibration, Fluorodeoxyglucose F18, Humans, Lung Neoplasms/diagnostic imaging, Magnetic Resonance Imaging, Imaging Phantoms, Phenotype, Positron-Emission Tomography, Radiopharmaceuticals, Reproducibility of Results, Sarcoma/diagnostic imaging, X-Ray Computed Tomography
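The consensus rule described above (grading each feature by how many teams report the modal value) can be sketched as follows; the team submissions are synthetic, and the study's rounding tolerances and iterative updates are ignored.

```python
# Sketch of the consensus rule: for each feature, count how many teams
# report the modal value and grade the consensus. Submissions are synthetic
# and real tolerance/rounding rules are ignored.
import numpy as np

def consensus_strength(n_matches: int) -> str:
    if n_matches < 3:
        return "weak"
    if n_matches <= 5:
        return "moderate"
    if n_matches <= 9:
        return "strong"
    return "very strong"

rng = np.random.default_rng(11)
n_teams, n_features = 25, 10
# Each team reports one value per feature; most teams agree, a few deviate.
reference = rng.normal(size=n_features).round(3)
submissions = np.where(rng.random((n_teams, n_features)) < 0.8,
                       reference, rng.normal(size=(n_teams, n_features)).round(3))

for j in range(n_features):
    values, counts = np.unique(submissions[:, j], return_counts=True)
    modal_count = counts.max()
    print(f"feature {j}: modal value reported by {modal_count} teams "
          f"-> {consensus_strength(modal_count)}")
```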
19.
J Med Imaging (Bellingham) ; 7(4): 042803, 2020 Jul.
Article in English | MEDLINE | ID: mdl-32206688

ABSTRACT

Quantitative image features that can be computed from medical images are proving to be valuable biomarkers of underlying cancer biology that can be used for assessing treatment response and predicting clinical outcomes. However, validation and eventual clinical implementation of these tools are challenging due to the absence of shared software algorithms, architectures, and the tools required for computing, comparing, evaluating, and disseminating predictive models. Moreover, researchers need programming expertise to complete these tasks. The quantitative image feature pipeline (QIFP) is an open-source, web-based graphical user interface (GUI) for configurable quantitative image-processing pipelines for both planar (two-dimensional) and volumetric (three-dimensional) medical images. This gives researchers and clinicians a GUI-driven approach to processing and analyzing images, without having to write any software code. The QIFP allows users to upload a repository of linked imaging, segmentation, and clinical data or to access publicly available datasets (e.g., The Cancer Imaging Archive) through direct links. Researchers have access to a library of file conversion, segmentation, quantitative image feature extraction, and machine learning algorithms. An interface is also provided to allow users to upload their own algorithms in Docker containers. The QIFP gives researchers the tools and infrastructure for the assessment and development of new imaging biomarkers and the ability to use them for single- and multicenter clinical and virtual clinical trials.

20.
Nat Mach Intell ; 2(5): 274-282, 2020 May.
Article in English | MEDLINE | ID: mdl-33791593

ABSTRACT

Lung cancer is the most common fatal malignancy in adults worldwide, and non-small cell lung cancer (NSCLC) accounts for 85% of lung cancer diagnoses. Computed tomography (CT) is routinely used in clinical practice to determine lung cancer treatment and assess prognosis. Here, we developed LungNet, a shallow convolutional neural network for predicting outcomes of NSCLC patients. We trained and evaluated LungNet on four independent cohorts of NSCLC patients from four medical centers: Stanford Hospital (n = 129), H. Lee Moffitt Cancer Center and Research Institute (n = 185), MAASTRO Clinic (n = 311) and Charité - Universitätsmedizin (n = 84). We show that outcomes from LungNet are predictive of overall survival in all four independent survival cohorts as measured by concordance indices of 0.62, 0.62, 0.62 and 0.58 on cohorts 1, 2, 3, and 4, respectively. Further, the survival model can be used, via transfer learning, for classifying benign vs. malignant nodules on the Lung Image Database Consortium (n = 1010), with improved performance (AUC = 0.85) versus training from scratch (AUC = 0.82). LungNet can be used as a noninvasive predictor for prognosis in NSCLC patients and can facilitate interpretation of CT images for lung cancer stratification and prognostication.
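The paper describes LungNet only as a shallow convolutional neural network, so the Keras sketch below is a guess at what such a network might look like: layer sizes and the input patch size are illustrative, not the published architecture, and the frozen-backbone step merely gestures at the transfer-learning idea.

```python
# Hedged sketch of a shallow CNN for CT-patch prognostication, in the spirit
# of the description above. Layer sizes and the input patch size are
# illustrative guesses, not the published LungNet configuration.
from tensorflow import keras
from tensorflow.keras import layers

def build_shallow_cnn(input_shape=(64, 64, 1)):
    return keras.Sequential([
        keras.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.GlobalAveragePooling2D(),
        layers.Dense(32, activation="relu"),
        layers.Dense(1, activation="sigmoid"),   # risk / malignancy output
    ])

model = build_shallow_cnn()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["AUC"])
model.summary()

# Transfer learning in the same spirit: freeze the early layers and refit
# the head on the benign-vs-malignant nodule task (data not shown here).
for layer in model.layers[:-2]:
    layer.trainable = False
```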
