Results 1 - 5 of 5
1.
Tomography; 9(2): 810-828, 2023 Apr 10.
Article in English | MEDLINE | ID: mdl-37104137

ABSTRACT

Co-clinical trials are the concurrent or sequential evaluation of therapeutics in both patients clinically and patient-derived xenografts (PDX) pre-clinically, in a manner designed to match the pharmacokinetics and pharmacodynamics of the agent(s) used. The primary goal is to determine the degree to which PDX cohort responses recapitulate patient cohort responses at the phenotypic and molecular levels, such that pre-clinical and clinical trials can inform one another. A major issue is how to manage, integrate, and analyze the abundance of data generated across both spatial and temporal scales, as well as across species. To address this issue, we are developing MIRACCL (molecular and imaging response analysis of co-clinical trials), a web-based analytical tool. For prototyping, we simulated data for a co-clinical trial in "triple-negative" breast cancer (TNBC) by pairing pre- (T0) and on-treatment (T1) magnetic resonance imaging (MRI) from the I-SPY2 trial, as well as PDX-based T0 and T1 MRI. Baseline (T0) and on-treatment (T1) RNA expression data were also simulated for TNBC and PDX. Image features derived from both datasets were cross-referenced to omic data to evaluate MIRACCL functionality for correlating and displaying MRI-based changes in tumor size, vascularity, and cellularity with changes in mRNA expression as a function of treatment.


Subject(s)
Triple Negative Breast Neoplasms; Humans; Triple Negative Breast Neoplasms/pathology; Magnetic Resonance Imaging; Image Processing, Computer-Assisted
2.
Radiol Artif Intell; 4(3): e210174, 2022 May.
Article in English | MEDLINE | ID: mdl-35652118

ABSTRACT

Purpose: To develop a deep learning-based risk stratification system for thyroid nodules using US cine images. Materials and Methods: In this retrospective study, 192 biopsy-confirmed thyroid nodules (175 benign, 17 malignant) in 167 unique patients (mean age, 56 years ± 16 [SD], 137 women) undergoing cine US between April 2017 and May 2018 with American College of Radiology (ACR) Thyroid Imaging Reporting and Data System (TI-RADS)-structured radiology reports were evaluated. A deep learning-based system that exploits the cine images obtained during three-dimensional volumetric thyroid scans and outputs malignancy risk was developed and compared, using fivefold cross-validation, against a two-dimensional (2D) deep learning-based model (Static-2DCNN), a radiomics-based model using cine images (Cine-Radiomics), and the ACR TI-RADS level, with histopathologic diagnosis as ground truth. The system was used to revise the ACR TI-RADS recommendation, and its diagnostic performance was compared against the original ACR TI-RADS. Results: The system achieved higher average area under the receiver operating characteristic curve (AUC, 0.88) than Static-2DCNN (0.72, P = .03) and tended toward higher average AUC than Cine-Radiomics (0.78, P = .16) and ACR TI-RADS level (0.80, P = .21). The system downgraded recommendations for 92 benign and two malignant nodules and upgraded none. The revised recommendation achieved higher specificity (139 of 175, 79.4%) than the original ACR TI-RADS (47 of 175, 26.9%; P < .001), with no difference in sensitivity (12 of 17, 71% and 14 of 17, 82%, respectively; P = .63). 
Conclusion: The risk stratification system using US cine images had higher diagnostic performance than prior models and improved the specificity of ACR TI-RADS when used to revise its recommendations.
Keywords: Neural Networks, US, Abdomen/GI, Head/Neck, Thyroid, Computer Applications-3D, Oncology, Diagnosis, Supervised Learning, Transfer Learning, Convolutional Neural Network (CNN)
Supplemental material is available for this article. © RSNA, 2022.
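The reported operating points follow directly from the nodule counts in the abstract (17 malignant, 175 benign); a minimal sanity-check sketch:

```python
# Recompute sensitivity and specificity from the raw confusion-matrix
# counts given in the abstract.
def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int) -> tuple[float, float]:
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

revised = sensitivity_specificity(tp=12, fn=5, tn=139, fp=36)    # revised TI-RADS
original = sensitivity_specificity(tp=14, fn=3, tn=47, fp=128)   # original TI-RADS
```

The revised recommendation trades two missed malignancies (14 → 12 true positives) for 92 additional correctly spared benign nodules (47 → 139 true negatives).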

3.
J Digit Imaging; 32(4): 544-553, 2019 Aug.
Article in English | MEDLINE | ID: mdl-31222557

ABSTRACT

Radiological measurements are reported in free-text reports, and it is challenging to extract such measures for treatment-planning tasks such as lesion summarization and cancer response assessment. The purpose of this work is to develop and evaluate a natural language processing (NLP) pipeline that can extract measurements and their core descriptors, such as temporality, anatomical entity, imaging observation, RadLex descriptors, series number, image number, and segment, from a wide variety of radiology reports (MR, CT, and mammogram). We created a hybrid NLP pipeline that integrates rule-based feature extraction modules with a conditional random field (CRF) model to extract measurements from radiology reports and link them with clinically relevant features such as anatomical entities or imaging observations. The pipeline was trained on 1117 CT/MR reports, and performance of the system was evaluated on an independent set of 100 expert-annotated CT/MR reports and also tested on 25 mammography reports. The system detected 813 measurements against the 806 annotated in the CT/MR reports; 784 were true positives, 29 were false positives, and 0 were false negatives. Similarly, 96% of the measurements and their modifiers were extracted correctly from the mammography reports. Our approach could enable the development of computerized applications that utilize summarized lesion measurements from radiology reports of varying modalities and improve practice by tracking the same lesions across multiple radiologic encounters.
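The rule-based stage of such a hybrid pipeline might look like the sketch below; the regexes and temporality cues are simplified assumptions, and the CRF stage that labels surrounding context is omitted:

```python
import re

# Simplified, hypothetical sketch of a rule-based measurement extractor:
# find size measurements (optionally two-dimensional) and a nearby
# temporality cue in one sentence of a radiology report.
MEASUREMENT = re.compile(
    r"(?P<value>\d+(?:\.\d+)?)\s*"
    r"(?:x\s*(?P<value2>\d+(?:\.\d+)?)\s*)?"   # optional second dimension
    r"(?P<unit>mm|cm)\b",
    re.IGNORECASE,
)
TEMPORALITY = re.compile(r"\b(currently?|previous(?:ly)?|prior)\b", re.IGNORECASE)

def extract_measurements(sentence: str) -> list[dict]:
    """Return one dict per measurement: value, unit, and a crude
    sentence-level temporality hint (a CRF would disambiguate this)."""
    cue = TEMPORALITY.search(sentence)
    temporality = cue.group(1).lower() if cue else "current"
    return [
        {"value": float(m.group("value")),
         "unit": m.group("unit").lower(),
         "temporality": temporality}
        for m in MEASUREMENT.finditer(sentence)
    ]
```

In the published pipeline, rules like these generate candidate features that the CRF then labels jointly with anatomical entities and imaging observations.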


Subject(s)
Electronic Health Records; Image Interpretation, Computer-Assisted/methods; Natural Language Processing; Radiology Information Systems; Algorithms; Humans; Magnetic Resonance Imaging/methods; Mammography/methods; Tomography, X-Ray Computed/methods
4.
Tomography; 5(1): 170-183, 2019 Mar.
Article in English | MEDLINE | ID: mdl-30854455

ABSTRACT

Medical imaging is critical for assessing the response of patients to new cancer therapies. Quantitative lesion assessment on images is time-consuming, and adopting new promising quantitative imaging biomarkers of response in clinical trials is challenging. The electronic Physician Annotation Device (ePAD) is a freely available web-based zero-footprint software application for viewing, annotation, and quantitative analysis of radiology images designed to meet the challenges of quantitative evaluation of cancer lesions. For imaging researchers, ePAD calculates a variety of quantitative imaging biomarkers that they can analyze and compare in ePAD to identify potential candidates as surrogate endpoints in clinical trials. For clinicians, ePAD provides clinical decision support tools for evaluating cancer response through reports summarizing changes in tumor burden based on different imaging biomarkers. As a workflow management and study oversight tool, ePAD lets clinical trial project administrators create worklists for users and oversee the progress of annotations created by research groups. To support interoperability of image annotations, ePAD writes all image annotations and results of quantitative imaging analyses in standardized file formats, and it supports migration of annotations from various proprietary formats. ePAD also provides a plugin architecture supporting MATLAB server-side modules in addition to client-side plugins, permitting the community to extend the ePAD platform in various ways for new cancer use cases. We present an overview of ePAD as a platform for medical image annotation and quantitative analysis. We also discuss use cases and collaborations with different groups in the Quantitative Imaging Network and future directions.


Subject(s)
Neoplasms/diagnostic imaging; Radiology Information Systems/organization & administration; Algorithms; Data Curation/methods; Databases, Factual; Decision Support Systems, Clinical/organization & administration; Humans; Image Interpretation, Computer-Assisted/methods; Neoplasms/therapy; Radiology Information Systems/statistics & numerical data; Software Design; Treatment Outcome
5.
J Biomed Inform; 92: 103137, 2019 Apr.
Article in English | MEDLINE | ID: mdl-30807833

ABSTRACT

We propose an efficient natural language processing approach for inferring the BI-RADS final assessment categories by analyzing only the mammogram findings reported by the mammographer in narrative form. The proposed hybrid method integrates semantic term embedding with distributional semantics, producing a context-aware vector representation of unstructured mammography reports. A large corpus of unannotated mammography reports (300,000) was used to learn the context of the key terms with a distributional-semantics approach, and the trained model was applied to generate context-aware vector representations of the reports annotated with BI-RADS categories (22,091). The vectorized reports were used to train a supervised classifier to derive the BI-RADS assessment class. Even though the majority of the proposed embedding pipeline is unsupervised, the classifier recognized substantial semantic information for deriving the BI-RADS categorization, not only on a held-out internal test set but also on an external validation set (1900 reports). Our proposed method outperforms a recently published domain-specific rule-based system and could be relevant for evaluating concordance between radiologists. With minimal task-specific customization, the proposed method is easily transferable to a different domain to support large-scale text mining or derivation of patient phenotypes.
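The vectorize-then-classify idea can be illustrated with a toy sketch. The two-dimensional term embeddings and class centroids below are hypothetical stand-ins for what a distributional-semantics model would learn from the 300,000-report corpus, and the nearest-centroid rule stands in for the paper's supervised classifier:

```python
import math

# Toy, hypothetical illustration: average per-term embeddings into one
# report vector, then assign the BI-RADS class of the nearest centroid.
EMBEDDINGS = {
    "mass": [0.9, 0.1], "spiculated": [0.8, 0.2], "irregular": [0.85, 0.15],
    "benign": [0.1, 0.9], "stable": [0.15, 0.85], "scattered": [0.2, 0.8],
}

def report_vector(report: str) -> list[float]:
    """Average the embeddings of known terms to get one report vector."""
    vecs = [EMBEDDINGS[t] for t in report.lower().split() if t in EMBEDDINGS]
    if not vecs:
        return [0.0, 0.0]
    return [sum(dim) / len(vecs) for dim in zip(*vecs)]

def classify(report: str, centroids: dict[str, list[float]]) -> str:
    """Nearest-centroid rule standing in for the trained classifier."""
    v = report_vector(report)
    return min(centroids, key=lambda label: math.dist(v, centroids[label]))

CENTROIDS = {"BI-RADS 5": [0.85, 0.15], "BI-RADS 2": [0.15, 0.85]}
```

The key property the paper exploits is that the embedding step needs no labels; only the final classifier is trained on the 22,091 BI-RADS-annotated reports.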


Subject(s)
Breast/diagnostic imaging; Data Mining/methods; Deep Learning; Mammography; Natural Language Processing; Female; Humans; Radiographic Image Interpretation, Computer-Assisted; Semantics