Results 1 - 6 of 6
1.
Nat Med ; 29(12): 3033-3043, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37985692

ABSTRACT

Pancreatic ductal adenocarcinoma (PDAC), the deadliest solid malignancy, is typically detected late and at an inoperable stage. Early or incidental detection is associated with prolonged survival, but screening asymptomatic individuals for PDAC using a single test remains unfeasible due to the low prevalence and potential harms of false positives. Non-contrast computed tomography (CT), routinely performed for clinical indications, offers the potential for large-scale screening; however, identification of PDAC on non-contrast CT has long been considered impossible. Here, we develop a deep learning approach, pancreatic cancer detection with artificial intelligence (PANDA), that can detect and classify pancreatic lesions with high accuracy on non-contrast CT. PANDA is trained on a dataset of 3,208 patients from a single center. PANDA achieves an area under the receiver operating characteristic curve (AUC) of 0.986-0.996 for lesion detection in a multicenter validation involving 6,239 patients across 10 centers, outperforms mean radiologist performance by 34.1% in sensitivity and 6.3% in specificity for PDAC identification, and achieves a sensitivity of 92.9% and specificity of 99.9% for lesion detection in a real-world, multi-scenario validation of 20,530 consecutive patients. Notably, PANDA applied to non-contrast CT is non-inferior to radiology reports based on contrast-enhanced CT in differentiating common pancreatic lesion subtypes. PANDA could potentially serve as a new tool for large-scale pancreatic cancer screening.
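The sensitivity and specificity figures reported above reduce to simple ratios over a confusion matrix. A minimal sketch in Python, using hypothetical confusion counts (not the study's actual data) chosen only to illustrate how numbers like 92.9%/99.9% arise:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# hypothetical counts for a large screening population (illustrative only)
sens, spec = sensitivity_specificity(tp=130, fn=10, tn=20370, fp=20)
print(round(sens, 3), round(spec, 3))
```

At very low disease prevalence, even the tiny false-positive rate implied by 99.9% specificity dominates the absolute number of false alarms, which is why the abstract stresses specificity for screening.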


Subject(s)
Pancreatic Ductal Carcinoma; Deep Learning; Pancreatic Neoplasms; Humans; Artificial Intelligence; Pancreatic Neoplasms/diagnostic imaging; Pancreatic Neoplasms/pathology; Tomography, X-Ray Computed; Pancreas/diagnostic imaging; Pancreas/pathology; Pancreatic Ductal Carcinoma/diagnostic imaging; Pancreatic Ductal Carcinoma/pathology; Retrospective Studies
2.
Ann Surg ; 278(1): e68-e79, 2023 Jul 01.
Article in English | MEDLINE | ID: mdl-35781511

ABSTRACT

OBJECTIVE: To develop an imaging-derived biomarker for prediction of overall survival (OS) in pancreatic cancer by analyzing preoperative multiphase contrast-enhanced computed tomography (CECT) using deep learning. BACKGROUND: Exploiting prognostic biomarkers to guide neoadjuvant and adjuvant treatment decisions may improve outcomes in patients with resectable pancreatic cancer. METHODS: This multicenter, retrospective study included 1516 patients with resected pancreatic ductal adenocarcinoma (PDAC) from 5 centers in China. The discovery cohort (n=763), which included preoperative multiphase CECT scans and OS data from 2 centers, was used to construct a fully automated imaging-derived prognostic biomarker, DeepCT-PDAC, by training scalable deep segmentation and prognostic models (via self-learning) to comprehensively model tumor-anatomy spatial relations and their appearance dynamics in multiphase CECT for OS prediction. The marker was independently tested on internal (n=574) and external (n=179, 3 centers) validation cohorts to evaluate its performance, robustness, and clinical usefulness. RESULTS: Preoperatively, DeepCT-PDAC was the strongest predictor of OS in both internal and external validation cohorts [hazard ratio (HR) for high versus low risk 2.03, 95% confidence interval (CI): 1.50-2.75; HR: 2.47, CI: 1.35-4.53] in a multivariable analysis. Postoperatively, DeepCT-PDAC remained significant in both cohorts (HR: 2.49, CI: 1.89-3.28; HR: 2.15, CI: 1.14-4.05) after adjustment for potential confounders. Among margin-negative patients, adjuvant chemoradiotherapy was associated with improved OS in the DeepCT-PDAC low-risk subgroup (HR: 0.35, CI: 0.19-0.64) but not in the high-risk subgroup. CONCLUSIONS: The deep learning-based, CT imaging-derived biomarker enabled objective and unbiased OS prediction for patients with resectable PDAC. The marker is applicable across hospitals, imaging protocols, and treatments, and has the potential to tailor neoadjuvant and adjuvant treatment at the individual level.


Subject(s)
Pancreatic Ductal Carcinoma; Deep Learning; Pancreatic Neoplasms; Humans; Retrospective Studies; Pancreatic Neoplasms/pathology; Pancreatic Ductal Carcinoma/pathology; Prognosis
3.
Clin Cancer Res ; 27(14): 3948-3959, 2021 07 15.
Article in English | MEDLINE | ID: mdl-33947697

ABSTRACT

PURPOSE: Accurate prognostic stratification of patients with oropharyngeal squamous cell carcinoma (OPSCC) is crucial. We developed an objective, robust, deep learning-based, fully automated tool, the DeepPET-OPSCC biomarker, for predicting overall survival (OS) in OPSCC using [18F]fluorodeoxyglucose (FDG)-PET imaging. EXPERIMENTAL DESIGN: The DeepPET-OPSCC prediction model was built and tested internally on a discovery cohort (n = 268) by integrating five convolutional neural network models for volumetric segmentation and ten models for OS prognostication. Two external test cohorts were enrolled, the first drawn from The Cancer Imaging Archive (TCIA) database (n = 353) and the second a clinical deployment cohort (n = 31), to assess DeepPET-OPSCC performance and goodness of fit. RESULTS: After adjustment for potential confounders, DeepPET-OPSCC was an independent predictor of OS in both the discovery and TCIA test cohorts [HR = 2.07; 95% confidence interval (CI), 1.31-3.28 and HR = 2.39; 95% CI, 1.38-4.16; both P = 0.002]. The tool also showed good predictive performance, with a c-index of 0.707 (95% CI, 0.658-0.757) in the discovery cohort, 0.689 (95% CI, 0.621-0.757) in the TCIA test cohort, and 0.787 (95% CI, 0.675-0.899) in the clinical deployment test cohort; calculation took an average of 2 minutes per exam. The integrated nomogram combining DeepPET-OPSCC with clinical risk factors significantly outperformed the clinical model [AUC at 5 years: 0.801 (95% CI, 0.727-0.874) vs. 0.749 (95% CI, 0.649-0.842); P = 0.031] in the TCIA test cohort. CONCLUSIONS: DeepPET-OPSCC achieved accurate OS prediction in patients with OPSCC and enabled an objective, unbiased, and rapid assessment for OPSCC prognostication.
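The c-index reported above (Harrell's concordance index) measures how often the model ranks survival correctly: over all comparable patient pairs, the patient who died earlier should have the higher predicted risk. A minimal sketch on invented toy data, not the study's:

```python
from itertools import combinations

def harrell_c_index(times, events, risks):
    """Harrell's concordance index: fraction of comparable pairs in which
    the patient with the shorter observed survival time also has the
    higher predicted risk score. events[i] is 1 for death, 0 for censoring."""
    concordant = 0.0
    comparable = 0
    for i, j in combinations(range(len(times)), 2):
        if times[j] < times[i]:        # order the pair so i has the shorter time
            i, j = j, i
        if not events[i]:
            continue                   # not comparable: earlier time is censored
        comparable += 1
        if risks[i] > risks[j]:
            concordant += 1.0
        elif risks[i] == risks[j]:
            concordant += 0.5          # ties in risk count half
    return concordant / comparable

# toy example: risk scores perfectly ordered against survival times
print(harrell_c_index([2, 4, 6, 8], [1, 1, 0, 1], [0.9, 0.7, 0.5, 0.2]))
```

A c-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking, which puts the reported 0.69-0.79 values in context.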


Subject(s)
Deep Learning; Fluorodeoxyglucose F18; Head and Neck Neoplasms/diagnostic imaging; Head and Neck Neoplasms/mortality; Oropharyngeal Neoplasms/diagnostic imaging; Oropharyngeal Neoplasms/mortality; Positron-Emission Tomography; Radiopharmaceuticals; Squamous Cell Carcinoma of Head and Neck/diagnostic imaging; Squamous Cell Carcinoma of Head and Neck/mortality; Cohort Studies; Female; Humans; Male; Middle Aged; Predictive Value of Tests; Prognosis; Survival Rate
4.
IEEE J Biomed Health Inform ; 21(6): 1633-1643, 2017 11.
Article in English | MEDLINE | ID: mdl-28541229

ABSTRACT

Automation-assisted cervical screening via Pap smear or liquid-based cytology (LBC) is a highly effective cell imaging-based cancer detection tool, in which cells are partitioned into "abnormal" and "normal" categories. However, the success of most traditional classification methods relies on accurate cell segmentation, which, despite sixty years of research, remains a challenge in the presence of cell clusters and pathologies. Moreover, previous classification methods are built solely upon hand-crafted features, such as morphology and texture. This paper addresses these limitations by proposing a method to directly classify cervical cells, without prior segmentation, based on deep features learned by convolutional neural networks (ConvNets). First, the ConvNet is pretrained on a natural image dataset. It is then fine-tuned on a cervical cell dataset consisting of adaptively resampled image patches coarsely centered on the nuclei. In the testing phase, aggregation is used to average the prediction scores over a similar set of image patches. The proposed method is evaluated on both Pap smear and LBC datasets. Results show that our method outperforms previous algorithms in classification accuracy (98.3%), area under the curve (0.99), and especially specificity (98.3%) on the Herlev benchmark Pap smear dataset under five-fold cross-validation. Similarly superior performance is achieved on the HEMLBC (H&E-stained manual LBC) dataset. Our method is promising for the development of automation-assisted reading systems in primary cervical screening.
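The test-time aggregation step described above amounts to averaging the ConvNet's class probabilities over several patches sampled around the same nucleus, then thresholding the averaged score. A minimal sketch, with invented patch scores and a hypothetical 0.5 decision threshold:

```python
def aggregate_patch_scores(patch_scores):
    """Average per-patch class probabilities into one cell-level score."""
    n = len(patch_scores)
    n_classes = len(patch_scores[0])
    return [sum(p[c] for p in patch_scores) / n for c in range(n_classes)]

# three hypothetical patches sampled around one nucleus,
# each scored by the ConvNet as [P(normal), P(abnormal)]
cell_score = aggregate_patch_scores([[0.2, 0.8], [0.1, 0.9], [0.3, 0.7]])
label = "abnormal" if cell_score[1] > 0.5 else "normal"
print(label)
```

Averaging over perturbed patches smooths out the sensitivity of any single crop to exactly where the patch lands relative to the nucleus.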


Subject(s)
Cervix Uteri/cytology; Image Interpretation, Computer-Assisted/methods; Neural Networks, Computer; Papanicolaou Test/methods; Vaginal Smears/methods; Female; Humans; Machine Learning
5.
AJOB Empir Bioeth ; 7(1): 1-7, 2016 Jan 01.
Article in English | MEDLINE | ID: mdl-27004235

ABSTRACT

BACKGROUND: Evidence shows both a tendency for research participants to conflate research and clinical care and limited public understanding of research. Conflation of research and care by participants is often referred to as the therapeutic misconception. Nevertheless, few studies have explicitly asked participants, especially minors, to explain what they think research is and how they think it differs from regular medical care. METHODS: As part of a longer semi-structured interview evaluating assent and parental permission for research, adolescent research participants, including adolescents with illnesses and healthy volunteers (N=177), and their parents (N=177) were asked to describe medical research in their own words and to say whether and how they thought being in medical research differed from seeing a regular doctor. Qualitative responses were coded and themes identified through an iterative process. RESULTS: When asked to describe medical research, the majority described it in terms of its goals of advancing science, developing treatments and medicines, and helping others; fewer described research as aiming to help particular research participants, and fewer still described it in terms of its methods. The majority of teen and parent respondents said being in research is different from seeing a regular doctor, citing different goals, different or additional procedures, differences in the engagement of the doctors/researchers, and differences in logistics. CONCLUSIONS: Adolescents participating in clinical research and their parents generally describe medical research in terms of its goals of advancing science and finding new medicines and treatments, sometimes in combination with helping the enrolled individuals. The majority perceived a difference between research and regular medical care and described these differences in various ways. Further exploration is warranted into how such perceived differences matter to participants and how this understanding could be used to enhance informed consent and the overall research experience.

6.
IEEE Trans Med Imaging ; 35(5): 1285-98, 2016 05.
Article in English | MEDLINE | ID: mdl-26886976

ABSTRACT

Remarkable progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets and deep convolutional neural networks (CNNs). CNNs enable learning data-driven, highly representative, hierarchical image features from sufficient training data. However, obtaining datasets as comprehensively annotated as ImageNet remains a challenge in the medical imaging domain. Three major techniques currently apply CNNs successfully to medical image classification: training the CNN from scratch, using off-the-shelf pre-trained CNN features, and conducting unsupervised CNN pre-training with supervised fine-tuning. Another effective method is transfer learning, i.e., fine-tuning CNN models pre-trained on natural-image datasets for medical imaging tasks. In this paper, we exploit three important but previously understudied factors in applying deep convolutional neural networks to computer-aided detection problems. We first explore and evaluate different CNN architectures; the studied models range from 5 thousand to 160 million parameters and vary in depth. We then evaluate the influence of dataset scale and spatial image context on performance. Finally, we examine when and why transfer learning from pre-trained ImageNet models (via fine-tuning) can be useful. We study two specific computer-aided detection (CADe) problems, namely thoraco-abdominal lymph node (LN) detection and interstitial lung disease (ILD) classification. We achieve state-of-the-art performance on mediastinal LN detection and report the first five-fold cross-validation classification results for predicting ILD categories from axial CT slices. Our extensive empirical evaluation and CNN model analysis yield insights that can be extended to the design of high-performance CAD systems for other medical imaging tasks.
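The transfer-learning recipe the abstract examines (initialize from a model trained on a large source dataset, then continue training on the small target dataset, typically at a lower learning rate) can be illustrated with a deliberately tiny sketch. The one-parameter linear model and both datasets below are invented purely for illustration and stand in for the paper's CNNs and image datasets:

```python
import random

def sgd(w, data, lr, steps, rng):
    """Plain SGD on squared error for a one-parameter linear model y = w * x."""
    for _ in range(steps):
        x, y = rng.choice(data)
        grad = 2.0 * (w * x - y) * x
        w -= lr * grad
    return w

rng = random.Random(0)

# "pretraining": a larger source-domain dataset where y = 3x
source = [(x, 3.0 * x) for x in range(1, 11)]
w_pretrained = sgd(0.0, source, lr=0.004, steps=500, rng=rng)

# "fine-tuning": a small target-domain dataset (y = 3.5x); start from the
# pretrained weight with a smaller learning rate rather than from scratch
target = [(x, 3.5 * x) for x in range(1, 4)]
w_finetuned = sgd(w_pretrained, target, lr=0.002, steps=300, rng=rng)
print(round(w_pretrained, 2), round(w_finetuned, 2))
```

Starting near the source-domain solution means the small target dataset only has to nudge the parameters, which is why fine-tuning can succeed where training from scratch on the target data alone would overfit.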


Subject(s)
Diagnosis, Computer-Assisted/methods; Neural Networks, Computer; Databases, Factual; Humans; Image Interpretation, Computer-Assisted; Lung Diseases, Interstitial/diagnostic imaging; Lymph Nodes/diagnostic imaging; Reproducibility of Results