Results 1 - 6 of 6
1.
Article in English | MEDLINE | ID: mdl-38687670

ABSTRACT

Automated colorectal cancer (CRC) segmentation in medical imaging is the key to achieving automation of CRC detection, staging, and treatment response monitoring. Compared with magnetic resonance imaging (MRI) and computed tomography colonography (CTC), conventional computed tomography (CT) has enormous potential because of its broad implementation, its suitability for the hollow viscera (colon), and its convenience, as it requires no bowel preparation. However, segmenting CRC in conventional CT is more challenging because of the difficulties posed by the unprepared bowel, such as distinguishing the colorectum from other structures with a similar appearance and distinguishing the CRC from the contents of the colorectum. To tackle these challenges, we introduce DeepCRC-SL, the first automated segmentation algorithm for CRC and the colorectum in conventional contrast-enhanced CT scans. We propose a topology-aware deep learning-based approach that builds a novel 1-D colorectal coordinate system and encodes each voxel of the colorectum with a relative position along that coordinate system. We then introduce an auxiliary regression task to predict the colorectal coordinate value of each voxel, aiming to integrate global topology into the segmentation network and thus improve the colorectum's continuity. Self-attention layers are utilized to capture global context for the coordinate regression task and to enhance the ability to differentiate CRC and colorectum tissues. Moreover, a coordinate-driven self-learning (SL) strategy is introduced to leverage a large amount of unlabeled data to improve segmentation performance. We validate the proposed approach on a dataset of 227 labeled and 585 unlabeled CRC cases by fivefold cross-validation. Experimental results demonstrate that our method outperforms several recent related segmentation methods, achieving Dice similarity coefficients (DSC) of 0.669 for CRC and 0.892 for the colorectum, matching the performance (0.639 and 0.890, respectively) of a medical resident with two years of specialized CRC imaging fellowship.
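The abstract does not spell out how the 1-D colorectal coordinate is constructed; as a minimal sketch of one plausible encoding, each point along a precomputed colorectal centerline can be assigned its normalized cumulative arc length (the centerline extraction itself, and the mapping from colorectum voxels to centerline points, are assumed here):

```python
import numpy as np

def colorectal_coordinates(centerline_points):
    """Assign each centerline point a normalized 1-D coordinate in [0, 1]
    by cumulative arc length from one end of the colorectum to the other."""
    pts = np.asarray(centerline_points, dtype=float)
    step_lengths = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    arc = np.concatenate([[0.0], np.cumsum(step_lengths)])  # cumulative length
    return arc / arc[-1]                                    # normalize to [0, 1]

# Toy centerline: 5 points spaced 1 unit apart along x
coords = colorectal_coordinates([[0, 0, 0], [1, 0, 0], [2, 0, 0],
                                 [3, 0, 0], [4, 0, 0]])
```

A network's auxiliary regression head would then be trained to predict these per-voxel coordinate values alongside the segmentation output.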

2.
Nat Med ; 29(12): 3033-3043, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37985692

ABSTRACT

Pancreatic ductal adenocarcinoma (PDAC), the deadliest solid malignancy, is typically detected late and at an inoperable stage. Early or incidental detection is associated with prolonged survival, but screening asymptomatic individuals for PDAC using a single test remains unfeasible due to the low prevalence and potential harms of false positives. Non-contrast computed tomography (CT), routinely performed for clinical indications, offers the potential for large-scale screening; however, identification of PDAC using non-contrast CT has long been considered impossible. Here, we develop a deep learning approach, pancreatic cancer detection with artificial intelligence (PANDA), that can detect and classify pancreatic lesions with high accuracy via non-contrast CT. PANDA is trained on a dataset of 3,208 patients from a single center. PANDA achieves an area under the receiver operating characteristic curve (AUC) of 0.986-0.996 for lesion detection in a multicenter validation involving 6,239 patients across 10 centers, outperforms mean radiologist performance by 34.1% in sensitivity and 6.3% in specificity for PDAC identification, and achieves a sensitivity of 92.9% and specificity of 99.9% for lesion detection in a real-world multi-scenario validation of 20,530 consecutive patients. Notably, PANDA with non-contrast CT shows non-inferiority to radiology reports (using contrast-enhanced CT) in the differentiation of common pancreatic lesion subtypes. PANDA could potentially serve as a new tool for large-scale pancreatic cancer screening.


Subject(s)
Carcinoma, Pancreatic Ductal , Deep Learning , Pancreatic Neoplasms , Humans , Artificial Intelligence , Pancreatic Neoplasms/diagnostic imaging , Pancreatic Neoplasms/pathology , Tomography, X-Ray Computed , Pancreas/diagnostic imaging , Pancreas/pathology , Carcinoma, Pancreatic Ductal/diagnostic imaging , Carcinoma, Pancreatic Ductal/pathology , Retrospective Studies
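The abstract's point that low prevalence makes single-test screening of asymptomatic individuals unfeasible follows directly from Bayes' rule: even at the reported 92.9% sensitivity and 99.9% specificity, the positive predictive value collapses at population-level prevalence. A short sketch (the ~1-in-10,000 prevalence figure is an illustrative assumption, not a number from the paper):

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value via Bayes' rule: P(disease | positive test)."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# Reported operating point: sensitivity 92.9%, specificity 99.9%.
# Assumed population prevalence of ~1 in 10,000 (illustrative only).
screen_ppv = ppv(0.929, 0.999, 1e-4)  # roughly 0.085
```

Under these assumptions fewer than one in ten positives would be a true case, which illustrates why a standalone screening test is hard to justify and why applying the model opportunistically to CT scans already acquired for other indications is attractive.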
3.
Ann Surg ; 278(1): e68-e79, 2023 Jul 01.
Article in English | MEDLINE | ID: mdl-35781511

ABSTRACT

OBJECTIVE: To develop an imaging-derived biomarker for prediction of overall survival (OS) in pancreatic cancer by analyzing preoperative multiphase contrast-enhanced computed tomography (CECT) using deep learning. BACKGROUND: Exploiting prognostic biomarkers to guide neoadjuvant and adjuvant treatment decisions may improve outcomes in patients with resectable pancreatic cancer. METHODS: This multicenter, retrospective study included 1516 patients with resected pancreatic ductal adenocarcinoma (PDAC) from 5 centers in China. The discovery cohort (n=763), which included preoperative multiphase CECT scans and OS data from 2 centers, was used to construct a fully automated imaging-derived prognostic biomarker, DeepCT-PDAC, by training scalable deep segmentation and prognostic models (via self-learning) to comprehensively model tumor-anatomy spatial relations and their appearance dynamics in multiphase CECT for OS prediction. The biomarker was independently tested in internal (n=574) and external (n=179, 3 centers) validation cohorts to evaluate its performance, robustness, and clinical usefulness. RESULTS: Preoperatively, DeepCT-PDAC was the strongest predictor of OS in both internal and external validation cohorts [hazard ratio (HR) for high versus low risk 2.03, 95% confidence interval (CI): 1.50-2.75; HR: 2.47, CI: 1.35-4.53] in multivariable analysis. Postoperatively, DeepCT-PDAC remained significant in both cohorts (HR: 2.49, CI: 1.89-3.28; HR: 2.15, CI: 1.14-4.05) after adjustment for potential confounders. For margin-negative patients, adjuvant chemoradiotherapy was associated with improved OS in the DeepCT-PDAC low-risk subgroup (HR: 0.35, CI: 0.19-0.64) but did not affect OS in the high-risk subgroup. CONCLUSIONS: This deep learning-based CT imaging-derived biomarker enabled objective and unbiased OS prediction for patients with resectable PDAC. The biomarker is applicable across hospitals, imaging protocols, and treatments, and has the potential to tailor neoadjuvant and adjuvant treatments at the individual level.


Subject(s)
Carcinoma, Pancreatic Ductal , Deep Learning , Pancreatic Neoplasms , Humans , Retrospective Studies , Pancreatic Neoplasms/pathology , Carcinoma, Pancreatic Ductal/pathology , Prognosis
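The hazard ratios above come from multivariable Cox models, which are beyond the scope of a short sketch; as a simpler illustration of what an HR of roughly 2 means, a crude rate ratio can be computed under a constant-hazard assumption (toy numbers, not study data):

```python
def crude_hazard_ratio(events_a, persontime_a, events_b, persontime_b):
    """Crude hazard ratio (group A vs. group B) from event counts and
    total follow-up time, assuming constant hazards in each group."""
    rate_a = events_a / persontime_a  # events per person-year, group A
    rate_b = events_b / persontime_b  # events per person-year, group B
    return rate_a / rate_b

# Hypothetical: 40 deaths over 100 person-years in a high-risk group
# vs. 20 deaths over 100 person-years in a low-risk group.
hr = crude_hazard_ratio(40, 100.0, 20, 100.0)  # -> 2.0
```

An HR of 2 for high versus low risk, as reported for DeepCT-PDAC, corresponds to the high-risk group dying at roughly twice the instantaneous rate; the study's multivariable analysis additionally adjusts this estimate for confounders.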
4.
Article in English | MEDLINE | ID: mdl-36624800

ABSTRACT

Federated learning is an emerging research paradigm enabling collaborative training of machine learning models among different organizations while keeping data private at each institution. Despite recent progress, there remain fundamental challenges such as the lack of convergence and the potential for catastrophic forgetting across real-world heterogeneous devices. In this paper, we demonstrate that self-attention-based architectures (e.g., Transformers) are more robust to distribution shifts and hence improve federated learning over heterogeneous data. Concretely, we conduct the first rigorous empirical investigation of different neural architectures across a range of federated algorithms, real-world benchmarks, and heterogeneous data splits. Our experiments show that simply replacing convolutional networks with Transformers can greatly reduce catastrophic forgetting of previous devices, accelerate convergence, and reach a better global model, especially when dealing with heterogeneous data. We release our code and pretrained models to encourage future exploration in robust architectures as an alternative to current research efforts on the optimization front.
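The federated algorithms benchmarked in the paper build on weighted model averaging across clients; a minimal sketch of one FedAvg-style aggregation round is shown below (FedAvg is a standard baseline in this literature, not the paper's released code):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """One FedAvg aggregation round: average each parameter array across
    clients, weighted by each client's local dataset size."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(n_params)
    ]

# Two clients, one parameter tensor each; client 0 holds 3x the data,
# so its parameters get 3/4 of the weight in the global model.
w0 = [np.array([0.0, 0.0])]
w1 = [np.array([4.0, 8.0])]
global_params = fedavg([w0, w1], [3, 1])  # -> [array([1., 2.])]
```

The architecture comparison in the paper keeps the aggregation rule fixed and swaps the model being averaged (convolutional network vs. Transformer), isolating the effect of architecture on convergence and forgetting.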

5.
Med Image Anal ; 65: 101766, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32623276

ABSTRACT

Although deep learning-based approaches have achieved great success in medical image segmentation, they usually require large amounts of well-annotated data, which can be extremely expensive to obtain in medical image analysis. Unlabeled data, on the other hand, is much easier to acquire. Semi-supervised learning and unsupervised domain adaptation both take advantage of unlabeled data, and they are closely related to each other. In this paper, we propose uncertainty-aware multi-view co-training (UMCT), a unified framework that addresses these two tasks for volumetric medical image segmentation. Our framework efficiently utilizes unlabeled data for better performance. We first rotate and permute the 3D volumes into multiple views and train a 3D deep network on each view. We then apply co-training by enforcing multi-view consistency on unlabeled data, where an uncertainty estimate for each view is utilized to achieve accurate labeling. Experiments on the NIH pancreas segmentation dataset and a multi-organ segmentation dataset show state-of-the-art performance of the proposed framework on semi-supervised medical image segmentation. Under unsupervised domain adaptation settings, we validate the effectiveness of this work by adapting our multi-organ segmentation model to two pathological organs from the Medical Segmentation Decathlon datasets. Additionally, we show that our UMCT-DA model can effectively handle the challenging situation where labeled source data is inaccessible, demonstrating strong potential for real-world applications.


Asunto(s)
Aprendizaje Automático Supervisado , Humanos , Incertidumbre
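UMCT's co-training step fuses the other views' predictions into pseudo-labels weighted by uncertainty; as a simplified stand-in for the paper's uncertainty estimation, the sketch below weights each auxiliary view by inverse predictive entropy (the actual method's uncertainty estimate may be computed differently):

```python
import numpy as np

def uncertainty_weighted_pseudolabel(probs_other_views, eps=1e-8):
    """Fuse soft predictions from the other views into a pseudo-label,
    weighting each view by an inverse-entropy confidence score."""
    fused = np.zeros_like(probs_other_views[0])
    total_weight = 0.0
    for p in probs_other_views:
        entropy = -np.sum(p * np.log(p + eps), axis=-1).mean()
        weight = 1.0 / (entropy + eps)  # confident (low-entropy) views count more
        fused += weight * p
        total_weight += weight
    return fused / total_weight

# Two auxiliary views' class probabilities for a single voxel (toy example)
view_a = np.array([[0.9, 0.1]])   # confident view
view_b = np.array([[0.6, 0.4]])   # less confident view
pseudo = uncertainty_weighted_pseudolabel([view_a, view_b])
```

The confident view dominates, so the fused pseudo-label is pulled above the unweighted mean of 0.75 for class 0; the view being trained is then supervised against this fused target on unlabeled data.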