Results 1 - 7 of 7
1.
Diagn Interv Imaging; 105(1): 33-39, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37598013

ABSTRACT

PURPOSE: The purpose of this study was to develop a radiomics signature using computed tomography (CT) data for the preoperative prediction of grade of nonfunctional pancreatic neuroendocrine tumors (NF-PNETs). MATERIALS AND METHODS: A retrospective study was performed on patients undergoing resection for NF-PNETs between 2010 and 2019. A total of 2436 radiomic features were extracted from the arterial and venous phases of pancreas-protocol CT examinations. Radiomic features associated with the final pathologic grade observed in the surgical specimens were subjected to joint mutual information maximization for hierarchical feature selection and the development of the radiomic signature. The Youden index was used to identify the optimal cutoff for determining tumor grade. A random forest prediction model was trained and validated internally. The performance of this tool in predicting tumor grade was compared to that of EUS-FNA sampling, which was used as the standard of reference. RESULTS: A total of 270 patients were included, and a fusion radiomic signature based on 10 selected features was developed using the development cohort (n = 201). There were 149 men and 121 women with a mean age of 59.4 ± 12.3 (standard deviation) years (range: 23.3-85.0 years). Upon internal validation in a new set of 69 patients, strong discrimination was observed, with an area under the curve (AUC) of 0.80 (95% confidence interval [CI]: 0.71-0.90) and corresponding sensitivity and specificity of 87.5% (95% CI: 79.7-95.3) and 73.3% (95% CI: 62.9-83.8), respectively. Of the study population, 143 patients (52.9%) underwent EUS-FNA. Biopsies were non-diagnostic in 26 patients (18.2%) and could not be graded due to insufficient sample in 42 patients (29.4%). In the cohort of 75 patients (52.4%) in whom biopsies were graded, the radiomic signature demonstrated an AUC not significantly different from that of EUS-FNA (0.69 vs. 0.67; P = 0.723), but greater sensitivity (i.e., ability to accurately identify G2/3 lesions) was observed (80.8% vs. 42.3%; P < 0.001). CONCLUSION: Non-invasive assessment of tumor grade in patients with PNETs using the proposed radiomic signature demonstrated high accuracy. Prospective validation and optimization could overcome the commonly experienced diagnostic uncertainty in the assessment of tumor grade in patients with PNETs and could facilitate clinical decision-making.
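The Youden-index cutoff selection described above can be sketched as follows. This is a minimal illustration with hypothetical scores and labels (1 = G2/3, 0 = G1), not the study's data or code:

```python
import numpy as np

def youden_cutoff(y_true, y_score):
    """Return the score threshold maximizing Youden's J = sensitivity + specificity - 1."""
    pos = y_true == 1
    neg = ~pos
    best_j, best_t = -1.0, None
    for t in np.unique(y_score):
        sensitivity = np.mean(y_score[pos] >= t)  # true-positive rate at this cutoff
        specificity = np.mean(y_score[neg] < t)   # true-negative rate at this cutoff
        j = sensitivity + specificity - 1.0
        if j > best_j:
            best_j, best_t = j, t
    return best_t, best_j

# Hypothetical model probabilities for 8 tumors (1 = G2/3, 0 = G1)
y_true = np.array([0, 0, 0, 1, 1, 1, 0, 1])
y_score = np.array([0.20, 0.40, 0.35, 0.80, 0.70, 0.90, 0.30, 0.60])
best_t, best_j = youden_cutoff(y_true, y_score)  # → cutoff 0.6, J = 1.0
```

In practice the cutoff would be chosen on the development cohort's random-forest probabilities and then applied unchanged to the validation set.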


Subjects
Primitive Neuroectodermal Tumors, Neuroendocrine Tumors, Pancreatic Neoplasms, Male, Humans, Female, Middle Aged, Aged, Retrospective Studies, Neuroendocrine Tumors/diagnostic imaging, Neoplasm Grading, Radiomics, Pancreatic Neoplasms/diagnostic imaging, Pancreatic Neoplasms/pathology, X-Ray Computed Tomography
2.
Abdom Radiol (NY); 49(2): 501-511, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38102442

ABSTRACT

PURPOSE: Delay in diagnosis can contribute to poor outcomes in pancreatic ductal adenocarcinoma (PDAC), and new tools for early detection are required. Recent application of artificial intelligence to cancer imaging has demonstrated great potential in detecting subtle early lesions. The aim of this study was to evaluate the global and local accuracy of deep neural network (DNN) segmentation of the normal pancreas and of the abnormal pancreas with a pancreatic mass. METHODS: Our previously developed and reported residual deep supervision network for segmentation of PDAC was applied to segment the pancreas on CT images of potential renal donors (normal pancreas) and patients with suspected PDAC (abnormal pancreas). Accuracy of DNN pancreas segmentation was assessed using the Dice similarity coefficient (DSC), average symmetric surface distance (ASSD), and 95th percentile Hausdorff distance (HD95), as compared to manual segmentation. Furthermore, two radiologists semi-quantitatively assessed local accuracy and estimated the volume of correctly segmented pancreas. RESULTS: Forty-two normal and 49 abnormal CTs were assessed. Average DSC was 87.4 ± 3.1% and 85.5 ± 3.2%, ASSD 0.97 ± 0.30 and 1.34 ± 0.65, and HD95 4.28 ± 2.36 and 6.31 ± 6.31 for normal and abnormal pancreas, respectively. Semi-quantitatively, ≥95% of pancreas volume was correctly segmented in 95.2% and 53.1% of normal and abnormal pancreas by both radiologists, and in 97.6% and 75.5% by at least one radiologist. The most common segmentation errors were made on the pancreatic and duodenal borders in both groups, and were related to pancreatic tumors, including duct dilatation, atrophy, tumor infiltration, and collateral vessels. CONCLUSION: Pancreas DNN segmentation is accurate in the majority of cases; however, minor manual editing may be necessary, particularly in the abnormal pancreas.
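The headline metric above, the Dice similarity coefficient, compares a predicted mask with a manual one. A minimal sketch with toy 2D masks standing in for 3D CT segmentations (not the study's implementation):

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * intersection / denom if denom else 1.0

pred = np.zeros((4, 4), dtype=bool)
pred[1:3, 1:3] = True             # predicted segmentation: 4 voxels
gt = np.zeros((4, 4), dtype=bool)
gt[1:3, 1:4] = True               # manual segmentation: 6 voxels
dsc = dice_coefficient(pred, gt)  # 2*4 / (4+6) = 0.8
```

DSC ranges from 0 (no overlap) to 1 (identical masks); the ~0.87 average reported for the normal pancreas corresponds to substantial but imperfect voxel-level overlap.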


Subjects
Pancreatic Ductal Carcinoma, Pancreatic Neoplasms, Humans, Artificial Intelligence, Computer-Assisted Image Processing/methods, X-Ray Computed Tomography/methods, Neural Networks (Computer), Pancreas/diagnostic imaging, Pancreatic Neoplasms/diagnostic imaging
4.
Nat Commun; 13(1): 6137, 2022 Oct 17.
Article in English | MEDLINE | ID: mdl-36253346

ABSTRACT

Accurate organ-at-risk (OAR) segmentation is critical to reduce radiotherapy complications. Consensus guidelines recommend delineating over 40 OARs in the head-and-neck (H&N). However, prohibitive labor costs cause most institutions to delineate a substantially smaller subset of OARs, neglecting the dose distributions of other OARs. Here, we present an automated and highly effective stratified OAR segmentation (SOARS) system using deep learning that precisely delineates a comprehensive set of 42 H&N OARs. We train SOARS using 176 patients from an internal institution and independently evaluate it on 1327 external patients across six different institutions. It consistently outperforms other state-of-the-art methods by at least 3-5% in Dice score in each institutional evaluation (up to a 36% relative reduction in distance error). Crucially, multi-user studies demonstrate that 98% of SOARS predictions need only minor or no revisions to achieve clinical acceptance (reducing workloads by 90%). Moreover, segmentation and dosimetric deviations are within, or smaller than, the inter-user variation.


Subjects
Head and Neck Neoplasms, Organs at Risk, Head and Neck Neoplasms/radiotherapy, Humans, Computer-Assisted Image Processing/methods, Neck, Radiometry
5.
Med Image Anal; 65: 101766, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32623276

ABSTRACT

Although deep learning-based approaches have achieved great success in medical image segmentation, they usually require large amounts of well-annotated data, which can be extremely expensive to obtain in the field of medical image analysis. Unlabeled data, on the other hand, is much easier to acquire. Semi-supervised learning and unsupervised domain adaptation both take advantage of unlabeled data, and they are closely related to each other. In this paper, we propose uncertainty-aware multi-view co-training (UMCT), a unified framework that addresses these two tasks for volumetric medical image segmentation. Our framework is capable of efficiently utilizing unlabeled data for better performance. We first rotate and permute the 3D volumes into multiple views and train a 3D deep network on each view. We then apply co-training by enforcing multi-view consistency on unlabeled data, where an uncertainty estimate for each view is used to achieve accurate labeling. Experiments on the NIH pancreas segmentation dataset and a multi-organ segmentation dataset show state-of-the-art performance of the proposed framework for semi-supervised medical image segmentation. Under unsupervised domain adaptation settings, we validate the effectiveness of this work by adapting our multi-organ segmentation model to two pathological organs from the Medical Segmentation Decathlon datasets. Additionally, we show that our UMCT-DA model can effectively handle the challenging situation where labeled source data is inaccessible, demonstrating strong potential for real-world applications.
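The "rotate and permute into multiple views" step can be illustrated with axis permutations of a volume. This is a minimal sketch in which simple transposes stand in for the paper's view transforms; the full method additionally trains a separate 3D network per view and weights each view's pseudo-labels by its uncertainty:

```python
import numpy as np

def multi_view(volume):
    """Permute a 3D volume so each view treats a different axis as depth."""
    return [
        volume,                           # original orientation
        np.transpose(volume, (1, 0, 2)),  # second view
        np.transpose(volume, (2, 0, 1)),  # third view
    ]

vol = np.zeros((8, 16, 32))
shapes = [v.shape for v in multi_view(vol)]  # [(8, 16, 32), (16, 8, 32), (32, 8, 16)]
```

Each permuted copy presents the same voxels in a different order, so networks trained on different views make partially independent errors, which is what makes enforcing consistency across views informative on unlabeled data.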


Subjects
Supervised Machine Learning, Humans, Uncertainty
7.
IEEE Trans Pattern Anal Mach Intell; 37(12): 2361-73, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26539843

ABSTRACT

View-based 3D shape retrieval is a popular branch of 3D shape analysis owing to the highly discriminative property of 2D views. However, many previous works do not scale up to large 3D shape databases. We propose a two-layer coding (TLC) framework to conduct shape matching much more efficiently. The first layer of coding is applied to pairs of views represented as depth images. The spatial relationship of each view pair is captured by a so-called eigen-angle, the planar angle between the two views measured at the center of the 3D shape. Prior to the second layer of coding, the view pairs are divided into subsets according to their eigen-angles. Consequently, view pairs that differ significantly in their eigen-angles are encoded with different codewords, which means that the spatial arrangement of views is preserved in the second layer of coding. The final feature vector of a 3D shape is the concatenation of all the encoded features from the different subsets, and is used directly for efficient indexing. TLC is not limited to encoding local features from 2D views; it can also be applied to 3D features. Exhaustive experimental results confirm that TLC achieves state-of-the-art performance in both retrieval accuracy and efficiency.
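The eigen-angle can be sketched as the angle between two view directions measured at the shape center. A minimal illustration assuming each view is represented by a direction vector toward the centroid (the paper's own computation may differ in detail):

```python
import numpy as np

def eigen_angle(dir_a, dir_b):
    """Planar angle (radians) between two view directions at the shape center."""
    a = dir_a / np.linalg.norm(dir_a)
    b = dir_b / np.linalg.norm(dir_b)
    # clip guards against floating-point values slightly outside [-1, 1]
    return float(np.arccos(np.clip(np.dot(a, b), -1.0, 1.0)))

# Two orthogonal camera directions → eigen-angle of pi/2
angle = eigen_angle(np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]))
```

Binning view pairs by this angle before the second coding layer is what lets TLC keep spatial arrangement information while still using a bag-of-codewords representation.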
