Results 1 - 20 of 20
1.
Med Image Anal ; 95: 103206, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38776844

ABSTRACT

The correct interpretation of breast density is important in the assessment of breast cancer risk. AI has been shown to be capable of accurately predicting breast density; however, because imaging characteristics differ across mammography systems, models built using data from one system do not generalize well to others. Although federated learning (FL) has emerged as a way to improve the generalizability of AI without the need to share data, the best way to preserve features from all training data during FL remains an active area of research. To explore FL methodology, the breast density classification FL challenge was hosted in partnership with the American College of Radiology, Harvard Medical School's Mass General Brigham, the University of Colorado, NVIDIA, and the National Institutes of Health National Cancer Institute. Challenge participants submitted Docker containers capable of implementing FL across three simulated medical facilities, each containing a unique large mammography dataset. The breast density FL challenge ran from June 15 to September 5, 2022, attracting seven finalists from around the world. The winning FL submission reached a linear kappa score of 0.653 on the challenge test data and 0.413 on an external testing dataset, performing comparably to a model trained on the same data in a central location.
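The weight-aggregation step at the heart of such FL submissions can be sketched in a few lines. Below is a minimal, dataset-size-weighted averaging sketch (FedAvg-style); the function name, flattened-weight representation, and site sizes are illustrative assumptions, not details from the challenge:

```python
import numpy as np

def federated_average(site_weights, site_sizes):
    """Aggregate per-site model weights, weighting each site by its dataset size."""
    total = sum(site_sizes)
    stacked = np.stack(site_weights)        # shape: (n_sites, n_params)
    coeffs = np.array(site_sizes) / total   # each site's contribution
    return (coeffs[:, None] * stacked).sum(axis=0)

# Three simulated sites holding different amounts of (hypothetical) mammography data
w = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
n = [100, 200, 700]
global_w = federated_average(w, n)
```

In a real round, each site would train locally before its weights are sent for aggregation; only the weights, never the data, leave the site.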


Subjects
Algorithms; Breast Density; Breast Neoplasms; Mammography; Humans; Female; Mammography/methods; Breast Neoplasms/diagnostic imaging; Machine Learning
2.
Int J Comput Assist Radiol Surg ; 19(4): 655-664, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38498132

ABSTRACT

PURPOSE: Pancreatic duct dilation is associated with an increased risk of pancreatic cancer, the most lethal malignancy with the lowest 5-year relative survival rate. Automatic segmentation of the dilated pancreatic duct from contrast-enhanced CT scans would facilitate early diagnosis. However, pancreatic duct segmentation poses challenges due to its small anatomical structure and poor contrast in abdominal CT. In this work, we investigate an anatomical attention strategy to address this issue. METHODS: Our proposed anatomical attention strategy consists of two steps: pancreas localization and pancreatic duct segmentation. A coarse pancreatic mask segmentation is used to guide the fully convolutional networks (FCNs) to concentrate on the anatomy of the pancreas and disregard unnecessary features. We further apply a multi-scale aggregation scheme to leverage information from different scales, and we integrate tubular structure enhancement as an additional input channel of the FCN. RESULTS: We performed extensive experiments on 30 contrast-enhanced abdominal CT volumes. To evaluate pancreatic duct segmentation performance, we employed four measurements: the Dice similarity coefficient (DSC), sensitivity, normalized surface distance, and 95th percentile Hausdorff distance. The average DSC reached 55.7%, surpassing other pancreatic duct segmentation methods that use only single-phase CT scans. CONCLUSIONS: We proposed an anatomical attention-based strategy for dilated pancreatic duct segmentation that significantly outperforms earlier approaches. The attention mechanism helps the networks focus on the pancreas region, while the tubular structure enhancement enables the FCNs to capture the vessel-like structure. The proposed technique might be applied to other tube-like structure segmentation tasks within targeted anatomies.
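The Dice similarity coefficient used for evaluation here (and in several entries below) has a compact definition: twice the overlap divided by the total foreground of both masks. A minimal sketch on binary masks, with toy inputs:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks (1.0 = perfect overlap)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Toy 1-D duct-like masks: the prediction recovers 3 of 4 true foreground voxels
truth = np.array([0, 1, 1, 1, 1, 0])
pred  = np.array([0, 1, 1, 1, 0, 0])
dsc = dice_coefficient(pred, truth)
```

On 3-D volumes the formula is identical; the arrays are simply flattened voxel masks.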


Subjects
Abdomen; Image Processing, Computer-Assisted; Humans; Image Processing, Computer-Assisted/methods; Pancreas; Tomography, X-Ray Computed; Pancreatic Ducts/diagnostic imaging
3.
Abdom Radiol (NY) ; 49(5): 1545-1556, 2024 05.
Article in English | MEDLINE | ID: mdl-38512516

ABSTRACT

OBJECTIVE: Automated methods for prostate segmentation on MRI are typically developed under ideal scanning and anatomical conditions. This study evaluates three different prostate segmentation AI algorithms in a challenging population of patients with prior treatments, variable anatomic characteristics, complex clinical history, or atypical MRI acquisition parameters. MATERIALS AND METHODS: A single-institution retrospective database was queried for the following conditions at prostate MRI: prior prostate-specific oncologic treatment, transurethral resection of the prostate (TURP), abdominoperineal resection (APR), hip prosthesis (HP), diversity of prostate volumes (large ≥ 150 cc, small ≤ 25 cc), whole-gland tumor burden, magnet strength, noted poor quality, and various scanners (outside institutions/vendors). Final inclusion criteria required availability of an axial T2-weighted (T2W) sequence and a corresponding prostate organ segmentation from an expert radiologist. Three previously developed algorithms were evaluated: (1) a deep learning (DL)-based model, (2) a commercially available shape-based model, and (3) a federated DL-based model. The Dice similarity coefficient (DSC) was calculated against the expert segmentation. DSC by model and scan factors was evaluated with the Wilcoxon signed-rank test and a linear mixed effects (LMER) model. RESULTS: 683 scans (651 patients) met inclusion criteria (mean prostate volume 60.1 cc [9.05-329 cc]). Overall DSC scores for models 1, 2, and 3 were 0.916 (0.707-0.971), 0.873 (0-0.997), and 0.894 (0.025-0.961), respectively, with DL-based models demonstrating significantly higher performance (p < 0.01). In sub-group analysis by factors, Model 1 outperformed Model 2 (all p < 0.05) and Model 3 (all p < 0.001). Performance of all models was negatively impacted by prostate volume and poor signal quality (p < 0.01). Shape-based factors influenced the DL models (p < 0.001), while signal factors influenced all models (p < 0.001).
CONCLUSION: Factors affecting anatomical and signal conditions of the prostate gland can adversely impact both DL and non-deep learning-based segmentation models.


Subjects
Algorithms; Artificial Intelligence; Magnetic Resonance Imaging; Prostatic Neoplasms; Humans; Male; Retrospective Studies; Magnetic Resonance Imaging/methods; Prostatic Neoplasms/diagnostic imaging; Prostatic Neoplasms/surgery; Prostatic Neoplasms/pathology; Image Interpretation, Computer-Assisted/methods; Middle Aged; Aged; Prostate/diagnostic imaging; Deep Learning
4.
medRxiv ; 2024 Jan 03.
Article in English | MEDLINE | ID: mdl-37961086

ABSTRACT

Background: Diffuse midline gliomas (DMG) are aggressive pediatric brain tumors that are diagnosed and monitored through MRI. We developed an automatic pipeline to segment subregions of DMG and select radiomic features that predict patient overall survival (OS). Methods: We acquired diagnostic and post-radiation therapy (RT) multisequence MRI (T1, T1ce, T2, T2 FLAIR) and manual segmentations from two centers, comprising 53 (internal cohort) and 16 (external cohort) DMG patients. We pretrained a deep learning model on a public adult brain tumor dataset and fine-tuned it to automatically segment tumor core (TC) and whole tumor (WT) volumes. PyRadiomics and sequential feature selection were used for feature extraction and selection based on the segmented volumes. Two machine learning models were trained on our internal cohort to predict patient 1-year survival from diagnosis: one used only diagnostic tumor features, and the other used both diagnostic and post-RT features. Results: For segmentation, the Dice score (mean [median] ± SD) was 0.91 (0.94) ± 0.12 and 0.74 (0.83) ± 0.32 for TC, and 0.88 (0.91) ± 0.07 and 0.86 (0.89) ± 0.06 for WT, for the internal and external cohorts, respectively. For OS prediction, accuracy was 77% and 81% at the time of diagnosis, and 85% and 78% post-RT, for the internal and external cohorts, respectively. Homogeneous WT intensity on baseline T2 FLAIR and a larger post-RT TC/WT volume ratio indicated shorter OS. Conclusions: Machine learning analysis of MRI radiomics has the potential to accurately and non-invasively predict which pediatric patients with DMG will survive less than one year from the time of diagnosis, providing patient stratification and guiding therapy.
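Sequential feature selection, as used here after radiomic feature extraction, can be sketched as a greedy forward search that repeatedly adds the feature that most improves a scoring function. The selector below is a generic illustration, not the authors' pipeline; the least-squares score and toy data are assumptions:

```python
import numpy as np

def forward_select(X, y, n_features, score_fn):
    """Greedy sequential forward selection: at each step, add the feature
    whose inclusion maximizes score_fn on the current subset."""
    selected = []
    remaining = list(range(X.shape[1]))
    while len(selected) < n_features:
        best, best_score = None, -np.inf
        for j in remaining:
            s = score_fn(X[:, selected + [j]], y)
            if s > best_score:
                best, best_score = j, s
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy score: negative residual error of a least-squares fit on the candidate subset
def fit_score(Xs, y):
    coef, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    return -np.sum((Xs @ coef - y) ** 2)

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))
y = 2.0 * X[:, 3] + 0.1 * rng.normal(size=40)   # only feature 3 carries signal
picked = forward_select(X, y, 1, fit_score)
```

In practice the score would be cross-validated survival-model performance rather than a raw fit error.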

5.
Article in English | MEDLINE | ID: mdl-38083430

ABSTRACT

Children with optic pathway gliomas (OPGs), a low-grade brain tumor associated with neurofibromatosis type 1 (NF1-OPG), are at risk for permanent vision loss. While OPG size has been associated with vision loss, it is unclear how changes in the size, shape, and imaging features of OPGs are associated with the likelihood of vision loss. This paper presents a fully automatic framework for accurate prediction of visual acuity loss using multi-sequence magnetic resonance images (MRIs). Our proposed framework includes a transformer-based segmentation network using transfer learning, statistical analysis of radiomic features, and a machine learning method for predicting vision loss. Our segmentation network was evaluated on multi-sequence MRIs acquired from 75 pediatric subjects with NF1-OPG and obtained an average Dice similarity coefficient of 0.791. The ability to predict vision loss was evaluated on a subset of 25 subjects with ground truth using cross-validation and achieved an average accuracy of 0.8. Multiple MRI features appear to be good indicators of vision loss, potentially permitting early treatment decisions. Clinical relevance: Accurately determining which children with NF1-OPGs are at risk, and hence require preventive treatment before vision loss, remains challenging; toward this end, we present a fully automatic deep learning-based framework for vision outcome prediction that may permit early treatment decisions.


Subjects
Neurofibromatosis 1; Optic Nerve Glioma; Humans; Child; Optic Nerve Glioma/complications; Optic Nerve Glioma/diagnostic imaging; Optic Nerve Glioma/pathology; Neurofibromatosis 1/complications; Neurofibromatosis 1/diagnostic imaging; Neurofibromatosis 1/pathology; Magnetic Resonance Imaging/methods; Vision Disorders; Visual Acuity
6.
Health Informatics J ; 29(4): 14604582231207744, 2023.
Article in English | MEDLINE | ID: mdl-37864543

ABSTRACT

Cross-institution collaborations are constrained by data-sharing challenges. These challenges hamper innovation, particularly in artificial intelligence, where models require diverse data to ensure strong performance. Federated learning (FL) addresses data-sharing challenges: in typical collaborations, data is sent to a central repository where models are trained; with FL, models are sent to participating sites, trained locally, and the model weights are aggregated to create a master model with improved performance. At the 2021 Radiological Society of North America (RSNA) conference, a panel was conducted titled "Accelerating AI: How Federated Learning Can Protect Privacy, Facilitate Collaboration and Improve Outcomes." Two groups shared insights: researchers from the EXAM study (EMR CXR AI Model) and members of the National Cancer Institute's Early Detection Research Network (EDRN) pancreatic cancer working group. EXAM brought together 20 institutions to create a model to predict the oxygen requirements of patients seen in the emergency department with COVID-19 symptoms. The EDRN collaboration is focused on improving outcomes for pancreatic cancer patients through earlier detection. This paper describes major insights from the panel, including direct quotes, covering the impetus for FL, the long-term potential vision of FL, the challenges faced, and the immediate path forward.


Subjects
Artificial Intelligence; Pancreatic Neoplasms; Humans; Privacy; Learning
7.
Radiology ; 306(1): 172-182, 2023 01.
Article in English | MEDLINE | ID: mdl-36098642

ABSTRACT

Background Approximately 40% of pancreatic tumors smaller than 2 cm are missed at abdominal CT. Purpose To develop and validate a deep learning (DL)-based tool able to detect pancreatic cancer at CT. Materials and Methods Retrospectively collected contrast-enhanced CT studies in patients diagnosed with pancreatic cancer between January 2006 and July 2018 were compared with CT studies of individuals with a normal pancreas (control group) obtained between January 2004 and December 2019. An end-to-end tool comprising a segmentation convolutional neural network (CNN) and a classifier ensembling five CNNs was developed and validated in an internal test set and a nationwide real-world validation set. The sensitivities of the computer-aided detection (CAD) tool and radiologist interpretation were compared using the McNemar test. Results A total of 546 patients with pancreatic cancer (mean age, 65 years ± 12 [SD]; 297 men) and 733 control subjects were randomly divided into training, validation, and test sets. In the internal test set, the DL tool achieved 89.9% (98 of 109; 95% CI: 82.7, 94.9) sensitivity and 95.9% (141 of 147; 95% CI: 91.3, 98.5) specificity (area under the receiver operating characteristic curve [AUC], 0.96; 95% CI: 0.94, 0.99), without a significant difference (P = .11) in sensitivity compared with the original radiologist report (96.1% [98 of 102]; 95% CI: 90.3, 98.9). In a test set of 1473 real-world CT studies (669 malignant, 804 control) from institutions throughout Taiwan, the DL tool distinguished malignant from control studies with 89.7% (600 of 669; 95% CI: 87.1, 91.9) sensitivity and 92.8% (746 of 804; 95% CI: 90.8, 94.5) specificity (AUC, 0.95; 95% CI: 0.94, 0.96), with 74.7% (68 of 91; 95% CI: 64.5, 83.3) sensitivity for malignancies smaller than 2 cm. Conclusion The deep learning-based tool enabled accurate detection of pancreatic cancer on CT scans, with reasonable sensitivity for tumors smaller than 2 cm.
© RSNA, 2022 Online supplemental material is available for this article. See also the editorial by Aisen and Rodrigues in this issue.
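The classifier stage, an ensemble of five CNNs, ultimately reduces to fusing per-model probabilities and thresholding, after which sensitivity and specificity follow from the confusion counts. A minimal sketch with hypothetical probabilities; the mean-fusion rule and 0.5 threshold are assumptions, since the abstract does not specify the ensembling details:

```python
import numpy as np

def ensemble_predict(prob_sets, threshold=0.5):
    """Average per-model malignancy probabilities and threshold the mean."""
    mean_prob = np.mean(prob_sets, axis=0)
    return (mean_prob >= threshold).astype(int)

def sensitivity_specificity(pred, label):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp = np.sum((pred == 1) & (label == 1))
    tn = np.sum((pred == 0) & (label == 0))
    fn = np.sum((pred == 0) & (label == 1))
    fp = np.sum((pred == 1) & (label == 0))
    return tp / (tp + fn), tn / (tn + fp)

# Five hypothetical CNN outputs for four studies (first two truly malignant)
probs = np.array([
    [0.9, 0.6, 0.2, 0.4],
    [0.8, 0.7, 0.1, 0.3],
    [0.7, 0.4, 0.3, 0.2],
    [0.9, 0.8, 0.2, 0.6],
    [0.6, 0.5, 0.1, 0.4],
])
labels = np.array([1, 1, 0, 0])
pred = ensemble_predict(probs)
sens, spec = sensitivity_specificity(pred, labels)
```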


Subjects
Deep Learning; Pancreatic Neoplasms; Male; Humans; Aged; Retrospective Studies; Sensitivity and Specificity; Tomography, X-Ray Computed/methods; Pancreas
8.
Int J Comput Assist Radiol Surg ; 17(2): 343-354, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34951681

ABSTRACT

PURPOSE: Pancreatic duct dilation can be considered an early sign of pancreatic ductal adenocarcinoma (PDAC). However, there is little existing research focused on dilated pancreatic duct segmentation as a potential screening tool for people without PDAC. Dilated pancreatic duct segmentation is difficult due to the lack of readily available labeled data and strong voxel imbalance between the pancreatic duct region and other regions. To overcome these challenges, we propose a two-step approach for dilated pancreatic duct segmentation from abdominal computed tomography (CT) volumes using fully convolutional networks (FCNs). METHODS: Our framework segments the pancreatic duct in a cascaded manner. The pancreatic duct occupies a tiny portion of abdominal CT volumes. Therefore, to concentrate on the pancreas regions, we use a public pancreas dataset to train an FCN to generate an ROI covering the pancreas and use a 3D U-Net-like FCN for coarse pancreas segmentation. To further improve the dilated pancreatic duct segmentation, we deploy a skip connection on each corresponding resolution level and an attention mechanism in the bottleneck layer. Moreover, we introduce a combined loss function based on Dice loss and Focal loss. Random data augmentation is adopted throughout the experiments to improve the generalizability of the model. RESULTS: We manually created a dilated pancreatic duct dataset with semi-automated annotation tools. Experimental results showed that our proposed framework is practical for dilated pancreatic duct segmentation. The average Dice score and sensitivity were 49.9% and 51.9%, respectively. These results show the potential of our approach as a clinical screening tool. CONCLUSIONS: We investigate an automated framework for dilated pancreatic duct segmentation. The cascade strategy effectively improved the segmentation performance of the pancreatic duct. 
Our modifications to the FCNs together with random data augmentation and the proposed combined loss function facilitate automated segmentation.
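The combined loss described in the METHODS can be written as a weighted sum of a soft Dice term and a Focal term. The sketch below is a plain-NumPy illustration of that combination; the mixing weight `alpha` and `gamma` value are hypothetical, not the paper's settings:

```python
import numpy as np

def dice_loss(p, t, eps=1e-7):
    """Soft Dice loss on predicted probabilities p and binary targets t."""
    inter = np.sum(p * t)
    return 1.0 - (2.0 * inter + eps) / (np.sum(p) + np.sum(t) + eps)

def focal_loss(p, t, gamma=2.0, eps=1e-7):
    """Focal loss: down-weights easy voxels so the rare duct class dominates."""
    p = np.clip(p, eps, 1.0 - eps)
    pt = np.where(t == 1, p, 1.0 - p)   # probability assigned to the true class
    return np.mean(-((1.0 - pt) ** gamma) * np.log(pt))

def combined_loss(p, t, alpha=0.5):
    """Weighted sum of the Dice and Focal terms."""
    return alpha * dice_loss(p, t) + (1.0 - alpha) * focal_loss(p, t)

t = np.array([0.0, 0.0, 1.0, 1.0])
good = combined_loss(np.array([0.1, 0.1, 0.9, 0.9]), t)  # confident, correct
bad  = combined_loss(np.array([0.9, 0.9, 0.1, 0.1]), t)  # confident, wrong
```

In a training framework the same expression would be written with differentiable tensor ops so gradients flow to the FCN.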


Subjects
Image Processing, Computer-Assisted; Tomography, X-Ray Computed; Abdomen; Humans; Pancreas; Pancreatic Ducts/diagnostic imaging
9.
IEEE Trans Med Imaging ; 40(10): 2534-2547, 2021 10.
Article in English | MEDLINE | ID: mdl-33373298

ABSTRACT

Active learning is a unique abstraction of machine learning techniques in which the model/algorithm can guide users to annotate the data points that would be most beneficial to the model, unlike passive machine learning. The primary advantage is that active learning frameworks select data points that can accelerate the learning process of a model and reduce the amount of data needed to achieve full accuracy compared with a model trained on a randomly acquired dataset. Multiple frameworks combining active learning with deep learning have been proposed, the majority of them dedicated to classification tasks. Herein, we explore active learning for the task of segmentation of medical imaging datasets. We investigate our proposed framework using two datasets: (1) MRI scans of the hippocampus and (2) CT scans of the pancreas and tumors. This work presents a query-by-committee approach for active learning, where a joint optimizer is used for the committee. At the same time, we propose three new strategies for active learning: (1) increasing the frequency of uncertain data to bias the training dataset; (2) using mutual information among the input images as an acquisition regularizer to ensure diversity in the training dataset; and (3) adapting the Dice log-likelihood for Stein variational gradient descent (SVGD). The results indicate an improvement in terms of data reduction, achieving full accuracy while using only 22.69% and 48.85% of the available data for each dataset, respectively.
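A query-by-committee acquisition step reduces to scoring unlabeled samples by how much the committee members disagree and sending the most contested ones for annotation first. A minimal sketch using prediction variance as the disagreement measure (one common choice; the paper's joint-optimizer committee and SVGD machinery are not reproduced here):

```python
import numpy as np

def rank_by_disagreement(committee_probs):
    """Rank unlabeled samples by committee disagreement.

    committee_probs: (n_members, n_samples) predicted foreground probabilities.
    Returns sample indices sorted from most to least disagreed-upon."""
    disagreement = np.var(committee_probs, axis=0)
    return np.argsort(disagreement)[::-1]

# Three committee members scoring four hypothetical candidate scans:
# members agree on samples 0, 1 and 3, but split sharply on sample 2.
probs = np.array([
    [0.9, 0.1, 0.5, 0.6],
    [0.9, 0.2, 0.9, 0.5],
    [0.8, 0.1, 0.1, 0.6],
])
order = rank_by_disagreement(probs)
```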


Subjects
Magnetic Resonance Imaging; Algorithms; Image Processing, Computer-Assisted; Tomography, X-Ray Computed; Uncertainty
10.
Pancreas ; 48(10): 1250-1258, 2019.
Article in English | MEDLINE | ID: mdl-31688587

ABSTRACT

A workshop on research gaps and opportunities for Precision Medicine in Pancreatic Disease was sponsored by the National Institute of Diabetes and Digestive and Kidney Diseases on July 24, 2019, in Pittsburgh. The workshop included an overview lecture on precision medicine in cancer and 4 sessions: (1) general considerations for the application of bioinformatics and artificial intelligence; (2) omics, the combination of risk factors and biomarkers; (3) precision imaging; and (4) gaps, barriers, and needs to move from precision to personalized medicine for pancreatic disease. Current precision medicine approaches and tools were reviewed, and participants identified knowledge gaps and research needs that hinder bringing precision medicine to pancreatic diseases. Most critical were (a) multicenter efforts to collect large-scale patient datasets from multiple data streams in the context of environmental and social factors; (b) new information systems that can collect, annotate, and quantify data to inform disease mechanisms; (c) novel prospective clinical trial designs to test and improve therapies; and (d) a framework for measuring and assessing the value of proposed approaches to the health care system. With these advances, precision medicine can identify patients early in the course of their pancreatic disease and prevent progression to chronic or fatal illness.


Subjects
Biomedical Research; Pancreatic Diseases; Precision Medicine; Biomarkers; Computational Biology; Datasets as Topic; Deep Learning; Humans; Metabolomics; Pancreatic Diseases/diagnosis; Pancreatic Diseases/etiology; Pancreatic Diseases/therapy; Research
11.
Comput Med Imaging Graph ; 77: 101642, 2019 10.
Article in English | MEDLINE | ID: mdl-31525543

ABSTRACT

This paper presents a new approach for precisely estimating the renal vascular dominant regions using a Voronoi diagram. To provide computer-assisted diagnostics for the pre-surgical simulation of partial nephrectomy, we must obtain information on the renal arteries and the renal vascular dominant regions. We propose a fully automatic segmentation method that combines a neural network and a tensor-based graph-cut method to precisely extract the kidney and renal arteries. First, we use a convolutional neural network to localize the kidney regions and extract tiny renal arteries with a tensor-based graph-cut method. Then we generate a Voronoi diagram to estimate the renal vascular dominant regions based on the segmented kidney and renal arteries. The accuracy of kidney segmentation in 27 cases with 8-fold cross-validation reached a Dice score of 95%, and the accuracy of renal artery segmentation in 8 cases achieved a centerline overlap ratio of 80%. Each partition region corresponds to a renal vascular dominant region, and the final dominant-region estimation accuracy achieved a Dice coefficient of 80%. A clinical application showed the potential of our proposed estimation approach in a real clinical surgical environment. Further validation using a large-scale database is left as future work.
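The Voronoi-diagram step amounts to assigning every kidney voxel to its nearest renal artery branch, which partitions the kidney into dominant regions. A discrete nearest-neighbor sketch with brute-force distances on toy coordinates; a real pipeline would use the segmented artery centerlines and far more points:

```python
import numpy as np

def dominant_regions(kidney_voxels, branch_points, branch_labels):
    """Assign each kidney voxel the label of its nearest artery branch point,
    i.e. a discrete Voronoi partition of the kidney by the renal artery branches."""
    # Pairwise distances: (n_voxels, n_branch_points)
    d = np.linalg.norm(kidney_voxels[:, None, :] - branch_points[None, :, :], axis=2)
    return branch_labels[np.argmin(d, axis=1)]

# Toy 2-branch example: voxels along a line, branch points at x=0 and x=10
voxels   = np.array([[1.0, 0, 0], [4.0, 0, 0], [9.0, 0, 0]])
branches = np.array([[0.0, 0, 0], [10.0, 0, 0]])
labels   = np.array([1, 2])
regions  = dominant_regions(voxels, branches, labels)
```

For large volumes a k-d tree (or a distance transform per branch) replaces the quadratic distance matrix.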


Subjects
Arteries/anatomy & histology; Kidney/blood supply; Neural Networks, Computer; Tomography, X-Ray Computed; Deep Learning; Humans; Image Processing, Computer-Assisted; Nephrectomy
12.
Int J Comput Assist Radiol Surg ; 14(12): 2069-2081, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31493112

ABSTRACT

PURPOSE: The purpose of this paper is to present a fully automated abdominal artery segmentation method from a CT volume. Three-dimensional (3D) blood vessel structure information is important for diagnosis and treatment, and information about blood vessels (including arteries) can be used in patient-specific surgical planning and intra-operative navigation. Since blood vessels have large inter-patient variations in branching patterns and positions, a patient-specific blood vessel segmentation method is necessary. Although deep learning-based segmentation methods provide good accuracy for large organs, small structures such as blood vessels are not well segmented. We propose a deep learning-based abdominal artery segmentation method from a CT volume. Because arteries are among the small structures that are difficult to segment, we introduce an original training sample generation method and a three-plane segmentation approach to improve segmentation accuracy. METHODS: Our proposed method segments abdominal arteries from an abdominal CT volume with a fully convolutional network (FCN). To segment small arteries, we employ a 2D patch-based segmentation method and an area imbalance reduced training patch generation (AIRTPG) method. AIRTPG adjusts the imbalance between the number of patches containing artery regions and the number without them. These methods improved the segmentation accuracy of small artery regions. Furthermore, we introduce a three-plane segmentation approach to obtain clear 3D segmentation results from 2D patch-based processes: we perform three segmentation processes using patches generated on the axial, coronal, and sagittal planes, and combine the results into a 3D segmentation result. RESULTS: Evaluation of the proposed method on 20 abdominal CT volumes showed averaged F-measure, precision, and recall rates of 87.1%, 85.8%, and 88.4%, respectively. This result outperformed our previous automated FCN-based segmentation method, and our method offers competitive performance compared with previous blood vessel segmentation methods for 3D volumes. CONCLUSIONS: We developed an abdominal artery segmentation method using an FCN. The 2D patch-based and AIRTPG methods effectively segmented the artery regions, and the three-plane approach generated good 3D segmentation results.
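The three-plane approach combines axial, coronal, and sagittal predictions into one volume. The abstract does not spell out the fusion rule, so the sketch below uses a per-voxel majority vote as one plausible combination (an assumption, not the paper's stated method):

```python
import numpy as np

def three_plane_fusion(axial, coronal, sagittal):
    """Combine binary segmentations from axial, coronal and sagittal models
    by per-voxel majority vote: at least 2 of the 3 planes must agree."""
    votes = axial.astype(int) + coronal.astype(int) + sagittal.astype(int)
    return (votes >= 2).astype(int)

# Toy flattened volumes: voxel 0 is unanimous, voxel 1 has 2/3 agreement,
# voxel 2 has only one plane voting foreground, voxel 3 is unanimous background.
ax = np.array([1, 1, 0, 0])
co = np.array([1, 0, 1, 0])
sa = np.array([1, 1, 0, 0])
fused = three_plane_fusion(ax, co, sa)
```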


Subjects
Abdomen/blood supply; Arteries/diagnostic imaging; Imaging, Three-Dimensional/methods; Neural Networks, Computer; Tomography, X-Ray Computed/methods; Cone-Beam Computed Tomography; Humans
13.
J Med Imaging (Bellingham) ; 6(2): 024007, 2019 Apr.
Article in English | MEDLINE | ID: mdl-31205977

ABSTRACT

Accurate and automated prostate whole gland and central gland segmentations on MR images are essential for aiding any prostate cancer diagnosis system. Our work presents a 2-D orthogonal deep learning method to automatically segment the whole prostate and central gland from T2-weighted axial-only MR images. The proposed method can generate high-density 3-D surfaces from low-resolution (z-axis) MR images. In the past, most methods have focused on axial images alone, e.g., 2-D segmentation of the prostate from each 2-D slice. Those methods tend to over- or under-segment the prostate at the apex and base, which is a major source of error. The proposed method leverages the orthogonal context to effectively reduce the apex and base segmentation ambiguities. It also overcomes the jittering or stair-step surface artifacts that arise when constructing a 3-D surface from 2-D segmentations or from direct 3-D segmentation approaches such as 3-D U-Net. The experimental results demonstrate that the proposed method achieves a Dice similarity coefficient (DSC) of 92.4% ± 3% for the prostate and 90.1% ± 4.6% for the central gland, without trimming any ending contours at the apex and base. The experiments illustrate the feasibility and robustness of the 2-D-based holistically nested networks with short connections for MR prostate and central gland segmentation. The proposed method achieves segmentation results on par with the current literature.

14.
Eur Radiol ; 29(3): 1074-1082, 2019 Mar.
Article in English | MEDLINE | ID: mdl-30116959

ABSTRACT

OBJECTIVE: To develop and evaluate a radiomics nomogram for differentiating the malignant risk of gastrointestinal stromal tumours (GISTs). METHODS: A total of 222 patients (primary cohort: n = 130, our centre; external validation cohort: n = 92, two other centres) with pathologically diagnosed GISTs were enrolled. A Relief algorithm was used to select the feature subset with the best distinguishing characteristics and to establish a radiomics model with a support vector machine (SVM) classifier for malignant risk differentiation. Determinant clinical characteristics and subjective CT features were assessed to separately construct corresponding models. The models showing statistical significance in a multivariable logistic regression analysis were used to develop a nomogram. The diagnostic performance of these models was evaluated using ROC curves, and calibration of the nomogram was evaluated using calibration curves. RESULTS: The radiomics model had an AUC of 0.867 (95% CI 0.803-0.932) in the primary cohort and 0.847 (95% CI 0.765-0.930) in the external cohort. In the entire cohort, the AUCs for the radiomics model, subjective CT findings model, clinical index model, and radiomics nomogram were 0.858 (95% CI 0.807-0.908), 0.774 (95% CI 0.713-0.835), 0.759 (95% CI 0.697-0.821), and 0.867 (95% CI 0.818-0.915), respectively. The nomogram showed good calibration. CONCLUSIONS: This radiomics nomogram predicted the malignant potential of GISTs with excellent accuracy and may be used as an effective tool to guide preoperative clinical decision-making. KEY POINTS: • A CT-based radiomics model can differentiate low- and high-malignant-potential GISTs with satisfactory accuracy compared with subjective CT findings and clinical indexes. • A radiomics nomogram integrating the radiomics signature, subjective CT findings, and clinical indexes can achieve individualised risk prediction with improved diagnostic performance. • This study might provide significant and valuable background information for further studies, such as response evaluation of neoadjuvant imatinib and recurrence risk prediction.
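The AUC values reported above can be computed without plotting a curve, via the Mann-Whitney interpretation of ROC AUC: the probability that a randomly chosen high-risk case is scored above a randomly chosen low-risk case. A minimal sketch with hypothetical risk scores (not data from this study):

```python
import numpy as np

def roc_auc(scores, labels):
    """ROC AUC via the Mann-Whitney U statistic: the fraction of
    (positive, negative) pairs where the positive case scores higher,
    counting ties as half."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical model outputs: one high-risk case (0.2) is ranked
# below one low-risk case (0.3), costing 1 of the 6 pairs.
scores = np.array([0.9, 0.8, 0.2, 0.3, 0.1])
labels = np.array([1, 1, 1, 0, 0])
auc = roc_auc(scores, labels)
```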


Subjects
Algorithms; Gastrointestinal Stromal Tumors/diagnosis; Imaging, Three-Dimensional/methods; Neoplasm Grading/methods; Nomograms; Tomography, X-Ray Computed/methods; Diagnosis, Differential; Female; Gastrointestinal Stromal Tumors/classification; Gastrointestinal Stromal Tumors/surgery; Humans; Male; Middle Aged; Preoperative Period; ROC Curve; Support Vector Machine
15.
IEEE Trans Med Imaging ; 35(5): 1170-81, 2016 05.
Article in English | MEDLINE | ID: mdl-26441412

ABSTRACT

Automated computer-aided detection (CADe) has been an important tool in clinical practice and research. State-of-the-art methods often show high sensitivities at the cost of high false-positive (FP) rates per patient. We design a two-tiered coarse-to-fine cascade framework that first operates a candidate generation system at sensitivities of ~100%, but at high FP levels. By leveraging existing CADe systems, coordinates of regions or volumes of interest (ROI or VOI) are generated and serve as input for the second tier, which is our focus in this study. In this second stage, we generate 2D (two-dimensional) or 2.5D views via sampling through scale transformations, random translations, and rotations. These random views are used to train deep convolutional neural network (ConvNet) classifiers. In testing, the ConvNets assign class (e.g., lesion, pathology) probabilities for a new set of random views that are then averaged to compute a final per-candidate classification probability. This second tier behaves as a highly selective process that rejects difficult false positives while preserving high sensitivities. The methods are evaluated on three data sets: 59 patients for sclerotic metastasis detection, 176 patients for lymph node detection, and 1,186 patients for colonic polyp detection. Experimental results show the ability of ConvNets to generalize well to different medical imaging CADe applications and scale elegantly to various data sets. Our proposed methods improve performance markedly in all cases: sensitivities improved from 57% to 70%, 43% to 77%, and 58% to 75% at 3 FPs per patient for sclerotic metastases, lymph nodes, and colonic polyps, respectively.


Subjects
Neural Networks, Computer; Radiographic Image Interpretation, Computer-Assisted/methods; Adolescent; Adult; Aged; Child; Colonic Polyps/diagnostic imaging; Databases, Factual; Female; Humans; Lymph Nodes/diagnostic imaging; Machine Learning; Male; Middle Aged; Spinal Neoplasms/diagnostic imaging; Tomography, X-Ray Computed; Young Adult
16.
Med Image Comput Comput Assist Interv ; 17(Pt 1): 544-52, 2014.
Article in English | MEDLINE | ID: mdl-25333161

ABSTRACT

Enlarged lymph nodes (LNs) can provide important information for cancer diagnosis, staging, and measuring treatment response, making automated detection a highly sought goal. In this paper, we propose a new representation that decomposes the LN detection problem into a set of 2D object detection subtasks on sampled CT slices, largely alleviating the curse of dimensionality. Our 2D detection can be effectively formulated as linear classification on a single image feature type, the Histogram of Oriented Gradients (HOG), covering a moderate field of view of 45 by 45 voxels. We exploit both max-pooling and sparse linear fusion schemes to aggregate these 2D detection scores for the final 3D LN detection. In this manner, detection is more tractable and does not need to perform perfectly at the instance level (the 2D detections act as weak hypotheses), since our aggregation process robustly harnesses collective information for LN detection. Two datasets (90 patients with 389 mediastinal LNs and 86 patients with 595 abdominal LNs) are used for validation. Cross-validation demonstrates 78.0% sensitivity at 6 false positives/volume (FP/vol.) (86.1% at 10 FP/vol.) and 73.1% sensitivity at 6 FP/vol. (87.2% at 10 FP/vol.) for the mediastinal and abdominal datasets, respectively. Our results compare favorably to previous state-of-the-art methods.
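The max-pooling aggregation of per-slice 2D scores into a single 3D candidate score is a one-liner; a plain mean stands in below for the learned sparse linear fusion, which is not reproduced here. Scores and the candidate setup are hypothetical:

```python
import numpy as np

def aggregate_slice_scores(slice_scores, mode="max"):
    """Fuse per-slice 2D detection scores into one 3D candidate score.
    'max' keeps the strongest slice response; 'mean' is a simple
    (unweighted) linear fusion standing in for the learned sparse fusion."""
    s = np.asarray(slice_scores, dtype=float)
    return s.max() if mode == "max" else s.mean()

# Hypothetical scores from 2D HOG-based detectors on slices sampled
# through one LN candidate
scores = [0.2, 0.7, 0.95, 0.6]
max_score = aggregate_slice_scores(scores, "max")
mean_score = aggregate_slice_scores(scores, "mean")
```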


Subjects
Artificial Intelligence , Imaging, Three-Dimensional/methods , Lymph Nodes/diagnostic imaging , Lymphatic Metastasis/diagnostic imaging , Pattern Recognition, Automated/methods , Radiographic Image Interpretation, Computer-Assisted/methods , Tomography, X-Ray Computed/methods , Algorithms , Computer Simulation , Humans , Linear Models , Radiographic Image Enhancement/methods , Reproducibility of Results , Sensitivity and Specificity
17.
Radiology ; 273(2): 417-24, 2014 Nov.
Article in English | MEDLINE | ID: mdl-24991991

ABSTRACT

PURPOSE: To evaluate the accuracy of a method of automatic coregistration of the endoluminal surfaces at computed tomographic (CT) colonography performed on separate occasions to facilitate identification of polyps in patients undergoing polyp surveillance. MATERIALS AND METHODS: Institutional review board and HIPAA approval were obtained. A registration algorithm that was designed to coregister the coordinates of endoluminal colonic surfaces on images from prone and supine CT colonographic acquisitions was used to match polyps in sequential studies in patients undergoing polyp surveillance. Initial and follow-up CT colonographic examinations in 26 patients (35 polyps) were selected and the algorithm was tested by means of two methods, the longitudinal method (polyp coordinates from the initial prone and supine acquisitions were used to identify the expected polyp location automatically at follow-up CT colonography) and the consistency method (polyp coordinates from the initial supine acquisition were used to identify polyp location on images from the initial prone acquisition, then on those for follow-up prone and follow-up supine acquisitions). Two observers measured the Euclidean distance between true and expected polyp locations, and mean per-patient registration accuracy was calculated. Segments with and without collapse were compared by using the Kruskal-Wallis test, and the relationship between registration error and temporal separation was investigated by using the Pearson correlation. RESULTS: Coregistration was achieved for all 35 polyps by using both longitudinal and consistency methods. Mean ± standard deviation Euclidean registration error for the longitudinal method was 17.4 mm ± 12.1 and for the consistency method, 26.9 mm ± 20.8. There was no significant difference between these results and the registration error when prone and supine acquisitions in the same study were compared (16.9 mm ± 17.6; P = .451).
CONCLUSION: Automatic endoluminal coregistration by using an algorithm at initial CT colonography allowed prediction of endoluminal polyp location at subsequent CT colonography, thereby facilitating detection of known polyps in patients undergoing CT colonographic surveillance.
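The accuracy measure reported above, the Euclidean distance between true and expected 3D polyp locations, reduces to a few lines; the coordinates below are hypothetical, purely for illustration:

```python
import numpy as np

def registration_errors(predicted, actual):
    """Per-polyp Euclidean distance (mm) between expected and true
    3D endoluminal coordinates, plus the mean and sample std. dev."""
    p = np.asarray(predicted, dtype=float)
    a = np.asarray(actual, dtype=float)
    d = np.linalg.norm(p - a, axis=1)
    return d, d.mean(), d.std(ddof=1)

# hypothetical polyp coordinates in mm (two polyps)
pred = [[10.0, 20.0, 30.0], [5.0, 5.0, 5.0]]
true = [[13.0, 24.0, 30.0], [5.0, 5.0, 17.0]]
d, mean_err, sd = registration_errors(pred, true)  # d -> [5.0, 12.0]
```

Averaging per polyp first, then per patient, gives the mean per-patient registration accuracy the study reports.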


Subjects
Colonic Polyps/diagnostic imaging , Colonography, Computed Tomographic/methods , Aged , Aged, 80 and over , Algorithms , Contrast Media , Diatrizoate , Follow-Up Studies , Humans , Middle Aged , Population Surveillance , Radiographic Image Interpretation, Computer-Assisted
18.
Med Image Anal ; 17(8): 946-58, 2013 Dec.
Article in English | MEDLINE | ID: mdl-23845949

ABSTRACT

Computed Tomographic (CT) colonography is a technique used for the detection of bowel cancer or potentially precancerous polyps. The procedure is performed routinely with the patient both prone and supine to differentiate fixed colonic pathology from mobile faecal residue. Matching corresponding locations is difficult and time consuming for radiologists due to colonic deformations that occur during patient repositioning. We propose a novel method to establish correspondence between the two acquisitions automatically. The problem is first simplified by detecting haustral folds using a graph cut method on a curvature-based metric computed over a surface mesh generated from segmentation of the colonic lumen. A virtual camera is used to create a set of images that provide a metric for matching pairs of folds between the prone and supine acquisitions. Image patches are generated at the fold positions using depth map renderings of the endoluminal surface and optimised by performing a virtual camera registration over a restricted set of degrees of freedom. The intensity difference between image pairs, along with additional neighbourhood information to enforce geometric constraints over a 2D parameterisation of the 3D space, are used as unary and pair-wise costs respectively, and included in a Markov Random Field (MRF) model to estimate the maximum a posteriori fold labelling assignment. The method achieved fold matching accuracy of 96.0% and 96.1% in patient cases with and without local colonic collapse. Moreover, it improved upon an existing surface-based registration algorithm by providing an initialisation. The set of landmark correspondences is used to non-rigidly transform a 2D source image derived from a conformal mapping process on the 3D endoluminal surface mesh.
This achieves full surface correspondence between prone and supine views and can be further refined with an intensity based registration showing a statistically significant improvement (p<0.001), and decreasing mean error from 11.9 mm to 6.0 mm measured at 1743 reference points from 17 CTC datasets.
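The MRF formulation described above (image-patch unary costs plus geometric pairwise costs) can be written out as a brute-force energy evaluation. This is only a sketch with made-up costs; the paper's MAP estimate would come from a proper inference method rather than energy evaluation alone:

```python
import numpy as np

def mrf_energy(labels, unary, pairwise, edges):
    """Energy of one fold-labelling assignment in a toy MRF.

    labels:   chosen label (candidate supine match) per prone fold
    unary:    (n_folds, n_labels) image-patch dissimilarity costs
    pairwise: (n_labels, n_labels) geometric-consistency costs
    edges:    list of neighbouring fold pairs (i, j)
    The MAP labelling minimises this energy over all assignments.
    """
    e = sum(unary[i, labels[i]] for i in range(len(labels)))
    e += sum(pairwise[labels[i], labels[j]] for i, j in edges)
    return e

# two folds, two candidate labels, one neighbourhood edge (toy numbers)
unary = np.array([[0.1, 0.9],
                  [0.8, 0.2]])
pairwise = np.array([[0.0, 1.0],
                     [1.0, 0.0]])
edges = [(0, 1)]
e_consistent = mrf_energy([0, 0], unary, pairwise, edges)    # 0.9
e_inconsistent = mrf_energy([0, 1], unary, pairwise, edges)  # 1.3
```

Here the geometrically consistent assignment wins despite a worse unary term for fold 1, which is exactly the role of the pairwise constraints.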


Subjects
Algorithms , Colonography, Computed Tomographic/methods , Imaging, Three-Dimensional/methods , Pattern Recognition, Automated/methods , Radiographic Image Interpretation, Computer-Assisted/methods , Subtraction Technique , Humans , Radiographic Image Enhancement/methods , Reproducibility of Results , Sensitivity and Specificity
19.
Radiology ; 268(3): 752-60, 2013 Sep.
Article in English | MEDLINE | ID: mdl-23687175

ABSTRACT

PURPOSE: To perform external validation of a computer-assisted registration algorithm for prone and supine computed tomographic (CT) colonography and to compare the results with those of an existing centerline method. MATERIALS AND METHODS: All contributing centers had institutional review board approval; participants provided informed consent. A validation sample of CT colonographic examinations of 51 patients with 68 polyps (6-55 mm) was selected from a publicly available, HIPAA compliant, anonymized archive. No patients were excluded because of poor preparation or inadequate distension. Corresponding prone and supine polyp coordinates were recorded, and endoluminal surfaces were registered automatically by using a computer algorithm. Two observers independently scored three-dimensional endoluminal polyp registration success. Results were compared with those obtained by using the normalized distance along the colonic centerline (NDACC) method. Pairwise Wilcoxon signed rank tests were used to compare gross registration error and McNemar tests were used to compare polyp conspicuity. RESULTS: Registration was possible in all 51 patients, and 136 paired polyp coordinates were generated (68 polyps) to test the algorithm. Overall mean three-dimensional polyp registration error (mean ± standard deviation, 19.9 mm ± 20.4) was significantly less than that for the NDACC method (mean, 27.4 mm ± 15.1; P = .001). Accuracy was unaffected by colonic segment (P = .76) or luminal collapse (P = .066). During endoluminal review by two observers (272 matching tasks, 68 polyps, prone to supine and supine to prone coordinates), 223 (82%) polyp matches were visible (120° field of view) compared with just 129 (47%) when the NDACC method was used (P < .001). By using multiplanar visualization, 48 (70%) polyps were visible after scrolling ± 15 mm in any multiplanar axis compared with 16 (24%) for NDACC (P < .001). 
CONCLUSION: Computer-assisted registration is more accurate than the NDACC method for mapping the endoluminal surface and matching the location of polyps in corresponding prone and supine CT colonographic acquisitions.
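The McNemar tests used above to compare polyp conspicuity operate only on the discordant pairs; an exact two-sided version is short enough to sketch (the counts in the example are hypothetical, not the study's):

```python
from math import comb

def mcnemar_exact_p(b, c):
    """Exact two-sided McNemar test on discordant pairs.

    b: polyps visible with one method but not the other,
    c: the reverse. Under the null, discordant outcomes are
    equally likely, so min(b, c) ~ Binomial(b + c, 0.5).
    """
    n = b + c
    k = min(b, c)
    p_one_sided = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * p_one_sided)

# hypothetical: 1 polyp favours method A only, 5 favour method B only
p = mcnemar_exact_p(1, 5)  # -> 0.21875, not significant at 0.05
```

With the study's much larger discordant counts (e.g. 223 vs. 129 visible matches), the asymptotic chi-square form of the test would typically be used instead.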


Subjects
Algorithms , Colonic Polyps/diagnostic imaging , Colonic Polyps/epidemiology , Colonography, Computed Tomographic/statistics & numerical data , Patient Positioning/statistics & numerical data , Radiographic Image Enhancement/methods , Subtraction Technique/statistics & numerical data , Anatomic Landmarks/diagnostic imaging , Humans , Prevalence , Prone Position , Supine Position , United States/epidemiology
20.
Med Phys ; 38(6): 3077-89, 2011 Jun.
Article in English | MEDLINE | ID: mdl-21815381

ABSTRACT

PURPOSE: Computed tomographic (CT) colonography is a relatively new technique for detecting bowel cancer or potentially precancerous polyps. CT scanning is combined with three-dimensional (3D) image reconstruction to produce a virtual endoluminal representation similar to optical colonoscopy. Because retained fluid and stool can mimic pathology, CT data are acquired with the bowel cleansed and insufflated with gas, and with the patient in both prone and supine positions. Radiologists then visually match endoluminal locations between the two acquisitions in order to determine whether apparent pathology is real. This process is hindered by the fact that the colon, essentially a long tube, can undergo considerable deformation between acquisitions. The authors present a novel approach to automatically establish spatial correspondence between prone and supine endoluminal colonic surfaces after surface parameterization, even in the case of local colon collapse. METHODS: The complexity of the registration task was reduced from a 3D to a 2D problem by mapping the surfaces extracted from prone and supine CT colonography onto a cylindrical parameterization. A nonrigid cylindrical registration was then performed to align the full colonic surfaces. The curvature information from the original 3D surfaces was used to determine correspondence. The method can also be applied to cases with regions of local colonic collapse by ignoring the collapsed regions during the registration. RESULTS: Using a development set, suitable parameters were found to constrain the cylindrical registration method. The same registration parameters were then applied to a different set of 13 validation cases, consisting of 8 fully distended cases and 5 cases exhibiting multiple colonic collapses. All polyps present were well aligned, with a mean (+/- std. dev.) registration error of 5.7 (+/- 3.4) mm.
An additional set of 1175 reference points on haustral folds spread over the full endoluminal colon surfaces resulted in an error of 7.7 (+/- 7.4) mm. Here, 82% of folds were aligned correctly after registration, with a further 15% misregistered by just one fold. CONCLUSIONS: The proposed method reduces the 3D registration task to a cylindrical registration representing the endoluminal surface of the colon. Our algorithm uses surface curvature information as a similarity measure to drive registration and to compensate for the large colorectal deformations that occur between prone and supine data acquisitions. The method has the potential both to enhance polyp detection and to decrease the radiologist's interpretation time.
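The core idea of the cylindrical parameterisation, mapping endoluminal surface points to an (angle, length) plane so registration becomes 2D, can be illustrated for an idealised straight tube; the real method of course handles the curved, segmented, possibly collapsed colonic lumen:

```python
import numpy as np

def cylindrical_coords(points):
    """Map 3D surface points of an idealised straight tube (axis = z)
    onto cylinder coordinates (theta, z).

    theta: angle around the tube axis; z: position along the axis.
    Nonrigid 2D registration in this (theta, z) plane then replaces
    3D surface registration. Straight-axis assumption is ours, for
    illustration only.
    """
    p = np.asarray(points, dtype=float)
    theta = np.arctan2(p[:, 1], p[:, 0])  # angle around the axis
    z = p[:, 2]                           # arc length along the axis
    return np.column_stack([theta, z])

# two hypothetical surface points on a unit-radius tube
uv = cylindrical_coords([[1.0, 0.0, 5.0], [0.0, 1.0, 7.0]])
```

Curvature values sampled at these (theta, z) positions then provide the similarity measure driving the 2D nonrigid alignment.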


Subjects
Colon/diagnostic imaging , Colonography, Computed Tomographic/methods , Colon/pathology , Colonic Polyps/diagnostic imaging , Colonic Polyps/pathology , Humans , Prone Position , Reproducibility of Results , Supine Position