Results 1 - 16 of 16
1.
Radiology; 302(2): 309-316, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34812674

ABSTRACT

Background Separate noncontrast CT to quantify the coronary artery calcium (CAC) score often precedes coronary CT angiography (CTA). Quantifying CAC scores directly from a single CTA scan would eliminate the additional radiation of the noncontrast scan but remains challenging. Purpose To quantify CAC scores automatically from a single CTA scan. Materials and Methods In this retrospective study, a deep learning method to quantify CAC scores automatically from a single CTA scan was developed on training and validation sets of 292 and 73 patients, respectively, collected from March 2019 to July 2020. Virtual noncontrast scans obtained with a spectral CT scanner were used to develop the algorithm, alleviating the tedious manual annotation of calcium regions. The proposed method was validated on an independent test set of 240 CTA scans collected from three different CT scanners from August 2020 to November 2020, using the Pearson correlation coefficient, the coefficient of determination (r²), and the Bland-Altman plot against the semiautomatic Agatston score at noncontrast CT. Cardiovascular risk categorization performance was evaluated using weighted κ based on the Agatston score (CAC score risk categories: 0-10, 11-100, 101-400, and >400). Results Two hundred forty patients (mean age, 60 years ± 11 [standard deviation]; 146 men) were evaluated. The positive correlation between the automatic deep learning CTA and semiautomatic noncontrast CT CAC scores was excellent (Pearson correlation = 0.96; r² = 0.92). The risk categorization agreement based on deep learning CTA and noncontrast CT CAC scores was excellent (weighted κ = 0.94 [95% CI: 0.91, 0.97]), with 223 of 240 scans (93%) categorized correctly; all miscategorized patients fell into directly neighboring risk groups. The proposed method's differences from the noncontrast CT CAC score were not statistically significant with regard to scanner (P = .15), sex (P = .051), and section thickness (P = .67). Conclusion A deep learning automatic calcium scoring method accurately quantified coronary artery calcium from CT angiography images and categorized risk. © RSNA, 2021 See also the editorial by Goldfarb and Cao et al in this issue.
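
As a hedged illustration of the risk-categorization step described above, the sketch below bins hypothetical Agatston scores into the four study categories (0-10, 11-100, 101-400, >400) and computes a weighted κ with scikit-learn. The scores, the linear weighting, and the integer category encoding are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

def cac_risk_category(agatston_score: float) -> int:
    """Map an Agatston score to the four risk categories used in the study:
    0-10, 11-100, 101-400, and >400 (encoded as 0..3)."""
    bins = [10, 100, 400]  # upper bounds of the first three categories
    return int(np.digitize(agatston_score, bins))

# Hypothetical scores: semiautomatic noncontrast-CT reference vs. CTA-based output.
reference_scores = [0.0, 54.2, 230.0, 812.5]
predicted_scores = [2.1, 60.8, 395.0, 640.0]

ref_cats = [cac_risk_category(s) for s in reference_scores]
pred_cats = [cac_risk_category(s) for s in predicted_scores]

# Weighted kappa penalizes miscategorizations by how far apart the categories are.
kappa = cohen_kappa_score(ref_cats, pred_cats, weights="linear")
print(f"weighted kappa = {kappa:.2f}")
```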


Subjects
Computed Tomography Angiography; Coronary Angiography; Coronary Artery Disease/diagnostic imaging; Deep Learning; Vascular Calcification/diagnostic imaging; Female; Humans; Male; Middle Aged; Retrospective Studies
2.
Eur Radiol; 32(7): 4801-4812, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35166895

ABSTRACT

OBJECTIVES: To demonstrate the effectiveness of automatic segmentation of diffuse large B-cell lymphoma (DLBCL) in 3D FDG-PET scans using a deep learning approach and to validate its prognostic value in an external validation cohort. METHODS: Two PET datasets were retrospectively analysed: 297 patients from a local centre for training and 117 patients from an external centre for validation. A 3D U-Net architecture was trained on patches randomly sampled within the PET images. Segmentation performance was evaluated by six metrics: the Dice similarity coefficient (DSC), Jaccard similarity coefficient (JSC), sensitivity (Se), positive predictive value (PPV), 95th-percentile Hausdorff distance (HD95), and average symmetric surface distance (ASSD). Finally, the prognostic value of the predicted total metabolic tumour volume (pTMTV) was validated in real clinical applications. RESULTS: The mean DSC, JSC, Se, PPV, HD95, and ASSD (with standard deviation) for the validation cohort were 0.78 ± 0.25, 0.69 ± 0.26, 0.81 ± 0.27, 0.82 ± 0.25, 24.58 ± 35.18, and 4.46 ± 8.92, respectively. The mean ground-truth TMTV (gtTMTV) and pTMTV were 276.6 ± 393.5 cm³ and 301.9 ± 510.5 cm³ in the validation cohort, respectively. High agreement in the Bland-Altman analysis and a strong positive correlation in the linear regression analysis (R² = 0.874, p < 0.001) were demonstrated between gtTMTV and pTMTV. pTMTV (≥ 201.2 cm³) was shown to be an independent prognostic factor of PFS (HR = 3.097, p = 0.001) and OS (HR = 6.601, p < 0.001). CONCLUSIONS: The fully convolutional network (FCN) model with a U-Net architecture can accurately segment lymphoma lesions and allows fully automatic assessment of TMTV on PET scans for DLBCL patients. Furthermore, pTMTV is an independent prognostic factor of survival in DLBCL patients. KEY POINTS: • The segmentation model based on a U-Net architecture shows high performance in the segmentation of DLBCL lesions on FDG-PET images. • The proposed method can provide quantitative information, in the form of a predicted TMTV, for predicting the prognosis of DLBCL patients.
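
For readers who want to reproduce the headline quantities, the minimal sketch below computes a Dice similarity coefficient and a TMTV (voxel count times voxel volume) from binary masks. The toy masks and the 4 mm isotropic voxel spacing are assumptions for illustration, not the study's data.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    intersection = np.logical_and(pred, gt).sum()
    return 2.0 * intersection / (pred.sum() + gt.sum())

def tmtv_cm3(mask: np.ndarray, voxel_spacing_mm=(4.0, 4.0, 4.0)) -> float:
    """Total metabolic tumour volume: voxel count times voxel volume, in cm^3."""
    voxel_volume_mm3 = float(np.prod(voxel_spacing_mm))
    return mask.sum() * voxel_volume_mm3 / 1000.0  # 1 cm^3 = 1000 mm^3

# Toy 3D masks standing in for predicted and ground-truth lesion segmentations.
rng = np.random.default_rng(0)
gt = rng.random((32, 32, 32)) > 0.9
pred = gt.copy()

print(f"DSC  = {dice_coefficient(pred, gt):.3f}")
print(f"TMTV = {tmtv_cm3(pred):.1f} cm^3")
```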


Subjects
Deep Learning; Lymphoma, Large B-Cell, Diffuse; Fluorodeoxyglucose F18; Humans; Lymphoma, Large B-Cell, Diffuse/diagnostic imaging; Lymphoma, Large B-Cell, Diffuse/pathology; Positron-Emission Tomography; Prognosis; Retrospective Studies; Tumor Burden
3.
Pattern Recognit; 122: 108341, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34565913

ABSTRACT

Segmentation of infections from CT scans is important for accurate diagnosis and follow-up in tackling COVID-19. Although convolutional neural networks have great potential to automate the segmentation task, most existing deep learning-based infection segmentation methods require fully annotated ground-truth labels for training, which is time-consuming and labor-intensive. This paper proposes a novel weakly supervised segmentation method for COVID-19 infections in CT slices, which requires only scribble supervision and is enhanced with uncertainty-aware self-ensembling and transformation-consistent techniques. Specifically, to deal with the difficulty caused by the shortage of supervision, an uncertainty-aware mean teacher is incorporated into the scribble-based segmentation method, encouraging the segmentation predictions to be consistent under different perturbations of an input image. This mean teacher model can guide the student model to be trained using information in images without requiring manual annotations. On the other hand, because the output of the mean teacher contains both correct and unreliable predictions, treating every prediction of the teacher model equally may degrade the performance of the student network. To alleviate this problem, a pixel-level uncertainty measure on the predictions of the teacher model is calculated, and the student model is guided only by the reliable predictions of the teacher model. To further regularize the network, a transformation-consistent strategy is also incorporated, which requires the prediction to follow the same transformation when a transform is applied to the input image of the network. The proposed method was evaluated on two public datasets and one local dataset. The experimental results demonstrate that the proposed method is more effective than other weakly supervised methods and achieves performance similar to that of fully supervised ones.
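
A compact sketch of the two mechanisms named above, under stated assumptions: the teacher is an exponential-moving-average copy of the student, and per-voxel uncertainty is approximated here by a single-pass softmax entropy with a hypothetical threshold (the paper estimates uncertainty under multiple perturbed forward passes).

```python
import torch

@torch.no_grad()
def ema_update(teacher, student, alpha=0.99):
    """Mean-teacher update: teacher weights follow an exponential moving
    average of the student weights after each training step."""
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(alpha).add_(s_param, alpha=1.0 - alpha)

def reliable_consistency_loss(student_logits, teacher_logits, threshold=0.5):
    """Consistency loss restricted to voxels where the teacher is confident.
    Uncertainty is approximated by the entropy of the teacher's softmax."""
    t_prob = torch.softmax(teacher_logits, dim=1)
    entropy = -(t_prob * torch.log(t_prob + 1e-8)).sum(dim=1)  # per-voxel uncertainty
    mask = (entropy < threshold).float()                        # keep reliable voxels only
    s_prob = torch.softmax(student_logits, dim=1)
    mse = ((s_prob - t_prob) ** 2).mean(dim=1)
    return (mse * mask).sum() / (mask.sum() + 1e-8)
```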

4.
Pattern Recognit; 113: 107828, 2021 May.
Article in English | MEDLINE | ID: mdl-33495661

ABSTRACT

Understanding chest CT imaging of the coronavirus disease 2019 (COVID-19) helps detect infections early and assess disease progression. In particular, automated severity assessment of COVID-19 in CT images plays an essential role in identifying cases that are in urgent need of intensive clinical care. However, it is often challenging to accurately assess the severity of this disease in CT images, due to variable infection regions in the lungs, similar imaging biomarkers, and large inter-case variations. To this end, we propose a synergistic learning framework for automated severity assessment of COVID-19 in 3D CT images, which jointly performs lung lobe segmentation and multi-instance classification. Considering that only a few infection regions in a CT image are related to severity assessment, we first represent each input image by a bag that contains a set of 2D image patches (each cropped from a specific slice). A multi-task multi-instance deep network (called M²UNet) is then developed to assess the severity of COVID-19 patients and simultaneously segment the lung lobes. Our M²UNet consists of a patch-level encoder, a segmentation sub-network for lung lobe segmentation, and a classification sub-network for severity assessment (with a unique hierarchical multi-instance learning strategy). Here, the context information provided by segmentation can be implicitly employed to improve the performance of severity assessment. Extensive experiments were performed on a real COVID-19 CT image dataset consisting of 666 chest CT scans, with results suggesting the effectiveness of our proposed method compared with several state-of-the-art methods.
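
The bag-of-patches idea can be sketched as follows: each 2D patch in a bag is encoded independently, and instance features are pooled across the bag before classification. This toy PyTorch module is an assumption-laden stand-in, not the actual M²UNet (which adds a segmentation branch and a hierarchical multi-instance strategy).

```python
import torch
import torch.nn as nn

class PatchBagClassifier(nn.Module):
    """Toy multi-instance classifier: encode each 2D patch in the bag,
    max-pool instance features over the bag, then predict a severity label."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # one 16-d feature per patch
        )
        self.classifier = nn.Linear(16, num_classes)

    def forward(self, bag):                  # bag: (num_patches, 1, H, W)
        feats = self.encoder(bag)            # (num_patches, 16)
        pooled = feats.max(dim=0).values     # max-pool over instances in the bag
        return self.classifier(pooled)

bag = torch.randn(12, 1, 64, 64)             # 12 patches cropped from CT slices
logits = PatchBagClassifier()(bag)
```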

5.
IEEE Trans Med Imaging; 42(11): 3395-3407, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37339020

ABSTRACT

Cross-modality magnetic resonance (MR) image synthesis can be used to generate missing modalities from given ones. Existing (supervised learning) methods often require a large number of paired multi-modal data to train an effective synthesis model. However, it is often challenging to obtain sufficient paired data for supervised training; in reality, we often have only a small number of paired samples but a large number of unpaired ones. To take advantage of both paired and unpaired data, in this paper we propose a Multi-scale Transformer Network (MT-Net) with edge-aware pre-training for cross-modality MR image synthesis. Specifically, an Edge-preserving Masked AutoEncoder (Edge-MAE) is first pre-trained in a self-supervised manner to simultaneously perform 1) image imputation for randomly masked patches in each image and 2) whole-image edge map estimation, which effectively learns both contextual and structural information. In addition, a novel patch-wise loss is proposed to enhance the performance of Edge-MAE by treating masked patches differently according to the difficulty of their respective imputations. Based on this pre-training, in the subsequent fine-tuning stage a Dual-scale Selective Fusion (DSF) module is designed (in our MT-Net) to synthesize missing-modality images by integrating multi-scale features extracted from the encoder of the pre-trained Edge-MAE. The pre-trained encoder is also employed to extract high-level features from the synthesized image and the corresponding ground-truth image, which are required to be similar (consistent) during training. Experimental results show that our MT-Net achieves performance comparable to the competing methods even when using only 70% of the available paired data. Our code will be released at https://github.com/lyhkevin/MT-Net.
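
The masking step of such pre-training can be sketched as below: a fixed ratio of non-overlapping patches is zeroed out, and the kept/masked layout is returned for the reconstruction loss. The patch size and mask ratio are illustrative assumptions; Edge-MAE additionally regresses a whole-image edge map, which is not shown here.

```python
import torch

def mask_random_patches(images, patch_size=16, mask_ratio=0.75):
    """Zero out a random subset of non-overlapping patches (H and W are
    assumed divisible by patch_size); returns the masked images plus a
    boolean keep-mask over patches for the reconstruction loss."""
    b, c, h, w = images.shape
    ph, pw = h // patch_size, w // patch_size
    num_patches = ph * pw
    num_masked = int(num_patches * mask_ratio)
    masked = images.clone()
    keep = torch.ones(b, num_patches, dtype=torch.bool)
    for i in range(b):
        idx = torch.randperm(num_patches)[:num_masked]
        keep[i, idx] = False
        for j in idx.tolist():
            r, col = divmod(j, pw)
            masked[i, :, r * patch_size:(r + 1) * patch_size,
                   col * patch_size:(col + 1) * patch_size] = 0.0
    return masked, keep

imgs = torch.randn(2, 1, 224, 224)        # toy MR slices
masked_imgs, keep = mask_random_patches(imgs)
```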

6.
Med Phys; 48(4): 1571-1583, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33438284

ABSTRACT

PURPOSE: We developed a system that can automatically classify cases of scoliosis secondary to neurofibromatosis type 1 (NF1-S) using deep learning algorithms (DLAs), improving the accuracy and effectiveness of classification and thereby assisting surgeons with auxiliary diagnosis. METHODS: Comprehensive experiments in NF1 classification were performed on a dataset consisting of 211 NF1-S patients (131 dystrophic and 80 nondystrophic). Additionally, 100 congenital scoliosis (CS) patients, 100 adolescent idiopathic scoliosis (AIS) patients, and 114 normal controls were used for experiments in primary classification. For identification of NF1-S with nondystrophic or dystrophic curves, we devised a novel network (the Bilateral convolutional neural network [CNN]) utilizing a bilinear-like operation to discover the features of interest shared between whole-spine AP and lateral x-ray images. The performance of the Bilateral CNN was compared with spine surgeons, conventional DLAs (VGG-16, ResNet-50, and Bilinear CNN [BCNN]), recently proposed DLAs (ShuffleNet, MobileNet, and EfficientNet), and a Two-path BCNN, an extension of BCNN that takes AP and lateral x-ray images as inputs. RESULTS: In NF1 classification, our proposed Bilateral CNN achieved 80.36% accuracy, outperforming the other seven DLAs (which ranged from 61.90% to 76.19%) under fivefold cross-validation. It also outperformed the spine surgeons (average accuracy of 77.5% for senior surgeons and 65.0% for junior surgeons). Our method is highly generalizable owing to the proposed methodology and data augmentation. Furthermore, the heatmaps extracted by the Bilateral CNN showed that the curve pattern and the morphology of the ribs and vertebrae contributed most to the classification results. In primary classification, our proposed method also outperformed all the other methods, with an accuracy of 87.92% versus accuracies between 52.58% and 83.35%, under fivefold cross-validation. CONCLUSIONS: The proposed Bilateral CNN can automatically capture representative features for classifying NF1-S from AP and lateral x-ray images, leading to relatively good performance. Moreover, the proposed method can identify other spine deformities for auxiliary diagnosis.
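
One plausible reading of a "bilinear-like operation" over the two views is classic bilinear pooling: an outer product of the AP-branch and lateral-branch feature maps per spatial location, sum-pooled and normalized. The sketch below implements that standard operation under this assumption; the paper's exact fusion may differ.

```python
import torch

def bilinear_pool(feat_ap, feat_lat):
    """Bilinear-style fusion of features from the AP and lateral views:
    outer product per spatial location, average-pooled over locations,
    then signed square root and L2 normalization as in bilinear CNNs."""
    b, c1, h, w = feat_ap.shape
    c2 = feat_lat.shape[1]
    x = feat_ap.reshape(b, c1, h * w)
    y = feat_lat.reshape(b, c2, h * w)
    phi = torch.bmm(x, y.transpose(1, 2)) / (h * w)        # (b, c1, c2)
    phi = phi.reshape(b, -1)
    phi = torch.sign(phi) * torch.sqrt(phi.abs() + 1e-8)   # signed square root
    return torch.nn.functional.normalize(phi, dim=1)       # L2 normalization

ap = torch.randn(2, 64, 8, 8)    # features from the AP x-ray branch
lat = torch.randn(2, 64, 8, 8)   # features from the lateral x-ray branch
fused = bilinear_pool(ap, lat)   # (2, 64*64) fused descriptor
```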


Subjects
Neurofibromatosis 1; Scoliosis; Adolescent; Algorithms; Humans; Neural Networks, Computer; Neurofibromatosis 1/complications; Neurofibromatosis 1/diagnostic imaging; Scoliosis/diagnostic imaging
7.
IEEE Trans Med Imaging; 40(8): 2118-2128, 2021 Aug.
Article in English | MEDLINE | ID: mdl-33848243

ABSTRACT

Accurate segmentation of the prostate is a key step in external beam radiation therapy. In this paper, we tackle the challenging task of prostate segmentation in CT images with a two-stage network: 1) the first stage quickly localizes the prostate, and 2) the second stage accurately segments it. To segment the prostate precisely in the second stage, we formulate prostate segmentation as a multi-task learning problem, with a main task to segment the prostate and an auxiliary task to delineate the prostate boundary; the auxiliary task provides additional guidance for the unclear prostate boundary in CT images. In addition, conventional multi-task deep networks typically share most of their parameters (i.e., feature representations) across all tasks, which may limit their data-fitting ability, as the specificity of the different tasks is inevitably ignored. By contrast, we address this with a hierarchically fused U-Net structure, namely HF-UNet. HF-UNet has two complementary branches for the two tasks, with a novel attention-based task-consistency learning block allowing the two decoding branches to communicate at each level. HF-UNet thus learns shared representations for the different tasks hierarchically while preserving the specificity of the representations learned for each task. We performed extensive evaluations of the proposed method on a large planning CT image dataset and a benchmark prostate zonal dataset. The experimental results show that HF-UNet outperforms conventional multi-task network architectures and state-of-the-art methods.
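
A minimal sketch of such a multi-task objective, assuming a soft Dice term for the main segmentation task and a binary cross-entropy term for the auxiliary boundary task, with a hypothetical weighting factor lam:

```python
import torch
import torch.nn.functional as F

def soft_dice_loss(logits, target, eps=1e-6):
    """Differentiable Dice loss for a binary segmentation head."""
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum()
    return 1.0 - (2.0 * inter + eps) / (prob.sum() + target.sum() + eps)

def multitask_loss(seg_logits, seg_target, bnd_logits, bnd_target, lam=0.5):
    """Joint objective: main segmentation term plus an auxiliary boundary
    term; lam is an illustrative weighting factor, targets are floats."""
    seg_loss = soft_dice_loss(seg_logits, seg_target)
    bnd_loss = F.binary_cross_entropy_with_logits(bnd_logits, bnd_target)
    return seg_loss + lam * bnd_loss

seg_logits = torch.randn(1, 1, 32, 32, 32)
seg_target = (torch.rand(1, 1, 32, 32, 32) > 0.5).float()
bnd_logits = torch.randn(1, 1, 32, 32, 32)
bnd_target = (torch.rand(1, 1, 32, 32, 32) > 0.9).float()
loss = multitask_loss(seg_logits, seg_target, bnd_logits, bnd_target)
```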


Subjects
Prostate; Tomography, X-Ray Computed; Humans; Image Processing, Computer-Assisted; Male; Prostate/diagnostic imaging
8.
IEEE Trans Cybern; 51(4): 2153-2165, 2021 Apr.
Article in English | MEDLINE | ID: mdl-31869812

ABSTRACT

Automatic pancreas segmentation is crucial to the diagnostic assessment of diabetes and pancreatic cancer. However, the relatively small size of the pancreas in the upper abdomen, as well as large variations of its location and shape in the retroperitoneum, make the segmentation task challenging. To address these challenges, in this article we propose a cascaded multitask 3-D fully convolutional network (FCN) to automatically segment the pancreas. Our cascaded network is composed of two parts. The first part focuses on quickly locating the region of the pancreas, and the second part uses a multitask FCN with dense connections to refine the segmentation map for fine voxel-wise segmentation. In particular, our multitask FCN with dense connections simultaneously performs voxel-wise segmentation and skeleton extraction of the pancreas. These two tasks are complementary: the extracted skeleton provides rich information about the shape and size of the pancreas in the retroperitoneum, which can boost segmentation. The multitask FCN is also designed to share low- and mid-level features across the tasks, and a feature-consistency module is further introduced to enhance the connection and fusion of feature maps at different levels. Evaluations on two pancreas datasets demonstrate the robustness of our proposed method in correctly segmenting the pancreas in various settings, with results outperforming both baseline and state-of-the-art methods. Moreover, an ablation study shows that the proposed parts/modules are critical for effective multitask learning.
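
The auxiliary skeleton targets for such a setup can be derived directly from the ground-truth masks. A minimal sketch using scikit-image is shown below; the toy volume is illustrative only, and `skeletonize` handles 3D input in recent scikit-image versions (older versions expose `skeletonize_3d` instead).

```python
import numpy as np
from skimage.morphology import skeletonize

# Toy ground-truth pancreas mask; the skeleton derived from it serves as
# the target for the auxiliary skeleton-extraction task.
mask = np.zeros((48, 48, 48), dtype=bool)
mask[16:32, 16:32, 8:40] = True

skeleton = skeletonize(mask)   # thin 3D medial-axis-like representation
print(skeleton.sum(), "skeleton voxels from", mask.sum(), "mask voxels")
```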


Subjects
Image Interpretation, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Neural Networks, Computer; Pancreas/diagnostic imaging; Humans; Pancreatic Neoplasms/diagnostic imaging
9.
IEEE Rev Biomed Eng; 14: 4-15, 2021.
Article in English | MEDLINE | ID: mdl-32305937

ABSTRACT

The pandemic of coronavirus disease 2019 (COVID-19) is spreading all over the world. Medical imaging such as X-ray and computed tomography (CT) plays an essential role in the global fight against COVID-19, while recently emerging artificial intelligence (AI) technologies further strengthen the power of imaging tools and help medical specialists. We hereby review the rapid responses of the medical imaging community (empowered by AI) toward COVID-19. For example, AI-empowered image acquisition can significantly help automate the scanning procedure and reshape the workflow with minimal contact with patients, providing the best protection to imaging technicians. AI can also improve work efficiency via accurate delineation of infections in X-ray and CT images, facilitating subsequent quantification. Moreover, computer-aided platforms help radiologists make clinical decisions, e.g., for disease diagnosis, tracking, and prognosis. In this review paper, we thus cover the entire pipeline of medical imaging and analysis techniques involved with COVID-19, including image acquisition, segmentation, diagnosis, and follow-up. We particularly focus on the integration of AI with X-ray and CT, both of which are widely used in frontline hospitals, to depict the latest progress of medical imaging and radiology in fighting against COVID-19.


Subjects
COVID-19/diagnosis; SARS-CoV-2/pathogenicity; Artificial Intelligence; Humans; Pandemics/prevention & control; Tomography, X-Ray Computed/methods
10.
Med Image Anal; 71: 102039, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33831595

ABSTRACT

Fully convolutional networks (FCNs), including UNet and VNet, are widely used network architectures for semantic segmentation in recent studies. However, a conventional FCN is typically trained with a cross-entropy or Dice loss, which calculates the error between predictions and ground-truth labels for each pixel independently. This often results in non-smooth neighborhoods in the predicted segmentation, a problem that becomes more serious in CT prostate segmentation because CT images usually have low tissue contrast. To address this problem, we propose a two-stage framework, with the first stage quickly localizing the prostate region and the second stage precisely segmenting the prostate with a multi-task UNet architecture. We introduce a novel online metric learning module through voxel-wise sampling in the multi-task network. The proposed network therefore has a dual-branch architecture that tackles two tasks: (1) a segmentation sub-network that generates the prostate segmentation, and (2) a voxel-metric learning sub-network that improves the quality of the learned feature space, supervised by a metric loss. Specifically, the voxel-metric learning sub-network samples tuples (triplets and pairs) at the voxel level from the intermediate feature maps. Unlike conventional deep metric learning methods that generate triplets or pairs at the image level before the training phase, our voxel-wise tuples are sampled online and handled in an end-to-end fashion via multi-task learning. To evaluate the proposed method, we conducted extensive experiments on a real CT image dataset consisting of 339 patients. The ablation studies show that our method learns more representative voxel-level features than conventional training with cross-entropy or Dice loss, and the comparisons show that the proposed method outperforms the state-of-the-art methods by a reasonable margin.
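
A minimal sketch of online voxel-wise triplet sampling, assuming anchors and positives are drawn from prostate voxels and negatives from background, fed into PyTorch's standard triplet margin loss; the sampling count and margin are illustrative, not the paper's settings.

```python
import torch
import torch.nn as nn

def sample_voxel_triplets(features, labels, num_triplets=128):
    """Online voxel-wise triplet sampling from an intermediate feature map.
    features: (C, D, H, W) tensor; labels: (D, H, W) binary prostate mask.
    Anchors/positives come from prostate voxels, negatives from background."""
    c = features.shape[0]
    flat_feat = features.reshape(c, -1).t()           # (num_voxels, C)
    flat_lab = labels.reshape(-1)
    pos_idx = torch.nonzero(flat_lab == 1).squeeze(1)
    neg_idx = torch.nonzero(flat_lab == 0).squeeze(1)
    a = pos_idx[torch.randint(len(pos_idx), (num_triplets,))]
    p = pos_idx[torch.randint(len(pos_idx), (num_triplets,))]
    n = neg_idx[torch.randint(len(neg_idx), (num_triplets,))]
    return flat_feat[a], flat_feat[p], flat_feat[n]

feats = torch.randn(32, 8, 16, 16)                    # toy feature map
labs = (torch.rand(8, 16, 16) > 0.7).long()           # toy voxel labels
anchor, positive, negative = sample_voxel_triplets(feats, labs)
loss = nn.TripletMarginLoss(margin=1.0)(anchor, positive, negative)
```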


Subjects
Prostate; Tomography, X-Ray Computed; Humans; Image Processing, Computer-Assisted; Male; Prostate/diagnostic imaging
11.
Mitochondrial DNA B Resour; 5(1): 1102-1104, 2020 Feb 11.
Article in English | MEDLINE | ID: mdl-33366893

ABSTRACT

In this study, high-throughput Illumina sequencing was employed to assemble the complete mitochondrial genome of the Meiren yak (Bos grunniens), a local yak breed from Gansu Province, China. The mitochondrial genome is 16,321 bp long with an A + T-biased nucleotide composition and harbors 13 protein-coding genes, 22 tRNA genes, 2 rRNA genes, and a noncoding control region. The mitogenomic organization and codon usage are highly similar to those of previously published congeneric mitochondrial genomes. Bayesian phylogenetic analysis indicates that the Meiren yak is most closely related to nine other yak breeds (incl. Datong, Huanhu, Pali, Pamir, Polled, Qilian, Seron, Sunan, and Tianjun yaks).

12.
Front Neurosci; 14: 626154, 2020.
Article in English | MEDLINE | ID: mdl-33551735

ABSTRACT

Frontotemporal dementia (FTD) and Alzheimer's disease (AD) have overlapping symptoms, and accurate differential diagnosis is important for targeted intervention and treatment. Previous studies suggest that deep learning (DL) techniques have the potential to solve the differential diagnosis problem of FTD, AD, and normal controls (NCs), but their performance is still unclear. In addition, existing DL-assisted diagnostic studies still rely on hypothesis-based, expert-level preprocessing. On the one hand, this imposes high demands on clinicians and on the data themselves; on the other hand, it hinders tracing classification results back to the original image data, so the results cannot be interpreted intuitively. In the current study, a large cohort of 3D T1-weighted structural magnetic resonance imaging (MRI) volumes (n = 4,099) was collected from two publicly available databases, the ADNI and the NIFD. We trained a DL-based network directly on raw T1 images to classify FTD, AD, and the corresponding NCs, and we evaluated its convergence speed, differential diagnosis ability, robustness, and generalizability under nine scenarios. The proposed network yielded an accuracy of 91.83% on the most common T1-weighted sequence [magnetization-prepared rapid acquisition with gradient echo (MPRAGE)]. The knowledge learned by the DL network through multiple classification tasks can also be used to solve subproblems, and this knowledge is generalizable and not limited to a specific dataset. Furthermore, we applied a gradient visualization algorithm based on guided backpropagation to calculate the contribution map, which shows intuitively why the DL-based network makes each decision. The regions making valuable contributions to FTD classification were more widespread in the right frontal white matter, while the left temporal, bilateral inferior frontal, and parahippocampal regions were contributors to the classification of AD. Our results demonstrate that DL-based networks can solve the differential diagnosis problem without any hypothesis-based preprocessing. Moreover, they may mine patterns that differ from those used by human clinicians, which may provide new insight into the understanding of FTD and AD.
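
Guided backpropagation itself is a standard technique: clamp negative gradients at every ReLU during the backward pass so only positive evidence flows back to the input. The sketch below shows it on a toy 3D classifier; the hook-based mechanics are generic PyTorch, not the study's code, and the stand-in model is an assumption.

```python
import torch
import torch.nn as nn

def guided_backprop_map(model, image):
    """Guided backpropagation: during the backward pass, clamp negative
    gradients at every ReLU so only positive contributions flow back,
    yielding a voxel-wise contribution map for the predicted class."""
    handles = []

    def clamp_grad(module, grad_input, grad_output):
        return (torch.clamp(grad_input[0], min=0.0),)

    for m in model.modules():
        if isinstance(m, nn.ReLU):
            handles.append(m.register_full_backward_hook(clamp_grad))

    image = image.clone().requires_grad_(True)
    logits = model(image)
    logits[0, logits.argmax()].backward()   # gradient of the top class
    for h in handles:
        h.remove()
    return image.grad.detach()

# Toy stand-in for the 3D classification network used in the study.
model = nn.Sequential(nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(8, 3))
contribution = guided_backprop_map(model, torch.randn(1, 1, 32, 32, 32))
```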

13.
Med Image Anal; 54: 168-178, 2019 May.
Article in English | MEDLINE | ID: mdl-30928830

ABSTRACT

Accurate segmentation of the prostate and organs at risk (e.g., bladder and rectum) in CT images is a crucial step in radiation therapy for prostate cancer. However, it is a very challenging task due to unclear boundaries, large intra- and inter-patient shape variability, and the uncertain presence of bowel gas and fiducial markers. In this paper, we propose a novel automatic segmentation framework using fully convolutional networks with boundary-sensitive representation to address this challenging problem. Our segmentation framework contains three modules. First, an organ localization model is designed to focus on the candidate segmentation region of each organ for better performance. Then, a boundary-sensitive representation model based on multi-task learning is proposed to represent the semantic boundary information in a more robust and accurate manner. Finally, a multi-label cross-entropy loss function combining the boundary-sensitive representation is introduced to train a fully convolutional network for organ segmentation. The proposed method is evaluated on a large and diverse planning CT dataset with 313 images from 313 prostate cancer patients. Experimental results show that our proposed method outperforms baseline fully convolutional networks, as well as other state-of-the-art methods, in CT male pelvic organ segmentation.
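
The boundary supervision signal for such a multi-task setup can be derived from the organ masks themselves. One common recipe, sketched below under that assumption, takes the voxels removed by a morphological erosion as the boundary band.

```python
import numpy as np
from scipy.ndimage import binary_erosion

def boundary_label(mask: np.ndarray, width: int = 1) -> np.ndarray:
    """Derive a boundary map from a binary organ mask: the voxels removed
    by `width` erosions form the inner boundary band used to supervise a
    boundary-sensitive auxiliary task."""
    eroded = binary_erosion(mask, iterations=width)
    return mask & ~eroded

mask = np.zeros((64, 64), dtype=bool)
mask[20:44, 20:44] = True
print(boundary_label(mask).sum(), "boundary pixels")  # perimeter band of the square
```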


Subjects
Deep Learning; Prostatic Neoplasms/diagnostic imaging; Radiographic Image Interpretation, Computer-Assisted/methods; Tomography, X-Ray Computed/methods; Humans; Imaging, Three-Dimensional; Male; Organs at Risk/radiation effects; Rectum/radiation effects; Urinary Bladder/radiation effects
14.
IEEE Trans Med Imaging; 38(2): 585-595, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30176583

ABSTRACT

Accurate segmentation of pelvic organs (i.e., prostate, bladder, and rectum) from CT images is crucial for effective prostate cancer radiotherapy. However, it is a challenging task due to: 1) low soft-tissue contrast in CT images and 2) large shape and appearance variations of the pelvic organs. In this paper, we employ a two-stage deep learning-based method, with a novel distinctive-curve-guided fully convolutional network (FCN), to solve the aforementioned challenges. Specifically, the first stage performs fast and robust organ detection in the raw CT images; it is designed as a coarse segmentation network that provides region proposals for the three pelvic organs. The second stage performs fine segmentation of each organ based on the region proposals. To better identify indistinguishable pelvic organ boundaries, a novel morphological representation, namely the distinctive curve, is also introduced to support precise segmentation. To implement this, in the second stage a multi-task FCN first learns the distinctive curve and the segmentation map separately and then combines the two tasks to produce an accurate segmentation map. The final segmentation results for all three pelvic organs are generated by a weighted max-voting strategy. We have conducted extensive experiments on a large and diverse pelvic CT dataset to evaluate our proposed method. The experimental results demonstrate that our proposed method is accurate and robust for this challenging segmentation task, outperforming the state-of-the-art segmentation methods.
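
The final fusion step can be sketched generically: given several candidate per-organ probability maps, a weighted vote picks the label with the highest weighted probability at each voxel. The weights, label layout, and map shapes below are illustrative assumptions; the paper's exact voting scheme may differ.

```python
import numpy as np

def weighted_max_voting(prob_maps, weights):
    """Fuse per-organ probability maps from several candidate predictions:
    each pixel gets the label whose weighted summed probability is highest.
    prob_maps: list of (num_labels, H, W) arrays; weights: one scalar each."""
    stacked = sum(w * p for w, p in zip(weights, prob_maps))
    return np.argmax(stacked, axis=0)   # (H, W) label map: 0=bg, 1..3=organs

maps = [np.random.rand(4, 128, 128) for _ in range(3)]  # three candidate predictions
fused = weighted_max_voting(maps, weights=[0.5, 0.3, 0.2])
```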


Subjects
Image Interpretation, Computer-Assisted/methods; Neural Networks, Computer; Pelvis/diagnostic imaging; Prostate/diagnostic imaging; Tomography, X-Ray Computed/methods; Algorithms; Humans; Male; Prostatic Neoplasms/diagnostic imaging
15.
Article in English | MEDLINE | ID: mdl-30106714

ABSTRACT

Accurate segmentation of pelvic organs (i.e., prostate, bladder, and rectum) from CT images is crucial for effective prostate cancer radiotherapy. However, it is a challenging task due to 1) low soft-tissue contrast in CT images and 2) large shape and appearance variations of the pelvic organs. In this paper, we employ a two-stage deep learning-based method, with a novel distinctive-curve-guided fully convolutional network (FCN), to solve the aforementioned challenges. Specifically, the first stage performs fast and robust organ detection in the raw CT images; it is designed as a coarse segmentation network that provides region proposals for the three pelvic organs. The second stage performs fine segmentation of each organ based on the region proposals. To better identify indistinguishable pelvic organ boundaries, a novel morphological representation, namely the distinctive curve, is also introduced to support precise segmentation. To implement this, in the second stage a multi-task FCN first learns the distinctive curve and the segmentation map separately and then combines the two tasks to produce an accurate segmentation map. The final segmentation results for all three pelvic organs are generated by a weighted max-voting strategy. We have conducted extensive experiments on a large and diverse pelvic CT dataset to evaluate our proposed method. The experimental results demonstrate that our proposed method is accurate and robust for this challenging segmentation task, outperforming the state-of-the-art segmentation methods.

16.
Front Oncol; 7: 8, 2017.
Article in English | MEDLINE | ID: mdl-28168166

ABSTRACT

INTRODUCTION: Computed tomography (CT), combined positron emission tomography and CT (PET/CT), and magnetic resonance imaging (MRI) are commonly used in head and neck radiation planning. Hybrid PET/MRI has garnered attention for its potential added value in cancer staging and treatment planning. Herein, we compare PET/MRI vs. planning CT for head and neck cancer gross tumor volume (GTV) delineation. MATERIALS AND METHODS: We prospectively enrolled patients with head and neck cancer treated with definitive chemoradiation to 60-70 Gy using IMRT. We performed pretreatment contrast-enhanced planning CT and gadolinium-enhanced PET/MRI. Primary and nodal volumes were delineated on planning CT (GTV-CT) prospectively before treatment and on PET/MRI (GTV-PET/MRI) retrospectively after treatment. GTV-PET/MRI was compared with GTV-CT using separate rigid registrations for each tumor volume. The Dice similarity coefficient (DSC), evaluating spatial overlap, and the modified Hausdorff distance (mHD), evaluating mean orthogonal distance difference, were calculated, and the minimum dose to 95% of each GTV (D95) was compared. RESULTS: Eleven patients were evaluable (10 oropharynx, 1 larynx). Nine patients had evaluable primary tumor GTVs and seven had evaluable nodal GTVs. Mean primary GTV-CT and GTV-PET/MRI sizes were 13.2 and 14.3 cc, with mean intersection 8.7 cc, DSC 0.63, and mHD 1.6 mm. D95 was 65.3 Gy for the primary GTV-CT vs. 65.2 Gy for the primary GTV-PET/MRI. Mean nodal GTV-CT and GTV-PET/MRI sizes were 19.0 and 23.0 cc, with mean intersection 14.4 cc, DSC 0.69, and mHD 2.3 mm. D95 was 62.3 Gy for both the nodal GTV-CT and GTV-PET/MRI. CONCLUSION: In this series of patients with head and neck (primarily oropharynx) cancer, PET/MRI and CT GTVs had similar volumes (though there were individual cases with larger differences), with overall small discrepancies in spatial overlap, small mean orthogonal distance differences, and similar radiation doses.
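
The two geometric metrics can be computed from binary volumes with SciPy distance transforms, as sketched below. The mHD here is implemented as a mean symmetric surface distance, one common reading of "modified Hausdorff distance" (the paper's exact definition may differ), and the toy volumes and unit spacing are assumptions.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, binary_erosion

def dice(a, b):
    """Dice similarity coefficient between two binary volumes."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def modified_hausdorff_mm(a, b, spacing=(1.0, 1.0, 1.0)):
    """Mean symmetric surface distance between two binary volumes.
    Surfaces are the voxels removed by a single erosion; distances are
    taken from each surface voxel to the other surface."""
    surf_a = a & ~binary_erosion(a)
    surf_b = b & ~binary_erosion(b)
    dist_to_b = distance_transform_edt(~surf_b, sampling=spacing)
    dist_to_a = distance_transform_edt(~surf_a, sampling=spacing)
    return 0.5 * (dist_to_b[surf_a].mean() + dist_to_a[surf_b].mean())

a = np.zeros((32, 32, 32), dtype=bool); a[8:20, 8:20, 8:20] = True
b = np.zeros_like(a); b[9:21, 9:21, 8:20] = True
print(f"DSC = {dice(a, b):.2f}, mHD = {modified_hausdorff_mm(a, b):.2f} mm")
```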
