Results 1 - 20 of 25
1.
Article in Chinese | WPRIM | ID: wpr-1020722

ABSTRACT

Objective: To explore the feasibility of automatic segmentation of the clinical target volume (CTV) and organs at risk (OARs) for cervical cancer using AccuLearning (AL), evaluated with geometric and dosimetric indices. Methods: Seventy-five CT localization image sets with manual contouring data from postoperative cervical cancer patients were enrolled. Sixty cases were randomly selected to train an automatic segmentation model in AL, and the CTV and OARs of the remaining 15 cases were contoured automatically. Radiotherapy plans designed on the automatically segmented contours were transferred onto the manually contoured CT images. Efficiency, the Dice similarity coefficient (DSC), the Hausdorff distance (HD) and dosimetric parameters were compared between the two methods. Results: The time required for automatic segmentation was significantly shorter than that for manual contouring (P<0.05). The DSC values of all structures were ≥0.87. The HD of the bowel bag and rectum was about 10 mm, and that of the remaining OARs was less than 5 mm. The CTV (D98, V90%, V95%, Dmean, HI), bowel bag (V50) and bladder (V50) showed significant differences in the dosimetric comparison (P<0.05). Conclusion: The AL-based automatic segmentation model can improve the efficiency of radiotherapy planning. Automatic segmentation of OARs has potential for clinical application, whereas that of the CTV still requires further modification.
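Most of the studies in this listing evaluate auto-segmentation against manual contours with the Dice similarity coefficient and the Hausdorff distance. As a rough illustration of how these two metrics are typically computed on binary masks (not the implementation used in any of these papers; the toy spheres and the surface approximation are assumptions), a NumPy/SciPy sketch:

```python
import numpy as np
from scipy.ndimage import binary_erosion
from scipy.spatial.distance import cdist

def dice_coefficient(a, b):
    """Dice similarity coefficient between two boolean masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def surface_points(mask, spacing):
    """Voxel-centre coordinates (in mm) of the mask's outer surface."""
    surface = mask & ~binary_erosion(mask)
    return np.argwhere(surface) * np.asarray(spacing)

def hausdorff_distance(a, b, spacing=(1.0, 1.0, 1.0), percentile=100):
    """Symmetric (percentile) Hausdorff distance; percentile=95 gives HD95."""
    pa, pb = surface_points(a, spacing), surface_points(b, spacing)
    d = cdist(pa, pb)                                   # pairwise distances in mm
    return max(np.percentile(d.min(axis=1), percentile),
               np.percentile(d.min(axis=0), percentile))

# toy example: two slightly offset spheres on a 1 mm grid
grid = np.indices((40, 40, 40)).transpose(1, 2, 3, 0)
manual = np.linalg.norm(grid - [20, 20, 20], axis=-1) < 10
auto   = np.linalg.norm(grid - [22, 20, 20], axis=-1) < 10
print(f"DSC  = {dice_coefficient(manual, auto):.3f}")
print(f"HD95 = {hausdorff_distance(manual, auto, percentile=95):.1f} mm")
```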

2.
Chinese Journal of Medical Imaging; (12): 94-99, 104, 2024.
Article in Chinese | WPRIM | ID: wpr-1026356

ABSTRACT

Purpose: To evaluate the consistency and repeatability of cerebral blood flow (CBF) values measured by fusing automatically segmented regions of interest (ROIs) with arterial spin labeling (ASL) functional images in patients with medial temporal lobe epilepsy and hippocampal sclerosis. Materials and Methods: A total of 52 patients with medial temporal lobe epilepsy confirmed by MRI or pathology at the General Hospital of Ningxia Medical University between January 2021 and October 2022 were retrospectively collected. All subjects were scanned on a 3.0T MRI system to obtain an axial three-dimensional T1-weighted magnetization-prepared rapid gradient echo (3D-T1WI-MPRAGE) sequence and a three-dimensional pseudo-continuous ASL sequence. The 3D-T1WI-MPRAGE images were segmented automatically. Two physicians used the Freeview interface of the FreeSurfer software to fuse the hippocampal-subregion ROIs with the ASL functional images and to measure CBF values. Intra-observer and inter-observer consistency and repeatability were evaluated using the intraclass correlation coefficient (ICC), Bland-Altman plots and the Wilcoxon rank sum test. Results: The ICCs of CBF values measured by the two physicians were all >0.750, with an average of 0.868±0.095; by subregion (left/right): subiculum (SUB) 0.818/0.801, cornu ammonis (CA) 1 0.920/0.907, CA2-3 0.759/0.978, CA4 0.757/0.758 and dentate gyrus (DG) 0.990/0.991. The ICCs for ROIs delineated by the same physician were all >0.990, with an average of 0.994±0.002; by subregion (left/right): SUB 0.993/0.993, CA1 0.996/0.995, CA2-3 0.989/0.994, CA4 0.992/0.995 and DG 0.993/0.996. The Bland-Altman plots showed the scatter distribution and agreement, from which the coefficient of repeatability was obtained; the fusion measurement of automatically segmented ROIs with ASL functional images was repeatable for the same observer. Conclusion: CBF values measured by fusing automatically segmented hippocampal-subregion ROIs with ASL functional images show high consistency and repeatability.
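The repeatability analysis in this study uses Bland-Altman plots; a minimal sketch of the Bland-Altman bias, limits of agreement and coefficient of repeatability for paired measurements (the CBF values below are placeholders, not data from the study, and the 1.96 × SD definition of the coefficient of repeatability is an assumption about the convention used):

```python
import numpy as np

def bland_altman(x, y):
    """Bland-Altman statistics for paired measurements x and y.

    Returns the mean difference (bias), the 95% limits of agreement
    (bias ± 1.96 × SD of the differences) and 1.96 × SD itself, which is
    commonly reported as the coefficient of repeatability.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    diff = x - y
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd), 1.96 * sd

# placeholder CBF values (ml/100 g/min) from two measurement sessions
cbf_session1 = [48.2, 51.0, 44.7, 53.1, 49.5]
cbf_session2 = [47.6, 52.3, 45.1, 51.8, 50.2]
bias, loa, cor = bland_altman(cbf_session1, cbf_session2)
print(f"bias = {bias:.2f}, limits of agreement = {loa[0]:.2f} to {loa[1]:.2f}, CoR = {cor:.2f}")
```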

3.
Article in Chinese | WPRIM | ID: wpr-1027398

ABSTRACT

Objective: To investigate the effectiveness and feasibility of a 3D U-Net combined with a three-phase CT image segmentation model for the automatic segmentation of GTVnx and GTVnd in nasopharyngeal carcinoma. Methods: A total of 645 sets of computed tomography (CT) images were retrospectively collected from 215 nasopharyngeal carcinoma cases, covering three phases: plain scan (CT), contrast-enhanced CT (CTC) and delayed CT (CTD). The dataset was divided into a training set of 172 cases and a test set of 43 cases using the random number table method. Six experimental groups were established: A1, A2, A3, A4, B1 and B2. The first four groups used only CT, only CTC, only CTD and all three phases, respectively; groups B1 and B2 used phase-fine-tuned CTC models. The Dice similarity coefficient (DSC) and the 95% Hausdorff distance (HD95) served as quantitative evaluation indicators. Results: Compared with monophasic CT (groups A1/A2/A3), triphasic CT (group A4) yielded better results in the automatic segmentation of the GTVnd (DSC: 0.67 vs. 0.61, 0.64, 0.64; t = 7.48, 3.27, 4.84, P < 0.01; HD95: 36.45 mm vs. 79.23, 59.55, 65.17 mm; t = 5.24, 2.99, 3.89, P < 0.01), with statistically significant differences (P < 0.01). However, triphasic CT (group A4) showed no significant improvement in the automatic segmentation of the GTVnx compared with monophasic CT (DSC: 0.73 vs. 0.74, 0.74, 0.73; HD95: 14.17 mm vs. 8.06, 8.11, 8.10 mm), with no statistically significant difference (P > 0.05). For the automatic segmentation of the GTVnd, groups B1/B2 achieved higher accuracy than group A1 (DSC: 0.63, 0.63 vs. 0.61, t = 4.10, 3.03, P < 0.01; HD95: 58.11, 50.31 mm vs. 79.23 mm, t = 2.75, 3.10, P < 0.01). Conclusions: Triphasic CT scanning can improve the automatic segmentation of the GTVnd in nasopharyngeal carcinoma, and phase fine-tuning can enhance the segmentation accuracy of the GTVnd on plain CT images.

4.
Chinese Journal of Medical Physics; (6): 1463-1467, 2023.
Article in Chinese | WPRIM | ID: wpr-1026165

ABSTRACT

Objective: To develop an auto-segmentation model based on no-new-U-Net (nnU-Net) for delineating the high-risk clinical target volume (HR-CTV) and organs at risk (OARs) in CT-guided brachytherapy for cervical cancer, and to explore its clinical value. Methods: CT images of 63 patients with locally advanced cervical cancer who had completed image-guided brachytherapy were collected. The HR-CTV and the OARs (bladder, rectum and sigmoid colon) were delineated manually by a senior oncologist and taken as the gold standard. The automatic and manual segmentation results were compared, and the Dice similarity coefficient was used to evaluate the auto-segmentation accuracy for the HR-CTV and OARs. Results: The Dice similarity coefficients of the HR-CTV, bladder, rectum and sigmoid colon were 0.903±0.015, 0.948±0.011, 0.903±0.008 and 0.803±0.024, respectively. Conclusion: The established model can accurately segment the HR-CTV, bladder, rectum and sigmoid colon, but an oncologist still needs to check the results carefully.

5.
Article in Chinese | WPRIM | ID: wpr-1027378

ABSTRACT

Objective: To investigate the effects of CT images reconstructed with different field-of-view (FOV) sizes on the automatic segmentation of organs at risk and on dose calculation accuracy in radiotherapy after radical mastectomy. Methods: Under the same scanning conditions, CT-number-to-electron-density conversion curves were established by reconstructing the original CT images of a phantom placed at the isocenter and at extended-FOV (eFOV) positions using FOV sizes of 50, 60, 70 and 80 cm, and these curves were compared. A standard phantom of known volume was scanned, and its automatically segmented volume on images reconstructed with the different FOV sizes was compared. Thirty breast cancer patients treated in Guangdong Second Provincial General Hospital from January 2020 to June 2022 were randomly selected. After simulated positioning, their CT images were reconstructed with the different FOV sizes for automatic segmentation of organs at risk, and the automatic segmentation results were compared with the physicians' segmentation. The treatment plan designed on CT images reconstructed with a 50 cm FOV (FOV50 images) was applied to the images reconstructed with FOV sizes of 60, 70 and 80 cm (FOV60, FOV70 and FOV80 images) for dose calculation, and the dose calculation results were compared. Results: The CT-number-to-electron-density conversion curves derived from images reconstructed with different FOV sizes were roughly consistent. At the isocenter, the difference between the segmented and actual volumes of the standard phantom increased with FOV size, up to a maximum of 6 cm³ (4.8%). The automatic segmentation accuracy of the spinal cord, trachea, esophagus, thyroid, healthy mammary gland and skin decreased with increasing FOV size (t = -28.43 to 8.23, P < 0.05). Dose comparisons across the different FOV sizes showed no statistically significant differences (P > 0.05) in the dose to the target volume (V95), the maximum and mean doses in the supraclavicular lymph node region, or the doses to organs at risk. The coverage of the planning target volume decreased with increasing FOV size, with a maximum difference of 4.06%. Conclusions: For radiotherapy after radical mastectomy, it is recommended to use FOV50 images for the automatic segmentation of organs at risk, to establish the CT-number-to-electron-density conversion curve from the electron-density phantom images in the eFOV region, and to prefer eFOV80 images for dose calculation.
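Dose calculation on CT images relies on a CT-number-to-electron-density conversion curve of the kind established here; a minimal sketch of applying such a curve by piecewise-linear interpolation (the calibration points are illustrative values, not those measured in this study):

```python
import numpy as np

# illustrative calibration points from a density phantom:
# (CT number in HU, relative electron density with respect to water)
hu_points  = np.array([-1000.0, -700.0, -100.0,   0.0,  200.0,  800.0, 1500.0])
red_points = np.array([   0.00,   0.29,   0.93,  1.00,   1.10,   1.45,   1.85])

def hu_to_relative_electron_density(hu):
    """Piecewise-linear lookup of relative electron density from CT numbers."""
    return np.interp(hu, hu_points, red_points)

# example: convert a small patch of CT numbers before dose calculation
ct_patch = np.array([[-950, -100, 40], [300, 60, -20]], dtype=float)
print(hu_to_relative_electron_density(ct_patch))
```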

6.
Article in Chinese | WPRIM | ID: wpr-956847

ABSTRACT

Objective: To explore the effects of multimodal imaging on the automatic segmentation of glioblastoma radiotherapy targets using a deep learning approach. Methods: Computed tomography (CT) images, contrast-enhanced T1-weighted (T1C) sequences and T2 fluid-attenuated inversion recovery (T2-FLAIR) magnetic resonance imaging (MRI) sequences of 30 patients with glioblastoma were collected. The gross tumor volume (GTV) and the corresponding clinical target volumes CTV1 and CTV2 were manually delineated for all 30 patients according to the Radiation Therapy Oncology Group (RTOG) criteria. Four datasets were designed: a unimodal CT dataset (CT sequences of the 30 cases only), a multimodal CT-T1C dataset (CT and T1C sequences), a multimodal CT-T2-FLAIR dataset (CT and T2-FLAIR sequences), and a trimodal CT-MRI dataset (CT, T1C and T2-FLAIR sequences). For each dataset, 25 cases were used to train a modified 3D U-Net model and the remaining five cases were used for testing. Segmentation of the GTV, CTV1 and CTV2 in the test cases was evaluated with the Dice similarity coefficient (DSC), the 95% Hausdorff distance (HD95) and the relative volume error (RVE). Results: The best automatic segmentation of the GTV was achieved with the CT-MRI dataset. Compared with the CT dataset (DSC: 0.94 vs. 0.79, HD95: 2.09 mm vs. 12.33 mm, RVE: 1.16% vs. 20.14%), the differences in DSC (t = 3.78, P < 0.05) and HD95 (t = 4.07, P < 0.05) obtained with the CT-MRI dataset were statistically significant. Highly consistent automatic segmentation of CTV1 and CTV2 was also achieved with the CT-MRI dataset (DSC: 0.90 vs. 0.91, HD95: 3.78 mm vs. 2.41 mm, RVE: 3.61% vs. 5.35%), although the differences in DSC and HD95 for CTV1 and CTV2 compared with the CT dataset were not statistically significant (P > 0.05). The 3D U-Net model produced some errors in predicting the upper and lower bounds of the GTV and the organs adjacent to CTV2 (e.g., the brainstem and eyeball). Conclusions: The modified 3D U-Net model based on the multimodal CT-MRI dataset achieves better segmentation of glioblastoma targets, and its application could benefit clinical practice.
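Training on multimodal datasets such as the CT-MRI set described here requires the co-registered volumes to be combined into one multi-channel input; a minimal preprocessing sketch (the shapes, z-score normalization and channel order are assumptions, not details from the paper):

```python
import numpy as np

def zscore(volume):
    """Normalize one modality to zero mean and unit variance."""
    volume = volume.astype(np.float32)
    return (volume - volume.mean()) / (volume.std() + 1e-8)

def stack_modalities(ct, t1c, t2_flair):
    """Stack co-registered volumes of identical shape into a (C, D, H, W) array,
    the usual channel-first layout expected by a 3D U-Net."""
    assert ct.shape == t1c.shape == t2_flair.shape, "volumes must be resampled to a common grid"
    return np.stack([zscore(ct), zscore(t1c), zscore(t2_flair)], axis=0)

# toy volumes standing in for co-registered CT / MRI sequences
shape = (96, 128, 128)
ct, t1c, flair = (np.random.rand(*shape) for _ in range(3))
x = stack_modalities(ct, t1c, flair)
print(x.shape)   # (3, 96, 128, 128) -> one training sample with three input channels
```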

7.
Article in Chinese | WPRIM | ID: wpr-932626

ABSTRACT

Objective: Because of the low contrast between tumors and surrounding tissues in CBCT images, this study proposes an automatic segmentation method for central lung cancer in CBCT images. Methods: A total of 221 patients with central lung cancer were recruited; 176 underwent plain CT localization and 45 underwent contrast-enhanced CT localization. The enhanced CT images were displayed with lung-window and mediastinal-window settings and elastically registered to the first CBCT verification images to obtain paired datasets. These paired datasets were fed into a CycleGAN network for style transfer, so that the CBCT images could be transformed into "enhanced CT" images under the lung and mediastinal windows. Finally, the transformed images were fed into a UNET-attention network for deep learning of the GTV. The segmentation results were evaluated with the Dice similarity coefficient (DSC), the Hausdorff distance (HD) and the area under the receiver operating characteristic curve (AUC). Results: The contrast between tumors and surrounding tissues was markedly improved after style transfer. For the CycleGAN + UNET-attention network, the DSC was 0.78±0.05, the HD was 9.22±3.42 and the AUC was 0.864. Conclusion: The CycleGAN + UNET-attention network can effectively segment central lung cancer in CBCT images.

8.
Article in Chinese | WPRIM | ID: wpr-932665

ABSTRACT

Objective: A hybrid attention U-Net (HA-U-net) neural network based on U-Net was designed for automatic delineation of the craniospinal clinical target volume (CTV), and its segmentation results were compared with those of a U-Net automatic segmentation model. Methods: The data of 110 craniospinal irradiation patients were reviewed; 80 cases were selected for the training set, 10 for the validation set and 20 for the test set. HA-U-net used U-Net as the basic network architecture, with a double attention module added at the network input and attention gate modules incorporated into the skip connections, to establish the automatic craniospinal delineation model. The evaluation parameters were the Dice similarity coefficient (DSC), Hausdorff distance (HD) and precision. Results: The DSC, HD and precision of the HA-U-net network were 0.901±0.041, 2.77±0.29 mm and 0.903±0.038, respectively, all better than those of U-Net (all P<0.05). Conclusion: The HA-U-net convolutional neural network effectively improves the accuracy of automatic craniospinal CTV segmentation and can help physicians improve work efficiency and the consistency of CTV delineation.

9.
Article in Chinese | WPRIM | ID: wpr-910492

ABSTRACT

Objective: To evaluate a multi-task learning-based light-weight convolutional neural network (MTLW-CNN) for the automatic segmentation of thoracic organs at risk (OARs). Methods: The MTLW-CNN consisted of several shared feature layers and three branches for segmenting three OARs. A total of 497 cases with thoracic tumors were collected; the computed tomography (CT) images encompassing the lung, heart and spinal cord were included in this study, with the corresponding contours delineated by experienced radiation oncologists taken as the ground truth. All cases were randomly divided into a training and validation set (n = 300) and a test set (n = 197). The MTLW-CNN was applied to the test set, and the Dice similarity coefficients (DSCs) of the three OARs, the training and testing time, and the space complexity (S) were calculated and compared with those of U-Net and DeepLabv3+. To evaluate the effect of multi-task learning on generalization performance, three single-task light-weight CNNs (STLW-CNNs) were built with structures identical to the corresponding branches of the MTLW-CNN. After training the STLW-CNNs with the same data and algorithm, their DSCs on the test set were statistically compared with those of the MTLW-CNN. Results: For the MTLW-CNN, the mean (μ) DSCs of the lung, heart and spinal cord were 0.954, 0.921 and 0.904, respectively. The differences in μ between the MTLW-CNN and the other two models (U-Net and DeepLabv3+) were less than 0.020. The training and testing time of the MTLW-CNN was 1/3 to 1/30 of that of U-Net and DeepLabv3+, and its S was 1/42 of that of U-Net and 1/1,220 of that of DeepLabv3+. The differences in μ and standard deviation (σ) for the lung and heart between the MTLW-CNN and the STLW-CNNs were approximately 0.005 and 0.002. For the spinal cord, the difference in μ was 0.001, but the σ of the STLW-CNN was 0.014 higher than that of the MTLW-CNN. Conclusions: The MTLW-CNN requires less time and memory for high-precision automatic segmentation of thoracic OARs, and it can improve the application efficiency and generalization performance of the models.
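The multi-task design summarized above shares low-level features and splits into one branch per organ; a schematic PyTorch sketch of that general layout (the layer sizes, depths and organ names are invented for illustration and are far smaller than the published MTLW-CNN):

```python
import torch
import torch.nn as nn

class TinyMultiTaskSeg(nn.Module):
    """Shared 3D feature extractor with one lightweight segmentation head per OAR."""
    def __init__(self, organs=("lung", "heart", "spinal_cord")):
        super().__init__()
        self.shared = nn.Sequential(                       # layers shared by all tasks
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(16, 16, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )
        self.heads = nn.ModuleDict({                       # one binary-mask branch per organ
            name: nn.Conv3d(16, 1, kernel_size=1) for name in organs
        })

    def forward(self, x):
        features = self.shared(x)
        return {name: torch.sigmoid(head(features)) for name, head in self.heads.items()}

model = TinyMultiTaskSeg()
ct = torch.randn(1, 1, 32, 64, 64)          # (batch, channel, D, H, W) toy CT patch
out = model(ct)
print({k: tuple(v.shape) for k, v in out.items()})
```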

10.
Article in Chinese | WPRIM | ID: wpr-922062

ABSTRACT

OBJECTIVE: To explore the feasibility of using the bidirectional local distance-based medical similarity index (MSI) to evaluate automatic segmentation of medical images. METHODS: Taking the intermediate-risk clinical target volume for nasopharyngeal carcinoma, manually segmented by an experienced radiation oncologist, as the region of interest, automatic segmentations were obtained with Atlas-based and deep-learning-based methods, and multiple MSI values and the Dice similarity coefficient (DSC) between the manual and automatic segmentations were calculated. The differences between MSI and DSC were then comparatively analyzed. RESULTS: The DSC values for the Atlas-based and deep-learning-based automatic segmentations were 0.73 and 0.84, respectively; the corresponding MSI values ranged from 0.29 to 0.78 and from 0.44 to 0.91 under different inside-outside levels. CONCLUSIONS: It is feasible to use the MSI to evaluate automatic segmentation results. By setting a penalty coefficient, the MSI can reflect phenomena such as under-delineation and over-delineation and improve the sensitivity of contour similarity evaluation in medical images.


Subjects
Feasibility Studies; Radiotherapy Planning, Computer-Assisted
11.
Article in Chinese | WPRIM | ID: wpr-888233

ABSTRACT

The background of abdominal computed tomography (CT) images is complex, and kidney tumors vary in shape and size and have unclear edges, so segmentation methods applied to whole CT images often fail to segment kidney tumors effectively. To solve these problems, this paper proposes a multi-scale network based on a cascade of 3D U-Net and DeepLabV3+ for kidney tumor segmentation, which uses an atrous-convolution feature pyramid to adaptively control the receptive field. Through the fusion of high-level and low-level features, the segmented edges of large tumors and the segmentation accuracy of small tumors are effectively improved. A total of 210 CT volumes published by the KiTS2019 challenge were used for five-fold cross-validation, and 30 CT volumes collected from Suzhou Science and Technology Town Hospital served as an independent test of the trained segmentation models. In the five-fold cross-validation experiments, the Dice coefficient, sensitivity and precision were 0.7962±0.2741, 0.8245±0.2763 and 0.8051±0.2840, respectively; on the external test set they were 0.8172±0.1100, 0.8296±0.1507 and 0.8318±0.1168, respectively. The results show a marked improvement in segmentation accuracy compared with other semantic segmentation methods.


Subjects
Humans; Kidney Neoplasms/diagnostic imaging; Neural Networks, Computer; Specimen Handling; Tomography, X-Ray Computed
12.
Article in Chinese | WPRIM | ID: wpr-879252

ABSTRACT

Three-dimensional (3D) segmentation of the liver and its tumors on liver computed tomography (CT) has important clinical value for assisting physicians in diagnosis and prognosis. This paper proposes a tumor 3D conditional generative adversarial segmentation network (T3scGAN) based on the conditional generative adversarial network (cGAN), together with a coarse-to-fine 3D automatic segmentation framework, to accurately segment the liver and tumor regions. A total of 130 cases from the 2017 Liver Tumor Segmentation Challenge (LiTS) public dataset were used to train, validate and test the T3scGAN model. The average Dice coefficients for the 3D liver regions in the validation and test sets were 0.963 and 0.961, respectively, while those for the 3D tumor regions were 0.819 and 0.796, respectively. The experimental results show that the proposed T3scGAN model can effectively segment the 3D liver and its tumor regions and can therefore better assist physicians in the accurate diagnosis and treatment of liver cancer.


Subjects
Humans; Image Processing, Computer-Assisted; Liver Neoplasms/diagnostic imaging; Tomography, X-Ray Computed
13.
Article in Chinese | WPRIM | ID: wpr-910486

ABSTRACT

Objective: To evaluate a deep deconvolutional neural network (DDNN) model for automatic segmentation of the target volumes and organs at risk (OARs) in patients with nasopharyngeal carcinoma (NPC). Methods: An end-to-end automatic segmentation model based on the DDNN algorithm was established from the CT images of 800 NPC patients. Ten newly diagnosed NPC patients were allocated to the test set. Using this model, 10 junior physicians contoured the regions of interest (ROIs) for the 10 patients independently with both manual contouring (MC) and DDNN deep learning-assisted contouring (DLAC). The accuracy of ROI contouring was evaluated with the Dice coefficient and the mean distance to agreement (MDTA); the coefficient of variation (CV) and standard distance deviation (SDD) were used to measure inter-observer variability; and the time consumed by each of the two contouring methods was compared. Results: With DLAC, the Dice values of the gross target volume (GTV) and clinical target volume (CTV) were 0.67±0.15 and 0.841±0.032, and the corresponding MDTA values were (0.315±0.23) mm and (0.032±0.098) mm, all significantly better than those in the MC group (all P<0.001). Except for the spinal cord, lens and mandible, DLAC improved the Dice values of the other OARs; the mandible had the highest Dice value and the optic chiasm the lowest. Compared with the MC group, the CV and SDD of the GTV, CTV and OARs were significantly reduced (all P<0.001), and the total contouring time was shortened by 63.7% in the DLAC group (P<0.001). Conclusion: Compared with MC, DLAC is a promising method for obtaining superior accuracy, consistency and efficiency for the GTV, CTV and OARs in NPC patients.

14.
Article in Chinese | WPRIM | ID: wpr-788886

ABSTRACT

Segmentation of organs at risk is an important part of radiotherapy. The current practice of manual segmentation depends on the knowledge and experience of physicians, is very time-consuming, and makes it difficult to ensure accuracy, consistency and repeatability. Therefore, a deep convolutional neural network (DCNN) is proposed for automatic and accurate segmentation of head and neck organs at risk. The data of 496 patients with nasopharyngeal carcinoma were reviewed; 376 cases were randomly selected for the training set, 60 for the validation set and 60 for the test set. Using a three-dimensional (3D) U-Net DCNN combined with two loss functions, Dice Loss and Generalized Dice Loss, an automatic segmentation model for the head and neck organs at risk was trained. The evaluation parameters were the Dice similarity coefficient and the Jaccard distance. The average Dice similarity coefficient over the 19 organs at risk was 0.91 and the average Jaccard distance was 0.15. The results demonstrate that a 3D U-Net DCNN combined with a Dice loss function can be well applied to the automatic segmentation of head and neck organs at risk.
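The training here combines Dice Loss and Generalized Dice Loss; a minimal PyTorch sketch of both losses for multi-label masks (a generic formulation, not the exact implementation used in the paper):

```python
import torch

def soft_dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss averaged over classes.
    pred, target: (batch, classes, D, H, W); pred holds probabilities in [0, 1]."""
    dims = (0, 2, 3, 4)
    intersection = (pred * target).sum(dims)
    denominator = pred.sum(dims) + target.sum(dims)
    dice = (2 * intersection + eps) / (denominator + eps)
    return 1 - dice.mean()

def generalized_dice_loss(pred, target, eps=1e-6):
    """Generalized Dice loss: classes weighted by the inverse square of their volume,
    which counteracts the imbalance between large and small organs."""
    dims = (0, 2, 3, 4)
    weights = 1.0 / (target.sum(dims) ** 2 + eps)
    intersection = (weights * (pred * target).sum(dims)).sum()
    denominator = (weights * (pred.sum(dims) + target.sum(dims))).sum()
    return 1 - 2 * intersection / (denominator + eps)

# toy multi-label example with 3 organ channels
pred = torch.rand(2, 3, 16, 32, 32)           # e.g. sigmoid outputs of a 3D U-Net
target = (torch.rand(2, 3, 16, 32, 32) > 0.7).float()
print(soft_dice_loss(pred, target).item(), generalized_dice_loss(pred, target).item())
```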

15.
Article in Chinese | WPRIM | ID: wpr-828120

ABSTRACT

Previous automatic target-volume segmentation networks have treated the target volume as an independent region. In contrast, this paper proposes a stacked neural network that uses the position and shape information of the organs surrounding the target volume, through the superposition of multiple networks and the fusion of spatial position information, to constrain the shape and position of the target and thereby improve segmentation accuracy on medical images. Taking Graves' ophthalmopathy as an example, the left and right radiotherapy target volumes were segmented by the stacked neural network built on a fully convolutional neural network. The volume Dice similarity coefficient (DSC) and the bidirectional Hausdorff distance (HD) were calculated against the target volumes manually delineated by the physician. Compared with the fully convolutional neural network alone, the stacked network increased the volume DSC on the left and right sides by 1.7% and 3.4%, respectively, while the bidirectional HD on the two sides decreased by 0.6. The results show that the stacked neural network improves the agreement between the automatic segmentation and the physician's delineation of the target volume while reducing segmentation errors in small regions, and it can effectively improve the accuracy of automatic delineation of the radiotherapy target volume in Graves' ophthalmopathy.


Subjects
Algorithms; Image Processing, Computer-Assisted; Neural Networks, Computer; Tomography, X-Ray Computed
16.
Article in Chinese | WPRIM | ID: wpr-828165

ABSTRACT

To apply deep learning to the automatic segmentation of organs at risk in medical images, we combine the DenseNet and V-Net architectures into a Dense V-Network for the automatic segmentation of three-dimensional computed tomography (CT) images, in order to address the degradation and vanishing-gradient problems that arise when optimizing three-dimensional convolutional neural networks with insufficient training samples. The algorithm is applied to the delineation of pelvic organs at risk, and three representative evaluation parameters are used to quantitatively evaluate the segmentation performance. The clinical results show that the Dice similarity coefficients of the bladder, small intestine, rectum, femoral heads and spinal cord were all above 0.87 (average 0.90); their Jaccard distances were within 0.23 (average 0.18); and, except for the small intestine, the Hausdorff distances of the other organs were less than 0.9 cm (average 0.62 cm). The Dense V-Network is thus shown to segment pelvic organs at risk accurately.


Subjects
Humans; Algorithms; Image Processing, Computer-Assisted; Imaging, Three-Dimensional; Neural Networks, Computer; Organs at Risk; Pelvis; Tomography, X-Ray Computed
17.
Article in Chinese | WPRIM | ID: wpr-868579

ABSTRACT

Objective: To integrate a deep learning algorithm with a commercial planning system and to establish and validate an automatic segmentation platform for the clinical target volume (CTV) and organs at risk (OARs) in breast cancer patients. Methods: A total of 400 patients with left- or right-sided breast cancer receiving radiotherapy after breast-conserving surgery at the Cancer Hospital, CAMS, were enrolled. A deep residual convolutional neural network was used to train CTV and OAR segmentation models, and an end-to-end deep learning-based automatic segmentation platform (DLAS) was established. The accuracy of DLAS delineation was verified in 42 left-sided and 40 right-sided breast cancer patients. The overall Dice similarity coefficient (DSC) and the average Hausdorff distance (AHD) were calculated, and the relationship between the relative layer position and the per-layer DSC value (DSC_s) was analyzed layer by layer. Results: The mean overall DSC and AHD of the global CTV in left/right breast cancer patients were 0.87/0.88 and 9.38/8.71 mm. The mean overall DSC and AHD of all OARs in left/right breast cancer patients ranged from 0.86 to 0.97 and from 0.89 mm to 9.38 mm. In the layer-by-layer analysis, layers with DSC_s of 0.90 or above required only slight or no modification by the physician; such layers accounted for approximately 44.7% of the CTV delineation and for 50.9%-89.6% of the OAR delineations. For DSC_s < 0.7, the DSC_s values of the CTV and of all regions of interest other than the spinal cord decreased markedly in the boundary regions on both sides (relative layer positions 0-0.2 and 0.8-1.0), and the decrease became more pronounced toward the edges. The spinal cord was delineated over its full length, and no significant decrease in DSC_s was observed in any particular region. Conclusions: The end-to-end automatic segmentation platform based on deep learning can integrate the breast cancer segmentation models and achieve an excellent automatic segmentation effect. In the boundary regions at the superior and inferior ends, the consistency of the delineation decreases more obviously and needs further improvement.
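The layer-by-layer analysis reported above pairs a per-slice DSC (DSC_s) with each slice's relative position in the structure; a minimal sketch of that bookkeeping (the relative-position convention and the toy masks are assumptions, not details from the study):

```python
import numpy as np

def dice(a, b):
    s = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / s if s else np.nan

def layerwise_dsc(auto_mask, manual_mask):
    """DSC per axial slice, paired with the slice's relative position (0 = inferior
    boundary of the union of both contours, 1 = superior boundary)."""
    occupied = np.where(auto_mask.any(axis=(1, 2)) | manual_mask.any(axis=(1, 2)))[0]
    lo, hi = occupied.min(), occupied.max()
    rel_pos, dsc_s = [], []
    for z in occupied:
        rel_pos.append((z - lo) / max(hi - lo, 1))
        dsc_s.append(dice(auto_mask[z], manual_mask[z]))
    return np.array(rel_pos), np.array(dsc_s)

# toy 3D masks: two offset boxes standing in for auto and manual contours
auto = np.zeros((30, 64, 64), bool); auto[5:25, 20:40, 20:40] = True
manual = np.zeros((30, 64, 64), bool); manual[6:26, 22:42, 20:40] = True
pos, dsc_s = layerwise_dsc(auto, manual)
print(f"fraction of layers with DSC_s >= 0.9: {np.mean(dsc_s >= 0.9):.2f}")
```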

18.
Article in Chinese | WPRIM | ID: wpr-745298

ABSTRACT

Objective: To evaluate the accuracy and validate the feasibility of auto-segmentation based on self-registration and Atlas methods in adaptive radiotherapy for cervical cancer using MIM Maestro software. Methods: The CT scans and delineations of 60 cervical cancer patients were used to establish the Atlas template database. Planning CT (pCT) and replanning CT (rCT) images were randomly selected from 15 patients, and the clinical target volume (CTV) and organs at risk (OARs) were contoured by an experienced radiation oncologist. The rCT images of the 15 patients were auto-contoured using Atlas-based auto-segmentation (Atlas group), and contours were also mapped from the pCT to the rCT images by rigid and by deformable image registration (rigid and deformable groups). The time required by each of the three auto-segmentation methods was recorded. The similarity between the auto-contours and the reference contours was assessed with the Dice similarity coefficient (DSC), overlap index (OI), average Hausdorff distance (AHD) and deviation of the centroid (DC), and the results were compared among the three groups by one-way analysis of variance. Results: The mean time was 89.2 s, 22.4 s and 42.6 s in the Atlas, rigid and deformable groups, respectively. The DSC, OI and AHD for the CTV and rectum in the rigid and deformable groups differed significantly from those in the Atlas group (all P<0.001), as did the OI for the intestine. The mean DSC for the CTV was 0.89 in both the rigid and deformable groups versus 0.76 in the Atlas group. The best delineation of the bladder, pelvis and femoral heads was obtained in the deformable group. Conclusions: All three auto-segmentation methods can contour the CTV and OARs automatically and rapidly, and the deformable method performs better than the rigid and Atlas methods.
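The three auto-segmentation groups above are compared with one-way analysis of variance; a minimal SciPy sketch of that comparison for a per-patient similarity metric (the DSC values below are placeholders, not data from the study):

```python
from scipy.stats import f_oneway

# placeholder per-patient CTV DSC values for the three auto-segmentation methods
dsc_atlas      = [0.74, 0.78, 0.75, 0.77, 0.76, 0.74, 0.79, 0.75]
dsc_rigid      = [0.88, 0.90, 0.87, 0.91, 0.89, 0.88, 0.90, 0.89]
dsc_deformable = [0.89, 0.91, 0.88, 0.92, 0.90, 0.89, 0.91, 0.90]

# one-way ANOVA across the three groups; a small p-value indicates that at least
# one method differs, after which pairwise post-hoc tests would normally follow
f_stat, p_value = f_oneway(dsc_atlas, dsc_rigid, dsc_deformable)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```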

19.
Article in Chinese | WPRIM | ID: wpr-687605

ABSTRACT

Liver cancer is a common malignant tumor of the digestive system, and computed tomography (CT) currently plays an important role in its diagnosis and treatment. Segmentation of tumor lesions on CT is therefore critical for clinical diagnosis and treatment. Because manual segmentation is inefficient and subjective, automatic and accurate segmentation based on advanced computational techniques is becoming increasingly popular. In this review, we summarize the research progress in automatic segmentation of liver cancer lesions on CT scans. By comparing and analyzing experimental results, the review evaluates the various methods objectively, so that researchers in related fields can better understand the current state of CT-based liver cancer segmentation.

20.
Article in English | WPRIM | ID: wpr-55943

ABSTRACT

OBJECTIVE: Several modalities are available for volumetric measurement of intracranial aneurysms. We discuss the challenges involved in manual segmentation and analyze the application of alternative methods, using automatic segmentation and geometric formulae, to measure aneurysm volumes and coil packing density. METHODS: The volumes and morphology of 38 aneurysms treated with endovascular coiling at a single center were measured with three-dimensional rotational angiography (3DRA) reconstruction software using automatic segmentation. Aneurysm volumes were also calculated from the height, width, depth, neck size and assumed shape of each aneurysm in the 3DRA images using simple geometric formulae. The aneurysm volumes were dichotomized as "small" or "large" using the median volume of the studied population measured by automatic segmentation (54 mm³) as the cut-off value for further statistical analysis. RESULTS: A greater proportion of aneurysms were categorized as "small" when geometric formulae were applied. The median aneurysm volume was 54.5 mm³ by 3DRA software and 30.6 mm³ by the mathematical equations, an underestimation of aneurysm volume with a resultant overestimation of the calculated coil packing density (p = 0.002). CONCLUSION: Caution must be exercised when applying simple geometric formulae in the management of intracranial aneurysms, as volumes may be underestimated and packing densities falsely elevated. Future research should focus on validating automatic segmentation for volumetric measurement and improving its accuracy to enhance its application in clinical practice.
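The geometric alternative assessed here approximates the aneurysm sac as an ellipsoid from its height, width and depth and derives packing density from the total inserted coil volume; a brief sketch of those standard formulas (the aneurysm dimensions and coil sizes below are illustrative, not measurements from this series):

```python
import math

def ellipsoid_volume(height_mm, width_mm, depth_mm):
    """Aneurysm volume (mm^3) assuming an ellipsoidal sac: V = (pi/6) * h * w * d."""
    return math.pi / 6.0 * height_mm * width_mm * depth_mm

def coil_volume(primary_diameter_mm, length_cm):
    """Volume of one coil modelled as a cylinder of its primary (wire-wound) diameter."""
    radius = primary_diameter_mm / 2.0
    return math.pi * radius ** 2 * (length_cm * 10.0)   # convert coil length to mm

def packing_density(aneurysm_volume_mm3, coils):
    """Packing density = total coil volume / aneurysm volume."""
    total = sum(coil_volume(d, l) for d, l in coils)
    return total / aneurysm_volume_mm3

# illustrative 5 x 4 x 4 mm aneurysm filled with three coils (primary diameter in mm, length in cm)
v_aneurysm = ellipsoid_volume(5, 4, 4)                  # about 41.9 mm^3
coils = [(0.254, 8), (0.254, 6), (0.25, 4)]
print(f"aneurysm volume = {v_aneurysm:.1f} mm^3, "
      f"packing density = {packing_density(v_aneurysm, coils):.1%}")
```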


Subjects
Aneurysm; Angiography; Intracranial Aneurysm; Neck