Results 1 - 2 of 2
1.
Article in English | MEDLINE | ID: mdl-38083025

ABSTRACT

CT scans of the head and neck have multiple clinical uses, and simulating deformation of these CT scans allows for predicting patient motion and for data augmentation in machine-learning methods. Current methods for creating patient-derived deformed CT scans require multiple scans or use unrealistic head and neck motion. This paper describes the CTHeadDeformation software package, which allows realistic synthetic deformation of head and neck CT scans for small amounts of motion. CTHeadDeformation is a Python-based package that uses a kinematics-based approach, built on anatomical landmarks and rigid/non-rigid registration, to create a realistic patient-derived deformed CT scan. CTHeadDeformation is also designed for simple clinical implementation. The package was demonstrated on a head and neck CT scan of one patient. The CT scan was deformed in the anterior-posterior, superior-inferior, and left-right directions; internal organ motion and more complex combined motions were also simulated. The results showed that the patient's CT scan could be deformed in a way that preserved the shape and location of the anatomy.

Clinical Relevance: This method allows for the realistic simulation of head and neck motion in CT scans. Clinical applications include simulating how patient motion affects radiation therapy treatment effectiveness. The CTHeadDeformation software can also be used to train machine-learning networks that are robust to patient motion, or to generate ground-truth images for imaging or segmentation grand challenges.
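The package's own code is not reproduced in this listing, so the following is a minimal sketch of the kinematics-based idea under stated assumptions: a small rigid head rotation about an anatomical landmark, implemented directly with SimpleITK rather than through the CTHeadDeformation API. The file names, pivot point, and rotation angle are placeholder assumptions.

import numpy as np
import SimpleITK as sitk

# Hypothetical input scan; any 3D CT volume readable by SimpleITK would do.
ct = sitk.ReadImage("head_neck_ct.nii.gz")

# Pivot landmark in physical (mm) coordinates. A real workflow would pick an
# anatomical point (e.g., on the cervical spine); here, the volume centre.
centre_index = [(s - 1) / 2.0 for s in ct.GetSize()]
pivot = ct.TransformContinuousIndexToPhysicalPoint(centre_index)

# Rigid rotation of about 3 degrees around the patient left-right axis.
rigid = sitk.Euler3DTransform()
rigid.SetCenter(pivot)
rigid.SetRotation(np.deg2rad(3.0), 0.0, 0.0)

# Resample maps each output voxel back through the transform into the input
# scan; voxels that fall outside the scan are filled with air (-1000 HU).
deformed = sitk.Resample(ct, ct, rigid, sitk.sitkLinear, -1000.0)
sitk.WriteImage(deformed, "head_neck_ct_rotated.nii.gz")

In the package as the abstract describes it, such landmark-driven rigid motion is combined with rigid/non-rigid registration so that the surrounding soft tissue deforms plausibly rather than rotating as a single block.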


Subjects
Head; Image Processing, Computer-Assisted; Humans; Biomechanical Phenomena; Image Processing, Computer-Assisted/methods; Head/diagnostic imaging; Neck/diagnostic imaging; Tomography, X-Ray Computed
2.
Med Phys; 50(7): 4206-4219, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37029643

ABSTRACT

BACKGROUND: Using radiation therapy (RT) to treat head and neck (H&N) cancers requires precise targeting of the tumor to avoid damaging the surrounding healthy organs. Immobilisation masks and planning target volume margins are used to mitigate patient motion during treatment; however, motion can still occur. Patient motion during RT can lead to decreased treatment effectiveness and a higher chance of treatment-related side effects. Tracking tumor motion would enable motion compensation during RT, leading to more accurate dose delivery.

PURPOSE: The purpose of this paper is to develop a method to detect and segment the tumor in kV images acquired during RT. Unlike previous tumor segmentation methods for kV images, in this paper a process for generating realistic synthetic CT deformations was developed to augment the training data and make the segmentation method robust to patient motion. Detecting the tumor in 2D kV images is a necessary step toward 3D tracking of the tumor position during treatment.

METHOD: In this paper, a conditional generative adversarial network (cGAN) is presented that can detect and segment the gross tumor volume (GTV) in kV images acquired during H&N RT. Retrospective data from 15 H&N cancer patients obtained from the Cancer Imaging Archive were used to train and test patient-specific cGANs. The training data consisted of digitally reconstructed radiographs (DRRs) generated from each patient's planning CT and contoured GTV. The training data were augmented with additional DRRs generated from synthetically deformed CTs (39 600 DRRs per patient in total, or 25 200 DRRs for nasopharyngeal patients) containing realistic patient motion. The CTs were deformed using a novel method based on simulating head rotation and internal tumor motion. The testing dataset consisted of 1080 DRRs per patient, obtained by deforming the planning CT and GTV at magnitudes different from those used in the training data. The accuracy of the generated segmentations was evaluated by measuring the segmentation centroid error, Dice similarity coefficient (DSC), and mean surface distance (MSD). This paper evaluated the hypothesis that, when patient motion occurs, using a cGAN to segment the GTV produces a more accurate segmentation than the no-tracking segmentation from the original contoured GTV, the current standard of care. This hypothesis was tested using the one-tailed Mann-Whitney U-test.

RESULTS: The magnitude of our cGAN segmentation centroid error was (mean ± standard deviation) 1.1 ± 0.8 mm, and the DSC and MSD values were 0.90 ± 0.03 and 1.6 ± 0.5 mm, respectively. Our cGAN segmentation method reduced the segmentation centroid error (p < 0.001) and MSD (p = 0.031) compared to the no-tracking segmentation, but did not significantly increase the DSC (p = 0.294).

CONCLUSIONS: The accuracy of our cGAN segmentation method demonstrates the feasibility of this method for H&N cancer patients during RT. Accurate segmentation of H&N tumors would allow intrafraction monitoring methods to compensate for tumor motion during treatment, ensuring more accurate dose delivery and enabling better outcomes for H&N cancer patients.
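As a concrete reading of the evaluation, here is a minimal sketch (not the paper's code) of the three reported metrics and the one-tailed Mann-Whitney U-test, assuming binary segmentation masks stored as NumPy arrays with a known pixel spacing in millimetres; all function and variable names are placeholders.

import numpy as np
from scipy import ndimage
from scipy.stats import mannwhitneyu

def centroid_error_mm(pred, gt, spacing_mm):
    """Euclidean distance between mask centroids, in millimetres."""
    c_pred = np.array(ndimage.center_of_mass(pred))
    c_gt = np.array(ndimage.center_of_mass(gt))
    return float(np.linalg.norm((c_pred - c_gt) * np.asarray(spacing_mm)))

def dice(pred, gt):
    """Dice similarity coefficient of two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def mean_surface_distance_mm(pred, gt, spacing_mm):
    """Symmetric mean distance between mask boundaries, in millimetres.

    Assumes both masks are non-empty; a production version would guard
    against empty surfaces.
    """
    def surface(mask):
        # Boundary voxels: in the mask but removed by one erosion step.
        eroded = ndimage.binary_erosion(mask)
        return np.logical_and(mask, np.logical_not(eroded))

    s_pred, s_gt = surface(pred), surface(gt)
    # Distance from every voxel to the nearest surface voxel of each mask.
    d_to_gt = ndimage.distance_transform_edt(~s_gt, sampling=spacing_mm)
    d_to_pred = ndimage.distance_transform_edt(~s_pred, sampling=spacing_mm)
    return float((d_to_gt[s_pred].sum() + d_to_pred[s_gt].sum())
                 / (s_pred.sum() + s_gt.sum()))

def motion_hypothesis_test(errors_cgan, errors_none):
    """One-tailed Mann-Whitney U-test of the paper's hypothesis: cGAN
    centroid errors are stochastically smaller than no-tracking errors."""
    return mannwhitneyu(errors_cgan, errors_none, alternative="less")

With per-image error samples from both methods, motion_hypothesis_test would return the U statistic and the p-value corresponding to the comparisons reported in the RESULTS paragraph.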


Subjects
Deep Learning; Head and Neck Neoplasms; Humans; Retrospective Studies; Head and Neck Neoplasms/diagnostic imaging; Head and Neck Neoplasms/radiotherapy; Radiography; Tomography, X-Ray Computed; Image Processing, Computer-Assisted/methods