1.
J Med Imaging (Bellingham) ; 11(2): 024009, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38595327

ABSTRACT

Purpose: Segmentation of the prostate and surrounding organs at risk from computed tomography is required for radiation therapy treatment planning. We propose an automatic two-step deep learning-based segmentation pipeline that consists of an initial multi-organ segmentation network for organ localization followed by organ-specific fine segmentation. Approach: Initial segmentation of all target organs is performed using a hybrid convolutional-transformer model, the axial cross-attention UNet. The output from this model allows for region-of-interest computation and is used to crop tightly around individual organs for organ-specific fine segmentation. Information from this network is also propagated to the fine segmentation stage through an image enhancement module, highlighting regions of interest in the original image that might be difficult to segment. Organ-specific fine segmentation is performed on these cropped and enhanced images to produce the final output segmentation. Results: We apply the proposed approach to segment the prostate, bladder, rectum, seminal vesicles, and femoral heads from male pelvic computed tomography (CT). When tested on a held-out test set of 30 images, our two-step pipeline outperformed other deep learning-based multi-organ segmentation algorithms, achieving an average Dice similarity coefficient (DSC) of 0.836±0.071 (prostate), 0.947±0.038 (bladder), 0.828±0.057 (rectum), 0.724±0.101 (seminal vesicles), and 0.933±0.020 (femoral heads). Conclusions: Our results demonstrate that a two-step segmentation pipeline with initial multi-organ segmentation and additional fine segmentation can delineate male pelvic CT organs well. The utility of this additional layer of fine segmentation is most noticeable in challenging cases, where our two-step pipeline produces markedly more accurate and less error-prone results than other state-of-the-art methods.
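The coarse-to-fine idea described above can be illustrated with a short sketch: the coarse label map drives per-organ region-of-interest cropping, and a dedicated fine model segments each crop before the result is pasted back into the full volume. This is a minimal sketch assuming NumPy arrays and illustrative callables (coarse_model, fine_models, roi_bounding_box are placeholder names); it omits the paper's axial cross-attention UNet and image-enhancement module and is not the authors' implementation.

```python
import numpy as np

def roi_bounding_box(coarse_mask, margin_vox=8):
    """Tight bounding box around a binary organ mask, padded by a voxel margin."""
    coords = np.argwhere(coarse_mask > 0)
    if coords.size == 0:
        return None  # organ not found by the coarse stage
    lo = np.maximum(coords.min(axis=0) - margin_vox, 0)
    hi = np.minimum(coords.max(axis=0) + margin_vox + 1, coarse_mask.shape)
    return tuple(slice(int(a), int(b)) for a, b in zip(lo, hi))

def two_step_segment(ct_volume, coarse_model, fine_models, organ_ids, margin_vox=8):
    """Coarse multi-organ labeling, then per-organ fine segmentation on cropped ROIs."""
    coarse_labels = coarse_model(ct_volume)            # (D, H, W) integer label map
    final_labels = np.zeros_like(coarse_labels)
    for organ_id in organ_ids:
        roi = roi_bounding_box(coarse_labels == organ_id, margin_vox)
        if roi is None:
            continue
        fine_mask = fine_models[organ_id](ct_volume[roi])   # binary mask on the crop
        final_labels[roi][fine_mask > 0] = organ_id          # paste back into the full volume
    return final_labels
```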

2.
Med Phys ; 51(6): 3972-3984, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38669457

ABSTRACT

BACKGROUND: Volumetric modulated arc therapy (VMAT) machine parameter optimization (MPO) remains computationally expensive and sensitive to input dose objectives, creating challenges for both manual and automatic planning. Reinforcement learning (RL) involves machine learning through extensive trial and error and has demonstrated performance exceeding that of humans and existing algorithms in several domains. PURPOSE: To develop and evaluate an RL approach for VMAT MPO for localized prostate cancer that rapidly and automatically generates deliverable VMAT plans for a clinical linear accelerator (linac), and to compare the resultant dosimetry to clinical plans. METHODS: We extended our previous RL approach to enable VMAT MPO of a 3D beam model for a clinical linac through a policy network. It accepts an input state describing the current control point and predicts continuous machine parameters for the next control point, which are used to update the input state, repeating until plan termination. RL training was conducted to minimize a dose-based cost function for a prescription of 60 Gy in 20 fractions using CT scans and contours from 136 retrospective localized prostate cancer patients, 20 of whom had existing plans used to initialize training. Data augmentation was employed to mitigate over-fitting, and parameter exploration was achieved using Gaussian perturbations. Following training, RL VMAT was applied to an independent cohort of 15 patients, and the resultant dosimetry was compared to clinical plans. We also combined the RL approach with our clinical treatment planning system (TPS) to automate final plan refinement, creating the potential for manual review and edits as required for clinical use. RESULTS: RL training was conducted for 5000 iterations, producing 40 000 plans during exploration. The mean ± SD execution time to produce deliverable VMAT plans in the test cohort was 3.3 ± 0.5 s; these plans were automatically refined in the TPS in an additional 77.4 ± 5.8 s. When normalized to provide equivalent target coverage, the RL+TPS plans provided a similar mean ± SD overall maximum dose of 63.2 ± 0.6 Gy and a lower mean rectum dose of 17.4 ± 7.4 Gy, compared to 63.9 ± 1.5 Gy (p = 0.061) and 21.0 ± 6.0 Gy (p = 0.024) for the clinical plans. CONCLUSIONS: An approach for VMAT MPO using RL with a clinical linac model was developed and applied to automatically generate deliverable plans for localized prostate cancer patients; when combined with the clinical TPS, it shows potential to rapidly generate high-quality plans. The RL VMAT approach shows promise for discovering advanced linac control policies through trial and error; algorithm limitations and future directions are identified and discussed.
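The control-point-by-control-point rollout with Gaussian exploration described in the methods can be sketched as below. This is a minimal, hedged sketch: policy, transition, and the names rollout_plan and explore_std are illustrative placeholders, and the dose-based cost evaluation and training update are not shown; it is not the authors' implementation.

```python
import numpy as np

def rollout_plan(policy, transition, initial_state, n_control_points, explore_std=0.0, rng=None):
    """Roll out a VMAT plan one control point at a time.

    At each step the policy maps the current state to continuous machine parameters
    for the next control point; during training, Gaussian noise on the action
    (explore_std > 0) provides the trial-and-error exploration.
    """
    rng = rng if rng is not None else np.random.default_rng()
    state = np.asarray(initial_state, dtype=np.float64)
    control_points = []
    for _ in range(n_control_points):
        action = np.asarray(policy(state), dtype=np.float64)   # continuous machine parameters
        if explore_std > 0:
            action = action + rng.normal(0.0, explore_std, size=action.shape)
        control_points.append(action)
        state = transition(state, action)    # update the input state from the chosen parameters
    return control_points
```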


Subjects
Deep Learning; Prostatic Neoplasms; Radiotherapy Planning, Computer-Assisted; Radiotherapy, Intensity-Modulated; Male; Humans; Prostatic Neoplasms/radiotherapy; Prostatic Neoplasms/diagnostic imaging; Radiotherapy Planning, Computer-Assisted/methods; Radiotherapy Dosage; Machine Learning
3.
J Appl Clin Med Phys ; 25(3): e14310, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38373283

ABSTRACT

PURPOSE: Radiation therapy (RT) of pediatric brain cancer is known to be associated with long-term neurocognitive deficits. Although the target and organs at risk (OARs) are contoured as part of treatment planning, other structures linked to cognitive function are often not included. This paper introduces a novel automatic segmentation tool specifically designed for the unique challenges posed by pediatric patients undergoing brain RT, as well as its seamless integration into the existing clinical workflow. METHODS AND MATERIALS: Images of 47 pediatric brain cancer patients aged 1 to 20 years and 33 healthy two-year-old infants were used to train a vision transformer, UNesT, for the segmentation of five brain OARs. The trained model was then incorporated into the clinical workflow via DICOM connections between a treatment planning system (TPS) and a server hosting the trained model, such that scans are sent from the TPS to the server, automatically segmented, and sent back to the TPS for treatment planning. RESULTS: The proposed automatic segmentation framework achieved median Dice similarity coefficients of 0.928 (frontal white matter), 0.908 (corpus callosum), 0.933 (hippocampi), 0.819 (temporal lobes), and 0.960 (brainstem), with a mean ± SD run time of 1.8 ± 0.67 s over 20 test cases. CONCLUSIONS: The pediatric brain segmentation tool showed promising performance on five OARs linked to neurocognitive function and can easily be extended to additional structures. The proposed clinical integration enables easy access to the tool from clinical platforms and minimizes disruption to the existing workflow while maximizing its benefits.
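Since the results above (and those of record 1) are reported as Dice similarity coefficients, a short sketch of the metric itself may help; dice_coefficient and its arguments are illustrative names, not part of the published tool.

```python
import numpy as np

def dice_coefficient(pred, truth, eps=1e-8):
    """DSC = 2 * |A intersect B| / (|A| + |B|) for two binary masks of equal shape."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    return float(2.0 * intersection / (pred.sum() + truth.sum() + eps))

# Identical masks give a DSC of 1.0; disjoint masks give 0.0.
```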


Subjects
Brain Neoplasms; Deep Learning; Humans; Child; Infant; Child, Preschool; Adolescent; Young Adult; Adult; Workflow; Image Processing, Computer-Assisted/methods; Organs at Risk; Radiotherapy Planning, Computer-Assisted/methods; Brain Neoplasms/diagnostic imaging; Brain Neoplasms/radiotherapy; Brain/diagnostic imaging
4.
Int J Comput Assist Radiol Surg ; 17(5): 877-883, 2022 May.
Article in English | MEDLINE | ID: mdl-35364774

ABSTRACT

PURPOSE: Intra-retinal delivery of novel sight-restoring therapies will require the precision of robotic systems accompanied by excellent visualisation of retinal layers. Intra-operative optical coherence tomography (iOCT) provides cross-sectional retinal images in real time, but at the cost of image quality that is insufficient for intra-retinal therapy delivery. This paper proposes a super-resolution methodology that improves iOCT image quality by leveraging the spatiotemporal consistency of incoming iOCT video streams. METHODS: To overcome the absence of ground-truth high-resolution (HR) images, we first generate HR iOCT images by fusing spatially aligned iOCT video frames. Then, we automatically assess the quality of the HR images on key retinal layers using a deep semantic segmentation model. Finally, we use image-to-image translation models (Pix2Pix and CycleGAN) to enhance the quality of low-resolution (LR) images via quality transfer from the estimated HR domain. RESULTS: Our proposed methodology generates iOCT images of improved quality according to both full-reference and no-reference metrics. A qualitative study with expert clinicians also confirms the improvement in the delineation of pertinent layers and in the reduction of artefacts. Furthermore, our approach outperforms conventional denoising filters and the learning-based state of the art. CONCLUSIONS: The results indicate that learning-based methods using the HR domain estimated through our pipeline can be used to enhance iOCT image quality. The proposed method can therefore computationally augment the capabilities of iOCT imaging, helping this modality support the vitreoretinal surgical interventions of the future.
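The first step described in the methods, fusing spatially aligned frames from the iOCT video stream to estimate an HR target, can be sketched as below. The fusion rule shown (a per-pixel median) is an illustrative assumption rather than the paper's exact operator, and the frame-alignment step and the Pix2Pix/CycleGAN quality-transfer stage are not shown.

```python
import numpy as np

def fuse_aligned_frames(frames):
    """Fuse a stack of spatially aligned iOCT frames (N, H, W) into one higher-quality image.

    A per-pixel median is robust to speckle outliers; a mean is a common alternative.
    """
    frames = np.asarray(frames, dtype=np.float32)
    if frames.ndim != 3:
        raise ValueError("expected a stack of aligned 2D frames with shape (N, H, W)")
    return np.median(frames, axis=0)
```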


Subjects
Retina; Tomography, Optical Coherence; Cross-Sectional Studies; Humans; Retina/diagnostic imaging; Retina/surgery; Slit Lamp; Tomography, Optical Coherence/methods