Results 1 - 20 of 94
1.
Radiother Oncol; 176: 101-107, 2022 Sep 24.
Article in English | MEDLINE | ID: mdl-36167194

ABSTRACT

BACKGROUND AND PURPOSE: This study investigates how accurately our deep learning (DL) dose prediction models for intensity-modulated radiotherapy (IMRT) and pencil beam scanning (PBS) treatments, when chained with normal tissue complication probability (NTCP) models, identify esophageal cancer patients who are at high risk of toxicity and should be switched to proton therapy (PT). MATERIALS AND METHODS: Two U-Nets were created, one for photon (XT) and one for proton (PT) plans. To estimate the dose distribution for each patient, they were trained on a database of 40 uniformly planned patients using cross-validation and a circulating test set. These models were combined with an NTCP model for postoperative pulmonary complications. The NTCP model used the mean lung dose, age, histology type, and body mass index as predictor variables. The treatment choice was then made by applying a ΔNTCP threshold between XT and PT plans: patients with ΔNTCP ≥ 10% were referred to PT. RESULTS: Our DL models predict dose distributions with a mean error on the mean lung dose (MLD) of 1.14 ± 0.93% for XT and 0.66 ± 0.48% for PT. The complete automated workflow (DL chained with NTCP) achieved 100% accuracy in patient referral. The average residual (ΔNTCP ground truth − ΔNTCP predicted) is 1.43 ± 1.49%. CONCLUSION: This study evaluates our DL dose prediction models in a broader patient-referral context and demonstrates their ability to support clinical decisions.
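The referral rule above can be sketched in a few lines. The snippet below assumes a generic logistic NTCP model; the coefficients and field names are illustrative placeholders, not the fitted values of the study's model.

```python
import math

def ntcp(mean_lung_dose, age, adeno, bmi,
         coefs=(-6.0, 0.15, 0.04, 0.8, 0.05)):
    # Generic logistic NTCP model; coefficient values are placeholders.
    b0, b_mld, b_age, b_hist, b_bmi = coefs
    s = b0 + b_mld * mean_lung_dose + b_age * age + b_hist * adeno + b_bmi * bmi
    return 1.0 / (1.0 + math.exp(-s))

def refer_to_protons(patient, threshold=0.10):
    # Refer when the predicted NTCP reduction of the proton plan over the
    # photon plan reaches the 10% threshold used in the study.
    ntcp_xt = ntcp(patient["mld_xt"], patient["age"], patient["adeno"], patient["bmi"])
    ntcp_pt = ntcp(patient["mld_pt"], patient["age"], patient["adeno"], patient["bmi"])
    delta = ntcp_xt - ntcp_pt
    return delta >= threshold, delta
```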

2.
Comput Biol Med; 148: 105609, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35803749

ABSTRACT

Arc proton therapy (ArcPT) is an emerging modality in cancer treatment. It delivers the proton beams following a sequence of irradiation angles while the gantry rotates continuously around the patient. Compared to conventional proton treatments (intensity-modulated proton therapy, IMPT), the number of beams is significantly increased, bringing new degrees of freedom that can lead to better cancer care. However, optimizing such treatment plans becomes more complex, and several alternative statements of the problem can be considered and compared. Three such problem statements, distinct in their mathematical formulation and properties, are investigated and applied to the ArcPT optimization problem. They make use of (i) the fast iterative shrinkage-thresholding algorithm (FISTA), (ii) local search (LS), and (iii) mixed-integer programming (MIP). The treatment plans obtained with these methods are compared among themselves, but also with IMPT and an existing state-of-the-art method: Spot-Scanning Proton Arc (SPArc). MIP stands out on small-scale problems, both in terms of dose quality and delivery time efficiency. FISTA achieves high dose quality but struggles to optimize the energy sequence, while LS shows largely the opposite behavior. This detailed study describes independent approaches to the ArcPT problem; depending on the clinical case, one should be chosen carefully over the others. This paper gives the first formal definition of the problem at stake, as well as a first reference benchmark. Finally, empirical conclusions are drawn, based on realistic assumptions.
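Of the three solvers named, FISTA is the most self-contained to illustrate. The sketch below is the textbook FISTA iteration for an L1-regularized least-squares problem, not the ArcPT-specific formulation (which additionally handles the energy-layer sequence); `fista_lasso` and its arguments are our own names.

```python
import numpy as np

def fista_lasso(A, b, lam, n_iter=200):
    # FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1: gradient step on the
    # smooth term, soft-thresholding (shrinkage) on the L1 term, plus
    # Nesterov-style momentum.
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y = x.copy()
    t = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)
        z = y - grad / L
        x_new = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x
```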


Subjects
Proton Therapy, Intensity-Modulated Radiotherapy, Algorithms, Humans, Protons, Computer-Assisted Radiotherapy Planning
3.
Phys Med Biol; 67(11), 2022 May 27.
Article in English | MEDLINE | ID: mdl-35421855

ABSTRACT

The interest in machine learning (ML) has grown tremendously in recent years, partly due to the performance leap brought by new deep learning techniques, convolutional neural networks for images, increased computational power, and the wider availability of large datasets. Most fields of medicine follow that trend and, notably, radiation oncology is among those at the forefront, with a long tradition of using digital images and fully computerized workflows. ML models are driven by data, and in contrast with many statistical or physical models, they can be very large and complex, with countless generic parameters. This inevitably raises two questions: the tight dependence between the models and the datasets that feed them, and the interpretability of the models, which scales with their complexity. Any problem in the data used to train a model will later be reflected in its performance. This, together with the low interpretability of ML models, makes their implementation into the clinical workflow particularly difficult. Building tools for risk assessment and quality assurance of ML models must therefore address two main points: interpretability and data-model dependency. After a joint introduction to both radiation oncology and ML, this paper reviews the main risks and current solutions when applying the latter to workflows in the former. Risks associated with data and models, as well as their interaction, are detailed. Next, the core concepts of interpretability, explainability, and data-model dependency are formally defined and illustrated with examples. Afterwards, a broad discussion goes through key applications of ML in radiation oncology workflows, as well as vendors' perspectives for the clinical implementation of ML.


Subjects
Radiation Oncology, Machine Learning, Neural Networks (Computer)
4.
Phys Med; 89: 93-103, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34358755

ABSTRACT

INTRODUCTION: Monte Carlo (MC) algorithms provide accurate dose calculation by simulating the delivery and interaction of many particles through the patient geometry. Fast MC simulations using a large number of particles are desirable, as they can lead to reliable clinical decisions. In this work, we assume that faster simulations with fewer particles can approximate slower ones by denoising them with deep learning. MATERIALS AND METHODS: We use the mean squared error (MSE) as loss function to train two networks (sNet and dUNet), with 2.5D and 3D setups considering volumes of 7 and 24 slices. Our models are trained on proton therapy MC dose distributions of six different tumor sites acquired from 50 patients. We provide the networks with input MC dose distributions simulated using 1 × 10⁶ particles, while keeping 1 × 10⁹ particles as reference. RESULTS: On average over 10 new patients with different tumor sites, in 2.5D and 3D, our models recover a relative residual error on the target volume, ΔD95-TV, of 0.67 ± 0.43% and 1.32 ± 0.87% for sNet vs. 0.83 ± 0.53% and 1.66 ± 0.98% for dUNet, compared to the noisy input at 12.40 ± 4.06%. Moreover, the denoising time for a dose distribution is < 9 s and < 1 s for sNet vs. < 16 s and < 1.5 s for dUNet in 2.5D and 3D, in comparison to about 100 min for the MC simulation using 1 × 10⁹ particles. CONCLUSION: We propose a fast framework that can successfully denoise MC dose distributions. Starting from MC doses with only 1 × 10⁶ particles, the networks provide results comparable to MC doses with 1 × 10⁹ particles, reducing simulation time significantly.
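The headline metric, the relative residual error on D95 of the target volume, is straightforward to compute from a dose array. The helpers below use our own illustrative names and assume D95 is taken as the 5th dose percentile inside the target.

```python
import numpy as np

def d95(dose_in_target):
    # D95: dose received by at least 95% of the target voxels,
    # i.e. the 5th percentile of the dose inside the target.
    return np.percentile(dose_in_target, 5)

def relative_residual_d95(dose_eval, dose_ref):
    # Relative residual error on D95 (in %), the target-coverage metric
    # reported in the abstract; helper names are ours.
    return abs(d95(dose_eval) - d95(dose_ref)) / d95(dose_ref) * 100.0
```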


Subjects
Neoplasms, Proton Therapy, Algorithms, Humans, Monte Carlo Method, Neoplasms/radiotherapy, Neural Networks (Computer), Phantoms (Imaging), Radiotherapy Dosage, Computer-Assisted Radiotherapy Planning
5.
Phys Med; 83: 242-256, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33979715

ABSTRACT

Artificial intelligence (AI) has recently become a very popular buzzword, as a consequence of disruptive technical advances and impressive experimental results, notably in the field of image analysis and processing. In medicine, specialties where images are central, like radiology, pathology, or oncology, have seized the opportunity, and considerable research and development efforts have been deployed to transfer the potential of AI to clinical applications. With AI becoming a more mainstream tool for typical medical imaging analysis tasks, such as diagnosis, segmentation, or classification, the key to a safe and efficient use of clinical AI applications relies, in part, on informed practitioners. The aim of this review is to present the basic technological pillars of AI, together with state-of-the-art machine learning methods and their application to medical imaging. In addition, we discuss the new trends and future research directions. This will help the reader understand how AI methods are becoming a ubiquitous tool in any medical image analysis workflow and pave the way for the clinical implementation of AI-based solutions.


Subjects
Artificial Intelligence, Radiology, Algorithms, Machine Learning, Technology
6.
Phys Med; 83: 52-63, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33713919

ABSTRACT

PURPOSE: To investigate the effect of data quality and quantity on the performance of deep learning (DL) models for dose prediction in intensity-modulated radiotherapy (IMRT) of esophageal cancer. MATERIALS AND METHODS: Two databases were used: a variable database (VarDB) with 56 clinical cases extracted retrospectively, including user-dependent variability in delineation and planning and different machines and beam configurations; and a homogenized database (HomDB), created to reduce this variability by re-contouring and re-planning all patients with a fixed class-solution protocol. Experiment 1 analysed the user-dependent variability, using 26 patients planned with the same machine and beam setup (E26-VarDB versus E26-HomDB). Experiment 2 increased the training set in groups of 10 patients (E16, E26, E36, E46, and E56) for both databases. Model evaluation metrics were the mean absolute error (MAE) for selected dose-volume metrics and the global MAE for all body voxels. RESULTS: In Experiment 1, E26-HomDB reduced the MAE for the considered dose-volume metrics compared to E26-VarDB (e.g., reductions of 0.2 Gy for D95-PTV, 1.2 Gy for Dmean-heart, and 3.3% for V5-lungs). In Experiment 2, increasing the database size slightly improved performance for the HomDB models (e.g., a decrease in global MAE of 0.13 Gy for E56-HomDB versus E26-HomDB) but increased the error for the VarDB models (e.g., an increase in global MAE of 0.20 Gy for E56-VarDB versus E26-VarDB). CONCLUSION: A small database may suffice to obtain good DL prediction performance, provided that homogeneous training data are used. Data variability reduces the performance of DL models, an effect that is further pronounced when the training set grows.
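The two evaluation metrics are easy to reproduce. The sketch below computes the global MAE over body voxels and the MAE of a selected dose-volume metric across patients; function names are ours.

```python
import numpy as np

def global_mae(pred, ref, body_mask):
    # MAE over all body voxels (Gy), the study's global evaluation metric.
    return float(np.mean(np.abs(pred[body_mask] - ref[body_mask])))

def metric_mae(pred_values, ref_values):
    # MAE of one dose-volume metric (e.g. Dmean-heart) across patients.
    pred_values = np.asarray(pred_values, dtype=float)
    ref_values = np.asarray(ref_values, dtype=float)
    return float(np.mean(np.abs(pred_values - ref_values)))
```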


Subjects
Deep Learning, Esophageal Neoplasms, Intensity-Modulated Radiotherapy, Data Accuracy, Esophageal Neoplasms/radiotherapy, Humans, Organs at Risk, Radiotherapy Dosage, Computer-Assisted Radiotherapy Planning, Retrospective Studies
7.
Comput Biol Med; 131: 104269, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33639352

ABSTRACT

In radiation therapy, a CT image is used to manually delineate the organs and plan the treatment. During the treatment, a cone-beam CT (CBCT) is often acquired to monitor anatomical modifications. For this purpose, automatic organ segmentation on CBCT is a crucial step. However, manual segmentations on CBCT are scarce, and models trained with CT data do not generalize well to CBCT images. We investigate adversarial networks and intensity-based data augmentation, two strategies that leverage large databases of annotated CTs to train neural networks for segmentation on CBCT. The adversarial network consists of a 3D U-Net segmenter and a domain classifier, a framework aimed at encouraging the learning of filters that produce more accurate segmentations on CBCT. Intensity-based data augmentation consists of modifying the training CT images to reduce the gap between the CT and CBCT intensity distributions. The proposed adversarial networks reach DSCs of 0.787, 0.447, and 0.660 for the bladder, rectum, and prostate, respectively, an improvement over the DSCs of 0.749, 0.179, and 0.629 for "source only" training. Our brightness-based data augmentation reaches DSCs of 0.837, 0.701, and 0.734, which outperforms the morphons registration algorithm for the bladder (0.813) and rectum (0.653), while performing similarly on the prostate (0.731). The proposed adversarial training framework can be used for any segmentation application where training and test distributions differ. Our intensity-based data augmentation can be used for CBCT segmentation to help achieve the prescribed dose on target and lower the dose delivered to healthy organs.
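A minimal version of the intensity-based augmentation is a random global scale and shift of the training CT intensities, as sketched below. This is a simplified stand-in for the brightness-based augmentation described above; the ranges are illustrative assumptions, not the values used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def intensity_augment(ct, shift_range=(-80.0, 80.0), scale_range=(0.9, 1.1)):
    # Random global intensity scale and shift (in HU) applied to a training
    # CT, to reduce the CT-to-CBCT intensity gap before training a segmenter.
    shift = rng.uniform(*shift_range)
    scale = rng.uniform(*scale_range)
    return ct * scale + shift
```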


Subjects
Cone-Beam Computed Tomography, Computer-Assisted Image Processing, Algorithms, Humans, Male, Pelvis, Prostate, Computer-Assisted Radiotherapy Planning
8.
Hum Brain Mapp; 41(18): 5164-5175, 2020 Dec 15.
Article in English | MEDLINE | ID: mdl-32845057

ABSTRACT

Anatomical brain templates are commonly used as references in neurological MRI studies, to bring data into a common space for group-level statistics and coordinate reporting. Given the inherent variability in brain morphology across age and geography, it is important to have templates that are as representative as possible of both age and population. A representative template increases the accuracy of alignment and decreases distortions as well as potential biases in final coordinate reports. In this study, we developed and validated a new set of T1w Indian brain templates (IBT) from a large number of brain scans (total n = 466) acquired across different locations and multiple 3T MRI scanners in India. A new tool in AFNI, make_template_dask.py, was created to efficiently make five age-specific IBTs (ages 6-60 years) as well as maximum probability map (MPM) atlases for each template; for each age group's template-atlas pair, there is both a "population-average" and a "typical" version. Validation experiments on an independent Indian structural and functional MRI dataset show the appropriateness of the IBTs for spatial normalization of Indian brains. The results indicate significant structural differences between the IBTs and the MNI template, with these differences being maximal along the Anterior-Posterior and Inferior-Superior axes, but minimal along the Left-Right axis. For each age group, the MPM brain atlases provide a reasonably good representation of the native-space volumes in the IBT space, except in a few regions with high intersubject variability. These findings support the use of age- and population-specific templates in human brain mapping studies.


Assuntos
Algoritmos , Atlas como Assunto , Encéfalo/anatomia & histologia , Processamento de Imagem Assistida por Computador/métodos , Imageamento por Ressonância Magnética/métodos , Neuroimagem/métodos , Adolescente , Adulto , Criança , Feminino , Humanos , Índia , Masculino , Pessoa de Meia-Idade , Estudos Retrospectivos , Adulto Jovem
9.
Med Phys; 47(7): 2746-2754, 2020 Jul.
Article in English | MEDLINE | ID: mdl-32155667

ABSTRACT

PURPOSE: Robust optimization is a computationally expensive process resulting in long plan computation times. This issue is especially critical for moving targets, as these cases need a large number of uncertainty scenarios to robustly optimize their treatment plans. In this study, we propose a novel worst-case robust optimization algorithm, called dynamic minimax, that accelerates the conventional minimax optimization. Dynamic minimax optimization speeds up the plan optimization process by decreasing the number of scenarios evaluated in the optimization. METHODS: For a given pool of scenarios (e.g., 63 = 7 setup × 3 range × 3 breathing phases), the proposed dynamic minimax algorithm only considers a reduced number of candidate-worst scenarios, selected from the full 63-scenario set. These scenarios are updated throughout the optimization by randomly sampling new scenarios according to a hidden variable P, called the "probability acceptance function," which associates with each scenario the probability of its being selected as the worst case. By doing so, the algorithm favors scenarios that are mostly "active," that is, frequently evaluated as the worst case. Additionally, unconsidered scenarios can be reconsidered later in the optimization, depending on the convergence toward a particular solution. The proposed algorithm was implemented in the open-source robust optimizer MIROpt and tested on six four-dimensional (4D) IMPT lung tumor patients with various tumor sizes and motions. Treatment plans were evaluated by performing comprehensive robustness tests (simulating range errors, systematic setup errors, and breathing motion) using the open-source Monte Carlo dose engine MCsquare. RESULTS: The dynamic minimax algorithm achieved an optimization time gain of 84% on average. Dynamic minimax optimization results in a significantly noisier optimization process, because more scenarios are accessed during the optimization. However, the increased noise level does not harm the final quality of the plan. In fact, plan quality is similar between dynamic and conventional minimax optimization with regard to target coverage and normal tissue sparing: on average, the difference in worst-case D95 is 0.2 Gy, and the differences in mean lung dose and mean heart dose are 0.4 and 0.1 Gy, respectively (evaluated in the nominal scenario). CONCLUSIONS: The proposed worst-case 4D robust optimization algorithm achieves a significant optimization time gain of 84%, without compromising target coverage or normal tissue sparing.
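The candidate-worst bookkeeping can be sketched as follows. The sampling step and the update of the probability acceptance function P are illustrative guesses at one possible scheme (boost the observed worst case, decay the rest); the actual update rule and constants of the paper may differ.

```python
import numpy as np

rng = np.random.default_rng(42)

def pick_candidates(p_accept, k):
    # Sample k candidate-worst scenarios with probability proportional to
    # the "probability acceptance function" P.
    probs = p_accept / p_accept.sum()
    return rng.choice(len(p_accept), size=k, replace=False, p=probs)

def update_acceptance(objectives, p_accept, candidates, boost=1.2, decay=0.95):
    # After one evaluation round: boost the worst evaluated scenario and decay
    # all others, so frequently-worst scenarios stay in the candidate pool
    # while unconsidered ones keep a nonzero chance of re-entering later.
    worst = candidates[int(np.argmax(objectives[candidates]))]
    p_accept = p_accept * decay
    p_accept[worst] *= boost / decay   # net boost for the worst case
    return worst, p_accept
```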


Subjects
Proton Therapy, Intensity-Modulated Radiotherapy, Algorithms, Humans, Monte Carlo Method, Radiotherapy Dosage, Computer-Assisted Radiotherapy Planning
10.
Med Phys; 47(2): 681-692, 2020 Feb.
Article in English | MEDLINE | ID: mdl-31660623

ABSTRACT

PURPOSE: Due to the increasing complexity of IMRT/IMPT treatments, quality assurance (QA) is essential to verify the quality of the dose distribution actually delivered. In this context, Monte Carlo (MC) simulations are increasingly used to verify the accuracy of the treatment planning system (TPS). The most common method of dose comparison is the γ-test, which combines dose-difference and distance-to-agreement (DTA) criteria. However, this method is known to depend on the noise level in the dose distributions. We propose here a method to correct the bias that MC noise induces in the γ passing rate (GPR). METHODS: The GPR amplitude was studied as a function of the MC noise level, and a model of this noise effect was derived mathematically. This model was then used to predict the time-consuming low-noise GPR by fitting multiple fast MC dose calculations. MC dose maps with noise levels between 2% and 20% were computed, and the GPR was predicted at a noise level of 0.3%. Due to the asymmetry of the γ-test, two cases were considered: the MC dose was first set as the reference dose, then as the evaluated dose. Our method was applied to six proton therapy plans, including analytical doses from the TPS or patient-specific QA measurements. RESULTS: An average absolute error of 4.31% was observed on the GPR computed for MC doses with 2% statistical noise. Our method improved the accuracy of the gamma passing rate by up to 13%. The method was found especially efficient at correcting the noise bias when the DTA criterion is low. CONCLUSIONS: We propose a method to enhance the γ-evaluation of a treatment plan when one of the compared distributions is noisy. The method makes it possible, in a tractable time, to detect the cases for which a correction is necessary, and it can improve the accuracy of the resulting passing rates.
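For reference, a simplified version of the γ passing rate computation (1D, global normalization, no interpolation) reads as follows; production implementations interpolate the evaluated dose and work in 2D/3D.

```python
import numpy as np

def gamma_passing_rate(ref, evaluated, spacing, dose_tol=0.03, dta_tol=3.0):
    # Simplified global gamma-test on a 1D dose profile: a reference point
    # passes if some evaluated point satisfies the combined dose-difference /
    # distance-to-agreement criterion (gamma <= 1).
    x = np.arange(len(ref)) * spacing          # positions in mm
    dmax = ref.max()                           # global normalization dose
    passed = 0
    for i, d_ref in enumerate(ref):
        dd = (evaluated - d_ref) / (dose_tol * dmax)   # dose-difference term
        dta = (x - x[i]) / dta_tol                     # distance term (mm)
        gamma = np.sqrt(dd ** 2 + dta ** 2).min()
        passed += gamma <= 1.0
    return 100.0 * passed / len(ref)
```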


Assuntos
Planejamento da Radioterapia Assistida por Computador/métodos , Radioterapia de Intensidade Modulada/métodos , Algoritmos , Humanos , Aumento da Imagem , Modelos Teóricos , Método de Monte Carlo , Garantia da Qualidade dos Cuidados de Saúde , Dosagem Radioterapêutica , Reprodutibilidade dos Testes , Razão Sinal-Ruído
11.
Front Neuroinform; 13: 67, 2019.
Article in English | MEDLINE | ID: mdl-31749693

ABSTRACT

In this paper, we describe a Bayesian deep neural network (DNN) for predicting FreeSurfer segmentations of structural MRI volumes in minutes rather than hours. The network was trained and evaluated on a large dataset (n = 11,480), obtained by combining data from more than a hundred different sites, and also evaluated on a completely held-out dataset (n = 418). The network was trained using a novel spike-and-slab dropout-based variational inference approach. We show that, on these datasets, the proposed Bayesian DNN outperforms previously proposed methods in terms of the similarity between the segmentation predictions and the FreeSurfer labels, and the usefulness of the estimated uncertainty of these predictions. In particular, we demonstrate that the prediction uncertainty of this network at each voxel is a good indicator of whether the network has made an error, and that the uncertainty across the whole brain can predict the manual quality-control rating of a scan. The proposed Bayesian DNN method should be applicable to any new network architecture for addressing the segmentation problem.
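Uncertainty scores of this kind can be computed from repeated stochastic forward passes. The sketch below shows a generic voxelwise predictive entropy, not the paper's spike-and-slab-specific estimator.

```python
import numpy as np

def predictive_entropy(probs):
    # probs: array of shape (T, n_voxels, n_classes) holding class
    # probabilities from T stochastic forward passes. Returns the entropy of
    # the mean predictive distribution per voxel: high values flag voxels
    # where the network is uncertain (and more likely to be wrong).
    p = probs.mean(axis=0)
    return -(p * np.log(p + 1e-12)).sum(axis=-1)
```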

12.
Med Phys; 46(12): 5790-5798, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31600829

ABSTRACT

PURPOSE: Monte Carlo (MC) algorithms offer accurate dose calculation by simulating the transport and interactions of many particles through the patient geometry. However, given their random nature, the resulting dose distributions carry statistical uncertainty (noise), which hinders reliable clinical decisions. This issue is partly addressable by simulating a huge number of particles, but doing so is computationally expensive and results in significantly longer computation times. There is therefore a trade-off between computation time and noise level in MC dose maps. In this work, we address the mitigation of noise inherent to MC dose distributions using dilated U-Net, an encoder-decoder-style fully convolutional neural network that allows fast and fully automated denoising of whole-volume dose maps. METHODS: We use the mean squared error (MSE) as loss function to train the model, where training is done in 2D and 2.5D settings by considering a number of adjacent slices. Our model is trained on proton therapy MC dose distributions of different tumor sites (brain, head and neck, liver, lungs, and prostate) acquired from 35 patients. We provide the network with input MC dose distributions simulated using 1 × 10⁶ particles, while keeping 1 × 10⁹ particles as reference. RESULTS: After training, our model successfully denoises new MC dose maps. On average (over five patients with different tumor sites), our model recovers a D95 of 55.99 Gy from the noisy MC input of 49.51 Gy, whereas the low-noise MC reference offers 56.03 Gy. The average RMSE (thresholded at >10% of the maximum reference dose) drops from 16.96 Gy (reference vs. input) to 1.25 Gy (reference vs. denoised), an improvement in signal-to-noise ratio (ISNR) of 18.06 dB. Moreover, the inference time of our model for a dose distribution is less than 10 s, versus 100 min for the MC simulation using 1 × 10⁹ particles. CONCLUSIONS: We propose an end-to-end fully convolutional network that can denoise Monte Carlo dose distributions. The network provides qualitative and quantitative results comparable to the MC dose distribution simulated with 1 × 10⁹ particles, at a significant reduction in computation time.
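One common definition of the ISNR compares the input and output RMSE relative to the reference (the paper's exact evaluation region and thresholding may differ):

```python
import numpy as np

def isnr_db(ref, noisy, denoised):
    # Improvement in signal-to-noise ratio (dB): how much closer the denoised
    # dose is to the low-noise reference than the noisy input was.
    rmse_in = np.sqrt(np.mean((ref - noisy) ** 2))
    rmse_out = np.sqrt(np.mean((ref - denoised) ** 2))
    return 20.0 * np.log10(rmse_in / rmse_out)
```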


Assuntos
Método de Monte Carlo , Doses de Radiação , Planejamento da Radioterapia Assistida por Computador/métodos , Razão Sinal-Ruído , Processamento de Imagem Assistida por Computador , Redes Neurais de Computação , Incerteza
13.
Med Phys; 46(10): 4676-4684, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31376305

ABSTRACT

INTRODUCTION: Proton therapy is very sensitive to treatment uncertainties. These uncertainties can induce proton range variations and may lead to severe dose distortions. However, most commercial tools offer only a limited integration of these uncertainties during treatment planning. In order to verify the robustness of a treatment plan, this study aims at developing a comprehensive Monte Carlo simulation of the treatment delivery, including the simulation of setup and range errors, variation of the breathing motion, and the interplay effect. METHODS: Most clinically relevant uncertainties have been modeled and implemented in the fast Monte Carlo dose engine MCsquare. In particular, variation of the breathing motion is taken into account by deforming the initial four-dimensional computed tomography (4DCT) series and generating multiple new 4DCT series with scaled motion. Systematic and random errors are randomly sampled, following a Monte Carlo approach, to generate individual erroneous treatment scenarios. The robustness of treatment plans is analyzed and reported with dose-volume histogram (DVH) bands. The statistical uncertainty coming from the Monte Carlo scenario sampling is studied. RESULTS: A validation demonstrated the ability of the motion model to generate new 4DCT series with scaled motion amplitude and improved image quality in comparison to the initial 4DCT. The robustness analysis was applied to a lung tumor treatment. Considering the proposed uncertainty model, the simulation of 300 treatment scenarios was necessary to reach an acceptable level of statistical uncertainty on the DVH band. CONCLUSION: A comprehensive and statistically sound method of treatment plan robustness verification is proposed. The uncertainty model presented in this paper is not specific to protons and can also be applied to photon treatments. Moreover, the generated 4DCT series with scaled motion can be imported into commercial TPSs.
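The Monte Carlo sampling of error scenarios and the DVH-band report can be sketched as below; the error magnitudes and the percentile envelope are illustrative assumptions, not the study's settings.

```python
import numpy as np

rng = np.random.default_rng(7)

def sample_error_scenarios(n, setup_sd=2.0, range_sd=0.016):
    # Randomly sample treatment-error scenarios, Monte Carlo style: a 3D
    # systematic setup shift (mm), a per-fraction random shift (mm), and a
    # relative proton range error. Standard deviations are illustrative.
    return [{
        "systematic_shift_mm": rng.normal(0.0, setup_sd, size=3),
        "random_shift_mm": rng.normal(0.0, setup_sd / 2.0, size=3),
        "range_error": rng.normal(0.0, range_sd),
    } for _ in range(n)]

def dvh_band(dvhs, lo=5, hi=95):
    # Percentile envelope of per-scenario DVH curves (rows = scenarios):
    # the band used to report plan robustness.
    return np.percentile(dvhs, [lo, hi], axis=0)
```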


Assuntos
Tomografia Computadorizada Quadridimensional , Método de Monte Carlo , Terapia com Prótons/métodos , Humanos , Neoplasias Pulmonares/diagnóstico por imagem , Neoplasias Pulmonares/radioterapia , Dosagem Radioterapêutica , Planejamento da Radioterapia Assistida por Computador , Incerteza
15.
BMC Emerg Med; 18(1): 10, 2018 Mar 14.
Article in English | MEDLINE | ID: mdl-29540151

ABSTRACT

BACKGROUND: Approximately 80% of patients presenting to emergency departments (ED) with chest pain do not have any true cardiopulmonary emergency such as acute coronary syndrome (ACS). However, psychological contributors such as anxiety are thought to be present in up to 58%, but often remain undiagnosed, leading to chronic chest pain and ED recidivism. METHODS: To evaluate ED provider beliefs and usual practices regarding the approach and disposition of patients with low-risk chest pain associated with anxiety, we constructed a 22-item survey using a modified Delphi technique. The survey was administered to a convenience sample of ED providers attending the 2016 American College of Emergency Physicians Scientific Assembly in Las Vegas. RESULTS: Surveys were completed by 409 emergency medicine providers from 46 states and 7 countries, with a wide range of years of experience and primary practice environments (academic versus community centers). Respondents estimated that 30% of patients presenting to the ED with chest pain thought to be at low risk for ACS have anxiety or panic as the primary cause, but they directly communicate this belief to only 42% of these patients and provide discharge instructions to 48%. Only 39% of respondents reported adequate hospital resources to ensure follow-up. Community-based providers reported more adequate follow-up for these patients than their academic-center colleagues (46% vs. 34%; p = 0.015). Most providers (82%) indicated that they wanted referral resources to a specific clinic to be available for further outpatient evaluation. CONCLUSION: Emergency department providers believe approximately 30% of patients seeking emergency care for chest pain at low risk for ACS have anxiety as a primary problem, yet fewer than half discuss this concern or provide information to help the patient manage anxiety. This highlights an opportunity for patient-centered communication.


Subjects
Anxiety/etiology, Anxiety/psychology, Chest Pain/complications, Chest Pain/psychology, Emergency Service (Hospital)/statistics & numerical data, Anti-Anxiety Agents/administration & dosage, Anxiety/diagnosis, Anxiety/drug therapy, Delphi Technique, Female, Humans, Male, Physicians' Practice Patterns
16.
Adv Neural Inf Process Syst; 31: 4093-4103, 2018 Dec.
Article in English | MEDLINE | ID: mdl-34376963

ABSTRACT

Collecting the large datasets needed to train deep neural networks can be very difficult, particularly for the many applications for which sharing and pooling data is complicated by practical, ethical, or legal concerns. However, it may be the case that derivative datasets or predictive models developed within individual sites can be shared and combined with fewer restrictions. Training on distributed data and combining the resulting networks is often viewed as continual learning, but these methods require networks to be trained sequentially. In this paper, we introduce distributed weight consolidation (DWC), a continual learning method to consolidate the weights of separate neural networks, each trained on an independent dataset. We evaluated DWC with a brain segmentation case study, where we consolidated dilated convolutional neural networks trained on independent structural magnetic resonance imaging (sMRI) datasets from different sites. We found that DWC led to increased performance on test sets from the different sites, while maintaining generalization performance for a very large and completely independent multi-site dataset, compared to an ensemble baseline.
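To fix ideas about combining per-site networks, the sketch below merges corresponding weight tensors by a plain mean. This is a naive stand-in: the actual DWC method performs a variational, uncertainty-aware consolidation of the per-site posteriors rather than simple averaging.

```python
import numpy as np

def consolidate_weights(weight_sets):
    # weight_sets: one list of per-layer weight arrays per site. Corresponding
    # layers are stacked and averaged. Shown only to illustrate combining
    # independently trained networks; DWC itself consolidates variational
    # posteriors, not raw weights.
    return [np.mean(np.stack(layers), axis=0) for layers in zip(*weight_sets)]
```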

17.
Radiother Oncol; 128(1): 161-166, 2018 Jul.
Article in English | MEDLINE | ID: mdl-28951008

ABSTRACT

BACKGROUND & PURPOSE: Intensity-modulated proton therapy (IMPT) of superficial lesions requires a pre-absorbing range shifter (RS) to deliver the shallower spots. Minimizing the RS air gap is important to avoid spot-size degradation, but remains challenging in complex geometries such as head-and-neck cancer (HNC). In this study, clinical endpoints were investigated for patient-specific bolus and for conventional RS solutions, making use of a Monte Carlo (MC) dose engine for IMPT optimization. METHODS AND MATERIALS: For 5 oropharyngeal cancer patients, IMPT spot maps were generated using beamlets calculated with MC. The plans were optimized for three different RS configurations: 3D-printed on-skin bolus, and snout- and nozzle-mounted RS. Organ-at-risk (OAR) doses and late toxicity probabilities were compared between all configuration-specific optimized plans. RESULTS: The use of bolus reduced the mean dose to all OARs compared to snout- and nozzle-mounted RS. The contralateral parotid gland and supraglottic larynx received on average 2.9 Gy and 4.2 Gy less dose compared to the snout RS. Bolus reduced the average probability of xerostomia by 3.0% and of dysphagia by 2.7%. CONCLUSIONS: Quantification of the dosimetric advantage of patient-specific bolus shows significant reductions in xerostomia and dysphagia probability compared to conventional RS solutions. These results motivate the development of a patient-specific bolus solution for IMPT of HNC.


Subjects
Organ Sparing Treatments/methods, Oropharyngeal Neoplasms/radiotherapy, Proton Therapy/methods, Computer-Assisted Radiotherapy Planning/methods, Intensity-Modulated Radiotherapy/methods, Deglutition Disorders/prevention & control, Humans, Laryngeal Diseases/prevention & control, Monte Carlo Method, Parotid Diseases/prevention & control, Probability, Radiotherapy Dosage, Xerostomia/prevention & control
18.
Med Phys ; 44(8): 4098-4111, 2017 Aug.
Article in English | MEDLINE | ID: mdl-28474819

ABSTRACT

PURPOSE: The aim of this paper is to define the requirements and describe the design and implementation of a standard benchmark tool for the evaluation and validation of PET auto-segmentation (PET-AS) algorithms. This work follows the recommendations of Task Group 211 (TG211) appointed by the American Association of Physicists in Medicine (AAPM). METHODS: The recommendations published in the AAPM TG211 report were used to derive a set of required features and to guide the design and structure of a benchmarking software tool. These items included the selection of appropriate representative data and reference contours obtained from established approaches, and the description of available metrics. The benchmark was designed to be extendable through the inclusion of bespoke segmentation methods, while maintaining its main purpose of being a standard testing platform for newly developed PET-AS methods. An example implementation of the proposed framework, named PETASset, was built. In this work, a selection of PET-AS methods representing common approaches to PET image segmentation was evaluated within PETASset to test and demonstrate the capabilities of the software as a benchmark platform. RESULTS: A selection of clinical, physical, and simulated phantom data, including "best estimates" reference contours from macroscopic specimens, simulation templates, and CT scans, was built into the PETASset application database. Specific metrics such as Dice Similarity Coefficient (DSC), Positive Predictive Value (PPV), and Sensitivity (S) were included to allow the user to compare the results of any given PET-AS algorithm to the reference contours. In addition, a tool was built to generate structured reports on the performance of PET-AS algorithms against the reference contours.
Across the PET-AS methods evaluated for demonstration, agreement with the reference contours ranged from 0.51 to 0.83 for DSC, from 0.44 to 0.86 for PPV, and from 0.61 to 1.00 for S. Examples of agreement limits were provided to show how the software can be used to evaluate a new algorithm against the existing state of the art. CONCLUSIONS: PETASset provides a platform for standardizing the evaluation and comparison of different PET-AS methods on a wide range of PET datasets. The platform will be available to users wishing to evaluate their PET-AS methods and to contribute further evaluation datasets.
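The three agreement metrics named in this abstract have standard definitions over binary voxel masks. As a minimal sketch (a hypothetical helper, not code from PETASset itself), they can be computed as:

```python
import numpy as np

def overlap_metrics(seg, ref):
    """DSC, PPV and sensitivity between a segmentation and a reference mask.

    seg, ref: boolean NumPy arrays of identical shape (auto-segmentation
    result and reference contour, voxelized on the same grid).
    """
    seg = np.asarray(seg, dtype=bool)
    ref = np.asarray(ref, dtype=bool)
    tp = np.count_nonzero(seg & ref)    # voxels in both masks
    fp = np.count_nonzero(seg & ~ref)   # segmented but not in reference
    fn = np.count_nonzero(~seg & ref)   # reference voxels that were missed
    dsc = 2 * tp / (2 * tp + fp + fn)   # Dice similarity coefficient
    ppv = tp / (tp + fp)                # positive predictive value
    sens = tp / (tp + fn)               # sensitivity (recall)
    return dsc, ppv, sens
```

All three values lie in [0, 1], with 1 indicating perfect agreement with the reference contour.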


Subjects
Algorithms , Image Processing, Computer-Assisted , Humans , Phantoms, Imaging , Software , Tomography, X-Ray Computed
19.
Acta Oncol ; 56(9): 1181-1188, 2017 Sep.
Article in English | MEDLINE | ID: mdl-28537761

ABSTRACT

BACKGROUND: PET-guided dose painting (DP) aims to target radioresistant tumour regions in order to improve radiotherapy (RT) outcome. Besides the well-known [18F]fluorodeoxyglucose (FDG), the hypoxia positron emission tomography (PET) tracer [18F]fluoroazomycin arabinoside (FAZA) could provide further useful information to guide the radiation dose prescription. In this study, we compare the spatial distributions of FDG and FAZA PET uptake in lung tumours. MATERIAL AND METHODS: Fourteen patients with unresectable lung cancer underwent FDG and FAZA 4D-PET/CT on consecutive days at three time-points: prior to RT (pre), and during the second (w2) and third (w3) weeks of RT. All PET/CT scans were reconstructed in their time-averaged midposition (MidP). The metabolic tumour volume (MTV: FDG standardised uptake value (SUV) > 50% SUVmax) and the hypoxic volume (HV: FAZA SUV > 1.4) were delineated within the gross tumour volume (GTVCT). FDG and FAZA intratumoral PET uptake distributions were subsequently compared pairwise using both volume- and voxel-based analyses. RESULTS: Volume-based analysis showed large overlap between MTV and HV: the median overlapping fraction was 0.90, 0.94 and 0.94 at the pre, w2 and w3 time-points, respectively. Voxel-wise analysis of the FDG and FAZA intratumoral PET uptake distributions showed high correlation: the median Spearman's rank correlation coefficient was 0.76, 0.77 and 0.76 at the pre, w2 and w3 time-points, respectively. Interestingly, tumours with high FAZA uptake tended to show more similarity between FDG and FAZA intratumoral uptake distributions than those with low FAZA uptake. CONCLUSIONS: In unresectable lung carcinomas, FDG and FAZA PET uptake distributions displayed unexpectedly strong similarity, despite the distinct pathways targeted by these tracers. Hypoxia PET with FAZA brought very little added value over FDG from the perspective of DP in this population.
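The two similarity measures used in this comparison can be sketched as follows. The abstract does not spell out its overlapping-fraction convention, so the sketch below assumes one common definition (intersection divided by the smaller of the two volumes); the function names are illustrative, not from the paper:

```python
import numpy as np
from scipy.stats import spearmanr

def overlap_fraction(mask_a, mask_b):
    """Overlap fraction between two binary volumes.

    Assumed convention: intersection size divided by the size of the
    smaller volume (other conventions exist).
    """
    mask_a = np.asarray(mask_a, dtype=bool)
    mask_b = np.asarray(mask_b, dtype=bool)
    inter = np.count_nonzero(mask_a & mask_b)
    return inter / min(np.count_nonzero(mask_a), np.count_nonzero(mask_b))

def voxelwise_spearman(uptake_a, uptake_b, roi):
    """Spearman rank correlation of two uptake maps inside an ROI mask
    (e.g. the two tracers' SUV maps restricted to the GTV)."""
    rho, _ = spearmanr(uptake_a[roi], uptake_b[roi])
    return rho
```

Spearman's rank correlation is preferred over Pearson's here because it captures monotonic agreement between the two uptake patterns without assuming a linear SUV relationship between tracers.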


Subjects
Adenocarcinoma/metabolism , Carcinoma, Squamous Cell/metabolism , Fluorodeoxyglucose F18/metabolism , Lung Neoplasms/metabolism , Nitroimidazoles/metabolism , Radiopharmaceuticals/metabolism , Small Cell Lung Carcinoma/metabolism , Adenocarcinoma/diagnostic imaging , Adenocarcinoma/pathology , Adenocarcinoma/radiotherapy , Aged , Carcinoma, Squamous Cell/diagnostic imaging , Carcinoma, Squamous Cell/pathology , Carcinoma, Squamous Cell/radiotherapy , Female , Humans , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/pathology , Lung Neoplasms/radiotherapy , Male , Middle Aged , Positron Emission Tomography Computed Tomography , Prognosis , Radiotherapy Dosage , Radiotherapy Planning, Computer-Assisted , Small Cell Lung Carcinoma/diagnostic imaging , Small Cell Lung Carcinoma/pathology , Small Cell Lung Carcinoma/radiotherapy
20.
Acta Oncol ; 56(4): 516-524, 2017 Apr.
Article in English | MEDLINE | ID: mdl-28358668

ABSTRACT

BACKGROUND: Dose painting (DP) aims to improve radiation therapy (RT) outcome by targeting radioresistant tumour regions identified through functional imaging, e.g., positron emission tomography (PET). Importantly, the expected benefit of DP relies on the ability of PET imaging to identify tumour areas that can be consistently targeted throughout the treatment. In this study, we analysed the spatial stability of two potential DP targets in lung cancer patients undergoing RT: the tumour burden surrogate [18F]fluorodeoxyglucose (FDG) and the hypoxia surrogate [18F]fluoroazomycin arabinoside (FAZA). MATERIALS AND METHODS: Thirteen patients with unresectable lung tumours underwent FDG and FAZA 4D-PET/CT before (pre), and during the second (w2) and third (w3) weeks of RT. All PET/CT scans were reconstructed in their time-averaged midposition (MidP) for further analysis. The metabolic tumour volume (MTV: FDG standardised uptake value (SUV) > 50% SUVmax) and the hypoxic volume (HV: FAZA SUV > 1.4) were delineated within the gross tumour volume (GTVCT). The stability of the FDG and FAZA PET uptake distributions during RT was subsequently assessed through volume-overlap and voxel-based correlation analyses. RESULTS: The volume-overlap analysis yielded a median overlapping fraction (OF) of 0.86 between MTVpre and MTVw2, and 0.82 between MTVpre and MTVw3. In patients with a detectable HV, the median OF was 0.82 between HVpre and HVw2, and 0.90 between HVpre and HVw3. The voxel-based correlation analysis yielded a median Spearman's correlation coefficient (rS) of 0.87 between FDGpre and FDGw2, and 0.83 between FDGpre and FDGw3. The median rS was 0.78 between FAZApre and FAZAw2, and 0.79 between FAZApre and FAZAw3. CONCLUSIONS: FDG and FAZA PET uptake distributions were spatially stable during the first 3 weeks of RT in patients with unresectable lung cancer, based on both volume- and voxel-based indicators.
This might allow consistent targeting of high FDG or FAZA PET uptake regions as part of a DP strategy.
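The MTV and HV definitions quoted in this abstract are simple threshold operations within the GTV. A sketch of that delineation step, using the thresholds stated above (the helper names are illustrative, not from the paper), could be:

```python
import numpy as np

def delineate_mtv(fdg_suv, gtv):
    """MTV: GTV voxels whose FDG SUV exceeds 50% of the in-GTV SUVmax."""
    gtv = np.asarray(gtv, dtype=bool)
    suvmax = fdg_suv[gtv].max()          # SUVmax taken inside the GTV only
    return gtv & (fdg_suv > 0.5 * suvmax)

def delineate_hv(faza_suv, gtv, threshold=1.4):
    """HV: GTV voxels whose FAZA SUV exceeds an absolute threshold (1.4)."""
    gtv = np.asarray(gtv, dtype=bool)
    return gtv & (faza_suv > threshold)
```

Note the asymmetry: the MTV uses a relative threshold that adapts to each tumour's SUVmax, whereas the HV uses a fixed absolute cut-off, so the HV can be empty ("no detectable HV") in weakly hypoxic tumours.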


Subjects
Fluorodeoxyglucose F18/pharmacokinetics , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/radiotherapy , Nitroimidazoles/pharmacokinetics , Radiopharmaceuticals/pharmacokinetics , Aged , Female , Humans , Image Interpretation, Computer-Assisted , Male , Middle Aged , Positron Emission Tomography Computed Tomography , Radiotherapy/methods , Radiotherapy Dosage