1.
Sci Rep; 12(1): 3183, 2022 Feb 24.
Article in English | MEDLINE | ID: mdl-35210482

ABSTRACT

In radiation oncology, predicting patient risk stratification allows therapy intensification to be tailored and informs the choice between systemic and regional treatments, all of which helps to improve patient outcomes and quality of life. Deep learning offers an advantage over traditional radiomics for medical image processing by learning salient features from training data originating from multiple datasets. However, while the large capacity of deep models allows them to combine high-level medical imaging data for outcome prediction, they often lack the generalization needed for use across institutions. In this work, a pseudo-volumetric convolutional neural network with a deep preprocessor module and self-attention (PreSANet) is proposed for predicting the occurrence probabilities of distant metastasis (DM), locoregional recurrence (LR), and overall survival (OS) within a 10-year follow-up time frame for head and neck cancer patients with squamous cell carcinoma. The model is capable of processing multi-modal inputs of variable scan length and of integrating structured patient data into the prediction. These architectural features and additional modalities all serve to extract more information from the available data when additional samples are scarce. The model was trained on the public Cancer Imaging Archive Head-Neck-PET-CT dataset, consisting of 298 patients undergoing curative radiotherapy or chemo-radiotherapy and acquired from 4 different institutions. The model was further validated on an internal retrospective dataset of 371 patients acquired from one of the institutions in the training dataset. An extensive set of ablation experiments was performed to test the utility of the proposed model characteristics, achieving AUROCs of [Formula: see text], [Formula: see text] and [Formula: see text] for DM, LR and OS, respectively, on the public TCIA Head-Neck-PET-CT dataset. External validation on the retrospective dataset of 371 patients achieved an AUROC of [Formula: see text] for all outcomes. To test model generalization across sites, a validation scheme consisting of single-site holdout and cross-validation combining both datasets was used. The mean accuracy obtained across the 4 institutions was [Formula: see text], [Formula: see text] and [Formula: see text] for DM, LR and OS, respectively. The proposed model demonstrates an effective method for tumor outcome prediction in a multi-site, multi-modal setting, combining volumetric imaging data with structured patient clinical data.
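To make the architectural idea concrete, the sketch below is a minimal, hypothetical illustration (not the published PreSANet code): 2D convolutional features are extracted slice by slice, self-attention aggregates them across the variable-length slice dimension, and the pooled imaging features are concatenated with structured clinical variables before the three outcome heads. All layer sizes, input shapes, and names are assumptions.

```python
# Hypothetical sketch of a pseudo-volumetric CNN with slice-wise self-attention
# and clinical-data fusion; sizes and names are illustrative, not PreSANet itself.
import torch
import torch.nn as nn


class PseudoVolumetricNet(nn.Module):
    def __init__(self, in_channels=2, feat_dim=128, n_clinical=8, n_outcomes=3):
        super().__init__()
        # Shared 2D encoder applied independently to each axial slice
        self.slice_encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # one feature vector per slice
        )
        # Self-attention over the (variable-length) slice dimension
        self.attn = nn.MultiheadAttention(feat_dim, num_heads=4, batch_first=True)
        # Fuse pooled imaging features with structured clinical variables
        self.head = nn.Sequential(
            nn.Linear(feat_dim + n_clinical, 64), nn.ReLU(),
            nn.Linear(64, n_outcomes),  # DM, LR, OS logits
        )

    def forward(self, volume, clinical):
        # volume: (batch, slices, channels, H, W); clinical: (batch, n_clinical)
        b, s, c, h, w = volume.shape
        feats = self.slice_encoder(volume.reshape(b * s, c, h, w))
        feats = feats.reshape(b, s, -1)               # (batch, slices, feat_dim)
        attended, _ = self.attn(feats, feats, feats)  # slice-to-slice attention
        pooled = attended.mean(dim=1)                 # aggregate across slices
        return self.head(torch.cat([pooled, clinical], dim=1))


# Example: a 2-channel (e.g. PET + CT) scan with 40 slices and 8 clinical variables
model = PseudoVolumetricNet()
logits = model(torch.randn(1, 40, 2, 96, 96), torch.randn(1, 8))
print(logits.shape)  # torch.Size([1, 3])
```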


Subjects
Squamous Cell Carcinoma/diagnostic imaging, Diagnosis, Computer-Assisted/methods, Head and Neck Neoplasms/diagnostic imaging, Image Processing, Computer-Assisted/methods, Neural Networks, Computer, Adult, Aged, Aged, 80 and over, Attention, Biomarkers, Tumor, Squamous Cell Carcinoma/therapy, Deep Learning, Female, Head and Neck Neoplasms/therapy, Humans, Male, Middle Aged, Neoplasm Recurrence, Local/diagnostic imaging, Positron Emission Tomography Computed Tomography, Prognosis, Quality of Life, Retrospective Studies
2.
Radiother Oncol; 166: 154-161, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34861267

ABSTRACT

BACKGROUND AND PURPOSE: Advances in high-dose-rate brachytherapy to treat prostate cancer hinge on improved accuracy in navigation and targeting while preserving a streamlined workflow. Multimodal image registration and electromagnetic (EM) tracking are two technologies integrated into a prototype system in the early phase of clinical evaluation. We aim to report on the system's accuracy and workflow performance in support of tumor-targeted procedures. MATERIALS AND METHODS: In a prospective study, we evaluated the system in 43 consecutive procedures after clinical deployment. We measured workflow efficiency and EM catheter reconstruction accuracy. We also evaluated the system's MRI-TRUS registration accuracy with/without deformation, and with/without y-axis rotation for urethral alignment at initialization. RESULTS: The cohort included 32 focal brachytherapy and 11 integrated-boost whole-gland implants. Mean procedure time excluding dose delivery was 38 min (range: 21-83) for focal and 56 min (range: 38-89) for whole-gland implants; both remained stable over time. EM catheter reconstructions achieved a mean difference between computed and measured free length of 0.8 mm (SD 0.8, no corrections performed) and a mean axial manual correction of 1.3 mm (SD 0.7). EM tracking also enabled the clinical use of non- or partially visible catheters in 21% of procedures. Registration accuracy improved with y-axis rotation for urethral alignment at initialization and with elastic registration (mTRE 3.42 mm, SD 1.49). CONCLUSION: The system supported tumor targeting and was implemented with no demonstrable learning curve. EM reconstruction errors were small, correctable, and improved with calibration and control of external distortion sources, increasing confidence in the use of partially visible catheters. Image registration errors remained despite rotational alignment and deformation and should be carefully considered.
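For readers unfamiliar with the metric, the mean target registration error (mTRE) quoted above is conventionally the mean Euclidean distance between corresponding landmarks after registration; the toy sketch below (with made-up coordinates, not data from this study) illustrates the calculation.

```python
# Minimal sketch of a mean target registration error (mTRE) computation:
# the mean Euclidean distance between corresponding landmarks in the
# registered MRI and the TRUS reference. Coordinates are illustrative only.
import numpy as np

def mean_tre(registered_pts, reference_pts):
    """Mean Euclidean distance (mm) between paired 3-D landmarks."""
    diffs = np.asarray(registered_pts) - np.asarray(reference_pts)
    return float(np.mean(np.linalg.norm(diffs, axis=1)))

# Hypothetical landmark pairs, in millimetres
mri_landmarks_registered = [[10.2, 31.5, 7.9], [22.8, 28.1, 12.3], [15.0, 40.2, 9.6]]
trus_landmarks           = [[12.0, 30.0, 8.5], [24.1, 29.0, 11.0], [16.5, 42.0, 10.2]]
print(f"mTRE = {mean_tre(mri_landmarks_registered, trus_landmarks):.2f} mm")
```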


Subjects
Brachytherapy, Prostatic Neoplasms, Brachytherapy/methods, Humans, Male, Phantoms, Imaging, Prospective Studies, Prostatic Neoplasms/diagnostic imaging, Prostatic Neoplasms/radiotherapy, Radiotherapy Dosage
3.
Phys Med Biol; 66(9), 2021 Apr 23.
Article in English | MEDLINE | ID: mdl-33761478

ABSTRACT

With the emergence of online MRI-guided radiotherapy, MR-based workflows have become increasingly important in clinical practice. However, proper dose planning still requires CT images to calculate dose attenuation due to bony structures. In this paper, we present a novel deep image synthesis model that generates CT images from diagnostic MRI in an unsupervised manner for radiotherapy planning. The proposed model, based on a generative adversarial network (GAN), learns a new invariant representation to generate synthetic CT (sCT) images from high-frequency and appearance patterns. This representation encodes each convolutional feature map of the GAN discriminator, making training of the proposed model particularly robust in terms of image synthesis quality. Our model includes an analysis of common histogram features in the training process, reinforcing the generator so that the output sCT image exhibits a histogram matching that of the ground-truth CT. This CT-matched histogram is then embedded in a multi-resolution framework by assessing the evaluation over all layers of the discriminator network, which allows the model to robustly classify the output synthetic image. Experiments were conducted on head and neck images of 56 cancer patients with a wide range of shapes, sizes, and spatial image resolutions. The results confirm the efficiency of the proposed model compared with other generative models: the mean absolute error yielded by our model was 26.44 (0.62), with a Hounsfield unit error of 45.3 (1.87) and an overall Dice coefficient of 0.74 (0.05), demonstrating the potential of the synthesis model for radiotherapy planning applications.
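As a rough, assumption-laden illustration of the histogram-matching idea (image-level only, without the paper's multi-resolution discriminator embedding), the sketch below pushes a differentiable "soft" histogram of the synthetic CT towards that of the real CT; the toy generator, intensity range, and bin settings are placeholders, not the authors' implementation.

```python
# Illustrative-only sketch of histogram matching for MRI-to-CT synthesis:
# a differentiable triangular-kernel histogram of the synthetic CT is
# compared against the histogram of the real CT; in training this term
# would be weighted and added to the adversarial loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

def soft_histogram(x, bins=64, lo=-1.0, hi=1.0):
    """Differentiable soft histogram, normalized to sum to 1."""
    centers = torch.linspace(lo, hi, bins, device=x.device)
    width = (hi - lo) / (bins - 1)
    # (N, bins) weights: 1 at the bin center, falling linearly to 0 one bin away
    weights = torch.clamp(1 - (x.reshape(-1, 1) - centers).abs() / width, min=0)
    hist = weights.sum(dim=0)
    return hist / hist.sum()

generator = nn.Sequential(  # toy stand-in for the GAN generator
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1), nn.Tanh(),
)

mri = torch.rand(2, 1, 64, 64) * 2 - 1      # dummy MRI batch in [-1, 1]
real_ct = torch.rand(2, 1, 64, 64) * 2 - 1  # dummy CT batch in [-1, 1]

synth_ct = generator(mri)
hist_loss = F.l1_loss(soft_histogram(synth_ct), soft_histogram(real_ct))
print(float(hist_loss))
```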


Subjects
Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Tomography, X-Ray Computed, Head and Neck Neoplasms/diagnostic imaging, Humans
4.
Neuroimaging Clin N Am; 30(4): 417-431, 2020 Nov.
Article in English | MEDLINE | ID: mdl-33038993

ABSTRACT

Deep learning has contributed to solving complex problems in science and engineering. This article provides the fundamental background required to understand and develop deep learning models for medical imaging applications. The authors review the main deep learning architectures, such as the multilayer perceptron, convolutional neural networks, autoencoders, recurrent neural networks, and generative adversarial networks. They also discuss strategies for training deep learning models when the available datasets are imbalanced or of limited size, and conclude with a discussion of the obstacles and challenges hindering the deployment of deep learning solutions in clinical settings.
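As one concrete example of the training strategies mentioned for imbalanced datasets, a common approach is to weight the loss inversely to class frequency; the short sketch below (with illustrative counts and a toy model, not from the article) shows the idea.

```python
# Minimal sketch of one strategy for imbalanced datasets: class-weighted
# cross-entropy, with weights inversely proportional to class frequency.
# Counts and model are illustrative only.
import torch
import torch.nn as nn

class_counts = torch.tensor([900.0, 100.0])        # e.g. 90% negatives, 10% positives
weights = class_counts.sum() / (len(class_counts) * class_counts)
criterion = nn.CrossEntropyLoss(weight=weights)     # the rarer class counts more

model = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 2))  # tiny MLP
features = torch.randn(8, 32)                       # dummy batch of 8 cases
labels = torch.randint(0, 2, (8,))
loss = criterion(model(features), labels)
loss.backward()                                     # gradients reflect the reweighting
```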


Subjects
Image Interpretation, Computer-Assisted/methods, Machine Learning, Neuroimaging/methods, Deep Learning, Humans
5.
Neuroimaging Clin N Am; 30(4): 517-529, 2020 Nov.
Article in English | MEDLINE | ID: mdl-33039001

ABSTRACT

The head and neck (HN) consists of a large number of vital anatomic structures within a compact area. Imaging plays a central role in the diagnosis and management of major disorders affecting the HN. This article reviews recent applications of machine learning (ML) in HN imaging with a focus on deep learning approaches. It categorizes ML applications in HN imaging into deep learning and traditional ML approaches and provides examples of each category. It also discusses the main challenges facing the successful deployment of ML-based applications in the clinical setting and offers suggestions for addressing them.


Subjects
Diagnostic Imaging/methods, Head and Neck Neoplasms/diagnostic imaging, Image Interpretation, Computer-Assisted/methods, Machine Learning, Humans