Results 1 - 12 of 12
1.
BMC Med Imaging ; 23(1): 197, 2023 Nov 29.
Article in English | MEDLINE | ID: mdl-38031032

ABSTRACT

BACKGROUND: In recent years, there has been a growing trend towards utilizing Artificial Intelligence (AI) and machine learning techniques in medical imaging, including for automating quality assurance. In this research, we aimed to develop and evaluate various deep learning-based approaches for automatic quality assurance of Magnetic Resonance (MR) images using the American College of Radiology (ACR) standards. METHODS: The study involved the development, optimization, and testing of custom convolutional neural network (CNN) models. Additionally, popular pre-trained models such as VGG16, VGG19, ResNet50, InceptionV3, EfficientNetB0, and EfficientNetB5 were trained and tested. The use of pre-trained models, particularly those trained on the ImageNet dataset, for transfer learning was also explored. Two-class classification models were employed for assessing spatial resolution and geometric distortion, while an approach classifying the image into 10 classes representing the number of visible spokes was used for the low contrast test. RESULTS: Our results showed that deep learning-based methods can be used effectively for MR image quality assurance. The low contrast test was one of the most challenging tests within the ACR phantom. CONCLUSIONS: Overall, for geometric distortion and spatial resolution, all of the deep learning models tested produced a prediction accuracy of 80% or higher. The study also revealed that training the models from scratch performed slightly better than transfer learning. For the low contrast test, our investigation emphasized the adaptability and potential of deep learning models. The custom CNN models excelled in predicting the number of visible spokes, achieving commendable accuracy, recall, precision, and F1 scores.
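As an aside on the reported metrics: accuracy, precision, recall, and F1 for a classifier such as the spoke-count model follow directly from the confusion counts. A minimal numpy sketch (illustrative function name and toy labels, not the authors' code):

```python
import numpy as np

def classification_metrics(y_true, y_pred, positive=1):
    """Accuracy, precision, recall, and F1 for one class of a label array."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_pred == positive) & (y_true == positive))
    fp = np.sum((y_pred == positive) & (y_true != positive))
    fn = np.sum((y_pred != positive) & (y_true == positive))
    accuracy = float(np.mean(y_pred == y_true))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, float(precision), float(recall), float(f1)
```

For the 10-class spoke task these per-class values would typically be averaged (macro or weighted) across the classes.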


Subjects
Artificial Intelligence; Deep Learning; Humans; Phantoms, Imaging; Machine Learning; Magnetic Resonance Imaging
2.
J Appl Clin Med Phys ; 22(12): 149-157, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34719100

ABSTRACT

One of the main challenges to using magnetic resonance imaging (MRI) in radiotherapy is the existence of system-related geometric inaccuracies, caused mainly by inhomogeneity in the main magnetic field and nonlinearities of the gradient coils. Several physical phantoms with fixed configurations have been developed and commercialized for the assessment of MRI geometric distortion. In this study, we propose a new design of a customizable phantom that can fit any type of radio frequency (RF) coil. It is composed of 3D-printed plastic blocks containing holes that hold glass tubes, which can be filled with any liquid. The blocks can be assembled to construct phantoms of any dimension. The feasibility of this design was demonstrated by assembling four phantoms with high robustness, allowing the assessment of geometric distortion for the GE split head coil, the head and neck array coil, the anterior array coil, and the body coil. Phantom reproducibility was evaluated by analyzing the geometric distortion on CT acquisitions of five independent assemblages of the phantom. This solution meets all expectations in terms of having a robust, lightweight, modular, and practical tool for measuring distortion in three dimensions. The mean error in the position of the tubes was less than 0.2 mm. For the geometric distortion, our results showed that for all typical MRI sequences used for radiotherapy, the mean geometric distortion was less than 1 mm and less than 2.5 mm over radial distances of 150 mm and 250 mm, respectively. These tools will be part of a quality assurance program aimed at monitoring the image quality of MRI scanners used to guide radiation therapy.
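Geometric distortion of this kind is typically quantified as the displacement between measured and reference tube positions within a given radial distance of the isocenter. A minimal numpy sketch (function name and toy coordinates are illustrative, not the study's software):

```python
import numpy as np

def mean_distortion(measured, reference, isocenter, radius_mm):
    """Mean 3D displacement (mm) of tube centers within radius_mm of isocenter."""
    measured = np.asarray(measured, float)
    reference = np.asarray(reference, float)
    # select reference points inside the radial distance of interest
    r = np.linalg.norm(reference - np.asarray(isocenter, float), axis=1)
    mask = r <= radius_mm
    # per-point displacement between measured and reference positions
    d = np.linalg.norm(measured[mask] - reference[mask], axis=1)
    return float(d.mean())
```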


Subjects
Imaging, Three-Dimensional; Magnetic Resonance Imaging; Humans; Magnetic Fields; Phantoms, Imaging; Reproducibility of Results
3.
J Appl Clin Med Phys ; 19(2): 168-175, 2018 Mar.
Article in English | MEDLINE | ID: mdl-29388320

ABSTRACT

Magnetic Resonance Imaging (MRI) is increasingly being used to improve tumor delineation and tumor tracking in the presence of respiratory motion. The purpose of this work is to design and build an MR-compatible motion platform and to use it for evaluating the geometric accuracy of MR imaging techniques during respiratory motion. The motion platform presented in this work is composed of a mobile base made up of a flat plate and four wheels. The mobile base is attached at one end, through a rigid rod, to a Synchrony motion table (Accuray®) placed at the end of the MRI table, and at the other end to an elastic rod. Geometric accuracy was measured by placing a control-point-based phantom on top of the mobile base. An in-house software module was used to automatically assess the geometric distortion. The blurring artifact was also assessed by measuring the Full Width at Half Maximum (FWHM) of each control point. Results were assessed at radial distances of 50, 100, and 150 mm, with a mean geometric distortion during superior-inferior motion of 0.27, 0.41, and 0.55 mm, respectively. Adding anterior-posterior motion, the mean geometric distortions increased to 0.4, 0.6, and 0.8 mm. Blurring was observed during motion, causing an increase in the FWHM of ≈30%. The platform presented in this work provides a valuable tool for the assessment of geometric accuracy and blurring artifacts for MR imaging during motion. Although the main objective was to test the spatial accuracy of an MR system during motion, the modular aspect of the presented platform enables the use of any commercially available phantom for full quality control of the MR system during motion.
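The FWHM-based blurring measurement amounts to finding the half-maximum width of a 1-D intensity profile through a control point. A minimal sketch (assumes a single peak with monotone flanks; not the in-house software):

```python
import numpy as np

def fwhm(profile, spacing=1.0):
    """Full Width at Half Maximum of a 1-D peak, with linear interpolation."""
    p = np.asarray(profile, float)
    half = p.max() / 2.0
    above = np.where(p >= half)[0]
    left, right = above[0], above[-1]
    # interpolate the half-maximum crossing on each flank
    if left > 0:
        left = left - (p[left] - half) / (p[left] - p[left - 1])
    if right < len(p) - 1:
        right = right + (p[right] - half) / (p[right] - p[right + 1])
    return (right - left) * spacing
```

A ≈30% increase in this width during motion corresponds to the blurring the abstract reports.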


Subjects
Magnetic Resonance Imaging/instrumentation; Magnetic Resonance Imaging/methods; Motion; Phantoms, Imaging; Radiotherapy Planning, Computer-Assisted/methods; Software; Humans; Radiotherapy Dosage
4.
Bioengineering (Basel) ; 11(5)2024 Apr 23.
Article in English | MEDLINE | ID: mdl-38790279

ABSTRACT

Brain cancer is a life-threatening disease requiring close attention. Early and accurate diagnosis using non-invasive medical imaging is critical for successful treatment and patient survival. However, manual diagnosis by expert radiologists is time-consuming and has limitations in processing large datasets efficiently. Therefore, efficient systems capable of analyzing vast amounts of medical data for early tumor detection are urgently needed. Deep learning (DL) with deep convolutional neural networks (DCNNs) has emerged as a promising tool for understanding diseases like brain cancer through medical imaging modalities, especially MRI, which provides detailed soft-tissue contrast for visualizing tumors and organs. DL techniques have become increasingly popular in current research on brain tumor detection. Unlike traditional machine learning methods requiring manual feature extraction, DL models are adept at handling complex data like MRIs and excel in classification tasks, making them well suited for medical image analysis applications. This study presents a novel Dual DCNN model that can accurately classify cancerous and non-cancerous MRI samples. The Dual DCNN uses two well-performing DL models, namely InceptionV3 and DenseNet121. Features are extracted from these models by appending a global max pooling layer; the extracted features are then used to train the model, with the addition of five fully connected layers, to classify MRI samples as cancerous or non-cancerous. The fully connected layers are retrained to learn the extracted features for better accuracy. The technique achieves accuracy, precision, recall, and F1-score of 99%, 99%, 98%, and 99%, respectively. Furthermore, this study compares the Dual DCNN's performance against various well-known DL models, including DenseNet121, InceptionV3, ResNet architectures, EfficientNetB2, SqueezeNet, VGG16, AlexNet, and LeNet-5, with different learning rates. This study indicates that the proposed approach outperforms these established models.
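The fusion step described above (global max pooling on each backbone, then concatenation before the fully connected layers) can be sketched in plain numpy, assuming channels-last feature maps; the backbones themselves are omitted and the names are illustrative:

```python
import numpy as np

def global_max_pool(features):
    """Global max pooling over the spatial dims of an (H, W, C) feature map."""
    return np.asarray(features).max(axis=(0, 1))

def fuse(feat_a, feat_b):
    """Concatenate the pooled feature vectors from the two backbones."""
    return np.concatenate([global_max_pool(feat_a), global_max_pool(feat_b)])
```

The concatenated vector would then feed the five fully connected layers for the final cancerous/non-cancerous decision.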

5.
Biomed Phys Eng Express ; 10(4)2024 Jun 10.
Article in English | MEDLINE | ID: mdl-38815562

ABSTRACT

Purpose. This study aims to introduce an innovative noninvasive method that leverages a single image for both grading and staging prediction. The grade and stage of cervical cancer (CC) are determined from diffusion-weighted imaging (DWI), in particular apparent diffusion coefficient (ADC) maps, using deep convolutional neural networks (DCNNs). Methods. A dataset of 85 patients with annotated tumor stage (I, II, III, and IV) was retrospectively collected; 66 of these patients also had an annotated grade (II or III), while the remaining patients had no reported grade. The study was IRB approved. For each patient, sagittal and axial slices containing the gross tumor volume (GTV) were extracted from ADC maps. These were computed with the mono-exponential model from diffusion-weighted images (b-values = 0, 100, 1000) acquired prior to radiotherapy treatment. Balanced training sets were created using the Synthetic Minority Oversampling Technique (SMOTE) and fed to the DCNN. EfficientNetB0 and EfficientNetB3 were transferred from the ImageNet application to binary and four-class classification tasks. Five-fold stratified cross-validation was performed for the assessment of the networks. Multiple evaluation metrics were computed, including the area under the receiver operating characteristic curve (AUC). Comparisons with ResNet50, Xception, and radiomic analysis were performed. Results. For grade prediction, EfficientNetB3 gave the best performance with AUC = 0.924. For stage prediction, EfficientNetB0 was the best with AUC = 0.931; the difference between both models was, however, small and not statistically significant. EfficientNetB0 and B3 outperformed ResNet50 (AUC = 0.71) and Xception (AUC = 0.89) in stage prediction and demonstrated comparable results in grade classification, where AUCs of 0.89 and 0.90 were achieved by ResNet50 and Xception, respectively. The DCNNs outperformed radiomic analysis, which gave AUC = 0.67 (grade) and AUC = 0.66 (stage). Conclusion. The prediction of CC grade and stage from ADC maps is feasible by adapting EfficientNet approaches to the medical context.
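The mono-exponential ADC computation mentioned above reduces to a log-linear fit per voxel, S(b) = S0·exp(−b·ADC). A hedged numpy sketch (not the clinical implementation; the function name is illustrative):

```python
import numpy as np

def adc_map(signals, bvals):
    """Voxel-wise ADC from a mono-exponential fit ln S = ln S0 - b * ADC.

    signals: array of shape (n_b, ...) with one DWI volume per b-value;
    bvals: b-values in s/mm^2. Returns ADC in mm^2/s, shape (...).
    """
    s = np.asarray(signals, float)
    b = np.asarray(bvals, float)
    logs = np.log(s).reshape(len(b), -1)
    slope, _ = np.polyfit(b, logs, 1)  # least-squares slope per voxel
    return -slope.reshape(s.shape[1:])
```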


Subjects
Diffusion Magnetic Resonance Imaging; Neoplasm Grading; Neoplasm Staging; Neural Networks, Computer; Uterine Cervical Neoplasms; Humans; Uterine Cervical Neoplasms/diagnostic imaging; Uterine Cervical Neoplasms/pathology; Female; Diffusion Magnetic Resonance Imaging/methods; Retrospective Studies; Middle Aged; Image Processing, Computer-Assisted/methods; ROC Curve; Adult; Algorithms
6.
Biomed Phys Eng Express ; 9(3)2023 Mar 23.
Article in English | MEDLINE | ID: mdl-36898146

ABSTRACT

Purpose. To determine glioma grading by applying radiomic analysis or deep convolutional neural networks (DCNN) and to benchmark both approaches on broader validation sets. Methods. Seven public datasets were considered: (1) low-grade glioma or high-grade glioma (369 patients, BraTS'20); (2) well-differentiated liposarcoma or lipoma (115, LIPO); (3) desmoid-type fibromatosis or extremity soft-tissue sarcomas (203, Desmoid); (4) primary solid liver tumors, either malignant or benign (186, LIVER); (5) gastrointestinal stromal tumors (GISTs) or intra-abdominal gastrointestinal tumors radiologically resembling GISTs (246, GIST); (6) colorectal liver metastases (77, CRLM); and (7) lung metastases of metastatic melanoma (103, Melanoma). Radiomic analysis was performed on 464 radiomic features for the BraTS'20 dataset and 2016 features for the other datasets. Random forests (RF), Extreme Gradient Boosting (XGBOOST), and a voting algorithm comprising both classifiers were tested. The parameters of the classifiers were optimized using a repeated nested stratified cross-validation process. The feature importance of each classifier was computed using the Gini index or permutation feature importance. DCNN was performed on 2D axial and sagittal slices encompassing the tumor. A balanced database was created, when necessary, using smart slice selection. ResNet50, Xception, EfficientNetB0, and EfficientNetB3 were transferred from the ImageNet application to the tumor classification task and fine-tuned. Five-fold stratified cross-validation was performed to evaluate the models. Classification performance was measured using multiple indices, including the area under the receiver operating characteristic curve (AUC). Results. The best radiomic approach was based on XGBOOST for all datasets; AUC was 0.934 (BraTS'20), 0.86 (LIPO), 0.73 (LIVER), 0.844 (Desmoid), 0.76 (GIST), 0.664 (CRLM), and 0.577 (Melanoma). The best DCNN was based on EfficientNetB0; AUC was 0.99 (BraTS'20), 0.982 (LIPO), 0.977 (LIVER), 0.961 (Desmoid), 0.926 (GIST), 0.901 (CRLM), and 0.89 (Melanoma). Conclusion. Tumor classification can be accurately determined by adapting state-of-the-art machine learning algorithms to the medical context.
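AUC, the main index used in these comparisons, equals the probability that a randomly chosen positive case is scored above a randomly chosen negative one (the Mann-Whitney U interpretation). A small numpy sketch for reference, with toy scores:

```python
import numpy as np

def auc_score(y_true, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic."""
    y = np.asarray(y_true)
    s = np.asarray(scores, float)
    pos, neg = s[y == 1], s[y == 0]
    # fraction of (positive, negative) pairs ranked correctly; ties count half
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```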


Subjects
Deep Learning; Glioma; Radiomics; Glioma/diagnostic imaging; Glioma/pathology; Neoplasm Grading; Humans; Datasets as Topic
7.
Biomed Phys Eng Express ; 9(5)2023 Aug 04.
Article in English | MEDLINE | ID: mdl-37489854

ABSTRACT

Purpose. To create a synthetic CT (sCT) from daily CBCT using either a deep residual U-Net (DRUnet) or a conditional generative adversarial network (cGAN) for adaptive radiotherapy planning (ART). Methods. First-fraction CBCT and planning CT (pCT) were collected from 93 head and neck patients who underwent external beam radiotherapy. The dataset was divided into training, validation, and test sets of 58, 10, and 25 patients, respectively. Three methods were used to generate sCT: (1) the nonlocal means patch-based method was modified to include multiscale patches, defining the multiscale patch-based method (MPBM); (2) an encoder-decoder 2D Unet with imbricated deep residual units was implemented; (3) DRUnet was integrated into the generator part of the cGAN, whereas a convolutional PatchGAN classifier was used as the discriminator. The accuracy of the sCT was evaluated geometrically using the Mean Absolute Error (MAE). Clinical Volumetric Modulated Arc Therapy (VMAT) plans were copied from the pCT to the registered CBCT and sCT, and dosimetric analysis was performed by comparing Dose Volume Histogram (DVH) parameters of planning target volumes (PTVs) and organs at risk (OARs). Furthermore, 3D gamma analysis (2%/2 mm, global) between the dose on the sCT or CBCT and that on the pCT was performed. Results. The average MAE calculated between pCT and CBCT was 180.82 ± 27.37 HU. Overall, all approaches significantly reduced the uncertainties in CBCT. Deep learning approaches outperformed patch-based methods, with MAE = 67.88 ± 8.39 HU (DRUnet) and MAE = 72.52 ± 8.43 HU (cGAN) compared to MAE = 90.69 ± 14.3 HU (MPBM). The percentages of DVH metric deviations were below 0.55% for PTVs and 1.17% for OARs using DRUnet. The average gamma pass rate was 99.45 ± 1.86% for sCT generated using DRUnet. Conclusion. DL approaches outperformed MPBM. Specifically, DRUnet could be used for the generation of sCT with accurate intensities and a realistic description of patient anatomy, which could be beneficial for CBCT-based ART.
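The geometric evaluation above is a mean absolute error in Hounsfield units between the sCT and the reference pCT, optionally restricted to a body mask. A minimal sketch (illustrative names and toy values):

```python
import numpy as np

def mae_hu(sct, pct, mask=None):
    """Mean Absolute Error in HU between synthetic and planning CT."""
    diff = np.abs(np.asarray(sct, float) - np.asarray(pct, float))
    if mask is not None:
        diff = diff[np.asarray(mask, bool)]  # restrict to e.g. the body contour
    return float(diff.mean())
```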


Subjects
Deep Learning; Head and Neck Neoplasms; Spiral Cone-Beam Computed Tomography; Humans; Radiotherapy Dosage; Radiotherapy Planning, Computer-Assisted/methods; Image Processing, Computer-Assisted/methods; Head and Neck Neoplasms/diagnostic imaging; Head and Neck Neoplasms/radiotherapy
8.
Med Phys ; 50(12): 7891-7903, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37379068

ABSTRACT

BACKGROUND: Automatic patient-specific quality assurance (PSQA) has recently been explored using artificial intelligence approaches, and several studies reported the development of machine learning models for predicting the gamma pass rate (GPR) index only. PURPOSE: To develop a novel deep learning approach using a generative adversarial network (GAN) to predict the synthetic measured fluence. METHODS AND MATERIALS: A novel training method called "dual training," which involves training the encoder and decoder separately, was proposed and evaluated for cycle GAN (cycle-GAN) and conditional GAN (c-GAN). A total of 164 VMAT treatment plans, including 344 arcs (training data: 262, validation data: 30, and testing data: 52) from various treatment sites, were selected for prediction model development. For each patient, the portal-dose-image-prediction fluence from the TPS was used as input, and the measured fluence from the EPID was used as output/response for model training. The predicted GPR was derived by comparing the TPS fluence with the synthetic measured fluence generated by the DL models, using gamma evaluation with 2%/2 mm criteria. The performance of dual training was compared against the traditional single-training approach. In addition, we also developed a separate classification model specifically designed to automatically detect three types of error (rotational, translational, and MU-scale) in the synthetic EPID-measured fluence. RESULTS: Overall, dual training improved the prediction accuracy of both cycle-GAN and c-GAN. Predicted GPR results of single training were within 3% for 71.2% and 78.8% of test cases for cycle-GAN and c-GAN, respectively; the corresponding results for dual training were 82.7% and 88.5%. The error detection model showed high classification accuracy (>98%) for rotational and translational errors. However, it struggled to differentiate fluences with an MU-scale error from error-free fluences. CONCLUSION: We developed a method to automatically generate the synthetic measured fluence and identify errors within it. The proposed dual training improved the PSQA prediction accuracy of both GAN models, with c-GAN demonstrating superior performance over cycle-GAN. Our results indicate that the c-GAN with the dual training approach, combined with the error detection model, can accurately generate the synthetic measured fluence for VMAT PSQA and identify errors. This approach has the potential to pave the way for virtual patient-specific QA of VMAT treatments.
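A gamma evaluation such as the 2%/2 mm criterion combines a dose-difference term with a distance-to-agreement term and counts the points whose minimum combined value is ≤ 1. A deliberately simplified 1-D global-gamma sketch (real PSQA tools work in 2-D/3-D and apply low-dose thresholds; names and data are illustrative):

```python
import numpy as np

def gamma_pass_rate(ref, evl, spacing, dd=0.02, dta=2.0):
    """Global 1-D gamma pass rate: dd = dose criterion (fraction of max
    reference dose), dta = distance-to-agreement in mm, spacing in mm."""
    ref = np.asarray(ref, float)
    evl = np.asarray(evl, float)
    x = np.arange(len(evl)) * spacing
    dmax = ref.max()
    passed = []
    for i, r in enumerate(ref):
        dose_term = (evl - r) / (dd * dmax)
        dist_term = (x - i * spacing) / dta
        gamma = np.sqrt(dose_term ** 2 + dist_term ** 2).min()
        passed.append(gamma <= 1.0)
    return float(np.mean(passed))
```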


Subjects
Deep Learning; Radiotherapy, Intensity-Modulated; Humans; Radiotherapy, Intensity-Modulated/methods; Artificial Intelligence; Machine Learning; Radiotherapy Planning, Computer-Assisted/methods; Radiotherapy Dosage
9.
Phys Imaging Radiat Oncol ; 28: 100512, 2023 Oct.
Article in English | MEDLINE | ID: mdl-38111501

ABSTRACT

Background and purpose: Accurate CT numbers in Cone Beam CT (CBCT) are crucial for precise dose calculations in adaptive radiotherapy (ART). This study aimed to generate synthetic CT (sCT) from CBCT using deep learning (DL) models in head and neck (HN) radiotherapy. Materials and methods: A novel DL model, the 'self-attention-residual-UNet' (ResUNet), was developed for accurate sCT generation. ResUNet incorporates a self-attention mechanism in its long skip connections to enhance information transfer between the encoder and decoder. Data from 93 HN patients, each with planning CT (pCT) and first-day CBCT images, were used. Model performance was evaluated using two DL approaches (non-adversarial and adversarial training) and two model types (2D axial only vs. 2.5D axial, sagittal, and coronal). ResUNet was compared with the traditional UNet through image quality assessment (Mean Absolute Error (MAE), Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM)) and dose calculation accuracy evaluation (DVH deviation and gamma evaluation (1%/1 mm)). Results: Image similarity results for the 2.5D-ResUNet and 2.5D-UNet models were: MAE 46±7 HU vs. 51±9 HU, PSNR 66.6±2.0 dB vs. 65.8±1.8 dB, and SSIM 0.81±0.04 vs. 0.79±0.05. There were no significant differences in dose calculation accuracy between the DL models; both demonstrated DVH deviations below 0.5% and a gamma pass rate (1%/1 mm) exceeding 97%. Conclusions: ResUNet enhanced the CT number accuracy and image quality of sCT and outperformed UNet in sCT generation from CBCT. This method holds promise for generating precise sCT for HN ART.
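Of the image-quality metrics used here, PSNR is the simplest to state: peak signal power over the mean squared error, expressed in dB. A small sketch (the data-range convention is an assumption; libraries differ):

```python
import numpy as np

def psnr(ref, test, data_range=None):
    """Peak Signal-to-Noise Ratio in dB between a reference and a test image."""
    ref = np.asarray(ref, float)
    test = np.asarray(test, float)
    mse = np.mean((ref - test) ** 2)
    if data_range is None:
        data_range = ref.max() - ref.min()  # assumed peak-to-peak convention
    return 10.0 * np.log10(data_range ** 2 / mse)
```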

10.
Biomed Phys Eng Express ; 8(6)2022 Sep 29.
Article in English | MEDLINE | ID: mdl-36130525

ABSTRACT

Real-time tracking of a target volume is a promising solution for reducing planning margins and both dosimetric and geometric uncertainties in the treatment of thoracic and upper-abdomen cancers. Respiratory motion prediction is an integral part of real-time tracking, compensating for the latency of tracking systems. The purpose of this work was to develop a novel method for accurate respiratory motion prediction using dual deep recurrent neural networks (RNNs). The respiratory motion data of 111 patients were used to train and evaluate the method. For each patient, two models (Network 1 and Network 2) were trained on 80% of the respiratory wave, and the remaining 20% was used for evaluation. The first network (Network 1) provides a 'coarse resolution' prediction of future points, and the second network (Network 2) provides a 'fine resolution' prediction that interpolates between the future predictions. The performance of the method was tested using two types of RNN algorithm: Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU). The accuracy of each model was evaluated using the root mean square error (RMSE) and mean absolute error (MAE). Overall, the RNN model with the GRU function had better accuracy than the RNN model with the LSTM function (RMSE (mm): 0.4 ± 0.2 versus 0.6 ± 0.3; MAE (mm): 0.4 ± 0.2 versus 0.6 ± 0.2). The GRU was able to predict the respiratory motion accurately (<1 mm) up to a latency period of 440 ms, whereas the LSTM's accuracy was acceptable only up to 240 ms. The proposed method using the GRU function can be used for respiratory motion prediction up to a latency period of 440 ms.
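The RMSE and MAE used to score the predictors can be written in a few lines of numpy (toy arrays; not the evaluation code used in the study):

```python
import numpy as np

def rmse_mae(actual, predicted):
    """Root-mean-square and mean absolute error of a motion prediction (mm)."""
    err = np.asarray(predicted, float) - np.asarray(actual, float)
    return float(np.sqrt(np.mean(err ** 2))), float(np.mean(np.abs(err)))
```

Comparing these errors at increasing prediction horizons is what yields latency limits such as the 440 ms (GRU) and 240 ms (LSTM) figures above.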


Subjects
Algorithms; Neural Networks, Computer; Forecasting; Humans; Motion; Respiratory Rate
11.
Phys Med ; 42: 174-184, 2017 Oct.
Article in English | MEDLINE | ID: mdl-29173912

ABSTRACT

PURPOSE: To create a synthetic CT (sCT) from conventional brain MRI using a patch-based method for MRI-only radiotherapy planning and verification. METHODS: Conventional T1- and T2-weighted MRI and CT datasets from 13 patients who underwent brain radiotherapy were included in a retrospective study, whereas 6 patients were tested prospectively. A new contribution to the Non-local Means Patch-Based Method (NMPBM) framework was made with the use of novel multi-scale and dual-contrast patches. Furthermore, the training dataset was improved by pre-selecting the database patients closest to the target patient, balancing computation time against accuracy. The sCT and derived DRRs were assessed visually and quantitatively. VMAT planning was performed on CT and sCT for hypothetical PTVs in homogeneous and heterogeneous regions. Dosimetric analysis was done by comparing Dose Volume Histogram (DVH) parameters of PTVs and organs at risk (OARs). The positional accuracy of MRI-only image-guided radiation therapy based on CBCT or kV images was evaluated. RESULTS: The retrospective (respectively, prospective) evaluation of the proposed Multi-scale and Dual-contrast Patch-Based Method (MDPBM) gave a mean absolute error MAE = 99.69 ± 11.07 HU (98.95 ± 8.35 HU), and a Dice index in bone DIbone = 0.83 ± 0.03 (0.82 ± 0.03). Good agreement with conventional planning techniques was obtained; the highest percentage of DVH metric deviations was 0.43% (0.53%) for PTVs and 0.59% (0.75%) for OARs. The accuracy of sCT/CBCT and DRRsCT/kV image registration parameters was <2 mm and <2°. Improvements with MDPBM, compared to NMPBM, were significant. CONCLUSION: We presented a novel method for sCT generation from T1- and T2-weighted MRI, potentially suitable for MRI-only external beam radiotherapy in brain sites.
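The Dice index reported for bone (DIbone) measures the overlap between two binary segmentations, here bone on the sCT versus the real CT. A minimal sketch with toy masks:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks."""
    a = np.asarray(mask_a, bool)
    b = np.asarray(mask_b, bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())
```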


Subjects
Brain/drug effects; Brain/diagnostic imaging; Magnetic Resonance Imaging/methods; Radiotherapy, Image-Guided/methods; Tomography, X-Ray Computed/methods; Algorithms; Humans; Organs at Risk; Prospective Studies; Radiometry; Radiosurgery; Radiotherapy Dosage; Radiotherapy Planning, Computer-Assisted/methods; Radiotherapy, Intensity-Modulated; Retrospective Studies
12.
Magn Reson Imaging ; 34(5): 645-53, 2016 Jun.
Article in English | MEDLINE | ID: mdl-26795695

ABSTRACT

OBJECTIVE: To develop a method for the assessment and characterization of 3D geometric distortion as part of routine quality assurance for MRI scanners commissioned for radiation therapy planning. MATERIALS AND METHODS: In this study, the in-plane and through-plane geometric distortions on a 1.5 T GE MRI-SIM unit are characterized, and the 2D and 3D correction algorithms provided by the vendor are evaluated. We used a phantom developed by GE Healthcare that covers a large field of view of 500 mm and consists of layers of foam embedded with a matrix of ellipsoidal markers. An in-house Java-based software module was developed to automatically assess the geometric distortion by calculating the center of each marker using the center-of-mass method, correcting gross rotation errors, and comparing the corrected positions with a CT gold-standard dataset. The spatial accuracy of typical pulse sequences used in RT planning (2D T1/T2 FSE, 3D CUBE, T1 SPGR) was assessed using the software. The accuracy of the vendor-specific geometric distortion correction (GDC) algorithms was quantified by measuring distortions before and after the application of the 2D and 3D correction algorithms. RESULTS: Our algorithm was able to accurately calculate geometric distortion with sub-pixel precision. For all typical MR sequences used in radiotherapy, the vendor's GDC substantially reduced the distortions. Our results also showed that the acquisition itself produced a maximum variation of 0.2 mm over a radial distance of 200 mm. While the 2D correction algorithm markedly reduces in-plane geometric distortion, the 3D correction further reduces it by correcting both in-plane and through-plane distortions in all acquisitions. CONCLUSION: The presented methods represent a valuable tool for routine quality assurance of MR applications that require stringent spatial accuracy assessment, such as radiotherapy. The phantom used in this study provides three-dimensional arrays of control points. These tools and the detailed results can also be used for developing new geometric distortion correction algorithms or improving existing ones.
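The marker-localization step (center of each marker via the center-of-mass method) can be sketched for a 2-D patch as an intensity-weighted centroid; the study's Java module additionally works in 3-D and corrects gross rotations:

```python
import numpy as np

def center_of_mass(patch):
    """Intensity-weighted centroid (row, col) of a marker patch, in pixels."""
    p = np.asarray(patch, float)
    total = p.sum()
    rows, cols = np.indices(p.shape)
    return float((rows * p).sum() / total), float((cols * p).sum() / total)
```

Because the centroid is a weighted average over many pixels, it localizes markers with sub-pixel precision, consistent with the results above.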


Subjects
Image Processing, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Magnetic Resonance Imaging/methods; Radiotherapy Planning, Computer-Assisted/methods; Algorithms; Phantoms, Imaging; Reproducibility of Results