Results 1 - 11 of 11
1.
J Appl Clin Med Phys ; : e14499, 2024 Sep 26.
Article in English | MEDLINE | ID: mdl-39325781

ABSTRACT

BACKGROUND: Magnetic resonance imaging (MRI) and computed tomography (CT) are crucial imaging techniques in both diagnostic imaging and radiation therapy. MRI provides excellent soft tissue contrast but lacks the direct electron density data needed for dose calculation. CT, on the other hand, remains the gold standard in radiation therapy planning (RTP) due to its accurate electron density information, but exposes patients to ionizing radiation. Synthetic CT (sCT) generation from MRI has been an active field of study in recent years, driven by cost-effectiveness and by the goal of minimizing the side effects of using more than one imaging modality for treatment simulation. It offers significant time and cost efficiencies, bypassing the complexities of co-registration and potentially improving treatment accuracy by minimizing registration-related errors. In an effort to navigate the quickly developing field of precision medicine, this paper investigates recent advancements in sCT generation techniques, particularly those using machine learning (ML) and deep learning (DL). The review highlights the potential of these techniques to improve the efficiency and accuracy of sCT generation for use in RTP, improving patient care and reducing healthcare costs. The landscape of sCT generation techniques is scrutinized critically, revealing both the clinical implications and the technical underpinnings of enhanced patient care. PURPOSE: This review aims to provide an overview of the most recent advancements in sCT generation from MRI, with a particular focus on its use within RTP, covering techniques, performance evaluation, clinical applications, future research trends, and open challenges in the field. METHODS: A thorough search strategy was employed to conduct a systematic literature review across major scientific databases.
Focusing on the past decade's advancements, this review critically examines emerging approaches introduced from 2013 to 2023 for generating sCT from MRI, providing a comprehensive analysis of their methodologies and ultimately fostering further advancement in the field. This study highlighted significant contributions, identified challenges, and provided an overview of successes within RTP. Classifying the identified approaches, contrasting their advantages and disadvantages, and identifying broad trends were all part of the review's synthesis process. RESULTS: The review identifies various sCT generation approaches, comprising atlas-based, segmentation-based, multi-modal fusion, hybrid, ML-based, and DL-based techniques. These approaches are evaluated for image quality, dosimetric accuracy, and clinical acceptability. They are used for MRI-only radiation treatment, adaptive radiotherapy, and MR/PET attenuation correction. The review also highlights the diversity of methodologies for sCT generation, each with its own advantages and limitations. Emerging trends include the integration of advanced imaging modalities, including various MRI sequences such as Dixon, T1-weighted (T1W), and T2-weighted (T2W), as well as hybrid approaches for enhanced accuracy. CONCLUSIONS: The study examines MRI-based sCT generation with the aim of minimizing the negative effects of acquiring both modalities. It reviews 2013-2023 studies on MRI-to-sCT generation methods, aiming to revolutionize RTP by reducing the use of ionizing radiation and improving patient outcomes. The review provides insights for researchers and practitioners, emphasizing the need for standardized validation procedures and for collaborative efforts to refine methods and address limitations. It anticipates the continued evolution of techniques to improve the precision of sCT in RTP.
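The reviews surveyed here consistently evaluate sCT against a reference CT with voxel-wise statistics. As a hedged illustration (not any particular paper's implementation), the most common metric, mean absolute error (MAE) in Hounsfield units over an optional body mask, can be sketched with NumPy on hypothetical toy values:

```python
import numpy as np

def mae_hu(sct, ct, mask=None):
    """Mean absolute error in HU between a synthetic and a reference CT.

    sct, ct: co-registered volumes in Hounsfield units.
    mask: optional boolean body mask to exclude air outside the patient.
    """
    diff = np.abs(sct.astype(float) - ct.astype(float))
    if mask is not None:
        diff = diff[mask]
    return float(diff.mean())

# Toy 2x2 example with hypothetical HU values
ct = np.array([[0.0, 40.0], [1000.0, -1000.0]])
sct = np.array([[10.0, 50.0], [990.0, -1000.0]])
print(mae_hu(sct, ct))  # mean of [10, 10, 10, 0] = 7.5
```

Published studies typically restrict the mask to the patient outline or to tissue classes such as bone, since MAE over surrounding air is trivially small.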

2.
Med Image Anal ; 97: 103276, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39068830

ABSTRACT

Radiation therapy plays a crucial role in cancer treatment, necessitating precise delivery of radiation to tumors while sparing healthy tissues over multiple days. Computed tomography (CT) is integral for treatment planning, offering electron density data crucial for accurate dose calculations. However, accurately representing patient anatomy is challenging, especially in adaptive radiotherapy, where CT is not acquired daily. Magnetic resonance imaging (MRI) provides superior soft-tissue contrast. Still, it lacks electron density information, while cone beam CT (CBCT) lacks direct electron density calibration and is mainly used for patient positioning. Adopting MRI-only or CBCT-based adaptive radiotherapy eliminates the need for CT planning but presents challenges. Synthetic CT (sCT) generation techniques aim to address these challenges by using image synthesis to bridge the gap between MRI, CBCT, and CT. The SynthRAD2023 challenge was organized to compare synthetic CT generation methods using multi-center ground truth data from 1080 patients, divided into two tasks: (1) MRI-to-CT and (2) CBCT-to-CT. The evaluation included image similarity and dose-based metrics from proton and photon plans. The challenge attracted significant participation, with 617 registrations and 22/17 valid submissions for tasks 1/2. Top-performing teams achieved high structural similarity indices (≥0.87/0.90) and gamma pass rates for photon (≥98.1%/99.0%) and proton (≥97.3%/97.0%) plans. However, no significant correlation was found between image similarity metrics and dose accuracy, emphasizing the need for dose evaluation when assessing the clinical applicability of sCT. SynthRAD2023 facilitated the investigation and benchmarking of sCT generation techniques, providing insights for developing MRI-only and CBCT-based adaptive radiotherapy. It showcased the growing capacity of deep learning to produce high-quality sCT, reducing reliance on conventional CT for treatment planning.
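The structural similarity indices reported by SynthRAD2023 come from the standard windowed SSIM; as a simplified sketch of the underlying formula (a single global window rather than the usual 11x11 sliding window the challenge evaluation would use), with hypothetical inputs:

```python
import numpy as np

def global_ssim(x, y, data_range=1.0):
    """Global SSIM: the SSIM formula applied once over the whole image.

    A simplification of windowed SSIM; constants follow the common
    K1=0.01, K2=0.03 convention.
    """
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float(((2 * mx * my + c1) * (2 * cov + c2))
                 / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))

rng = np.random.default_rng(0)
a = rng.random((64, 64))
assert abs(global_ssim(a, a) - 1.0) < 1e-9  # identical images -> SSIM of 1
```

The challenge's finding that image similarity did not correlate with dose accuracy is worth noting here: a high SSIM alone does not guarantee a clinically acceptable plan.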


Subjects
Cone-Beam Computed Tomography; Magnetic Resonance Imaging; Radiotherapy Planning, Computer-Assisted; Humans; Cone-Beam Computed Tomography/methods; Radiotherapy Planning, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Tomography, X-Ray Computed/methods; Radiotherapy Dosage; Neoplasms/radiotherapy; Neoplasms/diagnostic imaging; Radiotherapy, Image-Guided/methods
3.
Phys Med Biol ; 69(20)2024 Oct 04.
Article in English | MEDLINE | ID: mdl-39299273

ABSTRACT

Objective. Cachexia is a devastating condition, characterized by involuntary loss of muscle mass with or without loss of adipose tissue mass. It affects more than half of patients with lung cancer, diminishing treatment effects and increasing mortality. Cone-beam computed tomography (CBCT) images, routinely acquired during radiotherapy treatment, might contain valuable anatomical information for monitoring body composition changes associated with cachexia. For this purpose, we propose an automatic artificial intelligence (AI)-based workflow, consisting of CBCT-to-CT conversion followed by segmentation of the pectoralis muscles. Approach. Data from 140 stage III non-small cell lung cancer patients were used. Two deep learning models, cycle-consistent generative adversarial network (CycleGAN) and contrastive unpaired translation (CUT), were used for unpaired training of CBCT-to-CT conversion, to generate synthetic CT (sCT) images. The no-new U-Net (nnU-Net) model was used for automatic pectoralis muscle segmentation. To evaluate tissue segmentation performance in the absence of ground truth labels, an uncertainty metric (UM) based on Monte Carlo dropout was developed and validated. Main results. Both CycleGAN and CUT restored the Hounsfield unit fidelity of the CBCT images relative to the planning CT (pCT) images and visibly reduced streaking artefacts. The nnU-Net model achieved a Dice similarity coefficient (DSC) of 0.93, 0.94, and 0.92 for the CT, sCT, and CBCT images, respectively, on an independent test set. The UM showed a high correlation with DSC, with a correlation coefficient of -0.84 for the pCT dataset and -0.89 for the sCT dataset. Significance. This paper shows a proof of concept for automatic AI-based monitoring of the pectoralis muscle area of lung cancer patients during radiotherapy treatment based on CBCT images, which provides an unprecedented time resolution for tracking muscle mass loss during cachexia progression.
Ultimately, the proposed workflow could provide valuable information for early intervention in cachexia, ideally resulting in improved cancer treatment outcomes.
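The paper's Monte Carlo dropout uncertainty metric rests on a simple idea: keep dropout active at inference, run several stochastic forward passes, and measure how much the per-voxel predictions disagree. A toy NumPy sketch of that aggregation step (the network itself and the exact UM definition in the paper are not reproduced here; the stacked probability maps are simulated):

```python
import numpy as np

def uncertainty_metric(prob_stack):
    """Toy MC-dropout uncertainty: prob_stack has shape (n_passes, H, W)
    of foreground probabilities from stochastic forward passes.
    Returns the mean per-pixel standard deviation as a scalar UM."""
    return float(prob_stack.std(axis=0).mean())

rng = np.random.default_rng(1)
# Simulated passes: a confident segmentation vs. one where passes disagree
confident = np.clip(0.9 + 0.01 * rng.standard_normal((20, 32, 32)), 0, 1)
uncertain = rng.random((20, 32, 32))
assert uncertainty_metric(confident) < uncertainty_metric(uncertain)
```

The negative correlation the authors report between UM and DSC is exactly what this construction aims for: high disagreement across passes flags segmentations that are likely to be poor, without needing ground truth labels.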


Subjects
Automation; Cachexia; Cone-Beam Computed Tomography; Lung Neoplasms; Workflow; Humans; Lung Neoplasms/radiotherapy; Lung Neoplasms/diagnostic imaging; Lung Neoplasms/complications; Cachexia/diagnostic imaging; Cachexia/radiotherapy; Cachexia/etiology; Image Processing, Computer-Assisted/methods; Deep Learning; Carcinoma, Non-Small-Cell Lung/radiotherapy; Carcinoma, Non-Small-Cell Lung/diagnostic imaging; Carcinoma, Non-Small-Cell Lung/complications
4.
Z Med Phys ; 2023 Aug 01.
Article in English | MEDLINE | ID: mdl-37537099

ABSTRACT

The use of synthetic CT (sCT) in the radiotherapy workflow would reduce costs and scan time while removing the uncertainties of working with both MR and CT modalities. The performance of deep learning (DL) solutions for sCT generation is steadily increasing; however, most proposed methods were trained and validated on private datasets of a single contrast from a single scanner. Such solutions might not perform equally well on other datasets, limiting their general usability and therefore their value. Additionally, functional evaluations of sCTs, such as dosimetric comparisons with CT-based dose calculations, better show the impact of the methods, but these evaluations are more labor-intensive than pixel-wise metrics. To improve the generalization of an sCT model, we propose to incorporate a pre-trained DL model that pre-processes the input MR images by generating artificial proton density, T1, and T2 maps (i.e., contrast-independent quantitative maps), which are then used for sCT generation. Using a dataset of only T2w MR images, the robustness of this approach towards input MR contrasts is compared to that of a model trained on the MR images directly. We evaluate the generated sCTs using pixel-wise metrics and mean radiological depths, the latter as an approximation of the mean delivered dose. On T2w images acquired with the same settings as the training dataset, there was no significant difference between the performance of the models. However, when evaluated on T1w images, and on a wide range of other contrasts and scanners from both public and private datasets, our approach outperforms the baseline model. In summary, using a dataset of T2w MR images, our proposed model employs synthetic quantitative maps to generate sCT images, improving generalization towards other contrasts. Our code and trained models are publicly available.
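The radiological depth used above as a dose proxy accumulates tissue density along a ray through the volume. A rough sketch of the idea, assuming a crude linear HU-to-relative-electron-density mapping (real systems use a scanner-specific calibration curve, and the paper's exact computation may differ):

```python
import numpy as np

def radiological_depth(hu_volume, spacing_mm, axis=0):
    """Cumulative water-equivalent depth along one axis, in mm.

    Uses an illustrative linear HU -> relative electron density mapping
    (rho = 1 + HU/1000, clipped at 0); clinical dose engines use a
    measured calibration curve instead.
    """
    rho = np.clip(1.0 + hu_volume / 1000.0, 0.0, None)
    return np.cumsum(rho, axis=axis) * spacing_mm

# 1D toy ray: 10 mm of air, then 10 mm of water, with 1 mm voxels
ray = np.array([-1000.0] * 10 + [0.0] * 10)
depth = radiological_depth(ray, spacing_mm=1.0)
assert depth[9] == 0.0    # air contributes nothing
assert depth[-1] == 10.0  # 10 mm of water-equivalent path
```

Comparing mean radiological depths of an sCT against the reference CT is much cheaper than a full dose recalculation, which is precisely why the authors use it as an approximation.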

5.
Comput Med Imaging Graph ; 107: 102227, 2023 07.
Article in English | MEDLINE | ID: mdl-37167815

ABSTRACT

Generation of computed tomography (CT) images from magnetic resonance (MR) images using deep learning methods has recently demonstrated promise in improving MR-guided radiotherapy and PET/MR imaging. PURPOSE: To investigate the performance of unsupervised training using a large number of unpaired data sets, as well as the potential gain in performance after fine-tuning with supervised training using spatially registered data sets, in generation of synthetic computed tomography (sCT) from magnetic resonance (MR) images. MATERIALS AND METHODS: A cycleGAN method consisting of two generators (residual U-Net) and two discriminators (patchGAN) was used for unsupervised training. Unsupervised training utilized unpaired T1-weighted MR and CT images (2061 sets for each modality). Five supervised models were then fine-tuned starting with the generator of the unsupervised model for 1, 10, 25, 50, and 100 pairs of spatially registered MR and CT images. Four supervised training models were also trained from scratch for 10, 25, 50, and 100 pairs of spatially registered MR and CT images using only the residual U-Net generator. All models were evaluated on a holdout test set of spatially registered images from 253 patients, including 30 with significant pathology. sCT images were compared against the acquired CT images using mean absolute error (MAE), Dice coefficient, and structural similarity index (SSIM). sCT images from 60 test subjects generated by the unsupervised model and by the most accurate of the fine-tuned and supervised models were qualitatively evaluated by a radiologist. RESULTS: While unsupervised training produced realistic-appearing sCT images, the addition of even one set of registered images improved quantitative metrics. Adding more paired data sets to the training further improved image quality, with the best results obtained using the highest number of paired data sets (n=100).
Supervised training was found to be superior to unsupervised training, while fine-tuned training showed no clear benefit over supervised training alone, regardless of the training sample size. CONCLUSION: Supervised learning (using either fine-tuning or full supervision) leads to significantly higher quantitative accuracy in the generation of sCT from MR images. However, fine-tuned training that combined a large number of unpaired image sets with registered pairs was generally no better than supervised learning using registered image sets alone, suggesting that well-registered paired data sets matter more for training than a large set of unpaired data.
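Among the evaluation metrics above, the Dice coefficient measures overlap between binary masks (for sCT evaluation, typically tissue masks such as thresholded bone). A minimal sketch, with the 200 HU bone threshold being a hypothetical illustration rather than the paper's choice:

```python
import numpy as np

def dice(a, b, eps=1e-8):
    """Dice coefficient between two boolean masks: 2|A∩B| / (|A|+|B|)."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return float(2.0 * inter / (a.sum() + b.sum() + eps))

# Toy bone masks, e.g. obtained by thresholding at a hypothetical 200 HU
ct_bone = np.array([[1, 1, 0, 0]], dtype=bool)
sct_bone = np.array([[1, 0, 0, 0]], dtype=bool)
assert abs(dice(ct_bone, sct_bone) - 2 / 3) < 1e-6  # 2*1 / (2+1)
```

Unlike MAE, Dice is insensitive to the magnitude of HU errors inside a correctly classified region, which is why studies like this one report both.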


Subjects
Image Processing, Computer-Assisted; Radiotherapy, Image-Guided; Humans; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Tomography, X-Ray Computed; Magnetic Resonance Spectroscopy
6.
Comput Biol Med ; 162: 107054, 2023 08.
Article in English | MEDLINE | ID: mdl-37290389

ABSTRACT

Synthesizing computed tomography (CT) images from magnetic resonance imaging (MRI) data can provide the necessary electron density information for accurate dose calculation in the treatment planning of MRI-guided radiation therapy (MRIgRT). Inputting multimodality MRI data can provide sufficient information for accurate CT synthesis; however, obtaining the necessary number of MRI modalities is clinically expensive and time-consuming. In this study, we propose a deep learning framework based on synchronous construction of multimodality MRI from a single T1-weighted (T1) image for MRIgRT synthetic CT (sCT) image generation. The network is based on a generative adversarial network with sequential subtasks: intermediately generating synthetic MRIs and jointly generating the sCT image from the single T1 MRI. It contains a multitask generator and a multibranch discriminator, where the generator consists of a shared encoder and a split multibranch decoder. Specific attention modules are designed within the generator for feasible high-dimensional feature representation and fusion. Fifty patients with nasopharyngeal carcinoma who had undergone radiotherapy and had CT and sufficient MRI modalities scanned (5550 image slices for each modality) were used in the experiment. Results showed that our proposed network outperforms state-of-the-art sCT generation methods, with the lowest MAE and NRMSE and comparable PSNR and SSIM. Our proposed network exhibits performance comparable or even superior to the multimodality MRI-based generation method although it takes only a single T1 MRI image as input, thereby providing a more effective and economical solution for the laborious and high-cost generation of sCT images in clinical applications.
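Training a generator on sequential subtasks (intermediate synthetic MRIs plus the final sCT) implies a weighted multitask objective. As a generic sketch of that pattern only, with illustrative L1 terms and weights that are not the paper's actual formulation:

```python
import numpy as np

def l1(pred, target):
    """Mean absolute error between two arrays."""
    return float(np.abs(pred - target).mean())

def multitask_loss(pred_mrs, true_mrs, pred_sct, true_sct,
                   w_mr=0.5, w_sct=1.0):
    """Weighted sum of the intermediate MR-synthesis L1 losses and the
    final sCT L1 loss. Weights w_mr and w_sct are hypothetical."""
    mr_term = sum(l1(p, t) for p, t in zip(pred_mrs, true_mrs))
    return w_mr * mr_term + w_sct * l1(pred_sct, true_sct)

# Toy check: one intermediate MR off by 1 everywhere, sCT off by 2
mr_pred, mr_true = [np.ones((2, 2))], [np.zeros((2, 2))]
sct_pred, sct_true = np.full((2, 2), 2.0), np.zeros((2, 2))
assert abs(multitask_loss(mr_pred, mr_true, sct_pred, sct_true) - 2.5) < 1e-9
```

In practice the weighting controls how much the intermediate MR subtasks regularize the shared encoder versus the final sCT target.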


Subjects
Deep Learning; Humans; Tomography, X-Ray Computed/methods; Magnetic Resonance Imaging/methods; Multimodal Imaging; Radiotherapy Planning, Computer-Assisted/methods; Image Processing, Computer-Assisted/methods
7.
Phys Med Biol ; 66(11)2021 05 31.
Article in English | MEDLINE | ID: mdl-34061043

ABSTRACT

Adaptive radiation therapy (ART) is applied to account for anatomical variations observed over the treatment course. Daily or weekly cone-beam computed tomography (CBCT) is commonly used in the clinic for patient positioning, but CBCT's inaccuracy in Hounsfield units (HU) prevents its application to dose calculation and treatment planning. Adaptive re-planning can be performed by deformably registering the planning CT (pCT) to CBCT. However, scattering artifacts and noise in CBCT decrease the accuracy of deformable registration and introduce uncertainty into the treatment plan. Hence, generating from CBCT a synthetic CT (sCT) that has the same anatomical structure as CBCT but accurate HU values is desirable for ART. We proposed an unsupervised style-transfer-based approach to generate sCT from CBCT and pCT. Unsupervised learning was desired because exactly matched CBCT and CT are rarely available, even when they are taken a few minutes apart. In the proposed model, CBCT and pCT are two inputs that provide anatomical structure and accurate HU information, respectively. The training objective function is designed to simultaneously minimize (1) the contextual loss between sCT and CBCT, to maintain the content and structure of CBCT in the sCT, and (2) the style loss between sCT and pCT, to achieve pCT-like image quality in the sCT. We used CBCT and pCT images of 114 patients to train and validate the designed model, and another 29 independent patient cases to test the model's effectiveness. We quantitatively compared the resulting sCT with the original CBCT using the deformed same-day pCT as reference. The structural similarity index, peak signal-to-noise ratio, and mean absolute error in HU of the sCT were 0.9723, 33.68, and 28.52, respectively, while those of the CBCT were 0.9182, 29.67, and 49.90. We have demonstrated the effectiveness of the proposed model in using CBCT and pCT to synthesize CT-quality images.
This model may permit using CBCT for advanced applications such as adaptive treatment planning.
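The two-term objective described above (a content term tying sCT to CBCT, a style term tying sCT to pCT) mirrors classic neural style transfer. A schematic sketch of that structure with a Gram-matrix style term; the content term here is a plain MSE stand-in for the paper's contextual loss, and the weight is hypothetical:

```python
import numpy as np

def gram(features):
    """Gram matrix of feature maps with shape (C, H, W)."""
    c = features.shape[0]
    f = features.reshape(c, -1)
    return f @ f.T / f.shape[1]

def transfer_loss(feat_sct, feat_cbct, feat_pct, lam=10.0):
    """Content term ties sCT features to CBCT (anatomy); style term ties
    Gram statistics to pCT (image quality). lam is illustrative, and MSE
    stands in for the paper's contextual loss."""
    content = float(np.mean((feat_sct - feat_cbct) ** 2))
    style = float(np.mean((gram(feat_sct) - gram(feat_pct)) ** 2))
    return content + lam * style

# If all three feature maps coincide, both terms vanish
f = np.ones((2, 3, 3))
assert transfer_loss(f, f, f) == 0.0
```

The key design point is that the two reference images play different roles: structure is never pulled toward the (possibly misaligned) pCT anatomy, only toward its intensity statistics.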


Subjects
Deep Learning; Spiral Cone-Beam Computed Tomography; Cone-Beam Computed Tomography; Humans; Radiotherapy Planning, Computer-Assisted
8.
Quant Imaging Med Surg ; 11(12): 4820-4834, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34888192

ABSTRACT

BACKGROUND: Cone-beam computed tomography (CBCT) plays a key role in image-guided radiotherapy (IGRT); however, its poor image quality limits its clinical application. In this study, we developed a deep-learning-based approach to translate CBCT images to synthetic CT (sCT) images that preserve both CT image quality and CBCT anatomical structures. METHODS: A novel synthetic CT generative adversarial network (sCTGAN) was proposed for CBCT-to-CT translation via disentangled representation. Disentangled representation was employed to extract the anatomical information shared by the CBCT and CT image domains. On-board CBCT and planning CT of 40 patients were used for network learning, and those of another 12 patients were used for testing. The accuracy of our network was quantitatively evaluated using a series of statistical metrics, including peak signal-to-noise ratio (PSNR), mean structural similarity index (SSIM), mean absolute error (MAE), and root-mean-square error (RMSE). The effectiveness of our network was compared against three state-of-the-art CycleGAN-based methods. RESULTS: The PSNR, SSIM, MAE, and RMSE between sCT generated by sCTGAN and the deformed planning CT (dpCT) were 34.12 dB, 0.86, 32.70 HU, and 60.53 HU, while the corresponding values between the original CBCT and dpCT were 28.67 dB, 0.64, 70.56 HU, and 112.13 HU. The RMSE (60.53 ± 14.38 HU) of sCT generated by sCTGAN was lower than that of sCT generated by all three comparison methods (72.40 ± 16.03 HU by CycleGAN, 71.60 ± 15.09 HU by CycleGAN-Unet512, 64.93 ± 14.33 HU by CycleGAN-AG). CONCLUSIONS: The sCT generated by our sCTGAN network was closer to the ground truth (dpCT) than that of all three comparison CycleGAN-based methods. It provides an effective way to generate high-quality sCT, which has wide application in IGRT and adaptive radiotherapy.
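The RMSE and PSNR values reported above are linked: PSNR in dB is derived from RMSE relative to an assumed intensity range. A quick NumPy sketch (the 2000 HU data range is a hypothetical choice; papers vary in how they define it, which affects absolute PSNR values):

```python
import numpy as np

def rmse_hu(sct, ref):
    """Root-mean-square error in HU."""
    return float(np.sqrt(np.mean((sct - ref) ** 2)))

def psnr_db(sct, ref, data_range=2000.0):
    """PSNR in dB; data_range is the assumed HU span (illustrative)."""
    return float(20 * np.log10(data_range / rmse_hu(sct, ref)))

ref = np.zeros((4, 4))
sct = np.full((4, 4), 60.0)  # uniform 60 HU offset
assert rmse_hu(sct, ref) == 60.0
assert abs(psnr_db(sct, ref) - 20 * np.log10(2000 / 60)) < 1e-9
```

Because PSNR depends on the chosen data range, RMSE in HU is the more directly comparable figure across studies.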

9.
Med Phys ; 47(3): 1115-1125, 2020 Mar.
Article in English | MEDLINE | ID: mdl-31853974

ABSTRACT

PURPOSE: Cone-beam computed tomography (CBCT) scanning is used daily or weekly (i.e., on-treatment CBCT) for accurate patient setup in image-guided radiotherapy. However, inaccuracy of CT numbers prevents CBCT from performing advanced tasks such as dose calculation and treatment planning. Motivated by the promising performance of deep learning in medical imaging, we propose a deep U-net-based approach that synthesizes CT-like images with accurate numbers from planning CT, while keeping the same anatomical structure as on-treatment CBCT. METHODS: We formulated the CT synthesis problem under a deep learning framework, where a deep U-net architecture was used to take advantage of the anatomical structure of on-treatment CBCT and image intensity information of planning CT. U-net was chosen because it exploits both global and local features in the image spatial domain, matching our task to suppress global scattering artifacts and local artifacts such as noise in CBCT. To train the synthetic CT generation U-net (sCTU-net), we include on-treatment CBCT and initial planning CT of 37 patients (30 for training, seven for validation) as the input. Additional replanning CT images acquired on the same day as CBCT after deformable registration are utilized as the corresponding reference. To demonstrate the effectiveness of the proposed sCTU-net, we use another seven independent patient cases (560 slices) for testing. RESULTS: We quantitatively compared the resulting synthetic CT (sCT) with the original CBCT image using deformed same-day pCT images as reference. The averaged accuracy measured by mean absolute error (MAE) between sCT and reference CT (rCT) on testing data is 18.98 HU, while MAE between CBCT and rCT is 44.38 HU. CONCLUSIONS: The proposed sCTU-net can synthesize CT-quality images with accurate CT numbers from on-treatment CBCT and planning CT. This potentially enables advanced CBCT applications for adaptive treatment planning.


Subjects
Cone-Beam Computed Tomography; Deep Learning; Image Processing, Computer-Assisted/methods; Humans
10.
Med Phys ; 47(6): 2472-2483, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32141618

ABSTRACT

PURPOSE: Current clinical application of cone-beam CT (CBCT) is limited to patient setup. Imaging artifacts and Hounsfield unit (HU) inaccuracy make CBCT-based adaptive planning presently impractical. In this study, we developed a deep-learning-based approach to improve CBCT image quality and HU accuracy for potential extended clinical use in CBCT-guided pancreatic adaptive radiotherapy. METHODS: Thirty patients previously treated with pancreas SBRT were included. The CBCT acquired prior to the first fraction of treatment was registered to the planning CT for training and generation of synthetic CT (sCT). A self-attention cycle generative adversarial network (cycleGAN) was used to generate CBCT-based sCT. For the cohort of 30 patients, the CT-based contours and treatment plans were transferred to the first-fraction CBCTs and sCTs for dosimetric comparison. RESULTS: In the abdomen, the mean absolute error (MAE) between CT and sCT was 56.89 ± 13.84 HU, compared to 81.06 ± 15.86 HU between CT and the raw CBCT. No significant differences (P > 0.05) were observed in the PTV and OAR dose-volume histogram (DVH) metrics between the CT- and sCT-based plans, while significant differences (P < 0.05) were found between the CT- and the CBCT-based plans. CONCLUSIONS: The image similarity and dosimetric agreement between the CT- and sCT-based plans validated the dose calculation accuracy of the sCT. The CBCT-based sCT approach can potentially increase treatment precision and thus minimize gastrointestinal toxicity.
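The DVH metrics compared above summarize the dose a structure receives; the most common family, V_D ("fraction of the structure receiving at least D"), reduces to a thresholded mean over the structure mask. A minimal sketch with toy values (not the paper's planning-system computation):

```python
import numpy as np

def v_dose(dose, mask, threshold):
    """Fraction of the structure volume receiving >= threshold.

    A V_D DVH metric, e.g. V20 for threshold=20 Gy, assuming a dose
    grid and a boolean structure mask on the same voxel grid."""
    vals = dose[mask]
    return float((vals >= threshold).mean())

# Toy dose grid (Gy) and a 4-voxel structure mask
dose = np.array([[10.0, 25.0], [30.0, 5.0]])
mask = np.ones_like(dose, dtype=bool)
assert v_dose(dose, mask, 20.0) == 0.5  # 2 of 4 voxels receive >= 20 Gy
```

Comparing such metrics between CT-based and sCT-based dose grids, as this study does, is a far more clinically meaningful test than HU similarity alone.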


Subjects
Spiral Cone-Beam Computed Tomography; Attention; Cone-Beam Computed Tomography; Humans; Pancreas; Radiotherapy Dosage; Radiotherapy Planning, Computer-Assisted
11.
Med Phys ; 47(2): 626-642, 2020 Feb.
Article in English | MEDLINE | ID: mdl-31733164

ABSTRACT

PURPOSE: To evaluate pix2pix and CycleGAN and to assess the effects of multiple combination strategies on accuracy for patch-based synthetic computed tomography (sCT) generation for magnetic resonance (MR)-only treatment planning in head and neck (HN) cancer patients. MATERIALS AND METHODS: Twenty-three deformably registered pairs of CT and mDixon FFE MR datasets from HN cancer patients treated at our institution were retrospectively analyzed to evaluate patch-based sCT accuracy via the pix2pix and CycleGAN models. To test the effects of overlapping sCT patches on estimations, we (a) trained the models for three orthogonal views to observe the effects of spatial context, (b) increased the effective set size by using per-epoch data augmentation, and (c) evaluated the performance of three different approaches for combining overlapping Hounsfield unit (HU) estimations for varied patch overlap parameters. Twelve of the twenty-three cases corresponded to a curated dataset previously used for atlas-based sCT generation and were used for training with leave-two-out cross-validation. Eight cases were used for independent testing and included previously unseen image features such as fused vertebrae, a small protruding bone, and tumors large enough to deform normal body contours. We analyzed the impact of MR image preprocessing, including histogram standardization and intensity clipping, on sCT generation accuracy. Effects of mDixon contrast (in-phase vs water) differences were tested with three additional cases. sCT generation accuracy was evaluated using mean absolute error (MAE) and mean error (ME) in HU between the plan CT and sCT images. Dosimetric accuracy was evaluated for all clinically relevant structures in the independent testing set, and digitally reconstructed radiographs (DRRs) were evaluated with respect to the plan CT images. RESULTS: The cross-validated MAEs for the whole-HN region using pix2pix and CycleGAN were 66.9 ± 7.3 vs 82.3 ± 6.4 HU, respectively.
On the independent testing set with additional artifacts and previously unseen image features, whole-HN region MAEs were 94.0 ± 10.6 and 102.9 ± 14.7 HU for pix2pix and CycleGAN, respectively. For patients with different tissue contrast (water mDixon MR images), the MAEs increased to 122.1 ± 6.3 and 132.8 ± 5.5 HU for pix2pix and CycleGAN, respectively. Our results suggest that combining overlapping sCT estimations at each voxel reduced both MAE and ME compared to single-view non-overlapping patch results. Absolute percent mean/max dose errors were 2% or less for the PTV and all clinically relevant structures in our independent testing set, including structures with image artifacts. Quantitative DRR comparison between planning CTs and sCTs showed agreement of bony region positions to <1 mm. CONCLUSIONS: The dosimetric and MAE based accuracy, along with the similarity between DRRs from sCTs, indicate that pix2pix and CycleGAN are promising methods for MR-only treatment planning for HN cancer. Our methods investigated for overlapping patch-based HU estimations also indicate that combining transformation estimations of overlapping patches is a potential method to reduce generation errors while also providing a tool to potentially estimate the MR to CT aleatoric model transformation uncertainty. However, because of small patient sample sizes, further studies are required.
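The simplest of the overlapping-patch combination strategies discussed above is voxel-wise averaging: accumulate every patch's HU estimate and a coverage count, then divide. A 2D NumPy sketch of that scheme (one of the paper's three approaches is averaging; the others are not reproduced here, and this layout is illustrative):

```python
import numpy as np

def combine_patches(patches, positions, out_shape, patch_size):
    """Average overlapping patch-wise HU estimates into one image.

    patches: list of (patch_size, patch_size) arrays.
    positions: top-left (row, col) corner of each patch in the output grid.
    """
    acc = np.zeros(out_shape)
    cnt = np.zeros(out_shape)
    for p, (r, c) in zip(patches, positions):
        acc[r:r + patch_size, c:c + patch_size] += p
        cnt[r:r + patch_size, c:c + patch_size] += 1
    cnt[cnt == 0] = 1  # leave uncovered voxels at zero
    return acc / cnt

# Two overlapping 2x2 patches on a 2x3 grid (hypothetical HU estimates)
p1 = np.full((2, 2), 100.0)
p2 = np.full((2, 2), 200.0)
out = combine_patches([p1, p2], [(0, 0), (0, 1)], (2, 3), 2)
assert out[0, 1] == 150.0  # overlap column averages the two estimates
assert out[0, 0] == 100.0 and out[0, 2] == 200.0
```

The spread among overlapping estimates at a voxel also gives a natural handle on the transformation uncertainty the authors mention, since disagreement between patches can be measured directly.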


Subjects
Head and Neck Neoplasms/diagnostic imaging; Magnetic Resonance Imaging/methods; Adult; Aged; Deep Learning; Female; Humans; Middle Aged; Models, Theoretical; Pregnancy; Retrospective Studies; Tomography, X-Ray Computed