Results 1 - 13 of 13
1.
Front Neurol ; 14: 1039693, 2023.
Article in English | MEDLINE | ID: mdl-36895903

ABSTRACT

Collateral circulation results from specialized anastomotic channels that can supply oxygenated blood to regions whose blood flow is compromised by arterial obstruction. The quality of collateral circulation has been established as a key factor in determining the likelihood of a favorable clinical outcome and strongly influences the choice of a stroke care model. Though many imaging and grading methods exist for quantifying collateral blood flow, the actual grading is mostly done through manual inspection. This approach has two main drawbacks: it is time-consuming, and the final grade assigned to a patient is prone to bias and inconsistency depending on the experience level of the clinician. We present a multi-stage deep learning approach to predict collateral flow grading in stroke patients based on radiomic features extracted from MR perfusion data. First, we formulate region-of-interest detection as a reinforcement learning problem and train a deep learning network to automatically detect the occluded region within the 3D MR perfusion volumes. Second, we extract radiomic features from the obtained region of interest through local image descriptors and denoising autoencoders. Finally, we apply a convolutional neural network and other machine learning classifiers to the extracted radiomic features to automatically predict the collateral flow grading of the given patient volume as one of three severity classes: no flow (0), moderate flow (1), and good flow (2). Results from our experiments show an overall accuracy of 72% in the three-class prediction task. With an inter-observer agreement of 16% and a maximum intra-observer agreement of 74% in a similar experiment, our automated deep learning approach demonstrates performance comparable to expert grading, is faster than visual inspection, and eliminates grading bias.
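The reported 72% figure is the plain three-class overall accuracy: the fraction of volumes whose predicted grade matches the expert grade. A minimal sketch in plain Python, using hypothetical grade lists (not data from the study):

```python
def overall_accuracy(predicted, expert):
    """Fraction of volumes whose predicted collateral grade matches the
    expert grade (grades: 0 = no flow, 1 = moderate, 2 = good)."""
    if len(predicted) != len(expert) or not expert:
        raise ValueError("need two equal-length, non-empty grade lists")
    matches = sum(p == e for p, e in zip(predicted, expert))
    return matches / len(expert)

# Hypothetical grades for eight volumes, not data from the study
predicted = [0, 1, 2, 2, 1, 0, 2, 1]
expert    = [0, 1, 2, 1, 1, 0, 2, 2]
print(overall_accuracy(predicted, expert))  # 6 of 8 match: 0.75
```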

2.
Eur J Nucl Med Mol Imaging ; 49(12): 4064-4072, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35771265

ABSTRACT

PURPOSE: Although treatment planning and individualized dose application are generally recommended for the emerging prostate-specific membrane antigen (PSMA)-targeted radioligand therapy (RLT), they remain difficult to implement in practice. In this study, we aimed to prove the concept of pretherapeutic prediction of dosimetry based on imaging and laboratory measurements taken before RLT. METHODS: Twenty-three patients with metastatic castration-resistant prostate cancer (mCRPC) treated with 177Lu-PSMA I&T RLT were included retrospectively. They had available pre-therapy 68Ga-PSMA-HBED-CC PET/CT and at least 3 planar and 1 SPECT/CT images for dosimetry. Overall, 43 cycles of 177Lu-PSMA I&T RLT were applied. Organ-based standardized uptake values (SUVs) were obtained from pre-therapy PET/CT scans. Patient dosimetry was calculated for the kidneys, liver, spleen, and salivary glands using Hermes Hybrid Dosimetry 4.0 from the planar and SPECT/CT images. Machine learning methods were explored for dose prediction from organ SUVs and laboratory measurements. The uncertainty of these dose predictions was compared with that of population-based dosimetry estimates. Mean absolute percentage error (MAPE) was used to assess the prediction uncertainty of estimated dosimetry. RESULTS: The best-performing machine learning method achieved a dosimetry prediction MAPE of 15.8 ± 13.2% for the kidneys, 29.6 ± 13.7% for the liver, 23.8 ± 13.1% for the salivary glands, and 32.1 ± 31.4% for the spleen. In contrast, prediction based on literature population means had a significantly larger MAPE (p < 0.01): 25.5 ± 17.3% for the kidneys, 139.1 ± 111.5% for the liver, 67.0 ± 58.3% for the salivary glands, and 54.1 ± 215.3% for the spleen. CONCLUSION: These preliminary results confirm the feasibility of pretherapeutic estimation of treatment dosimetry and its added value over empirical population-based estimation. The exploration of dose prediction may support the implementation of treatment planning for RLT.
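The study's uncertainty metric, mean absolute percentage error (MAPE), is a few lines of plain Python; the per-cycle dose values below are hypothetical, for illustration only:

```python
def mape(predicted, measured):
    """Mean absolute percentage error, in percent, of predicted vs. measured values."""
    if not measured or len(predicted) != len(measured):
        raise ValueError("need two equal-length, non-empty sequences")
    return 100.0 * sum(
        abs(p - m) / abs(m) for p, m in zip(predicted, measured)
    ) / len(measured)

# Hypothetical per-cycle kidney doses in Gy: predicted vs. measured dosimetry
predicted = [3.1, 2.6, 4.2]
measured = [3.0, 2.5, 4.0]
print(f"MAPE = {mape(predicted, measured):.1f}%")  # prints MAPE = 4.1%
```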


Subjects
Lutetium; Prostatic Neoplasms, Castration-Resistant; Dipeptides/therapeutic use; Heterocyclic Compounds, 1-Ring/therapeutic use; Humans; Lutetium/therapeutic use; Machine Learning; Male; Positron Emission Tomography Computed Tomography/methods; Prostate-Specific Antigen; Prostatic Neoplasms, Castration-Resistant/diagnostic imaging; Prostatic Neoplasms, Castration-Resistant/drug therapy; Prostatic Neoplasms, Castration-Resistant/radiotherapy; Retrospective Studies; Urea/analogs & derivatives
3.
Sci Data ; 8(1): 284, 2021 10 28.
Article in English | MEDLINE | ID: mdl-34711848

ABSTRACT

With the advent of deep learning algorithms, fully automated radiological image analysis is within reach. In spine imaging, several atlas- and shape-based as well as deep learning segmentation algorithms have been proposed, allowing for subsequent automated analysis of morphology and pathology. The first "Large Scale Vertebrae Segmentation Challenge" (VerSe 2019) showed that these perform well on normal anatomy but fail on variants not frequently present in the training dataset. Building on that experience, we report on the substantially enlarged VerSe 2020 dataset and results from the second iteration of the VerSe challenge (MICCAI 2020, Lima, Peru). VerSe 2020 comprises annotated spine computed tomography (CT) images from 300 subjects with 4142 fully visualized and annotated vertebrae, collected across multiple centres from four different scanner manufacturers and enriched with cases that exhibit anatomical variants such as enumeration abnormalities (n = 77) and transitional vertebrae (n = 161). Metadata include vertebral labelling information, voxel-level segmentation masks obtained with a human-machine hybrid algorithm, and anatomical ratings, enabling the development and benchmarking of robust and accurate segmentation algorithms.


Subjects
Spine/anatomy & histology; Tomography, X-Ray Computed; Adult; Aged; Algorithms; Humans; Image Processing, Computer-Assisted; Male; Middle Aged; Spine/diagnostic imaging; Tomography, X-Ray Computed/instrumentation
4.
Med Image Anal ; 73: 102166, 2021 10.
Article in English | MEDLINE | ID: mdl-34340104

ABSTRACT

Vertebral labelling and segmentation are two fundamental tasks in an automated spine processing pipeline. Reliable and accurate processing of spine images is expected to benefit clinical decision support systems for diagnosis, surgery planning, and population-based analysis of spine and bone health. However, designing automated algorithms for spine processing is challenging, predominantly due to considerable variations in anatomy and acquisition protocols and a severe shortage of publicly available data. Addressing these limitations, the Large Scale Vertebrae Segmentation Challenge (VerSe) was organised in conjunction with the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) in 2019 and 2020, with a call for algorithms tackling the labelling and segmentation of vertebrae. Two datasets containing a total of 374 multi-detector CT scans from 355 patients were prepared, and 4505 vertebrae were individually annotated at voxel level by a human-machine hybrid algorithm (https://osf.io/nqjyw/, https://osf.io/t98fz/). A total of 25 algorithms were benchmarked on these datasets. In this work, we present the results of this evaluation and further investigate performance variation at the vertebra level, the scan level, and across different fields of view. We also evaluate the generalisability of the approaches to an implicit domain shift in the data by evaluating the top-performing algorithms of one challenge iteration on data from the other iteration. The principal takeaway from VerSe is that the performance of an algorithm in labelling and segmenting a spine scan hinges on its ability to correctly identify vertebrae in cases of rare anatomical variations. The VerSe content and code can be accessed at: https://github.com/anjany/verse.
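Voxel-level segmentation benchmarks such as VerSe commonly score the overlap between a predicted and a reference mask with the Dice similarity coefficient; a minimal sketch over flattened binary masks:

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two equal-length binary voxel masks."""
    if len(mask_a) != len(mask_b):
        raise ValueError("masks must have the same number of voxels")
    intersection = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * intersection / total if total else 1.0  # both empty: perfect match

pred = [1, 1, 1, 0, 0]
ref  = [1, 1, 0, 0, 1]
print(dice(pred, ref))  # 2*2 / (3+3) ≈ 0.667
```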


Subjects
Benchmarking; Tomography, X-Ray Computed; Algorithms; Humans; Image Processing, Computer-Assisted; Spine/diagnostic imaging
5.
Eur Radiol ; 31(8): 6069-6077, 2021 Aug.
Article in English | MEDLINE | ID: mdl-33507353

ABSTRACT

OBJECTIVES: To compare spinal bone measures derived from automatic and manual assessment in routine CT with dual-energy X-ray absorptiometry (DXA) in their association with prevalent osteoporotic vertebral fractures, using our fully automated framework ( https://anduin.bonescreen.de ) to assess various bone measures in clinical CT. METHODS: We included 192 patients (141 women, 51 men; age 70.2 ± 9.7 years) who had lumbar DXA and CT available (within 1 year). Automatic assessment of spinal bone measures in CT included segmentation of vertebrae using a convolutional neural network (CNN), reduction to the vertebral body, and extraction of bone mineral content (BMC), trabecular and integral volumetric bone mineral density (vBMD), and CT-based areal BMD (aBMD) using asynchronous calibration. Moreover, trabecular bone was manually sampled (manual vBMD). RESULTS: A total of 148 patients (77%) had vertebral fractures and significantly lower values in all bone measures compared to patients without fractures (p ≤ 0.001). Except for BMC, all CT-based measures performed significantly better as predictors of vertebral fractures than DXA (e.g., AUC = 0.885 for trabecular vBMD and AUC = 0.86 for integral vBMD vs. AUC = 0.668 for DXA aBMD; both p < 0.001). Age- and sex-adjusted associations with fracture status were strongest for manual vBMD (OR = 7.3, 95% CI 3.8-14.3), followed by automatically assessed trabecular vBMD (OR = 6.9, 95% CI 3.5-13.4) and integral vBMD (OR = 4.3, 95% CI 2.5-7.6). Diagnostic cutoffs of integral vBMD for osteoporosis (< 160 mg/cm3) or low bone mass (160 ≤ BMD < 190 mg/cm3) had sensitivity (84%/41%) and specificity (78%/95%) similar to trabecular vBMD. CONCLUSIONS: Fully automatic osteoporosis screening in routine CT of the spine is feasible. CT-based measures can better identify individuals with reduced bone mass who have suffered vertebral fractures than DXA.
KEY POINTS: • Opportunistic osteoporosis screening of spinal bone measures derived from clinical routine CT is feasible in a fully automatic fashion using a deep learning-driven framework ( https://anduin.bonescreen.de ). • Manually sampled volumetric BMD (vBMD) and automatically assessed trabecular and integral vBMD were the best predictors for prevalent vertebral fractures. • Except for bone mineral content, all CT-based bone measures performed significantly better than DXA-based measures. • We introduce diagnostic thresholds of integral vBMD for osteoporosis (< 160 mg/cm3) and low bone mass (160 ≤ BMD < 190 mg/cm3) with almost equal sensitivity and specificity compared to conventional thresholds of quantitative CT as proposed by the American College of Radiology (osteoporosis < 80 mg/cm3).
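The proposed integral vBMD cutoffs map directly onto a simple decision rule; the sketch below uses only the thresholds stated in the abstract (in mg/cm3) and is illustrative, not a clinical tool:

```python
def classify_integral_vbmd(vbmd):
    """Classify integral vBMD (mg/cm3) with the study's proposed cutoffs:
    < 160 osteoporosis, 160 to < 190 low bone mass, >= 190 normal."""
    if vbmd < 160:
        return "osteoporosis"
    if vbmd < 190:
        return "low bone mass"
    return "normal"

# Hypothetical patient values spanning the three categories
for value in (120, 175, 210):
    print(value, classify_integral_vbmd(value))
```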


Subjects
Osteoporosis; Spinal Fractures; Absorptiometry, Photon; Aged; Bone Density; Female; Humans; Lumbar Vertebrae/diagnostic imaging; Lumbar Vertebrae/injuries; Male; Middle Aged; Osteoporosis/complications; Osteoporosis/diagnostic imaging; Osteoporosis/epidemiology; Spinal Fractures/diagnostic imaging; Spinal Fractures/epidemiology; Tomography, X-Ray Computed
6.
Front Neurosci ; 14: 592352, 2020.
Article in English | MEDLINE | ID: mdl-33363452

ABSTRACT

We present DeepVesselNet, an architecture tailored to the challenges of extracting vessel trees and networks, and their corresponding features, from 3-D angiographic volumes using deep learning. We discuss the problems of low execution speed and high memory requirements associated with full 3-D networks, the high class imbalance arising from the low percentage (<3%) of vessel voxels, and the unavailability of accurately annotated 3-D training data, and offer solutions as the building blocks of DeepVesselNet. First, we formulate 2-D orthogonal cross-hair filters which make use of 3-D context information at a reduced computational burden. Second, we introduce a class-balancing cross-entropy loss function with false-positive rate correction to handle the class imbalance and high false positive rates associated with existing loss functions. Finally, we generate a synthetic dataset using a computational angiogenesis model capable of simulating vascular tree growth under physiological constraints on local network structure and topology, and use these data for transfer learning. We demonstrate performance on a range of angiographic volumes at different spatial scales, including clinical MRA data of the human brain as well as CTA microscopy scans of the rat brain. Our results show that cross-hair filters achieve over 23% improvement in speed, a lower memory footprint, and lower network complexity (which prevents overfitting), with accuracy comparable to full 3-D filters. Our class-balancing loss is crucial for training the network, and transfer learning with synthetic data is an efficient, robust, and generalizable approach, yielding a network that excels in a variety of angiography segmentation tasks. We observe that sub-sampling and max pooling layers may lead to a drop in performance in tasks that involve voxel-sized structures.
To this end, the DeepVesselNet architecture does not use any form of sub-sampling layer and works well for vessel segmentation, centerline prediction, and bifurcation detection. We make our synthetic training data publicly available, fostering future research, and serving as one of the first public datasets for brain vessel tree segmentation and analysis.
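The class-balancing idea can be sketched as a frequency-weighted binary cross-entropy. This is a simplified illustration: it weights each class by the inverse of its frequency but omits the paper's false-positive rate correction, and the probabilities below are hypothetical:

```python
import math

def class_balanced_bce(probs, labels):
    """Binary cross-entropy with inverse-frequency class weights, so the
    rare vessel voxels (label 1, <3% of voxels) are not swamped by background."""
    n_pos = sum(labels)
    n_neg = len(labels) - n_pos
    w_pos = len(labels) / (2.0 * n_pos)  # weight for vessel voxels
    w_neg = len(labels) / (2.0 * n_neg)  # weight for background voxels
    total = 0.0
    for p, y in zip(probs, labels):
        p = min(max(p, 1e-7), 1.0 - 1e-7)  # clamp for numerical stability
        total -= w_pos * y * math.log(p) + w_neg * (1 - y) * math.log(1.0 - p)
    return total / len(labels)

# One vessel voxel predicted at 0.9, one background voxel at 0.1
print(round(class_balanced_bce([0.9, 0.1], [1, 0]), 4))  # 0.1054
```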

7.
Front Neurosci ; 14: 125, 2020.
Article in English | MEDLINE | ID: mdl-32410929

ABSTRACT

Despite great advances in brain tumor segmentation and clear clinical need, translation of state-of-the-art computational methods into clinical routine and scientific practice remains a major challenge. Several factors impede successful implementation, including data standardization and preprocessing. However, these steps are pivotal for the deployment of state-of-the-art image segmentation algorithms. To overcome these issues, we present BraTS Toolkit, a holistic approach to brain tumor segmentation consisting of three components. First, the BraTS Preprocessor facilitates data standardization and preprocessing for researchers and clinicians alike. It covers the entire image analysis workflow prior to tumor segmentation, from image conversion and registration to brain extraction. Second, the BraTS Segmentor orchestrates BraTS brain tumor segmentation algorithms to generate fully automated segmentations. Finally, the BraTS Fusionator can combine the resulting candidate segmentations into consensus segmentations using fusion methods such as majority voting and iterative SIMPLE fusion. The capabilities of our tools are illustrated with a practical example to enable easy translation to clinical and scientific practice.

8.
Nat Methods ; 17(4): 442-449, 2020 04.
Article in English | MEDLINE | ID: mdl-32161395

ABSTRACT

Tissue clearing methods enable the imaging of biological specimens without sectioning. However, reliable and scalable analysis of large imaging datasets in three dimensions remains a challenge. Here we developed a deep learning-based framework to quantify and analyze brain vasculature, named Vessel Segmentation & Analysis Pipeline (VesSAP). Our pipeline uses a convolutional neural network (CNN) with a transfer learning approach for segmentation and achieves human-level accuracy. By using VesSAP, we analyzed the vascular features of whole C57BL/6J, CD1 and BALB/c mouse brains at the micrometer scale after registering them to the Allen mouse brain atlas. We report evidence of secondary intracranial collateral vascularization in CD1 mice and find reduced vascularization of the brainstem in comparison to the cerebrum. Thus, VesSAP enables unbiased and scalable quantifications of the angioarchitecture of cleared mouse brains and yields biological insights into the vascular function of the brain.


Subjects
Brain/blood supply; Machine Learning; Animals; Imaging, Three-Dimensional; Mice
9.
Eur J Nucl Med Mol Imaging ; 47(3): 603-613, 2020 03.
Article in English | MEDLINE | ID: mdl-31813050

ABSTRACT

PURPOSE: This study proposes an automated prostate cancer (PC) lesion characterization method based on a deep neural network to determine tumor burden on 68Ga-PSMA-11 PET/CT, with the goal of facilitating the optimization of PSMA-directed radionuclide therapy. METHODS: We collected 68Ga-PSMA-11 PET/CT images from 193 patients with metastatic PC at three medical centers. As proof of concept, we focused on the detection of pelvic bone and lymph node lesions. A deep neural network (triple-combining 2.5D U-Net) was developed for the automated characterization of these lesions. The proposed method simultaneously extracts features from the axial, coronal, and sagittal planes, which mimics the workflow of physicians and reduces computational and memory requirements. RESULTS: Across all labeled lesions, the network achieved 99% precision, 99% recall, and an F1 score of 99% for bone lesion detection, and 94% precision, 89% recall, and an F1 score of 92% for lymph node lesion detection. Segmentation accuracy was lower than detection accuracy. The performance of the network correlated with the amount of training data. CONCLUSION: We developed a deep neural network to automatically characterize PC lesions on 68Ga-PSMA-11 PET/CT. Preliminary tests within the pelvic area confirm the potential of deep learning methods. Increasing the amount of training data should further enhance the performance of the proposed method and may ultimately allow whole-body assessments.
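The reported F1 scores are the harmonic mean of precision and recall, which balances the two error types; a minimal sketch using the bone-lesion values from the abstract:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall (0 if both are 0)."""
    if precision + recall == 0:
        return 0.0
    return 2.0 * precision * recall / (precision + recall)

# Bone lesion detection: 99% precision and 99% recall give an F1 of 99%
print(f"{f1_score(0.99, 0.99):.2f}")  # prints 0.99
```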


Subjects
Positron Emission Tomography Computed Tomography; Prostatic Neoplasms; Edetic Acid/analogs & derivatives; Gallium Isotopes; Gallium Radioisotopes; Humans; Male; Neural Networks, Computer; Oligopeptides; Prostatic Neoplasms/diagnostic imaging
10.
J Nucl Med ; 60(9): 1277-1283, 2019 09.
Article in English | MEDLINE | ID: mdl-30850484

ABSTRACT

Our aim was to introduce and validate qPSMA, a semiautomatic software package for whole-body tumor burden assessment in prostate cancer patients using 68Ga-prostate-specific membrane antigen (PSMA)-11 PET/CT. Methods: qPSMA reads hybrid PET/CT images in DICOM format. Its pipeline was written in Python and C++. A bone mask based on CT and a normal-uptake mask including organs with physiologic 68Ga-PSMA11 uptake are automatically computed. An SUV threshold of 3 and a liver-based threshold are used to segment bone and soft-tissue lesions, respectively. Manual corrections can be applied using different tools. Multiple output parameters are computed: PSMA ligand-positive tumor volume (PSMA-TV), PSMA ligand-positive total lesion (PSMA-TL), PSMA SUVmean, and PSMA SUVmax. Twenty 68Ga-PSMA11 PET/CT datasets were used to validate and evaluate the performance characteristics of qPSMA. Four analyses were performed: validation of the semiautomatic algorithm for liver background activity determination, assessment of intra- and interobserver variability, validation of data from qPSMA by comparison with Syngo.via, and assessment of computational time together with comparison of PSMA PET-derived parameters with serum prostate-specific antigen. Results: Automatic liver background calculation resulted in a mean relative difference of 0.74% (intraclass correlation coefficient [ICC] 0.996; 95% CI 0.989-0.998) compared with METAVOL. Intra- and interobserver variability analyses showed high agreement (all ICCs > 0.990). Quantitative output parameters were compared for 68 lesions. Paired t testing showed no significant differences between the values obtained with the two software packages. The ICC estimates obtained for PSMA-TV, PSMA-TL, SUVmean, and SUVmax were 1.000 (95% CI 1.000-1.000), 1.000 (95% CI 1.000-1.000), 0.995 (95% CI 0.992-0.997), and 0.999 (95% CI 0.999-1.000), respectively. The first and second reads for intraobserver variability took mean computational times of 13.63 min (range, 8.22-25.45 min) and 9.27 min (range, 8.10-12.15 min), respectively (P = 0.001). Highly significant correlations were found between serum prostate-specific antigen and both PSMA-TV (r = 0.72, P < 0.001) and PSMA-TL (r = 0.66, P = 0.002). Conclusion: Semiautomatic analysis of whole-body tumor burden on 68Ga-PSMA11 PET/CT is feasible. qPSMA is a robust software package that can help physicians quantify tumor load in heavily metastasized prostate cancer patients.
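The two-threshold segmentation rule (fixed SUV threshold of 3 for bone, a liver-based threshold for soft tissue) can be sketched voxelwise. Note the abstract does not give the liver-based formula, so a precomputed threshold is passed in here as an assumption, and all SUVs are hypothetical:

```python
def segment_lesions(suv_values, bone_mask, liver_threshold):
    """Toy voxelwise lesion segmentation following qPSMA's two thresholds:
    bone voxels above SUV 3, soft-tissue voxels above a liver-based threshold.
    The liver-based threshold is assumed to be precomputed elsewhere."""
    BONE_SUV_THRESHOLD = 3.0
    return [
        (suv > BONE_SUV_THRESHOLD) if in_bone else (suv > liver_threshold)
        for suv, in_bone in zip(suv_values, bone_mask)
    ]

# Three voxels: two in bone, one in soft tissue (hypothetical SUVs)
print(segment_lesions([4.0, 2.0, 5.0], [True, True, False], 4.5))
# [True, False, True]
```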


Subjects
Image Processing, Computer-Assisted/methods; Membrane Glycoproteins/chemistry; Organometallic Compounds/chemistry; Positron Emission Tomography Computed Tomography; Prostatic Neoplasms/diagnostic imaging; Tumor Burden; Whole Body Imaging; Algorithms; Biomarkers/metabolism; Bone and Bones/diagnostic imaging; Gallium Isotopes; Gallium Radioisotopes; Humans; Ligands; Liver/diagnostic imaging; Male; Observer Variation; Pattern Recognition, Automated; Programming Languages; Reproducibility of Results; Software; Workflow
11.
IEEE J Biomed Health Inform ; 23(4): 1363-1373, 2019 07.
Article in English | MEDLINE | ID: mdl-30629519

ABSTRACT

Accurate and automatic organ segmentation is critical for computer-aided analysis supporting clinical decision making and treatment planning. State-of-the-art approaches have achieved remarkable segmentation accuracy on large organs such as the liver and kidneys. However, most of these methods do not perform well on small organs such as the pancreas, gallbladder, and adrenal glands, especially when sufficient training data are lacking. This paper presents an automatic approach for small organ segmentation with limited training data using two cascaded steps: localization and segmentation. The localization stage extracts a region of interest after registering the images to a common template; in the segmentation stage, a voxel-wise label map of the extracted region of interest is obtained and then transformed back to the original space. For the localization step, we propose a graph-based groupwise image registration method to build the registration template, minimizing potential bias and avoiding a fuzzy template. More importantly, a novel knowledge-aided convolutional neural network is proposed to improve segmentation accuracy in the second stage. This network is flexible and can combine deep learning with traditional methods, achieving better segmentation than either approach alone. The ISBI 2015 VISCERAL challenge dataset is used to evaluate the presented approach. Experimental results demonstrate that the proposed method outperforms cutting-edge deep learning approaches, traditional forest-based approaches, and multi-atlas approaches in the segmentation of small organs.


Subjects
Image Interpretation, Computer-Assisted/methods; Neural Networks, Computer; Adrenal Glands/diagnostic imaging; Algorithms; Fuzzy Logic; Gallbladder/diagnostic imaging; Humans; Pancreas/diagnostic imaging; Tomography, X-Ray Computed
12.
Annu Int Conf IEEE Eng Med Biol Soc ; 2019: 951-954, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31946051

ABSTRACT

The emerging PSMA-targeted radionuclide therapy provides an effective treatment for advanced metastatic prostate cancer. To optimize the therapeutic effect and maximize the theranostic benefit, target lesions need to be identified and quantified prior to treatment. However, this is extremely challenging, as a large number of lesions of heterogeneous size and uptake may be distributed across a variety of anatomical contexts with different backgrounds. This study proposes an end-to-end deep neural network to automatically characterize prostate cancer lesions on PSMA imaging. A 68Ga-PSMA-11 PET/CT image dataset comprising 71 patients with metastatic prostate cancer was collected from three medical centres for training and evaluating the proposed network. As proof of concept, we focused on the detection of bone and lymph node lesions in the pelvic area suggestive of prostate cancer metastases. Preliminary tests on the pelvic area confirm the potential of deep learning methods. Increasing the amount of training data may further enhance the performance of the proposed method.


Subjects
Positron Emission Tomography Computed Tomography; Automation, Laboratory; Edetic Acid; Gallium Isotopes; Gallium Radioisotopes; Humans; Male; Membrane Glycoproteins; Neural Networks, Computer; Organometallic Compounds; Prostatic Neoplasms
13.
Contrast Media Mol Imaging ; 2018: 2391925, 2018.
Article in English | MEDLINE | ID: mdl-29531504

ABSTRACT

The identification of bone lesions is crucial in the diagnostic assessment of multiple myeloma (MM). 68Ga-Pentixafor PET/CT can capture the abnormal molecular expression of CXCR4 in addition to anatomical changes. However, whole-body detection of dozens of lesions on hybrid imaging is tedious and error-prone, and it is even more difficult to identify lesions with large heterogeneity. This study employed deep learning methods to automatically combine characteristics of PET and CT for whole-body MM bone lesion detection in 3D. Two convolutional neural networks (CNNs), V-Net and W-Net, were adopted to segment and detect the lesions. The feasibility of deep learning for lesion detection on 68Ga-Pentixafor PET/CT was first verified on digital phantoms generated using realistic PET simulation methods. The proposed methods were then evaluated on real 68Ga-Pentixafor PET/CT scans of MM patients. Preliminary results showed that the deep learning method can leverage multimodal information for spatial feature representation, and that W-Net obtained the best results for segmentation and lesion detection. It also outperformed traditional machine learning methods such as the random forest classifier (RF), k-nearest neighbors (k-NN), and support vector machine (SVM). This proof-of-concept study encourages further development of deep learning approaches for MM lesion detection in population studies.


Subjects
Coordination Complexes/pharmacokinetics; Deep Learning; Multiple Myeloma/diagnostic imaging; Peptides, Cyclic/pharmacokinetics; Positron Emission Tomography Computed Tomography/methods; Bone Neoplasms/diagnostic imaging; Gallium Radioisotopes; Humans; Multiple Myeloma/complications; Neural Networks, Computer; Phantoms, Imaging; Receptors, CXCR4/analysis; Whole Body Imaging