Results 1-20 of 36

1.
Sensors (Basel) ; 22(19)2022 Oct 04.
Article in English | MEDLINE | ID: mdl-36236632

ABSTRACT

Light Detection and Ranging (LiDAR) systems are novel sensors that provide robust distance and reflection-strength measurements via actively pulsed laser beams. They have significant advantages over visual cameras in that their active depth and intensity measurements are robust to ambient illumination. However, intensity measurements still receive limited attention, since the output intensity maps of LiDAR sensors differ from those of conventional cameras and are too sparse. In this work, we propose exploiting the information from both intensity and depth measurements simultaneously to complete the LiDAR intensity maps. With the completed intensity maps, mature computer vision techniques can work well on LiDAR data without any specific adjustment. We propose an end-to-end convolutional neural network named LiDAR-Net to jointly complete the sparse intensity and depth measurements by exploiting their correlations. For network training, an intensity fusion method is proposed to generate the ground truth. Experimental results indicate that intensity-depth fusion benefits the task and improves performance. We further apply an off-the-shelf object (lane) segmentation algorithm to the completed intensity maps, which delivers performance that is consistently robust to ambient illumination. We believe that the intensity completion method allows LiDAR sensors to cope with a broader range of practical applications.

2.
Neuroimage ; 245: 118709, 2021 12 15.
Article in English | MEDLINE | ID: mdl-34848300

ABSTRACT

BACKGROUND: The ratio of T1-weighted (T1w) and T2-weighted (T2w) magnetic resonance imaging (MRI) images is often used as a proxy measure of cortical myelin. However, the T1w/T2w-ratio is based on signal intensities that are inherently non-quantitative and known to be affected by extrinsic factors. To account for this, a variety of processing methods have been proposed, but a systematic evaluation of their efficacy is lacking. Given the dependence of the T1w/T2w-ratio on scanner hardware and T1w and T2w protocols, it is important to ensure that processing pipelines perform well across different sites. METHODS: We assessed a variety of processing methods for computing cortical T1w/T2w-ratio maps, including correction methods for nonlinear field inhomogeneities, local outliers, and partial volume effects as well as intensity normalisation. These were implemented in 33 processing pipelines which were applied to four test-retest datasets, with a total of 170 pairs of T1w and T2w images acquired on four different MRI scanners. We assessed processing pipelines across datasets in terms of their reproducibility of expected regional distributions of cortical myelin, lateral intensity biases, and test-retest reliability regionally and across the cortex. Regional distributions were compared both qualitatively with histology and quantitatively with two reference datasets, YA-BC and YA-B1+, from the Human Connectome Project. RESULTS: Reproducibility of raw T1w/T2w-ratio distributions was overall high with the exception of one dataset. For this dataset, Spearman rank correlations increased from 0.27 to 0.70 after N3 bias correction relative to the YA-BC reference and from -0.04 to 0.66 after N4ITK bias correction relative to the YA-B1+ reference. Partial volume and outlier corrections had only marginal effects on the reproducibility of T1w/T2w-ratio maps and test-retest reliability.
Before intensity normalisation, we found large coefficients of variation (CVs) and low intraclass correlation coefficients (ICCs), with total whole-cortex CV of 10.13% and whole-cortex ICC of 0.58 for the raw T1w/T2w-ratio. Intensity normalisation with WhiteStripe, RAVEL, and Z-Score improved total whole-cortex CVs to 5.91%, 5.68%, and 5.19% respectively, whereas Z-Score and Least Squares improved whole-cortex ICCs to 0.96 and 0.97 respectively. CONCLUSIONS: In the presence of large intensity nonuniformities, bias field correction is necessary to achieve acceptable correspondence with known distributions of cortical myelin, but it can be detrimental in datasets with less intensity inhomogeneity. Intensity normalisation can improve test-retest reliability and inter-subject comparability. However, both bias field correction and intensity normalisation methods vary greatly in their efficacy and may affect the interpretation of results. The choice of T1w/T2w-ratio processing method must therefore be informed by both scanner and acquisition protocol as well as the given study objective. Our results highlight limitations of the T1w/T2w-ratio, but also suggest concrete ways to enhance its usefulness in future studies.
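The two summary statistics this abstract leans on can be sketched in a few lines. A minimal illustration with toy arrays (not the authors' pipeline; the function names are made up for this sketch):

```python
import numpy as np

def t1w_t2w_ratio(t1w, t2w, eps=1e-6):
    """Voxelwise T1w/T2w ratio, the myelin proxy discussed in the abstract."""
    t1w = np.asarray(t1w, dtype=float)
    t2w = np.asarray(t2w, dtype=float)
    # guard against division by zero in background voxels
    return t1w / np.maximum(t2w, eps)

def coefficient_of_variation(values):
    """Whole-cortex CV (%) across subjects or sessions."""
    values = np.asarray(values, dtype=float)
    return 100.0 * values.std(ddof=1) / values.mean()
```

Applying `coefficient_of_variation` to each region's ratio values across subjects, before and after a candidate normalisation, reproduces the kind of comparison reported above.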


Subjects
Connectome, Image Processing, Computer-Assisted/methods, Magnetic Resonance Imaging/methods, Adult, Datasets as Topic, Female, Humans, Male, Middle Aged, Reproducibility of Results
3.
Neuroimage ; 222: 117229, 2020 11 15.
Article in English | MEDLINE | ID: mdl-32771619

ABSTRACT

BACKGROUND: The lack of standardization of intensity normalization methods and its unknown effect on the quantification output is recognized as a major drawback for the harmonization of brain FDG-PET quantification protocols. The aim of this work is the ground truth-based evaluation of different intensity normalization methods on brain FDG-PET quantification output. METHODS: Realistic FDG-PET images were generated using Monte Carlo simulation from activity and attenuation maps directly derived from 25 healthy subjects (adding theoretical relative hypometabolisms in 6 regions of interest and for 5 hypometabolism levels). Single-subject statistical parametric mapping (SPM) was applied to compare each simulated FDG-PET image with a healthy database after intensity normalization based on reference-region methods such as the brain stem (RRBS), cerebellum (RRC) and the temporal lobe contralateral to the lesion (RRTL), and data-driven methods, such as proportional scaling (PS), histogram-based method (HN) and iterative versions of both methods (iPS and iHN). The performance of these methods was evaluated in terms of the recovery of the introduced theoretical hypometabolic pattern and the appearance of unspecific hypometabolic and hypermetabolic findings. RESULTS: Detected hypometabolic patterns had significantly lower volumes than the introduced hypometabolisms for all intensity normalization methods, particularly for milder reductions in metabolism. Among the intensity normalization methods, RRC and HN provided the largest recovered hypometabolic volumes, while RRBS showed the smallest recovery. In general, data-driven methods outperformed reference-region methods, and among them the iterative methods outperformed the non-iterative ones. Unspecific hypermetabolic volumes were similar for all methods, with the exception of PS, where it became a major limitation (up to 250 cm3) for extended and intense hypometabolism.
On the other hand, unspecific hypometabolism was similar for all methods, and usually solved with appropriate clustering. CONCLUSIONS: Our findings showed that the inappropriate use of intensity normalization methods can introduce considerable bias in the detected hypometabolism and represents a serious concern in terms of false positives. Based on our findings, we recommend the use of histogram-based intensity normalization methods. Reference-region methods performed equivalently to data-driven methods only when the selected reference region was large and stable.
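The two families of intensity normalization compared above differ only in which denominator they use. A hedged sketch of proportional scaling versus reference-region normalization on a 1-D uptake vector (function names and the target mean of 50 are illustrative, not from the paper):

```python
import numpy as np

def proportional_scaling(img, target_mean=50.0):
    """PS: scale the whole image so its global mean hits target_mean."""
    img = np.asarray(img, dtype=float)
    return img * (target_mean / img.mean())

def reference_region_normalization(img, ref_mask):
    """Divide by the mean uptake inside a reference region
    (e.g. brain stem for RRBS, cerebellum for RRC)."""
    img = np.asarray(img, dtype=float)
    ref_mask = np.asarray(ref_mask, dtype=bool)
    return img / img[ref_mask].mean()
```

PS couples every voxel to the global mean, which is why extended hypometabolism biases it; a stable reference region avoids that coupling.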


Subjects
Brain Mapping, Brain/pathology, Image Processing, Computer-Assisted, Positron-Emission Tomography, Aged, Brain Mapping/methods, Computer Simulation, Female, Fluorodeoxyglucose F18/metabolism, Humans, Image Processing, Computer-Assisted/methods, Male, Middle Aged, Positron-Emission Tomography/methods, Radiopharmaceuticals/metabolism, Temporal Lobe/pathology
4.
Neuroimage ; 223: 117242, 2020 12.
Article in English | MEDLINE | ID: mdl-32798678

ABSTRACT

In multisite neuroimaging studies there is often unwanted technical variation across scanners and sites. These "scanner effects" can hinder detection of biological features of interest, produce inconsistent results, and lead to spurious associations. We propose mica (multisite image harmonization by cumulative distribution function alignment), a tool to harmonize images taken on different scanners by identifying and removing within-subject scanner effects. Our goals in the present study were to (1) establish a method that removes scanner effects by leveraging multiple scans collected on the same subject, and, building on this, (2) develop a technique to quantify scanner effects in large multisite studies so these can be reduced as a preprocessing step. We illustrate scanner effects in a brain MRI study in which the same subject was measured twice on seven scanners, and assess our method's performance in a second study in which ten subjects were scanned on two machines. We found that unharmonized images were highly variable across site and scanner type, and our method effectively removed this variability by aligning intensity distributions. We further studied the ability to predict image harmonization results for a scan taken on an existing subject at a new site using cross-validation.
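The core of CDF-alignment harmonization is quantile mapping: transform each scan so its empirical intensity distribution matches a reference. A simplified sketch of that general idea (not the mica implementation itself):

```python
import numpy as np

def cdf_align(source, reference, n_quantiles=101):
    """Map source intensities onto the reference distribution by
    matching empirical quantiles (piecewise-linear CDF alignment)."""
    source = np.asarray(source, dtype=float)
    reference = np.asarray(reference, dtype=float)
    q = np.linspace(0.0, 1.0, n_quantiles)
    src_q = np.quantile(source, q)   # source quantile function
    ref_q = np.quantile(reference, q)  # reference quantile function
    # interpolate each source voxel through ref_CDF^{-1}(src_CDF(x))
    return np.interp(source, src_q, ref_q)
```

After alignment, the scanner-specific shift and scale of the intensity histogram are gone, which is the within-subject scanner effect the abstract targets.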


Subjects
Brain Mapping/methods, Brain/anatomy & histology, Brain/diagnostic imaging, Image Processing, Computer-Assisted/methods, Magnetic Resonance Imaging, Algorithms, Artifacts, Humans, Male, Middle Aged, Reproducibility of Results
5.
Anal Bioanal Chem ; 411(26): 6983-6994, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31463516

ABSTRACT

This study investigated the optimal inter-batch normalization method for gas chromatography/tandem mass spectrometry (GC/MS/MS)-based targeted metabolome analysis of rodent blood samples. The effect of centrifugal concentration on inter-batch variation was also investigated. Six serum samples prepared from a mouse and 2 quality control (QC) samples from pooled mouse serum were assigned to each batch, and the 3 batches were analyzed by GC/MS/MS on different days. The following inter-batch normalization methods were applied to the metabolome data: QC-based methods with quadratic (QUAD)- or cubic spline (CS)-fitting, a total signal intensity (TI)-based method, a median signal intensity (MI)-based method, and an isotope-labeled internal standard (IS)-based method. We revealed that centrifugal concentration was a critical factor causing inter-batch variation. Unexpectedly, neither the QC-based normalization methods nor the IS-based method was able to normalize inter-batch variation, whereas the MI- and TI-based normalization methods were effective. For further validation, plasma samples from 6 disease-model rats and 6 control rats were evenly divided into 3 batches and analyzed as separate batches. Consistent with the results above, the MI- and TI-based methods were able to normalize inter-batch variation. In particular, the data normalized by the TI-based method showed metabolic profiles similar to those obtained from intra-batch analysis. In conclusion, the TI-based normalization method is the most effective for normalizing inter-batch variation in GC/MS/MS-based metabolome analysis.
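TI-based normalization, the method the study recommends, reduces to rescaling each sample so its summed signal matches a cohort-level total. A toy sketch using the cohort median as the target (a plausible choice for illustration; the paper's exact target is not specified here):

```python
def total_intensity_normalize(samples):
    """Scale each sample so its summed signal equals the cohort
    median total. `samples` is a list of {metabolite: intensity}
    dicts, one per injection."""
    totals = sorted(sum(s.values()) for s in samples)
    n = len(totals)
    median_total = (totals[n // 2] if n % 2 else
                    0.5 * (totals[n // 2 - 1] + totals[n // 2]))
    normalized = []
    for s in samples:
        factor = median_total / sum(s.values())
        normalized.append({k: v * factor for k, v in s.items()})
    return normalized
```

An MI-based variant would divide by each sample's median feature intensity instead of its total; the structure is otherwise identical.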


Subjects
Metabolome, Metabolomics/methods, Plasma/metabolism, Serum/metabolism, Animals, Centrifugation/methods, Gas Chromatography-Mass Spectrometry/methods, Male, Mice, Inbred ICR, Quality Control, Rats, Serotonin Syndrome/blood, Serotonin Syndrome/metabolism, Tandem Mass Spectrometry/methods
6.
Neuroimage ; 146: 589-599, 2017 02 01.
Article in English | MEDLINE | ID: mdl-27693611

ABSTRACT

OBJECTIVES: In brain 18F-FDG PET data, intensity normalization is usually applied to control for unwanted factors confounding brain metabolism. However, it can be difficult to determine a proper intensity normalization region as a reference for the identification of abnormal metabolism in diseased brains. In neurodegenerative disorders, differentiating disease-related changes in brain metabolism from age-associated natural changes remains challenging. This study proposes a new data-driven method to identify proper intensity normalization regions in order to improve the separation of age-associated natural changes from disease-related changes in brain metabolism. METHODS: 127 female and 128 male healthy subjects (age: 20 to 79) with brain 18F-FDG PET/CT in the course of a whole-body cancer screening were included. Brain PET images were processed using SPM8 and were parcellated into 116 anatomical regions according to the AAL template. It is assumed that normal brain 18F-FDG metabolism has longitudinal coherency and that this coherency leads to better model fitting. The coefficient of determination R2 was proposed as the coherence coefficient, and the total coherence coefficient (overall fitting quality) was employed as an index to assess proper intensity normalization strategies on single subjects and age-cohort averaged data. Age-associated longitudinal changes of normal subjects were derived using the identified intensity normalization method correspondingly. In addition, 15 subjects with clinically diagnosed Parkinson's disease were assessed to evaluate the clinical potential of the proposed new method. RESULTS: Intensity normalizations by the paracentral lobule and cerebellar tonsil, both regions derived from the new data-driven coherency method, showed significantly better coherence coefficients than other intensity normalization regions, and especially better than the most widely used global mean normalization.
Intensity normalization by the paracentral lobule was the most consistent method across both analysis strategies (subject-based and age-cohort averaging). In addition, the proposed intensity normalization using the paracentral lobule provides significantly better differentiation of disease-related changes from age-associated changes than other intensity normalization methods. CONCLUSION: Proper intensity normalization can enhance the longitudinal coherency of normal brain glucose metabolism. The paracentral lobule, followed by the cerebellar tonsil, is shown to be the most stable intensity normalization region with respect to age-dependent brain metabolism. This may provide the potential to better differentiate disease-related changes from age-related changes in brain metabolism, which is of relevance in the diagnosis of neurodegenerative disorders.
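The coherence coefficient used above is just the R² of a regional-uptake-versus-age fit, evaluated under each candidate normalization. A minimal sketch for one region with a linear model (the authors' fitting model may differ):

```python
import numpy as np

def age_coherence_r2(ages, regional_values):
    """R^2 of a linear fit of normalized regional uptake against age.
    Higher total R^2 across regions means a smoother, more 'coherent'
    age trajectory under the chosen intensity normalization."""
    ages = np.asarray(ages, dtype=float)
    y = np.asarray(regional_values, dtype=float)
    slope, intercept = np.polyfit(ages, y, 1)
    pred = slope * ages + intercept
    ss_res = np.sum((y - pred) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```

Summing this over all 116 AAL regions gives the total coherence coefficient used to rank normalization regions.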


Subjects
Brain Mapping/methods, Brain/metabolism, Fluorodeoxyglucose F18/pharmacokinetics, Positron-Emission Tomography, Adult, Aged, Brain/diagnostic imaging, Female, Fluorodeoxyglucose F18/metabolism, Humans, Image Processing, Computer-Assisted, Male, Middle Aged, Parkinson Disease/diagnostic imaging, Parkinson Disease/metabolism, Signal Processing, Computer-Assisted, Young Adult
7.
Hum Brain Mapp ; 38(7): 3615-3622, 2017 Jul.
Article in English | MEDLINE | ID: mdl-28432780

ABSTRACT

Non-quantitative MRI is prone to inter-subject intensity variation, limiting analyses based on signal intensity levels. Here, we propose a method that fuses non-quantitative routine T1-weighted (T1w), T2w, and T2w fluid-saturated inversion recovery sequences using independent component analysis and validate it on age- and sex-matched healthy controls. The proposed method leads to consistent and independent components with a significantly reduced coefficient of variation across subjects, suggesting potential to serve as automatic intensity normalization and thus to enhance the power of intensity-based statistical analyses. To exemplify this, we show that voxelwise statistical testing on single-subject independent components reveals in particular a widespread sex difference in white matter, which was previously shown using, for example, diffusion tensor imaging but was unobservable in the native MRI contrasts. In conclusion, our study shows that single-subject independent component analysis can be applied to routine sequences, thereby enhancing comparability between subjects. Unlike quantitative MRI, which requires specific sequences during acquisition, our method is applicable to existing MRI data. Hum Brain Mapp 38:3615-3622, 2017. © 2017 Wiley Periodicals, Inc.

8.
Phys Imaging Radiat Oncol ; 30: 100585, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38799810

ABSTRACT

Background and purpose: Magnetic resonance imaging (MRI) scans are highly sensitive to acquisition and reconstruction parameters, which affect feature stability and model generalizability in radiomic research. This work aims to investigate the effect of image pre-processing and harmonization methods on the stability of brain MRI radiomic features and the prediction performance of radiomic models in patients with brain metastases (BMs). Materials and methods: Two T1 contrast-enhanced brain MRI datasets were used in this study. The first contained 25 BM patients with scans at two different time points and was used for feature stability analysis. The effects of gray-level discretization (GLD), intensity normalization (Z-score, Nyul, WhiteStripe, and an in-house method named N-Peaks), and ComBat harmonization on feature stability were investigated, and features with an intraclass correlation coefficient >0.8 were considered stable. The second dataset, containing 64 BM patients, was used for a classification task to investigate the informativeness of stable features and the effects of harmonization methods on radiomic model performance. Results: Applying fixed bin number (FBN) GLD resulted in a higher number of stable features compared to fixed bin size (FBS) discretization (10 ± 5.5 % higher). Harmonization in the feature domain improved stability for non-normalized images and for images normalized with the Z-score and WhiteStripe methods. For the classification task, keeping the stable features resulted in good performance only for images normalized with N-Peaks along with FBS discretization. Conclusions: To develop a robust MRI-based radiomic model, we recommend using an intensity normalization method based on a reference tissue (e.g., N-Peaks) and then using FBS discretization.
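FBN and FBS gray-level discretization, whose contrast drives the stability result above, can be sketched as follows (bin conventions vary across radiomics toolkits; this follows a common IBSI-style formulation and is an assumption, not the paper's exact code):

```python
import numpy as np

def discretize_fbn(roi, n_bins=32):
    """Fixed bin number: always n_bins bins spanning the ROI's own
    intensity range, so bin width adapts per image."""
    roi = np.asarray(roi, dtype=float)
    lo, hi = roi.min(), roi.max()  # assumes hi > lo
    bins = np.floor(n_bins * (roi - lo) / (hi - lo)).astype(int) + 1
    return np.clip(bins, 1, n_bins)

def discretize_fbs(roi, bin_width=25.0, min_value=0.0):
    """Fixed bin size: bins of constant width from a fixed minimum,
    so the number of bins adapts per image."""
    roi = np.asarray(roi, dtype=float)
    return np.floor((roi - min_value) / bin_width).astype(int) + 1
```

FBS only gives comparable bins across scans when intensities are already on a common scale, which is why the abstract pairs it with reference-tissue normalization.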

9.
Curr Top Dev Biol ; 153: 61-93, 2023.
Article in English | MEDLINE | ID: mdl-36967202

ABSTRACT

WNT/CTNNB1 signaling plays a critical role in the development of all multicellular animals. Here, we include both the embryonic stages, during which tissue morphogenesis takes place, and the postnatal stages of development, during which tissue homeostasis occurs. Thus, embryonic development concerns lineage development and cell fate specification, while postnatal development involves tissue maintenance and regeneration. Multiple tools are available to researchers who want to investigate, and ideally visualize, the dynamic and pleiotropic involvement of WNT/CTNNB1 signaling in these processes. Here, we discuss and evaluate the decisions that researchers need to make in identifying the experimental system and appropriate tools for the specific question they want to address, covering different types of WNT/CTNNB1 reporters in cells and mice. At a molecular level, advanced quantitative imaging techniques can provide spatio-temporal information that cannot be provided by traditional biochemical assays. We therefore also highlight some recent studies to show their potential in deciphering the complex and dynamic mechanisms that drive WNT/CTNNB1 signaling.


Subjects
Wnt Signaling Pathway, beta Catenin, Animals, Mice, beta Catenin/metabolism, Cell Differentiation, Mammals/metabolism
10.
Phys Med Biol ; 67(24)2022 12 13.
Article in English | MEDLINE | ID: mdl-36223780

ABSTRACT

Objective. Multi-parametric magnetic resonance imaging (mpMRI) has become an important tool for the detection of prostate cancer in the past two decades. Despite the high sensitivity of MRI for tissue characterization, it often suffers from a lack of specificity. Several well-established pre-processing tools are publicly available for improving image quality and removing both intra- and inter-patient variability in order to increase the diagnostic accuracy of MRI. To date, most of these pre-processing tools have largely been assessed individually. In this study we present a systematic evaluation of a multi-step mpMRI pre-processing pipeline to automate tumor localization within the prostate using a previously trained model. Approach. The study was conducted on 31 treatment-naïve prostate cancer patients with a PI-RADS-v2 compliant mpMRI examination. Multiple methods were compared for each pre-processing step: (1) bias field correction, (2) normalization, and (3) deformable multi-modal registration. Optimal parameter values were estimated for each step on the basis of relevant individual metrics. Tumor localization was then carried out via a model-based approach that takes both mpMRI and prior clinical knowledge features as input. A sequential optimization approach was adopted for determining the optimal parameters and techniques in each step of the pipeline. Main results. The application of bias field correction alone increased the accuracy of tumor localization (area under the curve (AUC) = 0.77; p-value = 0.004) over unprocessed data (AUC = 0.74). Adding normalization to the pre-processing pipeline further improved the diagnostic accuracy of the model to an AUC of 0.85 (p-value = 0.00012). Multi-modal registration of apparent diffusion coefficient images to T2-weighted images improved the alignment of tumor locations in all but one patient, resulting in a slight decrease in accuracy (AUC = 0.84; p-value = 0.30). Significance.
Overall, our findings suggest that the combined effect of multiple pre-processing steps with optimal values has the ability to improve the quantitative classification of prostate cancer using mpMRI. Clinical trials: NCT03378856 and NCT03367702.


Subjects
Multiparametric Magnetic Resonance Imaging, Prostatic Neoplasms, Male, Humans, Prostatic Neoplasms/diagnostic imaging, Prostatic Neoplasms/pathology, Magnetic Resonance Imaging/methods, Multiparametric Magnetic Resonance Imaging/methods, Prostate/pathology, Probability, Retrospective Studies
11.
Diagnostics (Basel) ; 11(5)2021 Apr 30.
Article in English | MEDLINE | ID: mdl-33946436

ABSTRACT

This study aimed to facilitate pseudo-CT synthesis from MRI by normalizing the MRI intensity of the same tissue type to a similar intensity level. MRI intensity normalization was conducted by dividing the MRI by a shading map, which is a smoothed ratio image between the MRI and a three-intensity mask. For pseudo-CT synthesis from MRI, a conversion model based on a three-layer convolutional neural network was trained and validated. Before MRI intensity normalization, the mean value ± standard deviation of fat tissue in 0.35 T chest MRI was 297 ± 73 (coefficient of variation (CV) = 24.58%), and 533 ± 91 (CV = 17.07%) in 1.5 T abdominal MRI. The corresponding results were 149 ± 32 (CV = 21.48%) and 148 ± 28 (CV = 18.92%) after intensity normalization. With regard to pseudo-CT synthesis from MRI, the differences in mean values between pseudo-CT and real CT were 3, 15, and 12 HU for soft tissue, fat, and lung/air in 0.35 T chest imaging, respectively, while the corresponding results were 3, 14, and 15 HU in 1.5 T abdominal imaging. Overall, the proposed workflow is reliable for pseudo-CT synthesis from MRI and is more practicable in routine clinical practice compared with deep learning methods, which demand a high level of resources for building a conversion model.
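The normalization step described above amounts to dividing the MRI by a smoothed ratio between the image and a piecewise-constant tissue template (the "three-intensity mask"). A 1-D sketch with a moving-average smoother standing in for whatever smoothing the authors used (an assumption):

```python
import numpy as np

def shading_map_normalize(mri, tissue_template, kernel=5):
    """Divide an MRI profile by a smoothed MRI/template ratio.
    `tissue_template` holds one nominal intensity per tissue class;
    the smoothed ratio is the 'shading map' of the abstract."""
    mri = np.asarray(mri, dtype=float)
    tissue_template = np.asarray(tissue_template, dtype=float)
    ratio = mri / tissue_template
    # simple moving-average smoothing with edge padding (illustrative)
    pad = kernel // 2
    padded = np.pad(ratio, pad, mode="edge")
    shading = np.convolve(padded, np.ones(kernel) / kernel, mode="valid")
    return mri / shading
```

When the scanner's shading varies slowly, dividing by the smoothed ratio removes it while preserving the tissue contrast encoded in the template.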

12.
Cancers (Basel) ; 13(12)2021 Jun 15.
Article in English | MEDLINE | ID: mdl-34203896

ABSTRACT

In brain MRI radiomics studies, the non-biological variations introduced by different image acquisition settings, namely scanner effects, affect the reliability and reproducibility of the radiomics results. This paper assesses how the preprocessing methods (including N4 bias field correction and image resampling) and the harmonization methods (either the six intensity normalization methods working on brain MRI images or the ComBat method working on radiomic features) help to remove the scanner effects and improve the radiomic feature reproducibility in brain MRI radiomics. The analyses were based on in vitro datasets (homogeneous and heterogeneous phantom data) and in vivo datasets (brain MRI images collected from healthy volunteers and clinical patients with brain tumors). The results show that the ComBat method is essential and vital to remove scanner effects in brain MRI radiomic studies. Moreover, the intensity normalization methods, while not able to remove scanner effects at the radiomic feature level, still yield more comparable MRI images and improve the robustness of the harmonized features to the choice among ComBat implementations.
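ComBat removes additive and multiplicative scanner effects per feature. A stripped-down location/scale version, without the empirical-Bayes shrinkage or covariate preservation of the full method (so only the spirit of ComBat; it also assumes nonzero within-batch variance):

```python
import numpy as np

def combat_like_harmonize(features, batches):
    """Per feature: remove each batch's mean, rescale to the pooled
    standard deviation, and restore the grand mean. `features` is a
    subjects x features array, `batches` a per-subject batch label."""
    X = np.asarray(features, dtype=float)
    batches = np.asarray(batches)
    grand_mean = X.mean(axis=0)
    pooled_sd = X.std(axis=0, ddof=1)
    out = np.empty_like(X)
    for b in np.unique(batches):
        idx = batches == b
        mu = X[idx].mean(axis=0)
        sd = X[idx].std(axis=0, ddof=1)
        out[idx] = (X[idx] - mu) / sd * pooled_sd + grand_mean
    return out
```

The full ComBat additionally shrinks the per-batch estimates toward a common prior and protects biological covariates, which matters with small batches.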

13.
Comput Methods Programs Biomed ; 208: 106225, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34198016

ABSTRACT

OBJECTIVES: Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) is widely used to complement ultrasound examinations and x-ray mammography for early detection and diagnosis of breast cancer. However, images generated by various MRI scanners (e.g., GE Healthcare and Siemens) differ both in intensity and noise distribution, preventing algorithms trained on MRIs from one scanner from generalizing to data from other scanners. In this work, we propose a method to solve this problem by normalizing images between various scanners. METHODS: MRI normalization is challenging because it requires normalizing intensity values and mapping noise distributions between scanners. We utilize a cycle-consistent generative adversarial network to learn a bidirectional mapping and perform normalization between MRIs produced by GE Healthcare and Siemens scanners in an unpaired setting. Initial experiments demonstrate that the traditional CycleGAN architecture struggles to preserve the anatomical structures of the breast during normalization. Thus, we propose two technical innovations in order to preserve both the shape of the breast as well as the tissue structures within the breast. First, we incorporate mutual information loss during training in order to ensure anatomical consistency. Second, we propose a modified discriminator architecture that utilizes a smaller field-of-view to ensure the preservation of finer details in the breast tissue. RESULTS: Quantitative and qualitative evaluations show that the second innovation consistently preserves the breast shape and tissue structures while also performing the proper intensity normalization and noise distribution mapping. CONCLUSION: Our results demonstrate that the proposed model can successfully learn a bidirectional mapping and perform normalization between MRIs produced by different vendors, potentially enabling improved diagnosis and detection of breast cancer.
All the data used in this study are publicly available at https://wiki.cancerimagingarchive.net/pages/viewpage.action?pageId=70226903.
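The anatomical-consistency loss above is built on mutual information between the input and the normalized output. A histogram-based MI estimate (the differentiable loss used during GAN training would be implemented differently; this sketch shows only the underlying quantity):

```python
import numpy as np

def mutual_information(img_a, img_b, bins=16):
    """Histogram-based mutual information between two images,
    in nats. High MI means the normalized image still carries the
    anatomical content of the input."""
    a = np.asarray(img_a, dtype=float).ravel()
    b = np.asarray(img_b, dtype=float).ravel()
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    pxy = joint / joint.sum()           # joint distribution
    px = pxy.sum(axis=1, keepdims=True)  # marginal of a
    py = pxy.sum(axis=0, keepdims=True)  # marginal of b
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

MI of an image with itself is maximal, and with an uninformative (constant) image it is zero, which is exactly the behavior a consistency penalty exploits.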


Subjects
Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Algorithms, Humans, Mammography, X-Rays
14.
Front Med (Lausanne) ; 8: 744157, 2021.
Article in English | MEDLINE | ID: mdl-34746179

ABSTRACT

Introduction: [18F]-FDG PET is a widely used imaging modality that visualizes cellular glucose uptake and provides functional information on the metabolic state of different tissues in vivo. Various quantification methods can be used to evaluate glucose metabolism in the brain, including the cerebral metabolic rate of glucose (CMRglc) and standard uptake values (SUVs). Especially in the brain, these (semi-)quantitative measures can be affected by several physiological factors, such as blood glucose level, age, gender, and stress. Next to this inter- and intra-subject variability, the use of different PET acquisition protocols across studies has created a need for the standardization and harmonization of brain PET evaluation. In this study we present a framework for statistical voxel-based analysis of glucose uptake in the rat brain using histogram-based intensity normalization. Methods: [18F]-FDG PET images of 28 normal rat brains were coregistered and voxel-wisely averaged. Ratio images were generated by voxel-wisely dividing each of these images with the group average. The most prevalent value in the ratio image was used as normalization factor. The normalized PET images were voxel-wisely averaged to generate a normal rat brain atlas. The variability of voxel intensities across the normalized PET images was compared to images that were either normalized by whole brain normalization, or not normalized. To illustrate the added value of this normal rat brain atlas, 9 animals with a striatal hemorrhagic lesion and 9 control animals were intravenously injected with [18F]-FDG and the PET images of these animals were voxel-wisely compared to the normal atlas by group- and individual analyses. Results: The average coefficient of variation of the voxel intensities in the brain across normal [18F]-FDG PET images was 6.7% for the histogram-based normalized images, 11.6% for whole brain normalized images, and 31.2% when no normalization was applied. 
Statistical voxel-based analysis, using the normal template, indicated regions of significantly decreased glucose uptake at the site of the ICH lesion in the ICH animals, but not in control animals. Conclusion: In summary, histogram-based intensity normalization of [18F]-FDG uptake in the brain is a suitable data-driven approach for standardized voxel-based comparison of brain PET images.
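The histogram-based factor described above is the most prevalent value of the voxelwise image/template ratio. A rough sketch (bin count and the midpoint mode estimate are illustrative choices, not the authors' exact parameters):

```python
import numpy as np

def histogram_mode_normalize(img, template, bins=100):
    """Normalize an image by the mode of its voxelwise ratio to a
    group-average template: divide, histogram the ratios, and use
    the most populated bin's midpoint as the scaling factor."""
    img = np.asarray(img, dtype=float)
    template = np.asarray(template, dtype=float)
    ratio = (img / template).ravel()
    counts, edges = np.histogram(ratio, bins=bins)
    k = np.argmax(counts)
    mode = 0.5 * (edges[k] + edges[k + 1])
    return img / mode
```

Unlike whole-brain mean normalization, the mode is insensitive to a focal lesion that shifts the mean, which is why the lesioned animals above were still comparable to the normal atlas.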

15.
EJNMMI Res ; 11(1): 31, 2021 Mar 24.
Article in English | MEDLINE | ID: mdl-33761019

ABSTRACT

BACKGROUND: The objective of the study is to define the most appropriate region for intensity normalization in brain 18FDG PET semi-quantitative analysis. The best option could be based on previous absolute quantification studies, which showed that the metabolic changes related to ageing affect the quasi-totality of brain regions in healthy subjects. Consequently, brain metabolic changes related to ageing were evaluated in two populations of healthy controls who underwent conventional (n = 56) or digital (n = 78) 18FDG PET/CT. The median correlation coefficients between age and the metabolism of each of 120 atlas brain regions were reported for 120 distinct intensity normalizations (according to the 120 regions). SPM linear regression analyses with age were performed on the most significant normalizations (FWE, p < 0.05). RESULTS: The cerebellum and pons were the only two regions showing median coefficients of correlation with age less than -0.5. With SPM, the intensity normalization by the pons provided at least 1.7- and 2.5-fold more significant cluster volumes than other normalizations for conventional and digital PET, respectively. CONCLUSIONS: The pons is the most appropriate area for brain 18FDG PET intensity normalization for examining metabolic changes through ageing.

16.
Med Image Anal ; 74: 102191, 2021 12.
Article in English | MEDLINE | ID: mdl-34509168

ABSTRACT

Image normalization is a building block in medical image analysis. Conventional approaches are customarily employed on a per-dataset basis. This strategy, however, prevents current normalization algorithms from fully exploiting the complex joint information available across multiple datasets. Consequently, ignoring such joint information has a direct impact on the processing of segmentation algorithms. This paper proposes to revisit the conventional image normalization approach by, instead, learning a common normalizing function across multiple datasets. Jointly normalizing multiple datasets is shown to yield consistent normalized images as well as improved image segmentation when intensity shifts are large. To do so, a fully automated adversarial and task-driven normalization approach is employed, as it facilitates the training of realistic and interpretable images while keeping performance on par with the state-of-the-art. The adversarial training of our network aims at finding the optimal transfer function to jointly improve both segmentation accuracy and the generation of realistic images. We have evaluated the performance of our normalizer on both infant and adult brain images from the iSEG, MRBrainS and ABIDE datasets. The results indicate that our contribution does provide improved realism to the normalized images, while retaining a segmentation accuracy on par with the state-of-the-art learnable normalization approaches.


Subjects
Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Adult; Algorithms; Humans
17.
Neuroimage ; 50(2): 516-23, 2010 Apr 01.
Article in English | MEDLINE | ID: mdl-20034579

ABSTRACT

We describe an improved method of measuring brain atrophy rates from serial MRI for multi-site imaging studies of Alzheimer's disease (AD). The method (referred to as KN-BSI) improves an existing brain atrophy measurement technique, the boundary shift integral (classic-BSI), by performing tissue-specific intensity normalization and parameter selection. We applied KN-BSI to measure brain atrophy rates of 200 normal and 141 AD subjects using baseline and 1-year MRI scans downloaded from the Alzheimer's Disease Neuroimaging Initiative database. Baseline and repeat images were reviewed as pairs by expert raters and given quality scores. Including all image pairs, regardless of quality score, mean KN-BSI atrophy rates were 0.09% higher (95% CI 0.03% to 0.16%, p=0.007) than classic-BSI rates in controls and 0.07% higher (-0.01% to 0.16%, p=0.07) in AD subjects. The SD of the KN-BSI rates was 22% lower (15% to 29%, p<0.001) in controls and 13% lower (6% to 20%, p=0.001) in AD subjects, compared to classic-BSI. Using these results, the estimated sample size (needed per treatment arm) for a hypothetical trial of a treatment for AD (80% power, 5% significance, to detect a 25% reduction in atrophy rate) would be reduced from 120 to 81 (a 32% reduction, 95% CI 18% to 45%, p<0.001) when using KN-BSI instead of classic-BSI. We conclude that KN-BSI offers more robust brain atrophy measurement than classic-BSI and substantially reduces the sample sizes needed in clinical trials.
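The core idea of normalizing serial scans before a boundary shift integral can be illustrated with a deliberately simplified sketch: linearly rescale the repeat scan so its mean intensity within a tissue mask matches the baseline. The published KN-BSI method derives tissue intensities with k-means clustering; this single-mask version is only an assumption-laden stand-in:

```python
import numpy as np

def tissue_matched_scale(baseline, repeat, tissue_mask):
    """Rescale `repeat` so its mean intensity within `tissue_mask`
    matches `baseline`. A simplified stand-in for tissue-specific
    intensity normalization before computing a boundary shift
    integral (not the paper's actual k-means-based procedure)."""
    scale = baseline[tissue_mask].mean() / repeat[tissue_mask].mean()
    return repeat * scale

# Toy serial scans: the repeat scan is globally dimmer by a factor of 2
baseline = np.array([4.0, 4.0, 8.0, 8.0])
repeat = np.array([2.0, 2.0, 4.0, 4.0])
mask = np.array([True, True, True, True])
matched = tissue_matched_scale(baseline, repeat, mask)
```

With a purely multiplicative intensity difference, the matched repeat scan reproduces the baseline exactly, so apparent boundary shifts then reflect anatomy rather than scanner gain.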


Subjects
Alzheimer Disease/pathology; Brain/pathology; Image Interpretation, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Aged; Atrophy/pathology; Humans
18.
Med Phys ; 47(4): 1680-1691, 2020 Apr.
Article in English | MEDLINE | ID: mdl-31971614

ABSTRACT

PURPOSE: Despite its increasing application, radiomics has not yet demonstrated solid reliability, owing to the difficulty of replicating analyses. The extraction of radiomic features from clinical MRI (T1w/T2w) presents even more challenges because of the absence of well-defined units (e.g., Hounsfield units). Some preprocessing steps are required before the estimation of radiomic features, one of which is intensity normalization, which can be performed using different methods. The aim of this work was to evaluate the effect of three different normalization techniques, applied to T2w-MRI images of the pelvic region, on the reproducibility of radiomic features. METHODS: T2w-MRI acquired before (MRI1) and 12 months after radiotherapy (MRI2) from 14 patients treated for prostate cancer were considered. Four different conditions were analyzed: (a) the original MRI (No_Norm); (b) MRI normalized by the mean image value (Norm_Mean); (c) MRI normalized by the mean value of the urine in the bladder (Norm_ROI); (d) MRI normalized by the histogram-matching method (Norm_HM). Ninety-one radiomic features were extracted from three organs of interest (prostate, internal obturator muscles and penile bulb) at both time-points on each image, discretized using a fixed bin-width approach, and the difference between the two time-points was calculated (Δfeature). To estimate the effect of the normalization methods on the reproducibility of radiomic features, the ICC was calculated in three analyses: (a) considering the features extracted on MRI2 in the four conditions together and considering the influence of each method separately, with respect to No_Norm; (b) considering the features extracted on MRI2 in the four conditions with respect to the inter-observer variability in region-of-interest (ROI) contouring, also considering the effect of the discretization approach; (c) considering Δfeature to evaluate whether some indices recover consistency when differences are calculated.
RESULTS: Nearly 60% of the features showed poor reproducibility (ICC < 0.5) on MRI2, and the method that most affected feature reliability was Norm_ROI (average ICC of 0.45). The other two methods were similar, except for first-order features, where Norm_HM outperformed Norm_Mean (average ICC = 0.33 and 0.76 for Norm_Mean and Norm_HM, respectively). In the inter-observer setting, the number of reproducible features varied across the three structures, being higher in the prostate than in the penile bulb and the obturators. The analysis of Δfeature highlighted that more than 60% of the features were not consistent with respect to the normalization method, and confirmed the high reproducibility of the features between Norm_Mean and Norm_HM, whereas Norm_ROI was the least reproducible method. CONCLUSIONS: The normalization process impacts the reproducibility of radiomic features, both in terms of changes in the image information content and in the inter-observer setting. Among the considered methods, Norm_Mean and Norm_HM seem to provide the most reproducible features with respect to the original image and also between themselves, whereas Norm_ROI generates less reproducible features. Only a very small subset of features remained reproducible and independent in every tested condition, regardless of the ROI and the adopted algorithm: skewness or kurtosis, correlation, and one among Imc2, Idmn and Idn from the GLCM group.
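The three normalization conditions compared above can be sketched in a few lines of numpy. The histogram-matching variant below uses simple quantile mapping; the paper does not specify this exact implementation, and all names and data are illustrative:

```python
import numpy as np

def norm_mean(img):
    """Norm_Mean: divide the image by its own mean intensity."""
    return img / img.mean()

def norm_roi(img, roi_mask):
    """Norm_ROI: divide by the mean intensity of a reference ROI
    (urine in the bladder in the paper; any boolean mask here)."""
    return img / img[roi_mask].mean()

def norm_hist_match(img, template, n_quantiles=256):
    """Norm_HM: quantile-based histogram matching of `img` onto the
    intensity distribution of `template` (one simple member of the
    histogram-matching family, stated here as an assumption)."""
    q = np.linspace(0, 1, n_quantiles)
    return np.interp(img, np.quantile(img, q), np.quantile(template, q))

rng = np.random.default_rng(0)
img = rng.normal(10.0, 2.0, 10_000)        # synthetic "MRI2" intensities
template = rng.normal(0.0, 1.0, 10_000)    # synthetic reference histogram
matched = norm_hist_match(img, template)
```

Norm_Mean and Norm_HM depend on the whole-image distribution, while Norm_ROI hinges on a small contoured region, which is one plausible reason it was the least reproducible under inter-observer contouring variability.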


Subjects
Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging
19.
J Mass Spectrom ; 56(4): e4589, 2020 Jun 11.
Article in English | MEDLINE | ID: mdl-32639693

ABSTRACT

Metabolomics study of a biological system often involves the analysis of many comparative samples over a period of several days or weeks. Long-term sample runs can encounter unexpected instrument drifts, such as small leaks in liquid chromatography-mass spectrometry (LC-MS), degradation of column performance, and changes in MS signal intensity. A robust analytical method should ideally tolerate these instrumental drifts as much as possible. In this work, we report a case study demonstrating the high drift tolerance of the differential chemical isotope labeling (CIL) LC-MS method for quantitative metabolome analysis. In a study using a rat model to examine metabolome changes during rheumatoid arthritis (RA) disease development and treatment, more than 468 samples were analyzed over a period of 15 days in three batches. During the sample runs, a small leak in the LC was discovered after a batch of analyses was completed. Reanalysis of these samples was not an option, as sample amounts were limited. To overcome the problem caused by the small leak, we applied a retention time correction to the LC-MS data to align peak pairs from runs with different degrees of leak, followed by peak-ratio calculation and analysis. Herein, we illustrate that using 12C-/13C-peak pair intensity ratios in CIL LC-MS as a measurement of concentration changes in different samples could tolerate the signal drifts, while using the absolute intensity values (i.e., the 12C-peak, as in conventional LC-MS) was not as reliable. We hope that this case study and the method of overcoming small-leak-caused signal drifts can be helpful to others who encounter this kind of situation in long-term sample runs.
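The reason the peak-pair ratio tolerates drift follows from simple algebra: a run-specific drift factor that scales both channels of a pair equally cancels in the ratio, while the absolute 12C intensity does not. A toy numeric sketch (all values made up):

```python
import numpy as np

# Hypothetical "true" peak areas for three metabolite peak pairs
true_12c = np.array([100.0, 250.0, 40.0])   # light-labeled sample peaks
true_13c = np.array([120.0, 240.0, 50.0])   # heavy-labeled reference peaks

# A multiplicative signal drift, e.g. from a small LC leak, scales both
# channels of each pair by the same factor in a given run
drift = 0.6
obs_12c = drift * true_12c
obs_13c = drift * true_13c

# The 12C/13C ratio is unchanged: (d*I12)/(d*I13) = I12/I13
ratios = obs_12c / obs_13c
```

The absolute 12C intensities are shifted by 40% in this toy run, yet every ratio is identical to its drift-free value, which is the behavior the case study reports for CIL LC-MS quantification.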

20.
Article in English | MEDLINE | ID: mdl-31551645

ABSTRACT

Image synthesis learns a transformation from the intensity features of an input image to yield a different tissue contrast in the output image. This process has been shown to have application in many medical image analysis tasks, including imputation, registration, and segmentation. To carry out synthesis, the intensities of the input images are typically scaled, i.e., normalized, both in training, to learn the transformation, and in testing, when applying the transformation, but it is not presently known what type of input scaling is optimal. In this paper, we consider seven different intensity normalization algorithms and three different synthesis methods to evaluate the impact of normalization. Our experiments demonstrate that intensity normalization as a preprocessing step improves the synthesis results across all investigated synthesis algorithms. Furthermore, we show evidence suggesting that intensity normalization is vital for successful deep learning-based MR image synthesis.
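One common member of the family of intensity normalization algorithms used before MR synthesis is z-score normalization; whether it is among the seven compared in this paper is not stated, so the sketch below is only an assumed, minimal example:

```python
import numpy as np

def zscore_normalize(img, mask=None):
    """Z-score intensity normalization: subtract the (optionally masked)
    mean and divide by the standard deviation, mapping arbitrary scanner
    units onto a comparable scale before training or inference."""
    vals = img[mask] if mask is not None else img
    return (img - vals.mean()) / vals.std()

rng = np.random.default_rng(42)
scan = rng.normal(500.0, 80.0, size=(16, 16, 16))  # arbitrary MR-like units
normed = zscore_normalize(scan)
```

Applying the same scaling rule at training and test time is the point the abstract stresses: the learned transformation only transfers if its inputs live on a consistent intensity scale.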
