Results 1 - 20 of 90
1.
Anal Chim Acta ; 1287: 341808, 2024 Jan 25.
Article in English | MEDLINE | ID: mdl-38182331

ABSTRACT

BACKGROUND: Low-resolution nuclear magnetic resonance (LR-NMR) is a common technique to identify the constituents of complex materials (such as food and biological samples). The output of an LR-NMR experiment is a relaxation signal that can be modelled as a convolution of an unknown density of relaxation times with decaying exponential functions, plus random Gaussian noise. The challenge is to estimate that density, a severely ill-posed problem. A complication is that non-negativity constraints need to be imposed in order to obtain valid results. SIGNIFICANCE AND NOVELTY: We present a smooth deconvolution model for the solution of the inverse estimation problem in LR-NMR relaxometry experiments. We model the logarithm of the relaxation time density as a smooth function using (adaptive) P-splines while matching the expected residual magnetisations with the observed ones. The roughness penalty removes the singularity of the deconvolution problem, and the estimated density is positive by design (since we model its logarithm). The model is non-linear, but it can be linearized easily. The penalty has to be tuned for each given sample. We describe an efficient EM-type algorithm to optimize the smoothing parameter(s). RESULTS: We analyze a set of food samples (potato tubers). The relaxation spectra extracted using our method are similar to those reported in previous experiments but present sharper peaks. Using penalized signal regression, we are able to accurately predict the dry matter content of the samples using the estimated spectra as covariates.
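The idea can be illustrated with a small sketch (hypothetical code, not the authors' implementation): discretize the relaxation-time axis, model the log-density with a second-order roughness penalty, and fit by damped Gauss-Newton steps. The grid, penalty weight `lam`, and step clipping are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch: fit the logarithm g of a relaxation-time density on a
# grid T, so the density f = exp(g) is positive by design, with a second-order
# difference penalty enforcing smoothness.

def fit_log_density(t, y, T, lam=0.1, n_iter=50):
    K = np.exp(-np.outer(t, 1.0 / T))         # decaying-exponential kernel
    D = np.diff(np.eye(len(T)), n=2, axis=0)  # second-order differences
    P = lam * D.T @ D                         # roughness penalty
    g = np.zeros(len(T))                      # start from a flat log-density
    for _ in range(n_iter):
        f = np.exp(g)
        mu = K @ f                            # expected residual magnetisation
        J = K * f                             # Jacobian of mu with respect to g
        step = np.linalg.solve(J.T @ J + P, J.T @ (y - mu) - P @ g)
        g += np.clip(step, -1.0, 1.0)         # damped Gauss-Newton update
    return np.exp(g)
```

The penalty matrix makes the linearized system invertible even though the deconvolution itself is ill-posed, and positivity needs no explicit constraint because the density is the exponential of the fitted function.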

2.
iScience ; 26(1): 105760, 2023 Jan 20.
Article in English | MEDLINE | ID: mdl-36590163

ABSTRACT

Spatial transcriptomics is a novel technique that provides RNA-expression data with tissue-contextual annotations. Quality assessments of such techniques using end-user generated data are often lacking. Here, we evaluated data from the NanoString GeoMx Digital Spatial Profiling (DSP) platform and standard processing pipelines. We queried 72 ROIs from 12 glioma samples, performed replicate experiments of eight samples for validation, and evaluated five external datasets. The data consistently showed vastly different signal intensities between samples and experimental conditions, resulting in biased analyses. We evaluated the performance of alternative normalization strategies and show that quantile normalization can adequately address the technical issues related to the differences in data distributions. Compared to bulk RNA sequencing, NanoString DSP data show a limited dynamic range which underestimates differences between conditions. Weighted gene co-expression network analysis allowed extraction of gene signatures associated with tissue phenotypes from ROI annotations. NanoString GeoMx DSP data therefore require alternative normalization methods and analysis pipelines.
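Quantile normalization itself is straightforward to sketch. The hypothetical helper below (genes in rows, samples/ROIs in columns) maps each column onto the mean of the sorted expression profiles, so that all columns share an identical distribution regardless of their original signal intensities:

```python
import numpy as np

def quantile_normalize(X):
    """Quantile-normalize the columns of X (genes x samples).

    Each sample's values are replaced, rank for rank, by the average of
    the sorted profiles across all samples.
    """
    ranks = np.argsort(np.argsort(X, axis=0), axis=0)    # per-column ranks
    mean_profile = np.mean(np.sort(X, axis=0), axis=1)   # average distribution
    return mean_profile[ranks]
```

This simple rank-based version ignores ties, which is adequate for continuous expression values; production pipelines typically average tied ranks.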

3.
Biometrics ; 79(3): 1972-1985, 2023 09.
Article in English | MEDLINE | ID: mdl-36062852

ABSTRACT

The receptive field (RF) of a visual neuron is the region of visual space in which stimuli elicit neuronal responses. It can be mapped using different techniques that allow inferring its spatial and temporal properties. Raw RF maps (RFmaps) are usually noisy, making it difficult to obtain and study important features of the RF. A possible solution is to smooth them using P-splines. Yet, raw RFmaps are characterized by sharp transitions in both space and time. Their analysis thus calls for spatiotemporal adaptive P-spline models, where smoothness can be locally adapted to the data. However, the literature lacks proposals for adaptive P-splines in more than two dimensions. Furthermore, the extra flexibility afforded by adaptive P-spline models is obtained at the cost of a high computational burden, especially in a multidimensional setting. To fill these gaps, this work presents a novel anisotropic locally adaptive P-spline model in two (e.g., space) and three (space and time) dimensions. Estimation is based on the recently proposed SOP (Separation of Overlapping Precision matrices) method, which provides the speed we require. Besides the spatiotemporal analysis of the neuronal activity data that motivated this work, the practical performance of the proposal is evaluated through simulations, and comparisons with alternative methods are reported.
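The anisotropic idea can be conveyed with a plain (non-adaptive) two-dimensional Whittaker-type smoother that assigns a separate penalty weight to each dimension; this is only a conceptual sketch and does not implement the SOP machinery or the local adaptivity of the paper:

```python
import numpy as np

def smooth2d_aniso(Y, lam_row=1.0, lam_col=100.0):
    """Anisotropic 2-D smoother with a separate penalty weight per dimension.

    The penalty is built from Kronecker products, exploiting the tensor
    structure of the grid (illustrative dense version; real implementations
    use sparse matrices or array algorithms).
    """
    nr, nc = Y.shape
    Dr = np.diff(np.eye(nr), n=2, axis=0)   # differences down the rows
    Dc = np.diff(np.eye(nc), n=2, axis=0)   # differences along the columns
    P = (np.kron(np.eye(nc), lam_row * Dr.T @ Dr)
         + np.kron(lam_col * Dc.T @ Dc, np.eye(nr)))
    z = np.linalg.solve(np.eye(nr * nc) + P, Y.flatten(order="F"))
    return z.reshape(nr, nc, order="F")
```

Setting `lam_row` and `lam_col` independently lets the fit be stiff in one direction and flexible in the other, which is the essence of anisotropy.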


Subject(s)
Neurons , Neurons/physiology
4.
Sci Rep ; 12(1): 11241, 2022 07 04.
Article in English | MEDLINE | ID: mdl-35787655

ABSTRACT

We present a fast and simple algorithm for super-resolution with single images. It is based on penalized least squares regression and exploits the tensor structure of two-dimensional convolution. A ridge penalty and a difference penalty are combined; the former removes singularities, while the latter eliminates ringing. We exploit the conjugate gradient algorithm to avoid explicit matrix inversion. Large images are handled with ease: zooming a 100 by 100 pixel image to 800 by 800 pixels takes less than a second on an average PC. Several examples, from applications in wide-field fluorescence microscopy, illustrate performance.
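A one-dimensional analogue conveys the structure of the penalized least squares problem; the down-sampling operator `C`, the penalty weights, and the use of SciPy's conjugate gradient solver are illustrative choices here, whereas the paper works in two dimensions with tensor-product convolutions:

```python
import numpy as np
from scipy.sparse.linalg import cg

def zoom_1d(y, factor=4, kappa=1e-4, lam=1.0):
    """Solve min ||y - C z||^2 + kappa ||z||^2 + lam ||D z||^2 with CG.

    C averages each block of `factor` fine pixels into one coarse pixel;
    the ridge term (kappa) removes the singularity of the problem and the
    difference penalty (lam) suppresses ringing.
    """
    n = len(y) * factor
    C = np.kron(np.eye(len(y)), np.full((1, factor), 1.0 / factor))
    D = np.diff(np.eye(n), n=2, axis=0)
    A = C.T @ C + kappa * np.eye(n) + lam * D.T @ D
    z, info = cg(A, C.T @ y, maxiter=1000)   # no explicit matrix inversion
    return z
```

Conjugate gradients only needs matrix-vector products, which is what makes the full two-dimensional problem tractable without ever forming or inverting the large system matrix explicitly.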


Subject(s)
Algorithms , Microscopy, Fluorescence
5.
Sci Rep ; 11(1): 7569, 2021 04 07.
Article in English | MEDLINE | ID: mdl-33828326

ABSTRACT

Sub-diffraction or super-resolution fluorescence imaging allows the visualization of the cellular morphology and interactions at the nanoscale. Statistical analysis methods such as super-resolution optical fluctuation imaging (SOFI) obtain an improved spatial resolution by analyzing fluorophore blinking but can be perturbed by the presence of non-stationary processes such as photodestruction or fluctuations in the illumination. In this work, we propose to use Whittaker smoothing to remove these smooth signal trends and retain only the information associated with independent blinking of the emitters, thus enhancing the SOFI signals. We find that our method works well to correct photodestruction, especially when it occurs quickly. The resulting images show a much higher contrast, strongly suppressed background and a more detailed visualization of cellular structures. Our method is parameter-free and computationally efficient, and can be readily applied on both two-dimensional and three-dimensional data.

6.
IEEE J Biomed Health Inform ; 24(3): 825-834, 2020 03.
Article in English | MEDLINE | ID: mdl-31283491

ABSTRACT

Shape analysis is increasingly becoming important to study changes in brain structures in relation to clinical neurological outcomes. This is a challenging task due to the high dimensionality of shape representations and the often limited number of available shapes. Current techniques counter the poor ratio between dimensions and sample size by using regularization in shape space, but do not take into account the spatial relations within the shapes. This can lead to models that are biologically implausible and difficult to interpret. We propose to use P-spline-based regression, which combines a generalized linear model (GLM) with coefficients expanded in B-splines and a penalty term that constrains the regression coefficients to be spatially smooth. Owing to the GLM, this method can naturally predict both continuous and discrete outcomes and can include non-spatial covariates without penalization. We evaluated our method on hippocampus shapes extracted from magnetic resonance (MR) images of 510 non-demented, elderly people. We related the hippocampal shape to age, memory score, and sex. The proposed method retained the good performance of current techniques, such as ridge regression, but produced smoother coefficient fields that are easier to interpret.
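For a continuous outcome the idea reduces to ridge-type regression on B-spline coefficients with a difference penalty. The helper names, basis size, and penalty weight below are hypothetical, and the GLM/IRLS machinery for discrete outcomes is omitted:

```python
import numpy as np
from scipy.interpolate import BSpline

def bspline_basis(n_points, n_bases, degree=3):
    """Evaluate n_bases clamped B-splines on an equally spaced grid in [0, 1]."""
    knots = np.concatenate([np.zeros(degree),
                            np.linspace(0.0, 1.0, n_bases - degree + 1),
                            np.ones(degree)])
    x = np.linspace(0.0, 1.0, n_points)
    B = np.zeros((n_points, n_bases))
    for j in range(n_bases):
        c = np.zeros(n_bases)
        c[j] = 1.0
        B[:, j] = BSpline(knots, c, degree)(x)
    return B

def pspline_coefficients(X, y, n_bases=10, lam=1.0):
    """Smooth coefficient field: beta = B @ theta, difference penalty on theta."""
    B = bspline_basis(X.shape[1], n_bases)
    D = np.diff(np.eye(n_bases), n=2, axis=0)
    U = X @ B                      # regression on the spline coefficients
    theta = np.linalg.solve(U.T @ U + lam * D.T @ D, U.T @ y)
    return B @ theta               # spatially smooth coefficient vector
```

Because the coefficients are forced through a smooth basis, neighbouring positions along the shape get similar weights, which is what makes the fitted fields interpretable.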


Subject(s)
Hippocampus/diagnostic imaging , Image Processing, Computer-Assisted/methods , Aged , Aged, 80 and over , Algorithms , Female , Humans , Magnetic Resonance Imaging , Male , Middle Aged , Phantoms, Imaging
7.
Epidemiology ; 30(5): 737-745, 2019 09.
Article in English | MEDLINE | ID: mdl-31205290

ABSTRACT

During an infectious disease outbreak, timely information on the number of new symptomatic cases is crucial. However, the reporting of new cases is usually subject to delay due to the incubation period, time to seek care, and diagnosis. This biases the counts of new cases by time of symptom onset downwards, increasingly so towards the current day. The real-time assessment of the current situation while correcting for underreporting is called nowcasting. We present a nowcasting method based on bivariate P-spline smoothing of the number of reported cases by time of symptom onset and reporting delay. Our objective is to predict the number of symptomatic-but-not-yet-reported cases and combine these with the already reported symptomatic cases into a nowcast. We assume the underlying two-dimensional reporting intensity surface to be smooth. We include prior information on the reporting process as additional constraints: the smooth surface is unimodal in the reporting delay dimension, is (almost) zero at a predefined maximum delay and has a prescribed shape at the beginning of the outbreak. Parameter estimation is done efficiently by penalized iterative weighted least squares. We illustrate our method on a large measles outbreak in the Netherlands. We show that even with very limited information the method is able to accurately predict the number of symptomatic-but-not-yet-reported cases. This results in substantially improved monitoring of new symptomatic cases in real time.


Subject(s)
Data Interpretation, Statistical , Disease Notification , Disease Outbreaks/prevention & control , Models, Statistical , Public Health Surveillance/methods , Child , Disease Notification/methods , Disease Notification/statistics & numerical data , Humans , Incidence , Measles/epidemiology , Measles/prevention & control , Netherlands/epidemiology , Retrospective Studies , Time Factors
8.
Sci Rep ; 8(1): 6815, 2018 05 01.
Article in English | MEDLINE | ID: mdl-29717146

ABSTRACT

Genome-wide association studies (GWAS) with longitudinal phenotypes provide opportunities to identify genetic variations associated with changes in human traits over time. Mixed models are used to correct for the correlated nature of longitudinal data. GWA studies are notorious for their computational challenges, which are considerable when mixed models for thousands of individuals are fitted to millions of SNPs. We present a new algorithm that speeds up a genome-wide analysis of longitudinal data by several orders of magnitude. It solves the equivalent penalized least squares problem efficiently, computing variances in an initial step. Factorizations and transformations are used to avoid inversion of large matrices. Because the system of equations is bordered, we can re-use components precomputed once for the mixed model without any SNP. Two SNP effects (the main effect and its interaction with time) are obtained. Our method completes the analysis a thousand times faster than the R package lme4, providing an almost identical solution for the coefficients and p-values. We provide an R implementation of our algorithm.
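The re-use of precomputed components can be sketched for the simpler case of ordinary least squares: factor the covariate block once, then solve each SNP's bordered system by block elimination (a hypothetical illustration; the paper works with the penalized mixed-model equations and two SNP effects):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def snp_effects(C, y, S):
    """Effect of each SNP column of S in the model y ~ C + snp.

    The Cholesky factor of C'C and the SNP-free solve are computed once;
    each SNP only adds one border row/column to the normal equations,
    handled by block elimination (a Schur complement).
    """
    F = cho_factor(C.T @ C)        # factor the SNP-free block once
    Cty = C.T @ y
    b0 = cho_solve(F, Cty)         # SNP-free solution, reused for every SNP
    effects = np.empty(S.shape[1])
    for j in range(S.shape[1]):
        s = S[:, j]
        a = C.T @ s                        # border of the augmented system
        d = s @ s - a @ cho_solve(F, a)    # Schur complement
        effects[j] = (s @ y - a @ b0) / d
    return effects
```

The per-SNP work reduces to a few matrix-vector products and triangular solves against the cached factor, which is where the order-of-magnitude speed-up over refitting the full model comes from.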


Subject(s)
Algorithms , Genome-Wide Association Study/methods , Models, Genetic , Computer Simulation , Cross-Sectional Studies , Data Accuracy , Humans , Least-Squares Analysis , Linear Models , Longitudinal Studies , Phenotype , Polymorphism, Single Nucleotide , Software
9.
Anal Chim Acta ; 1019: 1-13, 2018 Aug 17.
Article in English | MEDLINE | ID: mdl-29625674

ABSTRACT

In the analysis of biological samples, control over experimental design and data acquisition procedures alone cannot ensure well-conditioned 1H NMR spectra with maximal information recovery for data analysis. A third major element affects the accuracy and robustness of results: the data pre-processing/pre-treatment, to which not enough attention is usually devoted, in particular in metabolomic studies. The usual approach is to use proprietary software provided by the analytical instruments' manufacturers to conduct the entire pre-processing strategy. This widespread practice has a number of advantages, such as a user-friendly interface with graphical facilities, but it involves non-negligible drawbacks: a lack of methodological information and automation, a dependence on subjective human choices, only standard processing possibilities, and an absence of objective quality criteria to evaluate pre-processing quality. To meet these needs, this paper introduces PepsNMR, an R package dedicated to the whole processing chain prior to multivariate data analysis, including, among other tools, solvent signal suppression, internal calibration, phase, baseline and misalignment corrections, bucketing and normalisation. Methodological aspects are discussed and the package is compared to the gold standard procedure with two metabolomic case studies. The use of PepsNMR on these data shows better information recovery and predictive power based on objective and quantitative quality criteria. Other key assets of the package are workflow processing speed, reproducibility, reporting and flexibility, graphical outputs and documented routines.


Subject(s)
Metabolomics , Proton Magnetic Resonance Spectroscopy , Software
10.
Reprod Biomed Online ; 36(5): 576-583, 2018 May.
Article in English | MEDLINE | ID: mdl-29503210

ABSTRACT

Embryonic growth is often impaired in miscarriages. It is postulated that derangements in embryonic growth result in abnormalities of the embryonic curvature. This study aims to create first trimester reference charts of the human embryonic curvature and investigate differences between ongoing pregnancies and miscarriages. Weekly ultrasonographic scans from ongoing pregnancies and miscarriages were used from the Rotterdam periconceptional cohort and a cohort of recurrent miscarriages. In 202 ongoing pregnancies and 33 miscarriages, first trimester crown rump length and total arch length were measured to assess the embryonic curvature. The results show that the total arch length increases and shows more variation with advanced gestation. The crown rump length/total arch length ratio shows a strong increase from 8+0 to 10+0 weeks and flattening thereafter. No significant difference was observed between the curvature of embryos of ongoing pregnancies and miscarriages. The majority of miscarried embryos could not be measured. Therefore, this technique is too limited to recommend the measurement of the embryonic curvature in clinical practice.


Subject(s)
Embryo, Mammalian/diagnostic imaging , Embryonic Development , Abortion, Spontaneous , Adult , Cohort Studies , Crown-Rump Length , Female , Gestational Age , Humans , Imaging, Three-Dimensional , Pregnancy , Pregnancy Trimester, First , Ultrasonography, Prenatal