1.
Anal Chim Acta ; 1287: 341808, 2024 Jan 25.
Article in English | MEDLINE | ID: mdl-38182331

ABSTRACT

BACKGROUND: Low resolution nuclear magnetic resonance (LR-NMR) is a common technique to identify the constituents of complex materials (such as food and biological samples). The output of LR-NMR experiments is a relaxation signal which can be modelled as a type of convolution of an unknown density of relaxation times with decaying exponential functions, plus random Gaussian noise. The challenge is to estimate that density, a severely ill-posed problem. A complication is that non-negativity constraints need to be imposed in order to obtain valid results. SIGNIFICANCE AND NOVELTY: We present a smooth deconvolution model for the solution of the inverse estimation problem in LR-NMR relaxometry experiments. We model the logarithm of the relaxation time density as a smooth function using (adaptive) P-splines while matching the expected residual magnetisations with the observed ones. The roughness penalty removes the singularity of the deconvolution problem, and the estimated density is positive by design (since we model its logarithm). The model is non-linear, but it can be linearized easily. The penalty has to be tuned for each given sample. We describe an efficient EM-type algorithm to optimize the smoothing parameter(s). RESULTS: We analyze a set of food samples (potato tubers). The relaxation spectra extracted using our method are similar to those described in previous experiments but present sharper peaks. Using penalized signal regression we are able to accurately predict the dry matter content of the samples using the estimated spectra as covariates.
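A minimal R sketch of the core idea in the entry above, under simplifying assumptions: the smoothing parameter is fixed (the paper tunes it with an EM-type algorithm), the adaptive penalty is omitted, and all names (deconv_lrnmr, Tgrid, nseg) are illustrative rather than taken from the paper.

```r
library(splines)

# B-spline basis on equally spaced knots (standard P-spline construction)
bbase <- function(x, nseg = 20, deg = 3) {
  dx <- diff(range(x)) / nseg
  knots <- seq(min(x) - deg * dx, max(x) + deg * dx, length.out = nseg + 2 * deg + 1)
  splineDesign(knots, x, ord = deg + 1, outer.ok = TRUE)
}

# y: observed relaxation signal at acquisition times t; Tgrid: grid of candidate
# relaxation times. The log-density is modelled with B-splines, so the density
# exp(B %*% a) is positive by construction.
deconv_lrnmr <- function(t, y, Tgrid, nseg = 20, lambda = 1, niter = 50) {
  K <- exp(-outer(t, 1 / Tgrid))             # discretised exponential kernel
  B <- bbase(log10(Tgrid), nseg)             # basis over log relaxation time
  D <- diff(diag(ncol(B)), differences = 2)  # second-order differences
  P <- lambda * crossprod(D)                 # roughness penalty
  a <- rep(0, ncol(B))
  for (it in 1:niter) {                      # Gauss-Newton on the linearised model
    p  <- as.vector(exp(B %*% a))            # current density estimate
    mu <- as.vector(K %*% p)                 # implied relaxation signal
    M  <- K %*% (p * B)                      # Jacobian of mu with respect to a
    a  <- as.vector(a + solve(crossprod(M) + P, crossprod(M, y - mu) - P %*% a))
  }
  list(density = as.vector(exp(B %*% a)), fitted = as.vector(K %*% exp(B %*% a)))
}
```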

2.
iScience ; 26(1): 105760, 2023 Jan 20.
Article in English | MEDLINE | ID: mdl-36590163

ABSTRACT

Spatial transcriptomics is a novel technique that provides RNA-expression data with tissue-contextual annotations. Quality assessments of such techniques using end-user generated data are often lacking. Here, we evaluated data from the NanoString GeoMx Digital Spatial Profiling (DSP) platform and standard processing pipelines. We queried 72 ROIs from 12 glioma samples, performed replicate experiments of eight samples for validation, and evaluated five external datasets. The data consistently showed vastly different signal intensities between samples and experimental conditions that resulted in biased analyses. We evaluated the performance of alternative normalization strategies and show that quantile normalization can adequately address the technical issues related to the differences in data distributions. Compared to bulk RNA sequencing, NanoString DSP data show a limited dynamic range which underestimates differences between conditions. Weighted gene co-expression network analysis allowed extraction of gene signatures associated with tissue phenotypes from ROI annotations. NanoString GeoMx DSP data therefore require alternative normalization methods and analysis pipelines.
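As a hedged illustration of the normalization strategy the entry recommends, here is a generic quantile-normalization sketch in R for a genes-by-ROI matrix; it is a textbook implementation, not the pipeline used in the paper, and the input transformation in the usage line is an assumption.

```r
# Force every column (ROI/sample) to share the same empirical distribution,
# namely the mean of the column-wise sorted values.
quantile_normalize <- function(X) {
  ranks  <- apply(X, 2, rank, ties.method = "average")
  target <- rowMeans(apply(X, 2, sort))                  # reference distribution
  out    <- apply(ranks, 2, function(r) approx(seq_along(target), target, xout = r)$y)
  dimnames(out) <- dimnames(X)
  out
}

# usage (illustrative): norm_counts <- quantile_normalize(log2(raw_counts + 1))
```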

3.
Biometrics ; 79(3): 1972-1985, 2023 09.
Article in English | MEDLINE | ID: mdl-36062852

ABSTRACT

The receptive field (RF) of a visual neuron is the region of space that elicits neuronal responses. It can be mapped using different techniques that allow inferring its spatial and temporal properties. Raw RF maps (RFmaps) are usually noisy, making it difficult to obtain and study important features of the RF. A possible solution is to smooth them using P-splines. Yet, raw RFmaps are characterized by sharp transitions in both space and time. Their analysis thus calls for spatiotemporal adaptive P-spline models, where smoothness can be locally adapted to the data. However, the literature lacks proposals for adaptive P-splines in more than two dimensions. Furthermore, the extra flexibility afforded by adaptive P-spline models is obtained at the cost of a high computational burden, especially in a multidimensional setting. To fill these gaps, this work presents a novel anisotropic locally adaptive P-spline model in two (e.g., space) and three (space and time) dimensions. Estimation is based on the recently proposed SOP (Separation of Overlapping Precision matrices) method, which provides the speed we look for. Besides the spatiotemporal analysis of the neuronal activity data that motivated this work, the practical performance of the proposal is evaluated through simulations, and comparisons with alternative methods are reported.


Subject(s)
Neurons , Neurons/physiology
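The anisotropic (direction-specific) penalty underlying such tensor-product P-spline models can be written down compactly. The sketch below builds it for fixed smoothing parameters, whereas the paper lets the amount of smoothing vary locally and estimates it with the SOP method; the function name is illustrative.

```r
# Anisotropic second-order difference penalty for a k1 x k2 grid of
# tensor-product B-spline coefficients, stored column-major as vec(Theta).
aniso_penalty <- function(k1, k2, lambda1, lambda2, d = 2) {
  D1 <- diff(diag(k1), differences = d)   # differences along dimension 1
  D2 <- diff(diag(k2), differences = d)   # differences along dimension 2
  lambda1 * kronecker(diag(k2), crossprod(D1)) +
    lambda2 * kronecker(crossprod(D2), diag(k1))
}

# e.g. P <- aniso_penalty(15, 15, lambda1 = 10, lambda2 = 0.1)
```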
4.
Sci Rep ; 12(1): 11241, 2022 07 04.
Article in English | MEDLINE | ID: mdl-35787655

ABSTRACT

We present a fast and simple algorithm for super-resolution with single images. It is based on penalized least squares regression and exploits the tensor structure of two-dimensional convolution. A ridge penalty and a difference penalty are combined; the former removes singularities, while the latter eliminates ringing. We exploit the conjugate gradient algorithm to avoid explicit matrix inversion. Large images are handled with ease: zooming a 100 by 100 pixel image to 800 by 800 pixels takes less than a second on an average PC. Several examples, from applications in wide-field fluorescence microscopy, illustrate performance.


Subject(s)
Algorithms , Microscopy, Fluorescence
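A one-dimensional analogue of the penalized least-squares zoom described above, written as a hedged sketch: C stands in for the blur-and-downsample model, the ridge term (kappa) removes the singularity, and the difference penalty (lambda) suppresses ringing. The paper solves the two-dimensional problem with conjugate gradients and Kronecker (tensor) structure; here a direct solve is used for brevity and the names are illustrative.

```r
# Zoom a coarse signal y by an integer factor f with penalized least squares.
superres_1d <- function(y, f = 4, kappa = 1e-4, lambda = 1) {
  m <- length(y); n <- m * f
  C <- kronecker(diag(m), matrix(1 / f, nrow = 1, ncol = f))  # fine -> coarse map
  D <- diff(diag(n), differences = 2)                         # ringing control
  A <- crossprod(C) + kappa * diag(n) + lambda * crossprod(D)
  as.vector(solve(A, crossprod(C, y)))                        # zoomed signal
}
```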
5.
Sci Rep ; 11(1): 7569, 2021 04 07.
Article in English | MEDLINE | ID: mdl-33828326

ABSTRACT

Sub-diffraction or super-resolution fluorescence imaging allows the visualization of the cellular morphology and interactions at the nanoscale. Statistical analysis methods such as super-resolution optical fluctuation imaging (SOFI) obtain an improved spatial resolution by analyzing fluorophore blinking but can be perturbed by the presence of non-stationary processes such as photodestruction or fluctuations in the illumination. In this work, we propose to use Whittaker smoothing to remove these smooth signal trends and retain only the information associated with independent blinking of the emitters, thus enhancing the SOFI signals. We find that our method works well to correct photodestruction, especially when it occurs quickly. The resulting images show a much higher contrast, strongly suppressed background and a more detailed visualization of cellular structures. Our method is parameter-free and computationally efficient, and can be readily applied to both two-dimensional and three-dimensional data.
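A minimal sketch of the detrending step, assuming a single-pixel intensity trace y: a Whittaker smoother estimates the slow trend (bleaching, illumination drift), and subtracting it leaves the fast blinking fluctuations that SOFI exploits. Dense matrices are used here for clarity; sparse matrices would be used in practice, and the parameter values are illustrative.

```r
# Whittaker smoother: minimise ||y - z||^2 + lambda * ||D z||^2 for the trend z.
whittaker <- function(y, lambda = 1e4, d = 2) {
  m <- length(y)
  D <- diff(diag(m), differences = d)
  as.vector(solve(diag(m) + lambda * crossprod(D), y))
}

# keep only the fluctuations around the smooth trend
detrended <- function(y, lambda = 1e4) y - whittaker(y, lambda)
```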

6.
IEEE J Biomed Health Inform ; 24(3): 825-834, 2020 03.
Article in English | MEDLINE | ID: mdl-31283491

ABSTRACT

Shape analysis is increasingly becoming important to study changes in brain structures in relation to clinical neurological outcomes. This is a challenging task due to the high dimensionality of shape representations and the often limited number of available shapes. Current techniques counter the poor ratio between dimensions and sample size by using regularization in shape space, but do not take into account the spatial relations within the shapes. This can lead to models that are biologically implausible and difficult to interpret. We propose to use P-spline based regression, which combines a generalized linear model (GLM) with the coefficients described as B-splines and a penalty term that constrains the regression coefficients to be spatially smooth. Owing to the GLM, this method can naturally predict both continuous and discrete outcomes and can include non-spatial covariates without penalization. We evaluated our method on hippocampus shapes extracted from magnetic resonance (MR) images of 510 non-demented, elderly people. We related the hippocampal shape to age, memory score, and sex. The proposed method retained the good performance of current techniques, such as ridge regression, but produced smoother coefficient fields that are easier to interpret.


Subject(s)
Hippocampus/diagnostic imaging , Image Processing, Computer-Assisted/methods , Aged , Aged, 80 and over , Algorithms , Female , Humans , Magnetic Resonance Imaging , Male , Middle Aged , Phantoms, Imaging
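A hedged one-dimensional analogue of the approach in the entry above: the coefficient field over (here, ordered) shape locations is expressed as a B-spline expansion with a difference penalty that keeps it spatially smooth, while extra covariates enter unpenalized. The shape data in the paper live on a surface mesh and the model is a full GLM; this sketch only covers the Gaussian case with a 1-D coordinate, and all names are illustrative.

```r
library(splines)

bbase <- function(x, nseg = 20, deg = 3) {
  dx <- diff(range(x)) / nseg
  knots <- seq(min(x) - deg * dx, max(x) + deg * dx, length.out = nseg + 2 * deg + 1)
  splineDesign(knots, x, ord = deg + 1, outer.ok = TRUE)
}

# X: n x p matrix of shape measurements at p ordered locations,
# Z: unpenalized covariates (e.g. age, sex), y: outcome.
smooth_coef_fit <- function(X, y, Z, nseg = 20, lambda = 10) {
  B  <- bbase(seq_len(ncol(X)), nseg)        # basis over the locations
  U  <- cbind(X %*% B, Z)                    # spline part + unpenalized part
  D  <- diff(diag(ncol(B)), differences = 2)
  P  <- matrix(0, ncol(U), ncol(U))
  P[1:ncol(B), 1:ncol(B)] <- lambda * crossprod(D)  # penalize only the spline block
  theta <- solve(crossprod(U) + P, crossprod(U, y))
  list(beta  = as.vector(B %*% theta[1:ncol(B)]),   # smooth coefficient field
       gamma = theta[-(1:ncol(B))])                 # unpenalized covariate effects
}
```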
7.
Epidemiology ; 30(5): 737-745, 2019 09.
Article in English | MEDLINE | ID: mdl-31205290

ABSTRACT

During an infectious disease outbreak, timely information on the number of new symptomatic cases is crucial. However, the reporting of new cases is usually subject to delay due to the incubation period, time to seek care, and diagnosis. This results in a downward bias in the number of new cases by time of symptom onset towards the current day. The real-time assessment of the current situation while correcting for underreporting is called nowcasting. We present a nowcasting method based on bivariate P-spline smoothing of the number of reported cases by time of symptom onset and delay. Our objective is to predict the number of symptomatic-but-not-yet-reported cases and combine these with the already reported symptomatic cases into a nowcast. We assume the underlying two-dimensional reporting intensity surface to be smooth. We include prior information on the reporting process as additional constraints: the smooth surface is unimodal in the reporting delay dimension, is (almost) zero at a predefined maximum delay and has a prescribed shape at the beginning of the outbreak. Parameter estimation is done efficiently by penalized iterative weighted least squares. We illustrate our method on a large measles outbreak in the Netherlands. We show that even with very limited information the method is able to accurately predict the number of symptomatic-but-not-yet-reported cases. This results in substantially improved monitoring of new symptomatic cases in real time.


Subject(s)
Data Interpretation, Statistical , Disease Notification , Disease Outbreaks/prevention & control , Models, Statistical , Public Health Surveillance/methods , Child , Disease Notification/methods , Disease Notification/statistics & numerical data , Humans , Incidence , Measles/epidemiology , Measles/prevention & control , Netherlands/epidemiology , Retrospective Studies , Time Factors
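The computational core mentioned above, penalized iterative weighted least squares for Poisson counts, can be sketched generically as follows. The two-dimensional basis over onset time and reporting delay, the unimodality and boundary constraints, and the extrapolation that produces the nowcast are all omitted, so this is only the fitting engine, with illustrative names.

```r
# Penalized Poisson regression fitted by iterative weighted least squares.
# B: basis/design matrix, y: counts, P: penalty matrix (e.g. a 2-D P-spline penalty).
pirls_poisson <- function(B, y, P, niter = 50, tol = 1e-8) {
  theta <- rep(0, ncol(B))
  for (it in 1:niter) {
    eta <- as.vector(B %*% theta)
    mu  <- exp(eta)
    z   <- eta + (y - mu) / mu                                # working response
    theta_new <- solve(crossprod(B, mu * B) + P, crossprod(B, mu * z))
    if (max(abs(theta_new - theta)) < tol) { theta <- as.vector(theta_new); break }
    theta <- as.vector(theta_new)
  }
  theta
}
```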
8.
Sci Rep ; 8(1): 6815, 2018 05 01.
Article in English | MEDLINE | ID: mdl-29717146

ABSTRACT

Genome-wide association studies (GWAS) with longitudinal phenotypes provide opportunities to identify genetic variations associated with changes in human traits over time. Mixed models are used to correct for the correlated nature of longitudinal data. GWA studies are notorious for their computational challenges, which are considerable when mixed models for thousands of individuals are fitted to millions of SNPs. We present a new algorithm that speeds up a genome-wide analysis of longitudinal data by several orders of magnitude. It solves the equivalent penalized least squares problem efficiently, computing variances in an initial step. Factorizations and transformations are used to avoid inversion of large matrices. Because the system of equations is bordered, we can re-use components, which can be precomputed for the mixed model without a SNP. Two SNP effects (main and its interaction with time) are obtained. Our method completes the analysis a thousand times faster than the R package lme4, providing an almost identical solution for the coefficients and p-values. We provide an R implementation of our algorithm.


Subject(s)
Algorithms , Genome-Wide Association Study/methods , Models, Genetic , Computer Simulation , Cross-Sectional Studies , Data Accuracy , Humans , Least-Squares Analysis , Linear Models , Longitudinal Studies , Phenotype , Polymorphism, Single Nucleotide , Software
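A hedged sketch of the general idea of reusing precomputed quantities across SNPs: after factorizing the (here assumed known) mixed-model covariance once, each SNP only adds a border to an already-formed system of normal equations. The actual algorithm in the paper also returns the SNP-by-time interaction and avoids forming large matrices; names and the single returned effect are illustrative only.

```r
# y: phenotype, X: base design (intercept, time, covariates), snps: n x nsnp
# genotype matrix, V: covariance matrix implied by the fitted mixed model.
fast_gwas <- function(y, X, snps, V) {
  R   <- chol(V)                                  # V = t(R) %*% R, computed once
  Xt  <- backsolve(R, X, transpose = TRUE)        # whitened design
  yt  <- backsolve(R, y, transpose = TRUE)        # whitened phenotype
  XtX <- crossprod(Xt); Xty <- crossprod(Xt, yt)  # reused for every SNP
  apply(snps, 2, function(s) {
    st  <- backsolve(R, s, transpose = TRUE)
    Xts <- crossprod(Xt, st)
    A   <- rbind(cbind(XtX, Xts), cbind(t(Xts), crossprod(st)))  # bordered system
    b   <- c(Xty, crossprod(st, yt))
    beta <- solve(A, b)
    beta[length(beta)]                            # estimated SNP main effect
  })
}
```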
9.
Anal Chim Acta ; 1019: 1-13, 2018 Aug 17.
Article in English | MEDLINE | ID: mdl-29625674

ABSTRACT

In the analysis of biological samples, control over experimental design and data acquisition procedures alone cannot ensure well-conditioned 1H NMR spectra with maximal information recovery for data analysis. A third major element affects the accuracy and robustness of results: the data pre-processing/pre-treatment, to which not enough attention is usually devoted, in particular in metabolomic studies. The usual approach is to use proprietary software provided by the analytical instruments' manufacturers to conduct the entire pre-processing strategy. This widespread practice has a number of advantages, such as a user-friendly interface with graphical facilities, but it involves non-negligible drawbacks: a lack of methodological information and automation, a dependence on subjective human choices, only standard processing possibilities and an absence of objective quality criteria to evaluate pre-processing quality. To meet these needs, this paper introduces PepsNMR, an R package dedicated to the whole processing chain prior to multivariate data analysis, including, among other tools, solvent signal suppression, internal calibration, phase, baseline and misalignment corrections, bucketing and normalisation. Methodological aspects are discussed and the package is compared to the gold standard procedure with two metabolomic case studies. The use of PepsNMR on these data shows better information recovery and predictive power based on objective and quantitative quality criteria. Other key assets of the package are workflow processing speed, reproducibility, reporting and flexibility, graphical outputs and documented routines.


Subject(s)
Metabolomics , Proton Magnetic Resonance Spectroscopy , Software
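Purely as a hedged illustration of one of the listed pre-processing steps, here is a common asymmetric-least-squares baseline estimator. It is generic: it is not the PepsNMR API, whose actual function names and algorithms should be taken from the package documentation, and the dense matrices are only for clarity.

```r
# Asymmetric least squares baseline: a Whittaker smoother with weights that
# favour points lying below the current baseline estimate.
als_baseline <- function(y, lambda = 1e6, p = 0.001, niter = 10) {
  m <- length(y)
  D <- diff(diag(m), differences = 2)
  w <- rep(1, m)
  for (it in 1:niter) {
    z <- as.vector(solve(diag(w) + lambda * crossprod(D), w * y))
    w <- ifelse(y > z, p, 1 - p)        # asymmetric weights
  }
  z
}

# baseline-corrected spectrum (illustrative): spec - als_baseline(spec)
```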
10.
Reprod Biomed Online ; 36(5): 576-583, 2018 May.
Article in English | MEDLINE | ID: mdl-29503210

ABSTRACT

Embryonic growth is often impaired in miscarriages. It is postulated that derangements in embryonic growth result in abnormalities of the embryonic curvature. This study aims to create first trimester reference charts of the human embryonic curvature and investigate differences between ongoing pregnancies and miscarriages. Weekly ultrasonographic scans from ongoing pregnancies and miscarriages were used from the Rotterdam periconceptional cohort and a cohort of recurrent miscarriages. In 202 ongoing pregnancies and 33 miscarriages, first trimester crown rump length and total arch length were measured to assess the embryonic curvature. The results show that the total arch length increases and shows more variation with advanced gestation. The crown rump length/total arch length ratio shows a strong increase from 8+0 to 10+0 weeks and flattening thereafter. No significant difference was observed between the curvature of embryos of ongoing pregnancies and miscarriages. The majority of miscarried embryos could not be measured. Therefore, this technique is too limited to recommend the measurement of the embryonic curvature in clinical practice.


Subject(s)
Embryo, Mammalian/diagnostic imaging , Embryonic Development , Abortion, Spontaneous , Adult , Cohort Studies , Crown-Rump Length , Female , Gestational Age , Humans , Imaging, Three-Dimensional , Pregnancy , Pregnancy Trimester, First , Ultrasonography, Prenatal
11.
Biometrics ; 74(2): 685-693, 2018 06.
Article in English | MEDLINE | ID: mdl-29092100

ABSTRACT

In the field of cardio-thoracic surgery, valve function is monitored over time after surgery. The motivation for our research comes from a study which includes patients who received a human tissue valve in the aortic position. These patients are followed prospectively over time by standardized echocardiographic assessment of valve function. Loss to follow-up could be caused by valve intervention or the death of the patient. One of the main characteristics of the human valve is that its durability is limited. Therefore, it is of interest to obtain a prognostic model in order for the physicians to scan trends in valve function over time and plan their next intervention, accounting for the characteristics of the data. Several authors have focused on deriving predictions under the standard joint modeling of longitudinal and survival data framework that assumes a constant effect for the coefficient that links the longitudinal and survival outcomes. However, in our case, this may be a restrictive assumption. Since the valve degenerates, the association of the biomarker with survival may change over time. To improve dynamic predictions, we propose a Bayesian joint model that allows a time-varying coefficient to link the longitudinal and the survival processes, using P-splines. We evaluate the performance of the model in terms of discrimination and calibration, while accounting for censoring.


Subject(s)
Longitudinal Studies , Prognosis , Survival Analysis , Thoracic Surgery/methods , Aortic Valve/diagnostic imaging , Aortic Valve/transplantation , Bayes Theorem , Calibration , Echocardiography , Humans , Time Factors
12.
Theor Appl Genet ; 130(7): 1375-1392, 2017 Jul.
Article in English | MEDLINE | ID: mdl-28374049

ABSTRACT

KEY MESSAGE: A flexible and user-friendly spatial method called SpATS performed comparably to more elaborate and trial-specific spatial models in a series of sorghum breeding trials. Adjustment for spatial trends in plant breeding field trials is essential for efficient evaluation and selection of genotypes. Current mixed model methods of spatial analysis are based on a multi-step modelling process where global and local trends are fitted after trying several candidate spatial models. This paper reports the application of a novel spatial method that accounts for all types of continuous field variation in a single modelling step by fitting a smooth surface. The method uses two-dimensional P-splines with anisotropic smoothing formulated in the mixed model framework, referred to as the SpATS model. We applied this methodology to a series of large and partially replicated sorghum breeding trials. The new model was assessed in comparison with the more elaborate standard spatial models that use autoregressive correlation of residuals. The improvements in precision and the predictions of genotypic values produced by the SpATS model were equivalent to those obtained using the best fitting standard spatial models for each trial. One advantage of the approach with SpATS is that all patterns of spatial trend and genetic effects were modelled simultaneously by fitting a single model. Furthermore, we used a flexible model to adequately adjust for field trends. This strategy reduces potential parameter identification problems and simplifies the model selection process. Therefore, the new method should be considered as an efficient and easy-to-use alternative for routine analyses of plant breeding trials.


Subject(s)
Models, Genetic , Plant Breeding/methods , Sorghum/genetics , Algorithms , Genotype , Spatial Analysis
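A hedged, stripped-down sketch of fitting a smooth two-dimensional field-trend surface with tensor-product P-splines and anisotropic penalties, the building block of the approach above. Genotype effects, the mixed-model formulation, and automatic smoothing-parameter estimation (which SpATS provides) are all omitted, and the names are illustrative.

```r
library(splines)

bbase <- function(x, nseg = 10, deg = 3) {
  dx <- diff(range(x)) / nseg
  knots <- seq(min(x) - deg * dx, max(x) + deg * dx, length.out = nseg + 2 * deg + 1)
  splineDesign(knots, x, ord = deg + 1, outer.ok = TRUE)
}

# row, col: plot coordinates; y: phenotype already adjusted for design terms.
field_trend <- function(row, col, y, lambda_r = 1, lambda_c = 1) {
  Br <- bbase(row); Bc <- bbase(col)
  # tensor-product basis evaluated at each plot (row-wise Kronecker product)
  B  <- t(sapply(seq_along(y), function(i) kronecker(Bc[i, ], Br[i, ])))
  Dr <- diff(diag(ncol(Br)), differences = 2)
  Dc <- diff(diag(ncol(Bc)), differences = 2)
  P  <- lambda_r * kronecker(diag(ncol(Bc)), crossprod(Dr)) +
        lambda_c * kronecker(crossprod(Dc), diag(ncol(Br)))
  theta <- solve(crossprod(B) + P, crossprod(B, y))
  as.vector(B %*% theta)                 # fitted smooth spatial trend per plot
}
```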
13.
Stat Med ; 36(11): 1735-1753, 2017 05 20.
Article in English | MEDLINE | ID: mdl-28152571

ABSTRACT

The Bayesian approach has become increasingly popular because it allows fitting quite complex models to data via Markov chain Monte Carlo sampling. However, it is also recognized nowadays that Markov chain Monte Carlo sampling can become computationally prohibitive when applied to a large data set. We encountered serious computational difficulties when fitting a hierarchical model to longitudinal glaucoma data of patients who participate in an ongoing Dutch study. To overcome this problem, we applied and extended a recently proposed two-stage approach to model these data. Glaucoma is one of the leading causes of blindness in the world. In order to detect deterioration at an early stage, a model for predicting visual fields (VFs) in time is needed. Hence, the true underlying VF progression can be determined, and treatment strategies can then be optimized to prevent further VF loss. Because we were unable to fit these data with the classical one-stage approach upon which the current popular Bayesian software is based, we made use of the two-stage Bayesian approach. The considered hierarchical longitudinal model involves estimating a large number of random effects and deals with censoring and high measurement variability. In addition, we extended the approach with tools for model evaluation. Copyright © 2017 John Wiley & Sons, Ltd.


Subject(s)
Bayes Theorem , Glaucoma/pathology , Visual Fields , Adolescent , Adult , Aged , Aged, 80 and over , Data Interpretation, Statistical , Disease Progression , Female , Humans , Longitudinal Studies , Male , Markov Chains , Middle Aged , Models, Statistical , Monte Carlo Method , Prospective Studies , Young Adult
14.
Placenta ; 49: 72-79, 2017 01.
Article in English | MEDLINE | ID: mdl-28012458

ABSTRACT

INTRODUCTION: Offspring exposed to preeclampsia (PE) show an increased risk of cardiovascular disease in adulthood. We hypothesize that this is mediated by a disturbed vascular development of the placenta, umbilical cord and fetus. Therefore, we investigated associations between early-onset PE (EOPE), late-onset PE (LOPE) and features of placental and newborn vascular health. METHODS: We performed a nested case-control study in The Rotterdam Periconceptional Cohort, including 30 PE pregnancies (15 EOPE, 15 LOPE) and 218 control pregnancies (164 uncomplicated controls, 54 complicated controls including 28 fetal growth restriction, 26 preterm birth) and assessed macroscopic and histomorphometric outcomes of the placenta and umbilical cord. RESULTS: A significant association was observed between PE and a smaller umbilical vein area and wall thickness, independent of gestational age and birth weight. In EOPE we observed significant associations with a lower weight, length and width of the placenta, length of the umbilical cord, and thickness and wall area of the umbilical vein and artery. These associations attenuated after gestational age and birth weight adjustment. In LOPE a significant association with a larger placental width and smaller umbilical vein wall thickness was shown, independent of gestational age and birth weight. DISCUSSION: Our study suggests that PE is associated with a smaller umbilical cord vein area and wall thickness, independent of gestational age and birth weight, which may serve as a proxy of disturbed cardiovascular development in the newborn. Follow-up studies are needed to ultimately predict and lower the risk of cardiovascular disease in offspring exposed to PE.


Subject(s)
Birth Weight/physiology , Fetal Development/physiology , Fetal Growth Retardation/pathology , Placenta/blood supply , Pre-Eclampsia/pathology , Case-Control Studies , Female , Fetal Growth Retardation/physiopathology , Humans , Infant, Newborn , Organ Size/physiology , Placenta/pathology , Placenta/physiopathology , Pre-Eclampsia/physiopathology , Pregnancy
16.
Sci Rep ; 6: 21413, 2016 Feb 25.
Article in English | MEDLINE | ID: mdl-26912448

ABSTRACT

In wide-field super-resolution microscopy, investigating the nanoscale structure of cellular processes, and resolving fast dynamics and morphological changes in cells requires algorithms capable of working with a high density of emissive fluorophores. Current deconvolution algorithms estimate fluorophore density by using representations of the signal that promote sparsity of the super-resolution images via an L1-norm penalty. This penalty imposes a restriction on the sum of absolute values of the estimates of emitter brightness. By implementing an L0-norm penalty--on the number of fluorophores rather than on their overall brightness--we present a penalized regression approach that can work at high density and allows fast super-resolution imaging. We validated our approach on simulated images with densities up to 15 emitters per µm² and investigated total internal reflection fluorescence (TIRF) data of mitochondria in a HEK293-T cell labeled with DAKAP-Dronpa. We demonstrated super-resolution imaging of the dynamics with a resolution down to 55 nm and a 0.5 s time sampling.


Subject(s)
Algorithms , Fluorescent Dyes/chemistry , Fluorescent Dyes/metabolism , HEK293 Cells , Humans , Image Processing, Computer-Assisted , Microscopy, Fluorescence , Mitochondria/pathology
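A generic iterative hard-thresholding sketch for an L0-penalized least-squares problem, included only to illustrate what an L0 (count) penalty does compared with an L1 (total brightness) penalty. It is not the specific algorithm of the paper; A, lambda and the step-size choice are illustrative assumptions.

```r
# A: matrix mapping emitter brightness on a fine grid to the (vectorised)
# blurred camera frame; y: observed frame; lambda: L0 penalty weight.
iht_l0 <- function(A, y, lambda, niter = 200) {
  L <- sum(A^2)                                  # conservative step-size bound
  x <- rep(0, ncol(A))
  for (it in 1:niter) {
    g <- as.vector(crossprod(A, A %*% x - y))    # gradient of the LS term
    x <- x - g / L
    x[x < 0] <- 0                                # brightness cannot be negative
    x[x^2 < 2 * lambda / L] <- 0                 # hard threshold: the L0 proximal step
  }
  x
}
```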
17.
Clin Epigenetics ; 7: 83, 2015.
Article in English | MEDLINE | ID: mdl-26265957

ABSTRACT

BACKGROUND: Deleterious effects of prenatal tobacco smoking on fetal growth and newborn weight are well-established. One of the proposed mechanisms underlying this relationship is alterations in epigenetic programming. We selected 506 newborns from a population-based prospective birth cohort in the Netherlands. Prenatal parental tobacco smoking was assessed using self-reporting questionnaires. Information on birth outcomes was obtained from medical records. The deoxyribonucleic acid (DNA) methylation of the growth genes IGF2DMR and H19 was measured in newborn umbilical cord white blood cells. Associations were assessed between parental tobacco smoking and DNA methylation using linear mixed models and adjusted for potential confounders. RESULTS: The DNA methylation levels of IGF2DMR and H19 in the non-smoking group were, median (90 % range), 54.0 % (44.6-62.0) and 30.0 % (25.5-34.0), in the first-trimester-only smoking group 52.2 % (44.5-61.1) and 30.8 % (27.1-34.1), and in the continued smoking group 51.6 % (43.9-61.3) and 30.2 % (23.7-34.8), respectively. Continued prenatal maternal smoking was inversely associated with IGF2DMR methylation (β = -1.03, 95 % CI -1.76; -0.30) in a dose-dependent manner (P-trend = 0.030). This association seemed to be slightly more pronounced among newborn girls (β = -1.38, 95 % CI -2.63; -0.14) than boys (β = -0.72, 95 % CI -1.68; 0.24). H19 methylation was also inversely associated with continued smoking of <5 cigarettes/day (β = -0.96, 95 % CI -1.78; -0.14). Moreover, the association between maternal smoking and newborns small for gestational age seems to be partially explained by IGF2DMR methylation (β = -0.095, 95 % CI -0.249; -0.018). Among non-smoking mothers, paternal tobacco smoking was not associated with IGF2DMR or H19 methylation. CONCLUSIONS: Maternal smoking is inversely associated with IGF2DMR methylation in newborns, which can be one of the underlying mechanisms through which smoking affects fetal growth.

18.
Invest Ophthalmol Vis Sci ; 56(8): 4283-9, 2015 Jul.
Article in English | MEDLINE | ID: mdl-26161990

ABSTRACT

PURPOSE: One of the difficulties in modeling visual field (VF) data is the sometimes large and correlated measurement errors in the point-wise sensitivity estimates. As these errors affect all locations of the same VF, we propose to model them as global visit effects (GVE). We evaluate this model and show the effect it has on progression estimation and prediction. METHODS: Visual field series (24-2 Full Threshold; 15 biannual VFs per patient) of 125 patients with primary glaucoma were included in the analysis. The contribution of the GVE was evaluated by comparing the fitting and predictive ability of a conventional model, which does not contain the GVE, to that of a model that incorporates the GVE. Moreover, the GVE's effect on the estimated slopes was evaluated by determining the absolute difference between the slopes of the models. Finally, the magnitude of the GVE was compared with that of other measurement errors. RESULTS: The GVE model showed a significant improvement in both the model fit and predictive ability over the conventional model, especially when the number of VFs in a series is limited. The average absolute difference in slopes between the models was 0.13 dB/y. Lastly, the magnitude of the GVE was more than three times larger than that of the measurable factors combined. CONCLUSIONS: By incorporating the GVE in the longitudinal modeling of VF data, better estimates may be obtained of the rate of progression as well as of predicted future sensitivities.


Subject(s)
Glaucoma/physiopathology , Models, Theoretical , Visual Fields/physiology , Disease Progression , Glaucoma/diagnosis , Humans , Visual Field Tests
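One hedged way to express a global visit effect in standard mixed-model software is a crossed random intercept per visit, shared by all locations measured at that visit. This is only an illustration of the idea using lme4, not the authors' exact model, and the data-frame and column names are assumptions.

```r
library(lme4)

# Assumed long-format data frame vf_long with columns:
#   sensitivity (dB), years (time since baseline), patient, visit, location
fit <- lmer(
  sensitivity ~ years + (years | patient) + (1 | visit) + (1 | location),
  data = vf_long
)
# The (1 | visit) term absorbs the visit-wide shift common to all locations.
```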
19.
Am J Epidemiol ; 182(2): 138-47, 2015 07 15.
Article in English | MEDLINE | ID: mdl-26081676

ABSTRACT

Ungrouping binned data can be desirable for many reasons: Bins can be too coarse to allow for accurate analysis; comparisons can be hindered when different grouping approaches are used in different histograms; and the last interval is often wide and open-ended and, thus, covers a lot of information in the tail area. Age group-specific disease incidence rates and abridged life tables are examples of binned data. We propose a versatile method for ungrouping histograms that assumes that only the underlying distribution is smooth. Because of this modest assumption, the approach is suitable for most applications. The method is based on the composite link model, with a penalty added to ensure the smoothness of the target distribution. Estimates are obtained by maximizing a penalized likelihood. This maximization is performed efficiently by a version of the iteratively reweighted least-squares algorithm. Optimal values of the smoothing parameter are chosen by minimizing Akaike's Information Criterion. We demonstrate the performance of this method in a simulation study and provide several examples that illustrate the approach. Wide, open-ended intervals can be handled properly. The method can be extended to the estimation of rates when both the event counts and the exposures to risk are grouped.


Subject(s)
Epidemiologic Methods , Statistics as Topic
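A minimal sketch of the penalized composite-link estimation described above, under simplifying assumptions: the smoothing parameter is fixed rather than chosen by AIC, and exposures are ignored. C is the composition matrix that sums the latent fine-grained counts into the observed bins; all names are illustrative.

```r
# y: observed grouped counts; C: m x n composition matrix (rows = bins,
# columns = fine ages/values), built from the bin boundaries.
pclm <- function(y, C, lambda = 1e3, niter = 100, tol = 1e-8) {
  n <- ncol(C)
  D <- diff(diag(n), differences = 2)
  P <- lambda * crossprod(D)
  eta <- rep(log(sum(y) / n), n)                  # start from a flat distribution
  for (it in 1:niter) {
    gamma <- exp(eta)                             # latent fine-grained counts
    mu    <- as.vector(C %*% gamma)               # expected grouped counts
    Q     <- C * rep(gamma, each = nrow(C))       # C %*% diag(gamma)
    W     <- 1 / mu                               # Poisson working weights
    z     <- y - mu + as.vector(Q %*% eta)        # working response
    eta_new <- solve(crossprod(Q, W * Q) + P, crossprod(Q, W * z))
    if (max(abs(eta_new - eta)) < tol) { eta <- as.vector(eta_new); break }
    eta <- as.vector(eta_new)
  }
  exp(eta)                                        # estimated ungrouped counts
}
```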
20.
Bioinformatics ; 31(18): 3063-5, 2015 Sep 15.
Article in English | MEDLINE | ID: mdl-25971741

ABSTRACT

UNLABELLED: Alignment of peaks across samples is a difficult but unavoidable step in the data analysis for all analytical techniques containing a separation step like chromatography. Important application examples are the fields of metabolomics and proteomics. Parametric time warping (PTW) has already been shown to be very useful in these fields because of the highly restricted form of the warping functions, which avoids overfitting. Here, we describe a new formulation of PTW, working on peak-picked features rather than on complete profiles. Not only does this allow for a much smoother integration into existing pipelines, it also speeds up the algorithm (already among the fastest) by orders of magnitude. Using two publicly available datasets we show the potential of the new approach. The first set is an LC-DAD dataset of grape samples, and the second an LC-MS dataset of apple extracts. AVAILABILITY AND IMPLEMENTATION: Parametric time warping of peak lists is implemented in the ptw package, version 1.9.1 and onwards, available from Github (https://github.com/rwehrens/ptw) and CRAN (http://cran.r-project.org). The package also contains a vignette, providing more theoretical details and scripts to reproduce the results presented here. CONTACT: ron.wehrens@wur.nl.


Subject(s)
Algorithms , Carotenoids/analysis , Chromatography, Liquid/methods , Mass Spectrometry/methods , Vitis/chemistry , Metabolomics/methods , Proteomics/methods
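A hedged sketch of the essential computation when warping peak lists: once sample peaks have been matched to reference peaks (the matching step and the optimization used by ptw are omitted), a low-order polynomial warping function is fitted and applied. The function names are illustrative and this is not the ptw API.

```r
# t_sample, t_reference: retention times of matched peak pairs
fit_warp <- function(t_sample, t_reference, degree = 2) {
  coef(lm(t_reference ~ poly(t_sample, degree, raw = TRUE)))
}

apply_warp <- function(t, w) {
  as.vector(outer(t, seq_along(w) - 1, "^") %*% w)   # evaluate the polynomial warp
}

# usage (illustrative): w <- fit_warp(samp_peaks, ref_peaks)
#                       aligned <- apply_warp(samp_peaks, w)
```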