Results 1 - 20 of 5,788
1.
Proc Natl Acad Sci U S A ; 121(8): e2320764121, 2024 Feb 20.
Article in English | MEDLINE | ID: mdl-38346192

ABSTRACT

Many animal species rely on the Earth's magnetic field during navigation, but where in the brain magnetic information is processed is still unknown. To unravel this, we manipulated the natural magnetic field at the nest entrance of Cataglyphis desert ants and investigated how this affects relevant brain regions during early compass calibration. We found that manipulating the Earth's magnetic field has profound effects on neuronal plasticity in two sensory integration centers. Magnetic field manipulations interfere with a typical look-back behavior during learning walks of naive ants. Most importantly, structural analyses in the ants' neuronal compass (central complex) and memory centers (mushroom bodies) demonstrate that magnetic information affects neuronal plasticity during early visual learning. This suggests that magnetic information does not only serve as a compass cue for navigation but also as a global reference system crucial for spatial memory formation. We propose a neural circuit for integration of magnetic information into visual guidance networks in the ant brain. Taken together, our results provide an insight into the neural substrate for magnetic navigation in insects.


Subject(s)
Ants , Animals , Ants/physiology , Learning/physiology , Brain , Neuronal Plasticity/physiology , Magnetic Phenomena , Homing Behavior/physiology , Cues , Desert Climate
2.
Proc Natl Acad Sci U S A ; 120(7): e2216415120, 2023 Feb 14.
Article in English | MEDLINE | ID: mdl-36763529

ABSTRACT

Computational models have become a powerful tool in the quantitative sciences to understand the behavior of complex systems that evolve in time. However, they often contain a potentially large number of free parameters whose values cannot be obtained from theory but need to be inferred from data. This is especially the case for models in the social sciences, economics, or computational epidemiology. Yet, many current parameter estimation methods are mathematically involved and computationally slow to run. In this paper, we present a computationally simple and fast method to retrieve accurate probability densities for model parameters using neural differential equations. We present a pipeline comprising multiagent models acting as forward solvers for systems of ordinary or stochastic differential equations and a neural network to then extract parameters from the data generated by the model. The two combined create a powerful tool that can quickly estimate densities on model parameters, even for very large systems. We demonstrate the method on synthetic time series data of the SIR model of the spread of infection and perform an in-depth analysis of the Harris-Wilson model of economic activity on a network, representing a nonconvex problem. For the latter, we apply our method both to synthetic data and to data of economic activity across Greater London. We find that our method calibrates the model orders of magnitude more accurately than a previous study of the same dataset using classical techniques, while running between 195 and 390 times faster.
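
The pipeline described above pairs a multiagent forward solver with a neural density estimator. As a point of reference only, the following is a minimal Python sketch of the forward-solver half: an SIR system integrated with SciPy plus synthetic noisy observations, with the parameters recovered here by ordinary least squares rather than by the neural network described in the abstract.

```python
# Minimal sketch (not the paper's code): an SIR forward solver generating
# synthetic data, with parameters recovered here by ordinary least squares
# instead of the neural density estimator described in the abstract.
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import curve_fit

def sir(y, t, beta, gamma):
    s, i, r = y
    return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

t = np.linspace(0, 60, 120)
y0 = [0.99, 0.01, 0.0]                       # initial susceptible/infected/recovered fractions
true_beta, true_gamma = 0.35, 0.10
infected = odeint(sir, y0, t, args=(true_beta, true_gamma))[:, 1]
noisy = infected + np.random.default_rng(0).normal(0, 0.005, t.size)

def infected_curve(t, beta, gamma):
    return odeint(sir, y0, t, args=(beta, gamma))[:, 1]

(beta_hat, gamma_hat), _ = curve_fit(infected_curve, t, noisy, p0=[0.3, 0.15])
print(beta_hat, gamma_hat)                   # should be close to 0.35 and 0.10
```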

3.
Am J Hum Genet ; 109(6): 1153-1174, 2022 06 02.
Article in English | MEDLINE | ID: mdl-35659930

ABSTRACT

BRCA1 is a high-risk susceptibility gene for breast and ovarian cancer. Pathogenic protein-truncating variants are scattered across the open reading frame, but all known missense substitutions that are pathogenic because of missense dysfunction are located in either the amino-terminal RING domain or the carboxy-terminal BRCT domain. Heterodimerization of the BRCA1 and BARD1 RING domains is a molecularly defined obligate activity. Hence, we tested every BRCA1 RING domain missense substitution that can be created by a single nucleotide change for heterodimerization with BARD1 in a mammalian two-hybrid assay. Downstream of the laboratory assay, we addressed three additional challenges: assay calibration, validation thereof, and integration of the calibrated results with other available data, such as computational evidence and patient/population observational data to achieve clinically applicable classification. Overall, we found that 15%-20% of BRCA1 RING domain missense substitutions are pathogenic. Using a Bayesian point system for data integration and variant classification, we achieved clinical classification of 89% of observed missense substitutions. Moreover, among missense substitutions not present in the human observational data used here, we find an additional 45 with concordant computational and functional assay evidence in favor of pathogenicity plus 223 with concordant evidence in favor of benignity; these are particularly likely to be classified as likely pathogenic and likely benign, respectively, once human observational data become available.
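
The "Bayesian point system" mentioned above combines several evidence types into a posterior probability of pathogenicity. The snippet below is a generic, hypothetical illustration of that kind of evidence combination; the prior and likelihood-ratio values are placeholders, not the calibrated evidence strengths used in the study.

```python
# Generic, hypothetical illustration of Bayesian evidence combination for
# variant classification; the prior and likelihood ratios are placeholders,
# not the calibrated evidence strengths from the study.
prior_prob = 0.10                    # assumed prior probability of pathogenicity
likelihood_ratios = [18.7, 2.1]      # e.g. functional assay and computational evidence (illustrative)

posterior_odds = prior_prob / (1 - prior_prob)
for lr in likelihood_ratios:         # evidence types assumed independent
    posterior_odds *= lr

posterior_prob = posterior_odds / (1 + posterior_odds)
print(f"posterior probability of pathogenicity: {posterior_prob:.3f}")
```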


Subject(s)
Breast Neoplasms , Ovarian Neoplasms , Animals , BRCA1 Protein/genetics , Bayes Theorem , Breast Neoplasms/genetics , Female , Humans , Mammals , Mutation, Missense/genetics , Ovarian Neoplasms/genetics , Protein Domains
4.
Biostatistics ; 25(2): 306-322, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-37230469

ABSTRACT

Measurement error is common in environmental epidemiologic studies, but methods for correcting measurement error in regression models with multiple environmental exposures as covariates have not been well investigated. We consider a multiple imputation approach, combining external or internal calibration samples that contain information on both true and error-prone exposures with the main study data of multiple exposures measured with error. We propose a constrained chained equations multiple imputation (CEMI) algorithm that places constraints on the imputation model parameters in the chained equations imputation based on the assumptions of strong nondifferential measurement error. We also extend the constrained CEMI method to accommodate nondetects in the error-prone exposures in the main study data. We estimate the variance of the regression coefficients using the bootstrap with two imputations of each bootstrapped sample. The constrained CEMI method is shown by simulations to outperform existing methods, namely the method that ignores measurement error, classical calibration, and regression prediction, yielding estimated regression coefficients with smaller bias and confidence intervals with coverage close to the nominal level. We apply the proposed method to the Neighborhood Asthma and Allergy Study to investigate the associations between the concentrations of multiple indoor allergens and the fractional exhaled nitric oxide level among asthmatic children in New York City. The constrained CEMI method can be implemented by imposing constraints on the imputation matrix using the mice and bootImpute packages in R.


Subject(s)
Algorithms , Environmental Exposure , Child , Humans , Animals , Mice , Environmental Exposure/adverse effects , Epidemiologic Studies , Calibration , Bias
5.
Proc Natl Acad Sci U S A ; 119(47): e2202075119, 2022 11 22.
Article in English | MEDLINE | ID: mdl-36375059

ABSTRACT

Traditional general circulation models, or GCMs (that is, three-dimensional dynamical models with unresolved terms represented in equations with tunable parameters), have been a mainstay of climate research for several decades, and some of the pioneering studies have recently been recognized by a Nobel Prize in Physics. Yet, there is considerable debate around their continuing role in the future. Frequently mentioned as limitations of GCMs are the structural error and uncertainty across models with different representations of unresolved scales and the fact that the models are tuned to reproduce certain aspects of the observed Earth. We consider these shortcomings in the context of a future generation of models that may address these issues through substantially higher resolution and detail, or through the use of machine learning techniques to match them better to observations, theory, and process models. It is our contention that calibration, far from being a weakness of models, is an essential element in the simulation of complex systems, and contributes to our understanding of their inner workings. Models can be calibrated to reveal both fine-scale detail and the global response to external perturbations. New methods enable us to articulate and improve the connections between the different levels of abstract representation of climate processes, and our understanding resides in an entire hierarchy of models where GCMs will continue to play a central role for the foreseeable future.


Subject(s)
Climate Change , Climate , Forecasting , Computer Simulation , Physics
6.
Proc Natl Acad Sci U S A ; 119(43): e2209218119, 2022 10 25.
Article in English | MEDLINE | ID: mdl-36252031

ABSTRACT

Optical sensors, with great potential to convert invisible bioanalytical response into readable information, have been envisioned as a powerful platform for biological analysis and early diagnosis of diseases. However, the current extraction of sensing data is basically processed via a series of complicated and time-consuming calibrations between samples and reference, which inevitably introduce extra measurement errors and potentially annihilate small intrinsic responses. Here, we have proposed and experimentally demonstrated a calibration-free sensor for achieving high-precision biosensing detection, based on an optically controlled terahertz (THz) ultrafast metasurface. Photoexcitation of the silicon bridge enables the resonant frequency shifting from 1.385 to 0.825 THz and reaches the maximal phase variation up to 50° at 1.11 THz. The typical environmental measurement errors are completely eliminated in theory by normalizing the Fourier-transformed transmission spectra between ultrashort time delays of 37 ps, resulting in an extremely robust sensing device for monitoring the cancerous process of gastric cells. We believe that our calibration-free sensors with high precision and robust advantages can extend their implementation to study ultrafast biological dynamics and may inspire considerable innovations in the field of medical devices with nondestructive detection.


Subject(s)
Stomach Neoplasms , Humans , Silicon , Stomach Neoplasms/diagnosis
7.
Proc Natl Acad Sci U S A ; 119(4)2022 01 25.
Article in English | MEDLINE | ID: mdl-35042792

ABSTRACT

To rapidly process temporal information at a low metabolic cost, biological neurons integrate inputs as an analog sum, but communicate with spikes, binary events in time. Analog neuromorphic hardware uses the same principles to emulate spiking neural networks with exceptional energy efficiency. However, instantiating high-performing spiking networks on such hardware remains a significant challenge due to device mismatch and the lack of efficient training algorithms. Surrogate gradient learning has emerged as a promising training strategy for spiking networks, but its applicability for analog neuromorphic systems has not been demonstrated. Here, we demonstrate surrogate gradient learning on the BrainScaleS-2 analog neuromorphic system using an in-the-loop approach. We show that learning self-corrects for device mismatch, resulting in competitive spiking network performance on both vision and speech benchmarks. Our networks display sparse spiking activity with, on average, less than one spike per hidden neuron and input, perform inference at rates of up to 85,000 frames per second, and consume less than 200 mW. In summary, our work sets several benchmarks for low-energy spiking network processing on analog neuromorphic hardware and paves the way for future on-chip learning algorithms.
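
Surrogate gradient learning replaces the non-differentiable spike with a smooth derivative during backpropagation. The sketch below shows that core idea in PyTorch (an assumption on my part; the abstract's in-the-loop BrainScaleS-2 training stack is hardware-specific): a hard threshold in the forward pass and a "fast sigmoid" surrogate derivative in the backward pass.

```python
# Core surrogate-gradient idea, sketched in PyTorch (an assumption; not the
# BrainScaleS-2 in-the-loop training code): spike with a hard threshold in the
# forward pass, backpropagate through a smooth "fast sigmoid" derivative.
import torch

class SurrogateSpike(torch.autograd.Function):
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()                         # binary spike

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        surrogate = 1.0 / (1.0 + 10.0 * v.abs()) ** 2  # smooth pseudo-derivative
        return grad_output * surrogate

v = torch.randn(5, requires_grad=True)                 # membrane potentials
spikes = SurrogateSpike.apply(v)
spikes.sum().backward()                                # gradients flow despite the step function
print(spikes, v.grad)
```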


Subject(s)
Neural Networks, Computer , Action Potentials/physiology , Algorithms , Brain/physiology , Computers , Models, Biological , Models, Neurological , Models, Theoretical , Neurons/physiology
8.
BMC Biol ; 22(1): 79, 2024 Apr 10.
Article in English | MEDLINE | ID: mdl-38600528

ABSTRACT

BACKGROUND: Throughout its nearly four-billion-year history, life has undergone evolutionary transitions in which simpler subunits have become integrated to form a more complex whole. Many of these transitions opened the door to innovations that resulted in increased biodiversity and/or organismal efficiency. The evolution of multicellularity from unicellular forms represents one such transition, one that paved the way for cellular differentiation, including differentiation of male and female gametes. A useful model for studying the evolution of multicellularity and cellular differentiation is the volvocine algae, a clade of freshwater green algae whose members range from unicellular to colonial, from undifferentiated to completely differentiated, and whose gamete types can be isogamous, anisogamous, or oogamous. To better understand how multicellularity, differentiation, and gametes evolved in this group, we used comparative genomics and fossil data to establish a geologically calibrated roadmap of when these innovations occurred. RESULTS: Our ancestral-state reconstructions show that multicellularity arose independently twice in the volvocine algae. Our chronograms indicate multicellularity evolved during the Carboniferous-Triassic periods in Goniaceae + Volvocaceae, and possibly as early as the Cretaceous in Tetrabaenaceae. Using divergence time estimates we inferred when, and in what order, specific developmental changes occurred that led to differentiated multicellularity and oogamy. We find that in the volvocine algae the temporal sequence of developmental changes leading to differentiated multicellularity is much as proposed by David Kirk, and that multicellularity is correlated with the acquisition of anisogamy and oogamy. Lastly, morphological, molecular, and divergence time data suggest the possibility of cryptic species in Tetrabaenaceae. CONCLUSIONS: Large molecular datasets and robust phylogenetic methods are bringing the evolutionary history of the volvocine algae more sharply into focus. Mounting evidence suggests that extant species in this group are the result of two independent origins of multicellularity and multiple independent origins of cell differentiation. Also, the origin of the Tetrabaenaceae-Goniaceae-Volvocaceae clade may be much older than previously thought. Finally, the possibility of cryptic species in the Tetrabaenaceae provides an exciting opportunity to study the recent divergence of lineages adapted to live in very different thermal environments.


Subject(s)
Chlorophyceae , Volvox , Phylogeny , Biological Evolution , Volvox/genetics , Fossils , Plants , Cell Differentiation
9.
Proteomics ; 24(5): e2300145, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37726251

ABSTRACT

Exact p-value (XPV)-based methods for dot product-like score functions (such as the XCorr score implemented in Tide, SEQUEST, and Comet, or the shared peak count-based scoring in MSGF+ and ASPV) provide a fairly good calibration for peptide-spectrum-match (PSM) scoring in database searching-based MS/MS spectrum data identification. Unfortunately, standard XPV methods, in practice, cannot handle high-resolution fragmentation data produced by state-of-the-art mass spectrometers because having smaller bins increases the number of fragment matches that are assigned to incorrect bins and scored improperly. In this article, we present an extension of the XPV method, called the high-resolution exact p-value (HR-XPV) method, which can be used to calibrate PSM scores of high-resolution MS/MS spectra obtained with dot product-like scoring such as the XCorr. The HR-XPV carries remainder masses throughout the fragmentation, allowing them to greatly increase the number of fragments that are properly assigned to the correct bin and, thus, taking advantage of high-resolution data. Using four mass spectrometry data sets, our experimental results demonstrate that HR-XPV produces well-calibrated scores, which in turn results in more trusted spectrum annotations at any false discovery rate level.
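
The exact p-value idea rests on computing the full null distribution of an integer-valued match score by dynamic programming. The following is a generic Python illustration of that principle for a simple shared-peak-count score under independent bin matches; it is not the HR-XPV algorithm itself, which additionally propagates remainder masses across fragment bins.

```python
# Generic illustration of an exact p-value computed by dynamic programming for
# an integer-valued shared-peak-count score (the principle behind XPV-style
# calibration; not the HR-XPV algorithm, which also tracks remainder masses).
import numpy as np

def exact_score_pvalues(match_probs):
    """Null distribution of the number of matched fragment bins, assuming each
    candidate bin matches independently with its own probability."""
    dist = np.array([1.0])                  # P(score = 0) before considering any bin
    for p in match_probs:
        new = np.zeros(dist.size + 1)
        new[:-1] += dist * (1 - p)          # bin does not match: score unchanged
        new[1:] += dist * p                 # bin matches: score + 1
        dist = new
    return np.cumsum(dist[::-1])[::-1]      # p-value of score s = P(score >= s)

pvals = exact_score_pvalues([0.01] * 50)    # 50 fragment bins, 1% match chance each
print(pvals[:5])                            # p-values for scores 0, 1, 2, 3, 4
```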


Subject(s)
Algorithms , Tandem Mass Spectrometry , Tandem Mass Spectrometry/methods , Software , Peptides/chemistry , Calibration , Databases, Protein
10.
BMC Bioinformatics ; 25(1): 17, 2024 Jan 11.
Article in English | MEDLINE | ID: mdl-38212692

ABSTRACT

BACKGROUND: qPCR is a widely used technique in scientific research as a basic tool in gene expression analysis. Classically, the quantitative endpoint of qPCR is the threshold cycle (CT) that ignores differences in amplification efficiency among many other drawbacks. While other methods have been developed to analyze qPCR results, none has statistically proven to perform better than the CT method. Therefore, we aimed to develop a new qPCR analysis method that overcomes the limitations of the CT method. Our f0% [eff naught percent] method depends on a modified flexible sigmoid function to fit the amplification curve with a linear part to subtract the background noise. Then, the initial fluorescence is estimated and reported as a percentage of the predicted maximum fluorescence (f0%). RESULTS: The performance of the new f0% method was compared against the CT method along with another two outstanding methods-LinRegPCR and Cy0. The comparison regarded absolute and relative quantifications and used 20 dilution curves obtained from 7 different datasets that utilize different DNA-binding dyes. In the case of absolute quantification, f0% reduced CV%, variance, and absolute relative error by 1.66, 2.78, and 1.8 folds relative to CT; and by 1.65, 2.61, and 1.71 folds relative to LinRegPCR, respectively. While, regarding relative quantification, f0% reduced CV% by 1.76, 1.55, and 1.25 folds and variance by 3.13, 2.31, and 1.57 folds regarding CT, LinRegPCR, and Cy0, respectively. Finally, f0% reduced the absolute relative error caused by LinRegPCR by 1.83 folds. CONCLUSIONS: We recommend using the f0% method to analyze and report qPCR results based on its reported advantages. Finally, to simplify the usage of the f0% method, it was implemented in a macro-enabled Excel file with a user manual located on https://github.com/Mahmoud0Gamal/F0-perc/releases .
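
As a rough illustration of the approach described above (not the published f0% implementation, which uses a specific modified sigmoid), the sketch below fits a sigmoid plus a linear background to a simulated amplification curve with SciPy and reports the estimated cycle-zero fluorescence as a percentage of the fitted plateau.

```python
# Rough illustration only: fit a sigmoid plus linear background to a simulated
# qPCR amplification curve and report the background-free cycle-zero
# fluorescence as a percentage of the fitted plateau. The published f0% method
# uses a specific modified sigmoid model.
import numpy as np
from scipy.optimize import curve_fit

def amp_curve(c, f_max, c_half, k, a, b):
    return f_max / (1 + np.exp(-(c - c_half) / k)) + a * c + b

cycles = np.arange(1, 41)
rng = np.random.default_rng(1)
observed = amp_curve(cycles, 100.0, 22.0, 1.5, 0.05, 2.0) + rng.normal(0, 0.3, cycles.size)

popt, _ = curve_fit(amp_curve, cycles, observed,
                    p0=[observed.max(), 20.0, 2.0, 0.0, observed.min()])
f_max, c_half, k, a, b = popt
f0 = f_max / (1 + np.exp(c_half / k))      # sigmoid value at cycle 0, background removed
print(f"f0% = {100 * f0 / f_max:.2e}")
```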

11.
J Proteome Res ; 23(4): 1519-1530, 2024 Apr 05.
Article in English | MEDLINE | ID: mdl-38538550

ABSTRACT

Most tandem mass spectrometry fragmentation spectra have small calibration errors that can lead to suboptimal interpretation and annotation. We developed SpectiCal, a software tool that can read mzML files from data-dependent acquisition proteomics experiments in parallel, compute m/z calibrations for each file prior to identification analysis based on known low-mass ions, and produce information about frequently observed peaks and their explanations. Using calibration coefficients, the data can be corrected to generate new calibrated mzML files. SpectiCal was tested using five public data sets, creating a table of commonly observed low-mass ions and their identifications. Information about the calibration and individual peaks is written in PDF and TSV files. This includes information for each peak, such as the number of runs in which it appears, the percentage of spectra in which it appears, and a plot of the aggregated region surrounding each peak. SpectiCal can be used to compute MS run calibrations, examine MS runs for artifacts that might hinder downstream analysis, and generate tables of detected low-mass ions for further analysis. SpectiCal is freely available at https://github.com/PlantProteomes/SpectiCal.
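
A simplified picture of the underlying correction: fit the relationship between observed and theoretical m/z for a set of known reference ions, then invert it to recalibrate new observations. The Python sketch below uses a linear model and made-up reference masses; SpectiCal's actual calibration model and mzML handling are more involved.

```python
# Simplified illustration of m/z recalibration against known reference ions
# (linear correction fitted by least squares; reference masses are made-up
# numbers, and SpectiCal's real model and mzML handling are more involved).
import numpy as np

theoretical = np.array([110.0713, 120.0808, 129.1022, 136.0757, 175.1190])  # illustrative reference m/z
observed = theoretical * (1 + 8e-6) + 0.0004      # simulated miscalibrated peak positions

a, b = np.polyfit(theoretical, observed, 1)       # observed ~ a * theoretical + b
def recalibrate(mz):
    return (mz - b) / a                           # invert the fitted drift

print(recalibrate(observed) - theoretical)        # residuals ~ 0 after correction
```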


Subject(s)
Peptides , Software , Calibration , Peptides/analysis , Tandem Mass Spectrometry/methods , Ions
12.
J Proteome Res ; 23(4): 1351-1359, 2024 Apr 05.
Article in English | MEDLINE | ID: mdl-38445850

ABSTRACT

Targeted mass spectrometry (MS)-based absolute quantitative analysis has been increasingly used in biomarker discovery. The ability to accurately measure the masses by MS enabled the use of isotope-incorporated surrogates having virtually identical physiochemical properties with the target analytes as calibrators. Such a unique capacity allowed for accurate in-sample calibration. Current in-sample calibration uses multiple isotopologues or structural analogues for both the surrogate and the internal standard. Here, we simplified this common practice by using endogenous light peptides as the internal standards and used a mathematical deduction of "heavy matching light, HML" to directly quantify an endogenous analyte. This method provides all necessary assay performance parameters in the authentic matrix, including the lower limit of quantitation (LLOQ) and intercept of the calibration curve, by using only a single isotopologue of the analyte. This method can be applied to the quantitation of proteins, peptides, and small molecules. Using this method, we quantified the efficiency of heart tissue digestion and recovery using sodium deoxycholate as a detergent and two spiked exogenous proteins as mimics of heart proteins. The results demonstrated the robustness of the assay.


Subject(s)
Liquid Chromatography-Mass Spectrometry , Tandem Mass Spectrometry , Tandem Mass Spectrometry/methods , Chromatography, Liquid/methods , Calibration , Proteins , Peptides
13.
J Physiol ; 602(12): 2899-2916, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38734987

ABSTRACT

Low-level proprioceptive judgements involve a single frame of reference, whereas high-level proprioceptive judgements are made across different frames of reference. The present study systematically compared low-level (grasp → grasp) and high-level (vision → grasp, grasp → vision) proprioceptive tasks, and quantified the consistency of grasp → vision and possible reciprocal nature of related high-level proprioceptive tasks. Experiment 1 (n = 30) compared performance across vision → grasp, a grasp → vision and a grasp → grasp tasks. Experiment 2 (n = 30) compared performance on the grasp → vision task between hands and over time. Participants were accurate (mean absolute error 0.27 cm [0.20 to 0.34]; mean [95% CI]) and precise (R² = 0.95 [0.93 to 0.96]) for grasp → grasp judgements, with a strong correlation between outcomes (r = -0.85 [-0.93 to -0.70]). Accuracy and precision decreased in the two high-level tasks (R² = 0.86 and 0.89; mean absolute error = 1.34 and 1.41 cm), with most participants overestimating perceived width for the vision → grasp task and underestimating it for the grasp → vision task. There was minimal correlation between accuracy and precision for these two tasks. Converging evidence indicated performance was largely reciprocal (inverse) between the vision → grasp and grasp → vision tasks. Performance on the grasp → vision task was consistent between dominant and non-dominant hands, and across repeated sessions a day or week apart. Overall, there are fundamental differences between low- and high-level proprioceptive judgements that reflect fundamental differences in the cortical processes that underpin these perceptions. Moreover, the central transformations that govern high-level proprioceptive judgements of grasp are personalised, stable and reciprocal for reciprocal tasks. KEY POINTS: Low-level proprioceptive judgements involve a single frame of reference (e.g. indicating the width of a grasped object by selecting from a series of objects of different width), whereas high-level proprioceptive judgements are made across different frames of reference (e.g. indicating the width of a grasped object by selecting from a series of visible lines of different length). We highlight fundamental differences in the precision and accuracy of low- and high-level proprioceptive judgements. We provide converging evidence that the neural transformations between frames of reference that govern high-level proprioceptive judgements of grasp are personalised, stable and reciprocal for reciprocal tasks. This stability is likely key to precise judgements and accurate predictions in high-level proprioception.


Subject(s)
Hand Strength , Judgment , Proprioception , Humans , Proprioception/physiology , Male , Female , Adult , Judgment/physiology , Hand Strength/physiology , Young Adult , Psychomotor Performance/physiology , Visual Perception/physiology , Hand/physiology
14.
Am J Epidemiol ; 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38957970

ABSTRACT

In longitudinal studies, the devices used to measure exposures can change from visit to visit. Calibration studies, wherein a subset of participants is measured using both devices at follow-up, may be used to assess between-device differences (i.e., errors). Then, statistical methods are needed to adjust for between-device differences and the missing measurement data that often appear in calibration studies. Regression calibration and multiple imputation are two possible methods. We compared both methods in linear regression with a simulation study, considering various real-world scenarios for a longitudinal study of pulse wave velocity. Regression calibration and multiple imputation were both essentially unbiased, but correctly estimating the standard errors posed challenges. Multiple imputation with predictive mean matching produced close agreement with the empirical standard error. Fully stochastic multiple imputation underestimated the standard error by up to 50%, and regression calibration with bootstrapped standard errors performed slightly better than fully stochastic multiple imputation. Regression calibration was slightly more efficient than either multiple imputation method. The results suggest use of multiple imputation with predictive mean matching over fully stochastic imputation or regression calibration in longitudinal studies where a new device at follow-up might be error-prone compared to the device used at baseline.
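
For orientation, the snippet below is a toy simulation of classical regression calibration with an internal calibration subsample (illustrative only, not the study's code or data): the error-prone measurement W is replaced by the predicted true exposure E[X | W] estimated in the subsample, and the outcome model is then refit.

```python
# Toy simulation of classical regression calibration with an internal
# calibration subsample (illustrative only; not the study's code or data).
import numpy as np

rng = np.random.default_rng(2)
n, n_cal = 2000, 300
x = rng.normal(10, 2, n)                     # true exposure (baseline device)
w = x + rng.normal(0, 1.5, n)                # error-prone follow-up device
y = 1.0 + 0.5 * x + rng.normal(0, 1, n)      # outcome; true slope is 0.5

cal = rng.choice(n, n_cal, replace=False)    # subset measured with both devices
gamma1, gamma0 = np.polyfit(w[cal], x[cal], 1)
x_hat = gamma0 + gamma1 * w                  # calibrated exposure E[X | W] for everyone

beta_naive = np.polyfit(w, y, 1)[0]          # attenuated by measurement error
beta_rc = np.polyfit(x_hat, y, 1)[0]         # regression-calibrated estimate
print(f"naive: {beta_naive:.3f}  regression calibration: {beta_rc:.3f}  truth: 0.5")
```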

15.
Am J Epidemiol ; 193(2): 360-369, 2024 Feb 05.
Article in English | MEDLINE | ID: mdl-37759344

ABSTRACT

Conventional propensity score methods encounter challenges when unmeasured confounding is present, as it becomes impossible to accurately estimate the gold-standard propensity score when data on certain confounders are unavailable. Propensity score calibration (PSC) addresses this issue by constructing a surrogate for the gold-standard propensity score under the surrogacy assumption. This assumption posits that the error-prone propensity score, based on observed confounders, is independent of the outcome when conditioned on the gold-standard propensity score and the exposure. However, this assumption implies that confounders cannot directly impact the outcome and that their effects on the outcome are solely mediated through the propensity score. This raises concerns regarding the applicability of PSC in practical settings where confounders can directly affect the outcome. While PSC aims to target a conditional treatment effect by conditioning on a subject's unobservable propensity score, the causal interest in the latter case lies in a conditional treatment effect conditioned on a subject's baseline characteristics. Our analysis reveals that PSC is generally biased unless the effects of confounders on the outcome and treatment are proportional to each other. Furthermore, we identify 2 sources of bias: 1) the noncollapsibility of effect measures, such as the odds ratio or hazard ratio and 2) residual confounding, as the calibrated propensity score may not possess the properties of a valid propensity score.


Subject(s)
Calibration , Humans , Propensity Score , Confounding Factors, Epidemiologic , Bias , Proportional Hazards Models
16.
Am J Epidemiol ; 2024 May 13.
Article in English | MEDLINE | ID: mdl-38751312

ABSTRACT

The Cohort Study of Mobile Phone Use and Health (COSMOS) has repeatedly collected self-reported and operator-recorded data on mobile phone use. Assessing health effects using self-reported information is prone to measurement error, but operator data were available prospectively for only part of the study population and did not cover past mobile phone use. To optimize the available data and reduce bias, we evaluated different statistical approaches for constructing mobile phone exposure histories within COSMOS. We evaluated and compared the performance of four regression calibration (RC) methods (simple, direct, inverse, and generalized additive model for location, shape, and scale), complete-case (CC) analysis and multiple imputation (MI) in a simulation study with a binary health outcome. We used self-reported and operator-recorded mobile phone call data collected at baseline (2007-2012) from participants in Denmark, Finland, the Netherlands, Sweden, and the UK. Parameter estimates obtained using simple, direct, and inverse RC methods were associated with less bias and lower mean squared error than those obtained with CC analysis or MI. We showed that RC methods resulted in more accurate estimation of the relation between mobile phone use and health outcomes, by combining self-reported data with objective operator-recorded data available for a subset of participants.

17.
J Hepatol ; 81(1): 149-162, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38531493

ABSTRACT

Prediction models are everywhere in clinical medicine. We use them to assign a diagnosis or a prognosis, and there have been continuous efforts to develop better prediction models. It is important to understand the fundamentals of prediction modelling, thus, we herein describe nine steps to develop and validate a clinical prediction model with the intention of implementing it in clinical practice: Determine if there is a need for a new prediction model; define the purpose and intended use of the model; assess the quality and quantity of the data you wish to develop the model on; develop the model using sound statistical methods; generate risk predictions on the probability scale (0-100%); evaluate the performance of the model in terms of discrimination, calibration, and clinical utility; validate the model using bootstrapping to correct for the apparent optimism in performance; validate the model on external datasets to assess the generalisability and transportability of the model; and finally publish the model so that it can be implemented or validated by others.
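
To make the discrimination-and-calibration evaluation step concrete, the sketch below computes a c-statistic and a logistic recalibration intercept and slope on simulated validation data, assuming scikit-learn and statsmodels are available; it illustrates the general procedure and is not code from the article.

```python
# Illustration of evaluating discrimination (c-statistic) and calibration
# (logistic recalibration intercept and slope) on simulated validation data,
# assuming scikit-learn and statsmodels; not code from the article.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
import statsmodels.api as sm

rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 3))
y = rng.binomial(1, 1 / (1 + np.exp(-(X @ np.array([0.8, -0.5, 0.3])))))

model = LogisticRegression().fit(X[:600], y[:600])   # development data
p = model.predict_proba(X[600:])[:, 1]               # predicted risks on validation data

auc = roc_auc_score(y[600:], p)                      # discrimination
logit_p = np.log(p / (1 - p))
recal = sm.Logit(y[600:], sm.add_constant(logit_p)).fit(disp=0)
intercept, slope = recal.params                      # ideal: intercept 0, slope 1
print(f"c-statistic {auc:.2f}, calibration intercept {intercept:.2f}, slope {slope:.2f}")
```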


Subject(s)
Gastroenterology , Humans , Gastroenterology/methods , Gastroenterology/standards , Models, Statistical , Prognosis , Reproducibility of Results
18.
Article in English | MEDLINE | ID: mdl-38904851

ABSTRACT

Computational, or in-silico, models are an effective, non-invasive tool for investigating cardiovascular function. These models can be used in the analysis of experimental and clinical data to identify possible mechanisms of (ab)normal cardiovascular physiology. Recent advances in computing power and data management have led to innovative and complex modeling frameworks that simulate cardiovascular function across multiple scales. While commonly used in multiple disciplines, there is a lack of concise guidelines for the implementation of computer models in cardiovascular research. In line with recent calls for more reproducible research, it is imperative that scientists adhere to credible practices when developing and applying computational models to their research. The goal of this manuscript is to provide a consensus document that identifies best practices for in-silico computational modeling in cardiovascular research. These guidelines provide the necessary methods for mechanistic model development, model analysis, and formal model calibration using fundamentals from statistics. We outline rigorous practices for computational modeling in cardiovascular research and discuss its synergistic value to experimental and clinical data.
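
As one concrete example of the "formal model calibration using fundamentals from statistics" mentioned above, the sketch below fits the two parameters of a classic two-element Windkessel model of arterial pressure to simulated data by least squares (an illustrative choice of model and data, not taken from the consensus document).

```python
# Illustrative example of formal model calibration: least-squares estimation of
# the two parameters of a two-element Windkessel model of arterial pressure
# from simulated data (model and data chosen for illustration, not taken from
# the consensus document).
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import curve_fit

def inflow(t):
    return 300 * np.clip(np.sin(2 * np.pi * t), 0, None)   # pulsatile inflow, mL/s

def windkessel(P, t, R, C):
    return inflow(t) / C - P / (R * C)       # dP/dt for the 2-element Windkessel

t = np.linspace(0, 5, 500)
true_R, true_C = 1.0, 1.3                    # mmHg·s/mL and mL/mmHg
pressure = odeint(windkessel, 80.0, t, args=(true_R, true_C)).ravel()
data = pressure + np.random.default_rng(4).normal(0, 1.0, t.size)

def model(t, R, C):
    return odeint(windkessel, 80.0, t, args=(R, C)).ravel()

(R_hat, C_hat), _ = curve_fit(model, t, data, p0=[0.8, 1.0])
print(f"R = {R_hat:.2f}, C = {C_hat:.2f}")   # should recover roughly 1.0 and 1.3
```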

19.
Biostatistics ; 24(3): 760-775, 2023 Jul 14.
Article in English | MEDLINE | ID: mdl-35166342

ABSTRACT

Leveraging large-scale electronic health record (EHR) data to estimate survival curves for clinical events can enable more powerful risk estimation and comparative effectiveness research. However, use of EHR data is hindered by a lack of direct event time observations. Occurrence times of relevant diagnostic codes or target disease mentions in clinical notes are at best a good approximation of the true disease onset time. On the other hand, extracting precise information on the exact event time requires laborious manual chart review and is sometimes altogether infeasible due to a lack of detailed documentation. Current status labels (binary indicators of phenotype status during follow-up) are significantly more efficient and feasible to compile, enabling more precise survival curve estimation given limited resources. Existing survival analysis methods using current status labels focus almost entirely on supervised estimation, and naive incorporation of unlabeled data into these methods may lead to biased estimates. In this article, we propose Semisupervised Calibration of Risk with Noisy Event Times (SCORNET), which yields a consistent and efficient survival function estimator by leveraging a small set of current status labels and a large set of informative features. In addition to providing theoretical justification of SCORNET, we demonstrate in both simulation and real-world EHR settings that SCORNET achieves efficiency akin to the parametric Weibull regression model, while also exhibiting semi-nonparametric flexibility and relatively low empirical bias in a variety of generative settings.


Subject(s)
Electronic Health Records , Humans , Calibration , Bias , Computer Simulation
20.
Magn Reson Med ; 2024 May 10.
Article in English | MEDLINE | ID: mdl-38733068

ABSTRACT

PURPOSE: To address the limitations of spinal cord imaging at ultra-high field (UHF) due to time-consuming parallel transmit (pTx) adjustments. This study introduces calibration-free offline computed universal shim modes that can be applied seamlessly for different pTx RF coils and spinal cord target regions, substantially enhancing spinal cord imaging efficiency at UHF. METHODS: A library of channel-wise relative B1+ maps for the cervical spinal cord (six datasets) and thoracic and lumbar spinal cord (nine datasets) was constructed to optimize transmit homogeneity and efficiency for these regions. A tailored B0 shim was optimized for the cervical spine to enhance spatial magnetic field homogeneity further. The performance of the universal shims was validated using absolute saturation based B1+ mapping and high-resolution 2D and 3D multi-echo gradient-recalled echo (GRE) data to assess the image quality. RESULTS: The proposed universal shims demonstrated a 50% improvement in B1+ efficiency compared to the default (zero phase) shim mode. B1+ homogeneity was also improved by 20%. The optimized universal shims achieved performance comparable to subject-specific pTx adjustments, while eliminating the need for lengthy pTx calibration times, saving about 10 min per experiment. CONCLUSION: The development of universal shims represents a significant advance by eliminating time-consuming subject-specific pTx adjustments. This approach is expected to make UHF spinal cord imaging more accessible and user-friendly, particularly for non-pTx experts.
