1 - 20 of 106
1.
Pediatr Res ; 2024 Apr 16.
Article En | MEDLINE | ID: mdl-38627591

BACKGROUND: Neurodevelopmental trajectories of preterm children may have changed due to changes in care and in society. We aimed to compare neurodevelopmental trajectories in early and moderately late preterm children, measured using the Developmental (D)-score, in two cohorts born 15 years apart. METHODS: We included early preterm and moderately late preterm children from two Dutch cohorts (LOLLIPOP, 2002-2003, and ePREM, 2016-2017). ePREM counterparts were matched to LOLLIPOP participants by gestational age and sex. D-score trajectories were summarized by a multilevel model with random intercepts and random slopes, and multigroup analyses were used to test whether the intercepts and slopes differed across cohorts. RESULTS: We included 1686 preterm children (1071 moderately late preterm, 615 early preterm) from LOLLIPOP, and matched these with 1686 ePREM counterparts. The neurodevelopmental trajectories of the two cohorts were mostly similar. For early preterm children, we found no statistically significant differences. For moderately late preterm children, both the intercept (43.0 vs. 42.3, p < 0.001) and slope (23.5 vs. 23.9, p = 0.002) showed some, but only clinically minor, differences. CONCLUSION: Developmental trajectories, measured using the D-score, in the first four years of life are comparable and stable across a period of 15 years for both early and moderately late preterm children. IMPACT: Neurodevelopmental trajectories are similar for early and moderately late preterm children born 15 years apart and thus appear stable over time. The validated Developmental score visualizes these trajectories based on developmental milestone attainment. Because of its stability over time, the Developmental score trajectory may aid clinicians in the neurodevelopmental assessment of preterm children, as it simplifies monitoring and interpretation, similar to a growth chart.
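As an illustration of the kind of random-intercept, random-slope model described above, the following Python sketch uses statsmodels; the data frame and column names (dscore, age, cohort, child_id) are hypothetical, and the cohort-by-age interaction only approximates the multigroup comparison reported in the abstract.

```python
import statsmodels.formula.api as smf

# Long-format data: one row per developmental assessment (hypothetical columns).
model = smf.mixedlm(
    "dscore ~ age * cohort",      # fixed effects: age, cohort, and their interaction
    data=children,
    groups=children["child_id"],  # one cluster per child
    re_formula="~age",            # random intercept and random slope for age
)
result = model.fit()
print(result.summary())
```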

2.
BMC Pediatr ; 23(1): 554, 2023 11 04.
Article En | MEDLINE | ID: mdl-37925410

BACKGROUND: This study evaluates changes in the neonatal morbidity, the neonatal care practices, and the length of hospital stay of surviving very preterm (VP) infants born in the Netherlands in the 1980s and in the 2000s, a period over which historical improvements were introduced into neonatal care. We also study whether these changes in neonatal morbidity, neonatal care practices and length of hospital stay are associated with sociodemographic, prenatal, and infant characteristics. METHODS: Two community-based cohorts from 1983 (POPS) and 2002-03 (LOLLIPOP) provided the perinatal data for our study. The analysis enrolled 1,228 participants born VP (before the 32nd week of gestation) and surviving to 2 years of age without any severe congenital malformation. A rigorous harmonisation protocol ensured a precise comparison of the cohorts by using identical definitions of the perinatal characteristics. RESULTS: In 2003, mothers were older when giving birth, multiple birth rates were higher, and significantly more parents had received higher education. In 2003, fewer VP infants had severe intraventricular haemorrhage and sepsis, and relatively more received continuous positive airway pressure, mechanical ventilation and caffeine therapy than in 1983. Antenatal corticosteroids and surfactant therapy were provided only in 2003. The lengths of stay in the neonatal intensive care unit and in hospital were 22 and 11 days shorter in 2003, respectively. Differences persisted after adjustment for sociodemographic, prenatal, and infant characteristics. CONCLUSIONS: Neonatal morbidities of the surviving VP infants in this study have not increased, and improvements are evident for various characteristics in two cohorts born 20 years apart with comparable gestational age and birth weight. Our data suggest that the improvements found are associated with more advanced therapeutic approaches and new national protocols in place, and less so with sociodemographic changes. This analysis provides a basis for further comparative analyses of the health and the development of VP children, particularly with regard to long-term outcomes.


Infant, Extremely Premature , Infant, Premature, Diseases , Infant, Newborn , Child , Infant , Humans , Pregnancy , Female , Netherlands/epidemiology , Length of Stay , Infant, Very Low Birth Weight , Infant, Premature, Diseases/epidemiology , Infant, Premature, Diseases/therapy , Gestational Age , Morbidity
3.
Stat Methods Med Res ; 32(11): 2172-2183, 2023 11.
Article En | MEDLINE | ID: mdl-37750213

Multivariate imputation using chained equations (MICE) is a popular algorithm for imputing missing data that entails specifying multivariate models through conditional distributions. For imputing missing continuous variables, two common imputation methods are the use of parametric imputation using a linear model and predictive mean matching. When imputing missing binary variables, the default approach is parametric imputation using a logistic regression model. In the R implementation of MICE, the use of predictive mean matching can be substantially faster than using logistic regression as the imputation model for missing binary variables. However, there is a paucity of research into the statistical performance of predictive mean matching for imputing missing binary variables. Our objective was to compare the statistical performance of predictive mean matching with that of logistic regression for imputing missing binary variables. Monte Carlo simulations were used to compare the statistical performance of predictive mean matching with that of logistic regression for imputing missing binary outcomes when the analysis model of scientific interest was a multivariable logistic regression model. We varied the size of the analysis samples (N = 250, 500, 1,000, 5,000, and 10,000) and the prevalence of missing data (5%-50% in increments of 5%). In general, the statistical performance of predictive mean matching was virtually identical to that of logistic regression for imputing missing binary variables when the analysis model was a logistic regression model. This was true across a wide range of scenarios defined by sample size and the prevalence of missing data. In conclusion, predictive mean matching can be used to impute missing binary variables. The use of predictive mean matching to impute missing binary variables can result in a substantial reduction in computer processing time when conducting simulations of multiple imputation.
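A stripped-down sketch of predictive mean matching for a binary variable, assuming a complete covariate matrix X and a partially observed 0/1 vector y; this is a simplified matching step (no Bayesian draw of the regression coefficients), not the full variant implemented in mice.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def pmm_impute_binary(X, y, k=5, seed=0):
    """One predictive-mean-matching imputation of a binary vector y
    (np.nan marks missing) given fully observed covariates X."""
    rng = np.random.default_rng(seed)
    obs = ~np.isnan(y)
    yhat = LinearRegression().fit(X[obs], y[obs]).predict(X)
    y_imp = y.copy()
    for i in np.flatnonzero(~obs):
        # donors: the k observed cases whose predicted means are closest to case i
        donors = np.argsort(np.abs(yhat[obs] - yhat[i]))[:k]
        y_imp[i] = y[obs][rng.choice(donors)]   # borrow an observed 0 or 1
    return y_imp
```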


Algorithms , Logistic Models , Computer Simulation
4.
Ann Hum Biol ; 50(1): 247-257, 2023 Feb.
Article En | MEDLINE | ID: mdl-37394524

BACKGROUND: Conventional growth charts offer limited guidance to track individual growth. AIM: To explore new approaches to improve the evaluation and prediction of individual growth trajectories. SUBJECTS AND METHODS: We generalise the conditional SDS gain to multiple historical measurements, using the Cole correlation model to find correlations at exact ages, the sweep operator to find regression weights, and a specified longitudinal reference. We explain the various steps of the methodology and validate and demonstrate the method using empirical data from the SMOCC study, with 1,985 children measured during ten visits at ages 0-2 years. RESULTS: The method performs according to statistical theory. We apply the method to estimate the referral rates for a given screening policy. We visualise the child's trajectory as an adaptive growth chart featuring two new graphical elements: amplitude (for evaluation) and flag (for prediction). The relevant calculations take about 1 millisecond per child. CONCLUSION: Longitudinal references capture the dynamic nature of child growth. The adaptive growth chart for individual monitoring works with exact ages, corrects for regression to the mean, has a known distribution at any pair of ages and is fast. We recommend the method for evaluating and predicting individual child growth.
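For the simplest case of one historical measurement, the conditional SDS gain corrects the current Z-score for regression to the mean using the age-to-age correlation; a minimal sketch follows (the generalisation to multiple measurements replaces r with regression weights obtained via the sweep operator).

```python
import numpy as np

def conditional_sds_gain(z_prev, z_now, r):
    """Conditional SDS gain for one prior measurement: how unusual the current
    Z-score is given the previous one, with correlation r between the two ages."""
    return (z_now - r * z_prev) / np.sqrt(1.0 - r ** 2)

# Example: a drop from +0.5 to -0.5 SD with r = 0.8 between the two ages
print(conditional_sds_gain(0.5, -0.5, 0.8))   # -1.5, flagged if beyond a chosen cut-off
```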


Growth Charts , Humans , Infant , Child, Preschool
5.
Heliyon ; 9(6): e17077, 2023 Jun.
Article En | MEDLINE | ID: mdl-37360073

Problem: The congeniality of the imputation model is crucial for valid statistical inferences. Hence, it is important to develop methodologies for diagnosing imputation models. Aim: We propose and evaluate a new diagnostic method based on posterior predictive checking to diagnose the congeniality of fully conditional imputation models. Our method applies to multiple imputation by chained equations, which is widely used in statistical software. Methods: The proposed method compares the observed data with their replicates generated under the corresponding posterior predictive distributions to diagnose the performance of imputation models. The method applies to various imputation models, including parametric and semi-parametric approaches and continuous and discrete incomplete variables. We studied the validity of the method through simulation and application. Results: The proposed diagnostic method based on posterior predictive checking demonstrates its validity in assessing the performance of imputation models. The method can diagnose the consistency of imputation models with the substantive model and can be applied to a broad range of research contexts. Conclusion: The diagnostic method based on posterior predictive checking provides a valuable tool for researchers who use fully conditional specification to handle missing data. By assessing the performance of imputation models, our method can help researchers improve the accuracy and reliability of their analyses. Furthermore, our method applies to different imputation models. Hence, it is a versatile and valuable tool for researchers seeking plausible imputation models.
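As a rough, simplified illustration of posterior predictive checking (not the paper's exact procedure), one can compare an observed statistic with its distribution over data replicated from a fitted normal linear imputation model; this sketch ignores parameter uncertainty and uses hypothetical inputs X_obs and y_obs.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def ppc_pvalue(X_obs, y_obs, stat=np.mean, n_rep=1000, seed=0):
    """Crude posterior predictive check for a normal linear imputation model:
    compare an observed statistic with its distribution over replicated data."""
    rng = np.random.default_rng(seed)
    m = LinearRegression().fit(X_obs, y_obs)
    resid_sd = np.std(y_obs - m.predict(X_obs), ddof=X_obs.shape[1] + 1)
    t_obs = stat(y_obs)
    t_rep = np.array([stat(m.predict(X_obs) + rng.normal(0, resid_sd, len(y_obs)))
                      for _ in range(n_rep)])
    return np.mean(t_rep >= t_obs)   # posterior predictive p-value
```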

6.
Stat Med ; 42(10): 1525-1541, 2023 05 10.
Article En | MEDLINE | ID: mdl-36807923

We examined the setting in which a variable that is subject to missingness is used both as an inclusion/exclusion criterion for creating the analytic sample and subsequently as the primary exposure in the analysis model that is of scientific interest. An example is cancer stage, where patients with stage IV cancer are often excluded from the analytic sample, and cancer stage (I to III) is an exposure variable in the analysis model. We considered two analytic strategies. The first strategy, referred to as "exclude-then-impute," excludes subjects for whom the observed value of the target variable is equal to the specified value and then uses multiple imputation to complete the data in the resultant sample. The second strategy, referred to as "impute-then-exclude," first uses multiple imputation to complete the data and then excludes subjects based on the observed or filled-in values in the completed samples. Monte Carlo simulations were used to compare five methods (one based on "exclude-then-impute" and four based on "impute-then-exclude") along with the use of a complete case analysis. We considered both missing completely at random and missing at random missing data mechanisms. We found that an impute-then-exclude strategy using substantive model compatible fully conditional specification tended to have superior performance across 72 different scenarios. We illustrated the application of these methods using empirical data on patients hospitalized with heart failure when heart failure subtype was used for cohort creation (excluding subjects with heart failure with preserved ejection fraction) and was also an exposure in the analysis model.
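The two orderings can be sketched as follows; the data frame df and the stage column are hypothetical, and the toy hot-deck imputer merely stands in for a proper MICE or substantive-model-compatible FCS run.

```python
import numpy as np
import pandas as pd

def impute_once(df, rng):
    """Toy single imputation (hot-deck draw from observed values per column);
    in practice this would be one MICE / SMC-FCS imputation."""
    out = df.copy()
    for col in out.columns:
        miss = out[col].isna()
        out.loc[miss, col] = rng.choice(out.loc[~miss, col].to_numpy(), miss.sum())
    return out

rng = np.random.default_rng(1)
m = 20

# Exclude-then-impute: drop subjects with an observed stage of IV, then impute.
restricted = df[(df["stage"] != 4) | df["stage"].isna()]
eti = [impute_once(restricted, rng) for _ in range(m)]

# Impute-then-exclude: impute the full sample, then drop stage-IV rows
# (observed or imputed) from every completed dataset.
ite = [d[d["stage"] != 4] for d in (impute_once(df, rng) for _ in range(m))]
```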


Research Design , Humans , Data Interpretation, Statistical , Monte Carlo Method
7.
Sci Rep ; 13(1): 644, 2023 01 12.
Article En | MEDLINE | ID: mdl-36635443

Fully conditional specification (FCS) is a convenient and flexible multiple imputation approach. It specifies a sequence of simple regression models instead of a potentially complex joint density for missing variables. However, FCS may not converge to a stationary distribution. Many authors have studied the convergence properties of FCS when the priors of the conditional models are non-informative. We extend these results to the case of informative priors. This paper evaluates the convergence properties of the normal linear model with normal-inverse-gamma priors. The theoretical and simulation results prove the convergence of FCS and show the equivalence of prior specification under the joint model and a set of conditional models when the analysis model is a linear regression with normal-inverse-gamma priors.
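For reference, the conjugate normal-inverse-gamma prior for a normal linear imputation model takes the textbook form below (a general statement, not notation copied from the paper):

```latex
\beta \mid \sigma^2 \sim \mathcal{N}\left(\beta_0,\; \sigma^2 V_0\right),
\qquad
\sigma^2 \sim \text{Inverse-Gamma}(a_0, b_0)
```

Conjugacy keeps the full conditionals in the same family, which is what makes the Gibbs-type FCS updates tractable and underlies the equivalence between the joint and conditional specifications.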


Models, Statistical , Linear Models , Data Interpretation, Statistical , Computer Simulation , Bayes Theorem
8.
Sci Rep ; 13(1): 952, 2023 01 18.
Article En | MEDLINE | ID: mdl-36653404

Intensive longitudinal data can be used to explore important associations and patterns between various types of inputs and outcomes. Nonlinear relations and irregular measurement occasions can pose problems for developing an accurate model for these kinds of data. This paper focuses on the development, fitting and evaluation of a prediction model with irregular intensive longitudinal data. A three-step process for developing a prediction tool for (daily) monitoring and prediction is outlined and illustrated for intensive weight measurements in piglets. Step 1 addresses a nonlinear relation in the data by developing and applying a normalizing transformation. Step 2 addresses the intermittent nature of the time points by aligning the measurement times to a common time grid with a broken-stick model. Step 3 addresses the prediction problem by selecting and evaluating inputs and covariates in the model to obtain accurate predictions. The final model predicts future outcomes accurately, while allowing for nonlinearities between input and output and for different measurement histories of individuals. The methodology described can be used to develop a tool that deals with intensive, irregular longitudinal data and uses the available information in an optimal way. The resulting tool was shown to perform well for piglet weight prediction and can be adapted to many different applications.
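Step 2, the broken-stick alignment, is essentially a linear mixed model with a piecewise-linear ("tent") basis over a fixed grid of ages; a minimal sketch with hypothetical variables (age, weight, piglet_id, knots):

```python
import numpy as np
import statsmodels.api as sm

def broken_stick_basis(age, knots):
    """Degree-1 B-spline basis: one 'tent' column per knot, linear in between."""
    B = np.zeros((len(age), len(knots)))
    for j in range(len(knots)):
        unit = np.zeros(len(knots))
        unit[j] = 1.0
        B[:, j] = np.interp(age, knots, unit)
    return B

knots = np.array([0, 7, 14, 21, 28, 35])        # common time grid (days)
B = broken_stick_basis(age, knots)              # age: irregular observation times
fit = sm.MixedLM(weight, B, groups=piglet_id, exog_re=B).fit()
# fixed + random effects at the knots give each piglet's value on the common grid
```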


Time , Swine , Animals , Forecasting
9.
BMJ Glob Health ; 8(1)2023 01.
Article En | MEDLINE | ID: mdl-36650017

INTRODUCTION: With the ratification of the Sustainable Development Goals, there is an increased emphasis on early childhood development (ECD) and well-being. The WHO-led Global Scales for Early Development (GSED) project aims to provide population and programmatic level measures of ECD for 0-3 years that are valid, reliable and have psychometrically stable performance across geographical, cultural and language contexts. This paper reports on the creation of two measures: (1) the GSED Short Form (GSED-SF), a caregiver-reported measure for population evaluation that is self-administered and requires no training, and (2) the GSED Long Form (GSED-LF), a directly administered/observed measure for programmatic evaluation that is administered by a trained professional. METHODS: We selected 807 psychometrically best-performing items using a Rasch measurement model from an ECD measurement databank which comprised 66 075 children assessed on 2211 items from 18 ECD measures in 32 countries. For 766 of these items, in-depth subject matter expert judgements were gathered to inform final item selection. Specifically, we collected data on (1) conceptual matches between pairs of items originating from different measures, (2) the developmental domain(s) measured by each item and (3) perceptions of the feasibility of administration of each item in diverse contexts. Prototypes were finalised through a combination of psychometric performance evaluation and expert consensus to optimally identify items. RESULTS: We created the GSED-SF (139 items) and GSED-LF (157 items) for tablet-based and paper-based assessments, with an optimal set of items that fit the Rasch model, met subject matter expert criteria, avoided conceptual overlap, covered multiple domains of child development and were feasible to implement across diverse settings. CONCLUSIONS: State-of-the-art quantitative and qualitative procedures were used to select theoretically relevant and globally feasible items representing child development for children aged 0-3 years. GSED-SF and GSED-LF will be piloted and validated in children across diverse cultural, demographic, social and language contexts for global use.
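The Rasch model underlying the item selection places children and items on one logit scale, so the probability of passing an item is a simple logistic function; this is a generic statement of the model, not code from the GSED project.

```python
import numpy as np

def rasch_prob(theta, delta):
    """Rasch model: probability that a child with ability `theta` passes an
    item with difficulty `delta` (both on the same logit scale)."""
    return 1.0 / (1.0 + np.exp(-(theta - delta)))

print(rasch_prob(theta=1.0, delta=0.0))   # about 0.73 when ability exceeds difficulty by 1 logit
```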


Big Data , Judgment , Humans , Child , Child, Preschool , Surveys and Questionnaires , Child Development , Psychometrics
10.
BMJ Open ; 13(1): e062562, 2023 01 24.
Article En | MEDLINE | ID: mdl-36693690

INTRODUCTION: Children's early development is affected by caregiving experiences, with lifelong health and well-being implications. Governments and civil societies need population-based measures to monitor children's early development and ensure that children receive the care needed to thrive. To this end, the WHO developed the Global Scales for Early Development (GSED) to measure children's early development up to 3 years of age. The GSED includes three measures for population and programmatic level measurement: (1) short form (SF) (caregiver report), (2) long form (LF) (direct administration) and (3) psychosocial form (PF) (caregiver report). The primary aim of this protocol is to validate the GSED SF and LF. Secondary aims are to create preliminary reference scores for the GSED SF and LF, validate an adaptive testing algorithm and assess the feasibility and preliminary validity of the GSED PF. METHODS AND ANALYSIS: We will conduct the validation in seven countries (Bangladesh, Brazil, Côte d'Ivoire, Pakistan, The Netherlands, People's Republic of China, United Republic of Tanzania), varying in geography, language, culture and income through a 1-year prospective design, combining cross-sectional and longitudinal methods with 1248 children per site, stratified by age and sex. The GSED generates an innovative common metric (Developmental Score: D-score) using the Rasch model and a Development for Age Z-score (DAZ). We will evaluate six psychometric properties of the GSED SF and LF: concurrent validity, predictive validity at 6 months, convergent and discriminant validity, and test-retest and inter-rater reliability. We will evaluate measurement invariance by comparing differential item functioning and differential test functioning across sites. ETHICS AND DISSEMINATION: This study has received ethical approval from the WHO (protocol GSED validation 004583 20.04.2020) and approval in each site. Study results will be disseminated through webinars and publications from WHO, international organisations, academic journals and conference proceedings. REGISTRATION DETAILS: Open Science Framework https://osf.io/ on 19 November 2021 (DOI 10.17605/OSF.IO/KX5T7; identifier: osf-registrations-kx5t7-v1).


Caregivers , Language , Humans , Child , Child, Preschool , Reproducibility of Results , Cross-Sectional Studies , Surveys and Questionnaires , Psychometrics/methods
11.
Pharmacoeconomics ; 41(1): 93-105, 2023 01.
Article En | MEDLINE | ID: mdl-36287335

BACKGROUND AND OBJECTIVE: Assessment of health-related quality of life for individuals born very preterm and/or low birthweight (VP/VLBW) offers valuable complementary information alongside biomedical assessments. However, the impact of VP/VLBW status on health-related quality of life in adulthood is inconclusive. The objective of this study was to examine associations between VP/VLBW status and preference-based health-related quality-of-life outcomes in early adulthood. METHODS: Individual participant data were obtained from five prospective cohorts of individuals born VP/VLBW and controls contributing to the 'Research on European Children and Adults Born Preterm' Consortium. The combined dataset included over 2100 adult VP/VLBW survivors with an age range of 18-29 years. The main exposure was defined as birth before 32 weeks' gestation (VP) and/or birth weight below 1500 g (VLBW). Outcome measures included multi-attribute utility scores generated by the Health Utilities Index Mark 3 and the Short Form 6D. Data were analysed using generalised linear mixed models in a one-step approach using fixed-effects and random-effects models. RESULTS: VP/VLBW status was associated with a significant difference in the Health Utilities Index Mark 3 multi-attribute utility score of -0.06 (95% confidence interval -0.08, -0.04) in comparison to birth at term or at normal birthweight; this was not replicated for the Short Form 6D. Impacted functional domains included vision, ambulation, dexterity and cognition. VP/VLBW status was not associated with poorer emotional or social functioning, or increased pain. CONCLUSIONS: VP/VLBW status is associated with lower overall health-related quality of life in early adulthood, particularly in terms of physical and cognitive functioning. Further studies that estimate the effects of VP/VLBW status on health-related quality-of-life outcomes in mid and late adulthood are needed.


Infant, Extremely Premature , Quality of Life , Infant, Newborn , Child , Humans , Adult , Adolescent , Young Adult , Prospective Studies , Birth Weight , Infant, Very Low Birth Weight/psychology
12.
BMC Med Res Methodol ; 22(1): 196, 2022 07 18.
Article En | MEDLINE | ID: mdl-35850734

BACKGROUND: Multiple imputation is frequently used to address missing data when conducting statistical analyses. There is a paucity of research into the performance of multiple imputation when the prevalence of missing data is very high. Our objective was to assess the performance of multiple imputation when estimating a logistic regression model in settings where the prevalence of missing data for predictor variables is very high. METHODS: Monte Carlo simulations were used to examine the performance of multiple imputation when estimating a multivariable logistic regression model. We varied the size of the analysis samples (N = 500, 1,000, 5,000, 10,000, and 25,000) and the prevalence of missing data (5-95% in increments of 5%). RESULTS: In general, multiple imputation performed well across the range of scenarios. The exceptions were in scenarios when the sample size was 500 or 1,000 and the prevalence of missing data was at least 90%. In these scenarios, the estimated standard errors of the log-odds ratios were very large and did not accurately estimate the standard deviation of the sampling distribution of the log-odds ratio. Furthermore, in these settings, estimated confidence intervals tended to be conservative. In all other settings (i.e., sample sizes > 1,000 or when the prevalence of missing data was less than 90%), multiple imputation allowed for accurate estimation of a logistic regression model. CONCLUSIONS: Multiple imputation can be used in many scenarios with a very high prevalence of missing data.
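The pooled standard errors discussed above follow Rubin's rules, which combine within- and between-imputation variance; a minimal sketch:

```python
import numpy as np

def pool_rubin(estimates, variances):
    """Rubin's rules: pool m point estimates and their within-imputation variances."""
    estimates, variances = np.asarray(estimates), np.asarray(variances)
    m = len(estimates)
    qbar = estimates.mean()          # pooled point estimate
    w = variances.mean()             # within-imputation variance
    b = estimates.var(ddof=1)        # between-imputation variance
    t = w + (1 + 1 / m) * b          # total variance
    return qbar, np.sqrt(t)          # pooled estimate and its standard error
```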


Research Design , Humans , Logistic Models , Odds Ratio , Prevalence , Sample Size
13.
Acta Paediatr ; 111(1): 59-75, 2022 Jan.
Article En | MEDLINE | ID: mdl-34469604

AIM: We investigated the timing of survival differences and effects on morbidity for foetuses alive at maternal admission to hospital and delivered at 22 to 26 weeks' gestational age (GA). METHODS: Data from the EXPRESS (Sweden, 2004-07), EPICure-2 (England, 2006) and EPIPAGE-2 (France, 2011) cohorts were harmonised. Survival, stratified by GA, was analysed to 112 days using Kaplan-Meier analyses and Cox regression adjusted for population and pregnancy characteristics; neonatal morbidities, survival to discharge and to follow-up, and outcomes at 2-3 years of age were compared. RESULTS: Among 769 EXPRESS, 2310 EPICure-2 and 1359 EPIPAGE-2 foetuses, 112-day survival was, respectively, 28.2%, 10.8% and 0.5% at 22-23 weeks' GA; 68.5%, 40.0% and 23.6% at 24 weeks; 80.5%, 64.8% and 56.9% at 25 weeks; and 86.6%, 77.1% and 74.4% at 26 weeks. Deaths were most marked in EPIPAGE-2 before 1 day of age at 22-23 and 24 weeks' GA. At 25 weeks, survival varied before 28 days; differences at 26 weeks were minimal. Cox analyses were consistent with the Kaplan-Meier analyses. Variations in morbidities were not clearly associated with survival. CONCLUSION: Differences in survival and morbidity outcomes for extremely preterm births are evident despite adjustment for background characteristics. No clear relationship was identified between early mortality and later patterns of morbidity.
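A gestational-age-stratified analysis of this kind could be sketched with the lifelines package; the data frame, column names and covariates below are hypothetical, and covariates are assumed to be numeric indicators.

```python
from lifelines import KaplanMeierFitter, CoxPHFitter

# Kaplan-Meier survival to 112 days within one GA stratum (hypothetical columns)
sub = infants[infants["ga_weeks"] == 24]
km = KaplanMeierFitter()
km.fit(sub["days_survived"], event_observed=sub["died"], label="24 weeks")
print(km.survival_function_)

# Cox regression adjusted for pregnancy characteristics (0/1 indicator columns)
cox = CoxPHFitter()
cox.fit(infants[["days_survived", "died", "multiple_birth", "sga"]],
        duration_col="days_survived", event_col="died")
cox.print_summary()
```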


Infant, Premature, Diseases , Premature Birth , Female , France/epidemiology , Gestational Age , Humans , Infant, Newborn , Morbidity , Pregnancy , Premature Birth/epidemiology , Sweden/epidemiology
14.
Sci Rep ; 11(1): 16719, 2021 08 18.
Article En | MEDLINE | ID: mdl-34408167

The purpose of this study was to develop and test personalized predictions for functional recovery after Total Knee Arthroplasty (TKA) surgery, using a novel neighbors-based prediction approach. We used data from 397 patients with TKA to develop the prediction methodology and then tested the predictions in a temporally distinct sample of 202 patients. The Timed Up and Go (TUG) Test was used to assess physical function. Neighbors-based predictions were generated by estimating an index patient's prognosis from the observed recovery data of previous similar patients (a.k.a., the index patient's "matches"). Matches were determined by an adaptation of predictive mean matching. Matching characteristics included preoperative TUG time, age, sex and Body Mass Index. The optimal number of matches was determined to be m = 35, based on low bias (-0.005 standard deviations), accurate coverage (50% of the realized observations within the 50% prediction interval), and acceptable precision (the average width of the 50% prediction interval was 2.33 s). Predictions were well-calibrated in out-of-sample testing. These predictions have the potential to inform care decisions both prior to and following TKA surgery.
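A minimal sketch of the neighbors-based idea, assuming arrays of preoperative characteristics and postoperative TUG times for previous patients; the names are hypothetical and the matching is a simplified form of predictive mean matching.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def neighbor_prediction(X_train, y_train, x_new, m=35, level=0.50):
    """Prognosis for a new patient from the observed outcomes of the m previous
    patients whose predicted outcome is closest to the new patient's."""
    model = LinearRegression().fit(X_train, y_train)
    d = np.abs(model.predict(X_train) - model.predict(x_new.reshape(1, -1)))
    matches = y_train[np.argsort(d)[:m]]             # the m most similar patients
    lo, hi = np.quantile(matches, [(1 - level) / 2, (1 + level) / 2])
    return np.median(matches), (lo, hi)               # point estimate and 50% interval
```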


Arthroplasty, Replacement, Knee , Knee Joint , Models, Biological , Osteoarthritis, Knee , Recovery of Function , Aged , Female , Humans , Knee Joint/physiopathology , Knee Joint/surgery , Male , Middle Aged , Osteoarthritis, Knee/physiopathology , Osteoarthritis, Knee/surgery , Predictive Value of Tests
15.
BMC Med Res Methodol ; 21(1): 118, 2021 06 06.
Article En | MEDLINE | ID: mdl-34092226

BACKGROUND: Loss to follow-up is a major challenge for very preterm (VPT) cohorts; attrition is associated with social disadvantage and parents with impaired children may participate less in research. We investigated the impact of loss to follow-up on the estimated prevalence of neurodevelopmental impairment in a VPT cohort using different methodological approaches. METHODS: This study includes births < 32 weeks of gestational age (GA) from 4 regions in the UK and Portugal participating in a European birth cohort (N = 1737 survivors). Data on maternal characteristics, pregnancy complications, neonatal outcomes and neighborhood deprivation were collected at baseline. Neurodevelopment was assessed at 2 years of corrected age (CA) using standardized parent-report measures. We applied (1) multiple imputation (MI) and (2) inverse probability weighting (IPW) to estimate the impact of non-response on the prevalence of moderate to severe neurodevelopmental impairment and assessed violations of the missing at random (MAR) assumption using the delta method. RESULTS: 54.2% of children were followed up. Follow-up was less likely when mothers were younger, multiparous, foreign-born, did not breastfeed and came from deprived areas. The prevalence of neurodevelopmental impairment was 18.4% (95% confidence interval (CI): 15.9-21.1) and increased to 20.4% (95% CI: 17.3-23.4) and 20.0% (95% CI: 16.9-23.1) for MI and IPW models, respectively. Simulating strong violations of MAR (children with impairments being 50% less likely to be followed up) raised estimates to 23.6% (95% CI: 20.1-27.1). CONCLUSIONS: In a VPT cohort with high loss to follow-up, correcting for attrition yielded modestly increased estimates of neurodevelopmental impairment at 2 years CA; estimates were relatively robust to violations of the MAR assumption.
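A minimal sketch of the inverse probability weighting step, with hypothetical column names: model the probability of being followed up from baseline covariates, then weight responders by the inverse of that probability.

```python
import numpy as np
import statsmodels.api as sm

# baseline: one row per survivor, with 0/1 indicators (hypothetical columns)
X = sm.add_constant(baseline[["maternal_age", "multiparous", "foreign_born", "deprived_area"]])
followed = baseline["followed_up"]                # 1 = outcome observed at 2 years CA
p = sm.Logit(followed, X).fit(disp=0).predict(X)  # estimated response probability

w = 1.0 / p[followed == 1]                        # weights for responders only
impaired = baseline.loc[followed == 1, "impaired"]
prevalence = np.average(impaired, weights=w)      # attrition-corrected prevalence
```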


Infant, Extremely Premature , Child, Preschool , Cohort Studies , Female , Follow-Up Studies , Gestational Age , Humans , Infant, Newborn , Portugal/epidemiology , Pregnancy
16.
J Clin Monit Comput ; 35(2): 259-267, 2021 Apr.
Article En | MEDLINE | ID: mdl-32783094

Physiologic data from anesthesia monitors are automatically captured, yet erroneous data are stored in the process as well. While this does not interfere with clinical care, research can be affected. Researchers should find ways to remove artifacts. The aim of the present study was to compare different artifact annotation strategies, and to assess whether a machine learning algorithm is able to accept or reject individual data points. Non-cardiac procedures requiring invasive blood pressure monitoring were eligible. Two trained research assistants observed procedures live for artifacts. The same procedures were also retrospectively annotated for artifacts by a different person. We compared the different ways of artifact identification and modelled artifacts with three different learning algorithms (lasso-regularised logistic regression, neural network and support vector machine). In 88 surgical procedures including 5711 blood pressure data points, the live observed incidence of artifacts was 2.1% and the retrospective incidence was 2.2%. Comparing retrospective with live annotation revealed a sensitivity of 0.32 and specificity of 0.98. The performance of the learning algorithms we applied ranged from poor (kappa 0.053) to moderate (kappa 0.651). Manual identification of artifacts yielded different incidences in different situations, which were not comparable. Artifact detection in physiologic data collected during anesthesia could be automated, but the performance of the learning algorithms in the present study remained moderate. Future research should focus on optimising these algorithms and finding ways to apply them with minimal manual work. The present study underlines the importance of an explicit definition of artifacts in database research.
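A sketch of the modelling step with scikit-learn, assuming feature matrices and 0/1 artifact labels already exist; the variable names and hyperparameters are hypothetical.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import cohen_kappa_score

models = {
    "lasso_logit": LogisticRegression(penalty="l1", solver="liblinear", C=0.1),
    "svm": SVC(),
    "mlp": MLPClassifier(max_iter=1000),
}
for name, clf in models.items():
    clf.fit(X_train, artifact_train)                          # 1 = artifact, 0 = valid
    kappa = cohen_kappa_score(artifact_test, clf.predict(X_test))
    print(name, round(kappa, 3))                              # agreement beyond chance
```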


Anesthesia , Artifacts , Algorithms , Blood Pressure , Humans , Male , Retrospective Studies , Vital Signs
17.
Can J Cardiol ; 37(9): 1322-1331, 2021 09.
Article En | MEDLINE | ID: mdl-33276049

Missing data are a common occurrence in clinical research. Missing data occur when the values of the variables of interest are not measured or recorded for all subjects in the sample. Common approaches to addressing the presence of missing data include complete-case analyses, where subjects with missing data are excluded, and mean-value imputation, where missing values are replaced with the mean value of that variable in those subjects for whom it is not missing. However, in many settings, these approaches can lead to biased estimates of statistics (eg, of regression coefficients) and/or confidence intervals that are artificially narrow. Multiple imputation (MI) is a popular approach for addressing the presence of missing data. With MI, multiple plausible values of a given variable are imputed or filled in for each subject who has missing data for that variable. This results in the creation of multiple completed data sets. Identical statistical analyses are conducted in each of these complete data sets, and the results are pooled across them. We provide an introduction to MI and discuss issues in its implementation, including developing the imputation model, how many imputed data sets to create, and addressing derived variables. We illustrate the application of MI through an analysis of data on patients hospitalised with heart failure. We focus on developing a model to estimate the probability of 1-year mortality in the presence of missing data. Statistical software code for conducting MI in R, SAS, and Stata is provided.
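The paper provides code for R, SAS, and Stata; an analogous Python sketch with statsmodels, using a hypothetical data frame with missing values and placeholder column names, would look roughly like this:

```python
import statsmodels.api as sm
from statsmodels.imputation import mice

imp = mice.MICEData(df)                                     # df: analysis data with NaNs
fit = mice.MICE("death_1yr ~ age + sbp + sodium", sm.Logit, imp)
results = fit.fit(n_burnin=10, n_imputations=20)            # imputes, fits, and pools via Rubin's rules
print(results.summary())
```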


Clinical Trials as Topic , Data Interpretation, Statistical , Humans , Research Design
18.
BMC Musculoskelet Disord ; 21(1): 482, 2020 Jul 22.
Article En | MEDLINE | ID: mdl-32698900

BACKGROUND: Clinicians and patients lack an evidence-based framework by which to judge individual-level recovery following total knee arthroplasty (TKA) surgery, thus impeding personalized treatment approaches for this elective surgery. Our study aimed to develop and validate a reference chart for monitoring recovery of knee flexion following TKA surgery. METHODS: Retrospective analysis of data collected in routine rehabilitation practice for patients following TKA surgery. Reference charts were constructed using Generalized Additive Models for Location, Scale and Shape (GAMLSS). Various models were compared using the Schwarz Bayesian Criterion, mean squared error in 5-fold cross-validation, and centile coverage (i.e. the percent of observed data falling below specified centiles). The performance of the reference chart was then validated against a test set of patients with later surgical dates, by examining the centile coverage and average bias (i.e. difference between observed and predicted values) in the test dataset. RESULTS: A total of 1173 observations from 327 patients were used to develop a reference chart for knee flexion over the first 120 days following TKA. The best fitting model utilized a non-linear time trend, with smoothing splines for the median and variance parameters. Additionally, optimization of the number of knots in the smoothing splines and power transformation of time improved model fit. The reference chart performed adequately in a test set of 171 patients (377 observations), with accurate centile coverage and minimal average bias (< 3 degrees). CONCLUSION: A reference chart developed with clinically collected data offers a new approach to monitoring knee flexion following TKA.
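Centile coverage, as used above to compare and validate models, can be checked in a few lines; y_obs and the predicted centile curves are hypothetical inputs aligned observation by observation.

```python
import numpy as np

def centile_coverage(y_obs, predicted_centiles, probs=(0.10, 0.50, 0.90)):
    """Share of observations falling below each predicted centile curve;
    values close to the nominal probabilities indicate a well-fitting chart."""
    y_obs = np.asarray(y_obs)
    return {p: float(np.mean(y_obs < np.asarray(predicted_centiles[p])))
            for p in probs}
```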


Arthroplasty, Replacement, Knee , Osteoarthritis, Knee , Arthroplasty, Replacement, Knee/adverse effects , Bayes Theorem , Humans , Knee Joint/surgery , Osteoarthritis, Knee/surgery , Postoperative Period , Range of Motion, Articular , Retrospective Studies
19.
Anesthesiology ; 132(4): 723-737, 2020 04.
Article En | MEDLINE | ID: mdl-32022770

BACKGROUND: Physiologic data that are automatically collected during anesthesia are widely used for medical record keeping and clinical research. These data contain artifacts, which are not relevant in clinical care, but may influence research results. The aim of this study was to explore the effect of different methods of filtering and processing artifacts in anesthesiology data on study findings, in order to demonstrate the importance of proper artifact filtering. METHODS: The authors performed a systematic literature search to identify artifact filtering methods. Subsequently, these methods were applied to the data of anesthesia procedures with invasive blood pressure monitoring. Different hypotension measures were calculated (i.e., presence, duration, maximum deviation below threshold, and area under threshold) across different definitions (i.e., thresholds for mean arterial pressure of 50, 60, 65, 70 mmHg). These were then used to estimate the association with postoperative myocardial injury. RESULTS: After screening 3,585 papers, the authors included 38 papers that reported artifact filtering methods. The authors applied eight of these methods to the data of 2,988 anesthesia procedures. The occurrence of hypotension (defined with a threshold of 50 mmHg) varied from 24% with a median filter of seven measurements to 55% without an artifact filtering method, and between 76 and 90% with a threshold of 65 mmHg. Standardized odds ratios for presence of hypotension ranged from 1.16 (95% CI, 1.07 to 1.26) to 1.24 (1.14 to 1.34) when hypotension was defined with a threshold of 50 mmHg. Similar variations in standardized odds ratios were found when applying methods to other hypotension measures and definitions. CONCLUSIONS: The method of artifact filtering can have substantial effects on estimates of hypotension prevalence. The effect on the association between intraoperative hypotension and postoperative myocardial injury was relatively small. Nevertheless, the authors recommend that researchers carefully consider artifact handling and report the methodology used.
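One of the simpler filtering approaches mentioned above, a median filter over consecutive measurements combined with the four hypotension measures, can be sketched as follows; the sampling interval, kernel size and threshold are illustrative assumptions.

```python
import numpy as np
from scipy.signal import medfilt

def hypotension_measures(map_mmhg, threshold=65, interval_s=60, kernel=7):
    """Median-filter a mean arterial pressure series and summarise hypotension."""
    m = medfilt(np.asarray(map_mmhg, dtype=float), kernel_size=kernel)
    deficit = np.maximum(threshold - m, 0)          # mmHg below threshold per sample
    below = deficit > 0
    return {
        "present": bool(below.any()),
        "duration_min": float(below.sum() * interval_s / 60),
        "max_deviation_mmhg": float(deficit.max()),
        "area_under_threshold_mmhg_min": float(deficit.sum() * interval_s / 60),
    }
```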


Artifacts , Hypotension/diagnosis , Intraoperative Complications/diagnosis , Monitoring, Intraoperative/methods , Humans , Hypotension/etiology , Hypotension/physiopathology , Intraoperative Complications/etiology , Intraoperative Complications/physiopathology , Monitoring, Intraoperative/standards , Prevalence , Treatment Outcome
20.
Acta Anaesthesiol Scand ; 64(4): 472-480, 2020 04.
Article En | MEDLINE | ID: mdl-31833065

BACKGROUND: Intraoperative blood pressure has been suggested as a key factor for safe pediatric anesthesia. However, there is little insight into the factors that discriminate between children with low and normal pre-incision blood pressure. Our aim was to explore whether children who have a low blood pressure during anesthesia are different from those with normal blood pressure. The focus of the present study was on the pre-incision period. METHODS: This retrospective study included pediatric patients undergoing anesthesia for non-cardiac surgery at a tertiary pediatric university hospital between 2012 and 2016. We analyzed the association between pre-incision blood pressure and patient and anesthesia characteristics, comparing low with normal pre-incision blood pressure. This association was further explored with a multivariable linear regression. RESULTS: In total, 20,962 anesthetic cases were included. Pre-incision blood pressure was associated with age (beta -0.04 SD per year), gender (female -0.11), previous surgery (-0.15), preoperative blood pressure (+0.01 per mm Hg), epilepsy (0.12), bronchial hyperactivity (-0.18), emergency surgery (0.10), loco-regional technique (-0.48), artificial airway device (supraglottic airway device instead of tracheal tube, 0.07), and sevoflurane concentration (0.03 per sevoflurane %). CONCLUSIONS: Children with low pre-incision blood pressure do not differ on clinically relevant factors from children with normal blood pressure. Although the present explorative study shows that pre-incision blood pressure is partly dependent on patient characteristics and partly on anesthetic technique, other unmeasured variables might play a more important role.


Anesthesia/methods , Anesthetics, Inhalation/administration & dosage , Blood Pressure/physiology , Hypotension/physiopathology , Preoperative Care , Sevoflurane/administration & dosage , Adolescent , Body Weight , Child , Child, Preschool , Cohort Studies , Female , Humans , Infant , Infant, Newborn , Male , Retrospective Studies , Sex Factors
...