Results 1 - 20 of 30
1.
Stat Med ; 40(25): 5642-5656, 2021 11 10.
Article in English | MEDLINE | ID: mdl-34291499

ABSTRACT

In a quantitative synthesis of studies via meta-analysis, it is possible that some studies provide a markedly different relative treatment effect or have a large impact on the summary estimate and/or heterogeneity. Extreme study effects (outliers) can be detected visually with forest/funnel plots and by using statistical outlier-detection methods. The forward search (FS) algorithm is a common outlier diagnostic tool that has recently been extended to meta-analysis. FS starts by fitting the assumed model to a subset of the data, which is gradually incremented by adding the remaining studies according to their closeness to the postulated data-generating model. At each step of the algorithm, parameter estimates, measures of fit (residuals, likelihood contributions), and test statistics are monitored, and sharp changes in them are used as an indication of outliers. In this article, we extend the FS algorithm to network meta-analysis (NMA). In NMA, visualization of outliers is more challenging due to the multivariate nature of the data and the fact that studies contribute both directly and indirectly to the network estimates. Outliers are expected to contribute not only to heterogeneity but also to inconsistency, compromising the NMA results. The FS algorithm was applied to real and artificial networks of interventions that include outliers. We developed an R package (NMAoutlier) to allow replication and dissemination of the proposed method. We conclude that the FS algorithm is a visual diagnostic tool that helps to identify studies that are a potential source of heterogeneity and inconsistency.
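The forward-search idea can be illustrated outside the NMA setting. Below is a minimal sketch in R for a simple pairwise random-effects meta-analysis, assuming hypothetical inputs yi (study effect estimates) and vi (their variances); the helper names dl_tau2 and forward_search are illustrative, and the full NMA version is implemented in the NMAoutlier package.

# DerSimonian-Laird heterogeneity estimate for a subset of studies
dl_tau2 <- function(yi, vi) {
  wi <- 1 / vi
  mu <- sum(wi * yi) / sum(wi)
  Q  <- sum(wi * (yi - mu)^2)
  max(0, (Q - (length(yi) - 1)) / (sum(wi) - sum(wi^2) / sum(wi)))
}

forward_search <- function(yi, vi, n_init = 3) {
  basic   <- order(abs(yi - median(yi)))[seq_len(n_init)]   # initial 'outlier-free' subset
  monitor <- data.frame(step = integer(), added = integer(),
                        mu = numeric(), tau2 = numeric())
  while (length(basic) < length(yi)) {
    tau2 <- dl_tau2(yi[basic], vi[basic])
    wi   <- 1 / (vi + tau2)
    mu   <- sum(wi[basic] * yi[basic]) / sum(wi[basic])
    out  <- setdiff(seq_along(yi), basic)
    std  <- abs(yi[out] - mu) / sqrt(vi[out] + tau2)        # closeness to the fitted model
    nxt  <- out[which.min(std)]                             # add the closest remaining study
    basic   <- c(basic, nxt)
    monitor <- rbind(monitor, data.frame(step = length(basic), added = nxt,
                                         mu = mu, tau2 = tau2))
  }
  monitor   # sharp jumps in mu or tau2 flag the studies entered at those steps
}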


Subject(s)
Algorithms , Research Design , Humans , Network Meta-Analysis
2.
Surg Endosc ; 35(8): 4061-4068, 2021 08.
Article in English | MEDLINE | ID: mdl-34159464

ABSTRACT

OBJECTIVE: To inform the development of an AGREE II extension specifically tailored for surgical guidelines. AGREE II was designed to inform the development, reporting, and appraisal of clinical practice guidelines. Previous research has suggested substantial room for improvement in the quality of surgical guidelines. METHODS: A previously published MEDLINE search for clinical practice guidelines published by surgical scientific organizations with an international scope between 2008 and 2017 yielded a total of 67 guidelines. The quality of these guidelines was assessed using AGREE II. We performed a series of statistical analyses (reliability, correlation and factor analysis, Item Response Theory) with the objective of calibrating AGREE II for use specifically in surgical guidelines. RESULTS: Reliability, correlation and factor analysis, and Item Response Theory produced similar results and suggested that a structure of 5 domains, instead of the 6 domains of the original instrument, might be more appropriate. Furthermore, exclusion of items and their re-arrangement to other domains were found to increase the reliability of AGREE II when applied to surgical guidelines. CONCLUSIONS: The findings of this study suggest that statistical calibration of AGREE II might improve the development, reporting, and appraisal of surgical guidelines.


Subject(s)
Research Design , Calibration , Factor Analysis, Statistical , Humans , Reproducibility of Results
3.
Biostatistics ; 15(4): 677-89, 2014 Oct.
Article in English | MEDLINE | ID: mdl-24812421

ABSTRACT

Models with random effects/latent variables are widely used for capturing unobserved heterogeneity in multilevel/hierarchical data and for accounting for associations in multivariate data. The estimation of these models becomes cumbersome as the number of latent variables increases, due to the high-dimensional integrations involved. Composite likelihood is a pseudo-likelihood that combines lower-order marginal or conditional densities, such as univariate and/or bivariate densities; it has been proposed in the literature as an alternative to full maximum likelihood estimation. We propose a weighted pairwise likelihood estimator based on estimates obtained from separate maximizations of marginal pairwise likelihoods. The derived weights minimize the total variance of the estimated parameters. The proposed weighted estimator is found to be more efficient than the one that assumes all weights to be equal. The methodology is applied to a multivariate growth model for binary outcomes in the analysis of four indicators of schistosomiasis before and after drug administration.
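A minimal sketch of the combination step in R, assuming per-pair estimates theta_hat of a common parameter and an estimated covariance matrix Sigma of those estimates (both hypothetical inputs); the weights minimising the variance of the weighted average, subject to summing to one, are Sigma^{-1} 1 / (1' Sigma^{-1} 1).

combine_pairwise <- function(theta_hat, Sigma) {
  ones <- rep(1, length(theta_hat))
  w    <- solve(Sigma, ones)
  w    <- w / sum(w)                                   # variance-minimising weights
  list(estimate = sum(w * theta_hat),
       variance = drop(t(w) %*% Sigma %*% w),
       weights  = w)
}
# Equal weights (the comparison made in the abstract) correspond to w = 1 / length(theta_hat).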


Subject(s)
Data Interpretation, Statistical , Likelihood Functions , Schistosomiasis/epidemiology , Computer Simulation , Humans
4.
PLoS Comput Biol ; 9(12): e1003402, 2013.
Article in English | MEDLINE | ID: mdl-24367250

ABSTRACT

Regular treatment with praziquantel (PZQ) is the strategy for human schistosomiasis control, aiming to prevent morbidity in later life. With the recent resolution on schistosomiasis elimination by the 65th World Health Assembly, appropriate diagnostic tools to inform interventions are key to their success. We present a discrete Markov chains modelling framework that deals with the longitudinal study design and the measurement error in the diagnostic methods under study. A detailed longitudinal dataset from Uganda, in which one or two doses of PZQ treatment were provided, was analyzed through Latent Markov Models (LMMs). The aim was to evaluate the diagnostic accuracy of Circulating Cathodic Antigen (CCA) and of double Kato-Katz (KK) faecal slides over three consecutive days for Schistosoma mansoni infection, simultaneously by age group, at baseline and at two follow-up times post treatment. Diagnostic test sensitivities and specificities, the true underlying infection prevalence over time, and the probabilities of transitions between infected and uninfected states are provided. The estimated transition probability matrices provide parsimonious yet important insights into the re-infection and cure rates in the two age groups. We show that the CCA diagnostic performance remained constant after PZQ treatment and that this test was overall more sensitive but less specific than single-day double KK for the diagnosis of S. mansoni infection. The probability of clearing infection from baseline to 9 weeks was higher among those who received two PZQ doses compared with one PZQ dose for both age groups, with much higher re-infection rates among children compared with adolescents and adults. We recommend LMMs as a useful methodology for monitoring and evaluation and treatment decision research, as well as CCA for mapping surveys of S. mansoni infection, although additional diagnostic tools should be incorporated in schistosomiasis elimination programs.
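A minimal sketch in R of the kind of quantities a two-state latent Markov model delivers: a true-prevalence trajectory implied by a transition matrix, and the positive-test probability implied by a diagnostic's sensitivity and specificity. All numerical values are illustrative, not estimates from the Uganda data.

P   <- matrix(c(0.85, 0.15,          # P[i, j] = Pr(state j at next wave | state i now)
                0.40, 0.60),         # states: 1 = uninfected, 2 = infected
              nrow = 2, byrow = TRUE)
pi0 <- c(0.35, 0.65)                 # baseline distribution over (uninfected, infected)

mat_pow <- function(M, t) Reduce(`%*%`, replicate(t, M, simplify = FALSE))
pi1 <- drop(pi0 %*% mat_pow(P, 1))   # latent distribution at the first follow-up

sens <- 0.90; spec <- 0.75           # illustrative operating characteristics of a test
p_test_pos <- pi1[2] * sens + pi1[1] * (1 - spec)   # Pr(positive test) at follow-up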


Subject(s)
Anthelmintics/therapeutic use , Antigens, Protozoan/blood , Markov Chains , Praziquantel/therapeutic use , Schistosomiasis/diagnosis , Schistosomiasis/drug therapy , Humans , Sensitivity and Specificity , Uganda
5.
Psychometrika ; 89(1): 267-295, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38383880

ABSTRACT

Ensuring fairness in instruments like survey questionnaires or educational tests is crucial. One way to address this is by a Differential Item Functioning (DIF) analysis, which examines whether different subgroups respond differently to a particular item, controlling for their overall latent construct level. DIF analysis is typically conducted to assess measurement invariance at the item level. Traditional DIF analysis methods require knowing the comparison groups (reference and focal groups) and anchor items (a subset of DIF-free items). Such prior knowledge may not always be available, and psychometric methods have been proposed for DIF analysis when one piece of information is unknown. More specifically, when the comparison groups are unknown while anchor items are known, latent DIF analysis methods have been proposed that estimate the unknown groups by latent classes. When anchor items are unknown while comparison groups are known, methods have also been proposed, typically under a sparsity assumption that the number of DIF items is not too large. However, DIF analysis when both pieces of information are unknown has not received much attention. This paper proposes a general statistical framework for this setting. In the proposed framework, we model the unknown groups by latent classes and introduce item-specific DIF parameters to capture the DIF effects. Assuming the number of DIF items is relatively small, an L1-regularised estimator is proposed to simultaneously identify the latent classes and the DIF items. A computationally efficient Expectation-Maximisation (EM) algorithm is developed to solve the non-smooth optimisation problem for the regularised estimator. The performance of the proposed method is evaluated by simulation studies and an application to item response data from a real-world educational test.
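A minimal sketch of the device that lets the L1 penalty select DIF items: a soft-thresholding (proximal) update of the item-specific DIF parameters inside the M-step. The quantities delta, grad, step and lambda are hypothetical; the paper's full EM algorithm also updates the latent-class memberships and the remaining item parameters.

soft_threshold <- function(x, lambda) sign(x) * pmax(abs(x) - lambda, 0)

# One proximal-gradient-style update of the DIF parameters 'delta'
update_dif <- function(delta, grad, step, lambda) {
  soft_threshold(delta - step * grad, step * lambda)   # exact zeros flag items as DIF-free
}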


Subject(s)
Psychometrics , Psychometrics/methods , Humans , Models, Statistical , Surveys and Questionnaires/standards , Educational Measurement/methods , Computer Simulation
6.
Article in English | MEDLINE | ID: mdl-38676427

ABSTRACT

Pairwise likelihood is a limited-information method widely used to estimate latent variable models, including factor analysis of categorical data. It can often avoid evaluating high-dimensional integrals and, thus, is computationally more efficient than relying on the full likelihood. Despite its computational advantage, the pairwise likelihood approach can still be demanding for large-scale problems that involve many observed variables. We tackle this challenge by employing an approximation of the pairwise likelihood estimator, which is derived from an optimization procedure relying on stochastic gradients. The stochastic gradients are constructed by subsampling the pairwise log-likelihood contributions, and the subsampling scheme controls the per-iteration computational complexity. The stochastic estimator is shown to be asymptotically equivalent to the pairwise likelihood one. However, finite-sample performance can be improved by accounting for both the sampling variability of the data and the uncertainty introduced by the subsampling scheme. We demonstrate the performance of the proposed method using simulation studies and two real data applications.
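A minimal sketch of the subsampling idea in R: because the pairwise log-likelihood is a sum over variable pairs, an unbiased stochastic gradient can be built from a random subset of pairs. The function pair_loglik_grad(theta, data, i, j), returning the gradient of one pair's contribution, is a hypothetical placeholder.

stochastic_pl_gradient <- function(theta, data, pair_loglik_grad, n_pairs) {
  p     <- ncol(data)
  pairs <- t(combn(p, 2))                                   # all variable pairs
  sel   <- pairs[sample(nrow(pairs), n_pairs), , drop = FALSE]
  g     <- Reduce(`+`, lapply(seq_len(nrow(sel)), function(k)
             pair_loglik_grad(theta, data, sel[k, 1], sel[k, 2])))
  g * nrow(pairs) / n_pairs                                 # rescale so the gradient is unbiased
}
# A plain stochastic-gradient loop would then iterate
# theta <- theta + step_size * stochastic_pl_gradient(theta, data, pair_loglik_grad, n_pairs).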

7.
Am J Epidemiol ; 177(9): 913-22, 2013 May 01.
Article in English | MEDLINE | ID: mdl-23548755

ABSTRACT

In disease control or elimination programs, diagnostics are essential for assessing the impact of interventions, refining treatment strategies, and minimizing the waste of scarce resources. Although high-performance tests are desirable, increased accuracy is frequently accompanied by a requirement for more elaborate infrastructure, which is often not feasible in the developing world. These challenges are pertinent to mapping, impact monitoring, and surveillance in trachoma elimination programs. To help inform rational design of diagnostics for trachoma elimination, we outline a nonparametric multilevel latent Markov modeling approach and apply it to 2 longitudinal cohort studies of trachoma-endemic communities in Tanzania (2000-2002) and The Gambia (2001-2002) to provide simultaneous inferences about the true population prevalence of Chlamydia trachomatis infection and disease and the sensitivity, specificity, and predictive values of 3 diagnostic tests for C. trachomatis infection. Estimates were obtained by using data collected before and after mass azithromycin administration. Such estimates are particularly important for trachoma because of the absence of a true "gold standard" diagnostic test for C. trachomatis. Estimated transition probabilities provide useful insights into key epidemiologic questions about the persistence of disease and the clearance of infection as well as the required frequency of surveillance in the post-elimination setting.


Subject(s)
Azithromycin/administration & dosage , Chlamydia trachomatis/isolation & purification , Disease Eradication/methods , Trachoma/prevention & control , Anti-Bacterial Agents/administration & dosage , Endemic Diseases/prevention & control , Gambia/epidemiology , Humans , Longitudinal Studies , Markov Chains , Models, Biological , Population Surveillance/methods , Prevalence , Statistics, Nonparametric , Tanzania/epidemiology , Trachoma/diagnosis , Trachoma/epidemiology
8.
Qual Life Res ; 22(8): 1973-86, 2013 Oct.
Article in English | MEDLINE | ID: mdl-23324984

ABSTRACT

PURPOSE: To investigate the dimensionality, construct validity in the form of factorial, convergent, discriminant, and known-groups validity, as well as the scale reliability of the fifteen-dimensional (15D) instrument. METHODS: 15D data were collected from a large Greek general population sample (N = 3,268), which was randomly split into two halves. Data from the first sample were used to examine the distributional properties of the 15 items, as well as the factor structure, adopting an exploratory approach. Data from the second sample were used to perform a confirmatory factor analysis of the 15 items, examine the goodness of fit of several measurement models, and evaluate the reliability and known-groups validity of the resulting subscales, along with the convergent and discriminant validity of the constructs. RESULTS: Exploratory factor analysis, using a distribution-free method, revealed a three-factor solution of the 15D (functional ability, physiological needs satisfaction, emotional well-being). Confirmatory factor analysis provided support for the three-factor solution but suggested that certain modifications should be made to this solution, involving freeing certain elements of the matrix of factor loadings and of the covariance matrix of measurement errors in the observed variables. Evidence of convergent validity was provided for all three factors, but discriminant validity was supported only for the emotional well-being construct. Scale reliability and known-groups validity of the resulting three subscales were satisfactory. CONCLUSIONS: Our results confirm the multidimensional structure of the 15D and the existence of three latent factors that cover important aspects of the health-related quality of life domain (physical and emotional functioning). The implications of our results for the validity of the 15D and suggestions for future research are outlined.
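A minimal sketch of the split-half exploratory step in R, assuming a hypothetical data frame 'items' holding the 15 item responses. Note that factanal() uses normal-theory maximum likelihood rather than the distribution-free estimator used in the study, so this only approximates the workflow.

set.seed(1)
idx   <- sample(nrow(items), floor(nrow(items) / 2))
half1 <- items[idx, ]                    # exploratory half
half2 <- items[-idx, ]                   # confirmatory half

efa3 <- factanal(half1, factors = 3, rotation = "promax")   # oblique three-factor EFA
print(efa3$loadings, cutoff = 0.30)                          # inspect the factor structure
# half2 would then be used for the confirmatory factor analysis (e.g. with lavaan::cfa).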


Subject(s)
Health Care Surveys , Health Status , Psychometrics/instrumentation , Quality of Life , Surveys and Questionnaires , Adolescent , Adult , Aged , Aged, 80 and over , Factor Analysis, Statistical , Female , Greece , Humans , Male , Mental Health , Middle Aged , Personal Satisfaction , Population Surveillance , Psychometrics/statistics & numerical data , Reproducibility of Results , Sickness Impact Profile , Socioeconomic Factors , Young Adult
9.
Br J Math Stat Psychol ; 76(3): 559-584, 2023 11.
Article in English | MEDLINE | ID: mdl-37401608

ABSTRACT

The paper proposes a novel model assessment paradigm aiming to address shortcomings of posterior predictive p-values, which provide the default metric of fit for Bayesian structural equation modelling (BSEM). The model framework presented in the paper focuses on the approximate zero approach (Psychological Methods, 17, 2012, 313), which involves formulating certain parameters (such as factor loadings) to be approximately zero through the use of informative priors, instead of explicitly setting them to zero. The introduced model assessment procedure monitors the out-of-sample predictive performance of the fitted model, and, together with a list of guidelines we provide, one can investigate whether the hypothesised model is supported by the data. We incorporate scoring rules and cross-validation to supplement existing model assessment metrics for BSEM. The proposed tools can be applied to models for both continuous and binary data. The modelling of categorical and non-normally distributed continuous data is facilitated by the introduction of an item-individual random effect. We study the performance of the proposed methodology via simulation experiments as well as real data on the 'Big-5' personality scale and the Fagerstrom test for nicotine dependence.
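A minimal sketch of one ingredient of the proposed assessment: the logarithmic scoring rule computed on held-out binary responses, where y are the held-out observations and p_pred are hypothetical posterior predictive probabilities for them.

log_score <- function(y, p_pred) -mean(y * log(p_pred) + (1 - y) * log(1 - p_pred))
# Lower values indicate better out-of-sample predictive performance; in the paper such
# scores are obtained via cross-validation and compared across candidate BSEM models.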


Subject(s)
Models, Theoretical , Research Design , Bayes Theorem , Computer Simulation , Latent Class Analysis
10.
Psychometrika ; 88(2): 527-553, 2023 06.
Article in English | MEDLINE | ID: mdl-37002429

ABSTRACT

Researchers have widely used exploratory factor analysis (EFA) to learn the latent structure underlying multivariate data. Rotation and regularised estimation are two classes of methods in EFA that are often used to find interpretable loading matrices. In this paper, we propose a new family of oblique rotations based on component-wise Lp loss functions (0 < p ≤ 1) that is closely related to an Lp-regularised estimator. We develop model selection and post-selection inference procedures based on the proposed rotation method. When the true loading matrix is sparse, the proposed method tends to outperform traditional rotation and regularised estimation methods in terms of statistical accuracy and computational cost. Since the proposed loss functions are nonsmooth, we develop an iteratively reweighted gradient projection algorithm for solving the optimisation problem. We also develop theoretical results that establish the statistical consistency of the estimation, model selection, and post-selection inference. We evaluate the proposed method and compare it with regularised estimation and traditional rotation methods via simulation studies. We further illustrate it using an application to the Big Five personality assessment.
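A minimal sketch in R of the rotation criterion only: a component-wise loss, here |x|^p, applied to the obliquely rotated loading matrix A (T')^{-1}. The rotation matrix Tmat is assumed to have unit-length columns; the paper's iteratively reweighted gradient projection algorithm and inference procedures are not reproduced.

lp_criterion <- function(A, Tmat, p = 0.5) {
  L <- A %*% solve(t(Tmat))     # obliquely rotated (pattern) loadings
  sum(abs(L)^p)                 # component-wise Lp loss to be minimised over Tmat
}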


Subject(s)
Algorithms , Psychometrics , Computer Simulation
11.
J Clin Epidemiol ; 154: 188-196, 2023 02.
Article in English | MEDLINE | ID: mdl-36581305

ABSTRACT

OBJECTIVES: Ranking metrics in network meta-analysis (NMA) are computed separately for each outcome. Our aims are to 1) present graphical ways to group competing interventions considering multiple outcomes and 2) use conjoint analysis to place weights on the various outcomes based on the stakeholders' preferences. STUDY DESIGN AND SETTING: We used multidimensional scaling (MDS) and hierarchical tree clustering to visualize the extent of similarity of interventions in terms of the relative effects they produce through a random-effects NMA. We reanalyzed a published network of 212 psychosis trials taking three outcomes into account: reduction in symptoms of schizophrenia, all-cause treatment discontinuation, and weight gain. RESULTS: Conjoint analysis provides a mathematical method to transform judgements into weights that can subsequently be used to visually represent interventions on a two-dimensional plane or through a dendrogram. These plots provide insightful information about the clustering of interventions. CONCLUSION: Grouping interventions can help decision makers not only to identify the optimal ones in terms of benefit-risk balance but also to choose one from the best cluster on other grounds, such as cost or ease of implementation. Placing weights on outcomes allows patient profiles or preferences to be considered.
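A minimal sketch of the visualisation step in R, assuming a hypothetical interventions-by-outcomes matrix 'effects' of NMA relative effects (with intervention names as row names) and illustrative conjoint weights 'w'.

w <- c(efficacy = 0.5, discontinuation = 0.3, weight_gain = 0.2)   # illustrative weights
d <- dist(sweep(effects, 2, sqrt(w), `*`))     # weighted Euclidean distances between interventions

mds <- cmdscale(d, k = 2)                      # two-dimensional MDS plane
plot(mds, type = "n"); text(mds, labels = rownames(effects))

hc <- hclust(d, method = "average")            # hierarchical clustering
plot(hc)                                       # dendrogram of intervention clusters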


Subject(s)
Psychotic Disorders , Humans , Network Meta-Analysis
12.
Psychometrika ; 77(3): 425-41, 2012 Jul.
Article in English | MEDLINE | ID: mdl-27519774

ABSTRACT

The paper proposes a composite likelihood estimation approach that uses bivariate instead of multivariate marginal probabilities for ordinal longitudinal responses using a latent variable model. The model includes time-dependent latent variables and item-specific random effects to account for the interdependencies of the multivariate ordinal items. Time-dependent latent variables are linked with an autoregressive model. Simulation results show that the composite likelihood estimators have small bias and mean squared error and, as such, are feasible alternatives to full maximum likelihood. Model selection criteria developed for composite likelihood estimation are used in the applications. Furthermore, lower-order residuals are used as measures of fit for the selected models.

13.
Appl Psychol Meas ; 46(3): 167-184, 2022 May.
Article in English | MEDLINE | ID: mdl-35528272

ABSTRACT

Common methods for determining the number of latent dimensions underlying an item set include eigenvalue analysis and examination of fit statistics for factor analysis models with varying number of factors. Given a set of dichotomous items, the authors demonstrate that these empirical assessments of dimensionality often incorrectly estimate the number of dimensions when there is a preponderance of individuals in the sample with all-zeros as their responses, for example, not endorsing any symptoms on a health battery. Simulated data experiments are conducted to demonstrate when each of several common diagnostics of dimensionality can be expected to under- or over-estimate the true dimensionality of the underlying latent variable. An example is shown from psychiatry assessing the dimensionality of a social anxiety disorder battery where 1, 2, 3, or more factors are identified, depending on the method of dimensionality assessment. An all-zero inflated exploratory factor analysis model (AZ-EFA) is introduced for assessing the dimensionality of the underlying subgroup corresponding to those possessing the measurable trait. The AZ-EFA approach is demonstrated using simulation experiments and an example measuring social anxiety disorder from a large nationally representative survey. Implications of the findings are discussed, in particular, regarding the potential for different findings in community versus patient populations.
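A minimal sketch in R of the phenomenon being studied: simulated unidimensional binary data, with and without a large block of all-zero response patterns, whose leading eigenvalues can be compared as a simple empirical dimensionality diagnostic. All settings are illustrative only.

set.seed(2)
n <- 1000; p <- 10
theta <- rnorm(n)                                          # single latent trait
X <- sapply(1:p, function(j) as.integer(theta + rnorm(n) > 0.5))

eigen(cor(X))$values[1:3]                                  # eigenvalues without inflation

X_inflated <- rbind(X, matrix(0, nrow = 1000, ncol = p))   # add an all-zero subgroup
eigen(cor(X_inflated))$values[1:3]                         # the eigenvalue pattern changes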

14.
Educ Psychol Meas ; 82(2): 254-280, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35185159

ABSTRACT

This article studies the Type I error, false positive rates, and power of four versions of the Lagrange multiplier test to detect measurement noninvariance in item response theory (IRT) models for binary data under model misspecification. The tests considered are the Lagrange multiplier test computed with the Hessian and cross-product approaches, the generalized Lagrange multiplier test, and the generalized jackknife score test. The two model misspecifications are local dependence among items and a nonnormal distribution of the latent variable. The power of the tests is computed in two ways: empirically, through Monte Carlo simulation methods, and asymptotically, using the asymptotic distribution of each test under the alternative hypothesis. The performance of these tests is evaluated by means of a simulation study. The results highlight that, under mild model misspecification, all tests perform well, while, under strong model misspecification, the tests' performance deteriorates, especially the false positive rates under local dependence and the power for small sample sizes under misspecification of the latent variable distribution. In general, the Lagrange multiplier test computed with the Hessian approach and the generalized Lagrange multiplier test perform better in terms of false positive rates, while the Lagrange multiplier test computed with the cross-product approach has the highest power for small sample sizes. The asymptotic power turns out to be a good alternative to the classic empirical power because it is less time-consuming to compute. The Lagrange tests studied here have also been applied to a real data set.
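A minimal sketch of the generic score (Lagrange multiplier) statistic that the four variants approximate in different ways; the variants differ mainly in how the covariance of the score ('info') is estimated, for example from the Hessian or from cross-products of case-wise scores. The inputs score, info and df are hypothetical quantities from a fitted restricted IRT model.

lm_test <- function(score, info, df) {
  stat <- drop(t(score) %*% solve(info, score))
  c(statistic = stat,
    p_value   = pchisq(stat, df = df, lower.tail = FALSE))
}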

15.
Br J Math Stat Psychol ; 75(1): 23-45, 2022 02.
Article in English | MEDLINE | ID: mdl-33856692

ABSTRACT

Methods for the treatment of item non-response in attitudinal scales and in large-scale assessments under the pairwise likelihood (PL) estimation framework and under a missing-at-random (MAR) mechanism are proposed. Under a full-information likelihood estimation framework and MAR, ignoring the missing-data mechanism does not lead to biased estimates. However, this is not the case for pseudo-likelihood approaches such as PL. We develop and study the performance of three strategies for incorporating missing values into confirmatory factor analysis under the PL framework: the complete-pairs (CP), the available-cases (AC), and the doubly robust (DR) approaches. The CP and AC approaches require only a model for the observed data, and their standard errors are easy to compute. Doubly robust versions of PL estimation require a predictive model for the missing responses given the observed ones and are computationally more demanding than the AC and CP. A simulation study is used to compare the proposed methods. The proposed methods are employed to analyze the UK data on numeracy and literacy collected as part of the OECD Survey of Adult Skills.


Subject(s)
Models, Statistical , Computer Simulation , Data Interpretation, Statistical , Factor Analysis, Statistical , Likelihood Functions
16.
J Appl Stat ; 49(13): 3361-3376, 2022.
Article in English | MEDLINE | ID: mdl-36213777

ABSTRACT

The paper proposes a joint mixture model for non-ignorable drop-out in longitudinal cohort studies of mental health outcomes. The model combines a (non-)linear growth curve model for the time-dependent outcomes and a discrete-time survival model for the drop-out, with random effects shared by the two sub-models. The mixture part of the model captures population heterogeneity by allowing for latent subgroups of the shared effects that may lead to different patterns of growth and drop-out tendency. A simulation study shows that the joint mixture model provides greater precision in estimating the average slope and the covariance matrix of the random effects. We illustrate its benefits with data from a longitudinal cohort study that characterizes depression symptoms over time yet is hindered by non-trivial participant drop-out.

17.
Psychometrika ; 86(1): 65-95, 2021 03.
Article in English | MEDLINE | ID: mdl-33768403

ABSTRACT

Penalized factor analysis is an efficient technique that produces a factor loading matrix with many zero elements thanks to the introduction of sparsity-inducing penalties within the estimation process. However, sparse solutions and stable model selection procedures are only possible if the employed penalty is non-differentiable, which poses certain theoretical and computational challenges. This article proposes a general penalized likelihood-based estimation approach for single- and multiple-group factor analysis models. The framework builds upon differentiable approximations of non-differentiable penalties, a theoretically founded definition of degrees of freedom, and an algorithm with integrated automatic multiple tuning parameter selection that exploits second-order analytical derivative information. The proposed approach is evaluated in two simulation studies and illustrated using a real data set. All the necessary routines are integrated into the R package penfa.
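A minimal sketch of the key device: replacing a non-differentiable penalty such as the lasso |x| with a smooth approximation so that second-order, derivative-based optimisation can be used. sqrt(x^2 + eps) is one standard smooth surrogate, shown here only for illustration; it is not necessarily the specific approximation implemented in penfa, and 'negloglik' is a hypothetical function returning the factor model's negative log-likelihood.

smooth_abs <- function(x, eps = 1e-8) sqrt(x^2 + eps)      # differentiable surrogate for |x|

penalised_negloglik <- function(loadings, negloglik, lambda, eps = 1e-8) {
  negloglik(loadings) + lambda * sum(smooth_abs(loadings, eps))
}
# The smooth penalty makes the objective twice differentiable in the loadings, which is
# what allows analytical second-order derivative information to be exploited.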


Subject(s)
Algorithms , Trust , Computer Simulation , Likelihood Functions , Psychometrics
18.
PLoS Negl Trop Dis ; 15(2): e0009042, 2021 02.
Article in English | MEDLINE | ID: mdl-33539357

ABSTRACT

Various global health initiatives are currently advocating the elimination of schistosomiasis within the next decade. Schistosomiasis is a highly debilitating tropical infectious disease with a severe burden of morbidity, so operational research that accurately evaluates diagnostics quantifying the epidemic status is essential for guiding effective strategies. Latent class models (LCMs) have generally been considered in epidemiology, and in particular in recent schistosomiasis diagnostic studies, as a flexible tool for evaluating diagnostics because assessing the true infection status (via a gold standard) is not possible. However, within the biostatistics literature, classical LCMs have been criticised in real-life problems when the conditional independence (CI) assumption is violated and when they are applied to a small number of diagnostics (most often 3-5 diagnostic tests). Solutions that relax the CI assumption and account for zero-inflation, as well as the collection of partial gold standard information, have been proposed, offering the potential for more robust model estimates. In the current article, we examined such approaches in the context of schistosomiasis via the analysis of two real datasets and extensive simulation studies. Our main conclusions highlighted poor model fit in low-prevalence settings and the necessity of collecting partial gold standard information in such settings in order to improve the accuracy and reduce the bias of sensitivity and specificity estimates.


Subject(s)
Diagnostic Tests, Routine/statistics & numerical data , Diagnostic Tests, Routine/standards , Models, Statistical , Schistosomiasis/diagnosis , Diagnostic Errors , Humans , Latent Class Analysis , Reference Standards , Sensitivity and Specificity
19.
Psychometrika ; 85(4): 996-1012, 2020 12.
Article in English | MEDLINE | ID: mdl-33346885

ABSTRACT

The likelihood ratio test (LRT) is widely used for comparing the relative fit of nested latent variable models. Following Wilks' theorem, the LRT is conducted by comparing the LRT statistic with its asymptotic distribution under the restricted model, a chi-squared distribution with degrees of freedom equal to the difference in the number of free parameters between the two nested models under comparison. For models with latent variables such as factor analysis, structural equation models and random effects models, however, it is often found that the chi-squared approximation does not hold. In this note, we show how the regularity conditions of Wilks' theorem may be violated using three examples of models with latent variables. In addition, a more general theory for LRT is given that provides the correct asymptotic theory for these LRTs. This general theory was first established in Chernoff (J R Stat Soc Ser B (Methodol) 45:404-413, 1954) and discussed in both van der Vaart (Asymptotic statistics, Cambridge, Cambridge University Press, 2000) and Drton (Ann Stat 37:979-1012, 2009), but it does not seem to have received enough attention. We illustrate this general theory with the three examples.
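A minimal sketch in R of the standard Wilks-based recipe that the note examines: twice the log-likelihood difference referred to a chi-squared distribution whose degrees of freedom equal the difference in free parameters. The inputs are hypothetical log-likelihood values and parameter counts from two nested fitted models.

lrt_pvalue <- function(logLik_full, logLik_restricted, df_full, df_restricted) {
  stat <- 2 * (logLik_full - logLik_restricted)
  pchisq(stat, df = df_full - df_restricted, lower.tail = FALSE)
}
# As the note shows, this single chi-squared reference can fail for latent variable models,
# e.g. when a variance parameter lies on the boundary of the parameter space.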


Subject(s)
Models, Theoretical , Humans , Likelihood Functions , Psychometrics
20.
Br J Math Stat Psychol ; 62(Pt 2): 401-15, 2009 May.
Article in English | MEDLINE | ID: mdl-18625083

ABSTRACT

The paper proposes a full information maximum likelihood estimation method for modelling multivariate longitudinal ordinal variables. Two latent variable models are proposed that account for dependencies among items within and between time points. One model fits item-specific random effects, which account for the correlations between time points, and the second model uses a common factor. The relationships between the time-dependent latent variables are modelled with a non-stationary autoregressive model. The proposed models are fitted to a real data set.


Subject(s)
Data Interpretation, Statistical , Likelihood Functions , Longitudinal Studies , Models, Statistical , Multivariate Analysis , Bias , Confidence Intervals , Humans , Normal Distribution , Public Opinion , Regression Analysis , Reproducibility of Results , Sample Size , Statistics as Topic/methods , United Kingdom