Results 1 - 20 of 113
1.
Am J Hum Genet ; 110(5): 762-773, 2023 05 04.
Article in English | MEDLINE | ID: mdl-37019109

ABSTRACT

The ongoing release of large-scale sequencing data in the UK Biobank allows for the identification of associations between rare variants and complex traits. SAIGE-GENE+ is a valid approach to conducting set-based association tests for quantitative and binary traits. However, for ordinal categorical phenotypes, applying SAIGE-GENE+ after treating the trait as quantitative, or after binarizing it, can inflate type I error rates or reduce power. In this study, we propose a scalable and accurate method for rare-variant association tests, POLMM-GENE, which uses a proportional odds logistic mixed model to characterize ordinal categorical phenotypes while adjusting for sample relatedness. POLMM-GENE fully utilizes the categorical nature of phenotypes and thus controls type I error rates well while remaining powerful. In analyses of UK Biobank 450k whole-exome-sequencing data for five ordinal categorical traits, POLMM-GENE identified 54 gene-phenotype associations.
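The cumulative-logit structure that a proportional odds model assigns to an ordinal phenotype can be sketched in a few lines. This is an illustrative NumPy sketch of the category probabilities only, not the POLMM-GENE implementation; the cutpoints and linear predictor `eta` are made-up values, and in the real model `eta` would also carry genotype effects and a random effect for sample relatedness.

```python
import numpy as np

def cumulative_logit_probs(eta, cutpoints):
    """Category probabilities under a proportional-odds (cumulative logit) model.

    P(Y <= k | x) = expit(cutpoints[k] - eta), with eta = x @ beta.
    Returns P(Y = k) for k = 0..K, where K = len(cutpoints).
    """
    cdf = 1.0 / (1.0 + np.exp(-(np.asarray(cutpoints, float) - eta)))
    cdf = np.append(cdf, 1.0)                     # P(Y <= K) = 1
    return np.diff(np.concatenate(([0.0], cdf)))  # successive differences of the CDF

# hypothetical cutpoints and linear predictor, for illustration only
probs = cumulative_logit_probs(eta=0.5, cutpoints=[-1.0, 0.0, 1.5])
```

The single vector `beta` shared across all cumulative logits is exactly the proportional-odds assumption; only the cutpoints differ by category.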


Subjects
Exome, Genome-Wide Association Study, Genome-Wide Association Study/methods, Exome/genetics, Biological Specimen Banks, Phenotype, Data Analysis, United Kingdom
2.
Am J Hum Genet ; 108(5): 825-839, 2021 05 06.
Article in English | MEDLINE | ID: mdl-33836139

ABSTRACT

In genome-wide association studies, ordinal categorical phenotypes are widely used to measure human behaviors, satisfaction, and preferences. However, because of the lack of analysis tools, methods designed for binary or quantitative traits are commonly used inappropriately to analyze categorical phenotypes. To accurately model the dependence of an ordinal categorical phenotype on covariates, we propose an efficient mixed model association test, the proportional odds logistic mixed model (POLMM). POLMM is computationally efficient for analyzing large datasets with hundreds of thousands of samples, can control type I error rates at a stringent significance level regardless of the phenotypic distribution, and is more powerful than alternative methods. In contrast, standard linear mixed model approaches cannot control type I error rates for rare variants when the phenotypic distribution is unbalanced, although they perform well when testing common variants. We applied POLMM to 258 ordinal categorical phenotypes on array genotypes and imputed samples from 408,961 individuals in UK Biobank. In total, we identified 5,885 genome-wide significant variants, of which 424 (7.2%) are rare variants with MAF < 0.01.


Subjects
Computer Simulation, Genome-Wide Association Study, Genetic Models, Phenotype, Biological Specimen Banks, Child, Female, Humans, Male, Research Design, United Kingdom
3.
Biometrics ; 80(3)2024 Jul 01.
Article in English | MEDLINE | ID: mdl-39073773

ABSTRACT

The scope of this paper is a multivariate setting involving categorical variables. Following an external manipulation of one variable, the goal is to evaluate the causal effect on an outcome of interest. A typical scenario involves a system of variables representing lifestyle, physical and mental features, symptoms, and risk factors, with the outcome being the presence or absence of a disease. These variables are interconnected in complex ways, allowing the effect of an intervention to propagate through multiple paths. A distinctive feature of our approach is the estimation of causal effects while accounting for uncertainty in both the dependence structure, which we represent through a directed acyclic graph (DAG), and the DAG-model parameters. Specifically, we propose a Markov chain Monte Carlo algorithm that targets the joint posterior over DAGs and parameters, based on an efficient reversible-jump proposal scheme. We validate our method through extensive simulation studies and demonstrate that it outperforms current state-of-the-art procedures in terms of estimation accuracy. Finally, we apply our methodology to analyze a dataset on depression and anxiety in undergraduate students.


Subjects
Algorithms, Causality, Computer Simulation, Depression, Markov Chains, Statistical Models, Monte Carlo Method, Humans, Anxiety, Biometry/methods
4.
BMC Med Res Methodol ; 24(1): 140, 2024 Jun 28.
Article in English | MEDLINE | ID: mdl-38943068

ABSTRACT

BACKGROUND: Longitudinal ordinal data are commonly analyzed using a marginal proportional odds model relating ordinal outcomes to covariates in the biomedical and health sciences. The generalized estimating equation (GEE) consistently estimates the regression parameters of marginal models even if the working covariance structure is misspecified. For small-sample longitudinal binary data, however, recent studies have shown that the GEE may yield biased estimates of the regression parameters, and have addressed the issue by applying Firth's adjustment for the likelihood score equation to the GEE, as if generalized estimating functions were likelihood score functions. In this manuscript, for the proportional odds model for longitudinal ordinal data, the small-sample properties of the GEE were investigated and a bias-reduced GEE (BR-GEE) was derived. METHODS: By applying the adjustment originally derived for the likelihood score function of the proportional odds model to the GEE, we produced the BR-GEE. We investigated the small-sample properties of both GEE and BR-GEE through simulation and applied them to a clinical study dataset. RESULTS: In simulation studies, the BR-GEE had bias closer to zero and smaller root mean square error than the GEE, with confidence interval coverage probability near or above the nominal level. The simulation also showed that the BR-GEE maintained a type I error rate near or below the nominal level. CONCLUSIONS: For the analysis of longitudinal ordinal data involving a small number of subjects, the BR-GEE is advantageous for obtaining estimates of the regression parameters of marginal proportional odds models.
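The sandwich-variance idea that makes the GEE robust to working-covariance misspecification can be illustrated for a binary outcome under an independence working correlation. This is a didactic NumPy sketch with simulated data, not the BR-GEE of the paper (which additionally applies a Firth-type bias adjustment, and targets ordinal outcomes); all data and parameter values are invented.

```python
import numpy as np

def gee_logistic_independence(X, y, cluster, n_iter=25):
    """GEE for a binary outcome with an independence working correlation.

    Point estimates coincide with ordinary logistic regression; the
    cluster-robust ("sandwich") covariance accounts for within-cluster
    correlation. A teaching sketch, not a production routine.
    """
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):                      # Fisher scoring / IRLS
        mu = 1.0 / (1.0 + np.exp(-X @ beta))
        W = mu * (1.0 - mu)
        beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - mu))
    mu = 1.0 / (1.0 + np.exp(-X @ beta))
    bread = np.linalg.inv(X.T @ ((mu * (1 - mu))[:, None] * X))
    meat = np.zeros_like(bread)
    for g in np.unique(cluster):                 # sum of per-cluster score outer products
        s = X[cluster == g].T @ (y - mu)[cluster == g]
        meat += np.outer(s, s)
    cov = bread @ meat @ bread                   # sandwich covariance
    return beta, np.sqrt(np.diag(cov))

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = (rng.random(n) < 1 / (1 + np.exp(-(0.3 + 0.8 * X[:, 1])))).astype(float)
cluster = np.repeat(np.arange(40), 5)            # 40 clusters of 5 observations
beta, se = gee_logistic_independence(X, y, cluster)
```

With very few clusters, the per-cluster score sums in the "meat" are estimated from little information — that is the small-sample bias problem the BR-GEE addresses.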


Subjects
Bias, Humans, Longitudinal Studies, Likelihood Functions, Computer Simulation, Statistical Models, Statistical Data Interpretation, Sample Size, Algorithms
5.
Multivariate Behav Res ; 59(5): 1077-1097, 2024.
Article in English | MEDLINE | ID: mdl-38997141

ABSTRACT

We implement an analytic approach for ordinal measures and use it to investigate the structure of self-worth, and its changes over time, in a sample of adolescent students in high school. We represent the variations in self-worth and its various sub-domains using entropy-based measures that capture the observed uncertainty. We then study the evolution of the entropy across four time points throughout a semester of high school. Our analytic approach yields information about the configuration of the various dimensions of the self together with time-related changes and associations among these dimensions. We represent the results using a network that depicts self-worth changes over time. This approach also identifies groups of adolescent students who show different patterns of associations, thus emphasizing the need to consider heterogeneity in the data.


Subjects
Entropy, Schools, Self Concept, Students, Humans, Adolescent, Students/psychology, Students/statistics & numerical data, Male, Female, Statistical Models
6.
Multivariate Behav Res ; 59(1): 17-45, 2024.
Article in English | MEDLINE | ID: mdl-37195880

ABSTRACT

The multilevel hidden Markov model (MHMM) is a promising method to investigate intense longitudinal data obtained within the social and behavioral sciences. The MHMM quantifies information on the latent dynamics of behavior over time. In addition, heterogeneity between individuals is accommodated with the inclusion of individual-specific random effects, facilitating the study of individual differences in dynamics. However, the performance of the MHMM has not been sufficiently explored. We performed an extensive simulation to assess the effect of the number of dependent variables (1-8), number of individuals (5-90), and number of observations per individual (100-1600) on the estimation performance of a Bayesian MHMM with categorical data including various levels of state distinctiveness and separation. We found that using multivariate data generally reduces the sample size needed and improves the stability of the results. Moreover, including variables consisting only of random noise was generally not detrimental to model performance. Regarding the estimation of group-level parameters, the number of individuals and observations largely compensate for each other. However, only the former drives the estimation of between-individual variability. We conclude with guidelines on the necessary sample size, based on the level of state distinctiveness and separation and on the researcher's study objectives.
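The likelihood machinery of a single-level hidden Markov model with categorical emissions reduces to the scaled forward recursion, which the multilevel model builds on by letting parameters vary across individuals. A minimal NumPy sketch with invented two-state parameters, not the Bayesian MHMM estimation of the paper.

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a categorical observation sequence under an HMM.

    pi: initial state probabilities (S,), A: transition matrix (S, S),
    B: emission probabilities (S, M). Uses the scaled forward recursion
    to avoid underflow on long sequences.
    """
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # propagate, then weight by emission
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()            # rescale; log of the scale accumulates
    return loglik

# invented two-state, three-category parameters
pi = np.array([0.6, 0.4])
A = np.array([[0.9, 0.1], [0.2, 0.8]])
B = np.array([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]])
ll = forward_loglik([0, 0, 2, 1], pi, A, B)
```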


Subjects
Statistical Models, Humans, Bayes Theorem, Computer Simulation, Markov Chains
7.
Behav Res Methods ; 56(3): 1506-1532, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37118647

ABSTRACT

Intensive longitudinal designs are increasingly popular, as are dynamic structural equation models (DSEM) to accommodate unique features of these designs. Many helpful resources on DSEM exist, though they focus on continuous outcomes while categorical outcomes are omitted, briefly mentioned, or considered as a straightforward extension. This viewpoint regarding categorical outcomes is not unwarranted for technical audiences, but there are non-trivial nuances in model building and interpretation with categorical outcomes that are not necessarily straightforward for empirical researchers. Furthermore, categorical outcomes are common given that binary behavioral indicators or Likert responses are frequently solicited as low-burden variables to discourage participant non-response. This tutorial paper is therefore dedicated to providing an accessible treatment of DSEM in Mplus exclusively for categorical outcomes. We cover the general probit model whereby the raw categorical responses are assumed to come from an underlying normal process. We cover probit DSEM and expound why existing treatments have considered categorical outcomes as a straightforward extension of the continuous case. Data from a motivating ecological momentary assessment study with a binary outcome are used to demonstrate an unconditional model, a model with disaggregated covariates, and a model for data with a time trend. We provide annotated Mplus code for these models and discuss interpretation of the results. We then discuss model specification and interpretation in the case of an ordinal outcome and provide an example to highlight differences between ordinal and binary outcomes. We conclude with a discussion of caveats and extensions.


Subjects
Statistical Models, Humans
8.
Entropy (Basel) ; 26(9)2024 Sep 13.
Article in English | MEDLINE | ID: mdl-39330117

ABSTRACT

An information-theoretic data mining method is employed to analyze categorical spatiotemporal Geographic Information System land use data. Reconstructability Analysis (RA) is a maximum-entropy-based data modeling methodology that works exclusively with discrete data such as those in the National Land Cover Database (NLCD). The NLCD is organized into a spatial (raster) grid and data are available in a consistent format for every five years from 2001 to 2021. An NLCD tool reports how much change occurred for each category of land use; for the study area examined, the most dynamic class is Evergreen Forest (EFO), so the presence or absence of EFO in 2021 was chosen as the dependent variable that our data modeling attempts to predict. RA predicts the outcome with approximately 80% accuracy using a sparse set of cells from a spacetime data cube consisting of neighboring lagged-time cells. When the predicting cells are all Shrubs and Grasses, there is a high probability for a 2021 state of EFO, while when the predicting cells are all EFO, there is a high probability that the 2021 state will not be EFO. These findings are interpreted as detecting forest clear-cut cycles that show up in the data and explain why this class is so dynamic. This study introduces a new approach to analyzing GIS categorical data and expands the range of applications that this entropy-based methodology can successfully model.

9.
Biostatistics ; 2022 Dec 19.
Article in English | MEDLINE | ID: mdl-36534895

ABSTRACT

Clustered observations are ubiquitous in controlled and observational studies and arise naturally in multicenter trials or longitudinal surveys. We present a novel model for the analysis of clustered observations where the marginal distributions are described by a linear transformation model and the correlations by a joint multivariate normal distribution. The joint model provides an analytic formula for the marginal distribution. Owing to the richness of transformation models, the techniques are applicable to any type of response variable, including bounded, skewed, binary, ordinal, or survival responses. We demonstrate how the common normal assumption for reaction times can be relaxed in the sleep deprivation benchmark data set and report marginal odds ratios for the notoriously difficult toenail data. We furthermore discuss the analysis of two clinical trials aiming at the estimation of marginal treatment effects. In the first trial, pain was repeatedly assessed on a bounded visual analog scale and marginal proportional-odds models are presented. The second trial reported disease-free survival in rectal cancer patients, where the marginal hazard ratio from Weibull and Cox models is of special interest. An empirical evaluation compares the performance of the novel approach to generalized estimating equations for binary responses and to conditional mixed-effects models for continuous responses. An implementation is available in the tram add-on package to the R system and was benchmarked against established models in the literature.

10.
Stat Med ; 42(12): 1965-1980, 2023 05 30.
Article in English | MEDLINE | ID: mdl-36896833

ABSTRACT

Hypertension significantly increases the risk for many health conditions including heart disease and stroke. Hypertensive patients often have continuous measurements of their blood pressure to better understand how it fluctuates over the day. The continuous-time Markov chain (CTMC) is commonly used to study repeated measurements with categorical outcomes. However, the standard CTMC may be restrictive, because the rates of transitions between states are assumed to be constant through time, while the transition rates for describing the dynamics of hypertension are likely to be changing over time. In addition, the applications of CTMC rarely account for the effects of other covariates on state transitions. In this article, we considered a non-homogeneous continuous-time Markov chain with two states to analyze changes in hypertension while accounting for multiple covariates. The explicit formulas for the transition probability matrix as well as the corresponding likelihood function were derived. In addition, we proposed a maximum likelihood estimation algorithm for estimating the parameters in the time-dependent rate function. Lastly, the model performance was demonstrated through both a simulation study and application to ambulatory blood pressure data.
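For the time-homogeneous special case with constant rates, the two-state transition probability matrix has a familiar closed form; the non-homogeneous model in the paper lets these rates depend on time and covariates. A sketch for intuition, with `lam` and `mu` as the 0→1 and 1→0 transition rates (values invented).

```python
import numpy as np

def two_state_ctmc_P(t, lam, mu):
    """Transition probability matrix P(t) for a two-state CTMC.

    Generator Q = [[-lam, lam], [mu, -mu]] with constant rates; the
    closed form is P(t) = (1/s) * [[mu + lam*e, lam - lam*e],
                                   [mu - mu*e,  lam + mu*e]],
    where s = lam + mu and e = exp(-s*t).
    """
    s = lam + mu
    e = np.exp(-s * t)
    return np.array([
        [(mu + lam * e) / s, (lam - lam * e) / s],
        [(mu - mu * e) / s, (lam + mu * e) / s],
    ])

P = two_state_ctmc_P(t=1.0, lam=0.5, mu=0.25)
```

As t grows, both rows converge to the stationary distribution (mu/s, lam/s); the non-homogeneous model replaces the constant rates with time-dependent rate functions, which is what breaks this closed form and motivates the paper's derivations.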


Subjects
Ambulatory Blood Pressure Monitoring, Hypertension, Humans, Markov Chains, Likelihood Functions, Computer Simulation
11.
J Math Biol ; 87(2): 26, 2023 07 10.
Article in English | MEDLINE | ID: mdl-37428265

ABSTRACT

Data taking values on discrete sample spaces are the embodiment of modern biological research. "Omics" experiments based on high-throughput sequencing produce millions of symbolic outcomes in the form of reads (i.e., DNA sequences of a few dozens to a few hundred nucleotides). Unfortunately, these intrinsically non-numerical datasets often deviate dramatically from natural assumptions a practitioner might make, and the possible sources of this deviation are usually poorly characterized. This contrasts with numerical datasets where Gaussian-type errors are often well-justified. To overcome this hurdle, we introduce the notion of latent weight, which measures the largest expected fraction of samples from a probabilistic source that conform to a model in a class of idealized models. We examine various properties of latent weights, which we specialize to the class of exchangeable probability distributions. As proof of concept, we analyze DNA methylation data from the 22 human autosome pairs. Contrary to what is usually assumed in the literature, we provide strong evidence that highly specific methylation patterns are overrepresented at some genomic locations when latent weights are taken into account.


Subjects
Genome, Genomics, Humans, Probability, High-Throughput Nucleotide Sequencing
12.
J Biopharm Stat ; 33(3): 371-385, 2023 05 04.
Article in English | MEDLINE | ID: mdl-36533908

ABSTRACT

For ordered categorical data from randomized clinical trials, the relative effect, the probability that observations in one group tend to be larger, has been considered an appropriate measure of effect size. Although the Wilcoxon-Mann-Whitney test is widely used to compare two groups, its null hypothesis is not just a relative effect of 50%, but identical distributions between groups. The null hypothesis of the Brunner-Munzel test, another rank-based method applicable to arbitrary types of data, is just a relative effect of 50%. In this study, we compared actual type I error rates (or 1 - coverage probability) of the profile-likelihood-based confidence intervals for the relative effect and other rank-based methods in simulation studies at a relative effect of 50%. The profile-likelihood method, like the Brunner-Munzel test, does not require any distributional assumptions. Actual type I error rates of the profile-likelihood method and the Brunner-Munzel test were close to the nominal level in large or medium samples, even under unequal distributions. Those of the Wilcoxon-Mann-Whitney test differed substantially from the nominal level under unequal distributions, especially under unequal sample sizes. In small samples, the actual type I error rates of the Brunner-Munzel test were slightly larger than the nominal level, and those of the profile-likelihood method were even larger. We provide a paradoxical numerical example: only the Wilcoxon-Mann-Whitney test was significant under equal sample sizes, but after changing only the allocation ratio, it was not significant while the profile-likelihood method and the Brunner-Munzel test were. This phenomenon might reflect the nature of the Wilcoxon-Mann-Whitney test seen in the simulation study, namely that its actual type I error rates rise above or fall below the nominal level depending on the allocation ratio.
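The relative effect p = P(X < Y) + 0.5·P(X = Y) discussed here has a simple midrank estimator. A small NumPy sketch with toy data; `scipy.stats.brunnermunzel` provides the corresponding test statistic and p-value.

```python
import numpy as np

def relative_effect(x, y):
    """Estimate p = P(X < Y) + 0.5 * P(X = Y) via midranks.

    This is the estimand of the Brunner-Munzel test; 0.5 means neither
    group tends toward larger values. Ties receive averaged (mid) ranks,
    so the estimator works for ordered categorical data.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    combined = np.concatenate([x, y])
    order = combined.argsort(kind="mergesort")
    ranks = np.empty_like(combined)
    ranks[order] = np.arange(1, combined.size + 1)
    for v in np.unique(combined):               # average ranks within ties
        ranks[combined == v] = ranks[combined == v].mean()
    ry = ranks[x.size:].mean()                  # mean midrank of group y
    return (ry - (y.size + 1) / 2) / x.size

p_hat = relative_effect([1, 2, 3], [2, 3, 4])   # 7 of 9 pairwise comparisons favor y
```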


Subjects
Statistical Models, Humans, Computer Simulation, Confidence Intervals, Likelihood Functions, Nonparametric Statistics
13.
Eur Spine J ; 32(9): 3009-3014, 2023 09.
Article in English | MEDLINE | ID: mdl-37306800

ABSTRACT

BACKGROUND: Recent signs of fraudulent behaviour in spine RCTs have called the integrity of trials in the field into question. RCTs are particularly important due to the weight they are accorded in guiding treatment decisions, and thus ensuring RCTs' reliability is crucial. This study investigates the presence of non-random baseline frequency data in purported RCTs published in spine journals. METHODS: A PubMed search was performed to obtain all RCTs published in four spine journals (Spine, The Spine Journal, the Journal of Neurosurgery Spine, and European Spine Journal) between Jan-2016 and Dec-2020. Baseline frequency data were extracted, and variable-wise p values were calculated using the Pearson Chi-squared test. These p values were combined for each study into study-wise p values using the Stouffer method. Studies with p values below 0.01 and 0.05 and those above 0.95 and 0.99 were reviewed. Results were compared to Carlisle's 2017 survey of anaesthesia and critical care medicine RCTs. RESULTS: One hundred sixty-seven of the 228 studies identified were included. Study-wise p values were largely consistent with genuine randomized experiments. Slightly more study-wise p values above 0.99 were observed than expected, but a number of these had good explanations accounting for that excess. The distribution of observed study-wise p values matched the expected distribution more closely than those in a similar survey of the anaesthesia and critical care medicine literature. CONCLUSION: The data surveyed do not show evidence of systemic fraudulent behaviour. Spine RCTs in major spine journals were found to be consistent with genuine random allocation and experimentally derived data.


Subjects
Anesthesia, Neurosurgical Procedures, Humans, Reproducibility of Results
14.
Prev Sci ; 24(3): 393-397, 2023 04.
Article in English | MEDLINE | ID: mdl-36633766

ABSTRACT

A variety of health and social problems are routinely measured in the form of categorical outcome data (such as presence/absence of a problem behavior or stages of disease progression). Therefore, proper quantitative analysis of categorical data lies at the heart of the empirical work conducted in prevention science. Categorical data analysis constitutes a broad dynamic field of methods research and data analysts in prevention science can benefit from incorporating recent advances and developments in the statistical evaluation of categorical outcomes in their methodological repertoire. The present Special Issue, Advanced Categorical Data Analysis in Prevention Science, highlights recent methods developments and illustrates their application in the context of prevention science. Contributions of the Special Issue cover a wide variety of areas ranging from statistical models for binary as well as multi-categorical data, advances in the statistical evaluation of moderation and mediation effects for categorical data, developments in model evaluation and measurement, as well as methods that integrate variable- and person-oriented categorical data analysis. The articles of this Special issue make methodological advances in these areas accessible to the audience of prevention scientists to maintain rigorous statistical practice and decision making. The current paper provides background and rationale for this Special Issue, an overview of the articles, and a brief discussion of some potential future directions for prevention research involving categorical data analysis.


Subjects
Statistical Models, Problem Behavior, Humans, Social Problems, Health Services Research, Data Analysis
15.
Prev Sci ; 24(3): 455-466, 2023 04.
Article in English | MEDLINE | ID: mdl-33970410

ABSTRACT

The Tucker-Lewis index (TLI; Tucker & Lewis, 1973), also known as the non-normed fit index (NNFI; Bentler & Bonett, 1980), is one of the numerous incremental fit indices widely used in linear mean and covariance structure modeling, particularly in exploratory factor analysis, a tool popular in prevention research. It augments information provided by other indices such as the root-mean-square error of approximation (RMSEA). In this paper, we develop and examine an analogous index for categorical item-level data modeled with item response theory (IRT). The proposed Tucker-Lewis index for IRT (TLIRT) is based on Maydeu-Olivares and Joe's (2005) family of limited-information overall model fit statistics. The limited-information fit statistics have significantly better chi-square approximation and power than traditional full-information Pearson or likelihood ratio statistics under realistic conditions. Building on the incremental fit assessment principle, the TLIRT compares the fit of the model under consideration along a spectrum from worst to best possible model fit. We examine the performance of the new index using simulated and empirical data. Results from a simulation study suggest that the new index behaves as theoretically expected and can offer additional insights about model fit not available from other sources. In addition, a more stringent cutoff value than Hu and Bentler's (1999) traditional criterion for continuous variables is perhaps needed. In the empirical data analysis, we use a dataset from a measurement development project in support of cigarette smoking cessation research to illustrate the usefulness of the TLIRT. We noticed that had we only utilized the RMSEA index, we could have arrived at qualitatively different conclusions about model fit depending on the choice of test statistics, an issue to which the TLIRT is relatively more immune.
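The incremental-fit algebra behind the TLIRT is the same as for the classical TLI; the TLIRT substitutes limited-information fit statistics for the full-information chi-squares. A sketch with made-up chi-square values, not the authors' implementation.

```python
def tucker_lewis_index(chi2_model, df_model, chi2_null, df_null):
    """Tucker-Lewis index from model and baseline (null) fit statistics.

    TLI = (chi2_null/df_null - chi2_model/df_model) / (chi2_null/df_null - 1).
    A model fitting no better than chance (chi2 ~= df) gives TLI ~= 1.
    """
    r_null = chi2_null / df_null
    r_model = chi2_model / df_model
    return (r_null - r_model) / (r_null - 1.0)

# hypothetical fit statistics, for illustration only
tli = tucker_lewis_index(chi2_model=120.0, df_model=100, chi2_null=2000.0, df_null=120)
```

Because both ratios are chi-square-to-df ratios, any well-calibrated overall fit statistic (full- or limited-information) can be plugged in, which is what makes the IRT extension natural.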


Subjects
Factor Analysis, Humans, Psychometrics, Reproducibility of Results
16.
Behav Res Methods ; 55(7): 3326-3347, 2023 10.
Article in English | MEDLINE | ID: mdl-36114386

ABSTRACT

We assessed several agreement coefficients applied in 2×2 contingency tables, which are common in research owing to dichotomization. Here, we not only studied some specific estimators but also developed a general method for the study of any estimator that is a candidate agreement measure. This method was developed in open-source R code and is available to researchers. We tested this method by verifying the performance of several traditional estimators over all possible table configurations with sizes ranging from 1 to 68 (a total of 1,028,789 tables). Cohen's kappa showed handicapped behavior similar to Pearson's r, Yule's Q, and Yule's Y. Scott's pi and Shankar and Bangdiwala's B seem to assess situations of disagreement better than agreement between raters. Krippendorff's alpha emulates, without any advantage, Scott's pi in cases with nominal variables and two raters. Dice's F1 and McNemar's chi-squared incompletely assess the information in the contingency table, showing the poorest performance of all. We concluded that Cohen's kappa is a measure of association and that McNemar's chi-squared assesses neither association nor agreement; the only two authentic agreement estimators are Holley and Guilford's G and Gwet's AC1. These two estimators also showed the best performance over the range of table sizes and should be considered the first choices for agreement measurement in 2×2 contingency tables. All procedures and data were implemented in R and are available to download from Harvard Dataverse https://doi.org/10.7910/DVN/HMYTCK.
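The estimators compared above have simple closed forms on a 2×2 table. A Python sketch of three of them following the standard definitions, not the authors' R code; the cell counts in the example are invented.

```python
def agreement_2x2(a, b, c, d):
    """Cohen's kappa, Holley-Guilford's G, and Gwet's AC1 for a 2x2 table.

    Layout: a and d are the agreement cells, b and c the disagreement
    cells (rater 1 in rows, rater 2 in columns).
    """
    n = a + b + c + d
    po = (a + d) / n                       # observed agreement
    p1, q1 = (a + b) / n, (a + c) / n      # each rater's marginal for category 1
    pe_kappa = p1 * q1 + (1 - p1) * (1 - q1)
    kappa = (po - pe_kappa) / (1 - pe_kappa)
    g = (a + d - b - c) / n                # Holley-Guilford's G
    pi1 = (p1 + q1) / 2                    # pooled prevalence of category 1
    pe_ac1 = 2 * pi1 * (1 - pi1)           # Gwet's chance-agreement term
    ac1 = (po - pe_ac1) / (1 - pe_ac1)
    return kappa, g, ac1

kappa, g, ac1 = agreement_2x2(a=40, b=5, c=5, d=50)
```

The three coefficients share the observed agreement po and differ only in the chance-correction term, which is exactly where the paper locates kappa's "handicapped" behavior under skewed marginals.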


Subjects
Dissent and Disputes, Humans, Observer Variation, Reproducibility of Results
17.
Behav Res Methods ; 2023 Aug 28.
Article in English | MEDLINE | ID: mdl-37640961

ABSTRACT

To evaluate model fit in confirmatory factor analysis, researchers compare goodness-of-fit indices (GOFs) against fixed cutoff values (e.g., CFI > .950) derived from simulation studies. Methodologists have cautioned that cutoffs for GOFs are only valid for settings similar to the simulation scenarios from which cutoffs originated. Despite these warnings, fixed cutoffs for popular GOFs (i.e., χ2, χ2/df, CFI, RMSEA, SRMR) continue to be widely used in applied research. We (1) argue that the practice of using fixed cutoffs needs to be abandoned and (2) review time-honored and emerging alternatives to fixed cutoffs. We first present the most in-depth simulation study to date on the sensitivity of GOFs to model misspecification (i.e., misspecified factor dimensionality and unmodeled cross-loadings) and their susceptibility to further data and analysis characteristics (i.e., estimator, number of indicators, number and distribution of response options, loading magnitude, sample size, and factor correlation). We included all characteristics identified as influential in previous studies. Our simulation enabled us to replicate well-known influences on GOFs and establish hitherto unknown or underappreciated ones. In particular, the magnitude of the factor correlation turned out to moderate the effects of several characteristics on GOFs. Second, to address these problems, we discuss several strategies for assessing model fit that take the dependency of GOFs on the modeling context into account. We highlight tailored (or "dynamic") cutoffs as a way forward. We provide convenient tables with scenario-specific cutoffs as well as regression formulae to predict cutoffs tailored to the empirical setting of interest.

18.
Entropy (Basel) ; 25(9)2023 Sep 08.
Article in English | MEDLINE | ID: mdl-37761610

ABSTRACT

Individual subjects' ratings are neither metric nor homogeneous in meaning; consequently, digitally labeled collections of subjects' ratings are intrinsically ordinal and categorical. Nonetheless, the literature privileges measures conceived for numerical data in these situations. In this paper, we discuss the exploratory theme of employing conditional entropy to measure degrees of uncertainty in responding to self-rating questions, and that of displaying the computed entropies along the ordinal axis for visible pattern recognition. We apply this theme to the study of an online dataset containing responses to the Rosenberg Self-Esteem Scale. We report three major findings. First, at the fine scale, the resulting ordinal displays of response-vs-covariate entropy measures reveal that subjects at both extreme labels (high self-esteem and low self-esteem) show distinct degrees of uncertainty. Secondly, at the global scale, in responding to positively posed questions the degree of uncertainty decreases with increasing levels of self-esteem, while in responding to negative questions the degree of uncertainty increases. Thirdly, such entropy-based computed patterns are preserved across age groups. We provide a set of tools developed in R, ready to implement, for the analysis of rating data and for exploring pattern-based knowledge in related research.
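The covariate-conditional entropy display described here boils down to weighting within-group Shannon entropies of the response distribution. A small NumPy sketch with invented counts, not the authors' R tools.

```python
import numpy as np

def entropy(counts):
    """Shannon entropy (in nats) of a vector of counts."""
    p = np.asarray(counts, float)
    p = p[p > 0] / p.sum()                 # drop empty cells, normalize
    return float(-(p * np.log(p)).sum())

def conditional_entropy(table):
    """H(response | covariate) from a covariate-by-response count table.

    Rows index covariate levels (e.g., self-esteem groups), columns index
    ordinal response labels; each row's entropy is weighted by the row's
    share of the data.
    """
    table = np.asarray(table, float)
    row_w = table.sum(axis=1) / table.sum()
    return float(sum(w * entropy(row) for w, row in zip(row_w, table)))

# uncertainty of responses within each of three hypothetical covariate groups
h = conditional_entropy([[30, 10, 5], [10, 25, 10], [5, 10, 30]])
```

Plotting the per-row entropies against the ordered covariate levels gives the kind of ordinal-axis display the paper uses to compare uncertainty across self-esteem groups.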

19.
Stat Med ; 41(26): 5189-5202, 2022 11 20.
Article in English | MEDLINE | ID: mdl-36043693

ABSTRACT

We analyze repeated cross-sectional survey data collected by the Institute of Global Health Innovation to characterize the perception and behavior of the Italian population during the COVID-19 pandemic, focusing on the period from April 2020 to July 2021. To accomplish this goal, we propose a Bayesian dynamic latent-class regression model that accounts for sampling bias by including survey weights in the likelihood function. Under the proposed approach, attitudes towards COVID-19 are described via ideal behaviors that are fixed over time, corresponding to different degrees of compliance with spread-preventive measures. The overall tendency toward a specific profile changes dynamically across survey waves via a latent Gaussian process regression that adjusts for subject-specific covariates. We illustrate the evolution of Italians' behaviors during the pandemic, providing insights into how the proportion of ideal behaviors varied during the phases of the lockdown, while measuring the effect of age, sex, region, and employment of the respondents on attitudes toward COVID-19.


Subjects
COVID-19, Humans, COVID-19/epidemiology, COVID-19/prevention & control, Pandemics/prevention & control, Cross-Sectional Studies, Bayes Theorem, Communicable Disease Control, Attitude, Surveys and Questionnaires
20.
BMC Health Serv Res ; 22(1): 1485, 2022 Dec 06.
Article in English | MEDLINE | ID: mdl-36474283

ABSTRACT

BACKGROUND: Accurate and precise measures of health literacy (HL) support health policy making, tailoring of health service design, and equitable access to health services. Research shows that valid and reliable unidimensional HL measurement instruments explicitly targeted at young people (YP) are scarce. Thus, this study aims at assessing the psychometric properties of existing unidimensional instruments and developing an HL instrument suitable for YP aged 16-25 years. METHODS: Applying the HLS19-Q47 in computer-assisted telephone interviews, we collected data in a representative sample comprising 890 YP aged 16-25 years in Norway. Applying the partial credit parameterization of the unidimensional Rasch model for polytomous data (PCM) and confirmatory factor analysis (CFA) with categorical variables, we evaluated the psychometric properties of the short versions of the HLS19-Q47: the HLS19-Q12, HLS19-SF12, and HLS19-Q12-NO. A new 12-item short version for measuring HL in YP, the HLS19-YP12, is suggested. RESULTS: The HLS19-Q12 did not display sufficient fit to the PCM, and the HLS19-SF12 was not sufficiently unidimensional. Relative to the PCM, some items in the HLS19-Q12, the HLS19-SF12, and the HLS19-Q12-NO discriminated poorly between participants at high and at low locations on the underlying latent trait. We observed disordered response categories for some items in the HLS19-Q12 and the HLS19-SF12. A few items in the HLS19-Q12, the HLS19-SF12, and the HLS19-Q12-NO displayed either uniform or non-uniform differential item functioning. Applying one-factorial CFA, none of the aforementioned short versions achieved exact fit in terms of a non-significant model chi-square statistic, or approximate fit in terms of SRMR ≤ .080 with all entries ≤ .10 in the respective residual matrix. The newly suggested parsimonious 12-item scale, the HLS19-YP12, displayed sufficient fit to the PCM and achieved approximate fit using one-factorial CFA.
CONCLUSIONS: Compared to other parsimonious 12-item short versions of the HLS19-Q47, the HLS19-YP12 has superior psychometric properties, and its unidimensionality was supported unconditionally. The HLS19-YP12 offers an efficient and much-needed screening tool for use among YP, likely useful in the development and evaluation of health policy and public health work, as well as in clinical settings.


Subjects
Health Literacy, Humans, Adolescent, Health Policy, Norway, Factor Analysis