Results 1 - 12 of 12
3.
Behav Res Methods; 2023 Nov 29.
Article in English | MEDLINE | ID: mdl-38030925

ABSTRACT

A common challenge in designing empirical studies is determining an appropriate sample size. When more complex models are used, estimates of power can only be obtained using Monte Carlo simulations. In this tutorial, we introduce the R package mlpwr to perform simulation-based power analysis based on surrogate modeling. Surrogate modeling is a powerful tool in guiding the search for study design parameters that imply a desired power or meet a cost threshold (e.g., in terms of monetary cost). mlpwr can be used to search for the optimal allocation when there are multiple design parameters, e.g., when balancing the number of participants and the number of groups in multilevel modeling. At the same time, the approach can take into account the cost of each design parameter, and aims to find a cost-efficient design. We introduce the basic functionality of the package, which can be applied to a wide range of statistical models and study designs. Additionally, we provide two examples based on empirical studies for illustration: one for sample size planning when using an item response theory model, and one for assigning the number of participants and the number of countries for a study using multilevel modeling.
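The Monte Carlo logic that such tools automate can be sketched in a few lines. This is a generic illustration, not mlpwr's actual R interface; the two-group z-test and all function and parameter names below are hypothetical choices for the sketch:

```python
import math
import random

def simulated_power(n_per_group, effect_size=0.5, alpha=0.05,
                    n_sims=2000, seed=0):
    """Monte Carlo power estimate for a two-group comparison: simulate
    data under the assumed effect, run the test, and count how often the
    result is significant."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        a = [rng.gauss(0.0, 1.0) for _ in range(n_per_group)]          # control
        b = [rng.gauss(effect_size, 1.0) for _ in range(n_per_group)]  # treatment
        diff = sum(b) / n_per_group - sum(a) / n_per_group
        z = diff / math.sqrt(2.0 / n_per_group)  # known unit variances -> z-test
        p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
        hits += p < alpha
    return hits / n_sims
```

A surrogate-modeling approach of the kind described above fits a smooth model to such noisy power estimates over candidate designs and searches it for the cheapest design reaching the target power, instead of brute-forcing every sample size.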

4.
JMIR Form Res; 7: e45749, 2023 Aug 14.
Article in English | MEDLINE | ID: mdl-37578827

ABSTRACT

BACKGROUND: Digital tools assessing momentary parameters and offering interventions in people's daily lives play an increasingly important role in mental health research and treatment. Ecological momentary assessment (EMA) makes it possible to assess transient mental health states and their parameters. Ecological momentary interventions (EMIs) offer mental health interventions that fit well into individuals' daily lives and routines. Self-efficacy is a transdiagnostic construct that is commonly associated with positive mental health outcomes. OBJECTIVE: The aim of our study assessing mood, specific self-efficacy, and other parameters using EMA was 2-fold. First, we wanted to determine the effects of daily assessed moods and dissatisfaction with social contacts as well as the effects of baseline variables, such as depression, on specific self-efficacy in the training group (TG). Second, we aimed to explore which variables influenced both groups' positive and negative moods during the 7-day study period. METHODS: In this randomized controlled trial, we applied digital self-efficacy training (EMI) to 93 university students with elevated self-reported stress levels and used EMA to collect parameters such as mood, dissatisfaction with social contacts, and specific self-efficacy daily. Participants were randomized to either the TG, where they completed the self-efficacy training combined with EMA, or the control group, where they completed EMA only. RESULTS: In total, 93 university students participated in the trial. Positive momentary mood was associated with higher specific self-efficacy in the evening of the same day (b=0.15, SE 0.05, P=.005). Higher self-efficacy at baseline was associated with reduced negative mood during study participation (b=-0.61, SE 0.30, P=.04), while we could not determine an effect on positive mood. 
Baseline depression severity was significantly associated with lower specific self-efficacy over the week of the training (b=-0.92, SE 0.35, P=.004). Higher baseline anxiety was associated with higher mean negative mood (state anxiety: b=0.78, SE 0.38, P=.04; trait anxiety: b=0.73, SE 0.33, P=.03) and with lower mean positive mood (b=-0.64, SE 0.28, P=.02) during study participation. Emotional flexibility was significantly enhanced in the TG. Additionally, dissatisfaction with social contacts was associated with both a decreased positive mood (b=-0.56, SE 0.15, P<.001) and an increased negative mood (b=0.45, SE 0.12, P<.001). CONCLUSIONS: This study showed several significant associations between mood and self-efficacy, as well as between mood and anxiety, in students with elevated stress levels, for example, suggesting that improving mood in people with low mood could enhance the effects of digital self-efficacy training. In addition, engaging in 1-week self-efficacy training was associated with increased emotional flexibility. Future work is needed to replicate and investigate the training's effects in other groups and settings. TRIAL REGISTRATION: ClinicalTrials.gov NCT05617248; https://clinicaltrials.gov/study/NCT05617248.

5.
Psychol Methods; 2023 May 25.
Article in English | MEDLINE | ID: mdl-37227894

ABSTRACT

In recent years, machine learning methods have become increasingly popular for prediction in psychology. At the same time, psychological researchers are typically not only interested in making predictions about the dependent variable, but also in learning which predictor variables are relevant, how they influence the dependent variable, and which predictors interact with each other. However, most machine learning methods are not directly interpretable. Interpretation techniques that support researchers in describing how the machine learning technique came to its prediction may be a means to this end. We present a variety of interpretation techniques and illustrate the opportunities they provide for interpreting the results of two widely used black box machine learning methods that serve as our examples: random forests and neural networks. At the same time, we illustrate potential pitfalls and risks of misinterpretation that may occur in certain data settings. We show how correlated predictors affect interpretations of the relevance or shape of predictor effects, and in which situations interaction effects may or may not be detected. We use simulated didactic examples throughout the article, as well as an empirical data set, to illustrate an approach for objectifying the interpretation of visualizations. We conclude that, when applied with critical reflection, interpretable machine learning techniques may provide useful tools for describing complex psychological relationships. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
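One widely used interpretation technique of the kind surveyed here is permutation importance. The sketch below (hypothetical toy data, not from the article, with a plain linear model standing in for the black box) also illustrates the correlated-predictor caveat: x1 correlates strongly with the true driver x0, yet receives near-zero importance once the model has access to x0, so conclusions about a correlated predictor's relevance depend on the model and technique used:

```python
import numpy as np

def permutation_importance(predict, X, y, n_repeats=20, seed=0):
    """Model-agnostic permutation importance: the rise in mean squared
    error when one predictor column is shuffled, which breaks that
    predictor's link to the outcome while keeping its distribution."""
    rng = np.random.default_rng(seed)
    base_mse = np.mean((y - predict(X)) ** 2)
    importances = []
    for j in range(X.shape[1]):
        rises = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = Xp[rng.permutation(len(Xp)), j]  # shuffle column j only
            rises.append(np.mean((y - predict(Xp)) ** 2) - base_mse)
        importances.append(np.mean(rises))
    return np.array(importances)

# Hypothetical toy data: x0 drives y; x1 is merely correlated with x0.
rng = np.random.default_rng(1)
x0 = rng.normal(size=500)
x1 = 0.9 * x0 + 0.45 * rng.normal(size=500)
y = 2.0 * x0 + rng.normal(size=500)
X = np.column_stack([x0, x1])

# A linear model stands in for the black box (random forest, neural net, ...).
beta, *_ = np.linalg.lstsq(np.column_stack([np.ones(len(y)), X]), y, rcond=None)
predict = lambda Z: beta[0] + Z @ beta[1:]
imp = permutation_importance(predict, X, y)
```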

6.
Educ Psychol Meas; 83(1): 181-212, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36601252

ABSTRACT

To detect differential item functioning (DIF), Rasch trees search for optimal splitpoints in covariates and identify subgroups of respondents in a data-driven way. To determine whether and in which covariate a split should be performed, Rasch trees use statistical significance tests. Consequently, Rasch trees are more likely to label small DIF effects as significant in larger samples. This leads to larger trees, which split the sample into more subgroups. What would be more desirable is an approach driven by effect size rather than sample size. In order to achieve this, we suggest implementing an additional stopping criterion: the popular Educational Testing Service (ETS) classification scheme based on the Mantel-Haenszel odds ratio. This criterion helps us to evaluate whether a split in a Rasch tree is based on a substantial or an ignorable difference in item parameters, and it allows the Rasch tree to stop growing when DIF between the identified subgroups is small. Furthermore, it supports identifying DIF items and quantifying DIF effect sizes in each split. Based on simulation results, we conclude that the Mantel-Haenszel effect size further reduces unnecessary splits in Rasch trees under the null hypothesis, or when the sample size is large but DIF effects are negligible. To make the stopping criterion easy to use for applied researchers, we have implemented the procedure in the statistical software R. Finally, we discuss how DIF effects between different nodes in a Rasch tree can be interpreted and emphasize the influence of purification strategies for the Mantel-Haenszel procedure on tree stopping and DIF item classification.
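The ETS classification referred to here maps the Mantel-Haenszel common odds ratio onto the delta metric, delta-MH = -2.35 * ln(alpha-MH), and labels |delta-MH| < 1 as A (negligible), 1 <= |delta-MH| < 1.5 as B (moderate), and |delta-MH| >= 1.5 as C (large). A minimal sketch of the effect-size part (ignoring the additional significance conditions the full ETS rules impose, and not the R implementation mentioned in the abstract):

```python
import math

def mh_odds_ratio(strata):
    """Mantel-Haenszel common odds ratio from per-stratum 2x2 tables.
    Each stratum: (ref_correct, ref_wrong, foc_correct, foc_wrong),
    with strata typically formed by total test score."""
    num = den = 0.0
    for a, b, c, d in strata:
        n = a + b + c + d
        num += a * d / n
        den += b * c / n
    return num / den

def ets_class(alpha_mh):
    """ETS A/B/C label from the delta-metric DIF effect size."""
    delta = -2.35 * math.log(alpha_mh)
    size = abs(delta)
    return "A" if size < 1.0 else ("B" if size < 1.5 else "C")
```

For example, a single stratum (40, 10, 20, 30) yields an odds ratio of 6, well into class C; an odds ratio near 1 yields class A, which is exactly the case where a Rasch tree would be stopped from splitting.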

7.
Front Psychol; 13: 1032091, 2022.
Article in English | MEDLINE | ID: mdl-36619056

ABSTRACT

Introduction: Migrant populations usually report higher smoking rates than locals. At the same time, people with a migration background have little or no access to regular smoking cessation treatment. In the last two decades, regular smoking cessation courses were adapted to reach out to Turkish- and Albanian-speaking migrants living in Switzerland. The main aims of the current study were (1) to analyze the effects of an adapted smoking cessation course for Turkish- and Albanian-speaking migrants in Switzerland on attitudes toward smoking and smoking behavior; and (2) to elucidate whether changes in attitudes toward smoking were associated with changes in smoking behavior in the short and the long term. Methods: A total of 59 smoking cessation courses (Turkish: 37; Albanian: 22) with 436 participants (T: 268; A: 168) held between 2014 and 2019 were evaluated. Attitudes toward smoking and cigarettes smoked per day were assessed at baseline and 3-month follow-up. One-year follow-up calls included assessment of cigarettes smoked per day. Data were analyzed by means of structural equation modeling with latent change scores. Results: Participation in an adapted smoking cessation course led to a decrease in positive attitudes toward smoking (T: ß = -0.65, p < 0.001; A: ß = -0.68, p < 0.001) and a decrease in cigarettes smoked per day in the short term (T: ß = -0.58, p < 0.001; A: ß = -0.43, p < 0.001), with only Turkish-speaking migrants further reducing their smoking in the long term (T: ß = -0.59, p < 0.001; A: ß = -0.14, p = 0.57). Greater decreases in positive attitudes were associated with greater reductions in smoking in the short term (T: r = 0.39, p < 0.001; A: r = 0.32, p = 0.03), but not in the long term (T: r = -0.01, p = 0.88; A: r = -0.001, p = 0.99). Conclusion: The adapted smoking cessation courses fostered changes in positive attitudes toward smoking that were associated with intended behavior change in the short term. The importance of socio-cognitive characteristics related to maintaining behavior change for further increasing long-term treatment effectiveness is discussed.

8.
Assessment; 28(5): 1301-1319, 2021 Jul.
Article in English | MEDLINE | ID: mdl-31976748

ABSTRACT

When respondents use different ways to answer rating scale items, they employ so-called response styles that can bias inferences drawn from measurement. To describe the influence of such response styles on the response process, we investigated relations between extreme, acquiescent, and mid response styles and response times in three studies using multilevel modeling. On the response level, agreement and midpoint responses, but not extreme responses, were slower. On the person level, response times increased for extreme, but not for acquiescence or mid response style traits. For all three response styles, we found negative cross-level interaction effects, indicating that a response matching the response style trait is faster. The results demonstrate that response styles facilitate the choice of specific category combinations in terms of response speed across a wide range of response style trait levels.


Subject(s)
Reaction Time , Bias , Humans
9.
Psychol Methods; 25(5): 560-576, 2020 Oct.
Article in English | MEDLINE | ID: mdl-33017166

ABSTRACT

A large variety of item response theory (IRT) modeling approaches aim at measuring and correcting for response styles in rating data. Here, we integrate response style models of the divide-by-total model family into one superordinate framework that parameterizes response styles as person-specific shifts in threshold parameters. This superordinate framework allows us to structure and compare existing approaches to modeling response styles and therewith makes model-implied restrictions explicit. With a simulation study, we show how the new framework allows us to assess consequences of violations of model assumptions and to compare response style estimates across different model parameterizations. The integrative framework of divide-by-total modeling approaches facilitates the correction for and examination of response styles. In addition to providing a superordinate framework for psychometric research, it gives guidance to applied researchers for model selection and specification in psychological assessment. (PsycInfo Database Record (c) 2020 APA, all rights reserved).
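In a divide-by-total (partial credit type) model, the category probabilities are P(X = k | theta) proportional to exp(sum over j <= k of (theta - tau_j)), and the framework described here represents response styles as person-specific shifts added to the thresholds tau_j. A minimal numeric sketch (function and parameter names are illustrative choices, not the article's notation):

```python
import math

def category_probs(theta, thresholds, shifts=None):
    """Divide-by-total category probabilities: P(X=k) is proportional to
    exp(sum over j<=k of (theta - tau_j)), with category 0 fixed at exp(0).
    Person-specific response-style shifts are added to the thresholds."""
    if shifts is None:
        shifts = [0.0] * len(thresholds)
    taus = [t + s for t, s in zip(thresholds, shifts)]
    logits = [0.0]                 # cumulative log-numerators, category 0 first
    for tau in taus:
        logits.append(logits[-1] + theta - tau)
    total = sum(math.exp(v) for v in logits)
    return [math.exp(v) / total for v in logits]
```

With symmetric thresholds and theta = 0 the distribution over categories is symmetric; nonzero shifts then tilt it toward the categories a given response style favors, which is exactly what a shift parameterization is meant to capture.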


Subject(s)
Models, Psychological , Models, Statistical , Personality , Psychology/methods , Psychometrics/methods , Computer Simulation , Humans
10.
Psychol Methods; 25(5): 577-595, 2020 Oct.
Article in English | MEDLINE | ID: mdl-33017167

ABSTRACT

Many approaches in the item response theory (IRT) literature have incorporated response styles to control for potential biases. However, the specific assumptions about response styles are often not made explicit. Having integrated different IRT modeling variants into a superordinate framework, we highlighted assumptions and restrictions of the models (Henninger & Meiser, 2020). In this article, we show that based on the superordinate framework, we can estimate the different models as multidimensional extensions of the nominal response models in standard software environments. Furthermore, we illustrate the differences in estimated parameters, restrictions, and model fit of the IRT variants in a German Big Five standardization sample and show that psychometric models can be used to debias trait estimates. Based on this analysis, we suggest 2 novel modeling extensions that combine fixed and estimated scoring weights for response style dimensions, or explain discrimination parameters through item attributes. In summary, we highlight possibilities to estimate, apply, and extend psychometric modeling approaches for response styles in order to test hypotheses on response styles through model comparisons. (PsycInfo Database Record (c) 2020 APA, all rights reserved).


Subject(s)
Models, Psychological , Models, Statistical , Personality , Psychology/methods , Psychometrics/methods , Humans
11.
Br J Math Stat Psychol; 72(3): 501-516, 2019 Nov.
Article in English | MEDLINE | ID: mdl-30756379

ABSTRACT

IRTree models decompose observed rating responses into sequences of theory-based decision nodes, and they provide a flexible framework for analysing trait-related judgements and response styles. However, most previous applications of IRTree models have been limited to binary decision nodes that reflect qualitatively distinct and unidimensional judgement processes. The present research extends the family of IRTree models for the analysis of response styles to ordinal judgement processes for polytomous decisions and to multidimensional parametrizations of decision nodes. The integration of ordinal judgement processes overcomes the limitation to binary nodes, and it allows researchers to test whether decisions reflect qualitatively distinct response processes or gradual steps on a joint latent continuum. The extension to multidimensional node models enables researchers to specify multiple judgement processes that simultaneously affect the decision between competing response options. Empirical applications highlight the roles of extreme and midpoint response style in rating judgements and show that judgement processes are moderated by different response formats. Model applications with multidimensional decision nodes reveal that decisions among rating categories are jointly informed by trait-related processes and response styles.
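The node logic of an IRTree can be sketched for a common three-node decomposition of a 5-point scale (midpoint decision, then direction, then extremity), where each observed category probability is the product of the probabilities along its path. This particular tree and all names below are an illustration, not the article's models:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def irtree_probs(theta_mid, theta_dir, theta_ext,
                 b_mid=0.0, b_dir=0.0, b_ext=0.0):
    """Binary-node IRTree for a 5-point rating scale: each node is a
    simple logistic item with its own trait (midpoint response style,
    trait-related direction, extreme response style)."""
    p_mid = sigmoid(theta_mid - b_mid)    # choose the midpoint (category 3)
    p_agree = sigmoid(theta_dir - b_dir)  # agree side (category 4 or 5)
    p_ext = sigmoid(theta_ext - b_ext)    # extreme category on the chosen side
    return [
        (1 - p_mid) * (1 - p_agree) * p_ext,        # 1: strongly disagree
        (1 - p_mid) * (1 - p_agree) * (1 - p_ext),  # 2: disagree
        p_mid,                                      # 3: midpoint
        (1 - p_mid) * p_agree * (1 - p_ext),        # 4: agree
        (1 - p_mid) * p_agree * p_ext,              # 5: strongly agree
    ]
```

The ordinal and multidimensional extensions described in the abstract replace such binary nodes with polytomous node models, or let several traits load on one node simultaneously.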


Subject(s)
Bias , Data Interpretation, Statistical , Decision Making , Judgment , Models, Statistical , Research/statistics & numerical data
12.
Behav Res Methods; 47(2): 506-518, 2015 Jun.
Article in English | MEDLINE | ID: mdl-24903691

ABSTRACT

In research on multiattribute decisions, information is typically preorganized in a well-structured manner (e.g., in attributes-by-options matrices). Participants can therefore conveniently identify the information needed for the decision strategy they are using. However, in everyday decision situations, we often face information that is not well-structured; that is, we not only have to search for, but we also need to organize the information. This latter aspect--subjective information organization--has so far largely been neglected in decision research. The few exceptions used crude experimental manipulations, and the assessment of subjective organization suffered from laborious methodology and a lack of objectivity. We introduce a new task format to overcome these methodological issues, and we provide an organization index (OI) to assess subjective organization of information objectively and automatically. The OI makes it possible to assess information organization on the same scale as the strategy index (SI) typically used for assessing information search behavior. A simulation study shows that the OI has a similar distribution to the SI but that the two indices are a priori largely independent. In a validation experiment with instructed strategy use, we demonstrate the usefulness of the task to trace decision processes in multicue inference situations.
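A strategy index of this kind is conventionally computed from transitions between successively inspected cells of the options-by-attributes matrix. The following minimal sketch uses one common transition-counting convention, not necessarily the article's exact SI or OI definitions:

```python
def strategy_index(acquisitions):
    """Strategy index from a sequence of (option, attribute) cells
    inspected in an options-by-attributes matrix.

    +1 = purely option-wise search (same option, different attribute),
    -1 = purely attribute-wise search (same attribute, different option).
    Transitions that change both option and attribute are ignored here."""
    option_wise = attribute_wise = 0
    for (o1, a1), (o2, a2) in zip(acquisitions, acquisitions[1:]):
        if o1 == o2 and a1 != a2:
            option_wise += 1
        elif a1 == a2 and o1 != o2:
            attribute_wise += 1
    if option_wise + attribute_wise == 0:
        return 0.0
    return (option_wise - attribute_wise) / (option_wise + attribute_wise)
```

Scanning one option's attributes before moving on yields +1; scanning one attribute across all options yields -1, which is the scale the OI described above shares.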


Subject(s)
Decision Making , Information Seeking Behavior , Mental Processes , Behavioral Research/methods , Humans