Results 1 - 4 of 4
1.
Behav Res Methods; 56(7): 1-21, 2024 Oct.
Article in English | MEDLINE | ID: mdl-38575774

ABSTRACT

In recent years, much research and many data sources have become digital. Some advantages of digital or Internet-based research compared to traditional lab research (e.g., comprehensive data collection and storage, availability of data) are ideal for an improved approach to meta-analysis. Meanwhile, in meta-analytic research, different types of meta-analyses have been developed to provide research syntheses with accurate quantitative estimates. Due to its rich and unique palette of corrections, we recommend using the Schmidt and Hunter approach for meta-analyses in a digitalized world. Our primer shows, step by step, how to conduct a high-quality meta-analysis that incorporates digital data, and it highlights the most common pitfalls (e.g., relying only on a bare-bones meta-analysis, omitting data comparison) not only in the aggregation of data but also in the literature search and coding procedure, which are essential steps in any meta-analysis. This primer is therefore especially suited to where much future research is headed: digital research. To map Internet-based research and reveal any research gaps, we further synthesize meta-analyses on Internet-based research (15 articles containing 24 different meta-analyses, based on 745 studies with 1,601 effect sizes), resulting in the first mega meta-analysis of the field. We found a lack of individual participant data (e.g., age and nationality). Hence, we provide a primer for high-quality meta-analyses and mega meta-analyses that applies to much of the coming research, as well as basic hands-on knowledge for conducting or judging the quality of a meta-analysis in a digitalized world.
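The "bare-bones" aggregation the abstract warns against relying on exclusively can be sketched in a few lines. The function below is a hypothetical minimal implementation of the Schmidt and Hunter sample-size-weighted estimates (the name and interface are assumptions, not code from the article):

```python
# Minimal sketch of a bare-bones Schmidt-Hunter aggregation step
# (illustrative only; not code from the article).

def bare_bones(rs, ns):
    """Sample-size-weighted mean correlation, observed variance, and
    the variance left after removing expected sampling error."""
    n_total = sum(ns)
    r_bar = sum(n * r for r, n in zip(rs, ns)) / n_total
    # Weighted observed variance of correlations across studies
    var_obs = sum(n * (r - r_bar) ** 2 for r, n in zip(rs, ns)) / n_total
    # Expected sampling-error variance, using the mean sample size
    n_mean = n_total / len(ns)
    var_e = (1 - r_bar ** 2) ** 2 / (n_mean - 1)
    # Residual variance potentially due to real moderators (floored at 0)
    var_rho = max(0.0, var_obs - var_e)
    return r_bar, var_obs, var_e, var_rho
```

A full Schmidt and Hunter analysis would go on to correct `r_bar` and `var_rho` for measurement error and other artifacts, which is precisely the step a bare-bones analysis omits.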


Subjects
Internet, Meta-Analysis as Topic, Humans, Research Design
2.
PLoS One; 19(7): e0307594, 2024.
Article in English | MEDLINE | ID: mdl-39052673

ABSTRACT

Teachers' judgment accuracy is a core competency in their daily work. Due to its importance, several meta-analyses have estimated how accurately teachers judge students' academic achievements by measuring teachers' judgment accuracy (i.e., the correlation between teachers' judgments of students' academic abilities and students' scores on achievement tests). In our study, we built on previous meta-analyses, updated their databases, and reanalyzed the combined data with a psychometric meta-analysis to explain variations in results across studies. Our results demonstrate the importance of considering aggregation and publication bias as well as correcting for the most important artifacts (e.g., sampling and measurement error), but also that most studies fail to report the data needed for conducting a meta-analysis according to current best practices. We find that previous reviews have underestimated teachers' judgment accuracy and overestimated the variance in estimates of teachers' judgment accuracy across studies, because at least 10% of this variance may be associated with common artifacts. We conclude that ignoring artifacts, as in classical meta-analysis, may lead one to erroneously conclude that moderator variables, rather than artifacts, explain any variation. We describe how online data repositories could improve the scientific process and the potential for using psychometric meta-analysis to synthesize results and assess replicability.
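The measurement-error correction mentioned above (one of the "most important artifacts") is the classic disattenuation formula. The snippet below is an illustrative sketch with made-up reliability values, not data from the study:

```python
import math

def disattenuate(r_xy, rel_x, rel_y):
    """Correct an observed correlation for measurement error in both
    variables (Spearman's correction for attenuation)."""
    return r_xy / math.sqrt(rel_x * rel_y)

# Hypothetical example: an observed judgment-accuracy correlation of
# .63, with achievement-test reliability of .81 on both sides,
# disattenuates to roughly .78
rho = disattenuate(0.63, 0.81, 0.81)
```

Because observed correlations are attenuated by unreliability, pooling them without this correction systematically underestimates the true-score relationship, which is one reason the abstract argues earlier reviews understated teachers' judgment accuracy.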


Subjects
Judgment, Psychometrics, Humans, Psychometrics/methods, School Teachers/psychology, Students/psychology
3.
PLoS One; 11(6): e0157914, 2016.
Article in English | MEDLINE | ID: mdl-27327085

ABSTRACT

The success of bootstrapping or replacing a human judge with a model (e.g., an equation) has been demonstrated in Paul Meehl's (1954) seminal work and bolstered by the results of several meta-analyses. To date, however, analyses considering different types of meta-analyses as well as the potential dependence of bootstrapping success on the decision domain, the level of expertise of the human judge, and the criterion for what constitutes an accurate decision have been missing from the literature. In this study, we addressed these research gaps by conducting a meta-analysis of lens model studies. We compared the results of a traditional (bare-bones) meta-analysis with findings of a meta-analysis of the success of bootstrap models corrected for various methodological artifacts. In line with previous studies, we found that bootstrapping was more successful than human judgment. Furthermore, bootstrapping was more successful in studies with an objective decision criterion than in studies with subjective or test score criteria. We did not find clear evidence that the success of bootstrapping depended on the decision domain (e.g., education or medicine) or on the judge's level of expertise (novice or expert). Correction of methodological artifacts increased the estimated success of bootstrapping, suggesting that previous analyses without artifact correction (i.e., traditional meta-analyses) may have underestimated the value of bootstrapping models.
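One of the methodological artifacts such corrections address is the artificial dichotomization of a continuous variable (e.g., scoring a decision as simply right or wrong). A minimal sketch of the standard Hunter-Schmidt correction, using only the Python standard library (illustrative, not code from the study):

```python
import math
from statistics import NormalDist

def undichotomize(r_obs, p):
    """Correct a correlation attenuated by splitting one continuous
    variable at a cut that places proportion p in one group."""
    c = NormalDist().inv_cdf(p)      # normal cut point for proportion p
    phi_c = NormalDist().pdf(c)      # normal density at the cut
    attenuation = phi_c / math.sqrt(p * (1 - p))
    return r_obs / attenuation

# A median split (p = 0.5) attenuates r by about 0.80, so an observed
# r of .30 corrects to roughly .376
```

Applying such corrections study by study is what raises the estimated success of bootstrapping relative to a bare-bones analysis, as the abstract reports.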


Subjects
Decision Making, Meta-Analysis as Topic, Artifacts, Databases as Topic, Linear Models, Psychometrics
4.
PLoS One; 8(12): e83528, 2013.
Article in English | MEDLINE | ID: mdl-24391781

ABSTRACT

Achieving accurate judgment ('judgmental achievement') is of utmost importance in daily life across multiple domains. The lens model and the lens model equation provide useful frameworks for modeling components of judgmental achievement and for creating tools to help decision makers (e.g., physicians, teachers) reach better judgments (e.g., a correct diagnosis, an accurate estimation of intelligence). Previous meta-analyses of judgment and decision-making studies have attempted to evaluate overall judgmental achievement and have provided the basis for evaluating the success of bootstrapping (i.e., replacing judges with linear models that guide decision making). However, previous meta-analyses have failed to appropriately correct for a number of study design artifacts (e.g., measurement error, dichotomization), which may have biased estimates (e.g., of the variability between studies) and led to erroneous interpretations (e.g., with regard to moderator variables). In the current study, we therefore conduct the first psychometric meta-analysis of judgmental achievement studies that corrects for a number of study design artifacts. We identified 31 lens model studies (N = 1,151, k = 49) that met our inclusion criteria. We evaluated overall judgmental achievement as well as whether judgmental achievement depended on decision domain (e.g., medicine, education) and/or level of expertise (expert vs. novice). We also evaluated whether using corrected estimates affected conclusions with regard to the success of bootstrapping with psychometrically corrected models. Further, we introduce a new psychometric trim-and-fill method to estimate the effect sizes of potentially missing studies and to correct psychometric meta-analyses for the effects of publication bias. Comparison of the results of the psychometric meta-analysis with the results of a traditional meta-analysis (which only corrected for sampling error) indicated that artifact correction leads to (a) increased values of the lens model components, (b) reduced heterogeneity between studies, and (c) greater success of bootstrapping. We argue that psychometric meta-analysis is useful for accurately evaluating human judgment and for demonstrating the success of bootstrapping.
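The trim-and-fill idea can be illustrated with a deliberately simplified, unweighted version of Duval and Tweedie's run-based estimator; the article's psychometric variant is more involved, so treat this as a sketch only:

```python
def trim_and_fill(effects, max_iter=50):
    """Simplified unweighted trim-and-fill: estimate how many of the
    largest effects lack small-effect counterparts (run-based R0
    estimator), then mirror them about the trimmed mean."""
    ys = sorted(effects)
    k0 = 0
    for _ in range(max_iter):
        trimmed = ys[:len(ys) - k0] if k0 else ys
        m = sum(trimmed) / len(trimmed)
        # Order all effects by distance from the trimmed mean and count
        # the run of positive deviations among the most extreme ones
        by_dist = sorted(ys, key=lambda y: abs(y - m), reverse=True)
        run = 0
        for y in by_dist:
            if y - m > 0:
                run += 1
            else:
                break
        new_k0 = max(0, run - 1)
        if new_k0 == k0:
            break
        k0 = new_k0
    trimmed = ys[:len(ys) - k0] if k0 else ys
    m = sum(trimmed) / len(trimmed)
    # Fill: impute mirror images of the k0 trimmed studies
    filled = ys + [2 * m - y for y in ys[len(ys) - k0:]]
    return k0, sum(filled) / len(filled)
```

On a toy set of seven effects with the small ones suppressed, e.g. `[0.4, 0.42, 0.44, 0.46, 0.9, 1.0, 1.1]`, this flags two missing studies and pulls the unweighted mean from about .67 down to about .52; real analyses would weight studies by precision and, as in the article, operate on psychometrically corrected effect sizes.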


Subjects
Decision Making, Judgment, Decision Support Techniques, Humans, Linear Models, Psychometrics, Publication Bias, Selection Bias