ABSTRACT
OBJECTIVES: We tested the effect of true and fabricated baseline statements from the same sender on veracity judgments. HYPOTHESES: We predicted that presenting a combination of true and fabricated baseline statements would improve both truth and lie detection accuracy, that presenting a true baseline would improve only truth detection, and that presenting a fabricated baseline would improve only lie detection, compared with presenting no baseline statement. METHOD: In a 4 × 2 within-subjects design, 142 student participants (Mage = 23.47 years; 118 female) read no baseline statement, a true baseline statement, a fabricated baseline statement, and a combination of a true and a fabricated baseline statement from 29 different senders. Participants then rated the veracity of a true or fabricated target statement from the same 29 senders. RESULTS: Logistic mixed-effects models with senders and participants as random effects showed no significant differences in overall veracity judgment accuracy between the no-baseline condition (51%) and either the true-baseline (44%) or the fabricated-baseline (49%) condition. Equivalence tests failed to show the predicted equivalence of these accuracy rates. Separate analyses of truth and lie detection rates confirmed the predicted improvement of lie detection in the combination-of-true-and-fabricated-baseline condition (accuracy rose from 39% to 61%). No other truth or lie detection rate changed significantly, except that, unexpectedly, a true baseline reduced truth detection accuracy (from 64% to 49%). CONCLUSIONS: Baseline statements largely did not affect judgment accuracy and, in the case of true baselines, even had a negative impact on truth detection. The rather small positive effect of two baseline statements on lie detection suggests an avenue for further research, especially with expert raters.
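As a purely illustrative aside, the crossed random-effects structure described in the RESULTS (a logistic model with senders and participants as random effects) can be sketched in Python. The sketch below is a stand-in under stated assumptions, not the authors' analysis: the data file and the column names (judgment_correct, condition, participant, sender) are hypothetical, and statsmodels' Bayesian mixed GLM is used as one way to fit crossed random intercepts.

# Minimal sketch, assuming a long-format table with hypothetical columns
# judgment_correct (0/1), condition, participant and sender; this is not
# necessarily how the original analysis was run.
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

df = pd.read_csv("veracity_judgments.csv")  # hypothetical file name

model = BinomialBayesMixedGLM.from_formula(
    "judgment_correct ~ C(condition)",        # fixed effect of baseline condition
    {"participant": "0 + C(participant)",     # random intercepts for raters
     "sender": "0 + C(sender)"},              # random intercepts for senders
    df,
)
result = model.fit_vb()  # variational Bayes estimation
print(result.summary())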
Subjects
Lie Detection, Adult, Female, Humans, Judgment, Students, Young Adult

ABSTRACT
In 2014, Volbert and Steller introduced a revised model of Criteria-Based Content Analysis (CBCA) that grouped a modified set of content criteria in closer reference to their assumed latent processes, resulting in three dimensions of memory-related, script-deviant, and strategy-based criteria. The model assumes that deceivers try to integrate memory-related criteria (but will not be as good as truth tellers at achieving this), whereas, out of strategic considerations, they will avoid expressing the other criteria. The aim of the current study was to test this assumption. A vignette was presented via an online questionnaire asking participants (n = 135) to rate the strategic value of the CBCA criteria on a five-point scale. One-sample t-tests showed that participants attributed positive strategic value to most memory-related criteria and negative value to the remaining criteria, except for the criteria self-deprecation and pardoning the perpetrator. Overall, our results corroborated the model's suitability for distinguishing different groups of criteria, some of which liars are inclined to integrate and others of which liars intend to avoid, and in this way provide useful hints for forensic practitioners in appraising the criteria's diagnostic value.
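Because the analysis above rests on one-sample t-tests of five-point ratings, a minimal sketch of such a test may help readers who want to run a comparable check; the ratings array and the neutral midpoint of 3 are assumptions for illustration, not values taken from the study.

# Minimal sketch: test whether the mean rated strategic value of one CBCA
# criterion differs from an assumed neutral midpoint of 3 on a 1-5 scale.
import numpy as np
from scipy import stats

ratings = np.array([4, 5, 3, 4, 4, 2, 5, 4, 3, 4])  # hypothetical ratings for one criterion
t_stat, p_value = stats.ttest_1samp(ratings, popmean=3)
print(f"M = {ratings.mean():.2f}, t = {t_stat:.2f}, p = {p_value:.3f}")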
ABSTRACT
Statement Validity Assessment (SVA) proposes that baseline statements about different events can serve as a within-subject measure of a witness's individual verbal capabilities when evaluating scores from Criteria-Based Content Analysis (CBCA). This assumes that CBCA scores will generally be consistent across two accounts by the same witness. We present a first pilot study on this assumption. In two sessions, we asked 29 participants to produce one experience-based and one fabricated baseline account as well as one experience-based and one fabricated target account (each about a different event), resulting in a total of 116 accounts. We hypothesized at least moderate correlations between target and baseline scores, indicating consistency of both experience-based and fabricated CBCA scores, and that fabricated CBCA scores would be more consistent, because truth telling has to accommodate random event characteristics, whereas lies must be constructed entirely by the individual witness. Results showed that the difference between the correlation of experience-based CBCA scores and the correlation of fabricated CBCA scores took the predicted direction (c_experience-based = .44 versus c_fabricated = .61), but this difference was not statistically significant. As predicted, a subgroup of event-related CBCA criteria was significantly less consistent than CBCA total scores, but only in experience-based accounts. The discussion considers methodological issues regarding the use of CBCA total scores and whether to measure consistency with correlation coefficients. It is concluded that more studies with larger samples are needed.
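To make the consistency comparison concrete, the following sketch correlates baseline and target CBCA total scores within each veracity condition and compares the two coefficients with a Fisher z-test. Everything in it is hypothetical: the score arrays are simulated, Pearson's r stands in for the coefficient the study denotes c, and the independent-samples z-test shown here ignores that both coefficients come from the same 29 participants.

# Minimal sketch, not the study's actual analysis: correlate baseline and
# target CBCA totals per condition and compare the coefficients via Fisher's z.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 29  # number of participants, as in the study
baseline_true = rng.normal(10, 3, n)               # hypothetical CBCA total scores
target_true = baseline_true + rng.normal(0, 3, n)
baseline_fab = rng.normal(8, 3, n)
target_fab = baseline_fab + rng.normal(0, 2, n)

r_true, _ = stats.pearsonr(baseline_true, target_true)
r_fab, _ = stats.pearsonr(baseline_fab, target_fab)

# Fisher z-transform; standard error for two independent correlations of equal n
z_diff = (np.arctanh(r_true) - np.arctanh(r_fab)) / np.sqrt(2.0 / (n - 3))
p = 2 * stats.norm.sf(abs(z_diff))
print(f"c_experience-based = {r_true:.2f}, c_fabricated = {r_fab:.2f}, p = {p:.3f}")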