1.
PLoS One ; 17(8): e0271288, 2022.
Article in English | MEDLINE | ID: mdl-35921280

ABSTRACT

This paper draws on individual-level data from the National Survey of Family Growth (NSFG) to identify likely underreporters of abortion and miscarriage and examine their characteristics. The NSFG asks about abortion and miscarriage twice, once in the computer-assisted personal interviewing (CAPI) part of the questionnaire and again in the audio computer-assisted self-interviewing (ACASI) part. We used two different methods to identify likely underreporters of abortion and miscarriage: direct comparison of the answers obtained from CAPI and ACASI, and latent class models. The two methods produce very similar results. Although miscarriages are just as prone to underreporting as abortions, the characteristics of women underreporting abortion differ somewhat from those of women underreporting miscarriage. Underreporters of abortion tended to be older, poorer, less likely to be Hispanic or Black, and more likely to have no religion; they also reported more traditional attitudes toward sexual behavior. By contrast, underreporters of miscarriage likewise tended to be older and poorer, but were more likely to be Hispanic or Black, more likely to have children in the household, had fewer pregnancies, and held less traditional attitudes toward marriage.
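
The direct-comparison method can be sketched in a few lines. This is a minimal illustration, assuming a hypothetical data frame with one row per respondent and 0/1 indicator columns `capi_abortion` and `acasi_abortion`; the column names are illustrative, not the NSFG variable names:

```python
import pandas as pd

def flag_likely_underreporters(df: pd.DataFrame) -> pd.Series:
    """Flag respondents who deny an abortion in the interviewer-administered
    CAPI item but report one in the self-administered ACASI item.

    Assumes hypothetical 0/1 indicator columns (not actual NSFG names)."""
    return (df["capi_abortion"] == 0) & (df["acasi_abortion"] == 1)

# Illustrative usage with made-up records
df = pd.DataFrame({"capi_abortion": [0, 1, 0], "acasi_abortion": [1, 1, 0]})
print(flag_likely_underreporters(df))  # True only for the first respondent
```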


Subjects
Abortion, Induced; Abortion, Spontaneous; Abortion, Spontaneous/epidemiology; Child; Family Characteristics; Female; Humans; Marriage; Pregnancy; Sexual Behavior
2.
J Surv Stat Methodol ; 9(5): 961-991, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34912940

ABSTRACT

Although most survey researchers agree that reliability is a critical requirement for survey data, there have not been many efforts to assess the reliability of responses in national surveys. In addition, there are quite different approaches to studying the reliability of survey responses. In the first section of the Lecture, I contrast a psychological theory of over-time consistency with three statistical models that use reinterview data, multi-trait multi-method experiments, and three-wave panel data to estimate reliability. The more sophisticated statistical models reflect concerns about memory effects and the impact of method factors in reinterview studies. In the following section of the Lecture, I examine some of the major findings from the literature on reliability. Despite the differences across methods for exploring reliability, the findings mostly converge, identifying similar respondent and question characteristics as major determinants of reliability. The next section of the paper looks at the correlations among estimates of reliability derived from the different methods; it finds some support for the validity of the measures from traditional reinterview studies. The empirical claims motivating the more sophisticated methods for estimating reliability are not strongly supported in the literature. Reliability is, in my judgment, a neglected topic among survey researchers, and I hope the Lecture spurs further studies of the reliability of survey questions.
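
Of the three statistical approaches named above, the three-wave panel method admits a compact closed-form estimator in the spirit of Heise's simplex model. The sketch below is a generic illustration of that approach, not the Lecture's own derivation:

```python
def heise_reliability(r12: float, r23: float, r13: float) -> float:
    """Heise-style reliability estimate from a three-wave panel.

    Under a lag-1 simplex model with constant reliability across waves,
    true-score stability cancels out of the ratio r12 * r23 / r13,
    leaving an estimate of the measure's reliability."""
    return (r12 * r23) / r13

# Illustrative over-time correlations (hypothetical, not from the Lecture)
print(heise_reliability(0.60, 0.63, 0.54))  # 0.70
```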

3.
J Surv Stat Methodol ; 9(4): 651-673, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34671685

ABSTRACT

The usual method for assessing the reliability of survey data has been to conduct reinterviews a short interval (such as one to two weeks) after an initial interview and to use these data to estimate relatively simple statistics, such as gross difference rates (GDRs). More sophisticated approaches have also been used to estimate reliability. These include estimates from multi-trait, multi-method experiments, models applied to longitudinal data, and latent class analyses. To our knowledge, no prior study has systematically compared these different methods for assessing reliability. The Population Assessment of Tobacco and Health Reliability and Validity (PATH-RV) Study, done on a national probability sample, assessed the reliability of answers to the Wave 4 questionnaire from the PATH Study. Respondents in the PATH-RV were interviewed twice, about two weeks apart. We examined whether the classic survey approach yielded different conclusions from the more sophisticated methods. We also examined two ex ante methods for assessing problems with survey questions, along with item nonresponse rates and response times, to see how strongly these related to the different reliability estimates. We found that kappa was highly correlated with both GDRs and over-time correlations, but the latter two statistics were less highly correlated, particularly for adult respondents; estimates from longitudinal analyses of the same items in the main PATH Study were also highly correlated with the traditional reliability estimates. The latent class analysis results, based on fewer items, also showed a high level of agreement with the traditional measures. The other methods and indicators had at best weak relationships with the reliability estimates derived from the reinterview data. Although the Question Understanding Aid seems to tap a different factor from the other measures, for adult respondents it did predict item nonresponse and response latencies and thus may be a useful adjunct to the traditional measures.
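
The two classic reinterview statistics compared above are straightforward to compute from paired categorical answers. A minimal sketch (variable names are illustrative):

```python
from collections import Counter

def gdr(t1: list, t2: list) -> float:
    """Gross difference rate: share of respondents whose reinterview
    answer differs from their original answer."""
    return sum(a != b for a, b in zip(t1, t2)) / len(t1)

def kappa(t1: list, t2: list) -> float:
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(t1)
    po = sum(a == b for a, b in zip(t1, t2)) / n       # observed agreement
    c1, c2 = Counter(t1), Counter(t2)
    pe = sum(c1[k] * c2[k] for k in c1) / n ** 2       # chance agreement
    return (po - pe) / (1 - pe)

# Illustrative interview/reinterview answers (made-up data)
answers_w1 = ["yes", "no", "no", "yes", "no"]
answers_w2 = ["yes", "no", "yes", "yes", "no"]
print(gdr(answers_w1, answers_w2))    # 0.2
print(kappa(answers_w1, answers_w2))  # about 0.62
```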

4.
J Surv Stat Methodol ; 9(1): 202-204, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33521155

ABSTRACT

[This corrects the article DOI: 10.1093/jssam/smz034.]

5.
J Surv Stat Methodol ; 8(5): 903-931, 2020 Nov.
Article in English | MEDLINE | ID: mdl-33381609

ABSTRACT

Using reinterview data from the PATH Reliability and Validity (PATH-RV) study, we examine the characteristics of questions and respondents that predict the reliability of the answers. In the PATH-RV study, 524 respondents completed an interview twice, five to twenty-four days apart. We coded a number of question characteristics and used them to predict the gross discrepancy rates (GDRs) and kappas for each question. We also investigated respondent characteristics associated with reliability. Finally, we fitted cross-classified models that simultaneously examined a range of respondent and question characteristics. Although the different models yielded somewhat different conclusions, in general factual questions (especially demographic questions), shorter questions, questions that did not use scales, those with fewer response options, and those that asked about a noncentral topic produced more reliable answers than attitudinal questions, longer questions, questions using ordinal scales, those with more response options, and those asking about a central topic. One surprising finding was that items raising potential social desirability concerns yielded more reliable answers than items that did not raise such concerns. The respondent-level models and cross-classified models indicated that five adult respondent characteristics were associated with giving the same answer in both interviews: education, the Big Five trait of conscientiousness, tobacco use, sex, and income. Hispanic youths and non-Hispanic Black youths were less likely to give the same answer in both interviews. The cross-classified model also found that questions with more words were associated with less reliable answers. The results are mostly consistent with earlier findings but are nonetheless important because they are much less model-dependent than the earlier work. In addition, this study is the first to incorporate personality traits such as need for cognition and the Big Five personality factors and to examine the relationships among reliability, item nonresponse, and response latency.

7.
Tob Control ; 28(6): 663-668, 2019 Nov.
Article in English | MEDLINE | ID: mdl-30297373

ABSTRACT

INTRODUCTION: This paper reports a study done to estimate the reliability and validity of answers to the Youth and Adult questionnaires of the Population Assessment of Tobacco and Health (PATH) Study. METHODS: 407 adult and 117 youth respondents completed the wave 4 (2016-2017) PATH Study interview twice, 6-24 days apart. The reinterview data were used to estimate the reliability of answers to the questionnaire. Kappa statistics, gross discrepancy rates, and correlations between answers to the initial interview and the reinterview were used to measure reliability. We examined every item in the questionnaire for which there were at least 100 observations. After the reinterview, most respondents provided a saliva sample that allowed us to assess the accuracy of their answers to the tobacco use questions. RESULTS: There was generally a very high level of agreement between answers in the interview and reinterview. On the key current tobacco use items, the average kappa (the agreement rate adjusted for chance agreement) was 0.79 for adult respondents (age 18 or older). Youth respondents exhibited equally high levels of agreement across interviews. The items on current tobacco use also exhibited high levels of agreement with saliva test results (kappa=0.72). Rating scale items showed lower levels of exact agreement across interviews, but the answers were generally within one scale point or category. CONCLUSIONS: The PATH Study questions were developed using a careful protocol, and the results indicate that the answers provide reliable and valid information about tobacco use.


Subjects
Saliva/chemistry; Surveys and Questionnaires; Tobacco Use/epidemiology; Adolescent; Adult; Age Factors; Child; Humans; Interviews as Topic; Reproducibility of Results; Young Adult
10.
Surv Res Methods ; 11(1): 45-61, 2017 Apr 10.
Article in English | MEDLINE | ID: mdl-31745400

ABSTRACT

It is well known that some survey respondents reduce the effort they invest in answering questions by taking mental shortcuts, a phenomenon known as survey satisficing. This is a concern because such shortcuts can reduce the quality of responses and, potentially, the accuracy of survey estimates. This article explores "speeding," an extreme type of satisficing, which we define as answering so quickly that respondents could not have given much, if any, thought to their answers. To reduce speeding among online respondents, we implemented an interactive prompting technique. When respondents answered faster than a minimal response time threshold, they received a message encouraging them to answer carefully and take their time. Across six web survey experiments, this prompting technique reduced speeding on subsequent questions compared to a no-prompt control. Prompting slowed response times whether the speeding that triggered the prompt occurred early or late in the questionnaire, whether it occurred in the first or later waves of a longitudinal survey, whether respondents were recruited from non-probability or probability panels, and whether the prompt was delivered on only the first or on all speeding episodes. In addition to reducing speeding, the prompts increased response accuracy on simple arithmetic questions for a key subgroup. Prompting also reduced later straightlining in one experiment, suggesting the benefits may generalize to other types of mental shortcuts. Although the prompting could have annoyed respondents, it was not accompanied by a noticeable increase in breakoffs. As an alternative technique, respondents in one experiment were asked to explicitly commit to responding carefully. This global approach complemented the more local, interactive prompting technique on several measures. Taken together, these results suggest that interactive interventions of this sort may be useful for increasing respondents' conscientiousness in online questionnaires, even though these questionnaires are self-administered.
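
The core of the interactive prompting technique reduces to a per-question check against a minimal response-time threshold. A sketch under stated assumptions: the threshold value, the one-prompt-per-respondent option, and the message wording are illustrative, not the experiments' exact parameters:

```python
def check_speeding(response_ms: int, threshold_ms: int = 2000,
                   already_prompted: bool = False,
                   prompt_once: bool = True) -> str | None:
    """Return a prompt message if the answer came faster than the
    threshold; optionally prompt only on the first speeding episode."""
    if response_ms >= threshold_ms:
        return None                      # answered slowly enough; no prompt
    if prompt_once and already_prompted:
        return None                      # only the first episode is prompted
    return ("You seem to have responded very quickly. Please be sure you "
            "have given the question enough thought before answering.")

print(check_speeding(850))   # speeder: prompt shown
print(check_speeding(3500))  # careful respondent: no prompt
```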

12.
Ann Am Acad Pol Soc Sci ; 645(1): 6-22, 2013 Jan.
Article in English | MEDLINE | ID: mdl-25506081
13.
Soc Sci Comput Rev ; 31(3): 322-345, 2013 Jun.
Article in English | MEDLINE | ID: mdl-25258472

ABSTRACT

Grid or matrix questions are associated with a number of problems in Web surveys. In this paper, we present results from two experiments testing designs of grid questions intended to reduce breakoffs, missing data, and satisficing. The first experiment examines dynamic elements that help guide respondents through the grid, and tests splitting a larger grid into component pieces. The second manipulates the visual complexity of the grid and tests ways of simplifying it. We find that using dynamic feedback to guide respondents through a multi-question grid helps reduce missing data. Splitting the grids into component questions further reduces missing data and motivated underreporting. The visual complexity of the grid appeared to have little effect on performance.

14.
Public Opin Q ; 77(Suppl 1): 69-88, 2013.
Article in English | MEDLINE | ID: mdl-24634546

ABSTRACT

This paper presents results from six experiments that examine the effect of the position of an item on the screen on the evaluative ratings it receives. The experiments are based on the idea that respondents expect "good" things (those they view positively) to be higher up on the screen than "bad" things. The experiments use items on different topics (Congress and HMOs, a variety of foods, and six physician specialties) and different methods for varying their vertical position on the screen. A meta-analysis of all six experiments demonstrates a small but reliable effect of the item's screen position on mean ratings of the item; the ratings are significantly more positive when the item appears in a higher position on the screen than when it appears farther down. These results are consistent with the hypothesis that respondents follow the "up means good" heuristic, using the vertical position of the item as a cue in evaluating it. Respondents seem to rely on heuristics both in interpreting response scales and in forming judgments.
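
Pooling effects across experiments of this kind is typically done with inverse-variance weighting. The sketch below shows a generic fixed-effect meta-analysis; the per-experiment effects and standard errors are made-up placeholders, not the paper's estimates:

```python
import math

def fixed_effect_meta(effects: list[float], ses: list[float]):
    """Pool per-experiment effect estimates with inverse-variance weights;
    returns the pooled effect and its standard error."""
    w = [1 / se ** 2 for se in ses]
    pooled = sum(wi * ei for wi, ei in zip(w, effects)) / sum(w)
    se_pooled = math.sqrt(1 / sum(w))
    return pooled, se_pooled

# Hypothetical screen-position effects (higher = more positive rating)
effects = [0.10, 0.06, 0.12, 0.04, 0.09, 0.07]
ses = [0.05, 0.04, 0.06, 0.05, 0.04, 0.05]
print(fixed_effect_meta(effects, ses))
```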

15.
Soc Sci Res ; 41(5): 1017-27, 2012 Sep.
Article in English | MEDLINE | ID: mdl-23017914

ABSTRACT

Latent class analysis (LCA) has been hailed as a promising technique for studying measurement errors in surveys because the models produce estimates of the error rates associated with a given question. Still, the issue arises as to how accurate these error estimates are and under what circumstances they can be relied on. Skeptics argue that latent class models can understate the true error rates, and at least one paper (Kreuter et al., 2008) demonstrates such underestimation empirically. We applied latent class models to data from two waves of the National Survey of Family Growth (NSFG), focusing on a pair of similar items about abortion that are administered under different modes of data collection. The first item is administered by computer-assisted personal interviewing (CAPI); the second, by audio computer-assisted self-interviewing (ACASI). Evidence shows that abortions are underreported in the NSFG, and the conventional wisdom is that the ACASI item yields fewer false negatives than the CAPI item. To evaluate these items, we made assumptions about the error rates within various subgroups of the population; these assumptions were needed to achieve an identifiable LCA model. Because external data are available on the actual prevalence of abortion (by subgroup), we were able to form subgroups for which the identifying restrictions were likely to be (approximately) met and other subgroups for which the assumptions were likely to be violated. We also ran more complex models that took potential heterogeneity within subgroups into account. Most of the models yielded implausibly low error rates, supporting the argument that, under specific conditions, LCA models underestimate the error rates.
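
For readers unfamiliar with how latent class models yield error-rate estimates, here is a bare-bones two-class EM fit for binary items. This is a generic sketch, not the NSFG specification, which required the additional identifying restrictions described above:

```python
import numpy as np

def lca_em(X: np.ndarray, n_classes: int = 2,
           n_iter: int = 500, seed: int = 0):
    """Fit a latent class model to binary items with the EM algorithm.

    X is an (n, J) array of 0/1 answers. Returns estimated class
    prevalences pi and item-response probabilities rho[c, j] =
    P(item j = 1 | class c); for the class interpreted as "event
    occurred," 1 - rho[c, j] is the implied false-negative rate."""
    rng = np.random.default_rng(seed)
    n, J = X.shape
    pi = np.full(n_classes, 1.0 / n_classes)          # class prevalences
    rho = rng.uniform(0.3, 0.7, size=(n_classes, J))  # item-response probs
    for _ in range(n_iter):
        # E-step: posterior class-membership probabilities per respondent
        log_post = (np.log(pi) + X @ np.log(rho).T
                    + (1 - X) @ np.log(1 - rho).T)
        post = np.exp(log_post - log_post.max(axis=1, keepdims=True))
        post /= post.sum(axis=1, keepdims=True)
        # M-step: re-estimate prevalences and item-response probabilities
        pi = post.mean(axis=0)
        rho = (post.T @ X) / post.sum(axis=0)[:, None]
        rho = rho.clip(1e-6, 1 - 1e-6)                # guard the logs
    return pi, rho

# Illustrative use with made-up 0/1 answers to three questions
X = np.array([[1, 1, 1], [1, 0, 1], [0, 0, 0],
              [0, 0, 1], [1, 1, 0], [0, 0, 0]], dtype=float)
pi_hat, rho_hat = lca_em(X)
```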

16.
J Off Stat ; 27(1): 65-85, 2011.
Article in English | MEDLINE | ID: mdl-23411468

ABSTRACT

Web surveys often collect information such as frequencies, currency amounts, dates, or other items requiring short structured answers in an open-ended format, typically using text boxes for input. We report on several experiments exploring design features of such input fields. We find little effect of the size of the input field on whether frequency or dollar amount answers are well-formed. By contrast, the use of templates to guide formatting significantly improves the well-formedness of responses to questions eliciting currency amounts. For date questions (whether month/year or month/day/year), we find that separate input fields improve the quality of responses over single input fields, while drop boxes further reduce the proportion of ill-formed answers. Drop boxes also reduce completion time when the list of responses is short (e.g., months), but marginally increase completion time when the list is long (e.g., birth dates). These results suggest that non-narrative open questions can be designed to help guide respondents to provide answers in the desired format.
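
The notion of a "well-formed" answer can be made concrete with simple validators. The accepted formats below are assumptions for illustration, not the paper's coding rules:

```python
import re
from datetime import datetime

# Currency: optional $, digits with optional thousands separators and cents
CURRENCY = re.compile(r"^\$?(\d{1,3}(,\d{3})*|\d+)(\.\d{2})?$")

def well_formed_currency(answer: str) -> bool:
    """True if the answer matches the assumed currency template."""
    return bool(CURRENCY.match(answer.strip()))

def well_formed_date(answer: str, fmt: str = "%m/%d/%Y") -> bool:
    """True if the answer parses as a calendar date in the given format."""
    try:
        datetime.strptime(answer.strip(), fmt)
        return True
    except ValueError:
        return False

print(well_formed_currency("$1,250.00"))  # True
print(well_formed_currency("about 1200")) # False
print(well_formed_date("07/04/1990"))     # True
```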

17.
Interact Comput ; 22(5): 417-427, 2010 Sep 01.
Article in English | MEDLINE | ID: mdl-20676386

ABSTRACT

A near-ubiquitous feature of user interfaces is feedback on task progress, such as a graphical bar that grows as more of the task is completed. The presumed benefit is that users will be more likely to complete the task if they see they are making progress, but it is also possible that feedback indicating slow progress may sometimes discourage users from completing the task. This paper describes two experiments that evaluate the impact of progress indicators on the completion of on-line questionnaires. In the first experiment, progress was displayed at different speeds throughout the questionnaire. If the early feedback indicated slow progress, abandonment rates were higher and users' subjective experience more negative than if the early feedback indicated faster progress. In the second experiment, intermittent feedback seemed to minimize the costs of discouraging feedback while preserving the benefits of encouraging feedback. Overall, the results suggest that when progress seems to outpace users' expectations, feedback can improve their experience, though not necessarily their completion rates; when progress seems to lag behind what users expect, feedback degrades their experience and lowers completion rates.
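
One way to operationalize the intermittent feedback that worked well in the second experiment is to show a progress update only every so many pages. The every-k-pages rule and message wording here are assumptions for illustration, not the experiment's exact design:

```python
def progress_message(page: int, total_pages: int,
                     interval: int = 5) -> str | None:
    """Show a progress update only every `interval` pages, rather than
    on every screen, limiting exposure to discouraging feedback."""
    if page % interval != 0:
        return None
    pct = round(100 * page / total_pages)
    return f"You have completed about {pct}% of the survey."

for p in (4, 5, 10):
    print(p, progress_message(p, total_pages=40))
```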

18.
J Off Stat ; 26(4): 633-650, 2010.
Article in English | MEDLINE | ID: mdl-23411499

ABSTRACT

Survey respondents may misinterpret the questions they are asked, potentially undermining the accuracy of their answers. One way to reduce this risk is to make definitions of key question concepts available to the respondents. In the current study, we compared two methods of making definitions available to web survey respondents: displaying the definition with the question text, and displaying the definition when respondents roll the mouse over the relevant question terms. When definitions were always displayed, they were consulted more than when they required a rollover request. The length of the definitions did not affect how frequently they were used under either method of display. Respondents who completed training items designed to encourage definition use actually requested definitions less often, suggesting that they may value minimal effort over improved understanding. We conclude that, at least for small numbers of questions, providing definitions with the question is likely to be more effective than rollovers or hyperlinks.

19.
Public Opin Q ; 72(5): 892-913, 2008.
Article in English | MEDLINE | ID: mdl-21253437

ABSTRACT

Survey researchers since Cannell have worried that respondents may take various shortcuts to reduce the effort needed to complete a survey. The evidence for such shortcuts is often indirect. For instance, preferences for earlier over later response options have been interpreted as evidence that respondents do not read beyond the first few options. This is really only a hypothesis, however, one not supported by direct evidence regarding the allocation of respondent attention. In the current study, we used a new method to observe more directly what respondents do and do not look at, by recording their eye movements while they answered questions in a Web survey. The eye-tracking data indicate that respondents do in fact spend more time looking at the first few options in a list of response options than at those at the end of the list; this helps explain their tendency to select the options presented first regardless of their content. In addition, the eye-tracking data reveal that respondents are reluctant to invest effort in reading definitions of survey concepts that are only a mouse click away or in paying attention to initially hidden response options. It is clear from the eye-tracking data that some respondents are more prone to these and other cognitive shortcuts than others, providing relatively direct evidence for what had been suspected based on more conventional measures.

20.
Psychol Bull ; 133(5): 859-83, 2007 Sep.
Article in English | MEDLINE | ID: mdl-17723033

ABSTRACT

Psychologists have worried about the distortions introduced into standardized personality measures by social desirability bias. Survey researchers have had similar concerns about the accuracy of survey reports about such topics as illicit drug use, abortion, and sexual behavior. The article reviews the research done by survey methodologists on reporting errors in surveys on sensitive topics, noting parallels to and differences from the psychological literature on social desirability. The findings from the survey studies suggest that misreporting about sensitive topics is quite common and that it is largely situational. The extent of misreporting depends on whether the respondent has anything embarrassing to report and on design features of the survey. The survey evidence also indicates that misreporting on sensitive topics is a more or less motivated process in which respondents edit the information they report to avoid embarrassing themselves in the presence of an interviewer or to avoid repercussions from third parties.


Subjects
Health Surveys; Personality Tests/statistics & numerical data; Social Desirability; Social Problems/statistics & numerical data; Surveys and Questionnaires; Abortion, Induced/psychology; Abortion, Induced/statistics & numerical data; Bias; Female; Humans; Illicit Drugs; Male; Pregnancy; Sexual Behavior/psychology; Sexual Behavior/statistics & numerical data; Substance-Related Disorders/epidemiology; Substance-Related Disorders/psychology