Results 1-20 of 28
1.
J Med Internet Res ; 16(12): e276, 2014 Dec 08.
Article in English | MEDLINE | ID: mdl-25488851

ABSTRACT

BACKGROUND: Nonprobability Web surveys using volunteer panels can provide a relatively cheap and quick alternative to traditional health and epidemiological surveys. However, concerns have been raised about their representativeness. OBJECTIVE: The aim was to compare results from different Web panels with a population-based probability sample survey (n=8969 aged 18-44 years) that used computer-assisted self-interview (CASI) for sensitive behaviors, the third British National Survey of Sexual Attitudes and Lifestyles (Natsal-3). METHODS: Natsal-3 questions were included on 4 nonprobability Web panel surveys (n=2000 to 2099), 2 using basic quotas based on age and sex, and 2 using modified quotas based on additional variables related to key estimates. Results for sociodemographic characteristics were compared with external benchmarks and for sexual behaviors and opinions with Natsal-3. Odds ratios (ORs) were used to express differences between the benchmark data and each survey for each variable of interest. A summary measure of survey performance was the average absolute OR across variables. Another summary measure was the number of key estimates for which the survey differed significantly (at the 5% level) from the benchmarks. RESULTS: For sociodemographic variables, the Web surveys were less representative of the general population than Natsal-3. For example, for men, the average absolute OR for Natsal-3 was 1.14, whereas for the Web surveys the average absolute ORs ranged from 1.86 to 2.30. For all Web surveys, approximately two-thirds of the key estimates of sexual behaviors were different from Natsal-3 and the average absolute ORs ranged from 1.32 to 1.98. Differences were appreciable even for questions asked by CASI in Natsal-3. No single Web survey performed consistently better than any other did. Modified quotas slightly improved results for men, but not for women. 
CONCLUSIONS: Consistent with studies from other countries on less sensitive topics, volunteer Web panels provided appreciably biased estimates. The differences seen with Natsal-3 CASI questions, where mode effects may be similar, suggest a selection bias in the Web surveys. The use of more complex quotas may lead to some improvement, but many estimates are still likely to differ. Volunteer Web panels are not recommended if accurate prevalence estimates for the general population are a key objective.
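The "average absolute odds ratio" summary measure described above can be sketched as follows. This is an illustrative implementation with made-up proportions; it assumes "absolute" means that ORs below 1 are inverted so deviations in either direction count equally, which is an interpretation, not a detail given in the abstract:

```python
def avg_abs_odds_ratio(pairs):
    """Average absolute odds ratio across key estimates, comparing a
    survey proportion to a benchmark proportion for each variable.
    ORs below 1 are inverted so 1.25 and 0.80 count as equally large
    deviations (assumed reading of 'absolute')."""
    abs_ors = []
    for p_survey, p_benchmark in pairs:
        odds_s = p_survey / (1 - p_survey)
        odds_b = p_benchmark / (1 - p_benchmark)
        or_ = odds_s / odds_b
        abs_ors.append(or_ if or_ >= 1 else 1 / or_)
    return sum(abs_ors) / len(abs_ors)

# Hypothetical (survey, benchmark) proportions for three key estimates
pairs = [(0.30, 0.25), (0.10, 0.12), (0.55, 0.50)]
print(round(avg_abs_odds_ratio(pairs), 2))  # → 1.25
```

A value of 1.0 would indicate perfect agreement with the benchmarks; the Natsal-3 figure of 1.14 versus 1.86-2.30 for the Web panels is a difference on this same scale.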


Subjects
Attitude, Health Surveys/methods, Sexual Behavior, Adolescent, Adult, Female, Humans, Internet, Life Style, Middle Aged, Prevalence, Sampling Studies, Young Adult
2.
J Happiness Stud ; 15(3): 639-655, 2014 Jun 01.
Article in English | MEDLINE | ID: mdl-25110460

ABSTRACT

Duration-based measures of happiness from retrospectively constructed daily diaries are gaining in popularity in population-based studies of the hedonic experience. Yet experimental evidence suggests that perceptions of duration - how long an event lasts - are influenced by individuals' emotional experiences during the event. An important remaining question is whether observational measures of duration outside the laboratory setting, where the events under study are engaged in voluntarily, may be similarly affected, and if so, for which emotions are duration biases a potential concern. This study assesses how duration and emotions co-vary using retrospective, 24-hour diaries from a national sample of older couples. Data are from the Disability and Use of Time (DUST) supplement to the nationally representative U.S. Panel Study of Income Dynamics. We find that experienced wellbeing (positive, negative emotion) and activity duration are inversely associated. Specific positive emotions (happy, calm) are not associated with duration, but all measures of negative wellbeing considered here (frustrated, worried, sad, tired, and pain) have positive correlations (ranging from 0.04 to 0.08; p<.05). However, only frustration remains correlated with duration after controlling for respondent, activity and day-related characteristics (0.06, p<.01). The correlation translates into a potentially upward biased estimate of duration of up to 10 minutes (20%) for very frustrating activities. We conclude that estimates of time spent feeling happy yesterday generated from diary data are unlikely to be biased but more research is needed on the link between duration estimation and feelings of frustration.

3.
Soc Sci Comput Rev ; 31(3): 322-345, 2013 Jun.
Article in English | MEDLINE | ID: mdl-25258472

ABSTRACT

Grid or matrix questions are associated with a number of problems in Web surveys. In this paper, we present results from two experiments testing the design of grid questions to reduce breakoffs, missing data, and satisficing. The first examines dynamic elements that help guide respondents through the grid, and the effect of splitting a larger grid into component pieces. The second manipulates the visual complexity of the grid and tests ways of simplifying it. We find that using dynamic feedback to guide respondents through a multi-question grid helps reduce missing data. Splitting the grids into component questions further reduces missing data and motivated underreporting. The visual complexity of the grid appeared to have little effect on performance.

4.
Methoden Daten Anal ; 17(2): 135-170, 2023.
Article in English | MEDLINE | ID: mdl-37724168

ABSTRACT

This study investigates the extent to which video technologies - now ubiquitous - might be useful for survey measurement. We compare respondents' performance and experience (n = 1,067) in live video-mediated interviews, a web survey in which prerecorded interviewers read questions, and a conventional (textual) web survey. Compared to web survey respondents, those interviewed via live video were less likely to select the same response for all statements in a battery (non-differentiation) and reported higher satisfaction with their experience but provided more rounded numerical (presumably less thoughtful) answers and selected answers that were less sensitive (more socially desirable). This suggests the presence of a live interviewer, even if mediated, can keep respondents motivated and conscientious but may introduce time pressure - a likely reason for increased rounding - and social presence - a likely reason for more socially desirable responding. Respondents "interviewed" by a prerecorded interviewer rounded fewer numerical answers and responded more candidly than did those in the other modes, but engaged in non-differentiation more than did live video respondents, suggesting there are advantages and disadvantages for both video modes. Both live and prerecorded video seem potentially viable for use in production surveys and may be especially valuable when in-person interviews are not feasible.

5.
J Surv Stat Methodol ; 10(2): 317-336, 2022 Apr.
Article in English | MEDLINE | ID: mdl-37406077

ABSTRACT

Live video (LV) communication tools (e.g., Zoom) have the potential to provide survey researchers with many of the benefits of in-person interviewing, while also greatly reducing data collection costs, given that interviewers do not need to travel and make in-person visits to sampled households. The COVID-19 pandemic has exposed the vulnerability of in-person data collection to public health crises, forcing survey researchers to explore remote data collection modes-such as LV interviewing-that seem likely to yield high-quality data without in-person interaction. Given the potential benefits of these technologies, the operational and methodological aspects of video interviewing have started to receive research attention from survey methodologists. Although it is remote, video interviewing still involves respondent-interviewer interaction that introduces the possibility of interviewer effects. No research to date has evaluated this potential threat to the quality of the data collected in video interviews. This research note presents an evaluation of interviewer effects in a recent experimental study of alternative approaches to video interviewing including both LV interviewing and the use of prerecorded videos of the same interviewers asking questions embedded in a web survey ("prerecorded video" interviewing). We find little evidence of significant interviewer effects when using these two approaches, which is a promising result. We also find that when interviewer effects were present, they tended to be slightly larger in the LV approach as would be expected in light of its being an interactive approach. We conclude with a discussion of the implications of these findings for future research using video interviewing.

6.
J Urban Health ; 88(1): 30-40, 2011 Feb.
Article in English | MEDLINE | ID: mdl-21293937

ABSTRACT

In clinical and research settings, it is increasingly acknowledged that adolescents may be better positioned than their caregivers to provide information about their own health status, including information related to asthma. Very little is known, however, about the congruence between adolescent and caregiver responses to questions about asthma beyond reports of symptoms. We analyzed data for 215 urban, primarily African-American adolescent-caregiver pairs. Adolescent and caregiver reports concerning the adolescent's asthma-related medical history were moderately correlated and not found to differ at the aggregate level. Correlations between adolescent and caregiver reports of the adolescent's asthma symptoms and functional status were weak, although these differences diminished at the aggregate level. Adolescent-caregiver reports of symptoms and functioning were more likely to be in agreement if the adolescent was older, if school personnel were unaware of the child's asthma, and if the adolescent's asthma was classified as mild intermittent. For questions concerning the frequency of hospitalizations, emergency department visits, and physician visits, moderate correlations between adolescent and caregiver responses were noted, although with some differences at the aggregate level. Findings suggest that, when adolescents and their caregivers are asked about the adolescent's asthma in clinical and research settings, the extent to which the two perspectives are likely to agree depends on the type of information sought. Clinicians and researchers may obtain more accurate information if questions about symptoms and functional status are directed toward adolescents.


Subjects
Asthma, Caregivers, Proxy, Urban Population, Adolescent, Black or African American, Age Factors, Female, Health Surveys, Humans, Male, Respiratory Function Tests, Statistics, Nonparametric, Surveys and Questionnaires, United States
7.
Int J Soc Res Methodol ; 24(2): 249-364, 2021.
Article in English | MEDLINE | ID: mdl-33732090

ABSTRACT

To explore socially desirable responding in telephone surveys, this study examines response latencies in answers to 27 questions in 319 audio-recorded iPhone interviews from Schober et al. (2015). Response latencies were compared when respondents (a) answered questions on sensitive vs. nonsensitive topics (as classified by online raters); (b) produced more vs. less socially desirable answers; and (c) were interviewed by a professional interviewer or an automated system. Respondents answered questions on sensitive topics more quickly than on nonsensitive topics, though patterns varied by question format (categorical, numerical, ordinal). Independent of question sensitivity, respondents gave less socially desirable answers more quickly when answering categorical and ordinal questions but more slowly when answering numeric questions. Respondents were particularly quick to answer sensitive questions when asked by interviewers rather than by the automated system. Findings demonstrate that response times can be (differently) revealing about question and response sensitivity in a telephone survey.

8.
J Asthma ; 47(1): 26-32, 2010 Feb.
Article in English | MEDLINE | ID: mdl-20100017

ABSTRACT

AIMS: To investigate what African American adolescents with asthma and their caregivers understand by "wheeze". METHODS: Caregivers (n = 35) and adolescents (n = 35) were each asked to describe what they understood by "wheeze". Respondents were also shown a video clip of an adolescent wheezing and asked: a) to describe the breathing of the adolescent in the video; and, b) whether the adolescent respondent's breathing had ever been similar to the video-presented symptoms. RESULTS: Most caregivers described wheeze in terms of sound alone (61.8%) while the majority of adolescents described wheeze as something that is felt (55.8%). Few caregivers and adolescents (5.8% each) included "whistling" in their descriptions of "wheeze". Most caregivers and adolescents used the word "wheeze" when describing the video clip, but nearly one-quarter of the caregivers and one-third of the adolescents felt that the adolescent's breathing was never similar to the video. CONCLUSION: Caregiver and adolescent descriptions of wheeze differ from each other, and both may differ from clinical definitions of the term. Study findings have implications for the ways in which questions about "wheeze" are framed and interpreted.


Subjects
Black or African American, Caregivers/education, Health Knowledge, Attitudes, Practice, Patient Education as Topic, Patients, Respiratory Sounds/diagnosis, Terminology as Topic, Adolescent, Adult, Audiovisual Aids, Caregivers/economics, Female, Humans, Male, Middle Aged, Patients/psychology
9.
Interact Comput ; 22(5): 417-427, 2010 Sep 01.
Article in English | MEDLINE | ID: mdl-20676386

ABSTRACT

A near ubiquitous feature of user interfaces is feedback on task completion or progress indicators such as the graphical bar that grows as more of the task is completed. The presumed benefit is that users will be more likely to complete the task if they see they are making progress but it is also possible that feedback indicating slow progress may sometimes discourage users from completing the task. This paper describes two experiments that evaluate the impact of progress indicators on the completion of on-line questionnaires. In the first experiment, progress was displayed at different speeds throughout the questionnaire. If the early feedback indicated slow progress, abandonment rates were higher and users' subjective experience more negative than if the early feedback indicated faster progress. In the second experiment, intermittent feedback seemed to minimize the costs of discouraging feedback while preserving the benefits of encouraging feedback. Overall, the results suggest that when progress seems to outpace users' expectations, feedback can improve their experience though not necessarily their completion rates; when progress seems to lag behind what users expect, feedback degrades their experience and lowers completion rates.

10.
Field methods ; 32(1): 3-22, 2020 Feb.
Article in English | MEDLINE | ID: mdl-34135694

ABSTRACT

Acquiescence is often defined as the systematic selection of agreeable ("strongly agree") or affirmative ("yes") responses to survey items, regardless of item content or directionality. This definition implies that acquiescence is immune to item characteristics; however, the influence of item characteristics on acquiescence remains largely unexplored. We examined the influence of eight item characteristics on acquiescence in a telephone survey of 400 Latinos and non-Latino whites: qualified wording, mental comparisons, negated wording, unfamiliar terms, ambiguous wording, knowledge accessibility, item length, and polysyllabic wording. Negated and ambiguous wording was associated with reduced acquiescence for the full sample, as well as subsamples stratified by ethnicity and sociodemographic characteristics. This effect was strongest among younger, more educated, and non-Latino white respondents. No other item characteristics had a significant influence on respondent acquiescence. Findings from this study suggest that acquiescence may be affected by interactions between respondent and item characteristics.

11.
Psychol Sci ; 20(4): 399-405, 2009 Apr.
Article in English | MEDLINE | ID: mdl-19298262

ABSTRACT

Memories of war, terrorism, and natural disaster play a critical role in the construction of group identity and the persistence of group conflict. Here, we argue that personal memory and knowledge of the collective past become entwined only when public events have a direct, forceful, and prolonged impact on a population. Support for this position comes from a cross-national study in which participants thought aloud as they dated mundane autobiographical events. We found that Bosnians often mentioned their civil war and that Izmit Turks made frequent reference to the 1999 earthquake in their country. In contrast, public events were rarely mentioned by Serbs, Montenegrins, Ankara Turks, Canadians, Danes, or Israelis. Surprisingly, historical references were absent from (post-September 11) protocols collected in New York City and elsewhere in the United States. Taken together, these findings indicate that it is personal significance, not historical importance, that determines whether public events play a role in organizing autobiographical memory.


Subjects
Autobiographies as Topic, Disasters, Memory, Terrorism, War, Adult, Female, Humans, Language, Male, Young Adult
12.
Top Cogn Sci ; 10(2): 452-484, 2018 Apr.
Article in English | MEDLINE | ID: mdl-29630774

ABSTRACT

This paper examines when conceptual misalignments in dialog lead to consequential miscommunication. Two studies explore misunderstanding in survey interviews of the sort conducted by governments and social scientists, where mismeasurement can have real social costs. In 131 interviews about tobacco use, misalignment between respondents' and researchers' conceptions of ordinary expressions like "smoking" and "every day" was quantified by probing respondents' interpretations of survey terms and re-administering the survey questionnaire with standard definitions after the interview. Respondents' interpretations were surprisingly variable, and in many cases they did not match the conceptions that researchers intended them to use. More often than one might expect, this conceptual variability was consequential, leading to answers (and, in principle, to estimates of the prevalence of smoking and related attributes in the population) that would have been different had conceptualizations been aligned; for example, fully 12% of respondents gave a different answer about having smoked 100 cigarettes in their entire life when later given a standard definition. In other cases misaligned interpretations did not lead to miscommunication, in that the differences would not have led to different survey responses. Although clarification of survey terms during the interview sometimes improved conceptual alignment, this was not guaranteed; in this corpus some needed attempts at clarification were never made, some attempts did not succeed, and some seemed to make understanding worse. The findings suggest that conceptual misalignments may be more frequent in ordinary conversation than interlocutors know, and that attempts to detect and clarify them may not always work. They also suggest that at least some unresolved misunderstandings do not matter in the sense that they do not change the outcome of the communication-in this case, the survey estimates.


Subjects
Communication, Comprehension, Interpersonal Relations, Smoking, Surveys and Questionnaires, Adult, Female, Humans, Male
13.
Surv Res Methods ; 11(1): 45-61, 2017 Apr 10.
Article in English | MEDLINE | ID: mdl-31745400

ABSTRACT

It is well known that some survey respondents reduce the effort they invest in answering questions by taking mental shortcuts - survey satisficing. This is a concern because such shortcuts can reduce the quality of responses and, potentially, the accuracy of survey estimates. This article explores "speeding," an extreme type of satisficing, which we define as answering so quickly that respondents could not have given much, if any, thought to their answers. To reduce speeding among online respondents we implemented an interactive prompting technique. When respondents answered faster than a minimal response time threshold, they received a message encouraging them to answer carefully and take their time. Across six web survey experiments, this prompting technique reduced speeding on subsequent questions compared to a no prompt control. Prompting slowed response times whether the speeding that triggered the prompt occurred early or late in the questionnaire, in the first or later waves of a longitudinal survey, among respondents recruited from non-probability or probability panels, or whether the prompt was delivered on only the first or on all speeding episodes. In addition to reducing speeding, the prompts increased response accuracy on simple arithmetic questions for a key subgroup. Prompting also reduced later straightlining in one experiment, suggesting the benefits may generalize to other types of mental shortcuts. Although the prompting could have annoyed respondents, it was not accompanied by a noticeable increase in breakoffs. As an alternative technique, respondents in one experiment were asked to explicitly commit to responding carefully. This global approach complemented the more local, interactive prompting technique on several measures. Taken together, these results suggest that interactive interventions of this sort may be useful for increasing respondents' conscientiousness in online questionnaires, even though these questionnaires are self-administered.
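The interactive prompting technique described above (an encouragement message triggered when a response arrives faster than a minimal response-time threshold) might be sketched as follows. The threshold value, message text, and function names are illustrative assumptions, not details taken from the study:

```python
import time

# Illustrative per-item threshold; the study's actual threshold is not
# given in the abstract.
SPEEDING_THRESHOLD_S = 2.0

PROMPT = ("You seem to be answering very quickly. "
          "Please take your time and read each question carefully.")

def administer(question, get_answer, prompt_all_episodes=True, state=None):
    """Administer one item and apply the speeding-prompt logic:
    if the answer arrives faster than the threshold, display an
    encouragement message before the next item. `state` tracks whether
    a prompt has already been shown (for the first-episode-only
    variant tested in the experiments)."""
    if state is None:
        state = {"prompted_once": False}
    start = time.monotonic()
    answer = get_answer(question)
    elapsed = time.monotonic() - start
    speeding = elapsed < SPEEDING_THRESHOLD_S
    if speeding and (prompt_all_episodes or not state["prompted_once"]):
        state["prompted_once"] = True
        print(PROMPT)
    return answer, speeding
```

In a real web survey the timing and prompt display would happen client-side; this sketch only shows the decision logic common to the "first episode" and "all episodes" prompting conditions.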

14.
Public Opin Q ; 80(1): 180-211, 2016.
Article in English | MEDLINE | ID: mdl-27257310

ABSTRACT

Demonstrations that analyses of social media content can align with measurement from sample surveys have raised the question of whether survey research can be supplemented or even replaced with less costly and burdensome data mining of already-existing or "found" social media content. But just how trustworthy such measurement can be-say, to replace official statistics-is unknown. Survey researchers and data scientists approach key questions from starting assumptions and analytic traditions that differ on, for example, the need for representative samples drawn from frames that fully cover the population. New conversations between these scholarly communities are needed to understand the potential points of alignment and non-alignment. Across these approaches, there are major differences in (a) how participants (survey respondents and social media posters) understand the activity they are engaged in; (b) the nature of the data produced by survey responses and social media posts, and the inferences that are legitimate given the data; and (c) practical and ethical considerations surrounding the use of the data. Estimates are likely to align to differing degrees depending on the research topic and the populations under consideration, the particular features of the surveys and social media sites involved, and the analytic techniques for extracting opinions and experiences from social media. Traditional population coverage may not be required for social media content to effectively predict social phenomena to the extent that social media content distills or summarizes broader conversations that are also measured by surveys.

15.
PLoS One ; 11(2): e0147983, 2016.
Article in English | MEDLINE | ID: mdl-26866687

ABSTRACT

BACKGROUND: Interviewer-administered surveys are an important method of collecting population-level epidemiological data, but suffer from declining response rates and increasing costs. Web surveys offer more rapid data collection and lower costs. There are concerns, however, about data quality from web surveys. Previous research has largely focused on selection biases, and few have explored measurement differences. This paper aims to assess the extent to which mode affects the responses given by the same respondents at two points in time, providing information on potential measurement error if web surveys are used in the future. METHODS: 527 participants from the third British National Survey of Sexual Attitudes and Lifestyles (Natsal-3), which uses computer assisted personal interview (CAPI) and self-interview (CASI) modes, subsequently responded to identically-worded questions in a web survey. McNemar tests assessed whether within-person differences in responses were at random or indicated a mode effect, i.e. higher reporting of more sensitive responses in one mode. An analysis of pooled responses by generalized estimating equations addressed the impact of gender and question type on change. RESULTS: Only 10% of responses changed between surveys. However mode effects were found for about a third of variables, with higher reporting of sensitive responses more commonly found on the web compared with Natsal-3. CONCLUSIONS: The web appears a promising mode for surveys of sensitive behaviours, most likely as part of a mixed-mode design. Our findings suggest that mode effects may vary by question type and content, and by the particular mix of modes used. Mixed-mode surveys need careful development to understand mode effects and how to account for them.
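The McNemar test used above for within-person mode differences can be sketched in its exact (binomial) form. The discordant counts below are hypothetical, not the study's data:

```python
from math import comb

def mcnemar_exact(b, c):
    """Exact (binomial) McNemar test for paired yes/no responses.
    b = respondents who reported the sensitive behaviour on the web
    but not in Natsal-3; c = the reverse. Under the null of no mode
    effect, discordant switches are equally likely in either direction,
    so min(b, c) ~ tail of Binomial(b + c, 0.5)."""
    n = b + c
    k = min(b, c)
    # Two-sided p-value: double the smaller tail, capped at 1
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Hypothetical discordant counts: 30 respondents switched towards the
# more sensitive response on the web, 12 switched the other way
p = mcnemar_exact(30, 12)
print(round(p, 4))
```

A small p-value here indicates a systematic mode effect (asymmetric switching) rather than random within-person change.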


Subjects
Attitude, Data Collection/methods, Internet, Sexual Behavior, Adolescent, Adult, Aged, Female, Health Surveys, Humans, Life Style, Male, Middle Aged, Sex Factors, Social Class, Surveys and Questionnaires, United Kingdom, Young Adult
16.
Front Psychol ; 6: 1578, 2015.
Article in English | MEDLINE | ID: mdl-26539138

ABSTRACT

This study investigates how an onscreen virtual agent's dialog capability and facial animation affect survey respondents' comprehension and engagement in "face-to-face" interviews, using questions from US government surveys whose results have far-reaching impact on national policies. In the study, 73 laboratory participants were randomly assigned to respond in one of four interviewing conditions, in which the virtual agent had either high or low dialog capability (implemented through Wizard of Oz) and high or low facial animation, based on motion capture from a human interviewer. Respondents, whose faces were visible to the Wizard (and videorecorded) during the interviews, answered 12 questions about housing, employment, and purchases on the basis of fictional scenarios designed to allow measurement of comprehension accuracy, defined as the fit between responses and US government definitions. Respondents answered more accurately with the high-dialog-capability agents, requesting clarification more often particularly for ambiguous scenarios; and they generally treated the high-dialog-capability interviewers more socially, looking at the interviewer more and judging high-dialog-capability agents as more personal and less distant. Greater interviewer facial animation did not affect response accuracy, but it led to more displays of engagement, such as acknowledgments (verbal and visual) and smiles, and to the virtual interviewer's being rated as less natural. The pattern of results suggests that a virtual agent's dialog capability and facial animation differently affect survey respondents' experience of interviews, behavioral displays, and comprehension, and thus the accuracy of their responses. The pattern of results also suggests design considerations for building survey interviewing agents, which may differ depending on the kinds of survey questions (sensitive or not) that are asked.

17.
PLoS One ; 10(6): e0128337, 2015.
Article in English | MEDLINE | ID: mdl-26060991

ABSTRACT

As people increasingly communicate via asynchronous non-spoken modes on mobile devices, particularly text messaging (e.g., SMS), longstanding assumptions and practices of social measurement via telephone survey interviewing are being challenged. In the study reported here, 634 people who had agreed to participate in an interview on their iPhone were randomly assigned to answer 32 questions from US social surveys via text messaging or speech, administered either by a human interviewer or by an automated interviewing system. Ten interviewers from the University of Michigan Survey Research Center administered voice and text interviews; automated systems launched parallel text and voice interviews at the same time as the human interviews were launched. The key question was how the interview mode affected the quality of the response data, in particular the precision of numerical answers (how many were not rounded), variation in answers to multiple questions with the same response scale (differentiation), and disclosure of socially undesirable information. Texting led to higher quality data than voice interviews, both with human and automated interviewers: fewer rounded numerical answers, more differentiated answers to a battery of questions, and more disclosure of sensitive information. Text respondents also reported a strong preference for future interviews by text. The findings suggest that people interviewed on mobile devices at a time and place that is convenient for them, even when they are multitasking, can give more trustworthy and accurate answers than those in more traditional spoken interviews. The findings also suggest that answers from text interviews, when aggregated across a sample, can tell a different story about a population than answers from voice interviews, potentially altering the policy implications from a survey.
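The data-quality measures discussed above (rounding of numerical answers and differentiation across a shared response scale) are often operationalized with simple indices like the following. These are common proxies with made-up respondent data, not necessarily the paper's exact definitions:

```python
def prop_rounded(numeric_answers, base=10):
    """Share of numerical answers that are multiples of `base`
    (a common proxy for rounding; 5 or 10 are typical choices)."""
    return sum(a % base == 0 for a in numeric_answers) / len(numeric_answers)

def differentiation(scale_answers):
    """Simple differentiation index for a battery sharing one response
    scale: distinct values used divided by number of items.
    1.0 = every item answered differently; 1/k = pure straightlining."""
    return len(set(scale_answers)) / len(scale_answers)

# Hypothetical respondents in two modes
voice = {"numeric": [10, 20, 30, 25], "battery": [3, 3, 3, 3, 3]}
text  = {"numeric": [12, 20, 31, 27], "battery": [3, 4, 2, 4, 5]}

print(prop_rounded(voice["numeric"]), differentiation(voice["battery"]))  # 0.75 0.2
print(prop_rounded(text["numeric"]),  differentiation(text["battery"]))   # 0.25 0.8
```

On these invented numbers the text respondent rounds less and differentiates more, the direction of the mode difference the study reports.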


Subjects
Interviews as Topic, Smartphone, Adult, Aged, Aged, 80 and over, Disclosure, Female, Humans, Male, Middle Aged, Surveys and Questionnaires, Text Messaging
18.
Electron Int J Time Use Res ; 10(1): 55-75, 2013 Nov 01.
Article in English | MEDLINE | ID: mdl-24729796

ABSTRACT

Systematic investigations of the cognitive challenges in completing time diaries and measures of quality for such interviews have been lacking. To fill this gap, we analyze respondent and interviewer behaviors and interviewer-provided observations about diary quality for a computer-assisted telephone-administered time diary supplement to the U.S. Panel Study of Income Dynamics. We find that 93%-96% of sequences result in a codable answer and interviewers rarely assist respondents with comprehension. Questions about what the respondent did next and for how long appear more challenging than follow-up descriptors. Long sequences do not necessarily signal comprehension problems, but often involve interviewer utterances designed to promote conversational flow. A 6-item diary quality scale appropriately reflects respondents' difficulties and interviewers' assistance with comprehension, but is not correlated with conversational flow. Discussion focuses on practical recommendations for time diary studies and future research.

19.
Public Opin Q ; 77(Suppl 1): 69-88, 2013.
Article in English | MEDLINE | ID: mdl-24634546

ABSTRACT

This paper presents results from six experiments that examine the effect of the position of an item on the screen on the evaluative ratings it receives. The experiments are based on the idea that respondents expect "good" things-those they view positively-to be higher up on the screen than "bad" things. The experiments use items on different topics (Congress and HMOs, a variety of foods, and six physician specialties) and different methods for varying their vertical position on the screen. A meta-analysis of all six experiments demonstrates a small but reliable effect of the item's screen position on mean ratings of the item; the ratings are significantly more positive when the item appears in a higher position on the screen than when it appears farther down. These results are consistent with the hypothesis that respondents follow the "Up means good" heuristic, using the vertical position of the item as a cue in evaluating it. Respondents seem to rely on heuristics both in interpreting response scales and in forming judgments.
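A meta-analysis pooling effects across the six experiments, as described above, could take the standard inverse-variance fixed-effect form. The per-experiment effects below are invented for illustration, and the abstract does not state which pooling model the authors used:

```python
import math

def fixed_effect_meta(effects, ses):
    """Inverse-variance fixed-effect pooling of per-experiment effects
    (e.g. mean rating difference: item shown higher on screen minus
    lower). Returns the pooled effect and its standard error."""
    weights = [1 / se ** 2 for se in ses]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return pooled, pooled_se

# Hypothetical per-experiment rating differences and standard errors
effects = [0.10, 0.05, 0.12, 0.08, 0.03, 0.09]
ses     = [0.05, 0.06, 0.04, 0.05, 0.07, 0.05]
pooled, se = fixed_effect_meta(effects, ses)
print(f"pooled = {pooled:.3f}, 95% CI half-width = {1.96 * se:.3f}")
```

Pooling trades each noisy single-experiment estimate for one more precise combined estimate, which is how a "small but reliable" effect can emerge across experiments that are individually inconclusive.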

20.
Field methods ; 25(4), 2013 Nov 01.
Article in English | MEDLINE | ID: mdl-24319346

ABSTRACT

Time diaries are a well-established method for providing population estimates of the amount of time and types of activities respondents carry out over the course of a full day. This paper focuses on a computer-assisted telephone application developed to collect multiple, same-day 24-hour diaries from older couples who participated in the 2009 Panel Study of Income Dynamics (PSID). We present selected findings from developmental and field activities, highlighting methods for three diary enhancements: 1) implementation of a multiple, same-day diary design; 2) minimizing erroneous reporting of sequential activities as simultaneous; and 3) tailoring activity descriptors (or "follow-up" questions) that depend on a pre-coded activity value. A final section discusses limitations and implications for future time diary efforts.
