Results 1 - 6 of 6
1.
Article in English | MEDLINE | ID: mdl-34692379

ABSTRACT

While a growing body of work has focused on the interactional organization of telephone survey interviews, little if any research in conversation and discourse analysis has examined written online surveys as a form of talk-in-interaction. While survey researchers routinely examine such responses using content analysis or thematic analysis methods, this shifts the focus away from the precise language and turn constructional practices used by respondents. By contrast, in this study we examine open-ended text responses to online survey questions using a conversation analytic and discourse analytic approach. Focusing on the precise turn constructional practices used by survey respondents, specifically how they formulate multi-unit responses and make use of turn-initial discourse markers, we demonstrate how online survey respondents treat open-ended survey questions much as they would any similar sequence of interaction in face-to-face or telephone survey talk, making online surveys a tenable source of data for further conversation analytic inquiry.

3.
Proc Natl Acad Sci U S A; 115(12): 2952-2957, 2018 Mar 20.
Article in English | MEDLINE | ID: mdl-29507248

ABSTRACT

Obtaining grant funding from the National Institutes of Health (NIH) is increasingly competitive, as funding success rates have declined over the past decade. To allocate relatively scarce funds, scientific peer reviewers must differentiate the very best applications from comparatively weaker ones. Despite the importance of this determination, little research has explored how reviewers assign ratings to the applications they review and whether there is consistency in the reviewers' evaluation of the same application. Replicating all aspects of the NIH peer-review process, we examined 43 individual reviewers' ratings and written critiques of the same group of 25 NIH grant applications. Results showed no agreement among reviewers regarding the quality of the applications in either their qualitative or quantitative evaluations. Although all reviewers received the same instructions on how to rate applications and format their written critiques, we also found no agreement in how reviewers "translated" a given number of strengths and weaknesses into a numeric rating. It appeared that the outcome of the grant review depended more on the reviewer to whom the grant was assigned than the research proposed in the grant. This research replicates the NIH peer-review process to examine in detail the qualitative and quantitative judgments of different reviewers examining the same application, and our results have broad relevance for scientific grant peer review.


Subjects
Biomedical Research/economics, National Institutes of Health (U.S.), Peer Review, Research/methods, Humans, Observer Variation, United States, Writing
4.
J Pragmat; 113: 1-15, 2017 May.
Article in English | MEDLINE | ID: mdl-29170594

ABSTRACT

In this paper we focus on how participants in peer review interactions use laughter as a resource as they publicly report divergence of evaluative positions, divergence that is typical in the give and take of joint grant evaluation. Using the framework of conversation analysis, we examine the infusion of laughter and multimodal laugh-relevant practices into sequences of talk in meetings of grant reviewers deliberating on the evaluation and scoring of high-level scientific grant applications. We focus on a recurrent sequence in these meetings, what we call the score-reporting sequence, in which the assigned reviewers first announce the preliminary scores they have assigned to the grant. We demonstrate that such sequences are routine sites for the use of laugh practices to navigate the initial moments in which divergence of opinion is made explicit. In the context of meetings convened for the purposes of peer review, laughter thus serves as a valuable resource for managing the socially delicate but institutionally required reporting of divergence and disagreement that is endemic to meetings where these types of evaluative tasks are a focal activity.

5.
Res Eval; 26(1): 1-14, 2017 Jan.
Article in English | MEDLINE | ID: mdl-28458466

ABSTRACT

In scientific grant peer review, groups of expert scientists meet to engage in the collaborative decision-making task of evaluating and scoring grant applications. Prior research on grant peer review has established that inter-reviewer reliability is typically poor. In the current study, experienced reviewers for the National Institutes of Health (NIH) were recruited to participate in one of four constructed peer review panel meetings. Each panel discussed and scored the same pool of recently reviewed NIH grant applications. We examined the degree of intra-panel variability in panels' scores of the applications before versus after collaborative discussion, and the degree of inter-panel variability. We also analyzed videotapes of reviewers' interactions for instances of one particular form of discourse, Score Calibration Talk, as one factor influencing the variability we observed. Results suggest that although reviewers within a single panel agree more following collaborative discussion, different panels agree less after discussion, and that Score Calibration Talk plays a pivotal role in scoring variability during peer review. We discuss the implications of this variability for the scientific peer review process.

6.
Res Lang Soc Interact; 49(4): 362-379, 2016.
Article in English | MEDLINE | ID: mdl-28936031

ABSTRACT

This paper examines how participants in face-to-face conversation employ mobile phones as a resource for social action. We focus on what we call mobile-supported sharing activities, in which participants use a mobile phone to share text or images with others by voicing text aloud from their mobile or providing others with visual access to the device's display screen. Drawing from naturalistic video recordings, we focus on how mobile-supported sharing activities invite assessments by providing access to an object that is not locally accessible to the participants. Such practices make relevant co-participants' assessment of these objects and allow for different forms of co-participation across sequence types. We additionally examine how the organization of assessments during these sharing activities displays sensitivity to preference structure. The analysis illustrates the relevance of embodiment, local objects, and new communicative technologies to the production of action in co-present interaction. Data are in American English.
