1.
Proc Natl Acad Sci U S A ; 119(47): e2118046119, 2022 11 22.
Article in English | MEDLINE | ID: mdl-36395142

ABSTRACT

There are long-standing concerns that peer review, which is foundational to scientific institutions like journals and funding agencies, favors conservative ideas over novel ones. We investigate the association between novelty and the acceptance of manuscripts submitted to a large sample of scientific journals. The data cover 20,538 manuscripts submitted between 2013 and 2018 to the journals Cell and Cell Reports and 6,785 manuscripts submitted in 2018 to 47 journals published by the Institute of Physics Publishing. Following previous work that found that a balance of novel and conventional ideas predicts citation impact, we measure the novelty and conventionality of manuscripts by the atypicality of combinations of journals in their reference lists, taking the 90th percentile most atypical combination as "novelty" and the 50th percentile as "conventionality." We find that higher novelty is consistently associated with higher acceptance; submissions in the top novelty quintile are 6.5 percentage points more likely than bottom quintile ones to get accepted. Higher conventionality is also associated with acceptance (+16.3% top-bottom quintile difference). Disagreement among peer reviewers was not systematically related to submission novelty or conventionality, and editors select strongly for novelty even conditional on reviewers' recommendations (+7.0% top-bottom quintile difference). Manuscripts exhibiting higher novelty were more highly cited. Overall, the findings suggest that journal peer review favors novel research that is well situated in the existing literature, incentivizing exploration in science and challenging the view that peer review is inherently antinovelty.
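
The novelty measure builds on the journal-pair atypicality idea: score every pair of journals co-cited in a manuscript's reference list by how much more (or less) often that pair co-occurs than chance would predict, then summarize each manuscript by percentiles of its pair scores. Below is a minimal sketch of that logic in Python; the independence baseline, z-scoring, and percentile choices are simplifying assumptions for illustration, not the paper's exact implementation.

```python
from collections import Counter
from itertools import combinations
import math

def pair_scores(reference_lists):
    """Z-score each journal pair: observed co-citation count across reference
    lists vs. an independence baseline built from marginal journal frequencies."""
    pair_counts = Counter()
    journal_counts = Counter()
    n_lists = len(reference_lists)
    for refs in reference_lists:
        journals = sorted(set(refs))
        journal_counts.update(journals)
        pair_counts.update(combinations(journals, 2))
    scores = {}
    for (a, b), observed in pair_counts.items():
        p_a, p_b = journal_counts[a] / n_lists, journal_counts[b] / n_lists
        expected = n_lists * p_a * p_b           # co-occurrences if independent
        std = math.sqrt(expected * (1 - p_a * p_b)) or 1.0
        scores[(a, b)] = (observed - expected) / std
    return scores

def novelty_and_conventionality(refs, scores):
    """Summarize one manuscript: the most atypical decile of its pair scores
    stands in for 'novelty', the median pair score for 'conventionality'."""
    vals = sorted(scores.get(p, 0.0) for p in combinations(sorted(set(refs)), 2))
    if not vals:
        return None, None
    return vals[int(0.10 * (len(vals) - 1))], vals[len(vals) // 2]

corpus = [["Cell", "Nature", "Science"], ["Cell", "Nature"], ["Science", "PNAS", "Cell"]]
print(novelty_and_conventionality(["Cell", "Nature", "PNAS"], pair_scores(corpus)))
```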


Subject(s)
Peer Review, Research; Periodicals as Topic
2.
Manage Sci ; 68(6): 4478-4495, 2022 Jun.
Article in English | MEDLINE | ID: mdl-36200060

ABSTRACT

The evaluation and selection of novel projects lies at the heart of scientific and technological innovation, and yet there are persistent concerns about bias, such as conservatism. This paper investigates the role that the format of evaluation, specifically information sharing among expert evaluators, plays in generating conservative decisions. We executed two field experiments in two separate grant-funding opportunities at a leading research university, mobilizing 369 evaluators from seven universities to evaluate 97 projects, resulting in 761 proposal-evaluation pairs and more than $250,000 in awards. We exogenously varied the relative valence (positive and negative) of others' scores and measured how exposures to higher and lower scores affect the focal evaluator's propensity to change their initial score. We found causal evidence of a negativity bias, where evaluators lower their scores by more points after seeing scores more critical than their own rather than raise them after seeing more favorable scores. Qualitative coding of the evaluators' justifications for score changes reveals that exposures to lower scores were associated with greater attention to uncovering weaknesses, whereas exposures to neutral or higher scores were associated with increased emphasis on nonevaluation criteria, such as confidence in one's judgment. The greater power of negative information suggests that information sharing among expert evaluators can lead to more conservative allocation decisions that favor protecting against failure rather than maximizing success.
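
The key comparison is asymmetric updating: whether evaluators move their scores more after seeing peer scores below their own than after seeing scores above. A minimal sketch of that comparison on a toy evaluation table follows; the column names and the simple group means are illustrative assumptions, not the paper's estimation strategy.

```python
import pandas as pd

# Toy evaluation records: one row per proposal-evaluator pair, with the
# evaluator's initial score, the average peer score they were shown, and
# their revised score. Column names are illustrative, not the study schema.
df = pd.DataFrame({
    "initial":  [6, 7, 5, 8, 4, 6],
    "peer_avg": [4, 9, 3, 9, 6, 5],
    "revised":  [5, 7, 3, 8, 5, 6],
})

df["exposure"] = (df["peer_avg"] > df["initial"]).map({True: "higher", False: "lower"})
df["change"] = df["revised"] - df["initial"]

# A negativity bias appears as a larger downward move after seeing lower
# scores than the upward move after seeing higher scores.
print(df.groupby("exposure")["change"].mean())
```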

4.
Strateg Manag J ; 42(6): 1215-1244, 2021 Jun.
Article in English | MEDLINE | ID: mdl-34326562

ABSTRACT

RESEARCH SUMMARY: We investigate how knowledge similarity between two individuals is systematically related to the likelihood that a serendipitous encounter results in knowledge production. We conduct a field experiment at a medical research symposium, where we exogenously varied opportunities for face-to-face encounters among 15,817 scientist-pairs. Our data include direct observations of interaction patterns collected using sociometric badges, and detailed, longitudinal data on the scientists' postsymposium publication records over 6 years. We find that interacting scientists acquire more knowledge and coauthor 1.2 more papers when they share some overlapping interests, but cite each other's work between three and seven times less when they are from the same field. Our findings reveal both collaborative and competitive effects of knowledge similarity on knowledge production outcomes. MANAGERIAL SUMMARY: Managers often try to stimulate innovation by encouraging serendipitous interactions between employees, for example through office-space redesigns, conferences, and similar events. Are such interventions effective? This article proposes that an effective encounter depends on the degree of common knowledge shared by the individuals. We find that scientists who attend the same conference are more likely to learn from each other and collaborate effectively when they have some common interests, but may view each other competitively when they work in the same field. Hence, when designing opportunities for face-to-face interactions, managers should consider knowledge similarity as a criterion for fostering more productive exchanges.
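
One simple way to operationalize "some overlapping interests" versus "same field" for a scientist pair is keyword-set overlap plus a same-field flag. The sketch below assumes that representation; it is an illustration, not the measure used in the paper.

```python
def jaccard(a, b):
    """Overlap of two keyword sets as a crude knowledge-similarity score."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

alice = {"keywords": {"oncology", "immunotherapy", "clinical trials"}, "field": "oncology"}
bob = {"keywords": {"immunotherapy", "genomics"}, "field": "genetics"}

similarity = jaccard(alice["keywords"], bob["keywords"])  # some overlapping interests
same_field = alice["field"] == bob["field"]               # stricter: identical field
print(f"similarity={similarity:.2f}, same_field={same_field}")
```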

5.
Bioinformatics ; 37(18): 2889-2895, 2021 09 29.
Article in English | MEDLINE | ID: mdl-33824954

ABSTRACT

MOTIVATION: Do machine learning methods improve standard deconvolution techniques for gene expression data? This article uses a unique new dataset combined with an open innovation competition to evaluate a wide range of approaches developed by 294 competitors from 20 countries. The competition's objective was to address a deconvolution problem critical to analyzing genetic perturbations from the Connectivity Map. The problem consists of separating the expression of individual genes from raw measurements obtained from gene pairs. We evaluated the outcomes using ground-truth data (direct measurements for single genes) obtained from the same samples. RESULTS: We find that the top-ranked algorithm, based on random forest regression, beat the other methods in accuracy and reproducibility; more traditional Gaussian-mixture methods performed well and tended to be faster, and the best deep learning approach yielded outcomes slightly inferior to the above methods. We anticipate researchers in the field will find the dataset and algorithms developed in this study to be a powerful research tool for benchmarking their deconvolution methods and a resource useful for multiple applications. AVAILABILITY AND IMPLEMENTATION: The data are freely available at clue.io/data (section Contests) and the software is on GitHub at https://github.com/cmap/gene_deconvolution_challenge. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
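
The Gaussian-mixture baseline mentioned in the results can be illustrated compactly: treat the raw measurements for a gene pair as a two-component mixture and read the component means as the two genes' expression estimates. A minimal sketch with scikit-learn on synthetic intensities follows; the assay details and the winning random-forest pipeline are not reproduced here.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Toy stand-in for raw measurements of a gene pair: two genes expressed at
# different levels produce two intensity peaks in the pooled measurements.
intensities = np.concatenate([
    rng.normal(loc=7.0, scale=0.4, size=120),   # peak from gene A
    rng.normal(loc=10.5, scale=0.5, size=80),   # peak from gene B
]).reshape(-1, 1)

# Baseline deconvolution: fit a two-component Gaussian mixture and read off
# the component means as the two genes' expression estimates.
gm = GaussianMixture(n_components=2, random_state=0).fit(intensities)
gene_a_hat, gene_b_hat = sorted(gm.means_.ravel())
print(f"estimated expression: gene A ~ {gene_a_hat:.2f}, gene B ~ {gene_b_hat:.2f}")
```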


Subject(s)
Algorithms; Software; Reproducibility of Results; Random Forest; Biology
7.
Health Care Manage Rev ; 45(3): 255-266, 2020.
Article in English | MEDLINE | ID: mdl-29957705

ABSTRACT

BACKGROUND: Frontline staff are well positioned to conceive improvement opportunities based on first-hand knowledge of what works and what does not. The innovation contest may be a relevant and useful vehicle to elicit staff ideas. However, the success of the contest likely depends on perceived organizational support for learning; when staff believe that support for learning-oriented culture, practices, and leadership is low, they may be less willing or able to share ideas. PURPOSE: We examined how staff perception of organizational support for learning affected contest participation, which comprised ideation and evaluation of submitted ideas. METHODOLOGY/APPROACH: The contest, held in a hospital cardiac center, invited all clinicians and support staff (n ≈ 1,400) to participate. We used the 27-item Learning Organization Survey to measure staff perception of learning-oriented environment, practices and processes, and leadership. RESULTS: Seventy-two frontline staff submitted 138 ideas addressing wide-ranging issues including patient experience, cost of care, workflow, utilization, and access. Two hundred forty-five staff participated in evaluation. A supportive learning environment predicted participation in both ideation and idea evaluation. Perceptions of insufficient experimentation with new ways of working also predicted participation. CONCLUSION: The contest enabled frontline staff to share input and assess input shared by other staff. Our findings indicate that the contest may serve as a fruitful outlet through which frontline staff can share and learn new ideas, especially those who feel safe to speak up and believe that new ideas are not tested frequently enough. PRACTICE IMPLICATIONS: The contest's potential to decentralize innovation may be greater under stronger learning orientation. A highly visible intervention like the innovation contest carries both benefits and risks. Our findings suggest benefits such as increased engagement with work and community, as well as risks such as discontent that could arise if staff suggestions are not acted upon or if no desired change follows the contest.


Subject(s)
Health Care Costs; Leadership; Learning; Organizational Innovation; Stakeholder Participation; Cardiac Care Facilities; Cross-Sectional Studies; Efficiency, Organizational; Humans; Surveys and Questionnaires
8.
PLoS One ; 14(9): e0222165, 2019.
Article in English | MEDLINE | ID: mdl-31560691

ABSTRACT

Open data science and algorithm development competitions offer a unique avenue for rapid discovery of better computational strategies. We highlight three examples in computational biology and bioinformatics research in which the use of competitions has yielded significant performance gains over established algorithms. These include algorithms for antibody clustering, imputing gene expression data, and querying the Connectivity Map (CMap). Performance gains are evaluated quantitatively using realistic, albeit sanitized, data sets. The solutions produced through these competitions are then examined with respect to their utility and the prospects for implementation in the field. We present the decision process and competition design considerations that led to these successful outcomes as a model for researchers who want to use competitions and non-domain crowds as collaborators to further their research.
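
Competitions of this kind hinge on an automated scoring harness that ranks submissions against held-out ground truth. A minimal sketch of such a harness follows, assuming predictions and truth are simple numeric vectors and using Spearman correlation as the accuracy metric; the actual challenges used task-specific metrics and data.

```python
from scipy.stats import spearmanr

def score_submission(predicted, truth):
    """Rank a submission by Spearman correlation with held-out ground truth."""
    return spearmanr(predicted, truth).correlation

truth = [0.10, 0.40, 0.35, 0.80, 0.05]
submissions = {
    "established_baseline": [0.20, 0.30, 0.30, 0.60, 0.10],
    "contest_winner":       [0.10, 0.45, 0.30, 0.90, 0.00],
}

# Leaderboard: higher correlation with the ground truth ranks first.
for name in sorted(submissions, key=lambda n: score_submission(submissions[n], truth), reverse=True):
    print(name, round(score_submission(submissions[name], truth), 3))
```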


Subject(s)
Computational Biology/trends; Algorithms; Antibodies/classification; Antibodies/genetics; Cluster Analysis; Crowdsourcing/trends; Gene Expression Profiling/statistics & numerical data; Humans; Inventions/trends
9.
JAMA Oncol ; 5(5): 654-661, 2019 May 01.
Article in English | MEDLINE | ID: mdl-30998808

ABSTRACT

IMPORTANCE: Radiation therapy (RT) is a critical cancer treatment, but the existing radiation oncologist work force does not meet growing global demand. One key physician task in RT planning involves tumor segmentation for targeting, which requires substantial training and is subject to significant interobserver variation. OBJECTIVE: To determine whether crowd innovation could be used to rapidly produce artificial intelligence (AI) solutions that replicate the accuracy of an expert radiation oncologist in segmenting lung tumors for RT targeting. DESIGN, SETTING, AND PARTICIPANTS: We conducted a 10-week, prize-based, online, 3-phase challenge (prizes totaled $55,000). A well-curated data set, including computed tomographic (CT) scans and lung tumor segmentations generated by an expert for clinical care, was used for the contest (CT scans from 461 patients; median 157 images per scan; 77,942 images in total; 8,144 images with tumor present). Contestants were provided a training set of 229 CT scans with accompanying expert contours to develop their algorithms and given feedback on their performance throughout the contest, including from the expert clinician. MAIN OUTCOMES AND MEASURES: The AI algorithms generated by contestants were automatically scored on an independent data set that was withheld from contestants, and performance was ranked using quantitative metrics that evaluated the overlap of each algorithm's automated segmentations with the expert's segmentations. Performance was further benchmarked against human expert interobserver and intraobserver variation. RESULTS: A total of 564 contestants from 62 countries registered for this challenge, and 34 (6%) submitted algorithms. The automated segmentations produced by the top 5 AI algorithms, when combined using an ensemble model, had an accuracy (Dice coefficient = 0.79) that was within the benchmark of mean interobserver variation measured between 6 human experts. For phase 1, the top 7 algorithms had average custom segmentation scores (S scores) on the holdout data set ranging from 0.15 to 0.38, and suboptimal performance using relative measures of error. The average S scores for phase 2 increased to 0.53 to 0.57, with a similar improvement in other performance metrics. In phase 3, performance of the top algorithm increased by an additional 9%. Combining the top 5 algorithms from phase 2 and phase 3 using an ensemble model yielded an additional 9% to 12% improvement in performance, with a final S score reaching 0.68. CONCLUSIONS AND RELEVANCE: A combined crowd innovation and AI approach rapidly produced automated algorithms that replicated the skills of a highly trained physician for a critical task in radiation therapy. These AI algorithms could improve cancer care globally by transferring the skills of expert clinicians to under-resourced health care settings.
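
The headline accuracy number is a Dice coefficient, the overlap between an algorithm's tumor mask and the expert's, and the top entries were combined with an ensemble. A minimal sketch of both pieces on toy binary masks follows; the challenge's custom S score and its actual ensembling method are not reproduced here.

```python
import numpy as np

def dice(pred, truth):
    """Dice coefficient between two binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

def majority_vote(masks):
    """Toy ensemble: a pixel counts as tumor if most models mark it as tumor."""
    return np.stack([m.astype(bool) for m in masks]).mean(axis=0) >= 0.5

# Toy 2D masks standing in for CT-slice segmentations.
expert = np.zeros((8, 8), dtype=bool)
expert[2:6, 2:6] = True
models = [np.roll(expert, shift, axis=1) for shift in (-1, 0, 1)]  # imperfect entries

print("single-model Dice:", round(dice(models[0], expert), 3))
print("ensemble Dice:    ", round(dice(majority_vote(models), expert), 3))
```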


Subject(s)
Artificial Intelligence; Crowdsourcing; Inventions; Lung Neoplasms/diagnostic imaging; Lung Neoplasms/radiotherapy; Tomography, X-Ray Computed; Adult; Aged; Aged, 80 and over; Female; Humans; Lung Neoplasms/pathology; Male; Middle Aged; Tumor Burden
11.
Strateg Organ ; 15(2): 119-140, 2017 May.
Article in English | MEDLINE | ID: mdl-28690428

ABSTRACT

The purpose of this article is to suggest a (preliminary) taxonomy and research agenda for the topic of "firms, crowds, and innovation" and to provide an introduction to the associated special issue. We specifically discuss how various crowd-related phenomena and practices (for example, crowdsourcing, crowdfunding, user innovation, and peer production) relate to theories of the firm, with particular attention on "sociality" in firms and markets. We first briefly review extant theories of the firm and then discuss three theoretical aspects of sociality related to crowds in the context of strategy, organizations, and innovation: (1) the functions of sociality (sociality as extension of rationality, sociality as sensing and signaling, sociality as matching and identity), (2) the forms of sociality (independent/aggregate and interacting/emergent forms of sociality), and (3) the failures of sociality (misattribution and misapplication). We conclude with an outline of future research directions and introduce the special issue papers and essays.

12.
Rev Econ Stat ; 99(4): 565-576, 2017 10.
Article in English | MEDLINE | ID: mdl-29375163

ABSTRACT

We present the results of a field experiment conducted at Harvard Medical School to understand the extent to which search costs affect matching among scientific collaborators. We generated exogenous variation in search costs for pairs of potential collaborators by randomly assigning individuals to 90-minute structured information-sharing sessions as part of a grant funding opportunity. We estimate that the treatment increases the probability of grant co-application of a given pair of researchers by 75%. The findings suggest that matching between scientists is subject to considerable frictions, even in the case of geographically proximate scientists working in the same institutional context.
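
The estimand is a treatment effect on a binary pair-level outcome (did the pair co-apply?). The simplest version is a difference in co-application rates between treated and control pairs, expressed as a relative effect. A minimal sketch with toy counts chosen to illustrate a +75% relative increase follows; the paper's estimator and data are not reproduced here.

```python
# Toy pair-level outcomes: 1 if the pair co-applied for a grant, 0 otherwise.
treated = [1] * 7 + [0] * 13   # pairs assigned to the same information-sharing session
control = [1] * 4 + [0] * 16   # pairs never assigned to a session together

rate_t = sum(treated) / len(treated)
rate_c = sum(control) / len(control)

# The relative effect of lowering search costs on co-application.
print(f"treated {rate_t:.2f} vs control {rate_c:.2f} -> relative increase {rate_t / rate_c - 1:+.0%}")
```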

13.
Manage Sci ; 62(10): 2765-2783, 2016 Oct.
Article in English | MEDLINE | ID: mdl-27746512

ABSTRACT

Selecting among alternative projects is a core management task in all innovating organizations. In this paper, we focus on the evaluation of frontier scientific research projects. We argue that the "intellectual distance" between the knowledge embodied in research proposals and an evaluator's own expertise systematically relates to the evaluations given. To estimate relationships, we designed and executed a grant proposal process at a leading research university in which we randomized the assignment of evaluators and proposals to generate 2,130 evaluator-proposal pairs. We find that evaluators systematically give lower scores to research proposals that are closer to their own areas of expertise and to those that are highly novel. The patterns are consistent with biases associated with boundedly rational evaluation of new ideas. The patterns are inconsistent with intellectual distance simply contributing "noise" or being associated with private interests of evaluators. We discuss implications for policy, managerial intervention, and allocation of resources in the ongoing accumulation of scientific knowledge.
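
Because evaluators and proposals were randomly paired, the relationship can be examined with a pair-level regression of the score on intellectual distance and novelty. A minimal sketch on simulated data follows; the column names, toy coefficients, and bare-bones specification are assumptions for illustration, not the paper's model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200  # toy evaluator-proposal pairs

df = pd.DataFrame({
    "distance": rng.uniform(0, 1, n),  # intellectual distance; 0 = evaluator's own area
    "novelty": rng.uniform(0, 1, n),   # proposal novelty score
})
# Simulate the reported pattern (arbitrary toy coefficients): proposals closer
# to the evaluator's expertise and more novel proposals receive lower scores.
df["score"] = 5 + 1.5 * df["distance"] - 1.0 * df["novelty"] + rng.normal(0, 0.5, n)

# Pair-level regression of the score on distance and novelty.
print(smf.ols("score ~ distance + novelty", data=df).fit().params)
```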

14.
Harv Bus Rev ; 91(4): 60-9, 140, 2013 Apr.
Article in English | MEDLINE | ID: mdl-23593768

ABSTRACT

From Apple to Merck to Wikipedia, more and more organizations are turning to crowds for help in solving their most vexing innovation and research questions, but managers remain understandably cautious. It seems risky and even unnatural to push problems out to vast groups of strangers distributed around the world, particularly for companies built on a history of internal innovation. How can intellectual property be protected? How can a crowd-sourced solution be integrated into corporate operations? What about the costs? These concerns are all reasonable, the authors write, but excluding crowdsourcing from the corporate innovation tool kit means losing an opportunity. After a decade of study, they have identified when crowds tend to outperform internal organizations (or not). They outline four ways to tap into crowd-powered problem solving (contests, collaborative communities, complementors, and labor markets) and offer a system for picking the best one in a given situation. Contests, for example, are suited to highly challenging technical, analytical, and scientific problems; design problems; and creative or aesthetic projects. They are akin to running a series of independent experiments that generate multiple solutions; if those solutions cluster at some extreme, a company can gain insight into where a problem's "technical frontier" lies. (Internal R&D may generate far less information.)


Subject(s)
Commerce; Cooperative Behavior; Diffusion of Innovation; Mass Behavior; Humans; United States