ABSTRACT
One of the most universal trends in science and technology today is the growth of large teams in all areas, as solitary researchers and small teams diminish in prevalence [1-3]. Increases in team size have been attributed to the specialization of scientific activities [3], improvements in communication technology [4,5], or the complexity of modern problems that require interdisciplinary solutions [6-8]. This shift in team size raises the question of whether and how the character of the science and technology produced by large teams differs from that of small teams. Here we analyse more than 65 million papers, patents and software products that span the period 1954-2014, and demonstrate that across this period smaller teams have tended to disrupt science and technology with new ideas and opportunities, whereas larger teams have tended to develop existing ones. Work from larger teams builds on more-recent and popular developments, and attention to their work comes immediately. By contrast, contributions by smaller teams search more deeply into the past, are viewed as disruptive to science and technology and succeed further into the future, if at all. Observed differences between small and large teams are magnified for higher-impact work, with small teams known for disruptive work and large teams for developing work. Differences in topic and research design account for a small part of the relationship between team size and disruption; most of the effect occurs at the level of the individual, as people move between smaller and larger teams. These results demonstrate that both small and large teams are essential to a flourishing ecology of science and technology, and suggest that, to achieve this, science policies should aim to support a diversity of team sizes.
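The "disruption" of a work in this literature is commonly operationalized with a CD-style index over the citation network: later works that cite the focal paper but none of its references signal disruption, while works that cite both signal development. Below is a minimal sketch of that idea; the exact measure used in the study may differ in detail, and the `citing_map` structure and `"focal"` sentinel are illustrative assumptions, not this paper's data model.

```python
def disruption_index(focal_refs, citing_map):
    """CD-style disruption index for a focal work.

    focal_refs: set of works the focal work cites.
    citing_map: dict mapping each later work to the set of works it cites;
                the focal work itself is represented by the string "focal".
    """
    n_i = n_j = n_k = 0
    for cited in citing_map.values():
        cites_focal = "focal" in cited
        cites_refs = bool(cited & focal_refs)
        if cites_focal and not cites_refs:
            n_i += 1   # builds on the focal work alone: disruptive signal
        elif cites_focal and cites_refs:
            n_j += 1   # bridges the focal work and its ancestry: developing signal
        elif cites_refs:
            n_k += 1   # bypasses the focal work entirely
    total = n_i + n_j + n_k
    return (n_i - n_j) / total if total else 0.0
```

For a focal work citing {"r1", "r2"}, two later papers citing only the focal work and one citing both yield an index of (2 - 1) / 3 ≈ 0.33, i.e. mildly disruptive.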
Subject(s)
Diffusion of Innovation , Group Processes , Interdisciplinary Research/organization & administration , Science/organization & administration , Science/statistics & numerical data , Technology/organization & administration , Technology/statistics & numerical data , Cooperative Behavior , Databases, Factual , Interdisciplinary Research/statistics & numerical data , Interdisciplinary Research/trends , Nobel Prize , Patents as Topic/statistics & numerical data , Research Support as Topic , Science/trends , Software/supply & distribution , Technology/trends
ABSTRACT
PSYCHOACOUSTICS-WEB is an online tool written in JavaScript and PHP that enables the estimation of auditory sensory thresholds via adaptive threshold tracking. The toolbox implements the transformed up-down methods proposed by Levitt (Journal of the Acoustical Society of America, 49, 467-477, 1971) for a set of classic psychoacoustical tasks: frequency, intensity, and duration discrimination of pure tones; duration discrimination and gap detection of noise; and amplitude modulation detection with noise carriers. The toolbox can be used through a common web browser; it works on both fixed and mobile devices and requires no programming skills. PSYCHOACOUSTICS-WEB is suitable for laboratory, classroom, and online testing and is designed for two main types of users: the occasional user and, above all, the experimenter using the toolbox for their own research. The latter can create a personal account, customise existing experiments, and share them in the form of direct links with further users (e.g., the participants of a hypothetical experiment). Finally, because data storage is centralised, the toolbox offers the potential for creating a database of auditory skills.
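The transformed up-down methods cited above can be illustrated with a minimal 2-down/1-up staircase, which converges toward the ~70.7%-correct point of the psychometric function. This is a sketch with a deliberately crude simulated observer, not the toolbox's JavaScript implementation; all names and parameter values are illustrative.

```python
import random

def two_down_one_up(true_threshold, start_level, step, n_reversals=8, seed=0):
    """Minimal 2-down/1-up staircase (after Levitt, 1971).

    Simulates a listener who responds correctly whenever the stimulus
    level plus unit Gaussian noise exceeds `true_threshold`.
    Returns a threshold estimate: the mean of the last reversal levels.
    """
    rng = random.Random(seed)
    level = start_level
    correct_streak = 0
    last_direction = 0            # +1 going up, -1 going down
    reversal_levels = []
    while len(reversal_levels) < n_reversals:
        correct = level + rng.gauss(0, 1) > true_threshold
        if correct:
            correct_streak += 1
            if correct_streak == 2:       # two consecutive correct -> step down
                correct_streak = 0
                if last_direction == +1:  # direction flipped: record a reversal
                    reversal_levels.append(level)
                last_direction = -1
                level -= step
        else:                             # one incorrect -> step up
            correct_streak = 0
            if last_direction == -1:
                reversal_levels.append(level)
            last_direction = +1
            level += step
    tail = reversal_levels[-6:]           # average the final reversals
    return sum(tail) / len(tail)
```

With a simulated threshold of 20, a starting level of 30, and a step of 2, the estimate settles within a few step sizes of the true value, as expected for a 2-down/1-up track.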
Subject(s)
Auditory Threshold , Internet , Psychoacoustics , Software , Adolescent , Adult , Aged , Child , Female , Humans , Male , Middle Aged , Young Adult , Databases, Factual , Sound , Mobile Applications , User-Computer Interface , Acoustic Stimulation , Software/supply & distribution
ABSTRACT
Proteogenomic strategies aim to refine genome-wide annotations of protein-coding features by using actual protein-level observations. Most currently applied proteogenomic approaches involve integrative analysis of multiple types of high-throughput omics data (e.g., genomics, transcriptomics, and proteomics). Recent efforts towards creating a human proteome map were primarily targeted at experimentally detecting at least one protein product for each gene in the genome and extensively utilized proteogenomic approaches. The 14-year wait for a draft human proteome map after the completion of similar efforts to sequence the genome reflects the huge complexity and technical hurdles of such undertakings. Furthermore, the integrative analysis of the large-scale multi-omics datasets inherent to these studies becomes a major bottleneck to their success. However, recent development of various analysis tools and pipelines dedicated to proteogenomics has reduced both the time and complexity of such analyses. Here, we summarize notable approaches, studies, and software developments, and their potential applications towards eukaryotic genome annotation and clinical proteogenomics.
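The core proteogenomic step of confronting observed peptides with genomic sequence can be sketched as a six-frame translation search: a peptide that maps outside any annotated coding region is a candidate annotation refinement. This toy version uses a deliberately truncated codon table and ignores introns, I/L ambiguity, and FDR control, all of which a real pipeline must handle.

```python
# Truncated codon table covering only the amino acids used in the example;
# a real pipeline would use the full standard genetic code.
CODONS = {
    "ATG": "M", "GCT": "A", "CCT": "P", "AAA": "K", "GAA": "E",
    "TTT": "F", "TAA": "*", "TAG": "*", "TGA": "*",
}

def translate(dna, frame):
    """Translate one reading frame; unknown codons become 'X'."""
    return "".join(CODONS.get(dna[i:i + 3], "X")
                   for i in range(frame, len(dna) - 2, 3))

def revcomp(dna):
    """Reverse complement of a DNA string."""
    return dna.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def locate_peptide(peptide, dna):
    """Return (strand, frame, protein_offset) hits for a peptide across
    all six reading frames, or [] if the peptide is unmatched."""
    hits = []
    for strand, seq in (("+", dna), ("-", revcomp(dna))):
        for frame in range(3):
            pos = translate(seq, frame).find(peptide)
            if pos != -1:
                hits.append((strand, frame, pos))
    return hits
```

For example, the peptide "APKE" is found at offset 1 of the frame-0 translation ("MAPKEF") of the toy sequence "ATGGCTCCTAAAGAATTT".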
Subject(s)
Chromosome Mapping/methods , Genome , Open Reading Frames , Proteogenomics/methods , Software/supply & distribution , Animals , Chromosome Mapping/instrumentation , Datasets as Topic , Eukaryotic Cells/metabolism , Humans , Molecular Sequence Annotation , Proteogenomics/instrumentation , Proteome
ABSTRACT
UNLABELLED: Collection of data concerning case histories is not yet common in homeopathy, despite its great importance for this method. Computer program development progresses slowly, and discussion about requirements is scarce. Two Dutch projects assessed the Materia Medica of several homeopathic medicines and six homeopathic symptoms. The second project, in particular, relied heavily on data collection. In both projects, much effort was spent on reaching consensus between participating doctors; nevertheless, there was considerable variance between doctors. Assessing causality seems to be the most important source of bias; there is also considerable variance in the assessment of symptoms. CONCLUSION: Data collection software should be developed step by step, guided by close monitoring and feedback from participating practitioners.
Subject(s)
Data Collection/methods , Decision Making, Computer-Assisted , Homeopathy/methods , Materia Medica/standards , Software/supply & distribution , Consensus , Data Collection/standards , Homeopathy/standards , Humans , Materia Medica/therapeutic use , Practice Patterns, Physicians' , Software/standards
ABSTRACT
BACKGROUND: Use of smartphones and medical mHealth applications (apps) within the clinical environment provides a potential means for delivering elements of vascular care. This article reviews the contemporary availability of apps specifically themed to major vascular diseases and the opportunities and concerns regarding their integration into practice. METHODS: Smartphone apps relating to major vascular diseases were identified from the app stores of the 6 most popular smartphone platforms: iPhone, Android, BlackBerry, Nokia, Windows, and Samsung. Search terms included peripheral artery (arterial) disease, varicose veins, aortic aneurysm, carotid artery disease, amputation, ulcers, hyperhidrosis, thoracic outlet syndrome, vascular malformation, and lymphatic disorders. RESULTS: Forty-nine vascular-themed apps were identified. Sixteen (33%) were free of charge. Fifteen apps (31%) had customer satisfaction ratings, but only 3 (6%) had more than 100 ratings. Only 13 apps (27%) had documented medical professional involvement in their design or content. CONCLUSIONS: The integration of apps into the delivery of care has the potential to benefit vascular health care workers and patients. However, high-quality apps designed by clinicians with vascular expertise are currently lacking and represent an area of concern in the mHealth market. Improvement in the quality and reliability of these apps will require the development of robust regulation.
Subject(s)
Cell Phone , Computers, Handheld , Internet , Monitoring, Physiologic/instrumentation , Peripheral Vascular Diseases/diagnosis , Software Design , Software/supply & distribution , Equipment Design , Humans , Reproducibility of Results , Retrospective Studies
ABSTRACT
The identification of proteins by mass spectrometry is a standard technique in the field of proteomics, relying on search engines to perform the identifications of the acquired spectra. Here, we present a user-friendly, lightweight and open-source graphical user interface called SearchGUI (http://searchgui.googlecode.com) for configuring and running the freely available OMSSA (open mass spectrometry search algorithm) and X!Tandem search engines simultaneously. Freely available under the permissive Apache2 license, SearchGUI is supported on Windows, Linux and OS X.
Subject(s)
Databases, Protein/supply & distribution , Proteins/analysis , Software/supply & distribution , Algorithms , Mass Spectrometry , Proteomics/instrumentation , Proteomics/methods , Tandem Mass Spectrometry , User-Computer Interface
ABSTRACT
PURPOSE: Recent years have witnessed tremendous progress in image-guided radiotherapy technology and a growing interest in the possibilities for adapting treatment planning and delivery over the course of treatment. One obstacle faced by the research community has been the lack of a comprehensive open-source software toolkit dedicated to adaptive radiotherapy (ART). To address this need, the authors have developed a software suite called the Deformable Image Registration and Adaptive Radiotherapy Toolkit (DIRART). METHODS: DIRART is an open-source toolkit developed in MATLAB. It is designed in an object-oriented style with a focus on user-friendliness, features, and flexibility. It contains four classes of DIR algorithms, including the newer inverse-consistency algorithms, which provide consistent displacement vector fields in both directions. It also contains common ART functions, an integrated graphical user interface, a variety of visualization and image-processing features, dose metric analysis functions, and interface routines. These interface routines make DIRART a powerful complement to the Computational Environment for Radiotherapy Research (CERR) and popular image-processing toolkits such as ITK. RESULTS: DIRART provides a set of image processing/registration algorithms and postprocessing functions to facilitate the development and testing of DIR algorithms. It also offers a wide range of options for visualization, evaluation, and validation of DIR results. CONCLUSIONS: By exchanging data with treatment planning systems via DICOM-RT files and CERR, and by bringing image registration algorithms closer to radiotherapy applications, DIRART is potentially a convenient and flexible platform that may facilitate ART and DIR research.
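The inverse-consistency property mentioned for the newer DIR algorithms can be checked numerically: composing the forward and backward displacement fields should return each point to itself. Below is a minimal sketch of that check; the callables stand in for interpolated displacement vector fields, and this is illustrative code, not DIRART's MATLAB implementation.

```python
import numpy as np

def inverse_consistency_error(forward, backward, points):
    """Per-point residual of composing forward and backward displacement fields.

    forward, backward: callables mapping a point array to a displacement
    array (stand-ins for interpolated DVFs). For a perfectly
    inverse-consistent pair, x + u_f(x) + u_b(x + u_f(x)) == x,
    so the returned residuals should be ~0.
    """
    warped = points + forward(points)        # map points through the forward DVF
    round_trip = warped + backward(warped)   # map back through the backward DVF
    return np.abs(round_trip - points)       # residual per point

# Example: a uniform shift by +2 and its exact inverse
shift_fwd = lambda x: np.full_like(x, 2.0)
shift_bwd = lambda x: np.full_like(x, -2.0)
```

For the exact inverse pair the residual is zero everywhere; replacing the backward field with a shift of -1.5 leaves a residual of 0.5 at every point, flagging the inconsistency.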