Results 1 - 6 of 6
1.
J Evol Biol; 36(10): 1347-1356, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37812156

ABSTRACT

Code review increases the reliability and improves the reproducibility of research. As such, it is an integral step in software development and is common in fields such as computer science. Despite its importance, however, code review is noticeably lacking in ecology and evolutionary biology. This is problematic because it allows coding errors to propagate and reduces the reproducibility and reliability of published results. To address this, we provide a detailed commentary on how to effectively review code, how to set up your project to enable this form of review, and how review can be implemented at several stages throughout the research process. This guide serves as a primer for code review, and adoption of the principles and advice here will go a long way towards promoting more open, reliable, and transparent ecology and evolutionary biology.


Subject(s)
Biological Evolution, Ecology, Reproducibility of Results, Workflow, Reproduction
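The commentary above is language-agnostic, so any concrete setup is illustrative only. Below is a minimal sketch, assuming a Python-based analysis: a review-ready project layout (in comments) and a smoke test a reviewer can run before reading the code line by line. The directory names and the `standardise` function are hypothetical examples, not taken from the paper.

```python
# Hypothetical layout for a review-ready analysis project (one possible
# convention; the paper does not prescribe a specific structure):
#
#   data/raw/    immutable input data
#   analysis/    scripts, one per step, numbered in run order
#   outputs/     figures and tables, regenerated from scratch
#   tests/       smoke tests a reviewer can run before reading the code
#
# A minimal smoke test: confirm a core analysis function behaves as
# documented, so a reviewer can trust the pipeline before a line-by-line pass.

def standardise(values: list[float]) -> list[float]:
    """Centre and scale values to mean 0, sd 1 (population sd)."""
    n = len(values)
    mean = sum(values) / n
    sd = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [(v - mean) / sd for v in values]

def test_standardise() -> None:
    out = standardise([2.0, 4.0, 6.0])
    assert abs(sum(out)) < 1e-9              # mean is ~0 after centring
    assert abs(max(out) + min(out)) < 1e-9   # symmetric about 0

if __name__ == "__main__":
    test_standardise()
    print("smoke test passed")
```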
2.
Ecol Appl; 33(1): e2728, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36053922

ABSTRACT

Monitoring vegetation restoration is challenging: it is costly, requires long-term funding, and involves tracking multiple vegetation variables that are often not linked back to learning about progress toward objectives. There is a clear need to develop targeted monitoring programs that focus on a reduced set of variables tied to specific restoration objectives. In this paper, we present a method to progress the development of a targeted monitoring program, using a pre-existing state-and-transition model. We (1) use field data to validate an expert-derived classification of woodland vegetation states; (2) use these data to identify which variable(s) help differentiate woodland states; and (3) identify the target threshold (for each variable) that signifies whether the desired transition has been achieved. The measured vegetation variables from each site in this study were good predictors of the different states. We show that by measuring only a few of these variables, it is possible to assign the vegetation state for a collection of sites and to monitor whether and when a transition to another state has occurred. For this ecosystem and state-and-transition model, out of nine vegetation variables considered, the density of immature trees and the percentage of exotic understory vegetation cover were the variables most frequently specified as effective for defining a threshold or transition. We synthesize these findings in a decision tree that provides practical guidance for the development of targeted monitoring strategies for woodland vegetation.


Subject(s)
Ecosystem, Forests
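To make the decision-tree idea concrete, the sketch below assigns a coarse vegetation state from the two variables the study found most effective (immature tree density and exotic understory cover). The state labels and threshold constants are invented placeholders, not the site-specific thresholds estimated in the paper.

```python
from dataclasses import dataclass

# Illustrative thresholds only; the paper derives data-driven values that
# are not reproduced here.
IMMATURE_TREE_DENSITY_MIN = 50.0   # stems per hectare (hypothetical)
EXOTIC_COVER_MAX = 30.0            # percent exotic understory cover (hypothetical)

@dataclass
class SiteSurvey:
    immature_tree_density: float  # stems / ha
    exotic_cover_pct: float       # % exotic understory cover

def assign_state(site: SiteSurvey) -> str:
    """Assign a coarse vegetation state from two monitored variables,
    mimicking one step of a state-and-transition decision tree."""
    if site.exotic_cover_pct > EXOTIC_COVER_MAX:
        return "degraded: exotic-dominated understory"
    if site.immature_tree_density < IMMATURE_TREE_DENSITY_MIN:
        return "transitioning: regeneration below target"
    return "target: regenerating woodland"

def transition_achieved(before: SiteSurvey, after: SiteSurvey) -> bool:
    """Flag whether a site crossed into the target state between surveys."""
    return (assign_state(after).startswith("target")
            and not assign_state(before).startswith("target"))

if __name__ == "__main__":
    t0 = SiteSurvey(immature_tree_density=20.0, exotic_cover_pct=10.0)
    t1 = SiteSurvey(immature_tree_density=80.0, exotic_cover_pct=10.0)
    print(assign_state(t0), "->", assign_state(t1),
          "| transition:", transition_achieved(t0, t1))
```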
3.
BMC Biol; 19(1): 68, 2021 Apr 9.
Article in English | MEDLINE | ID: mdl-33836762

ABSTRACT

Unreliable research programmes waste funds, time, and even the lives of the organisms we seek to help and understand. Reducing this waste and increasing the value of scientific evidence require changing the actions of both individual researchers and the institutions they depend on for employment and promotion. While ecologists and evolutionary biologists have somewhat improved research transparency over the past decade (e.g. more data sharing), major obstacles remain. In this commentary, we lift our gaze to the horizon to imagine how researchers and institutions can clear the path towards more credible and effective research programmes.


Subject(s)
Biological Evolution, Ecosystem
4.
R Soc Open Sci; 10(6): 221553, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37293358

ABSTRACT

This paper explores judgements about the replicability of social and behavioural sciences research and what drives those judgements. Using a mixed-methods design, it draws on qualitative and quantitative data elicited from groups via a structured approach called the IDEA protocol ('investigate', 'discuss', 'estimate' and 'aggregate'). Five groups of five people with relevant domain expertise evaluated 25 research claims that had been subject to at least one replication study. Participants assessed the probability that each of the 25 research claims would replicate (i.e. that a replication study would find a statistically significant result in the same direction as the original study) and described the reasoning behind those judgements. We quantitatively analysed possible correlates of predictive accuracy, including self-rated expertise and the updating of judgements after feedback and discussion. We qualitatively analysed the reasoning data to explore the cues, heuristics and patterns of reasoning used by participants. Participants achieved 84% classification accuracy in predicting replicability. Those who engaged in a greater breadth of reasoning provided more accurate replicability judgements. Some reasons were more commonly invoked by more accurate participants, such as 'effect size' and 'reputation' (e.g. of the field of research). There was also some evidence of a relationship between statistical literacy and accuracy.
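Purely to make the scoring concrete, the sketch below aggregates toy post-discussion judgements (the second 'estimate' round of the IDEA protocol) with a simple mean, then scores the aggregated probabilities as binary replicate/non-replicate calls at a 0.5 threshold. The claims, judgements, and outcomes are invented, and mean aggregation is one common choice rather than the study's exact method.

```python
import statistics

# Toy data: each list holds one group's post-discussion probability
# judgements for a single research claim. All values are invented.
group_estimates = {
    "claim_A": [0.8, 0.7, 0.9, 0.75, 0.85],
    "claim_B": [0.3, 0.2, 0.4, 0.35, 0.25],
}
replicated = {"claim_A": True, "claim_B": False}  # hypothetical outcomes

def aggregate(estimates: list[float]) -> float:
    """'Aggregate' step of IDEA: here, a simple mean of individual
    post-discussion judgements (one common choice; not the only one)."""
    return statistics.mean(estimates)

def classification_accuracy(threshold: float = 0.5) -> float:
    """Score aggregated judgements as binary replicate / non-replicate calls."""
    correct = sum(
        (aggregate(est) >= threshold) == replicated[claim]
        for claim, est in group_estimates.items()
    )
    return correct / len(group_estimates)

print(f"accuracy: {classification_accuracy():.0%}")  # 100% on this toy data
```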

5.
PLoS One; 18(1): e0274429, 2023.
Article in English | MEDLINE | ID: mdl-36701303

ABSTRACT

As replications of individual studies are resource intensive, techniques for predicting replicability are required. We introduce the repliCATS (Collaborative Assessments for Trustworthy Science) process, a new method for eliciting expert predictions about the replicability of research. The process is a structured expert elicitation approach, based on a modified Delphi technique, applied to the evaluation of research claims in the social and behavioural sciences. The utility of processes that predict replicability lies in their capacity to test scientific claims without the cost of full replication. Experimental data support the validity of this process: a validation study produced a classification accuracy of 84% and an area under the curve (AUC) of 0.94, meeting or exceeding the accuracy of other techniques used to predict replicability. The repliCATS process provides other benefits. It is highly scalable: through an online elicitation platform it can be deployed both for rapid assessment of small numbers of claims and for assessment of high volumes of claims over an extended period, and it has been used to assess 3000 research claims over an 18-month period. It can be implemented in a range of ways, and we describe one such implementation. An important advantage of the repliCATS process is that it collects qualitative data with the potential to provide insight into the limits of generalizability of scientific claims. Its primary limitation is its reliance on human-derived predictions, with consequent costs in terms of participant fatigue, although careful design can minimise these costs. The repliCATS process has potential applications in alternative peer review and in allocating effort for replication studies.


Subject(s)
Behavioral Sciences, Data Accuracy, Humans, Reproducibility of Results, Costs and Cost Analysis, Peer Review
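The headline validation metrics (classification accuracy and AUC) are standard quantities computable from elicited probabilities and observed replication outcomes. As a minimal, self-contained sketch with invented numbers, the function below computes AUC via its rank-based (Mann-Whitney) interpretation; nothing here reproduces the repliCATS data or platform.

```python
def auc(probs: list[float], outcomes: list[bool]) -> float:
    """Area under the ROC curve via the Mann-Whitney U statistic: the
    probability that a randomly chosen replicating claim received a higher
    predicted probability than a randomly chosen non-replicating one."""
    pos = [p for p, y in zip(probs, outcomes) if y]
    neg = [p for p, y in zip(probs, outcomes) if not y]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Invented predictions and outcomes, just to exercise the function.
probs = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]
outcomes = [True, True, False, True, False, False]
print(f"AUC = {auc(probs, outcomes):.2f}")  # 0.89 on this toy data
```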