Results 1 - 3 of 3
1.
R Soc Open Sci ; 11(1): 231003, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38234442

ABSTRACT

Results of simulation studies evaluating the performance of statistical methods can have a major impact on the way empirical research is implemented. However, so far there is limited evidence of the replicability of simulation studies. Eight highly cited statistical simulation studies were selected, and their replicability was assessed by teams of replicators with formal training in quantitative methodology. The teams used information in the original publications to write simulation code with the aim of replicating the results. The primary outcome was to determine the feasibility of replicability based on reported information in the original publications and supplementary materials. Replicability varied greatly: some original studies provided detailed information leading to almost perfect replication of results, whereas other studies did not provide enough information to implement any of the reported simulations. Factors facilitating replication included availability of code, detailed reporting or visualization of data-generating procedures and methods, and replicator expertise. Replicability of statistical simulation studies was mainly impeded by lack of information and sustainability of information sources. We encourage researchers publishing simulation studies to transparently report all relevant implementation details either in the research paper itself or in easily accessible supplementary material and to make their simulation code publicly available using permanent links.

2.
Stat Med ; 38(27): 5182-5196, 2019 11 30.
Article in English | MEDLINE | ID: mdl-31478240

ABSTRACT

In randomised trials, continuous endpoints are often measured with some degree of error. This study explores the impact of ignoring measurement error and proposes methods to improve statistical inference in the presence of measurement error. Three main types of measurement error in continuous endpoints are considered: classical, systematic, and differential. For each measurement error type, a corrected effect estimator is proposed. The corrected estimators and several methods for confidence interval estimation are tested in a simulation study. These methods combine information about error-prone and error-free measurements of the endpoint in individuals not included in the trial (external calibration sample). We show that, if measurement error in continuous endpoints is ignored, the treatment effect estimator is unbiased when measurement error is classical, while Type-II error is increased at a given sample size. Conversely, the estimator can be substantially biased when measurement error is systematic or differential. In those cases, bias can largely be prevented and inferences improved upon using information from an external calibration sample, of which the required sample size increases as the strength of the association between the error-prone and error-free endpoint decreases. Measurement error correction using even a small external calibration sample is shown to improve inferences and should be considered in trials with error-prone endpoints. Implementation of the proposed correction methods is accommodated by a new software package for R.
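The central claim of the abstract above for the classical case, that the treatment effect estimator stays unbiased while power is lost, can be illustrated with a small Monte Carlo sketch. This is written in Python rather than the R package the paper refers to, and all variable names, effect sizes, and error variances are illustrative assumptions, not the authors' actual simulation design:

```python
import numpy as np

rng = np.random.default_rng(42)

def trial_effect(n=1000, beta=0.5, sigma_me=0.0, reps=2000):
    """Simulate a two-arm trial `reps` times and return the mean and
    empirical standard error of the difference-in-means treatment
    effect when the continuous endpoint carries classical error."""
    estimates = []
    for _ in range(reps):
        t = rng.integers(0, 2, n)                # 1:1 randomisation
        y = beta * t + rng.normal(0, 1, n)       # error-free endpoint
        y_obs = y + rng.normal(0, sigma_me, n)   # classical (additive,
                                                 # independent) error
        estimates.append(y_obs[t == 1].mean() - y_obs[t == 0].mean())
    estimates = np.asarray(estimates)
    return estimates.mean(), estimates.std()

mean_clean, se_clean = trial_effect(sigma_me=0.0)  # no measurement error
mean_noisy, se_noisy = trial_effect(sigma_me=1.0)  # classical error
```

Both `mean_clean` and `mean_noisy` land near the true effect of 0.5 (no bias), while `se_noisy` exceeds `se_clean`: at a fixed sample size the noisier endpoint widens the sampling distribution, which is exactly the increased Type-II error the abstract describes.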


Subject(s)
Endpoint Determination , Randomized Controlled Trials as Topic/methods , Scientific Experimental Error , Computer Simulation , Data Interpretation, Statistical , Endpoint Determination/methods , Endpoint Determination/statistics & numerical data , Hemoglobins/analysis , Humans , Randomized Controlled Trials as Topic/standards , Sample Size , Scientific Experimental Error/statistics & numerical data
3.
Odontol Chil ; 38(1): 24-7, 1990 Apr.
Article in Spanish | MEDLINE | ID: mdl-1965988