ABSTRACT
Many randomized controlled trials (RCTs) are biased and difficult to reproduce due to methodological flaws and poor reporting. There is increasing attention to responsible research practices and the implementation of reporting guidelines, but whether these efforts have improved the methodological quality of RCTs (e.g., lower risk of bias) is unknown. We therefore mapped risk-of-bias trends over time in RCT publications in relation to journal and author characteristics. Meta-information on 176,620 RCTs published between 1966 and 2018 was extracted. The risk-of-bias probability (random sequence generation, allocation concealment, blinding of patients/personnel, and blinding of outcome assessment) was assessed using a risk-of-bias machine learning tool. This tool was simultaneously validated using 63,327 human risk-of-bias assessments obtained from 17,394 RCTs evaluated in the Cochrane Database of Systematic Reviews (CDSR). Moreover, RCT registration and CONSORT Statement reporting were assessed using automated searches. Publication characteristics included the number of authors, journal impact factor (JIF), and medical discipline. The annual number of published RCTs increased substantially over four decades, accompanied by increases in the mean number of authors (5.2 to 7.8) and institutions (2.9 to 4.8). The risk of bias remained present in most RCTs but decreased over time for allocation concealment (63% to 51%), random sequence generation (57% to 36%), and blinding of outcome assessment (58% to 52%). Trial registration (37% to 47%) and use of the CONSORT Statement (1% to 20%) also increased rapidly. In journals with a higher impact factor (>10), the risk of bias was consistently lower, and levels of RCT registration and CONSORT Statement use were higher. Automated risk-of-bias predictions had accuracies above 70% for allocation concealment (70.7%), random sequence generation (72.1%), and blinding of patients/personnel (79.8%), but not for blinding of outcome assessment (62.7%). In conclusion, the likelihood of bias in RCTs has generally decreased over recent decades. This optimistic trend may be driven by increased knowledge, augmented by mandatory trial registration and more stringent reporting guidelines and journal requirements. Nevertheless, relatively high probabilities of bias remain, particularly in journals with lower impact factors. This emphasizes that further improvement of RCT registration, conduct, and reporting is still urgently needed.
Subject(s)
Publications; Randomized Controlled Trials as Topic/methods; Randomized Controlled Trials as Topic/statistics & numerical data; Bias; Bibliometrics; Data Accuracy; Data Management/history; Data Management/methods; Data Management/standards; Data Management/trends; Databases, Bibliographic/history; Databases, Bibliographic/standards; Databases, Bibliographic/trends; History, 20th Century; History, 21st Century; Humans; Outcome Assessment, Health Care; Public Reporting of Healthcare Data; Publications/history; Publications/standards; Publications/statistics & numerical data; Publications/trends; Quality Improvement/history; Quality Improvement/trends; Randomized Controlled Trials as Topic/history; Systematic Reviews as Topic
ABSTRACT
Data management is essential in a flow cytometry (FCM) shared resource laboratory (SRL) for the integrity of collected data and its long-term preservation, as described in the Cytometry publication from 2016, ISAC Flow Cytometry Shared Resource Laboratory (SRL) Best Practices (Barsky et al.: Cytometry Part A 89A(2016): 1017-1030). The SARS-CoV-2 pandemic introduced an array of challenges in the operation of SRLs. The subsequent laboratory shutdowns and access restrictions brought to the forefront well-established practices that withstood the impact of a sudden change in operations and illuminated areas that need improvement. The most significant challenges from a data management perspective were data access for remote analysis and workstation management. Notably, lessons learned from this challenge emphasize the importance of safeguarding collected data from loss in various emergencies such as fire or natural disasters where the physical hardware storing data could be directly affected. Here, we describe two data management systems that have been successful during the current emergency created by the pandemic, specifically remote access and automated data transfer. We will discuss other situations that could arise and lead to data loss or challenges in interpreting data. © 2020 International Society for Advancement of Cytometry.
Subject(s)
COVID-19/epidemiology; Data Management/trends; Flow Cytometry/trends; Laboratories/trends; Teleworking/trends; COVID-19/prevention & control; Data Management/standards; Flow Cytometry/standards; Humans; Laboratories/standards; Teleworking/standards
ABSTRACT
OBJECTIVES: To explore the use of data dashboards to convey information about a drug's value, and reduce the need to collapse dimensions of value to a single measure. METHODS: Review of the literature on US Drug Value Assessment Frameworks, and discussion of the value of data dashboards to improve the manner in which information on value is displayed. RESULTS: The incremental cost per quality-adjusted life-year ratio is a useful starting point for conversation about a drug's value, but it cannot reflect all of the elements of value about which different audiences care deeply. Data dashboards for drug value assessments can draw from other contexts. Decision makers should be presented with well-designed value dashboards containing various metrics, including conventional cost per quality-adjusted life-year ratios as well as measures of a drug's impact on clinical and patient-centric outcomes, and on budgetary and distributional consequences, to convey a drug's value along different dimensions. CONCLUSIONS: The advent of US drug value frameworks in health care has forced a concomitant effort to develop appropriate information displays. Researchers should formally test different formats and elements.
Subject(s)
Data Management/methods; Pharmaceutical Preparations/economics; Budgets; Data Management/standards; Data Management/trends; Humans; Social Media/instrumentation; Social Media/standards; Social Media/statistics & numerical data; United States
ABSTRACT
Retrospective observational research relies on databases that do not routinely record lines of therapy or the reasons for treatment change. In this study, standardized approaches to estimating lines of therapy were developed and evaluated. Rules were developed, assumptions were varied, and macros were written for application to large datasets. Results were investigated in an iterative process to refine line-of-therapy algorithms in three different cancers (lung, colorectal and gastric). Three primary factors were evaluated and included in the estimation of lines of therapy in oncology: defining a treatment regimen, the addition/removal of drugs, and gap periods. Algorithms and associated Statistical Analysis Software (SAS®) macros for line-of-therapy identification are provided to facilitate and standardize the use of real-world databases for oncology research.
Lay abstract: Most, if not all, real-world healthcare databases do not contain data explaining treatment changes, so rules must be applied to estimate when a treatment change may reflect advancement of the underlying disease. This study investigated three tumor types (lung, colorectal and gastric cancer) to develop and provide rules that researchers can apply to real-world databases. The resulting algorithms and associated SAS® macros are provided in the Supplementary data.
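The three factors above (regimen definition, drug addition, and gap periods) lend themselves to a simple rule-based algorithm. The sketch below is a hypothetical Python illustration, not the study's actual SAS® macros; the window and gap thresholds and the record layout are assumptions, not the published rules.

```python
from datetime import date

# Hypothetical line-of-therapy (LOT) rules; thresholds are assumptions.
REGIMEN_WINDOW_DAYS = 28   # drugs first seen within this window form one regimen
GAP_DAYS = 180             # a longer gap in therapy advances the line

def assign_lines(administrations):
    """administrations: iterable of (date, drug) tuples, e.g. claims rows.
    Returns a list of (date, drug, line_number)."""
    admins = sorted(administrations)
    lines, line_no = [], 0
    regimen, line_start, last_seen = set(), None, None
    for day, drug in admins:
        new_line = (
            line_start is None
            or (day - last_seen).days > GAP_DAYS                    # gap rule
            or (drug not in regimen
                and (day - line_start).days > REGIMEN_WINDOW_DAYS)  # drug-addition rule
        )
        if new_line:
            line_no += 1
            regimen, line_start = set(), day
        regimen.add(drug)
        last_seen = day
        lines.append((day, drug, line_no))
    return lines

rows = [
    (date(2020, 1, 1), "carboplatin"),
    (date(2020, 1, 8), "pemetrexed"),   # within window -> same regimen, line 1
    (date(2020, 9, 1), "docetaxel"),    # >180-day gap  -> line 2
]
print(assign_lines(rows))
```

Varying `REGIMEN_WINDOW_DAYS` and `GAP_DAYS` corresponds to the assumption-variation step the study describes, since different thresholds change which treatment changes count as a new line.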
Subject(s)
Antineoplastic Combined Chemotherapy Protocols/therapeutic use; Colorectal Neoplasms/drug therapy; Data Management/methods; Lung Neoplasms/drug therapy; Medical Oncology/standards; Stomach Neoplasms/drug therapy; Algorithms; Data Management/standards; Databases, Factual/standards; Databases, Factual/statistics & numerical data; Datasets as Topic/standards; Humans; Medical Oncology/statistics & numerical data; Observational Studies as Topic/standards; Observational Studies as Topic/statistics & numerical data; Retrospective Studies; Software
ABSTRACT
A focused effort is needed to capture the utility and usability of the electronic health record for providing usable and reusable data while reducing documentation burden. A collaborative effort of nurse leaders and experts was able to generate national consensus recommendations on documentation elements related to admission history. The process used in this effort is summarized in a framework that can be used by other groups to develop content that reduces documentation burden while maximizing the creation of usable and reusable data.
Subject(s)
Data Management/standards; Documentation/standards; Electronic Health Records/standards; Intersectoral Collaboration; Organizational Objectives; Practice Guidelines as Topic/standards; Humans; United States
ABSTRACT
Aim: We assessed the extent to which chemotherapy cycles recorded in Hospital Episode Statistics (HES) Admitted Patient Care (APC) were captured in National Cancer Registration & Analysis Service Systemic Anti-Cancer Therapy (SACT) for a cohort of lung cancer patients. Methods: All chemotherapy cycles recorded for linkage eligible lung cancer patients with a National Cancer Registration & Analysis Service diagnosis between 2012 and 2015 were identified in HES APC and SACT. Results: Among a population of 4070 lung cancer patients, 6076 chemotherapy cycles were observed in HES APC data. A total of 61% of cycles were recorded in SACT on the same day, 8% on a different day and 31% were not recorded in SACT. Conclusion: Our results suggest that SACT may not capture all chemotherapy cycles administered to a patient between 2012 and 2016; however, administrative changes mean data after this period may be more complete.
Subject(s)
Antineoplastic Combined Chemotherapy Protocols/administration & dosage; Data Management/statistics & numerical data; Databases, Factual/statistics & numerical data; Lung Neoplasms/drug therapy; Registries/statistics & numerical data; Cohort Studies; Data Management/standards; Databases, Factual/standards; Datasets as Topic; Drug Administration Schedule; England; Hospitalization/statistics & numerical data; Humans; Registries/standards; State Medicine/statistics & numerical data
ABSTRACT
INTRODUCTION: Between 0.02% and 0.04% of articles are retracted. We aim to: (a) describe the reasons for retraction of genetics articles and the time elapsed between the publication of an article and that of the retraction notice due to research misconduct (ie, fabrication, falsification, plagiarism); and (b) compare all these variables between retracted medical genetics (MG) and non-medical genetics (NMG) articles. METHODS: All retracted genetics articles published between 1970 and 2018 were retrieved from the Retraction Watch database. The reasons for retraction were fabrication/falsification, plagiarism, duplication, unreliability, and authorship issues. Articles subject to investigation by a company/institution, journal, the US Office for Research Integrity or a third party were also retrieved. RESULTS: 1582 retracted genetics articles (MG, n=690; NMG, n=892) were identified. Research misconduct and duplication were involved in 33% and 24% of retracted papers, respectively; 37% were subject to investigation. Only 0.8% of articles involved both fabrication/falsification and plagiarism. In this century the incidence of both plagiarism and duplication increased statistically significantly in retracted genetics articles; conversely, fabrication/falsification was significantly reduced. Time to retraction due to scientific misconduct was statistically significantly shorter in the period 2006-2018 than in 1970-2000. Fabrication/falsification was statistically significantly more common in NMG (28%) than in MG (19%) articles. MG articles were significantly more frequently investigated (45%) than NMG articles (31%). Time to retraction of articles due to fabrication/falsification was significantly shorter for MG (mean 4.7 years) than for NMG (mean 6.4 years) articles; no differences for plagiarism (mean 2.3 years) were found. The USA (mainly NMG articles) and China (mainly MG articles) accounted for the largest numbers of retracted articles. CONCLUSION: Genetics is a discipline with a high article retraction rate (estimated retraction rate 0.15%). Fabrication/falsification and plagiarism were almost mutually exclusive reasons for article retraction. Retracted MG articles were more frequently subject to investigation than NMG articles. Articles retracted for fabrication/falsification took 2.0-2.8 times longer to retract than those retracted for plagiarism.
Subject(s)
Biomedical Research/standards; Scientific Misconduct/statistics & numerical data; China; Data Management/standards; Databases, Factual; Humans; Retraction of Publication as Topic
ABSTRACT
With the rapid development of modern information technology, the health care industry is entering a critical stage of intelligence. As health care big data grow, information security issues are becoming increasingly prominent in the management of smart health care, with patient privacy leakage the most serious problem. Strengthening the information management of intelligent health care in the era of big data is therefore an important part of the long-term sustainable development of hospitals. This paper first identifies the key indicators affecting the privacy disclosure of big data in health management, and then establishes a risk-based access control model grounded in fuzzy theory for managing big data in intelligent medical treatment; the model also addresses the inaccuracy of experimental results caused by the lack of real data when dealing with practical problems. Finally, the model is compared with results calculated using the fuzzy toolset in Matlab. The results verify that the model is effective in assessing current safety risks and predicting the range of different risk factors, with a prediction accuracy above 90%.
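As a rough illustration of how a fuzzy risk-access model of this kind works (this is not the paper's calibrated model), triangular membership functions and a small Mamdani-style rule base can score an access request; all breakpoints, rules, and defuzzification weights below are assumptions.

```python
# Minimal sketch of fuzzy risk scoring; parameters are illustrative only.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def access_risk(sensitivity, exposure):
    """Fuzzy risk of a data-access request; both inputs scaled 0-10."""
    low_s, high_s = tri(sensitivity, -1, 0, 6), tri(sensitivity, 4, 10, 11)
    low_e, high_e = tri(exposure, -1, 0, 6), tri(exposure, 4, 10, 11)
    # Rules: risk is high if both inputs are high, low if both are low,
    # medium if they disagree (min = AND, max = OR).
    high_risk = min(high_s, high_e)
    low_risk = min(low_s, low_e)
    mid_risk = max(min(high_s, low_e), min(low_s, high_e))
    # Defuzzify as a weighted average of representative risk levels.
    total = low_risk + mid_risk + high_risk
    return (0.1 * low_risk + 0.5 * mid_risk + 0.9 * high_risk) / total if total else 0.5

print(access_risk(9, 8))   # highly sensitive data, high exposure -> ~0.9
print(access_risk(1, 1))   # low sensitivity and exposure        -> ~0.1
```

A risk-based access controller would compare such a score against a threshold to grant, deny, or escalate the request for review.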
Subject(s)
Big Data; Confidentiality; Data Management/methods; Computer Security; Data Anonymization; Data Management/standards; Fuzzy Logic; Health Care Sector; Humans; Privacy
ABSTRACT
This article aims to open discussion and promote future research on key elements that should be taken into account when considering new ways to organise access to personal data for scientific research, in the perspective of developing innovative medicines. It provides an overview of these key elements: the different ways of accessing data, the theory of essential facilities, the Regulation on the Free Flow of Non-personal Data, the Directive on Open Data and the re-use of public sector information, and the General Data Protection Regulation (GDPR) rules on accessing personal data for scientific research. With a view to fostering research, promoting innovative medicines, and centralising all raw data in large databases located in Europe, we suggest further investigating the possibility of finding acceptable and balanced solutions, with full respect for fundamental rights, private life, and data protection.
Subject(s)
Biomedical Research; Confidentiality/legislation & jurisprudence; Data Management/standards; Information Dissemination; Therapies, Investigational; Diffusion of Innovation; Europe; Humans; Public Sector
ABSTRACT
The American College of Surgeons requires that trauma centers collect and enter data into the National Trauma Data Registry in compliance with the National Trauma Data Standard. ProMedica employs 4 trauma data analysts who are responsible for entering information in a timely manner, validating the data, and analyzing data to evaluate established benchmarks and support the performance improvement and patient safety process. Historically, these analysts were located on-site at ProMedica Toledo Hospital. In 2017, a proposal was developed that included modifications to data collection to streamline processes, move toward paperless documentation, and allow the analysts to telecommute. To measure the effect of these changes, the timeliness of data entry, rate of data validation, productivity, and staff satisfaction were measured. After the transition to electronic data management and home-based workstations, registry data were entered within 30 days and 100% of cases were validated, without sacrificing effective and efficient communication between in-hospital and home-based staff. The institution also benefited from reduced expenses for physical space, lower employee turnover, and decreased absenteeism. The analysts appreciated benefits related to time, travel, environment, and job satisfaction. It is feasible to transition trauma data analysts to a work-from-home arrangement. An all-electronic system of data management and communication makes such an arrangement possible and sustainable. This quality improvement project solved a workspace issue and benefited the trauma program overall, with the timeliness and validation of data entry vastly improved.
Subject(s)
Data Management/standards; Efficiency, Organizational/standards; Electronic Health Records/standards; Quality Assurance, Health Care/standards; Registries/standards; Teleworking/standards; Trauma Centers/standards; Data Management/statistics & numerical data; Efficiency, Organizational/statistics & numerical data; Electronic Health Records/statistics & numerical data; Guidelines as Topic; Humans; Quality Assurance, Health Care/statistics & numerical data; Quality Improvement/standards; Quality Improvement/statistics & numerical data; Registries/statistics & numerical data; Teleworking/statistics & numerical data; Trauma Centers/statistics & numerical data; United States
ABSTRACT
AIMS: Risk-based monitoring approaches in clinical trials are encouraged by regulatory guidance. However, the impact of targeted source data verification (SDV) on data-management (DM) workload and on final data quality needs to be addressed. METHODS: MONITORING was a prospective study comparing full SDV (100% of data verified for all patients) and targeted SDV (only key data verified for all patients), each followed by the same DM program (detecting missing data and checking consistency), with respect to final data quality, global workload and staffing costs. RESULTS: In all, 137 008 data points, including 18 124 key data points, were collected for 126 patients from 6 clinical trials. Compared with the final database obtained using the full SDV monitoring process, the final database obtained using the targeted SDV monitoring process had a residual error rate of 1.47% (95% confidence interval, 1.41-1.53%) on overall data and 0.78% (95% confidence interval, 0.65-0.91%) on key data. There were nearly 4 times more queries per study with targeted SDV than with full SDV (mean ± standard deviation: 132 ± 101 vs 34 ± 26; P = .03). For a handling time of 15 minutes per query, the global workload of the targeted SDV monitoring strategy remained below that of the full SDV strategy. From 25 minutes per query it was above, increasing progressively to a 50% increase at 45 minutes per query. CONCLUSION: Targeted SDV monitoring is accompanied by an increased DM workload, which yields a small proportion of remaining errors on key data (<1%) but may substantially increase trial costs.
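The break-even arithmetic behind this workload comparison follows directly from the reported query counts: targeted SDV saves verification time but generates extra queries, so its advantage disappears once the per-query handling time is large enough. In the sketch below, the mean query counts come from the study; the verification saving is a hypothetical figure chosen only to show how a crossover near 25 minutes per query can arise.

```python
# Per-study mean query counts reported above (targeted SDV generates ~4x more).
Q_FULL, Q_TARGETED = 34, 132

def break_even_minutes(verification_saved_min):
    """Per-query handling time at which the extra queries of targeted SDV
    exactly cancel its verification savings (per study):
        verification_saved = (Q_TARGETED - Q_FULL) * t_star
    """
    return verification_saved_min / (Q_TARGETED - Q_FULL)

# ASSUMPTION for illustration: targeted SDV saves ~2450 minutes of on-site
# verification per study. That would put break-even at 25 min/query,
# consistent with the crossover range the study reports.
print(break_even_minutes(2450.0))
```

Below the break-even handling time, targeted SDV is cheaper overall; above it, query handling dominates and targeted SDV costs more, as the study observed at 25-45 minutes per query.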
Subject(s)
Data Accuracy; Data Collection/standards; Data Management/standards; Databases, Factual/standards; Electronic Health Records/standards; Forms and Records Control/methods; Randomized Controlled Trials as Topic/standards; Workload/standards; Cost-Benefit Analysis; Forms and Records Control/economics; Forms and Records Control/standards; Humans; Prospective Studies
ABSTRACT
The convergence of multiple recent developments in health care information technology and monitoring devices has made possible the creation of remote patient surveillance systems that increase the timeliness and quality of patient care. More convenient, less invasive monitoring devices, including patches, wearables, and biosensors, now allow for continuous physiological data to be gleaned from patients in a variety of care settings across the perioperative experience. These data can be bound into a single data repository, creating so-called data lakes. The high volume and diversity of data in these repositories must be processed into standard formats that can be queried in real time. These data can then be used by sophisticated prediction algorithms currently under development, enabling the early recognition of patterns of clinical deterioration otherwise undetectable to humans. Improved predictions can reduce alarm fatigue. In addition, data are now automatically queriable on a real-time basis such that they can be fed back to clinicians in a time frame that allows for meaningful intervention. These advancements are key components of successful remote surveillance systems. Anesthesiologists have the opportunity to be at the forefront of remote surveillance in the care they provide in the operating room, postanesthesia care unit, and intensive care unit, while also expanding their scope to include high-risk preoperative and postoperative patients on the general care wards. These systems hold the promise of enabling anesthesiologists to detect and intervene upon changes in the clinical status of the patient before adverse events have occurred. Importantly, however, significant barriers still exist to the effective deployment of these technologies and their study in impacting patient outcomes. Studies demonstrating the impact of remote surveillance on patient outcomes are limited. 
Critical to the impact of the technology are strategies of implementation, including who should receive and respond to alerts and how they should respond. Moreover, the lack of cost-effectiveness data and the uncertainty of whether clinical activities surrounding these technologies will be financially reimbursed remain significant challenges to future scale and sustainability. This narrative review will discuss the evolving technical components of remote surveillance systems, the clinical use cases relevant to the anesthesiologist's practice, the existing evidence for their impact on patients, the barriers that exist to their effective implementation and study, and important considerations regarding sustainability and cost-effectiveness.
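As a toy illustration of the alerting pattern such surveillance systems rely on (not any specific product's algorithm), a sliding-window rule over streamed vitals might look like the following; the thresholds, window size, and vital-sign names are assumptions.

```python
from collections import deque

class DeteriorationMonitor:
    """Flag when heart rate trends up while SpO2 trends down over a window
    of recent observations -- a crude stand-in for the prediction algorithms
    described above."""
    def __init__(self, window=5):
        self.hr = deque(maxlen=window)
        self.spo2 = deque(maxlen=window)

    def ingest(self, heart_rate, spo2):
        """Feed one streamed observation; return 'ALERT' or 'ok'."""
        self.hr.append(heart_rate)
        self.spo2.append(spo2)
        full = len(self.hr) == self.hr.maxlen
        worsening = (full
                     and self.hr[-1] > self.hr[0] + 15     # HR rising (assumed threshold)
                     and self.spo2[-1] < self.spo2[0] - 4)  # SpO2 falling (assumed threshold)
        return "ALERT" if worsening else "ok"

m = DeteriorationMonitor(window=3)
print(m.ingest(80, 98))   # window not yet full -> ok
print(m.ingest(90, 96))   # ok
print(m.ingest(99, 93))   # HR +19, SpO2 -5 over the window -> ALERT
```

Requiring a sustained multi-signal trend rather than a single out-of-range value is one simple way such systems reduce the alarm fatigue the review discusses.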
Subject(s)
Anesthesiology/methods; Data Management/methods; Medical Informatics/methods; Quality of Health Care; Remote Sensing Technology/methods; Anesthesiology/economics; Anesthesiology/standards; Cost-Benefit Analysis/methods; Cost-Benefit Analysis/standards; Data Management/economics; Data Management/standards; Humans; Medical Informatics/economics; Medical Informatics/standards; Quality of Health Care/economics; Quality of Health Care/standards; Remote Sensing Technology/economics; Remote Sensing Technology/standards; Time Factors
ABSTRACT
Over two decades, the Australian and New Zealand Society of Cardiac and Thoracic Surgeons (ANZSCTS) cardiac surgery database program has evolved from a single state-based database to a national clinical quality registry program and is now the most comprehensive cardiac surgical registry in Australia. We report the current structure and governance of the program and its key activities.
Subject(s)
Data Management/standards; Quality Assurance, Health Care/statistics & numerical data; Registries; Societies, Medical; Thoracic Surgery/statistics & numerical data; Thoracic Surgical Procedures/standards; Australia; Humans; New Zealand
ABSTRACT
Datasets consist of measurement data and metadata. Metadata provide context, essential for understanding and (re-)using data. Various metadata standards exist for different methods, systems and contexts. However, relevant information arises at different stages across the data lifecycle. Often, this information is defined and standardized only at the publication stage, which can lead to data loss and increased workload. In this study, we developed the Metadatasheet, a metadata standard based on interviews with members of two biomedical consortia and systematic screening of data repositories. It aligns with the data lifecycle, allowing synchronous metadata recording within Microsoft Excel, widely used data-recording software. Additionally, we provide an implementation, the Metadata Workbook, which offers user-friendly features like automation, dynamic adaptation, metadata integrity checks, and export options for various metadata standards. By design, and due to its extensive documentation, the proposed metadata standard simplifies the recording and structuring of metadata for biomedical scientists, promoting practicality and convenience in data management. This framework can accelerate scientific progress by enhancing collaboration and knowledge transfer throughout the intermediate steps of data creation.
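A metadata integrity check of the general kind the Metadata Workbook performs could be sketched as below; the required field names and types are hypothetical examples for illustration, not the Metadatasheet standard's actual vocabulary.

```python
# Hypothetical required fields for one record; NOT the Metadatasheet schema.
REQUIRED_FIELDS = {"sample_id": str, "organism": str, "assay": str,
                   "collection_date": str, "operator": str}

def check_record(record):
    """Return a list of problems for one metadata record (empty = valid).
    Flags fields that are absent/empty, or present with the wrong type."""
    problems = []
    for field, ftype in REQUIRED_FIELDS.items():
        if field not in record or record[field] in ("", None):
            problems.append(f"missing: {field}")
        elif not isinstance(record[field], ftype):
            problems.append(f"wrong type: {field}")
    return problems

record = {"sample_id": "S001", "organism": "Mus musculus",
          "assay": "RNA-seq", "collection_date": "", "operator": "J. Doe"}
print(check_record(record))  # -> ['missing: collection_date']
```

Running such a check at recording time, rather than at publication, is what lets problems be fixed while the experimental context is still available.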
Subject(s)
Data Management; Metadata; Biomedical Research; Data Management/standards; Metadata/standards; Software
ABSTRACT
BACKGROUND: The availability and accuracy of data on a patient's race/ethnicity vary across databases. Discrepancies in data quality can undermine attempts to study health disparities. METHODS: We conducted a systematic review to organize information on the accuracy of race/ethnicity data, stratified by database type and by specific race/ethnicity categories. RESULTS: The review included 43 studies. Disease registries showed consistently high levels of data completeness and accuracy. EHRs frequently showed incomplete and/or inaccurate data on patients' race/ethnicity. Databases had high levels of accurate data for White and Black patients but relatively high levels of misclassification and incomplete data for Hispanic/Latinx patients. Asians, Pacific Islanders, and American Indians/Alaska Natives (AI/ANs) were the most misclassified. Systems-based interventions to increase self-reported data showed improvement in data quality. CONCLUSION: Data on race/ethnicity collected for research and quality improvement appear most reliable. Data accuracy can vary by race/ethnicity status, and better collection standards are needed.
Subject(s)
Data Management; Ethnicity; Racial Groups; Humans; Asian; Data Management/organization & administration; Data Management/standards; Data Management/statistics & numerical data; Ethnicity/statistics & numerical data; Healthcare Disparities/ethnology; Healthcare Disparities/standards; Healthcare Disparities/statistics & numerical data; Hispanic or Latino; Racial Groups/ethnology; Racial Groups/statistics & numerical data; White; Black or African American; Pacific Island People; American Indian or Alaska Native
ABSTRACT
ABSTRACT: There are no standardized methods for collecting and reporting coronavirus disease-2019 (COVID-19) data. We aimed to compare the proportion of patients admitted for COVID-19-related symptoms with that of patients admitted for other reasons who incidentally tested positive for severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). In this retrospective cohort study, data were sampled twice weekly between March 26 and June 6, 2020 from a "COVID-19 dashboard," a system-wide administrative database that includes the number of hospitalized patients with a positive SARS-CoV-2 polymerase chain reaction test. Patient charts were subsequently reviewed and the principal reason for hospitalization abstracted. Data collected during a statewide lockdown revealed that 92 hospitalized patients had positive SARS-CoV-2 test results. Among these individuals, 4.3% were hospitalized for reasons other than COVID-19-related symptoms but were incidentally found to be SARS-CoV-2-positive. After the lockdown was suspended, the total inpatient census of SARS-CoV-2-positive patients increased to 128, 20.3% of whom were hospitalized for non-COVID-19-related complaints. In the absence of a statewide lockdown, there was a significant increase in the proportion of patients admitted for non-COVID-19-related complaints who were incidentally found to be SARS-CoV-2-positive. To ensure data integrity, coding should distinguish between patients with COVID-19-related symptoms and asymptomatic patients carrying the SARS-CoV-2 virus.
Subject(s)
Asymptomatic Infections/epidemiology; COVID-19/epidemiology; Data Management/standards; Hospitalization/statistics & numerical data; COVID-19 Nucleic Acid Testing/statistics & numerical data; Female; Humans; Incidental Findings; Male; Pandemics; Quality Improvement; Retrospective Studies; SARS-CoV-2; Trust
ABSTRACT
OBJECTIVE: To evaluate the reporting quality of randomized controlled trials (RCTs) involving patients with COVID-19 and to analyse influencing factors. METHODS: PubMed, Embase, Web of Science and the Cochrane Library databases were searched to collect RCTs involving patients with COVID-19. The retrieval period was from inception to December 1, 2020. The CONSORT 2010 statement was used to evaluate the overall reporting quality of these RCTs. RESULTS: 53 RCTs were included. The average reporting rate for the 37 items in the CONSORT checklist was 53.85%, with a mean overall adherence score of 13.02±3.546 (range: 7 to 22). Multivariate linear regression analysis showed that the overall adherence score to the CONSORT guideline was associated with journal impact factor (P = 0.006) and endorsement of the CONSORT statement (P = 0.014). CONCLUSION: Although many RCTs of COVID-19 have been published in different journals, the overall reporting quality of these articles was suboptimal and cannot provide valid evidence for clinical decision-making and systematic reviews. Therefore, more journals should endorse the CONSORT statement, and authors should strictly follow the relevant provisions of the CONSORT guideline when reporting articles. Future RCTs should particularly focus on improved reporting of allocation concealment, blinding and estimation of sample size.