1.
PLoS Biol ; 19(4): e3001162, 2021 04.
Article in English | MEDLINE | ID: mdl-33872298

ABSTRACT

Many randomized controlled trials (RCTs) are biased and difficult to reproduce due to methodological flaws and poor reporting. There is increasing attention to responsible research practices and implementation of reporting guidelines, but whether these efforts have improved the methodological quality of RCTs (e.g., a lower risk of bias) is unknown. We therefore mapped risk-of-bias trends over time in RCT publications in relation to journal and author characteristics. Meta-information of 176,620 RCTs published between 1966 and 2018 was extracted. The risk-of-bias probability (random sequence generation, allocation concealment, blinding of patients/personnel, and blinding of outcome assessment) was assessed using a risk-of-bias machine learning tool. This tool was simultaneously validated using 63,327 human risk-of-bias assessments obtained from 17,394 RCTs evaluated in the Cochrane Database of Systematic Reviews (CDSR). Moreover, RCT registration and CONSORT Statement reporting were assessed using automated searches. Publication characteristics included the number of authors, journal impact factor (JIF), and medical discipline. The annual number of published RCTs increased substantially over 4 decades, accompanied by increases in the number of authors per trial (5.2 to 7.8) and institutions (2.9 to 4.8). The risk of bias remained present in most RCTs but decreased over time for allocation concealment (63% to 51%), random sequence generation (57% to 36%), and blinding of outcome assessment (58% to 52%). Trial registration (37% to 47%) and use of the CONSORT Statement (1% to 20%) also increased rapidly. In journals with a higher impact factor (>10), the risk of bias was consistently lower, with higher levels of RCT registration and use of the CONSORT Statement. Automated risk-of-bias predictions had accuracies above 70% for allocation concealment (70.7%), random sequence generation (72.1%), and blinding of patients/personnel (79.8%), but not for blinding of outcome assessment (62.7%). In conclusion, the likelihood of bias in RCTs has generally decreased over the last decades. This optimistic trend may be driven by increased awareness of bias, reinforced by mandatory trial registration and more stringent reporting guidelines and journal requirements. Nevertheless, relatively high probabilities of bias remain, particularly in journals with lower impact factors. This emphasizes that further improvement of RCT registration, conduct, and reporting is still urgently needed.
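The validation against CDSR rests on a simple agreement measure between the tool's predictions and human assessments. As a hedged illustration only, the sketch below computes per-domain accuracy for one bias domain; the label scheme and toy data are hypothetical, not the study's actual tool or pipeline.

```python
# Minimal sketch: validating automated risk-of-bias predictions against
# human assessments, in the spirit of the paper's comparison with Cochrane
# (CDSR) labels. Labels and toy data are hypothetical illustrations.

from typing import List

def accuracy(predicted: List[str], human: List[str]) -> float:
    """Fraction of trials where the tool and the human assessor agree."""
    assert len(predicted) == len(human)
    agree = sum(p == h for p, h in zip(predicted, human))
    return agree / len(predicted)

# Toy example for one bias domain (e.g., allocation concealment):
tool_labels  = ["high", "low", "low", "high", "low"]
human_labels = ["high", "low", "high", "high", "low"]
print(f"accuracy: {accuracy(tool_labels, human_labels):.1%}")  # 80.0%
```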


Subject(s)
Publications , Randomized Controlled Trials as Topic/methods , Randomized Controlled Trials as Topic/statistics & numerical data , Bias , Bibliometrics , Data Accuracy , Data Management/history , Data Management/methods , Data Management/standards , Data Management/trends , Databases, Bibliographic/history , Databases, Bibliographic/standards , Databases, Bibliographic/trends , History, 20th Century , History, 21st Century , Humans , Outcome Assessment, Health Care , Public Reporting of Healthcare Data , Publications/history , Publications/standards , Publications/statistics & numerical data , Publications/trends , Quality Improvement/history , Quality Improvement/trends , Randomized Controlled Trials as Topic/history , Systematic Reviews as Topic
4.
Cytometry A ; 99(1): 60-67, 2021 01.
Article in English | MEDLINE | ID: mdl-33197114

ABSTRACT

Data management is essential in a flow cytometry (FCM) shared resource laboratory (SRL) for the integrity of collected data and its long-term preservation, as described in the 2016 ISAC SRL Best Practices publication (Barsky et al., Cytometry Part A 89A (2016): 1017-1030). The SARS-CoV-2 pandemic introduced an array of challenges to the operation of SRLs. The subsequent laboratory shutdowns and access restrictions highlighted well-established practices that withstood a sudden change in operations and exposed areas needing improvement. From a data management perspective, the most significant challenges were data access for remote analysis and workstation management. Lessons learned from this period also emphasize the importance of safeguarding collected data from loss in emergencies such as fires or natural disasters, where the physical hardware storing the data could be directly affected. Here, we describe two data management systems that proved successful during the pandemic emergency: remote access and automated data transfer. We also discuss other situations that could lead to data loss or to challenges in interpreting data. © 2020 International Society for Advancement of Cytometry.
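One practical way to realize the "automated data transfer" the authors describe is a checksum-verified copy from the acquisition workstation to shared storage, so a file is never trusted until the copy verifies. This is an illustrative sketch under assumed paths and policy, not the authors' system.

```python
# Illustrative sketch: move acquisition files off the workstation to network
# storage, verifying each copy by SHA-256 before trusting it. Paths, the
# *.fcs pattern, and the transfer policy are hypothetical assumptions.

import hashlib
import shutil
from pathlib import Path

def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def transfer(src_dir: Path, dst_dir: Path, pattern: str = "*.fcs") -> None:
    dst_dir.mkdir(parents=True, exist_ok=True)
    for src in src_dir.glob(pattern):
        dst = dst_dir / src.name
        shutil.copy2(src, dst)            # copy, preserving timestamps
        if sha256(src) != sha256(dst):    # verify before trusting the copy
            dst.unlink()
            raise IOError(f"checksum mismatch for {src.name}")

# Hypothetical usage:
# transfer(Path("D:/cytometer_exports"), Path("//fileserver/srl_data"))
```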


Subject(s)
COVID-19/epidemiology , Data Management/trends , Flow Cytometry/trends , Laboratories/trends , Teleworking/trends , COVID-19/prevention & control , Data Management/standards , Flow Cytometry/standards , Humans , Laboratories/standards , Teleworking/standards
5.
Value Health ; 24(10): 1484-1489, 2021 10.
Article in English | MEDLINE | ID: mdl-34593172

ABSTRACT

OBJECTIVES: To explore the use of data dashboards to convey information about a drug's value and to reduce the need to collapse dimensions of value into a single measure. METHODS: Review of the literature on US drug value assessment frameworks and discussion of how data dashboards can improve the display of information on value. RESULTS: The incremental cost per quality-adjusted life-year (QALY) ratio is a useful starting point for conversations about a drug's value, but it cannot reflect all the elements of value that different audiences care about deeply. Data dashboards for drug value assessments can draw on experience from other contexts. Decision makers should be presented with well-designed value dashboards containing various metrics, including conventional cost-per-QALY ratios as well as measures of a drug's impact on clinical and patient-centric outcomes and on budgetary and distributional consequences, to convey a drug's value along its different dimensions. CONCLUSIONS: The advent of US drug value frameworks in health care has forced a concomitant effort to develop appropriate information displays. Researchers should formally test different formats and elements.


Subject(s)
Data Management/methods , Pharmaceutical Preparations/economics , Budgets , Data Management/standards , Data Management/trends , Humans , Social Media/instrumentation , Social Media/standards , Social Media/statistics & numerical data , United States
6.
Future Oncol ; 17(15): 1865-1877, 2021 May.
Article in English | MEDLINE | ID: mdl-33629590

ABSTRACT

Retrospective observational research relies on databases that do not routinely record lines of therapy or reasons for treatment change. In this study, standardized approaches for estimating lines of therapy were developed and evaluated. A set of rules was defined, their underlying assumptions were varied, and macros were developed to apply them to large datasets. Results were investigated in an iterative process to refine the line-of-therapy algorithms in three different cancers (lung, colorectal and gastric). Three primary factors were evaluated and included in the estimation of lines of therapy in oncology: the definition of a treatment regimen, the addition/removal of drugs, and gap periods. Algorithms and associated Statistical Analysis Software (SAS®) macros for line-of-therapy identification are provided to facilitate and standardize the use of real-world databases for oncology research.


Lay abstract Most, if not all, real-world healthcare databases do not contain data explaining treatment changes, requiring that rules be applied to estimate when treatment changes may reflect advancement of underlying disease. This study investigated three tumor types (lung, colorectal and gastric cancer) to develop and provide rules that researchers can apply to real-world databases. The resulting algorithms and associated SAS® macros from this work are provided for use in the Supplementary data.
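To make the three estimation factors concrete, here is a minimal Python sketch of line-of-therapy assignment using a regimen-defining window, an addition rule, and a gap rule. The 28-day window and 180-day gap are invented parameters for illustration; the authors' actual rules live in the published SAS® macros.

```python
# A minimal line-of-therapy (LoT) sketch built on the study's three factors:
# a regimen-defining window, drug additions, and treatment gaps.
# REGIMEN_WINDOW_DAYS and MAX_GAP_DAYS are hypothetical parameters.

from datetime import date

REGIMEN_WINDOW_DAYS = 28   # drugs starting within this window form one regimen
MAX_GAP_DAYS = 180         # a longer treatment-free gap starts a new line

def assign_lines(admins: list[tuple[date, str]]) -> list[tuple[date, str, int]]:
    """admins: (administration_date, drug) pairs; returns (date, drug, line)."""
    line = 0
    regimen: set[str] = set()
    line_start = last_seen = None
    out = []
    for day, drug in sorted(admins):
        new_line = (
            line == 0
            or (day - last_seen).days > MAX_GAP_DAYS             # gap rule
            or (drug not in regimen                              # addition rule
                and (day - line_start).days > REGIMEN_WINDOW_DAYS)
        )
        if new_line:
            line += 1
            regimen = {drug}
            line_start = day
        else:
            regimen.add(drug)
        last_seen = day
        out.append((day, drug, line))
    return out
```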


Subject(s)
Antineoplastic Combined Chemotherapy Protocols/therapeutic use , Colorectal Neoplasms/drug therapy , Data Management/methods , Lung Neoplasms/drug therapy , Medical Oncology/standards , Stomach Neoplasms/drug therapy , Algorithms , Data Management/standards , Databases, Factual/standards , Databases, Factual/statistics & numerical data , Datasets as Topic/standards , Humans , Medical Oncology/statistics & numerical data , Observational Studies as Topic/standards , Observational Studies as Topic/statistics & numerical data , Retrospective Studies , Software
7.
J Nurs Adm ; 51(3): 162-167, 2021 Mar 01.
Article in English | MEDLINE | ID: mdl-33570374

ABSTRACT

A focused effort is needed to capture the utility and usability of the electronic health record for providing usable and reusable data while reducing documentation burden. A collaborative effort of nurse leaders and experts was able to generate national consensus recommendations on documentation elements related to admission history. The process used in this effort is summarized in a framework that can be used by other groups to develop content that reduces documentation burden while maximizing the creation of usable and reusable data.


Subject(s)
Data Management/standards , Documentation/standards , Electronic Health Records/standards , Intersectoral Collaboration , Organizational Objectives , Practice Guidelines as Topic/standards , Humans , United States
8.
Future Oncol ; 16(3): 4455-4460, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31820657

ABSTRACT

Aim: We assessed the extent to which chemotherapy cycles recorded in Hospital Episode Statistics (HES) Admitted Patient Care (APC) were captured in the National Cancer Registration & Analysis Service Systemic Anti-Cancer Therapy (SACT) dataset for a cohort of lung cancer patients. Methods: All chemotherapy cycles recorded for linkage-eligible lung cancer patients with a National Cancer Registration & Analysis Service diagnosis between 2012 and 2015 were identified in HES APC and SACT. Results: Among a population of 4070 lung cancer patients, 6076 chemotherapy cycles were observed in HES APC data. A total of 61% of cycles were recorded in SACT on the same day, 8% on a different day, and 31% were not recorded in SACT. Conclusion: Our results suggest that SACT may not capture all chemotherapy cycles administered to patients in this period; however, subsequent administrative changes mean data after this period may be more complete.
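The capture comparison reduces to classifying each HES APC cycle against a patient's SACT records: same day, different day, or absent. A hedged sketch follows, assuming a simple per-patient date layout and an illustrative 7-day tolerance for "recorded on a different day"; the study's actual linkage specification is not stated in the abstract.

```python
# Hedged sketch of the HES APC vs SACT capture comparison. The 7-day
# tolerance and the dict-of-dates layout are illustrative assumptions.

from datetime import date, timedelta

TOLERANCE = timedelta(days=7)

def classify(hes_cycles: dict[str, list[date]],
             sact_cycles: dict[str, list[date]]) -> dict[str, int]:
    counts = {"same_day": 0, "different_day": 0, "not_in_sact": 0}
    for patient, dates in hes_cycles.items():
        sact = sact_cycles.get(patient, [])
        for d in dates:
            if d in sact:
                counts["same_day"] += 1
            elif any(abs(d - s) <= TOLERANCE for s in sact):
                counts["different_day"] += 1
            else:
                counts["not_in_sact"] += 1
    return counts
```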


Subject(s)
Antineoplastic Combined Chemotherapy Protocols/administration & dosage , Data Management/statistics & numerical data , Databases, Factual/statistics & numerical data , Lung Neoplasms/drug therapy , Registries/statistics & numerical data , Cohort Studies , Data Management/standards , Databases, Factual/standards , Datasets as Topic , Drug Administration Schedule , England , Hospitalization/statistics & numerical data , Humans , Registries/standards , State Medicine/statistics & numerical data
9.
J Med Genet ; 56(11): 734-740, 2019 11.
Article in English | MEDLINE | ID: mdl-31300549

ABSTRACT

INTRODUCTION: Between 0.02% and 0.04% of articles are retracted. We aim to: (a) describe the reasons for retraction of genetics articles and the time elapsed between the publication of an article and that of the retraction notice due to research misconduct (ie, fabrication, falsification, plagiarism); and (b) compare these variables between retracted medical genetics (MG) and non-medical genetics (NMG) articles. METHODS: All retracted genetics articles published between 1970 and 2018 were retrieved from the Retraction Watch database. The reasons for retraction were fabrication/falsification, plagiarism, duplication, unreliability, and authorship issues. Articles subject to investigation by a company/institution, a journal, the US Office for Research Integrity or a third party were also retrieved. RESULTS: 1582 retracted genetics articles (MG, n=690; NMG, n=892) were identified. Research misconduct and duplication were involved in 33% and 24% of retracted papers, respectively; 37% were subject to investigation. Only 0.8% of articles involved both fabrication/falsification and plagiarism. In this century, the incidence of both plagiarism and duplication increased statistically significantly in retracted genetics articles; conversely, fabrication/falsification was significantly reduced. Time to retraction due to scientific misconduct was statistically significantly shorter in the period 2006-2018 than in 1970-2000. Fabrication/falsification was statistically significantly more common in NMG (28%) than in MG (19%) articles. MG articles were significantly more frequently investigated (45%) than NMG articles (31%). Time to retraction of articles due to fabrication/falsification was significantly shorter for MG (mean 4.7 years) than for NMG (mean 6.4 years) articles; no differences were found for plagiarism (mean 2.3 years). The USA (mainly NMG articles) and China (mainly MG articles) accounted for the largest numbers of retracted articles. CONCLUSION: Genetics is a discipline with a high article retraction rate (estimated at 0.15%). Fabrication/falsification and plagiarism were almost mutually exclusive reasons for article retraction. Retracted MG articles were more frequently subject to investigation than NMG articles. Articles retracted for fabrication/falsification took 2.0-2.8 times longer to retract than those involving plagiarism.


Subject(s)
Biomedical Research/standards , Scientific Misconduct/statistics & numerical data , China , Data Management/standards , Databases, Factual , Humans , Retraction of Publication as Topic
10.
Health Care Manag Sci ; 23(3): 427-442, 2020 Sep.
Article in English | MEDLINE | ID: mdl-31338637

ABSTRACT

With the rapid development of modern information technology, the health care industry is entering a critical stage of intelligent operation. As health care big data grows, information security issues are becoming increasingly prominent in smart health care management, with patient privacy leakage the most serious problem. Strengthening the information management of intelligent health care in the era of big data is therefore an important part of the long-term sustainable development of hospitals. This paper first identifies the key indicators affecting privacy disclosure of big data in health management and then establishes a fuzzy-theory-based risk access control model for managing big data in intelligent medical care, addressing the inaccuracy that arises when real data are unavailable for actual problems. Finally, the model is compared with results calculated using the fuzzy toolbox in Matlab. The results verify that the model is effective in assessing current safety risks and predicting the range of different risk factors, with a prediction accuracy above 90%.
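A toy version of a fuzzy risk model can make the approach tangible. The sketch below fuzzifies two hypothetical inputs (data sensitivity, access frequency), fires a small invented rule base, and recovers a crisp risk score; the paper's actual indicators and rules are not reproduced here.

```python
# Toy fuzzy risk scorer in the spirit of the paper's fuzzy-theory access
# control. Membership functions, rules, and output levels are invented.

def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def risk_score(sensitivity: float, frequency: float) -> float:
    """Inputs in [0, 1]; returns a crisp risk in [0, 1] as the
    activation-weighted average of rule outputs (zero-order Sugeno style)."""
    lo = lambda x: tri(x, -0.5, 0.0, 0.6)
    hi = lambda x: tri(x, 0.4, 1.0, 1.5)
    rules = [
        (min(hi(sensitivity), hi(frequency)), 0.9),   # -> high risk
        (min(hi(sensitivity), lo(frequency)), 0.6),   # -> medium risk
        (min(lo(sensitivity), hi(frequency)), 0.4),   # -> medium-low risk
        (min(lo(sensitivity), lo(frequency)), 0.1),   # -> low risk
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0

print(f"{risk_score(0.9, 0.8):.2f}")  # high sensitivity + frequent access
```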


Subject(s)
Big Data , Confidentiality , Data Management/methods , Computer Security , Data Anonymization , Data Management/standards , Fuzzy Logic , Health Care Sector , Humans , Privacy
11.
Eur J Health Law ; 27(3): 195-212, 2020 05 18.
Article in English | MEDLINE | ID: mdl-33652399

ABSTRACT

This article aims to open discussion and promote future research on key elements that should be taken into account when considering new ways to organise access to personal data for scientific research, with a view to developing innovative medicines. It provides an overview of these key elements: the different ways of accessing data, the theory of essential facilities, the Regulation on the Free Flow of Non-Personal Data, the Directive on Open Data and the re-use of public sector information, and the General Data Protection Regulation (GDPR) rules on accessing personal data for scientific research. With a view to fostering research, promoting innovative medicines, and centralising raw data in large databases located in Europe, we suggest further investigating the possibility of finding acceptable and balanced solutions, with full respect for fundamental rights, private life, and data protection.


Subject(s)
Biomedical Research , Confidentiality/legislation & jurisprudence , Data Management/standards , Information Dissemination , Therapies, Investigational , Diffusion of Innovation , Europe , Humans , Public Sector
12.
J Trauma Nurs ; 27(3): 170-176, 2020.
Article in English | MEDLINE | ID: mdl-32371736

ABSTRACT

The American College of Surgeons requires that trauma centers collect and enter data into the National Trauma Data Registry in compliance with the National Trauma Data Standard. ProMedica employs 4 trauma data analysts who are responsible for entering information in a timely manner, validating the data, and analyzing data to evaluate established benchmarks and support the performance improvement and patient safety process. Historically, these analysts were located on-site at ProMedica Toledo Hospital. In 2017, a proposal was developed that included modifications to data collection to streamline processes, move toward paperless documentation, and allow the analysts to telecommute. To measure the effect of these changes, the timeliness of data entry, rate of data validation, productivity, and staff satisfaction were measured. After the transition to electronic data management and home-based workstations, registry data were entered within 30 days and 100% of cases were validated, without sacrificing effective and efficient communication between in-hospital and home-based staff. The institution also benefited from reduced expenses for physical space, lower employee turnover, and decreased absenteeism. The analysts appreciated benefits related to time, travel, environment, and job satisfaction. It is feasible to transition trauma data analysts to a work-from-home arrangement. An all-electronic system of data management and communication makes such an arrangement possible and sustainable. This quality improvement project solved a workspace issue and benefited the trauma program overall, with the timeliness and validation of data entry vastly improved.


Subject(s)
Data Management/standards , Efficiency, Organizational/standards , Electronic Health Records/standards , Quality Assurance, Health Care/standards , Registries/standards , Teleworking/standards , Trauma Centers/standards , Data Management/statistics & numerical data , Efficiency, Organizational/statistics & numerical data , Electronic Health Records/statistics & numerical data , Guidelines as Topic , Humans , Quality Assurance, Health Care/statistics & numerical data , Quality Improvement/standards , Quality Improvement/statistics & numerical data , Registries/statistics & numerical data , Teleworking/statistics & numerical data , Trauma Centers/statistics & numerical data , United States
13.
Br J Clin Pharmacol ; 85(12): 2784-2792, 2019 12.
Article in English | MEDLINE | ID: mdl-31471967

ABSTRACT

AIMS: Risk-based monitoring approaches in clinical trials are encouraged by regulatory guidance. However, the impact of targeted source data verification (SDV) on data-management (DM) workload and on final data quality needs to be addressed. METHODS: MONITORING was a prospective study comparing full SDV (100% of data verified for all patients) and targeted SDV (only key data verified for all patients), each followed by the same DM program (detecting missing data and checking consistency), with respect to final data quality, global workload and staffing costs. RESULTS: In all, 137,008 data points, including 18,124 key data points, were collected for 126 patients from 6 clinical trials. Compared with the final database obtained using the full SDV monitoring process, the final database obtained using the targeted SDV process had a residual error rate of 1.47% (95% confidence interval, 1.41-1.53%) on overall data and 0.78% (95% confidence interval, 0.65-0.91%) on key data. There were nearly 4 times more queries per study with targeted SDV than with full SDV (mean ± standard deviation: 132 ± 101 vs 34 ± 26; P = .03). For a handling time of 15 minutes per query, the global workload of the targeted SDV monitoring strategy remained below that of the full SDV strategy; from 25 minutes per query it was above, increasing progressively to a 50% increase at 45 minutes per query. CONCLUSION: Targeted SDV monitoring is accompanied by an increased DM workload, which yields a small proportion of remaining errors on key data (<1%) but may substantially increase trial costs.
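The query-volume arithmetic can be reproduced directly from the abstract's figures. The sketch below covers only DM query-handling time for the 15/25/45-minute scenarios; the study's global workload also includes on-site SDV time, which targeted SDV reduces, which is why the global targeted workload stayed below full SDV at 15 minutes per query.

```python
# Back-of-envelope workload comparison. Query counts (34 vs 132 per study)
# and the 15/25/45-minute scenarios come from the abstract; everything this
# computes is DM query-handling time only, not the study's global workload.

QUERIES_FULL, QUERIES_TARGETED = 34, 132   # mean queries per study

def dm_workload_hours(queries: int, minutes_per_query: float) -> float:
    return queries * minutes_per_query / 60

for mins in (15, 25, 45):
    full = dm_workload_hours(QUERIES_FULL, mins)
    targeted = dm_workload_hours(QUERIES_TARGETED, mins)
    extra = (targeted - full) / full * 100
    print(f"{mins} min/query: full={full:.1f} h, targeted={targeted:.1f} h "
          f"(+{extra:.0f}% query-handling time)")
```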


Subject(s)
Data Accuracy , Data Collection/standards , Data Management/standards , Databases, Factual/standards , Electronic Health Records/standards , Forms and Records Control/methods , Randomized Controlled Trials as Topic/standards , Workload/standards , Cost-Benefit Analysis , Forms and Records Control/economics , Forms and Records Control/standards , Humans , Prospective Studies
14.
Anesth Analg ; 129(3): 726-734, 2019 09.
Article in English | MEDLINE | ID: mdl-31425213

ABSTRACT

The convergence of multiple recent developments in health care information technology and monitoring devices has made possible the creation of remote patient surveillance systems that increase the timeliness and quality of patient care. More convenient, less invasive monitoring devices, including patches, wearables, and biosensors, now allow continuous physiological data to be gleaned from patients in a variety of care settings across the perioperative experience. These data can be bound into a single data repository, creating so-called data lakes. The high volume and diversity of data in these repositories must be processed into standard formats that can be queried in real time. These data can then be used by sophisticated prediction algorithms currently under development, enabling the early recognition of patterns of clinical deterioration otherwise undetectable to humans. Improved predictions can reduce alarm fatigue. In addition, data are now automatically queryable on a real-time basis, such that they can be fed back to clinicians in a time frame that allows for meaningful intervention. These advancements are key components of successful remote surveillance systems. Anesthesiologists have the opportunity to be at the forefront of remote surveillance in the care they provide in the operating room, postanesthesia care unit, and intensive care unit, while also expanding their scope to include high-risk preoperative and postoperative patients on the general care wards. These systems hold the promise of enabling anesthesiologists to detect and intervene upon changes in a patient's clinical status before adverse events occur. Importantly, however, significant barriers still exist to deploying these technologies effectively and to studying their impact on patient outcomes. Studies demonstrating the impact of remote surveillance on patient outcomes are limited. Critical to the technology's impact are implementation strategies, including who should receive and respond to alerts and how they should respond. Moreover, the lack of cost-effectiveness data and uncertainty over whether clinical activities surrounding these technologies will be financially reimbursed remain significant challenges to future scale and sustainability. This narrative review discusses the evolving technical components of remote surveillance systems, the clinical use cases relevant to the anesthesiologist's practice, the existing evidence for their impact on patients, the barriers to their effective implementation and study, and important considerations regarding sustainability and cost-effectiveness.


Subject(s)
Anesthesiology/methods , Data Management/methods , Medical Informatics/methods , Quality of Health Care , Remote Sensing Technology/methods , Anesthesiology/economics , Anesthesiology/standards , Cost-Benefit Analysis/methods , Cost-Benefit Analysis/standards , Data Management/economics , Data Management/standards , Humans , Medical Informatics/economics , Medical Informatics/standards , Quality of Health Care/economics , Quality of Health Care/standards , Remote Sensing Technology/economics , Remote Sensing Technology/standards , Time Factors
15.
Heart Lung Circ ; 28(10): 1459-1462, 2019 Oct.
Article in English | MEDLINE | ID: mdl-30962063

ABSTRACT

Over two decades, the Australian and New Zealand Society of Cardiac and Thoracic Surgeons (ANZSCTS) cardiac surgery database program has evolved from a single state-based database to a national clinical quality registry program and is now the most comprehensive cardiac surgical registry in Australia. We report the current structure and governance of the program and its key activities.


Subject(s)
Data Management/standards , Quality Assurance, Health Care/statistics & numerical data , Registries , Societies, Medical , Thoracic Surgery/statistics & numerical data , Thoracic Surgical Procedures/standards , Australia , Humans , New Zealand
17.
Sci Data ; 11(1): 524, 2024 May 22.
Article in English | MEDLINE | ID: mdl-38778016

ABSTRACT

Datasets consist of measurement data and metadata. Metadata provides context essential for understanding and (re-)using data. Various metadata standards exist for different methods, systems and contexts. However, relevant information arises at different stages across the data lifecycle and is often defined and standardized only at the publication stage, which can lead to data loss and increased workload. In this study, we developed the Metadatasheet, a metadata standard based on interviews with members of two biomedical consortia and on systematic screening of data repositories. It aligns with the data lifecycle, allowing synchronous metadata recording within Microsoft Excel, a widely used data-recording software. Additionally, we provide an implementation, the Metadata Workbook, that offers user-friendly features such as automation, dynamic adaptation, metadata integrity checks, and export options for various metadata standards. By design, and due to its extensive documentation, the proposed metadata standard simplifies the recording and structuring of metadata for biomedical scientists, promoting practicality and convenience in data management. This framework can accelerate scientific progress by enhancing collaboration and knowledge transfer throughout the intermediate steps of data creation.
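The "metadata integrity checks" the Workbook automates amount to validating a sheet against required fields. A hypothetical sketch is shown below; the sheet name, column names, and required-field list are invented, so consult the published standard for the real schema.

```python
# Hypothetical metadata integrity check: load a metadata sheet from Excel
# and report missing columns or empty required fields. All names invented.

import pandas as pd

REQUIRED_FIELDS = ["sample_id", "organism", "assay", "date_collected"]

def check_metadata(xlsx_path: str, sheet: str = "metadata") -> list[str]:
    df = pd.read_excel(xlsx_path, sheet_name=sheet)
    problems = []
    for field in REQUIRED_FIELDS:
        if field not in df.columns:
            problems.append(f"missing column: {field}")
        elif df[field].isna().any():
            rows = df.index[df[field].isna()].tolist()
            problems.append(f"empty {field} in rows {rows}")
    return problems

# Hypothetical usage:
# problems = check_metadata("experiment_metadata.xlsx")
```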


Subject(s)
Data Management , Metadata , Biomedical Research , Data Management/standards , Metadata/standards , Software
18.
Am J Surg ; 226(4): 463-470, 2023 10.
Article in English | MEDLINE | ID: mdl-37230870

ABSTRACT

BACKGROUND: The availability and accuracy of data on a patient's race/ethnicity vary across databases. Discrepancies in data quality can undermine attempts to study health disparities. METHODS: We conducted a systematic review to organize information on the accuracy of race/ethnicity data, stratified by database type and by specific race/ethnicity categories. RESULTS: The review included 43 studies. Disease registries showed consistently high levels of data completeness and accuracy. EHRs frequently showed incomplete and/or inaccurate data on patients' race/ethnicity. Databases had highly accurate data for White and Black patients but relatively high levels of misclassification and incomplete data for Hispanic/Latinx patients. Asians, Pacific Islanders, and American Indians/Alaska Natives (AI/ANs) were the most misclassified. Systems-based interventions to increase self-reported data improved data quality. CONCLUSION: Data on race/ethnicity collected for the purposes of research and quality improvement appear most reliable. Data accuracy can vary by race/ethnicity status, and better collection standards are needed.
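The two quality measures the review compares, completeness and accuracy against self-report, are simple to compute. A hedged sketch with invented column names and toy data:

```python
# Sketch of completeness (share of patients with any recorded value) and
# accuracy (agreement with self-report as reference). Toy data, invented names.

import pandas as pd

df = pd.DataFrame({
    "recorded":    ["White", "Black", None,    "White",    "Asian"],
    "self_report": ["White", "Black", "Asian", "Hispanic", "Asian"],
})

completeness = df["recorded"].notna().mean()
complete = df.dropna(subset=["recorded"])
accuracy = (complete["recorded"] == complete["self_report"]).mean()
print(f"completeness: {completeness:.0%}, accuracy among recorded: {accuracy:.0%}")
```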


Subject(s)
Data Management , Ethnicity , Racial Groups , Humans , Asian , Data Management/organization & administration , Data Management/standards , Data Management/statistics & numerical data , Ethnicity/statistics & numerical data , Healthcare Disparities/ethnology , Healthcare Disparities/standards , Healthcare Disparities/statistics & numerical data , Hispanic or Latino , Racial Groups/ethnology , Racial Groups/statistics & numerical data , White , Black or African American , Pacific Island People , American Indian or Alaska Native
19.
PLoS One ; 16(9): e0257093, 2021.
Article in English | MEDLINE | ID: mdl-34555033

ABSTRACT

OBJECTIVE: To evaluate the reporting quality of randomized controlled trials (RCTs) regarding patients with COVID-19 and to analyse the influencing factors. METHODS: PubMed, Embase, Web of Science and the Cochrane Library databases were searched to collect RCTs regarding patients with COVID-19. The retrieval period was from inception to December 1, 2020. The CONSORT 2010 statement was used to evaluate the overall reporting quality of these RCTs. RESULTS: 53 RCTs were included. The average reporting rate for the 37 items in the CONSORT checklist was 53.85%, with a mean overall adherence score of 13.02 ± 3.546 (range: 7 to 22). Multivariate linear regression analysis showed that the overall adherence score to the CONSORT guideline was associated with journal impact factor (P = 0.006) and endorsement of the CONSORT statement (P = 0.014). CONCLUSION: Although many RCTs of COVID-19 have been published in different journals, the overall reporting quality of these articles was suboptimal and cannot provide valid evidence for clinical decision-making and systematic reviews. More journals should therefore endorse the CONSORT statement, and authors should strictly follow the relevant provisions of the CONSORT guideline when reporting articles. Future RCTs should particularly focus on improving the detailed reporting of allocation concealment, blinding and estimation of sample size.
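The analysis pipeline, summing adherence over 37 checklist items per trial and regressing the score on candidate factors, can be sketched as follows. Variable names and the simulated data are placeholders; statsmodels' OLS is one standard way to fit such a multivariate linear model, not necessarily the authors' software.

```python
# Sketch: CONSORT adherence scoring plus multivariate linear regression.
# All data here are simulated placeholders, not the study's dataset.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 53
checklist = rng.integers(0, 2, size=(n, 37))    # 1 = checklist item reported
df = pd.DataFrame({
    "score": checklist.sum(axis=1),              # 0-37 adherence score
    "impact_factor": rng.uniform(1, 30, n),
    "endorses_consort": rng.integers(0, 2, n),
})

X = sm.add_constant(df[["impact_factor", "endorses_consort"]])
model = sm.OLS(df["score"], X).fit()
print(model.summary())
```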


Subject(s)
COVID-19/epidemiology , Publications/standards , Publishing/standards , Randomized Controlled Trials as Topic/standards , Data Management/standards , Guideline Adherence/standards , Humans , Journal Impact Factor , PubMed/standards , SARS-CoV-2/pathogenicity
20.
Medicine (Baltimore) ; 100(35): e26972, 2021 Sep 03.
Article in English | MEDLINE | ID: mdl-34477127

ABSTRACT

ABSTRACT: There are no standardized methods for collecting and reporting coronavirus disease 2019 (COVID-19) data. We aimed to compare the proportion of patients admitted for COVID-19-related symptoms with that of patients admitted for other reasons who incidentally tested positive for severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). This was a retrospective cohort study. Data were sampled twice weekly between March 26 and June 6, 2020 from a "COVID-19 dashboard," a system-wide administrative database that includes the number of hospitalized patients with a positive SARS-CoV-2 polymerase chain reaction test. Patient charts were subsequently reviewed and the principal reason for hospitalization abstracted. Data collected during a statewide lockdown revealed that 92 hospitalized patients had positive SARS-CoV-2 test results. Among these individuals, 4.3% were hospitalized for reasons other than COVID-19-related symptoms but were incidentally found to be SARS-CoV-2-positive. After the lockdown was suspended, the total inpatient census of SARS-CoV-2-positive patients increased to 128, 20.3% of whom were hospitalized for non-COVID-19-related complaints. In the absence of a statewide lockdown, there was a significant increase in the proportion of patients admitted for non-COVID-19-related complaints who were incidentally found to be SARS-CoV-2-positive. To ensure data integrity, coding should distinguish between patients with COVID-19-related symptoms and asymptomatic patients carrying the SARS-CoV-2 virus.
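The reported proportions (4.3% of 92 vs 20.3% of 128) can be checked with a two-proportion z-test; this is one conventional choice, as the abstract does not state which test the authors used.

```python
# Arithmetic check of the lockdown vs post-lockdown comparison using a
# two-proportion z-test. Counts are back-calculated from the percentages.

from statsmodels.stats.proportion import proportions_ztest

counts = [round(0.043 * 92), round(0.203 * 128)]   # ~4 and ~26 patients
nobs = [92, 128]
stat, p = proportions_ztest(counts, nobs)
print(f"z = {stat:.2f}, p = {p:.4f}")              # p < 0.05 -> significant
```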


Subject(s)
Asymptomatic Infections/epidemiology , COVID-19/epidemiology , Data Management/standards , Hospitalization/statistics & numerical data , COVID-19 Nucleic Acid Testing/statistics & numerical data , Female , Humans , Incidental Findings , Male , Pandemics , Quality Improvement , Retrospective Studies , SARS-CoV-2 , Trust