Results 1 - 20 of 87
1.
J Obstet Gynaecol Can; 46(6): 102343, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38160795

ABSTRACT

We investigated the validity of the International Statistical Classification of Diseases and Related Health Problems, 10th Revision, Canadian modification (ICD-10-CA) diagnostic codes for surgery for benign gynaecologic conditions in the Canadian Institute for Health Information Discharge Abstract Database (CIHI-DAD), the main source of routinely collected health data in Canada. Reabstracted data from patient charts were compared to ICD-10-CA codes, and measures of validity were calculated with 95% confidence intervals. A total of 1068 procedures were identified. Objective, structural diagnoses (fibroids, prolapse) had higher sensitivity and near-perfect Kappa coefficients, while more subjective, symptomatic diagnoses (abnormal uterine bleeding, pelvic pain) had lower sensitivity and moderate-to-substantial Kappa coefficients. Specificity, positive predictive values, and negative predictive values were generally high for all diagnoses. These findings support the use of CIHI-DAD data for gynaecologic research.
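
As a rough illustration of the validity measures reported here, the sketch below computes sensitivity, specificity, PPV, NPV (each with a 95% Wilson confidence interval) and Cohen's Kappa from a single 2x2 code-versus-chart table; the counts are hypothetical, not taken from the study.

```python
import math

def validity_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, NPV with 95% Wilson CIs, plus Cohen's kappa."""
    def wilson(k, n, z=1.96):
        p = k / n
        centre = (p + z**2 / (2 * n)) / (1 + z**2 / n)
        half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
        return p, centre - half, centre + half
    n = tp + fp + fn + tn
    po = (tp + tn) / n                                            # observed agreement
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2   # chance agreement
    return {
        "sensitivity": wilson(tp, tp + fn),
        "specificity": wilson(tn, tn + fp),
        "ppv": wilson(tp, tp + fp),
        "npv": wilson(tn, tn + fn),
        "kappa": (po - pe) / (1 - pe),
    }

# Hypothetical counts for one diagnosis (e.g., fibroids): ICD code vs. chart reabstraction
print(validity_metrics(tp=180, fp=6, fn=10, tn=872))
```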


Subject(s)
Genital Diseases, Female; International Classification of Diseases; Humans; Female; Canada; Genital Diseases, Female/surgery; Genital Diseases, Female/diagnosis; Gynecologic Surgical Procedures; Databases, Factual
2.
Sensors (Basel); 24(12), 2024 Jun 09.
Article in English | MEDLINE | ID: mdl-38931549

ABSTRACT

This paper introduces a data architecture designed for smart advertising that prioritizes efficient data flow, performance, and robust security while guaranteeing data privacy and integrity. At the core of this study lies federated learning (FL) as the primary methodology, which preserves the authenticity and privacy of data while promptly discarding irrelevant or fraudulent information. Our data model employs a semi-random role assignment strategy based on a variety of criteria to efficiently collect and amalgamate data. The architecture is composed of model nodes, data nodes, and validator nodes, where the role of each node is determined by factors such as computational capability, interconnection quality, and historical performance. A key feature of the proposed system is the selective engagement of a subset of nodes for modeling and validation, optimizing resource use and minimizing data loss. The AROUND social network platform serves as a real-world case study, illustrating the efficacy of the data architecture in a practical setting. Both simulated and real implementations showcase the architecture's potential to dramatically curtail network traffic and average CPU usage while preserving the accuracy of the FL model: the system achieves over a 50% reduction in both network traffic and average CPU usage even when the user count increases twenty-fold. Click rate, user engagement, and other parameters were also evaluated, showing that these advantages do not come at the cost of advertising accuracy. These findings highlight the architecture's capacity to scale efficiently and maintain high performance in smart advertising environments, making it a valuable contribution to the evolving landscape of digital marketing and FL.
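
The role-assignment step might look something like the following sketch, which scores nodes on the criteria named in the abstract and adds randomness so that assignment stays semi-random; the weights, field names, and jitter range are invented for illustration.

```python
import random

def assign_roles(nodes, n_model=3, n_validator=2, seed=42):
    """Assign model/validator/data roles with score-biased randomness."""
    rng = random.Random(seed)
    def score(n):
        # Invented weights over the abstract's criteria: compute, link quality, history
        return 0.5 * n["compute"] + 0.3 * n["link_quality"] + 0.2 * n["history"]
    # Jitter the scores so strong nodes are likely, but not guaranteed, picks
    ranked = sorted(nodes, key=lambda n: score(n) * rng.uniform(0.8, 1.2), reverse=True)
    return {n["id"]: ("model" if i < n_model else
                      "validator" if i < n_model + n_validator else "data")
            for i, n in enumerate(ranked)}

nodes = [{"id": f"n{i}", "compute": random.random(),
          "link_quality": random.random(), "history": random.random()}
         for i in range(8)]
print(assign_roles(nodes))
```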

3.
Behav Res Methods; 56(6): 6258-6275, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38561551

ABSTRACT

The standard approach for detecting and preventing bots from doing harm online involves CAPTCHAs. However, recent AI research, including our own in this manuscript, suggests that bots can complete many common CAPTCHAs with ease. The most effective methodology for identifying potential bots involves free-response questions that require image processing and causal reasoning and that are hand-coded by human analysts. However, this approach is labor intensive, slow, and inefficient; moreover, with the advent of generative AI such as GPT and Bard, it may soon be obsolete. Here, we develop and test various automated bot-screening questions, grounded in psychological research, to serve as a proactive screen against bots. Using hand-coded free-response questions in the naturalistic setting of MTurkers recruited for a Qualtrics survey, we identify 18.9% of our sample as potential bots, whereas Google's reCAPTCHA V3 identified only 1.7%. We then examine the performance of these potential bots on our novel bot-screeners, each of which has different strengths and weaknesses but all of which outperform CAPTCHAs.
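
For context on comparing flag rates like the 18.9% versus 1.7% reported above, a minimal sketch: Wilson 95% intervals around each proportion, assuming a hypothetical sample of 1000 respondents (the true denominator is not given here).

```python
import math

def wilson_ci(flagged, n, z=1.96):
    """95% Wilson interval for a flagged-as-bot proportion."""
    p = flagged / n
    centre = (p + z**2 / (2 * n)) / (1 + z**2 / n)
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
    return p, centre - half, centre + half

# Hypothetical sample of 1000 respondents: free-response screen vs. reCAPTCHA V3
for name, k in [("free-response screen", 189), ("reCAPTCHA V3", 17)]:
    p, lo, hi = wilson_ci(k, 1000)
    print(f"{name}: {p:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```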


Subject(s)
Artificial Intelligence; Humans; Computer Security
4.
Environ Sci Technol; 57(46): 18058-18066, 2023 Nov 21.
Article in English | MEDLINE | ID: mdl-37582237

ABSTRACT

Machine learning (ML) techniques promise to revolutionize environmental research and management, but collecting the necessary volumes of high-quality data remains challenging. Environmental sensors are often deployed under harsh conditions, requiring labor-intensive quality assurance and quality control (QAQC) processes. The need for manual QAQC is a major impediment to the scalability of these sensor networks. However, existing techniques for automated QAQC make strong assumptions about noise profiles in the data they filter, and these assumptions do not necessarily hold for broadly deployed environmental sensors. Toward the goal of increasing the volume of high-quality environmental data, we introduce an ML-assisted QAQC methodology that is robust to data with low signal-to-noise ratios. Our approach embeds sensor measurements into a dynamical feature space and trains a binary classifier (a support vector machine) to detect deviations from expected process dynamics, indicating whether a sensor has become compromised and requires maintenance. This strategy enables the automated detection of a wide variety of nonphysical signals. We apply the methodology to three novel data sets produced by 136 low-cost environmental sensors (stream level, drinking water pH, and drinking water electroconductivity) deployed by our group across 250,000 km2 in Michigan, USA. The proposed methodology achieved accuracy scores of up to 0.97 and consistently outperformed state-of-the-art anomaly detection techniques.
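
A minimal sketch of the described approach, under stated assumptions: delay-embed a univariate sensor series into a small feature space and train a support vector machine to separate normal dynamics from nonphysical noise. The embedding dimension, the synthetic "diurnal" signal, and the fault model are invented stand-ins for the paper's data.

```python
import numpy as np
from sklearn.svm import SVC

def delay_embed(x, dim=5, tau=1):
    """Embed a univariate series into a dim-dimensional delay space."""
    rows = len(x) - (dim - 1) * tau
    return np.stack([x[i * tau : i * tau + rows] for i in range(dim)], axis=1)

# Hypothetical labeled windows: 0 = normal process dynamics, 1 = compromised sensor
rng = np.random.default_rng(0)
t = np.arange(2000)
normal = np.sin(2 * np.pi * t / 96) + 0.05 * rng.normal(size=t.size)  # diurnal-ish signal
faulty = rng.normal(size=t.size)                                      # nonphysical noise

X = np.vstack([delay_embed(normal), delay_embed(faulty)])
y = np.concatenate([np.zeros(len(X) // 2), np.ones(len(X) // 2)])

clf = SVC(kernel="rbf").fit(X, y)
print("training accuracy:", clf.score(X, y))
```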


Subject(s)
Drinking Water; Machine Learning; Algorithms; Michigan
5.
Sensors (Basel); 23(11), 2023 May 29.
Article in English | MEDLINE | ID: mdl-37299901

ABSTRACT

With the increasing application of the Internet of Things (IoT), environments such as smart factories, smart homes, and smart grids are proliferating. These environments generate large volumes of data in real time, which can serve as source data for services such as artificial intelligence, remote medical care, and finance, as well as for purposes such as electricity billing. Data access control is therefore required to grant appropriate access rights to the many data users who need such IoT data. In addition, because IoT data contain sensitive information such as personal information, privacy protection is essential. Ciphertext-policy attribute-based encryption (CP-ABE) has been utilized to address these requirements, and system structures combining blockchains with CP-ABE are being studied to prevent cloud-server bottlenecks and single points of failure as well as to support data auditing. However, these systems do not stipulate authentication and key agreement to secure the data transmission and data outsourcing processes. Accordingly, we propose a data access control and key agreement scheme using CP-ABE to ensure data security in a blockchain-based system. The proposed system also provides data nonrepudiation, data accountability, and data verification functions by utilizing blockchains. Both formal and informal security verifications are performed to demonstrate the security of the proposed system, and we compare its security, functionality, and computational and communication costs against previous systems. Furthermore, we perform cryptographic calculations to analyze the system in practical terms. As a result, the proposed protocol is more resistant to attacks such as guessing and tracing attacks than other protocols, provides mutual authentication and key agreement, and is more efficient than other protocols, making it applicable to practical IoT environments.
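
The paper's CP-ABE construction is not reproduced here, but the session-key step such a scheme secures can be illustrated with a generic ephemeral Diffie-Hellman key agreement using the Python cryptography package; treat this as a toy stand-in, with the attribute-based and blockchain machinery omitted.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each party generates an ephemeral key pair and exchanges public keys.
# In the paper's setting, identities/attributes would additionally be bound
# via CP-ABE and the blockchain; that machinery is omitted in this sketch.
device_priv = X25519PrivateKey.generate()
user_priv = X25519PrivateKey.generate()

shared_device = device_priv.exchange(user_priv.public_key())
shared_user = user_priv.exchange(device_priv.public_key())
assert shared_device == shared_user  # both sides derive the same secret

session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"iot-session").derive(shared_device)
print(session_key.hex())
```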


Subject(s)
Blockchain; Artificial Intelligence; Communication; Electricity; Internet; Computer Security
6.
Environ Monit Assess; 195(10): 1187, 2023 Sep 12.
Article in English | MEDLINE | ID: mdl-37698727

ABSTRACT

Ambient PM2.5 (particles less than 2.5 µm in diameter) is monitored in many countries, including Australia. Occasionally PM2.5 instruments report negative measurements, although in reality the ambient air can never contain a negative amount of particles. Some negative readings are caused by instrument faults or procedural errors and can simply be invalidated from air quality reporting. On other occasions, however, negative readings occur due to factors such as technological or procedural limitations. Treatment of such negative data requires consideration of measurement uncertainty, instrument noise, and the risk of significant bias in air quality reporting, yet there is very limited documentation on handling negative PM2.5 data in the literature. This paper demonstrates how a threshold is determined for controlling negative hourly PM2.5 readings in the New South Wales (NSW) air quality data system. The investigation involved a review of thresholds used in different data systems and an assessment of instrument measurement uncertainties, zero air test data, and the impacts on key reporting statistics of applying different thresholds to historical datasets. The results show that a threshold of -10.0 µg/m3 is optimal for controlling negative PM2.5 data in public reporting. This choice is consistent with the measurement uncertainty estimates and the zero air test statistics calculated for the NSW Air Quality Monitoring Network, and it is not expected to have significant impacts on key compliance reporting statistics such as data availability and annual average pollution levels. The analysis can be useful for air quality monitoring in other Australian jurisdictions and in a wider context.
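
A minimal sketch of applying the chosen threshold: readings at or above -10.0 µg/m3 are retained (small negatives reflect instrument noise around zero), anything below is invalidated. How retained negatives are then reported is a separate policy choice not settled by this snippet.

```python
NEG_THRESHOLD = -10.0  # µg/m3, the threshold identified above

def qc_pm25(hourly_readings, threshold=NEG_THRESHOLD):
    """Invalidate hourly PM2.5 readings below the negative-data threshold.

    Small negatives (threshold <= x < 0) are kept so that instrument noise
    around zero does not bias long-term averages upward.
    """
    return [x if x is not None and x >= threshold else None for x in hourly_readings]

print(qc_pm25([12.3, -2.1, -15.4, 0.0, None, 3.8]))
# -> [12.3, -2.1, None, 0.0, None, 3.8]
```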


Subject(s)
Air Pollution; Environmental Monitoring; Australia; Environmental Pollution; Particulate Matter
7.
J Sch Nurs; 10598405221130701, 2022 Oct 12.
Article in English | MEDLINE | ID: mdl-36221975

ABSTRACT

Recent trends in vaccine hesitancy have highlighted the importance of accurate school vaccination data. This study evaluated the accuracy of a pilot statewide kindergarten vaccination survey in Oklahoma. School vaccination and exemption data were collected from November 2017 to April 2018 via the Research Electronic Data Capture system. A multivariable linear regression model was used to evaluate the relationship between the proportion of students up to date for all vaccines in school-reported versus Oklahoma State Department of Health-validated data. Adjusted vaccination coverage was overestimated by 1.0% among public schools and 3.3% among private schools. These results were corroborated by a random audit of participating schools, which found school-reported vaccination data to be overestimated by 0.6% compared with students' immunization records on file. Our analysis indicates that school-reported vaccination data are sufficiently valid. Immunization record audits provide confidence in available data, which drives evidence-based decision-making.
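
A toy version of the comparison (univariate rather than the study's multivariable model, with invented per-school coverage values) might look like this:

```python
import numpy as np

# Hypothetical per-school vaccination coverage: validated (x) vs. school-reported (y)
validated = np.array([88.0, 92.5, 95.0, 97.2, 99.0, 85.4, 93.8])
reported = np.array([89.1, 93.0, 96.2, 97.9, 99.5, 86.8, 94.4])

slope, intercept = np.polyfit(validated, reported, 1)  # simple linear fit
mean_overestimate = np.mean(reported - validated)      # average reporting bias

print(f"fit: reported = {slope:.3f} * validated + {intercept:.2f}")
print(f"mean overestimation: {mean_overestimate:.1f} percentage points")
```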

8.
J Proteome Res; 20(1): 923-931, 2021 Jan 01.
Article in English | MEDLINE | ID: mdl-33016074

ABSTRACT

Host cell proteins (HCPs) are a major class of bioprocess-related impurities generated by the host organism and are generally present at low levels in purified biopharmaceutical products. Monitoring these impurities is an important critical quality attribute of monoclonal antibody (mAb) formulations, both because of the potential risk to product stability and efficacy and because of concerns about the immunogenicity of some HCPs. While overall HCP levels are usually monitored by enzyme-linked immunosorbent assay (ELISA), mass spectrometry (MS)-based approaches have emerged as powerful and promising alternatives that provide both qualitative and quantitative information. A major challenge for liquid chromatography (LC)-MS-based methods, however, is the wide dynamic range of drug products and the extreme sensitivity required to detect trace-level HCPs. In this study, we developed powerful and reproducible MS-based analytical workflows coupling optimized, efficient sample preparations, a library-free data-independent acquisition (DIA) method, and stringent validation criteria. The performance of several preparation protocols, and of DIA versus classical data-dependent acquisition (DDA), was evaluated using four commercially available drug products. Depending on the selected protocol, the user has access to different information: on the one hand, deep profiling of tens of identified HCPs; on the other, accurate and reproducible (coefficients of variation (CVs) < 12%) quantification of the major HCPs. Overall, a global HCP content of a few tens of ng/mg mAb was measured in these samples, with sensitivity reaching the sub-ng/mg mAb level. This straightforward and robust approach is thus suitable as a routine quality control for drug product analysis.
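
As a rough sketch of how a global HCP load in ng/mg mAb can be derived from label-free MS signal, the snippet below scales summed HCP intensities by their share of total intensity; real workflows calibrate against spiked protein standards, and all intensities here are invented.

```python
# Hypothetical label-free estimate of total HCP content: take the summed MS
# intensity of HCP-assigned signal as a fraction of total intensity (mAb + HCPs)
# and scale to the amount of product digested. Treat this as a simplification.
hcp_intensity = {"HCP_A": 2.1e7, "HCP_B": 8.4e6, "HCP_C": 3.2e6}
mab_intensity = 9.5e11
mab_loaded_ng = 1e6  # 1 mg of drug product digested, expressed in ng

total = mab_intensity + sum(hcp_intensity.values())
for hcp, inten in hcp_intensity.items():
    print(f"{hcp}: {inten / total * mab_loaded_ng:.1f} ng/mg mAb")
print(f"global HCP load: {sum(hcp_intensity.values()) / total * mab_loaded_ng:.1f} ng/mg mAb")
```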


Subject(s)
Antibodies, Monoclonal; Pharmaceutical Preparations; Animals; CHO Cells; Chromatography, Liquid; Cricetinae; Cricetulus; Enzyme-Linked Immunosorbent Assay; Mass Spectrometry
9.
Pharmacoepidemiol Drug Saf; 29(11): 1456-1464, 2020 Nov.
Article in English | MEDLINE | ID: mdl-32986901

ABSTRACT

PURPOSE: The Clinical Practice Research Datalink (CPRD) now provides a new medical record database, CPRD Aurum. This is the second of several studies undertaken to assess the quality of CPRD Aurum data for research. METHODS: We included patients aged 20+ with at least one lab test result of any type from a random sample of 50 000 patients in CPRD Aurum. We assessed whether diagnosis codes for type 2 diabetes, hyperlipidemia, and iron deficiency or unspecified anemia were accompanied by supporting codes, including lab results and treatments (correctness), and whether lab results, treatments, or other codes indicated a missing diagnosis record (completeness). RESULTS: Among 37 502 patients in CPRD Aurum, the correctness of type 2 diabetes, hyperlipidemia, and anemia diagnoses was high (99%, 93%, and 97%, respectively). Completeness was high only for type 2 diabetes (94%-98%); completeness for hyperlipidemia and anemia diagnoses was modest even when the presence of treatments and lab results indicated the conditions were likely present (51%-59% and 58%-70%, respectively). CONCLUSIONS: Our findings indicate that for studies of type 2 diabetes, hyperlipidemia, and iron deficiency or unspecified anemia, a diagnosis code, where present, is likely to be correct. However, a significant proportion of hyperlipidemia and anemia cases will be missed if only diagnosis codes are used to select patients with these conditions. Researchers should consider using treatments, supporting codes, and, when available, lab data to supplement diagnosis codes and enhance case capture when including these conditions in studies using CPRD Aurum.
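
The two measures can be sketched as simple proportions over per-patient flags; the field names and records below are illustrative, not CPRD Aurum data.

```python
# "Correctness": of patients with a diagnosis code, how many have supporting
# evidence (labs, treatments)? "Completeness": of patients whose labs/treatments
# indicate the condition, how many carry the diagnosis code?
patients = [
    {"dx_code": True,  "supporting": True},
    {"dx_code": True,  "supporting": True},
    {"dx_code": False, "supporting": True},   # likely missed diagnosis record
    {"dx_code": True,  "supporting": False},
]

with_dx = [p for p in patients if p["dx_code"]]
with_support = [p for p in patients if p["supporting"]]

correctness = sum(p["supporting"] for p in with_dx) / len(with_dx)
completeness = sum(p["dx_code"] for p in with_support) / len(with_support)
print(f"correctness: {correctness:.0%}, completeness: {completeness:.0%}")
```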


Subject(s)
Data Accuracy; Diabetes Mellitus, Type 2; Data Management; Databases, Factual; Diabetes Mellitus, Type 2/diagnosis; Diabetes Mellitus, Type 2/epidemiology; Humans; United Kingdom
10.
Pharmacoepidemiol Drug Saf; 29(9): 1134-1140, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32222005

ABSTRACT

PURPOSE: The Clinical Practice Research Datalink (CPRD) now provides a new medical record database, CPRD Aurum. This is the first of several studies undertaken to assess the quality and completeness of CPRD Aurum data for research. METHODS: We identified patients with a pulmonary embolism (PE) diagnosis from a random sample of 50 000 patients in CPRD Aurum and compared the diagnoses with data from Hospital Episode Statistics (HES). We calculated the proportion of PE cases recorded in CPRD Aurum that also had a PE diagnosis recorded in HES. We also evaluated completeness by identifying all PE diagnoses in HES and calculating the proportion also present in CPRD Aurum. RESULTS: The study included 781 PE patients: 580 had a PE recorded in CPRD Aurum, 632 in HES, and 431 in both. The proportion of anticoagulated PE cases in CPRD Aurum confirmed by HES was 76.8%, and the completeness of primary hospitalized PE events in HES relative to CPRD Aurum was 79.1%. In most instances there was a plausible explanation for the presence of a PE in only one of the two data sources. CONCLUSIONS: These results are reassuring and suggest that the correctness (quality, accuracy) and completeness of diagnosis information in CPRD Aurum are promising with respect to serious acute conditions that require medical attention. Evaluation of other data elements will provide additional insight into this new data resource and its utility for medical research.
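
Conceptually the agreement analysis is a set overlap keyed on patients; a toy sketch with invented patient IDs:

```python
cprd_pe = {"p01", "p02", "p03", "p05", "p08"}  # PE recorded in CPRD Aurum
hes_pe = {"p02", "p03", "p04", "p05", "p09"}   # PE recorded in HES

both = cprd_pe & hes_pe
confirmed = len(both) / len(cprd_pe)  # CPRD diagnoses confirmed by HES
complete = len(both) / len(hes_pe)    # HES diagnoses also present in CPRD

print(f"{len(cprd_pe)} CPRD, {len(hes_pe)} HES, {len(both)} in both")
print(f"confirmation: {confirmed:.1%}, completeness: {complete:.1%}")
```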


Subject(s)
Data Collection/methods; Databases, Factual/statistics & numerical data; Electronic Health Records/statistics & numerical data; Hospital Information Systems/statistics & numerical data; Pulmonary Embolism/epidemiology; Humans; Pulmonary Embolism/diagnosis; United Kingdom/epidemiology
11.
Appl Microbiol Biotechnol; 104(21): 9327-9342, 2020 Nov.
Article in English | MEDLINE | ID: mdl-32960293

ABSTRACT

Temporal regulation of global gene expression in the caeca of chickens infected with Salmonella Typhimurium has not been investigated previously. In this study, we performed transcriptome analysis of the caeca of Salmonella Typhimurium-challenged chicks to understand the temporal regulation of the mucosal immune system. Salmonella infection activated the caecal immune system through the upregulation of differentially expressed genes (DEGs; false discovery rate (FDR) < 0.05; log2 fold change > 1) involved in biological pathways such as the Toll-like receptor signaling pathway, Salmonella infection, cytokine-cytokine receptor interaction, phagosome, apoptosis, and the intestinal immune network for IgA production. The activation of pathways such as the RIG-I-like receptor signaling pathway, the ErbB signaling pathway, and cellular senescence showed a time-dependent host immune response. A 49% increase in DEGs on day 7 compared with day 3 post-infection (p.i.) suggested a time-dependent role for multiple genes, including AvBD1, AvBD2, AvBD7, IL2, IL10, IL21, SIVA1, CD5, CD14, and GPR142, in immune regulation. Nested network analysis of the individual biological pathways showed that IL6 played a significant role in immune regulation by activating pathways including the Toll-like receptor signaling pathway, Salmonella infection, the intestinal immune network for IgA production, and the C-type lectin receptor signaling pathway. The downregulated DEGs (FDR < 0.05; log2 fold change < -1) showed that Salmonella challenge affected pathways such as tryptophan metabolism, retinol metabolism, folate biosynthesis, and pentose and glucuronate interconversions, suggesting disruption of cellular mechanisms involved in nutrient synthesis, absorption, and metabolism. Overall, the immune response was temporally regulated through the activation of the Toll-like receptor signaling pathway, cytokine-cytokine receptor interactions, and Salmonella infection pathways, with IL6 playing a significant role in modulating the caecal immune response to Salmonella Typhimurium. KEY POINTS: • The immune response to Salmonella Typhimurium challenge was temporally regulated in the caeca of chickens. • Many newly identified genes were shown to be involved in the activation of the immune system. • Toll-like receptors and interleukins played a key role in immune system regulation.
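
Applying the stated DEG thresholds (FDR < 0.05, |log2 fold change| > 1) is a simple filter; the table below reuses a few gene names from the abstract with invented statistics, plus GAPDH as a non-significant control.

```python
import pandas as pd

deg = pd.DataFrame({
    "gene": ["AvBD1", "IL2", "IL10", "SIVA1", "GAPDH"],
    "log2fc": [2.4, 1.7, 1.2, -1.5, 0.1],
    "fdr": [0.001, 0.01, 0.04, 0.02, 0.90],
})

# DEG calls at the thresholds used above
up = deg[(deg.fdr < 0.05) & (deg.log2fc > 1)]
down = deg[(deg.fdr < 0.05) & (deg.log2fc < -1)]
print("upregulated:", list(up.gene), "| downregulated:", list(down.gene))
```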


Subject(s)
Chickens; Salmonella typhimurium; Animals; Cecum; Gene Expression Profiling; Immunity, Mucosal; Salmonella typhimurium/genetics; Transcriptome
12.
Sensors (Basel); 20(5), 2020 Feb 29.
Article in English | MEDLINE | ID: mdl-32121462

ABSTRACT

In the present study, we assessed for the first time the performance of our custom-designed low-cost particulate matter (PM) monitoring devices (Atmos) in measuring PM10 concentrations. We examined ambient PM10 levels during an intensive measurement campaign at two sites in the Delhi National Capital Region (NCR), India, validating the un-calibrated Atmos for measuring ambient PM10 concentrations at highly polluted monitoring sites. PM10 concentrations from Atmos, which contains a laser-scattering-based Plantower PM sensor, were comparable with those measured by research-grade scanning mobility particle sizers (SMPS) combined with optical particle sizers (OPS) and aerodynamic particle sizers (APS). The un-calibrated sensors often provided accurate PM10 measurements, particularly in capturing real-time hourly concentration variations. Quantile-quantile (QQ) plots for data collected during the deployment period showed positively skewed PM10 datasets. Strong Spearman rank-order correlations (rs = 0.64-0.83) between the studied instruments indicated the utility of low-cost Plantower PM sensors for measuring PM10 in real-world conditions. Additionally, heat maps of weekly datasets demonstrated high R2 values, establishing the efficacy of the PM sensor for PM10 measurement in highly polluted environments.
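
A minimal sketch of the rank-correlation check, with synthetic sensor and reference series standing in for the Atmos and SMPS+OPS/APS data (the lognormal reference mimics the positively skewed distributions noted above):

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical hourly PM10 series: low-cost sensor vs. reference-grade instrument
rng = np.random.default_rng(1)
reference = rng.lognormal(mean=5.0, sigma=0.5, size=200)  # µg/m3, positively skewed
sensor = reference * rng.normal(1.1, 0.15, size=200)      # biased, noisy low-cost readings

rs, p = spearmanr(sensor, reference)
print(f"Spearman rank correlation: rs = {rs:.2f} (p = {p:.1e})")
```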

13.
Ecol Modell; 436: 109288, 2020 Nov 15.
Article in English | MEDLINE | ID: mdl-32982015

ABSTRACT

In this letter, we present comments on the article "A global-scale ecological niche model to predict SARS-CoV-2 coronavirus" by Coro, published in 2020.

14.
BMC Med Res Methodol; 19(1): 241, 2019 Dec 18.
Article in English | MEDLINE | ID: mdl-31852451

ABSTRACT

AIM: Following a three-month pilot phase of recruiting patients for the newly established BFCC (Baltic Fracture Competence Centre) transnational fracture registry, a validation of data quality needed to be carried out using a standardized method. METHOD: A literature review identified the method of "adaptive monitoring" as fulfilling the registry's requirements, and it was applied. It consists of a three-step audit process: first, scoring of the overall data quality; then, source data verification of a sample whose size is relative to the scoring result; and finally, feedback to the registry on measures to improve data quality. Statistical methods for scoring data quality and visualising discrepancies between registry data and source data were developed and applied. RESULTS: Initially, the registry's data quality scored as medium. During source data verification, items missing in the registry, which had caused the medium rating, turned out to be absent from the source documents as well. A subsequent adaptation of the score evaluated the registry's data quality as good. It was suggested that variables be added to some items to improve the accuracy of the registry. DISCUSSION: The method of adaptive monitoring has previously been published only by Jacke et al., who reported a similar improvement in the scoring result following the audit process. Displaying registry data in graphs helped to find missing items and discover issues with data formats, and graphically comparing the degree of agreement between registry and source data allowed systematic faults to be discovered. CONCLUSIONS: The method of adaptive monitoring provides a substantiated guideline for systematically evaluating and monitoring a registry's data quality and is currently second to none. The resulting transparency of the registry's data quality could be helpful in annual reports, as published by most major registries. As the method has rarely been applied, further applications in established registries would be desirable.
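
A toy version of the first audit step might score data quality from per-field missingness and map the score to a verification sample size; the cut-offs and weights below are invented, not those of the BFCC audit.

```python
def quality_grade(fields):
    """fields: {name: fraction of records missing or implausible}."""
    worst = max(fields.values())
    mean = sum(fields.values()) / len(fields)
    score = 0.5 * worst + 0.5 * mean        # penalize both the worst field and the average
    if score < 0.05:
        return score, "good", 0.05          # verify 5% of records
    if score < 0.15:
        return score, "medium", 0.15        # verify 15% of records
    return score, "poor", 0.30              # verify 30% of records

print(quality_grade({"fracture_type": 0.02, "surgery_date": 0.08, "implant": 0.12}))
```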


Subject(s)
Data Accuracy; Fractures, Bone/therapy; Registries; Adult; Fractures, Bone/epidemiology; Germany/epidemiology; Humans; Outcome Assessment, Health Care; Reproducibility of Results
15.
Environ Health; 18(1): 99, 2019 Nov 21.
Article in English | MEDLINE | ID: mdl-31752881

ABSTRACT

BACKGROUND: Environmental health and exposure researchers can improve the quality and interpretation of their chemical measurement data, avoid spurious results, and improve analytical protocols for new chemicals by closely examining lab and field quality control (QC) data. Reporting QC data along with chemical measurements in biological and environmental samples allows readers to evaluate data quality and the appropriate uses of the data (e.g., for comparison to other exposure studies, association with health outcomes, or use in regulatory decision-making). However, many studies do not adequately describe or interpret QC assessments in publications, leaving readers uncertain about the level of confidence in the reported data. One potential barrier to both QC implementation and reporting is that guidance on how to integrate and interpret QC assessments is fragmented and difficult to find, with no centralized repository or summary. In addition, existing documents are typically written for regulatory scientists rather than environmental health researchers, who may have little or no experience in analytical chemistry. OBJECTIVES: We discuss approaches for implementing quality assurance/quality control (QA/QC) in environmental exposure measurement projects and describe our process for interpreting QC results and drawing conclusions about data validity. DISCUSSION: Our methods build upon existing guidance and years of practical experience collecting exposure data and analyzing it in collaboration with contract and university laboratories, as well as the Centers for Disease Control and Prevention. With real examples from our data, we demonstrate problems that would not have come to light had we not engaged with our QC data and incorporated field QC samples in our study design. Our approach focuses on descriptive analyses and data visualizations that are compatible with diverse exposure studies with sample sizes ranging from tens to hundreds of samples. Future work could incorporate additional statistically grounded methods for larger datasets with more QC samples. CONCLUSIONS: This guidance, along with example table shells, graphics, and sample R code, provides a useful set of tools for getting the best information from valuable environmental exposure datasets and enabling valid comparison and synthesis of exposure data across studies.
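
Two of the routine checks such guidance typically covers, field-blank screening and duplicate precision, can be sketched in a few lines. The paper itself provides R code; this independent Python sketch uses illustrative analyte names, thresholds, and units.

```python
def blank_flags(blanks, mdl):
    """Flag analytes whose field-blank levels exceed the method detection limit."""
    return {a: v for a, v in blanks.items() if v > mdl}

def duplicate_rpd(x1, x2):
    """Relative percent difference between field duplicates."""
    return abs(x1 - x2) / ((x1 + x2) / 2) * 100

# Hypothetical blank results (ng/mL) screened against an invented MDL of 0.5
print(blank_flags({"BPA": 0.8, "TCS": 0.1}, mdl=0.5))    # contamination in BPA blanks
print(f"duplicate RPD: {duplicate_rpd(4.2, 4.9):.0f}%")  # flag if above, say, 25%
```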


Subject(s)
Environmental Exposure/analysis; Quality Control; Research Design/statistics & numerical data; Environmental Monitoring; Humans; Research Design/standards
16.
Clin Trials; 16(1): 81-89, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30445841

ABSTRACT

BACKGROUND/AIMS: Electronic medical records are now frequently used for capturing patient-level data in clinical trials. Within the Veterans Affairs health care system, electronic medical record data have been widely used in clinical trials to assess eligibility, facilitate referrals for recruitment, and conduct follow-up and safety monitoring. Despite the potential for increased efficiency in using electronic medical records to capture safety data via a centralized algorithm, it is important to evaluate the integrity and accuracy of such data. To this end, this investigation assesses data collection, for both general and study-specific safety endpoints, by comparing electronic medical record-based safety monitoring with safety data collected during the course of the Veterans Affairs Nephropathy in Diabetes (VA NEPHRON-D) clinical trial. METHODS: The VA NEPHRON-D study was a multicenter, double-blind, randomized clinical trial designed to compare the effect of combination therapy (losartan plus lisinopril) versus monotherapy (losartan) on the progression of kidney disease in individuals with diabetes and proteinuria. The trial's safety outcomes included serious adverse events, hyperkalemia, and acute kidney injury. A subset of the participants (~62%, n = 895) enrolled in the trial's long-term follow-up sub-study and consented to electronic medical record data collection. We applied an automated algorithm to search and capture safety data using the VA Corporate Data Warehouse, which houses electronic medical record data. Using safety data reported during the trial as the gold standard, we evaluated the sensitivity and precision of electronic medical record-based safety data and related treatment effects. RESULTS: The sensitivity of electronic medical record-based safety data for hospitalizations was 65.3% without non-VA hospitalization events and 92.3% with non-VA hospitalization events included. Sensitivity was only 54.3% for acute kidney injury and 87.3% for hyperkalemia. The precision of electronic medical record-based safety data was 89.4%, 38%, and 63.2% for hospitalization, acute kidney injury, and hyperkalemia, respectively. Relative treatment differences under the study and electronic medical record settings were 15% and 3% for hospitalization, 123% and 29% for acute kidney injury, and 238% and 140% for hyperkalemia, respectively. CONCLUSION: The accuracy of automated electronic medical record safety data depends on the events of interest. Identification of all-cause hospitalizations would be reliable if search methods could, in addition to VA hospitalizations, also capture non-VA hospitalizations. However, hospitalization is different from a cause-specific serious adverse event, which could be more sensitive to treatment effects. In addition, some study-specific safety events were not easily identified using the electronic medical records, limiting the effectiveness of the automated central database search for safety monitoring. Hence, this data capture approach should be carefully considered when implementing endpoint data collection in future pragmatic trials.
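
Event-level sensitivity and precision reduce to set overlap between adjudicated and algorithm-captured events; a sketch keyed on (patient, date) pairs, with invented events:

```python
# Trial-adjudicated safety events (gold standard) vs. events found by the
# automated EMR algorithm, keyed by (patient, date).
gold = {("p1", "2015-03-02"), ("p2", "2015-07-19"), ("p3", "2016-01-11")}
emr = {("p1", "2015-03-02"), ("p3", "2016-01-11"), ("p4", "2016-05-30")}

tp = len(gold & emr)
sensitivity = tp / len(gold)  # share of true events the algorithm captured
precision = tp / len(emr)     # share of algorithm hits that were true events

print(f"sensitivity: {sensitivity:.0%}, precision: {precision:.0%}")
```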


Subject(s)
Data Accuracy; Databases, Factual/standards; Electronic Health Records/standards; Humans; Multicenter Studies as Topic; Randomized Controlled Trials as Topic; United States; United States Department of Veterans Affairs
17.
Adv Exp Med Biol; 1188: 181-201, 2019.
Article in English | MEDLINE | ID: mdl-31820389

ABSTRACT

Reverse-phase protein array (RPPA) technology is a high-throughput antibody- and microarray-based approach for the rapid profiling of protein levels and protein posttranslational modifications in biological specimens. The technology consumes small amounts of sample, sensitively detects low-abundance proteins and posttranslational modifications, measures multiple signaling pathways in parallel, can analyze large sample numbers, and offers robust interexperimental reproducibility. These features have motivated and enabled the use of RPPA technology in various biomedical, translational, and clinical applications, including the delineation of molecular mechanisms of disease, profiling of druggable signaling pathway activation, and the search for new prognostic markers. Owing to the complexity of many of these applications, such as developing multiplex protein assays for diagnostic laboratories or integrating posttranslational modification-level data through large-scale proteogenomic approaches, robust and well-validated data are essential. An RPPA workflow has many distinct components, and numerous technical setups and analysis parameter options exist. The differences between RPPA platform setups around the world offer opportunities to assess and minimize interplatform variation. Cross-platform validation may also aid in the evaluation of robust, platform-independent protein markers of disease and response to therapy.


Subject(s)
Protein Array Analysis; Proteomics; Biomarkers/analysis; Humans; Protein Array Analysis/standards; Proteins/chemistry; Reproducibility of Results
18.
Sensors (Basel); 19(13), 2019 Jul 01.
Article in English | MEDLINE | ID: mdl-31266206

ABSTRACT

The validation of significant wave height (SWH) data measured by the Sentinel-3A/3B SAR Altimeter (SRAL) is essential for the application of these data in ocean wave monitoring, forecasting, and wave climate studies. Sentinel-3A/3B SWH data are validated by comparison with U.S. National Data Buoy Center (NDBC) buoys, using a spatial scale of 25 km and a temporal scale of 30 min, and with Jason-3 data at their crossovers, using a time difference of less than 30 min. The comparisons with NDBC buoy data show that the root-mean-square error (RMSE) of Sentinel-3A SWH is 0.30 m and that of Sentinel-3B is no more than 0.31 m; the pseudo-Low-Resolution Mode (PLRM) SWH is slightly better than that of the Synthetic Aperture Radar (SAR) mode. Statistical analysis of Sentinel-3A/3B SWH in 0.5 m wave-height bins shows that the accuracy of the data decreases with increasing wave height. Analysis of the monthly biases and RMSEs shows that Sentinel-3A SWH is stable, with a slight upward trend over time. The comparisons with Jason-3 data show that the SWH of Sentinel-3A and Jason-3 are consistent across the global ocean. Finally, piecewise calibration functions are given for the calibration of Sentinel-3A/3B SWH. The results show that Sentinel-3A/3B SWH data have high accuracy and remain stable.
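
The headline buoy-comparison statistics reduce to bias and RMSE over collocated pairs; a sketch with invented collocations (within 25 km / 30 min):

```python
import numpy as np

# Hypothetical collocated pairs: altimeter vs. buoy SWH (m)
buoy = np.array([0.8, 1.2, 2.5, 3.1, 4.0, 5.6])
altimeter = np.array([0.9, 1.1, 2.7, 3.0, 4.3, 5.4])

bias = np.mean(altimeter - buoy)
rmse = np.sqrt(np.mean((altimeter - buoy) ** 2))
print(f"bias: {bias:+.2f} m, RMSE: {rmse:.2f} m")
```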

19.
Proteomics; 18(23): e1800222, 2018 Dec.
Article in English | MEDLINE | ID: mdl-30352137

ABSTRACT

Western blotting as an orthogonal validation tool for quantitative proteomics data has rapidly become a de facto requirement for publication. In this viewpoint article, we discuss the pros and cons of western blotting as a validation approach, using examples from our own published work, and outline how best to apply it to improve the quality of published data. We also provide suggestions and guidelines for other experimental approaches that can be used to validate quantitative proteomics data in addition to, or in place of, western blotting.


Subject(s)
Proteomics/methods; Blotting, Western; Data Accuracy
20.
Environ Res; 160: 183-194, 2018 Jan.
Article in English | MEDLINE | ID: mdl-28987729

ABSTRACT

Quick validation and detection of faults in measured air quality data is a crucial step towards achieving the objectives of air quality networks. The objectives of this paper are therefore threefold: (i) to develop a modeling technique that can predict the normal behavior of air quality variables and provide an accurate reference for monitoring purposes; (ii) to develop a fault detection method that can effectively and quickly detect anomalies in measured air quality data. To this end, a new fault detection method is developed based on the combination of the generalized likelihood ratio test (GLRT) and the exponentially weighted moving average (EWMA). GLRT is a well-known statistical fault detection method that maximizes the detection probability for a given false alarm rate, and the proposed GLRT-based EWMA method detects changes in the values of air quality variables; and (iii) to develop a fault isolation and identification method that locates the fault source(s) so that appropriate corrective actions can be applied. For this purpose, a reconstruction approach based on a Midpoint-Radii Principal Component Analysis (MRPCA) model is developed to handle the types of data and models associated with air quality monitoring networks. All of the modeling, fault detection, fault isolation, and reconstruction methods developed in this paper are validated using real air quality data (particulate matter, ozone, and nitrogen and carbon oxide measurements).
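
A simplified stand-in for the GLRT-EWMA detector: smooth model residuals with an EWMA and flag excursions beyond the standard time-varying control limit. The paper's actual GLRT statistic and MRPCA model are not reproduced; the noise estimate, λ, and L below are invented.

```python
import numpy as np

def ewma_detect(residuals, lam=0.2, L=3.0):
    """EWMA chart on model residuals; flags points whose EWMA statistic
    exceeds the time-varying control limit."""
    sigma = np.std(residuals[:100])  # noise level from an assumed fault-free window
    z, flags = 0.0, []
    for t, x in enumerate(residuals, start=1):
        z = lam * x + (1 - lam) * z
        limit = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * t)))
        flags.append(abs(z) > limit)
    return np.array(flags)

rng = np.random.default_rng(0)
res = rng.normal(0, 1, 300)
res[200:] += 1.5  # simulated sensor drift in, say, an ozone residual series
print("first flagged index:", int(np.argmax(ewma_detect(res))))
```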


Subject(s)
Air Pollution; Environmental Monitoring; Models, Theoretical