Results 1 - 20 of 452
1.
PLoS Biol ; 18(9): e3000860, 2020 09.
Article in English | MEDLINE | ID: mdl-32960891

ABSTRACT

Engagement with scientific manuscripts is frequently facilitated by Twitter and other social media platforms. As such, the demographics of a paper's social media audience provide a wealth of information about how scholarly research is transmitted, consumed, and interpreted by online communities. By paying attention to public perceptions of their publications, scientists can learn whether their research is stimulating positive scholarly and public thought. They can also become aware of potentially negative patterns of interest from groups that misinterpret their work in harmful ways, either willfully or unintentionally, and devise strategies for altering their messaging to mitigate these impacts. In this study, we collected 331,696 Twitter posts referencing 1,800 highly tweeted bioRxiv preprints and leveraged topic modeling to infer the characteristics of various communities engaging with each preprint on Twitter. We agnostically learned the characteristics of these audience sectors from keywords each user's followers provide in their Twitter biographies. We estimate that 96% of the preprints analyzed are dominated by academic audiences on Twitter, suggesting that social media attention does not always correspond to greater public exposure. We further demonstrate how our audience segmentation method can quantify the level of interest from nonspecialist audience sectors such as mental health advocates, dog lovers, video game developers, vegans, bitcoin investors, conspiracy theorists, journalists, religious groups, and political constituencies. Surprisingly, we also found that 10% of the preprints analyzed have sizable (>5%) audience sectors that are associated with right-wing white nationalist communities. Although none of these preprints appear to intentionally espouse any right-wing extremist messages, cases exist in which extremist appropriation comprises more than 50% of the tweets referencing a given preprint. 
These results present unique opportunities for improving and contextualizing the public discourse surrounding scientific research.
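The audience-segmentation idea above lends itself to a toy sketch. The study inferred audience sectors agnostically via topic modeling over follower biographies; the hand-written sector keyword lists, example bios, and function names below are hypothetical stand-ins for that learned model, not the authors' code.

```python
# Hypothetical sector keyword lists (the study learned sectors
# agnostically via topic modeling; these are illustrative only).
SECTORS = {
    "academic": {"phd", "professor", "postdoc", "lab", "researcher"},
    "journalism": {"journalist", "reporter", "editor"},
    "crypto": {"bitcoin", "crypto", "blockchain"},
}

def classify_bio(bio):
    """Return the sector whose keywords overlap the bio most, or None."""
    words = set(bio.lower().split())
    best, best_overlap = None, 0
    for sector, keywords in SECTORS.items():
        overlap = len(words & keywords)
        if overlap > best_overlap:
            best, best_overlap = sector, overlap
    return best

def audience_profile(bios):
    """Fraction of classifiable follower bios per audience sector."""
    labels = [classify_bio(b) for b in bios]
    labels = [l for l in labels if l is not None]
    return {s: labels.count(s) / len(labels) for s in set(labels)}

bios = [
    "phd student in genomics",
    "professor running a neuroscience lab",
    "postdoc researcher",
    "science journalist",
    "bitcoin and crypto investor",
]
profile = audience_profile(bios)
print(profile["academic"])  # 0.6
```

A preprint whose profile is dominated by the "academic" sector would count toward the 96% figure reported above.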


Subject(s)
Databases as Topic , Publications , Science , Social Change , Social Media , Academies and Institutes/organization & administration , Academies and Institutes/standards , Academies and Institutes/statistics & numerical data , Access to Information , Databases as Topic/organization & administration , Databases as Topic/standards , Databases as Topic/statistics & numerical data , Electronic Data Processing/organization & administration , Electronic Data Processing/standards , Electronic Data Processing/statistics & numerical data , Humans , Information Literacy , Internet/organization & administration , Internet/standards , Internet/statistics & numerical data , Political Activism , Publications/classification , Publications/standards , Publications/statistics & numerical data , Publications/supply & distribution , Science/organization & administration , Science/standards , Science/statistics & numerical data , Social Media/organization & administration , Social Media/standards , Social Media/statistics & numerical data
2.
Neuroimage ; 263: 119612, 2022 11.
Article in English | MEDLINE | ID: mdl-36070839

ABSTRACT

Multimodal magnetic resonance imaging (MRI) has accelerated human neuroscience by fostering the analysis of brain microstructure, geometry, function, and connectivity across multiple scales and in living brains. The richness and complexity of multimodal neuroimaging, however, demand processing methods to integrate information across modalities and to consolidate findings across different spatial scales. Here, we present micapipe, an open processing pipeline for multimodal MRI datasets. Based on BIDS-conform input data, micapipe can generate i) structural connectomes derived from diffusion tractography, ii) functional connectomes derived from resting-state signal correlations, iii) geodesic distance matrices that quantify cortico-cortical proximity, and iv) microstructural profile covariance matrices that assess inter-regional similarity in cortical myelin proxies. The above matrices can be automatically generated across 18 established cortical parcellations (100-1000 parcels), in addition to subcortical and cerebellar parcellations, allowing researchers to easily replicate findings across different spatial scales. Results are represented on three different surface spaces (native, conte69, fsaverage5), and outputs are BIDS-conform. Processed outputs can be quality controlled at the individual and group level. micapipe was tested on several datasets and is available at https://github.com/MICA-MNI/micapipe, documented at https://micapipe.readthedocs.io/, and containerized as a BIDS App http://bids-apps.neuroimaging.io/apps/. We hope that micapipe will foster robust and integrative studies of human brain microstructure, morphology, function, and connectivity.


Subject(s)
Connectome , Electronic Data Processing , Neuroimaging , Software , Humans , Brain/diagnostic imaging , Brain/anatomy & histology , Connectome/methods , Diffusion Tensor Imaging , Magnetic Resonance Imaging/methods , Neuroimaging/methods , Software/standards , Electronic Data Processing/methods , Electronic Data Processing/standards
3.
BMC Med ; 17(1): 102, 2019 05 30.
Article in English | MEDLINE | ID: mdl-31146736

ABSTRACT

BACKGROUND: Verbal autopsy is an increasingly important methodology for assigning causes to otherwise uncertified deaths, which amount to around 50% of global mortality and cause much uncertainty for health planning. The World Health Organization sets international standards for the structure of verbal autopsy interviews and for cause categories that can reasonably be derived from verbal autopsy data. In addition, computer models are needed to efficiently process large quantities of verbal autopsy interviews to assign causes of death in a standardised manner. Here, we present the InterVA-5 model, developed to align with the WHO-2016 verbal autopsy standard. This is a harmonising model that can process input data from WHO-2016, as well as earlier WHO-2012 and Tariff-2 formats, to generate standardised cause-specific mortality profiles for diverse contexts. The software development involved building on the earlier InterVA-4 model, and the expanded knowledge base required for InterVA-5 was informed by analyses from a training dataset drawn from the Population Health Metrics Research Collaboration verbal autopsy reference dataset, as well as expert input. RESULTS: The new model was evaluated against a test dataset of 6130 cases from the Population Health Metrics Research Collaboration and 4009 cases from the Afghanistan National Mortality Survey dataset. Both of these sources contained around three quarters of the input items from the WHO-2016, WHO-2012 and Tariff-2 formats. Cause-specific mortality fractions across all applicable WHO cause categories were compared between causes assigned in participating tertiary hospitals and InterVA-5 in the test dataset, with concordance correlation coefficients of 0.92 for children and 0.86 for adults. 
The InterVA-5 model's capacity to handle different input formats was evaluated in the Afghanistan dataset, with concordance correlation coefficients of 0.97 and 0.96 between the WHO-2016 and the WHO-2012 format for children and adults respectively, and 0.92 and 0.87 between the WHO-2016 and the Tariff-2 format respectively. CONCLUSIONS: Despite the inherent difficulties of determining "truth" in assigning cause of death, these findings suggest that the InterVA-5 model performs well and succeeds in harmonising across a range of input formats. As more primary data collected under WHO-2016 become available, it is likely that InterVA-5 will undergo minor re-versioning in the light of practical experience. The model is an important resource for measuring and evaluating cause-specific mortality globally.
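The concordance correlation coefficients reported above are, presumably, Lin's coefficient computed between paired cause-specific mortality fractions; the abstract does not spell out the formula, so the sketch below (with invented CSMF values) is only an illustration of that statistic.

```python
def concordance_correlation(x, y):
    """Lin's concordance correlation coefficient between two
    paired sequences, e.g. cause-specific mortality fractions
    assigned by two coding methods."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    # Penalizes both low correlation and location/scale shifts.
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Illustrative CSMFs from two coding methods (hypothetical values).
ref = [0.30, 0.25, 0.20, 0.15, 0.10]
alt = [0.28, 0.27, 0.18, 0.17, 0.10]
ccc = concordance_correlation(ref, alt)
print(round(ccc, 3))  # → 0.966
```

Perfect agreement yields a coefficient of exactly 1.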


Subject(s)
Autopsy/methods , Computer Simulation , Electronic Data Processing , Interviews as Topic , Systems Integration , Adult , Afghanistan/epidemiology , Autopsy/standards , Cause of Death , Child , Computer Simulation/standards , Datasets as Topic , Electronic Data Processing/methods , Electronic Data Processing/standards , Female , Humans , Interviews as Topic/methods , Interviews as Topic/standards , Male , Population Health , Quality Indicators, Health Care , Software , Tertiary Care Centers , Uncertainty , Verbal Behavior , World Health Organization
4.
Transfusion ; 59(12): 3776-3782, 2019 12.
Article in English | MEDLINE | ID: mdl-31565803

ABSTRACT

Traceability is essential to any quality program for medical products of human origin (MPHO). Standardized terminology, coding, and labeling systems that include key elements for traceability support electronically readable information on product labels and improve the accuracy and efficiency of data collection. ISBT 128 is such a system. The first specification for ISBT 128 was published 25 years ago, and since that time it has become the global standard for labeling and information transfer for MPHO. Additionally, standardization of granular product description codes has supported hemovigilance and other activities that depend on aggregated data. This review looks back over the development, current status, and potential future applications of the ISBT 128 Standard.


Subject(s)
Electronic Data Processing/methods , Electronic Data Processing/standards , Blood Banks/standards , Blood Transfusion/standards , Drug Labeling/methods , Drug Labeling/standards , Humans , Software
5.
Comput Inform Nurs ; 36(3): 154-159, 2018 Mar.
Article in English | MEDLINE | ID: mdl-29522423

ABSTRACT

The purpose of this study was to examine nursing informatics competency and the quality of information processing among nurses in Jordan. The study was conducted in a large hospital with 380 registered nurses. The hospital introduced the electronic health record in 2010. The measures used in this study were personal and job characteristics, self-efficacy, Self-Assessment Nursing Informatics Competencies, and Health Information System Monitoring Questionnaire. The convenience sample consisted of 99 nurses who used the electronic health record for at least 3 months. The analysis showed that nine predictors explained 22% of the variance in the quality of information processing, whereas the statistically significant predictors were nursing informatics competency, clinical specialty, and years of nursing experience. There is a need for policies that advocate for every nurse to be educated in nursing informatics and the quality of information processing.
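The "22% of the variance" figure corresponds to an R² from a multiple regression of information-processing quality on the predictors. The sketch below uses synthetic data; the sample size matches the study's (99), but the predictors, coefficients, and noise level are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: predictors such as informatics competency,
# clinical specialty, and years of experience (n nurses x p predictors).
n, p = 99, 3
X = rng.normal(size=(n, p))
y = 0.4 * X[:, 0] + 0.2 * X[:, 2] + rng.normal(scale=1.0, size=n)

# Ordinary least squares with an intercept column.
Xd = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)

# R^2: share of outcome variance explained by the predictors.
resid = y - Xd @ beta
r2 = 1 - resid.var() / y.var()
print(round(r2, 2))
```

Individually significant predictors would then be judged from the coefficient standard errors, which this sketch omits.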


Subject(s)
Electronic Data Processing/standards , Nursing Informatics , Professional Competence , Adult , Electronic Health Records/statistics & numerical data , Female , Humans , Jordan , Male , Middle Aged , Nurses , Self Efficacy , Surveys and Questionnaires
6.
Comput Inform Nurs ; 36(12): 596-602, 2018 Dec.
Article in English | MEDLINE | ID: mdl-30015644

ABSTRACT

When a medication administration error occurs, patient safety is endangered. Barcode medication administration systems have been implemented to reduce medication errors. The purpose of this study was to evaluate the outcomes of barcode medication administration system usage. A survey based on DeLone and McLean's model of information systems success was utilized. The questionnaire, composed of 27 items, explored system quality, information quality, service quality, user satisfaction, and usage benefits. It was completed by 232 nurses. User satisfaction received the highest average score, and quality of information was the factor most strongly related to this result (r = 0.83, P < .01). Medication errors occurring before and after barcode medication administration use were collected, and the reasons for errors related to work process were explored. Medication errors decreased from 405 at preimplementation to 314 at postimplementation (t = 77.62, P < .001). The main reason for work-process-related medication errors was "not following the standard procedure," followed by "other factors." While technology is deployed to support individual practice, organizational elements also remain important to technology adoption.


Subject(s)
Electronic Data Processing/standards , Medication Errors/statistics & numerical data , Medication Systems, Hospital/organization & administration , Adult , Delivery of Health Care , Female , Humans , Male , Medication Errors/prevention & control , Patient Safety , Quality Assurance, Health Care , Surveys and Questionnaires
7.
J Nurs Care Qual ; 33(1): 79-85, 2018.
Article in English | MEDLINE | ID: mdl-28658190

ABSTRACT

In an effort to prevent medication errors, barcode medication administration technology has been implemented in many health care organizations. An integrative review was conducted to understand the effect of barcode medication administration technology on medication errors and how the characteristics of use demonstrated by nurses contribute to medication safety. Addressing poor system use may support improved patient safety through the reduction of medication administration errors.


Subject(s)
Clinical Pharmacy Information Systems/statistics & numerical data , Electronic Data Processing/standards , Medication Errors/prevention & control , Patient Safety , Humans , Medication Systems, Hospital/organization & administration , Quality Assurance, Health Care
8.
J Nurs Care Qual ; 33(4): 341-347, 2018.
Article in English | MEDLINE | ID: mdl-29319594

ABSTRACT

Achieving optimal compliance for bar code medication administration (BCMA) in mature medication use systems is challenging due to the iterative system refinements over time. A nursing leadership initiative increased BCMA compliance, measured as a composite across all hospital units, from 95% to 98%, discovering unanticipated benefits and unintended consequences in the process. The methodology used provides valuable insight into effective strategies for BCMA optimization with applicability for other, similar quality improvement initiatives.


Subject(s)
Electronic Data Processing/standards , Hospitals, Community , Leadership , Medication Adherence , Medication Systems, Hospital/organization & administration , Nurse Clinicians , Electronic Data Processing/statistics & numerical data , Humans , Medication Errors/prevention & control , Pharmaceutical Preparations/administration & dosage , Safety Management/methods
9.
Behav Res Methods ; 50(1): 39-56, 2018 02.
Article in English | MEDLINE | ID: mdl-29340967

ABSTRACT

When automatic item generation is used to produce test items, testing the equivalence among different instances is fundamental to ensuring an accurate assessment. In the present research, this question was addressed within the knowledge space theory framework. Two ways of considering equivalence among instances are proposed: the first is deterministic and requires that all instances of an item template belong to exactly the same knowledge states; the second adds a probabilistic level to the deterministic one. The first type of equivalence can be modeled with the BLIM and a knowledge structure that assumes equally informative instances; the second can be modeled by a constrained BLIM that imposes equality constraints on the error parameters of the equivalent instances. An approach for testing the equivalence among instances, based on a series of model comparisons, is proposed. A simulation study and an empirical application show the viability of the approach.


Subject(s)
Electronic Data Processing/standards , Knowledge Bases , Models, Statistical , Probability , Evaluation Studies as Topic , Humans , Research
10.
BMC Health Serv Res ; 17(1): 624, 2017 Sep 05.
Article in English | MEDLINE | ID: mdl-28870188

ABSTRACT

BACKGROUND: Hospital discharge summaries are a key communication tool ensuring continuity of care between primary and secondary care. Incomplete or untimely communication of information increases risk of hospital readmission and associated complications. The aim of this study was to evaluate whether the introduction of a new electronic discharge system (NewEDS) was associated with improvements in the completeness and timeliness of discharge information, in Nottingham University Hospitals NHS Trust, England. METHODS: A before and after longitudinal study design was used. Data were collected using the gold standard auditing tool from the Royal College of Physicians (RCP). This tool contains a checklist of 57 items grouped into seven categories, 28 of which are classified as mandatory by RCP. Percentage completeness (out of the 28 mandatory items) was considered to be the primary outcome measure. Data from 773 patients discharged directly from the acute medical unit over eight-week long time periods (four before and four after the change to the NewEDS) from August 2010 to May 2012 were extracted and evaluated. Results were summarised by effect size on completeness before and after changeover to NewEDS respectively. The primary outcome variable was represented with percentage of completeness score and a non-parametric technique was used to compare pre-NewEDS and post-NewEDS scores. RESULTS: The changeover to the NewEDS resulted in an increased completeness of discharge summaries from 60.7% to 75.0% (p < 0.001) and the proportion of summaries created under 24 h from discharge increased significantly from 78.0% to 93.0% (p < 0.001). Furthermore, five of the seven grouped checklist categories also showed significant improvements in levels of completeness (p < 0.001), although there were reduced levels of completeness for three items (p < 0.001). 
CONCLUSION: The introduction of a NewEDS was associated with a significant improvement in the completeness and timeliness of hospital discharge communication.
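The abstract says a non-parametric technique compared pre- and post-NewEDS completeness scores but does not name it; a Mann-Whitney U comparison is one common choice for such data. A sketch with hypothetical completeness percentages:

```python
def mann_whitney_u(a, b):
    """U statistic for sample a versus sample b, counted as
    pairwise wins (ties count one half). Equivalent to the
    usual rank-sum formulation."""
    return sum(1.0 if x > y else 0.5 if x == y else 0.0
               for x in a for y in b)

# Hypothetical completeness scores (% of 28 mandatory items)
# for discharge summaries before and after the NewEDS.
before = [50.0, 57.1, 60.7, 64.3, 67.9]
after = [71.4, 75.0, 78.6, 82.1, 60.7]

u = mann_whitney_u(after, before)
print(u)  # → 22.5: after-scores win most pairwise comparisons
```

The two directional U statistics always sum to the number of pairs, so a large U for the post-NewEDS sample directly indicates a shift toward higher completeness; a p-value would come from the U distribution, which this sketch omits.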


Subject(s)
Communication , Efficiency, Organizational/standards , Electronic Data Processing , Hospital Information Systems , Patient Discharge , Electronic Data Processing/standards , Electronic Data Processing/trends , Electronic Health Records , England , Hospital Information Systems/standards , Hospital Information Systems/trends , Humans , Longitudinal Studies , Patient Discharge/standards , Patient Discharge/trends , Quality Improvement , Retrospective Studies
12.
J Appl Clin Med Phys ; 17(1): 387-395, 2016 01 08.
Article in English | MEDLINE | ID: mdl-26894365

ABSTRACT

Proper quality assurance (QA) of the radiotherapy process can be time-consuming and expensive. Many QA efforts, such as data export and import, are inefficient when done by humans. Additionally, humans can be unreliable, lose attention, and fail to complete critical steps that are required for smooth operations. In our group we have sought to break down the QA tasks into separate steps and to automate those steps that are better done by software running autonomously or at the instigation of a human. A team of medical physicists and software engineers worked together to identify opportunities to streamline and automate QA. Development efforts follow a formal cycle of writing software requirements, developing software, testing and commissioning. The clinical release process is separated into clinical evaluation testing, training, and finally clinical release. We have improved six processes related to QA and safety. Steps that were previously performed by humans have been automated or streamlined to increase first-time quality, reduce time spent by humans doing low-level tasks, and expedite QA tests. Much of the gains were had by automating data transfer, implementing computer-based checking and automation of systems with an event-driven framework. These coordinated efforts by software engineers and clinical physicists have resulted in speed improvements in expediting patient-sensitive QA tests.


Subject(s)
Electronic Data Processing/standards , Neoplasms/radiotherapy , Pattern Recognition, Automated/methods , Quality Assurance, Health Care/standards , Radiotherapy Planning, Computer-Assisted/standards , Software , Humans , Radiotherapy Dosage , Radiotherapy Planning, Computer-Assisted/methods , Radiotherapy, Intensity-Modulated/methods
13.
J Nurs Adm ; 46(1): 30-7, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26641468

ABSTRACT

Bar-code medication administration (BCMA) effectiveness is contingent upon compliance with best-practice protocols. We developed a 4-phased BCMA evaluation program to evaluate the degree of integration of current evidence into BCMA policies, procedures, and practices; identify barriers to best-practice BCMA use; and modify BCMA practice in concert with changes to the practice environment. This program provides an infrastructure for frontline nurses to partner with hospital leaders to continually evaluate and improve BCMA using a systematic process.


Subject(s)
Electronic Data Processing/standards , Evidence-Based Practice/standards , Guideline Adherence/standards , Medication Errors/prevention & control , Medication Systems, Hospital/standards , Humans , Organizational Culture , United States
14.
Fed Regist ; 81(121): 40890-1, 2016 Jun 23.
Article in English | MEDLINE | ID: mdl-27373012

ABSTRACT

The Food and Drug Administration (FDA) is announcing the availability of its FDA Adverse Event Reporting System (FAERS) Regional Implementation Specifications for the International Conference on Harmonisation (ICH) E2B(R3) Specification. FDA is making this technical specifications document available to assist interested parties in electronically submitting individual case safety reports (ICSRs) (and ICSR attachments) to the Center for Drug Evaluation and Research (CDER) and the Center for Biologics Evaluation and Research (CBER). This document, entitled "FDA Regional Implementation Specifications for ICH E2B(R3) Implementation: Postmarket Submission of Individual Case Safety Reports (ICSRs) for Drugs and Biologics, Excluding Vaccines" supplements the "E2B(R3) Electronic Transmission of Individual Case Safety Reports (ICSRs) Implementation Guide--Data Elements and Message Specification" final guidance for industry and describes FDA's technical approach for receiving ICSRs, for incorporating regionally controlled terminology, and for adding region-specific data elements when reporting to FAERS.


Subject(s)
Congresses as Topic , Drug Approval , Electronic Data Processing/standards , International Cooperation , Safety , Biological Products , Humans , Investigational New Drug Application , Medical Records , Prescription Drugs , Product Surveillance, Postmarketing , United States , United States Food and Drug Administration
15.
BMC Bioinformatics ; 16: 118, 2015 Apr 16.
Article in English | MEDLINE | ID: mdl-25888443

ABSTRACT

BACKGROUND: Untargeted metabolomics generates a huge amount of data. Software packages for automated data processing are crucial to successfully process these data. A variety of such software packages exist, but the outcome of data processing strongly depends on algorithm parameter settings. If they are not carefully chosen, suboptimal parameter settings can easily lead to biased results. Therefore, parameter settings also require optimization. Several parameter optimization approaches have already been proposed, but a software package for parameter optimization which is free of intricate experimental labeling steps, fast and widely applicable is still missing. RESULTS: We implemented the software package IPO ('Isotopologue Parameter Optimization') which is fast and free of labeling steps, and applicable to data from different kinds of samples and data from different methods of liquid chromatography - high resolution mass spectrometry and data from different instruments. IPO optimizes XCMS peak picking parameters by using natural, stable (13)C isotopic peaks to calculate a peak picking score. Retention time correction is optimized by minimizing relative retention time differences within peak groups. Grouping parameters are optimized by maximizing the number of peak groups that show one peak from each injection of a pooled sample. The different parameter settings are achieved by design of experiments, and the resulting scores are evaluated using response surface models. IPO was tested on three different data sets, each consisting of a training set and test set. IPO resulted in an increase of reliable groups (146% - 361%), a decrease of non-reliable groups (3% - 8%) and a decrease of the retention time deviation to one third. CONCLUSIONS: IPO was successfully applied to data derived from liquid chromatography coupled to high resolution mass spectrometry from three studies with different sample types and different chromatographic methods and devices. 
We were also able to show the potential of IPO to increase the reliability of metabolomics data. The source code is implemented in R, tested on Linux and Windows and it is freely available for download at https://github.com/glibiseller/IPO . The training sets and test sets can be downloaded from https://health.joanneum.at/IPO .


Subject(s)
Algorithms , Chromatography, Liquid/methods , Electronic Data Processing/methods , Electronic Data Processing/standards , Mass Spectrometry/methods , Metabolomics/methods , Software , Animals , Carbon Radioisotopes/analysis , Heart/physiology , Humans , Lipids/analysis , Lung/metabolism , Mice , Muscles/metabolism , Programming Languages , Reproducibility of Results , Saccharomyces cerevisiae/metabolism
16.
Cytometry A ; 87(1): 86-8, 2015 Jan.
Article in English | MEDLINE | ID: mdl-25407887

ABSTRACT

Identifying homogenous sets of cell populations in flow cytometry is an important process for sorting and selecting populations of interests for further data acquisition and analysis. Many computational methods are now available to automate this process, with several algorithms partitioning cells based on high-dimensional separation versus the traditional pairwise two-dimensional visualization approach of manual gating. ISAC's classification results file format was developed to exchange the results of both manual gating and algorithmic classification approaches in a standardized way based on per event based classifications, including the potential for soft classifications expressed as the probability of an event being a member of a class. © 2014 International Society for Advancement of Cytometry.
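The per-event soft classifications described above can be pictured as a row-stochastic matrix, one row per event and one column per candidate population; the values below are illustrative only and do not reproduce the ISAC file format's actual syntax.

```python
import numpy as np

# Per-event soft classifications: one row per cytometry event,
# one column per candidate population; entries are membership
# probabilities (illustrative values, not the ISAC file syntax).
soft = np.array([
    [0.90, 0.08, 0.02],
    [0.10, 0.85, 0.05],
    [0.40, 0.35, 0.25],
])

# A manual gate is the special case of a hard, one-hot assignment.
hard = np.eye(3)[soft.argmax(axis=1)]

print(soft.sum(axis=1))  # each event's probabilities sum to 1
print(hard[2])           # third event hard-assigned to population 1
```

Keeping the probabilities (rather than only the hard labels) is what lets downstream tools compare manual gating with algorithmic partitioning on equal terms.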


Subject(s)
Electronic Data Processing/standards , Flow Cytometry/standards , Software/standards , Algorithms , Humans , Practice Guidelines as Topic
18.
Pharmacoepidemiol Drug Saf ; 24(4): 335-42, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25627986

ABSTRACT

PURPOSE: To describe methods reported in the literature to estimate the beginning or duration of pregnancy in automated health care data, and to present results of validation exercises where available. METHODS: Papers reporting methods for determining the beginning or duration of pregnancy were identified based on Pubmed searches, by consulting investigators with expertise in the field and by reviewing conference abstracts and reference lists of relevant papers. From each paper or abstract, we extracted information to characterize the study population, data sources, and estimation algorithm. We then grouped these studies into categories reflecting their general methodological approach. RESULTS: Methods were classified into 5 categories: (i) methods that assign a uniform duration for all pregnancies, (ii) methods that assign pregnancy duration based on preterm-delivery or health care related codes, or codes for other pregnancy outcomes, (iii) methods based on the timing of prenatal care, (iv) methods based on birth weight, and (v) methods that combine elements from 2 and 3. Validation studies evaluating these methods used varied approaches, with results generally reporting on the mistiming of the start of pregnancy, incorrect estimation of the duration of pregnancy, or misclassification of drug exposure during pregnancy or early pregnancy. CONCLUSIONS: In the absence of accurate information on the beginning or duration of pregnancy, several methods of varying complexity are available to estimate them. Validation studies have been performed for many of them and can serve as a guide for method selection for a particular study.
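Methods in categories (i) and (ii) above reduce to subtracting an assigned gestational duration from the recorded end of pregnancy. The sketch below is a hypothetical illustration: the outcome codes and durations are invented, whereas real algorithms rely on validated claims-code lists.

```python
from datetime import date, timedelta

# Illustrative gestational durations in days (hypothetical mapping;
# real algorithms derive these from validated claims-code lists).
DEFAULT_TERM_DAYS = 273           # method (i): uniform duration
DURATION_BY_OUTCOME = {           # method (ii): outcome-specific
    "preterm_delivery": 245,
    "term_delivery": 273,
    "spontaneous_abortion": 70,
}

def estimate_pregnancy_start(end_date, outcome=None):
    """Estimate the beginning of pregnancy by subtracting an
    assigned duration from the pregnancy end date."""
    days = DURATION_BY_OUTCOME.get(outcome, DEFAULT_TERM_DAYS)
    return end_date - timedelta(days=days)

start = estimate_pregnancy_start(date(2015, 4, 16), "preterm_delivery")
print(start)  # → 2014-08-14
```

Misclassification of drug exposure in early pregnancy, as discussed in the validation studies, follows directly from errors in this subtraction: a duration off by a few weeks shifts the estimated first trimester by the same amount.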


Subject(s)
Databases, Factual , Delivery of Health Care/methods , Electronic Data Processing/methods , Electronic Data Processing/standards , Pregnancy/statistics & numerical data , Female , Humans , Infant, Newborn , Pregnancy Outcome , Reproducibility of Results
19.
J Comput Chem ; 35(3): 260-9, 2014 Jan 30.
Article in English | MEDLINE | ID: mdl-24258850

ABSTRACT

Molecular dynamics simulation is an important application in theoretical chemistry, and with the large high-performance computing resources available today the programs also generate huge amounts of output data. In particular in the life sciences, with complex biomolecules such as proteins, simulation projects regularly deal with several terabytes of data. Apart from the need for more cost-efficient storage, it is increasingly important to be able to archive data, secure its integrity against disk or file transfer errors, provide rapid access, and facilitate the exchange of data through open interfaces. A whole range of different formats is already in use, but few if any of them (including our previous ones) fulfill all these goals. To address these shortcomings, we present "Trajectory Next Generation" (TNG), a flexible but highly optimized and efficient file format designed with interoperability in mind. TNG provides both state-of-the-art multiframe compression and a container framework that will make it possible to extend it with new compression algorithms without modifications in programs using it. TNG will be the new file format in the next major release of the GROMACS package, but it has been implemented as a separate library and API with liberal licensing to enable wide adoption in both academic and commercial codes.


Subject(s)
Electronic Data Processing/standards , Molecular Dynamics Simulation , Software , 2-Naphthylamine/analogs & derivatives , 2-Naphthylamine/chemistry , Acetamides/chemistry , Algorithms , Animals , Ethanol/chemistry , Kv1.2 Potassium Channel/chemistry , Ribonucleases/chemistry , Zebrafish , Zebrafish Proteins/chemistry
20.
BMC Med ; 12: 22, 2014 Feb 04.
Article in English | MEDLINE | ID: mdl-24495312

ABSTRACT

BACKGROUND: Computer-coded verbal autopsy (CCVA) methods to assign causes of death (CODs) for medically unattended deaths have been proposed as an alternative to physician-certified verbal autopsy (PCVA). We conducted a systematic review of 19 published comparison studies (from 684 evaluated), most of which used hospital-based deaths as the reference standard. We assessed the performance of PCVA and five CCVA methods: Random Forest, Tariff, InterVA, King-Lu, and Simplified Symptom Pattern. METHODS: The reviewed studies assessed methods' performance through various metrics: sensitivity, specificity, and chance-corrected concordance for coding individual deaths, and cause-specific mortality fraction (CSMF) error and CSMF accuracy at the population level. These results were summarized into means, medians, and ranges. RESULTS: The 19 studies ranged from 200 to 50,000 deaths per study (total over 116,000 deaths). Sensitivity of PCVA versus hospital-assigned COD varied widely by cause, but showed consistently high specificity. PCVA and CCVA methods had an overall chance-corrected concordance of about 50% or lower, across all ages and CODs. At the population level, the relative CSMF error between PCVA and hospital-based deaths indicated good performance for most CODs. Random Forest had the best CSMF accuracy performance, followed closely by PCVA and the other CCVA methods, but with lower values for InterVA-3. CONCLUSIONS: There is no single best-performing coding method for verbal autopsies across various studies and metrics. There is little current justification for CCVA to replace PCVA, particularly as physician diagnosis remains the worldwide standard for clinical diagnosis on live patients. Further assessments and large accessible datasets on which to train and test combinations of methods are required, particularly for rural deaths without medical attention.
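Of the population-level metrics above, CSMF accuracy has a standard closed form: one minus the total absolute error between true and predicted cause-specific mortality fractions, divided by the maximum error attainable given the true fractions. A minimal sketch:

```python
def csmf_accuracy(true_csmf, pred_csmf):
    """CSMF accuracy: 1 minus the total absolute CSMF error,
    normalised by its maximum possible value for the given
    true fractions (so the score lies in [0, 1])."""
    err = sum(abs(t - p) for t, p in zip(true_csmf, pred_csmf))
    return 1 - err / (2 * (1 - min(true_csmf)))

# Hypothetical mortality fractions over three cause categories.
true_fracs = [0.5, 0.3, 0.2]
pred_fracs = [0.4, 0.4, 0.2]
print(round(csmf_accuracy(true_fracs, pred_fracs), 3))  # → 0.875
```

A method that reproduces the true fractions exactly scores 1, which is the sense in which Random Forest's CSMF accuracy could be ranked against PCVA and the other CCVA methods.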


Subject(s)
Autopsy/standards , Cause of Death , Electronic Data Processing/standards , Hospitalization , Physician's Role , Poverty , Autopsy/methods , Electronic Data Processing/methods , Humans , Randomized Controlled Trials as Topic