Results 1 - 19 of 19
1.
Pathol Int ; 66(2): 63-74, 2016 Feb.
Article in English | MEDLINE | ID: mdl-26778830

ABSTRACT

Pathologists are required to integrate data from multiple sources when making a diagnosis. Furthermore, whole slide imaging (WSI) and next-generation sequencing will escalate data size and complexity. Development of well-designed databases that allow efficient navigation between multiple data types is necessary for both clinical and research purposes. We developed and evaluated an interactive, web-based database that integrates clinical, histologic, immunohistochemical and genetic information to aid in pathologic diagnosis and interpretation, using nine lung adenocarcinoma cases. To minimize sectioning artifacts, representative blocks were serially sectioned using automated tissue sectioning (Kurabo Industries, Osaka, Japan), and selected slides were stained by multiple techniques (hematoxylin and eosin [H&E], immunohistochemistry [IHC] or fluorescence in situ hybridization [FISH]). Slides were digitized by WSI scanners. An interactive relational database was designed based on a list of proposed fields covering a variety of clinical, pathologic and molecular parameters. By focusing on the three main tasks of (1) efficient management of textual information, (2) effective viewing of all varieties of stained whole slide images, and (3) assistance in evaluating WSI with computer-aided diagnosis, this database prototype shows great promise for multi-modality research and diagnosis.
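The multi-modality navigation described above rests on an ordinary relational design: one case row linked to many slide rows and genetic-finding rows. A minimal sketch in SQLite follows; the paper does not publish its schema, so every table, column and sample value here is hypothetical.

```python
import sqlite3

# Hypothetical schema illustrating case <-> slide <-> genetic linkage;
# not the schema from the paper.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE cases (
    case_id   INTEGER PRIMARY KEY,
    diagnosis TEXT,
    age       INTEGER,
    sex       TEXT
);
CREATE TABLE slides (
    slide_id  INTEGER PRIMARY KEY,
    case_id   INTEGER REFERENCES cases(case_id),
    stain     TEXT,   -- e.g. 'H&E', 'IHC', 'FISH'
    wsi_path  TEXT    -- location of the scanned whole slide image
);
CREATE TABLE genetic_findings (
    case_id   INTEGER REFERENCES cases(case_id),
    gene      TEXT,
    variant   TEXT
);
""")
conn.execute("INSERT INTO cases VALUES (1, 'lung adenocarcinoma', 67, 'F')")
conn.execute("INSERT INTO slides VALUES (1, 1, 'H&E', '/wsi/case1_he.svs')")
conn.execute("INSERT INTO genetic_findings VALUES (1, 'EGFR', 'L858R')")

# Navigate from a genetic finding back to all stained slides for that case.
rows = conn.execute("""
    SELECT s.stain, s.wsi_path
    FROM genetic_findings g
    JOIN slides s ON s.case_id = g.case_id
    WHERE g.gene = 'EGFR'
""").fetchall()
print(rows)
```

The point of the join is the "efficient navigation between multiple data types" the abstract names: any modality can serve as the entry point for retrieving the others.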


Subject(s)
Adenocarcinoma/pathology , Databases, Factual , Lung Neoplasms/pathology , Pathology, Clinical , Adenocarcinoma/genetics , Aged , Aged, 80 and over , Database Management Systems , Female , High-Throughput Nucleotide Sequencing , Humans , Image Processing, Computer-Assisted , Immunohistochemistry , In Situ Hybridization, Fluorescence , Japan , Lung Neoplasms/genetics , Male , Middle Aged , Retrospective Studies , Sequence Analysis, DNA
2.
iScience ; 26(7): 107039, 2023 Jul 21.
Article in English | MEDLINE | ID: mdl-37416460

ABSTRACT

Face recognition is widely used for security and access control. Its performance is limited when working with highly pigmented skin tones due to training bias caused by the under-representation of darker-skinned individuals in existing datasets, and by the fact that darker skin absorbs more light and therefore reflects less discernible detail in the visible spectrum. To improve performance, this work incorporated the infrared (IR) spectrum, which is perceived by electronic sensors. We augmented existing datasets with images of highly pigmented individuals captured using the visible, IR, and full spectra, and fine-tuned existing face recognition systems to compare the performance of these three spectra. We found a marked improvement in accuracy and in AUC values of the receiver operating characteristic (ROC) curves when including the IR spectrum, increasing performance from 97.5% to 99.0% for highly pigmented faces. Different facial orientations and narrow cropping also improved performance, and the nose region was the most important feature for recognition.

3.
Dialogues Health ; 1: 100008, 2022 Dec.
Article in English | MEDLINE | ID: mdl-38515917

ABSTRACT

Background: The Siddha system of medicine is one of the oldest traditional systems of medicine in India, and its entire literature is in the Tamil language in the form of poems (padal in Tamil). Although the Siddha poems are available in the public domain, they remain unknown to the rest of the world because researchers working in other languages cannot understand their contents; a language barrier exists. Hence, there is a need for a system that extracts structured information from these texts to facilitate searching, comparison, analysis and implementation. Objective: This study aimed to create a comprehensive digital database system that systematically stores information from classical Siddha poems and to develop a web portal that facilitates information retrieval for comparative and logical analysis of Siddha content. Methods: We developed an expert system for Siddha (eSS) that collects and annotates classical Siddha texts and visualizes patterns in Siddha medical prescriptions (Siddha formulations), which can then be explored using modern techniques such as machine learning and artificial intelligence. eSS has three aspects: (1) extracting data from Siddha classical texts, (2) defining the annotation method, and (3) visualizing patterns in the medical prescriptions based on multiple factors described in the Siddha system of medicine. Data from three books were extracted, annotated and integrated into the eSS database. As a pilot, the annotations were used to analyze patterns in the drug prescriptions. Results: Overall, 110 medicinal preparations from 2 Siddhars (Agathiyar and Theran) were extracted and annotated. The generated annotations were indexed into the data repository created in eSS. The system can compare and visualize individual and multiple prescriptions to generate hypotheses for Siddha practitioners and researchers.
Conclusions: We propose an eSS framework that uses the standard Siddha terminologies created by the WHO to provide a standard expert system for Siddha. This proof-of-concept work demonstrated that the database can effectively process and visualize data from Siddha formulations, which can help students and researchers from Siddha and other fields expand their research on herbal medicines.

4.
Int J Med Inform ; 162: 104739, 2022 Mar 16.
Article in English | MEDLINE | ID: mdl-35325663

ABSTRACT

BACKGROUND: The national increase in opioid use and misuse has become a public health crisis in the U.S. To tackle this crisis, systematic evaluation and monitoring of opioid prescribing patterns is necessary. Thus, opioid prescriptions from electronic health records (EHRs) must be standardized to morphine milligram equivalents (MME) to facilitate monitoring and surveillance. While most studies report MMEs to describe opioid prescribing patterns, there is a lack of transparency regarding their data pre-processing and conversion processes for replication or comparison purposes. METHODS: In this work, we developed Opioid2MME, a SQL-based open-source framework, to convert opioid prescriptions to MMEs using EHR prescription data. The MME conversions were validated internally with F-measures through manual chart review and compared with two existing tools, MedEx and MedXN, and the framework was tested in an external academic EHR system. RESULTS: We identified 232,913 prescriptions for 49,060 unique patients in the EHRs, 2008-2019. We manually annotated a sample of prescriptions to assess the performance of the framework. The internal evaluation of medication information extraction achieved F-measures from 0.98 to 1.00 for each piece of extracted information, outperforming MedEx and MedXN (F-scores of 0.98 and 0.94, respectively). MME values in the internal EHR system achieved an F-measure of 0.97, with 3% of the data identified as outliers and 7% as missing values. MME conversion in the external EHR system showed 78.3% agreement with the MME values obtained at the development site. CONCLUSIONS: The results demonstrated that the framework is replicable and capable of converting opioid prescriptions to MMEs across different medical institutions. In summary, this work sets the groundwork for the systematic evaluation and monitoring of opioid prescribing patterns across healthcare systems.
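The conversion such frameworks standardize is the usual daily-MME formula: strength per unit × (units dispensed / days' supply) × drug-specific conversion factor. A hedged sketch follows; the function and factor table are illustrative rather than Opioid2MME's actual code, though the factors shown match the published CDC values for these opioids.

```python
# CDC-published MME conversion factors for a few common opioids.
CONVERSION_FACTOR = {
    "morphine": 1.0,
    "oxycodone": 1.5,
    "hydrocodone": 1.0,
    "codeine": 0.15,
}

def daily_mme(drug, strength_mg, quantity, days_supply):
    """MME/day = strength per unit x (units dispensed / days supply) x factor."""
    return strength_mg * (quantity / days_supply) * CONVERSION_FACTOR[drug]

# 5 mg oxycodone, 60 tablets over 30 days: 10 mg/day x 1.5 = 15 MME/day.
print(daily_mme("oxycodone", 5, 60, 30))  # 15.0
```

In an EHR pipeline the hard part is upstream of this arithmetic: parsing free-text prescription signatures into the strength, quantity and days'-supply fields, which is what the F-measure evaluation above measures.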

5.
NanoImpact ; 21: 100288, 2021 01.
Article in English | MEDLINE | ID: mdl-35559777

ABSTRACT

Engineered nanomaterials (ENMs) are intentionally designed and produced to revolutionize manufacturing sectors such as electronic goods, paints, tires, clothing, cosmetic products, and biomedicine. With the spread of ENMs in our daily lives, scientific research has generated a huge amount of data on their potential impacts on human and environmental health. To date, these data have been gathered in databases mainly focused on the (eco)toxicity of, and occupational exposure to, ENMs. These databases are therefore not suitable for building well-informed environmental exposure scenarios covering the life cycle of ENMs. In this paper, we report the construction of one of the first centralized mesocosm database management systems for environmental nanosafety (called MESOCOSM), containing experimental data collected from mesocosm experiments suited to understanding and quantifying both environmental hazard and exposure. The database, publicly available at https://aliayadi.github.io/MESOCOSM-database/, contains 5200 entities covering tens of unique experiments investigating Ag-, CeO2-, CuO- and TiO2-based ENMs as well as nano-enabled products. These entities are divided into different groups, i.e. physicochemical properties of ENMs, environmental, exposure and hazard endpoints, and other general information about the mesocosm testing, resulting in more than forty parameters in the database. The MESOCOSM database is equipped with a powerful application, consisting of a graphical user interface (GUI), that allows users to manage and search data using complex queries without relying on programmers. MESOCOSM aims to predict and explain the behavior and fate of ENMs in different ecosystems as well as their potential impacts on the environment at different stages of the nanoproduct lifecycle.
MESOCOSM is expected to benefit the nanosafety community by providing a continuous source of critical information and additional characterization factors for predicting ENMs' interactions with the environment and their risks.


Subject(s)
Ecosystem , Nanostructures , Database Management Systems , Environmental Exposure , Humans , Nanostructures/adverse effects , Paint
6.
JMIR Med Inform ; 8(10): e19267, 2020 Oct 27.
Article in English | MEDLINE | ID: mdl-33107829

ABSTRACT

BACKGROUND: To help reduce expenses, shorten timelines, and improve the quality of final deliverables, the Veterans Health Administration (VA) and other health care systems promote sharing of expertise among informatics user groups. Traditional barriers to time-efficient sharing of expertise include difficulties in finding potential collaborators and availability of a mechanism to share expertise. OBJECTIVE: We aim to describe how the VA shares expertise among its informatics groups by describing a custom-built tool, the Data Object Exchange (DOEx), along with statistics on its usage. METHODS: A centrally managed web application was developed in the VA to share informatics expertise using database objects. Visitors to the site can view a catalog of objects published by other informatics user groups. Requests for subscription and publication made through the site are routed to database administrators, who then actualize the resource requests through modifications of database object permissions. RESULTS: As of April 2019, the DOEx enabled the publication of 707 database objects to 1202 VA subscribers from 758 workgroups. Overall, over 10,000 requests are made each year regarding permissions on these shared database objects, involving diverse information. Common "flavors" of shared data include disease-specific study populations (eg, patients with asthma), common data definitions (eg, hemoglobin laboratory results), and results of complex analyses (eg, models of anticipated resource utilization). Shared database objects also enable construction of community-built data pipelines. CONCLUSIONS: To increase the efficiency of informatics user groups, a method was developed to facilitate intraorganizational collaboration by managed data sharing. The advantages of this system include (1) reduced duplication of work (thereby reducing expenses and shortening timelines) and (2) higher quality of work based on simplifying the adoption of specialized knowledge among groups.
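The publish/subscribe workflow described above amounts to catalog bookkeeping in front of database permissions. DOEx's internals are not published, so the tables and group names in this sketch are invented for illustration; in a production system the approval step would be actualized by a DBA issuing a GRANT on the underlying object.

```python
import sqlite3

# Hypothetical catalog tracking published objects and subscription requests.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE published_objects (
    object_name TEXT PRIMARY KEY,
    owner_group TEXT
);
CREATE TABLE subscriptions (
    object_name TEXT REFERENCES published_objects(object_name),
    subscriber_group TEXT,
    status TEXT DEFAULT 'pending'  -- a DBA approves, then permissions change
);
""")
db.execute(
    "INSERT INTO published_objects VALUES ('asthma_cohort', 'pulmonology_wg')")
db.execute(
    "INSERT INTO subscriptions (object_name, subscriber_group) "
    "VALUES ('asthma_cohort', 'pharmacy_wg')")

# A DBA approves the request; in a real RDBMS this would trigger something
# like: GRANT SELECT ON asthma_cohort TO pharmacy_wg;
db.execute(
    "UPDATE subscriptions SET status = 'approved' "
    "WHERE object_name = 'asthma_cohort'")
status = db.execute("SELECT status FROM subscriptions").fetchone()[0]
print(status)
```

Routing every request through a catalog like this is what gives the central team the usage statistics the abstract reports (objects published, subscribers, requests per year).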

7.
Stud Health Technol Inform ; 262: 190-193, 2019 Jul 04.
Article in English | MEDLINE | ID: mdl-31349299

ABSTRACT

This article presents a semantic model of the diagnostics and treatment of patients with gastrointestinal bleeding in cases where the cause of bleeding cannot be established by means of laboratory tests, endoscopy and colonoscopy.


Subject(s)
Biological Ontologies , Colonoscopy , Gastrointestinal Hemorrhage , Gastrointestinal Hemorrhage/diagnosis , Gastrointestinal Hemorrhage/therapy , Humans , Semantics
8.
Health Inf Manag ; 48(3): 135-143, 2019 Sep.
Article in English | MEDLINE | ID: mdl-30126291

ABSTRACT

BACKGROUND: Three-quarters of non-communicable disease (NCD) mortality occurs in low- and middle-income countries. However, in most developing countries, quality and reliable data on morbidity, mortality and risk factors for NCD to predict its burden and prevalence are less well understood, and the availability of these data is limited. To better inform policymakers and improve healthcare systems in developing countries, it is also important that these factors be understood within the context of the particular country in question. OBJECTIVE: The aim of this study is to further inform practitioners in Ethiopia about the availability and status of NCD information within the Ethiopian healthcare system. METHODS: A mixed-methods research design was used, with data collected from 13 public referral hospitals in Ethiopia. In phase 1, quantitative data were collected from 312 health professionals (99 physicians; 213 nurses) using a cross-sectional survey. In phase 2, qualitative data were collected using interviews (n = 13 physician hospital managers) and one focus group (n = 6 national health bureau officers). RESULTS: Results highlighted the lack of NCD morbidity, mortality and risk factor data, of periodic evaluation of NCD data, and of standardised protocols for NCD data collection in hospitals. The study also identified similar discrepancies in the availability of NCD data and standardised protocols for NCD data collection among the regions of Ethiopia. CONCLUSION: This study highlighted important deficiencies in NCD data and standardised protocols for data collection in the Ethiopian healthcare system. These deficiencies were also observed among regions of Ethiopia, indicating the need to strengthen both the healthcare system and health information systems to improve evidence-based decision-making.
IMPLICATIONS: Identifying the status of NCD data in the Ethiopian healthcare system could assist policymakers, healthcare organisations, healthcare providers and health beneficiaries to reform and strengthen the existing healthcare system.


Subject(s)
Health Personnel/psychology , Medical Informatics/standards , Noncommunicable Diseases/prevention & control , Adult , Cross-Sectional Studies , Ethiopia , Female , Health Knowledge, Attitudes, Practice , Humans , Male , Middle Aged , Surveys and Questionnaires
9.
Infect Dis Poverty ; 7(1): 125, 2018 Nov 16.
Article in English | MEDLINE | ID: mdl-30541626

ABSTRACT

BACKGROUND: Developing and sustaining a data collection and management system (DCMS) is difficult in malaria-endemic countries because of limitations in internet bandwidth, computer resources and numbers of trained personnel. The premise of this paper is that development of a DCMS in West Africa was a critically important outcome of the West African International Centers of Excellence for Malaria Research. The purposes of this paper are to make that information available to other investigators and to encourage the linkage of DCMSs to international research and Ministry of Health data systems and repositories. METHODS: We designed and implemented a DCMS to link study sites in Mali, Senegal and The Gambia. This system was based on case report forms for epidemiologic, entomologic, clinical and laboratory aspects of plasmodial infection and malarial disease for a longitudinal cohort study and included on-site training for Principal Investigators and Data Managers. Based on this experience, we propose guidelines for the design and sustainability of DCMSs in environments with limited resources and personnel. RESULTS: From 2012 to 2017, we performed biannual thick smear surveys for plasmodial infection, mosquito collections for anopheline biting rates and sporozoite rates and year-round passive case detection for malarial disease in four longitudinal cohorts with 7708 individuals and 918 households in Senegal, The Gambia and Mali. Major challenges included the development of uniform definitions and reporting, assessment of data entry error rates, unstable and limited internet access and software and technology maintenance. Strengths included entomologic collections linked to longitudinal cohort studies, on-site data centres and a cloud-based data repository. 
CONCLUSIONS: At a time when research on diseases of poverty in low- and middle-income countries is a global priority, the resources available to ensure accurate data collection and the electronic availability of those data remain severely limited. Based on our experience, we suggest the development of a regional DCMS. This approach is more economical than separate data centres and has the potential to improve data quality by encouraging shared case definitions, data validation strategies and analytic approaches, including the molecular analysis of treatment successes and failures.


Subject(s)
Information Management/methods , Information Management/standards , Malaria/epidemiology , Animals , Culicidae/parasitology , Data Collection , Gambia , Humans , Mali , Senegal , Surveys and Questionnaires
10.
Stud Health Technol Inform ; 245: 554-558, 2017.
Article in English | MEDLINE | ID: mdl-29295156

ABSTRACT

The aim of this work is to share our experience of extracting relevant data from a hospital information system in preparation for a research study using process mining techniques. The steps performed were: research definition, mapping the normative processes, identification of the table and field names of the database, and extraction of the data. We then offer lessons learned during the data extraction phase. Any errors made in the extraction phase will propagate and have implications for subsequent analyses. Thus, it is essential to take the time needed and devote sufficient attention to detail to perform all activities with the goal of ensuring high quality of the extracted data. We hope this work will help other researchers plan and execute data extraction for process mining research studies.
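The target of this kind of extraction is an event log: one (case, activity, timestamp) row per recorded step, ordered within each case. A minimal illustration with invented table and column names (not the hospital system described in the article):

```python
import sqlite3

# Toy stand-in for a hospital information system table.
db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE his_events (patient_id INT, dept TEXT, event TEXT, ts TEXT)")
db.executemany("INSERT INTO his_events VALUES (?, ?, ?, ?)", [
    (1, "ER",   "triage",    "2017-01-01 08:00"),
    (1, "ER",   "labs",      "2017-01-01 08:40"),
    (1, "ward", "admission", "2017-01-01 11:00"),
])

# An event log is the case/activity/timestamp projection, ordered per case,
# which process mining tools consume to reconstruct the actual process.
log = db.execute("""
    SELECT patient_id AS case_id, event AS activity, ts AS timestamp
    FROM his_events
    ORDER BY patient_id, ts
""").fetchall()
print(log[0])
```

The article's warning applies exactly here: a wrong join or a misidentified timestamp column at this step silently distorts every process model mined downstream.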


Subject(s)
Data Mining , Hospital Information Systems , Databases, Factual , Humans
11.
Nat Prod Res ; 31(11): 1228-1236, 2017 Jun.
Article in English | MEDLINE | ID: mdl-27681445

ABSTRACT

Medicinal plants are the main natural pools for the discovery and development of new drugs. In the modern era of computer-aided drug design (CADD), prompt efforts are needed to design and construct useful database management systems that allow proper data storage, retrieval and management with a user-friendly interface. An inclusive database holding information about the classification and activity of medicinal plants' phytochemicals, together with a ready-to-dock library, is therefore required to assist researchers in the field of CADD. The present work was designed to merge the activities of phytochemicals from medicinal plants, their targets and literature references into a single comprehensive database named the Medicinal Plants Database for Drug Designing (MPD3). The newly designed online and downloadable MPD3 contains information about more than 5000 phytochemicals from around 1000 medicinal plants with 80 different activities, more than 900 literature references and more than 200 targets. The database should prove very useful for researchers engaged in medicinal plant research, CADD and drug discovery/development, offering ease of operation and increased efficiency. MPD3 is a comprehensive database that provides most of the information related to medicinal plants on a single platform. MPD3 is freely available at: http://bioinform.info.


Subject(s)
Databases, Factual , Plants, Medicinal/chemistry , Drug Design , Drug Discovery , Online Systems , Phytochemicals
12.
Clin Epidemiol ; 8: 719-723, 2016.
Article in English | MEDLINE | ID: mdl-27822119

ABSTRACT

AIM: The Danish Ventral Hernia Database (DVHD) provides national surveillance of current surgical practice and clinical postoperative outcomes. The intention is to reduce postoperative morbidity and hernia recurrence, evaluate new treatment strategies, and facilitate nationwide implementation of evidence-based treatment strategies. This paper describes the design and purpose of DVHD. STUDY POPULATION: Adult (≥18 years) patients with a Danish Civil Registration Number and undergoing surgery under elective or emergency conditions for ventral hernia in a Danish surgical department from 2007 and beyond. A total of 80% of all ventral hernia repairs performed in Denmark were reported to the DVHD. MAIN VARIABLES: Demographic data (age, sex, and center), detailed hernia description (eg, type, size, surgical priority), and technical aspects (open/laparoscopic and mesh related factors) related to the surgical repair are recorded. Data registration is mandatory. Data may be merged with other Danish health registries and information from patient questionnaires or clinical examinations. DESCRIPTIVE DATA: More than 37,000 operations have been registered. Data have demonstrated high agreement with patient files. The data allow technical proposals for surgical improvement with special emphasis on reduced incidences of postoperative complications, hernia recurrence, and chronic pain. CONCLUSION: DVHD is a prospective and mandatory registration system for Danish surgeons. It has collected a high number of operations and is an excellent tool for observing changes over time, including adjustment of several confounders. This national database registry has impacted on clinical practice in Denmark and led to a high number of scientific publications in recent years.

13.
Bladder Cancer ; 2(1): 65-76, 2016 Jan 07.
Article in English | MEDLINE | ID: mdl-27376128

ABSTRACT

BACKGROUND: Bladder cancer (BC) has two clearly distinct phenotypes. Non-muscle-invasive BC has a good prognosis and is treated with tumor resection and intravesical therapy, whereas muscle-invasive BC has a poor prognosis and usually requires systemic cisplatin-based chemotherapy either prior to or after radical cystectomy. Neoadjuvant chemotherapy is not often used for patients undergoing cystectomy. High-throughput analytical omics techniques are now available that allow the identification of individual molecular signatures to characterize the invasive phenotype. However, a large amount of the data produced by omics experiments is not easily accessible, since it is often scattered over many publications or stored in supplementary files. OBJECTIVE: To develop a novel open-source database, BcCluster (http://www.bccluster.org/), dedicated to the comprehensive molecular characterization of muscle-invasive bladder carcinoma. MATERIALS: A database was created containing all reported molecular features significant in invasive BC. The query interface was developed in the Ruby programming language (version 1.9.3) using the web framework Rails (version 4.1.5) (http://rubyonrails.org/). RESULTS: BcCluster contains the data from 112 published references, providing 1,559 statistically significant features relative to BC invasion. The database also holds 435 protein-protein interaction records and 92 molecular pathways significant in BC invasion. The database can be used to retrieve binding partners and pathways for any protein of interest. We illustrate this possibility using survivin, a known BC biomarker. CONCLUSIONS: BcCluster is an online database for retrieving molecular signatures relative to BC invasion. This application offers a comprehensive view of BC invasiveness at the molecular level and allows the formulation of research hypotheses relevant to this phenotype.

14.
Inform Health Soc Care ; 41(3): 286-306, 2016.
Article in English | MEDLINE | ID: mdl-25710606

ABSTRACT

OBJECTIVES: In this paper, we present ProstateAnalyzer, a new web-based medical tool for prostate cancer diagnosis. ProstateAnalyzer allows the visualization and analysis of magnetic resonance imaging (MRI) data in a single framework. METHODS: ProstateAnalyzer retrieves the data from a PACS server and displays all the associated MRI images in the same framework, usually consisting of 3D T2-weighted imaging for anatomy, dynamic contrast-enhanced MRI for perfusion, diffusion-weighted imaging in the form of an apparent diffusion coefficient (ADC) map, and MR spectroscopy. ProstateAnalyzer allows regions of interest to be annotated in one sequence and propagates them to the others. RESULTS: For a representative case, the results using the four visualization platforms are fully detailed, showing the interaction among them. The tool has been implemented as a Java-based applet application to facilitate its portability to different computer architectures and software, and to allow working remotely via the web. CONCLUSION: ProstateAnalyzer enables experts to manage prostate cancer patient datasets more efficiently. The tool allows experts to delineate annotations and displays all the information required for use in diagnosis. In line with the current European Society of Urogenital Radiology guidelines, it also includes the PI-RADS structured reporting scheme.


Subject(s)
Image Interpretation, Computer-Assisted/methods , Internet , Magnetic Resonance Imaging/methods , Prostatic Neoplasms/diagnostic imaging , Prostatic Neoplasms/pathology , Humans , Male
15.
J Infect Public Health ; 9(3): 331-8, 2016.
Article in English | MEDLINE | ID: mdl-26631432

ABSTRACT

The scope of the Human Disease Insight (HDI) database is not limited to researchers or physicians as it also provides basic information to non-professionals and creates disease awareness, thereby reducing the chances of patient suffering due to ignorance. HDI is a knowledge-based resource providing information on human diseases to both scientists and the general public. Here, our mission is to provide a comprehensive human disease database containing most of the available useful information, with extensive cross-referencing. HDI is a knowledge management system that acts as a central hub to access information about human diseases and associated drugs and genes. In addition, HDI contains well-classified bioinformatics tools with helpful descriptions. These integrated bioinformatics tools enable researchers to annotate disease-specific genes and perform protein analysis, search for biomarkers and identify potential vaccine candidates. Eventually, these tools will facilitate the analysis of disease-associated data. The HDI provides two types of search capabilities and includes provisions for downloading, uploading and searching disease/gene/drug-related information. The logistical design of the HDI allows for regular updating. The database is designed to work best with Mozilla Firefox and Google Chrome and is freely accessible at http://humandiseaseinsight.com.


Subject(s)
Databases, Factual , Disease/genetics , Drug Therapy , Pharmacology , Computational Biology , Humans
16.
Article in English | MEDLINE | ID: mdl-28077912

ABSTRACT

There are two types of high-performance graph processing engines: low- and high-level engines. Low-level engines (Galois, PowerGraph, Snap) provide optimized data structures and computation models but require users to write low-level imperative code, placing the burden of efficiency on the user. In high-level engines, users write in query languages like Datalog (SociaLite) or SQL (Grail). High-level engines are easier to use but are orders of magnitude slower than low-level graph engines. We present EmptyHeaded, a high-level engine that supports a rich Datalog-like query language and achieves performance comparable to that of low-level engines. At the core of EmptyHeaded's design is a new class of join algorithms that satisfy strong theoretical guarantees but had thus far not achieved performance comparable to that of specialized graph processing engines. To achieve high performance, EmptyHeaded introduces a new join engine architecture, including a novel query optimizer and data layouts that leverage single-instruction, multiple-data (SIMD) parallelism. With this architecture, EmptyHeaded outperforms high-level approaches by up to three orders of magnitude on graph pattern queries, PageRank, and single-source shortest paths (SSSP), and is an order of magnitude faster than many low-level baselines. We validate that EmptyHeaded competes with the best-of-breed low-level engine (Galois), achieving comparable performance on PageRank and at most 3× worse performance on SSSP.
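The flavor of the join algorithms this class of engine builds on (worst-case optimal, attribute-at-a-time joins) can be illustrated with the classic triangle query: rather than joining edge tables pairwise, each output attribute is bound by intersecting candidate sets. This is a toy sketch of the idea, not EmptyHeaded's actual implementation.

```python
from collections import defaultdict

# Edges stored with u < v so each triangle is reported exactly once.
edges = {(0, 1), (1, 2), (0, 2), (2, 3)}
adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v)

def triangles(adj):
    """Triangle query R(a,b), S(b,c), T(a,c) solved attribute-at-a-time."""
    out = set()
    for a in adj:                 # bind attribute a
        for b in adj[a]:          # bind b from a's neighbors
            # bind c: must be a neighbor of BOTH a and b (set intersection)
            for c in adj[a] & adj.get(b, set()):
                out.add((a, b, c))
    return out

print(triangles(adj))  # {(0, 1, 2)}
```

The set intersection in the inner loop is the step EmptyHeaded accelerates with SIMD-friendly data layouts; binding one attribute at a time is what yields the worst-case optimality guarantees that pairwise join plans cannot achieve on cyclic queries like this one.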

17.
J Registry Manag ; 42(3): 111-4, 2015.
Article in English | MEDLINE | ID: mdl-26779306

ABSTRACT

The National Institutes of Health Alzheimer's Disease Center consortium requires member institutions to build and maintain a longitudinally characterized cohort with a uniform standard data set. Increasingly, centers are employing electronic data capture to acquire data at annual evaluations. In this paper, the University of Kansas Alzheimer's Disease Center reports on an open-source system of electronic data collection and reporting to improve efficiency. This Center capitalizes on the speed, flexibility and accessibility of the system to enhance the evaluation process while rapidly transferring data to the National Alzheimer's Coordinating Center. This framework holds promise for other consortia that regularly use and manage large, standardized datasets.

18.
Acta Inform Med ; 23(4): 224-7, 2015 Aug.
Article in English | MEDLINE | ID: mdl-26483596

ABSTRACT

BACKGROUND AND OBJECTIVES: In intensive care units, the amount of data to be processed for patient care, the turnover of patients, and the need for reliability and for review processes all indicate the use of patient data management systems (PDMS) and electronic health records (EHR). To respond to the needs of an intensive care unit without being locked into proprietary software, we developed an EHR based on common software components. METHODS: The software was designed as a client-server architecture running on the Windows operating system and powered by the Access database system. The client software was developed using the Visual Basic interface library. The application offers users the following functions: capture of medical notes, observations and treatments; nursing charts with administration of medications; scoring systems for classification; and the ability to encode medical activities for billing. RESULTS: Since its deployment in September 2004, the EHR has been used in the care of more than five thousand patients with the expected software reliability, and it has facilitated data management and review processes. Communication with other medical software was not developed from the start and is realized through a communication engine with basic functionality. Further upgrades of the system will include multi-platform support, use of a typed language with static analysis, and a configurable interface. CONCLUSION: The developed system, based on common software components, was able to respond to the medical needs of the local ICU environment. The use of Windows for development allowed us to customize the software to the preexisting organization and contributed to the acceptability of the whole system.

19.
Bioinformation ; 11(4): 165-72, 2015.
Article in English | MEDLINE | ID: mdl-26124554

ABSTRACT

UNLABELLED: Next-generation sequencing projects have underappreciated information management tasks requiring detailed attention to specimen curation, nucleic acid sample preparation and sequence production methods required for downstream data processing, comparison, interpretation, sharing and reuse. The few existing metadata management tools for genome-based studies provide weak curatorial frameworks for experimentalists to store and manage idiosyncratic, project-specific information, typically offering no automation supporting unified naming and numbering conventions for sequencing production environments that routinely deal with hundreds, if not thousands, of samples at a time. Moreover, existing tools are not readily interfaced with bioinformatics executables (e.g., BLAST, Bowtie2, custom pipelines). Our application, the Omics Metadata Management Software (OMMS), answers both needs, empowering experimentalists to generate intuitive, consistent metadata and to perform analyses and information management tasks via an intuitive web-based interface. Several use cases with short-read sequence datasets are provided to validate installation and integrated function, and to suggest possible methodological road maps for prospective users. The provided examples highlight possible OMMS workflows for metadata curation, multistep analyses, and results management and downloading. The OMMS can be implemented as a stand-alone package for individual laboratories, or configured for web-based deployment supporting geographically dispersed projects. The OMMS was developed using an open-source software base, and is flexible, extensible, and easily installed and executed. AVAILABILITY: The OMMS can be obtained at http://omms.sandia.gov.
