Results 1 - 20 of 101
1.
J Med Syst ; 44(4): 69, 2020 Feb 17.
Article in English | MEDLINE | ID: mdl-32072322

ABSTRACT

Medical Markup Language (MML) is a standard format for exchange of healthcare data among healthcare providers. Following the last major update (version 3), we developed new modules and discussed the requirements for the next major updates. Subsequently, in 2016 we released MML version 4 and used it to obtain clinical data from healthcare providers for a nationwide electronic health records (EHR) system. In this article we provide an overview of this major update of MML version 4 and discuss its interoperability for clinical data.
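As a rough illustration of consuming such module-based XML in code, the sketch below parses a made-up MML-like document with Python's standard library; the tag names are hypothetical and do not reproduce the actual MML version 4 schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical MML-style document: real MML v4 module and tag names
# differ; this only illustrates module-oriented XML parsing.
doc = """<Mml>
  <PatientModule><name>Taro Yamada</name><birthDate>1970-01-01</birthDate></PatientModule>
  <LabModule><test name="HbA1c" value="6.1" unit="%"/></LabModule>
</Mml>"""

root = ET.fromstring(doc)
for module in root:                      # each child is one clinical module
    print(module.tag)
    for test in module.iter("test"):     # pull structured lab results
        print(" ", test.get("name"), test.get("value"), test.get("unit"))
```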


Subject(s)
Medical Record Linkage/standards , Medical Records Systems, Computerized/organization & administration , Programming Languages , Humans , Medical Records Systems, Computerized/standards
2.
Dev Genes Evol ; 229(4): 137-145, 2019 07.
Article in English | MEDLINE | ID: mdl-31119364

ABSTRACT

Computer-assisted 4D manual cell tracking has been a valuable method for understanding the spatio-temporal dynamics of embryogenesis (e.g., Stach & Anselmi BMC Biol, 13(113), 1-11 2015; Vellutini et al. BMC Biol, 15(33), 1-28 2017; Wolff et al. eLife, 7, e34410 2018) since the method was introduced in the late 1990s. SIMI® BioCell (Schnabel et al. Dev Biol, 184, 234-265 1997), a software package initially developed for analyzing data from the then-new technique of 4D microscopy, has been in use for two decades. Many laboratories around the world use SIMI BioCell for the manual tracing of cells in the embryonic development of various species to reconstruct cell genealogies with high precision. However, the software has several disadvantages: limits in handling very large data sets, virtually no maintenance over the last 10 years (it remains bound to older Windows versions), difficulty in accessing the created cell lineage data for analyses outside SIMI BioCell, and the high cost of the program. Recently, bioinformaticians, in close collaboration with biologists, have developed new lineaging tools that are freely available through the open-source image processing platform Fiji. Here we introduce a software tool that converts SIMI BioCell lineage data to a format compatible with the Fiji plugin MaMuT (Wolff et al. eLife, 7, e34410 2018). We thereby intend to keep SIMI BioCell-created cell lineage data usable in the future and, for investigators who wish to do so, to facilitate the transition from this software to a more convenient program.
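The abstract does not spell out either file layout, but a converter of this kind reduces to reading lineage records and emitting an XML tree. The sketch below uses invented field names and a simplified MaMuT-like structure; the real SIMI BioCell format and MaMuT schema are richer.

```python
import xml.etree.ElementTree as ET

# Hypothetical lineage records (cell id, parent id, t, x, y, z); the
# real SIMI BioCell layout and the MaMuT schema are more involved.
cells = [
    ("AB", None, 0, 10.0, 5.0, 2.0),
    ("ABa", "AB", 1, 9.0, 4.5, 2.1),
    ("ABp", "AB", 1, 11.0, 5.5, 1.9),
]

model = ET.Element("Model")
spots = ET.SubElement(model, "AllSpots")
edges = ET.SubElement(model, "AllEdges")
for cid, parent, t, x, y, z in cells:
    ET.SubElement(spots, "Spot", ID=cid, FRAME=str(t),
                  POSITION_X=str(x), POSITION_Y=str(y), POSITION_Z=str(z))
    if parent is not None:               # lineage becomes an edge list
        ET.SubElement(edges, "Edge", SPOT_SOURCE_ID=parent, SPOT_TARGET_ID=cid)

print(ET.tostring(model, encoding="unicode"))
```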


Subject(s)
Invertebrates/cytology , Software , Animals , Cell Lineage , Embryonic Development , Invertebrates/classification , Invertebrates/embryology , Male , Mitosis
3.
Adv Exp Med Biol ; 1137: 17-43, 2019.
Article in English | MEDLINE | ID: mdl-31183818

ABSTRACT

This chapter starts by introducing an example of how we can retrieve text, where every step is done manually. It then describes, step by step, how to automate each step of the example using shell script commands, which are introduced and explained as they become necessary. The goal is to equip the reader with a basic set of skills to retrieve data from any online database and to follow links to retrieve more information from other sources, such as the literature.
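A Python analogue of such a retrieval step, using NCBI's real E-utilities efetch endpoint (the chapter itself works with shell tools instead):

```python
from urllib.request import urlopen
from urllib.parse import urlencode

# Fetch one PubMed record as plain text via NCBI E-utilities.
params = urlencode({"db": "pubmed", "id": "29642841",
                    "rettype": "abstract", "retmode": "text"})
url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?" + params

with urlopen(url) as response:           # retrieve the record by identifier
    print(response.read().decode("utf-8"))
```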


Subject(s)
Databases, Factual , Information Storage and Retrieval , Programming Languages , Internet
4.
BMC Bioinformatics ; 19(1): 134, 2018 04 11.
Article in English | MEDLINE | ID: mdl-29642841

ABSTRACT

BACKGROUND: Systems biologists study interaction data to understand the behaviour of whole cell systems, and their environment, at a molecular level. In order to effectively achieve this goal, it is critical that researchers have high-quality interaction datasets available to them, in a standard data format, and also a suite of tools with which to analyse such data and form experimentally testable hypotheses from them. The PSI-MI XML standard interchange format was initially published in 2004, and expanded in 2007 to enable the download and interchange of molecular interaction data. PSI-MI XML2.5 was designed to describe experimental data and to date has fulfilled this basic requirement. However, new use cases have arisen that the format cannot properly accommodate. These include data abstracted from more than one publication such as allosteric/cooperative interactions and protein complexes, dynamic interactions and the need to link kinetic and affinity data to specific mutational changes. RESULTS: The Molecular Interaction workgroup of the HUPO-PSI has extended the existing, well-used XML interchange format for molecular interaction data to meet new use cases and enable the capture of new data types, following extensive community consultation. PSI-MI XML3.0 expands the capabilities of the format beyond simple experimental data, with a concomitant update of the tool suite which serves this format. The format has been implemented by key data producers such as the International Molecular Exchange (IMEx) Consortium of protein interaction databases and the Complex Portal. CONCLUSIONS: PSI-MI XML3.0 has been developed by the data producers, data users, tool developers and database providers who constitute the PSI-MI workgroup. This group now actively supports PSI-MI XML2.5 as the main interchange format for experimental data, PSI-MI XML3.0 which additionally handles more complex data types, and the simpler, tab-delimited MITAB2.5, 2.6 and 2.7 for rapid parsing and download.
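For a concrete sense of the simpler tab-delimited side of the standard, the sketch below splits one MITAB2.5-style line into its 15 columns; the record content is illustrative, not taken from an IMEx database.

```python
# One minimal MITAB2.5-style line (15 tab-separated columns); records
# distributed by IMEx databases carry fuller annotation in each column.
line = ("uniprotkb:P04637\tuniprotkb:Q00987\t-\t-\t-\t-\t"
        'psi-mi:"MI:0018"(two hybrid)\t-\tpubmed:10722742\t'
        "taxid:9606\ttaxid:9606\t"
        'psi-mi:"MI:0915"(physical association)\t'
        'psi-mi:"MI:0469"(IntAct)\tintact:EBI-123456\t-')

row = line.split("\t")                   # MITAB columns never contain tabs
id_a, id_b = row[0], row[1]              # interactor A and B identifiers
detection, pub = row[6], row[8]          # detection method, publication
print(id_a, id_b, detection, pub)
```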


Subject(s)
Protein Interaction Maps , Proteome/metabolism , Proteomics , Databases, Protein , Humans , Mutation/genetics , Systems Biology
5.
J Appl Clin Med Phys ; 19(6): 60-67, 2018 Nov.
Article in English | MEDLINE | ID: mdl-30188009

ABSTRACT

This work shows the feasibility of collecting linear accelerator beam data using just a 1-D water tank and automated couch movements, with the goal of maximizing cost effectiveness in resource-limited clinical settings. Two commissioning datasets were acquired: (a) using a standard-of-practice 3D water tank scanning system (3DS) and (b) using a novel technique that translates a commercial TG-51 compliant 1D water tank via automated couch movements (1DS). The Extensible Markup Language (XML) was used to dynamically move the linear accelerator couch position (and thus the 1D tank) during radiation delivery for the acquisition of inline, crossline, and diagonal profiles. Both the 1DS and 3DS datasets were used to generate beam models (BM1DS and BM3DS) in a commercial treatment planning system (TPS). 98.7% of 1DS measured points had a gamma value (2%/2 mm) < 1 when compared with the 3DS. Static jaw-defined field and dynamic MLC field dose distribution comparisons for the TPS beam models BM1DS and BM3DS had 3D gamma values (2%/2 mm) < 1 for all 24,900,000 data points tested and a >99.5% pass rate with gamma value (1%/1 mm) < 1. In conclusion, automated couch motions and a 1D scanning tank were used to collect commissioning beam data with accuracy comparable to traditionally acquired data using a 3D scanning system. TPS beam models generated directly from 1DS measured data were clinically equivalent to a model derived from 3DS data.
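A simplified discrete form of the gamma comparison used above can be sketched as follows; clinical gamma analysis additionally interpolates finely and applies dose thresholds, so this is only the core metric.

```python
import math

def gamma_1d(ref, evl, dta=2.0, dd=0.02):
    """1-D global gamma: ref/evl are lists of (position_mm, dose) pairs.
    A reference point passes when the minimum combined metric is <= 1."""
    d_max = max(d for _, d in ref)               # global normalisation
    out = []
    for x_r, d_r in ref:
        g = min(math.sqrt(((x_e - x_r) / dta) ** 2 +
                          ((d_e - d_r) / (dd * d_max)) ** 2)
                for x_e, d_e in evl)
        out.append(g)
    return out

# toy profiles standing in for 3DS (reference) and 1DS (evaluated) scans
ref = [(x, math.exp(-(x / 40.0) ** 2)) for x in range(-50, 51, 2)]
evl = [(x, math.exp(-((x - 0.5) / 40.0) ** 2)) for x in range(-50, 51, 2)]
gam = gamma_1d(ref, evl)
print(f"pass rate: {sum(g < 1 for g in gam) / len(gam):.1%}")
```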


Subject(s)
Data Collection/methods , Movement , Neoplasms/radiotherapy , Particle Accelerators/instrumentation , Phantoms, Imaging , Radiotherapy Planning, Computer-Assisted/methods , Radiotherapy Setup Errors/prevention & control , Automation , Humans , Models, Biological , Radiotherapy Dosage , Radiotherapy, Intensity-Modulated/methods
6.
J Comput Chem ; 38(9): 629-638, 2017 04 05.
Article in English | MEDLINE | ID: mdl-28211110

ABSTRACT

Converting a force field from one MD program to another is tedious and error-prone. Although individual conversion tools from one MD program to another exist, not every combination, nor both directions of conversion, is available for the popular MD programs Amber, Charmm, Dl-Poly, Gromacs, and Lammps. We present here a general tool for force field conversion on the basis of an XML document. The force field is converted to and from this XML structure, which facilitates the implementation of new MD programs in the conversion. Furthermore, the XML structure is human-readable and can be manipulated before continuing the conversion. As test cases, we report the conversion of topologies for acetonitrile, dimethylformamide, and 1-ethyl-3-methylimidazolium trifluoromethanesulfonate, comprising also Urey-Bradley and Ryckaert-Bellemans potentials. © 2017 Wiley Periodicals, Inc.
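A minimal sketch of the intermediate-XML idea, with an invented schema and illustrative parameter values rather than the paper's actual layout:

```python
import xml.etree.ElementTree as ET

# Hypothetical intermediate schema; the paper's actual XML layout and
# unit conventions are richer (Urey-Bradley, Ryckaert-Bellemans, ...).
ff = ET.Element("forcefield")
bonds = ET.SubElement(ff, "bonds")
ET.SubElement(bonds, "bond", type1="CT", type2="HC",
              k="2845.12", r0="0.1090")  # illustrative kJ/mol/nm^2, nm

xml_text = ET.tostring(ff, encoding="unicode")

# a writer for another MD engine walks the same tree back out
for b in ET.fromstring(xml_text).iter("bond"):
    print(b.get("type1"), b.get("type2"), float(b.get("k")), float(b.get("r0")))
```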

7.
Postepy Biochem ; 63(1): 1-7, 2017.
Article in Polish | MEDLINE | ID: mdl-28409570

ABSTRACT

The modern life sciences are becoming quantitative. Images created by microscopy are therefore objects of measurement rather than simple pictures. Saving and long-term storage of such images must not alter future measurements made on them; such images are an integral part of the experiment. This article describes what kind of data we are dealing with, how we should store them, and how we should not.


Subject(s)
Diagnostic Imaging , Image Processing, Computer-Assisted , Information Storage and Retrieval/methods , Humans , Microscopy
8.
BMC Bioinformatics ; 17(Suppl 13): 333, 2016 Oct 06.
Article in English | MEDLINE | ID: mdl-27766961

ABSTRACT

BACKGROUND: The genes that produce antibodies and the immune receptors expressed on lymphocytes are not germline encoded; rather, they are somatically generated in each developing lymphocyte by a process called V(D)J recombination, which assembles specific, independent gene segments into mature composite genes. The full set of composite genes in an individual at a single point in time is referred to as the immune repertoire. V(D)J recombination is the distinguishing feature of adaptive immunity and enables effective immune responses against an essentially infinite array of antigens. Characterization of immune repertoires is critical in both basic research and clinical contexts. Recent technological advances in repertoire profiling via high-throughput sequencing have resulted in an explosion of research activity in the field. This has been accompanied by a proliferation of software tools for analysis of repertoire sequencing data. Despite the widespread use of immune repertoire profiling and analysis software, there is currently no standardized format for output files from V(D)J analysis. Researchers utilize software such as IgBLAST and IMGT/High V-QUEST to perform V(D)J analysis and infer the structure of germline rearrangements. However, each of these software tools produces results in a different file format, and can annotate the same result using different labels. These differences make it challenging for users to perform additional downstream analyses. RESULTS: To help address this problem, we propose a standardized file format for representing V(D)J analysis results. The proposed format, VDJML, provides a common standardized format for different V(D)J analysis applications to facilitate downstream processing of the results in an application-agnostic manner. The VDJML file format specification is accompanied by a support library, written in C++ and Python, for reading and writing the VDJML file format. CONCLUSIONS: The VDJML suite will allow users to streamline their V(D)J analysis and facilitate the sharing of scientific knowledge within the community. The VDJML suite and documentation are available from https://vdjserver.org/vdjml/ . We welcome participation from the community in developing the file format standard, as well as code contributions.
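Absent the published schema details, the sketch below reads a VDJML-like snippet with hypothetical tag names simply to illustrate the intended downstream consumption; the authoritative format lives at https://vdjserver.org/vdjml/.

```python
import xml.etree.ElementTree as ET

# Hypothetical VDJML-like snippet; real VDJML tag names and attributes
# are defined by the specification, not reproduced here.
doc = """<vdjml>
  <read id="read_001">
    <segment_match type="V" name="IGHV1-69*01" score="512"/>
    <segment_match type="D" name="IGHD3-22*01" score="48"/>
    <segment_match type="J" name="IGHJ4*02" score="91"/>
  </read>
</vdjml>"""

for read in ET.fromstring(doc).iter("read"):
    calls = {m.get("type"): m.get("name") for m in read.iter("segment_match")}
    print(read.get("id"), calls)         # one V(D)J assignment per read
```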


Subject(s)
Genomics/methods , Receptors, Immunologic/genetics , Software , V(D)J Recombination , Humans , Information Dissemination
9.
J Biomed Inform ; 60: 352-62, 2016 Apr.
Article in English | MEDLINE | ID: mdl-26944737

ABSTRACT

INTRODUCTION: In order to further advance research and development on the Clinical Data Interchange Standards Consortium (CDISC) Operational Data Model (ODM) standard, the existing research must be well understood. This paper presents a methodological review of the ODM literature. Specifically, it develops a classification schema to categorize the ODM literature according to how the standard has been applied within the clinical research data lifecycle. This paper suggests areas for future research and development that address ODM's limitations and capitalize on its strengths to support new trends in clinical research informatics. METHODS: A systematic scan of the following databases was performed: (1) ABI/Inform, (2) ACM Digital, (3) AIS eLibrary, (4) Europe PubMed Central, (5) Google Scholar, (6) IEEE Xplore, (7) PubMed, and (8) ScienceDirect. A Web of Science citation analysis was also performed. The search term used on all databases was "CDISC ODM." The two primary inclusion criteria were: (1) the research must examine the use of ODM as an information system solution component, or (2) the research must critically evaluate ODM against a stated solution usage scenario. Out of 2686 articles identified, 266 were included in a title-level review, resulting in 183 articles. An abstract review followed, resulting in 121 remaining articles; after a full-text scan, 69 articles met the inclusion criteria. RESULTS: As the demand for interoperability has increased, ODM has shown remarkable flexibility and has been extended to cover a broad range of data and metadata requirements that reach well beyond ODM's original use cases. This flexibility has yielded research literature that covers a diverse array of topic areas. A classification schema reflecting the use of ODM within the clinical research data lifecycle was created to provide a categorized and consolidated view of the ODM literature. The elements of the framework include: (1) EDC (Electronic Data Capture) and EHR (Electronic Health Record) infrastructure; (2) planning; (3) data collection; (4) data tabulations and analysis; and (5) study archival. The analysis reviews the strengths and limitations of ODM as a solution component within each section of the classification schema. This paper also identifies opportunities for future ODM research and development, including improved mechanisms for semantic alignment with external terminologies, better representation of the CDISC standards used end-to-end across the clinical research data lifecycle, improved support for real-time data exchange, the use of EHRs for research, and the inclusion of a complete study design. CONCLUSIONS: ODM is being used in ways not originally anticipated, and covers a diverse array of use cases across the clinical research data lifecycle. ODM has been used as much as a study metadata standard as it has as a data exchange standard. A significant portion of the literature addresses integrating EHR and clinical research data. The simplicity and readability of ODM has likely contributed to its success and broad implementation as a data and metadata standard. Keeping the core ODM model focused on the most fundamental use cases, while using extensions to handle edge cases, has kept the standard easy for developers to learn and use.
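For readers unfamiliar with ODM, a skeletal ClinicalData extract and its traversal look roughly like this (a real ODM 1.3 file also carries Study, MetaDataVersion, and audit information):

```python
import xml.etree.ElementTree as ET

# Skeleton ODM 1.3 ClinicalData; OIDs and values are illustrative.
doc = """<ODM xmlns="http://www.cdisc.org/ns/odm/v1.3">
 <ClinicalData StudyOID="S1" MetaDataVersionOID="v1">
  <SubjectData SubjectKey="001">
   <StudyEventData StudyEventOID="SE.VISIT1">
    <FormData FormOID="F.VITALS">
     <ItemGroupData ItemGroupOID="IG.VS">
      <ItemData ItemOID="IT.SYSBP" Value="128"/>
     </ItemGroupData>
    </FormData>
   </StudyEventData>
  </SubjectData>
 </ClinicalData>
</ODM>"""

ns = {"odm": "http://www.cdisc.org/ns/odm/v1.3"}
for item in ET.fromstring(doc).findall(".//odm:ItemData", ns):
    print(item.get("ItemOID"), item.get("Value"))
```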


Subject(s)
Computer Systems/standards , Data Collection/standards , Electronic Health Records/standards , Information Storage and Retrieval/standards , Algorithms , Biomedical Research , Clinical Trials as Topic , Database Management Systems , Humans , Programming Languages , Reproducibility of Results , Semantics
10.
Adv Exp Med Biol ; 939: 259-287, 2016.
Article in English | MEDLINE | ID: mdl-27807751

ABSTRACT

The development of information technology has resulted in its penetration into every area of clinical research. Various clinical systems have been developed, which produce increasing volumes of clinical data. However, saving, exchanging, querying, and exploiting these data are challenging issues. The development of Extensible Markup Language (XML) has allowed the generation of flexible information formats to facilitate the electronic sharing of structured data via networks, and it has been used widely for clinical data processing. In particular, XML is very useful in the fields of data standardization, data exchange, and data integration. Moreover, ontologies have been attracting increased attention in various clinical fields in recent years. An ontology is the basic level of a knowledge representation scheme, and various ontology repositories have been developed, such as Gene Ontology and BioPortal. The creation of these standardized repositories greatly facilitates clinical research in related fields. In this chapter, we discuss the basic concepts of XML and ontologies, as well as their clinical applications.


Subject(s)
Electronic Health Records/statistics & numerical data , Information Systems/organization & administration , Medical Informatics/methods , Clinical Trials as Topic , Databases, Factual , Humans , Information Dissemination , Information Systems/classification , Internet , Programming Languages
11.
Proteomics ; 15(18): 3152-62, 2015 Sep.
Article in English | MEDLINE | ID: mdl-26037908

ABSTRACT

The mzQuantML standard has been developed by the Proteomics Standards Initiative for capturing, archiving and exchanging quantitative proteomic data, derived from mass spectrometry. It is a rich XML-based format, capable of representing data about two-dimensional features from LC-MS data, and peptides, proteins or groups of proteins that have been quantified from multiple samples. In this article we report the development of an open source Java-based library of routines for mzQuantML, called the mzqLibrary, and associated software for visualising data called the mzqViewer. The mzqLibrary contains routines for mapping (peptide) identifications on quantified features, inference of protein (group)-level quantification values from peptide-level values, normalisation and basic statistics for differential expression. These routines can be accessed via the command line, through a Java programming interface, or from a basic graphical user interface. The mzqLibrary also contains several file format converters, including import converters (to mzQuantML) from OpenMS, Progenesis LC-MS and MaxQuant, and exporters (from mzQuantML) to other standards or useful formats (mzTab, HTML, csv). The mzqViewer contains in-built routines for viewing the tables of data (about features, peptides or proteins), and connects to the R statistical library for more advanced plotting options. The mzqLibrary and mzqViewer packages are available from https://code.google.com/p/mzq-lib/.
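The protein-inference step can be pictured as aggregating peptide-level values per protein; the sketch below sums peptide abundances across runs, which is one common approach, not necessarily the exact routine in mzqLibrary.

```python
# Hypothetical peptide-level table: (peptide, protein, abundance per run).
peptides = [
    ("PEPTIDEA", "P1", [1.0e6, 1.2e6]),
    ("PEPTIDEB", "P1", [2.0e6, 1.8e6]),
    ("PEPTIDEC", "P2", [5.0e5, 6.0e5]),
]

proteins = {}
for _, prot, runs in peptides:           # sum peptide signal per protein
    acc = proteins.setdefault(prot, [0.0] * len(runs))
    for i, v in enumerate(runs):
        acc[i] += v

for prot, runs in proteins.items():
    print(prot, runs)                    # protein-level values per run
```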


Subject(s)
Database Management Systems , Databases, Protein/standards , Proteomics/methods , Proteomics/standards , Software
12.
Proteomics ; 14(6): 685-8, 2014 Mar.
Article in English | MEDLINE | ID: mdl-24453188

ABSTRACT

The mzQuantML standard from the HUPO Proteomics Standards Initiative has recently been released, capturing quantitative data about peptides and proteins, following analysis of MS data. We present a Java application programming interface (API) for mzQuantML called jmzQuantML. The API provides robust bridges between Java classes and elements in mzQuantML files and allows random access to any part of the file. The API provides read and write capabilities, and is designed to be embedded in other software packages, enabling mzQuantML support to be added to proteomics software tools (http://code.google.com/p/jmzquantml/). The mzQuantML standard is designed around a multilevel validation system to ensure that files are structurally and semantically correct for different proteomics quantitative techniques. In this article, we also describe a Java software tool (http://code.google.com/p/mzquantml-validator/) for validating mzQuantML files, which is a formal part of the data standard.
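Structural (XSD-level) validation of an XML file can be sketched with lxml as below; the actual mzQuantML validator is Java-based and additionally enforces semantic rules per quantitative technique. File names here are placeholders.

```python
from lxml import etree

# Structural validation only; schema and instance paths are placeholders.
schema = etree.XMLSchema(etree.parse("mzQuantML_schema.xsd"))
doc = etree.parse("experiment.mzq")

if schema.validate(doc):
    print("structurally valid")
else:
    for err in schema.error_log:         # line-level diagnostics
        print(err.line, err.message)
```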


Subject(s)
Proteins/chemistry , Proteomics/methods , Software , Databases, Protein , Mass Spectrometry/methods , Peptides/chemistry , Programming Languages
13.
Proteomics ; 14(21-22): 2389-99, 2014 Nov.
Article in English | MEDLINE | ID: mdl-25092112

ABSTRACT

Inferring which protein species have been detected in bottom-up proteomics experiments has been a challenging problem for which solutions have been maturing over the past decade. While many inference approaches now function well in isolation, comparing and reconciling the results generated across different tools remains difficult. It presently stands as one of the greatest barriers in collaborative efforts such as the Human Proteome Project and public repositories such as the PRoteomics IDEntifications (PRIDE) database. Here we present a framework for reporting protein identifications that seeks to improve capabilities for comparing results generated by different inference tools. This framework standardizes the terminology for describing protein identification results, associated with the HUPO-Proteomics Standards Initiative (PSI) mzIdentML standard, while still allowing for differing methodologies to reach that final state. It is proposed that developers of software for reporting identification results will adopt this terminology in their outputs. While the new terminology does not require any changes to the core mzIdentML model, it represents a significant change in practice, and, as such, the rules will be released via a new version of the mzIdentML specification (version 1.2) so that consumers of files are able to determine whether the new guidelines have been adopted by export software.


Subject(s)
Mass Spectrometry/standards , Proteins/analysis , Proteomics/standards , Software/standards , Databases, Protein , Humans , Mass Spectrometry/methods , Proteomics/methods
14.
J Biomed Inform ; 50: 77-94, 2014 Aug.
Article in English | MEDLINE | ID: mdl-24518557

ABSTRACT

An ever-increasing amount of medical data, such as electronic health records, is being collected, stored, shared, and managed in large online health information systems and electronic medical record (EMR) systems (Williams et al., 2001; Virtanen, 2009; Huang and Liou, 2007) [1-3]. From such rich collections, data are often published in the form of census and statistical data sets for the purpose of knowledge sharing and enabling medical research. This brings with it an increasing need to protect individual privacy, which becomes an issue of great importance especially when information about patients is exposed to the public. While the concept of data privacy has been comprehensively studied for relational data, models and algorithms addressing the distinct differences and complex structure of XML data are yet to be explored. Currently, the common compromise is to convert private XML data into relational data for publication. This ad hoc approach results in significant loss of the useful semantic information previously carried in the private XML data. Health data often have a very complex structure, which is best expressed in XML. In fact, XML is the standard format for exchanging (e.g. HL7 version 3(1)) and publishing health information. The lack of means to deal directly with data in XML format is inevitably a serious drawback. In this paper we propose a novel privacy protection model for XML, and an algorithm for implementing this model. We provide general rules, both for transforming a private XML schema into a published XML schema, and for mapping private XML data to the new privacy-protected published XML data. In addition, we propose a new privacy property, δ-dependency, which can be applied to both relational and XML data, and which takes into consideration the hierarchical nature of sensitive data (as opposed to "quasi-identifiers"). Lastly, we provide an implementation of our model, algorithm, and privacy property, and perform an experimental analysis to demonstrate the proposed privacy scheme in practical application.
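A generic suppression/generalisation pass over XML, shown below, conveys the flavour of such transformations; the paper's δ-dependency model and schema-mapping rules are considerably more elaborate.

```python
import xml.etree.ElementTree as ET

# Toy record; element names and rules are invented for illustration.
doc = "<patient><zip>12345</zip><age>37</age><diagnosis>flu</diagnosis></patient>"
root = ET.fromstring(doc)

root.find("zip").text = root.find("zip").text[:3] + "**"          # generalise
age = int(root.find("age").text)
root.find("age").text = f"{age // 10 * 10}-{age // 10 * 10 + 9}"  # bucket

print(ET.tostring(root, encoding="unicode"))
```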


Subject(s)
Electronic Health Records , Privacy , Programming Languages
15.
ACS Chem Neurosci ; 15(11): 2144-2159, 2024 06 05.
Article in English | MEDLINE | ID: mdl-38723285

ABSTRACT

The local interpretable model-agnostic explanation (LIME) method was used to interpret two machine learning models of compounds penetrating the blood-brain barrier. The classification models, Random Forest, ExtraTrees, and a Deep Residual Network, were trained and validated on a blood-brain barrier penetration dataset, which records the penetrability of compounds across the barrier. LIME was able to create explanations for this penetrability, highlighting the molecular substructures that most affect drug penetration across the barrier. The simple and intuitive outputs demonstrate the applicability of this explainable model to interpreting the permeability of compounds across the blood-brain barrier in terms of molecular features. LIME explanations were filtered to those with a weight equal to or greater than 0.1 to retain only the most relevant explanations. The results showed several structures that are important for blood-brain barrier penetration. In general, it was found that some compounds with nitrogenous substructures are more likely to permeate the blood-brain barrier. The application of these structural explanations may help the pharmaceutical industry and drug synthesis research groups to design active molecules more rationally.
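The LIME workflow described here can be reproduced in outline with the lime package; the toy data below stands in for the molecular-descriptor dataset, and the 0.1 threshold mirrors the filtering step.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Toy stand-in for the BBB dataset; the study also used ExtraTrees and
# a deep residual network over real molecular descriptors.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X, feature_names=[f"f{i}" for i in range(10)],
    class_names=["non-penetrant", "penetrant"], mode="classification")

exp = explainer.explain_instance(X[0], model.predict_proba, num_features=10)
relevant = [(feat, w) for feat, w in exp.as_list() if abs(w) >= 0.1]
print(relevant)                          # keep only weights >= 0.1
```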


Subject(s)
Blood-Brain Barrier , Machine Learning , Blood-Brain Barrier/metabolism , Humans , Biological Transport/physiology , Permeability
16.
JMIR Form Res ; 8: e50475, 2024 Apr 16.
Article in English | MEDLINE | ID: mdl-38625728

ABSTRACT

BACKGROUND: Though there has been considerable effort to implement machine learning (ML) methods for health care, clinical implementation has lagged. Incorporating explainable machine learning (XML) methods through the development of a decision support tool using a design thinking approach is expected to lead to greater uptake of such tools. OBJECTIVE: This work aimed to explore how constant engagement of clinician end users can address the lack of adoption of ML tools in clinical contexts due to their lack of transparency and address challenges related to presenting explainability in a decision support interface. METHODS: We used a design thinking approach augmented with additional theoretical frameworks to provide more robust approaches to different phases of design. In particular, in the problem definition phase, we incorporated the nonadoption, abandonment, scale-up, spread, and sustainability of technology in health care (NASSS) framework to assess these aspects in a health care network. This process helped focus on the development of a prognostic tool that predicted the likelihood of admission to an intensive care ward based on disease severity in chest x-ray images. In the ideate, prototype, and test phases, we incorporated a metric framework to assess physician trust in artificial intelligence (AI) tools. This allowed us to compare physicians' assessments of the domain representation, actionability, and consistency of the tool. RESULTS: Physicians found the design of the prototype elegant, and domain-appropriate representation of data was displayed in the tool. They appreciated the simplified explainability overlay, which only displayed the most predictive patches that cumulatively explained 90% of the final admission risk score. Finally, in terms of consistency, physicians unanimously appreciated the capacity to compare multiple x-ray images in the same view. They also appreciated the ability to toggle the explainability overlay, so that both options made it easier for them to assess how consistently the tool was identifying elements of the x-ray image they felt would contribute to overall disease severity. CONCLUSIONS: The adopted approach is situated in an evolving space concerned with incorporating XML or AI technologies into health care software. We addressed the alignment of AI as it relates to clinician trust, describing an approach to wireframing and prototyping, which incorporates the use of a theoretical framework for trust in the design process itself. Moreover, we proposed that alignment of AI is dependent upon integration of end users throughout the larger design process. Our work shows the importance and value of engaging end users prior to tool development. We believe that the described approach is a unique and valuable contribution that outlines a direction for ML experts, user experience designers, and clinician end users on how to collaborate in the creation of trustworthy and usable XML-based clinical decision support tools.
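The 90% cumulative overlay can be pictured as a greedy selection over patch importances; the scores below are invented, since the tool's underlying model is not detailed here.

```python
# Hypothetical patch-importance scores from a saliency method over one
# x-ray; the tool's actual scoring model is not described here.
patches = {"p1": 0.35, "p2": 0.25, "p3": 0.18, "p4": 0.12, "p5": 0.06, "p6": 0.04}

total, shown, acc = sum(patches.values()), [], 0.0
for name, score in sorted(patches.items(), key=lambda kv: -kv[1]):
    shown.append(name)
    acc += score
    if acc / total >= 0.9:               # stop once 90% is explained
        break

print(shown, f"{acc / total:.0%}")       # patches kept for the overlay
```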

17.
Int J Med Inform ; 178: 105207, 2023 10.
Article in English | MEDLINE | ID: mdl-37688835

ABSTRACT

BACKGROUND: Geopolitical and economic crises force a growing number of people to leave their countries in search of better employment opportunities abroad. Meanwhile, the highly competitive labor market provides opportunities for employees to change workplaces and job positions. Health assessment data collected over an occupational history are an essential resource for developing efficient occupational disease prevention strategies, as well as for ensuring the physical and psychological well-being of newly appointed workers. The diversity in data representation is a source of interoperability problems that are insufficiently explored in the existing literature. OBJECTIVES: This research aims to design a worker's occupational health assessment summary (OHAS) dataset that satisfies the requirements of an international standard for semantic interoperability in the use case of exchanging extracts of such data. The focus is on the need for a common OHAS standard at the EU level allowing seamless exchange of OHAS both across borders and within the worker's country of origin. RESULTS: This paper proposes a novel systematic approach to ensuring semantic interoperability in the exchange of OHAS. Two use cases are explored in terms of UML sequence diagrams. The OHAS dataset reflects common data requirements established in the national legislation of EU countries. Finally, an EN 13606 archetype of OHAS is designed to satisfy the requirements for semantic interoperability in the exchange of clinical data. Semantic interoperability of OHAS is demonstrated with realistic use case data. CONCLUSIONS: The static, non-volatile, and reusable information model of OHAS developed in this paper allows the creation of EN 13606 archetype instances that are valid with respect to the Reference Model and the datatypes of this standard. Thus, basic activities in the OHAS use case can be implemented in software, for example by means of a native XML database, and integrated into existing information systems.


Subject(s)
Occupational Health , Semantics , Humans , Information Systems , Employment , Occupations
18.
J Pathol Inform ; 14: 100303, 2023.
Article in English | MEDLINE | ID: mdl-36941960

ABSTRACT

Background: Reflexive laboratory testing workflows can improve the assessment of patients receiving pain medications chronically, but complex workflows requiring pathologist input and interpretation may not be well-supported by traditional laboratory information systems. In this work, we describe the development of a web application that improves the efficiency of pathologists and laboratory staff in delivering actionable toxicology results. Method: Before designing the application, we set out to understand the entire workflow including the laboratory workflow and pathologist review. Additionally, we gathered requirements and specifications from stakeholders. Finally, to assess the performance of the implementation of the application, we surveyed stakeholders and documented the approximate amount of time that is required in each step of the workflow. Results: A web-based application was chosen for the ease of access for users. Relevant clinical data was routinely received and displayed in the application. The workflows in the laboratory and during the interpretation process served as the basis of the user interface. With the addition of auto-filing software, the return on investment was significant. The laboratory saved the equivalent of one full-time employee in time by automating file management and result entry. Discussion: Implementation of a purpose-built application to support reflex and interpretation workflows in a clinical pathology practice has led to a significant improvement in laboratory efficiency. Custom- and purpose-built applications can help reduce staff burnout, reduce transcription errors, and allow staff to focus on more critical issues around quality.

19.
Sensors (Basel) ; 12(6): 6802-24, 2012.
Article in English | MEDLINE | ID: mdl-22969322

ABSTRACT

The key idea underlying many Ambient Intelligence (AmI) projects and applications is context awareness, which is based mainly on their capacity to identify users and their locations. The actual computing capacity should remain in the background, in the periphery of our awareness, and should only move to the center if and when necessary. Computing thus becomes 'invisible', as it is embedded in the environment and everyday objects. The research project described herein aims to realize an Ambient Intelligence-based environment able to improve users' quality of life by learning their habits and anticipating their needs. This environment is part of an adaptive, context-aware framework designed to make today's incompatible heterogeneous domotic systems fully interoperable, not only for connecting sensors and actuators, but for providing comprehensive connections of devices to users. The solution is a middleware architecture based on open and widely recognized standards capable of abstracting the peculiarities of underlying heterogeneous technologies and enabling them to co-exist and interwork, without however eliminating their differences. At the highest level of this infrastructure, the Ambient Intelligence framework, integrated with the domotic sensors, can enable the system to recognize any unusual or dangerous situations and anticipate health problems or special user needs in a technological living environment, such as a house or a public space.


Subject(s)
Automation , Environment , Health Services Needs and Demand , Algorithms , Artificial Intelligence , Humans , Reproducibility of Results
20.
J Pathol Inform ; 13: 100154, 2022.
Article in English | MEDLINE | ID: mdl-36605108

ABSTRACT

Context: Analysis of diagnostic information in pathology reports for the purposes of clinical or translational research and quality assessment/control often requires manual data extraction, which can be laborious, time-consuming, and subject to mistakes. Objective: We sought to develop, employ, and evaluate a simple, dictionary- and rule-based natural language processing (NLP) algorithm for generating searchable information on various types of parameters from diverse surgical pathology reports. Design: Data were exported from the pathology laboratory information system (LIS) into extensible markup language (XML) documents, which were parsed by NLP-based Python code into desired data points and delivered to Excel spreadsheets. Accuracy and efficiency were compared to a manual data extraction method with concordance measured by Cohen's κ coefficient and corresponding P values. Results: The automated method was highly concordant (90%-100%, P<.001) with excellent inter-observer reliability (Cohen's κ: 0.86-1.0) compared to the manual method in 3 clinicopathological research scenarios, including squamous dysplasia presence and grade in anal biopsies, epithelial dysplasia grade and location in colonoscopic surveillance biopsies, and adenocarcinoma grade and amount in prostate core biopsies. Significantly, the automated method was 24-39 times faster and inherently contained links for each diagnosis to additional variables such as patient age, location, etc., which would require additional manual processing time. Conclusions: A simple, flexible, and scalable NLP-based platform can be used to correctly, safely, and quickly extract and deliver linked data from pathology reports into searchable spreadsheets for clinical and research purposes.
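A toy version of such a dictionary- and rule-based pass, with an invented report layout and dictionary, might look like this:

```python
import csv, re, xml.etree.ElementTree as ET

# Hypothetical report layout and a toy dictionary; the deployed rules
# were tuned to the laboratory's own LIS export.
doc = """<report><accession>S23-100</accession>
<diagnosis>Anal biopsy: high-grade squamous dysplasia (HSIL).</diagnosis></report>"""

grades = {"high-grade": "HSIL", "low-grade": "LSIL"}
root = ET.fromstring(doc)
text = root.findtext("diagnosis").lower()

grade = next((lab for key, lab in grades.items() if key in text), "none")
dysplasia = bool(re.search(r"dysplasia", text))  # rule match on keyword

with open("extracted.csv", "w", newline="") as f:
    w = csv.writer(f)
    w.writerow(["accession", "dysplasia", "grade"])
    w.writerow([root.findtext("accession"), dysplasia, grade])
```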
