Results 1 - 20 of 412
1.
J Med Internet Res ; 26: e54263, 2024 Jul 05.
Article in English | MEDLINE | ID: mdl-38968598

ABSTRACT

BACKGROUND: The medical knowledge graph provides explainable decision support, helping clinicians with prompt diagnosis and treatment suggestions. However, in real-world clinical practice, patients visit different hospitals seeking various medical services, resulting in fragmented patient data across hospitals. Combined with data security concerns, this fragmentation limits the application of knowledge graphs, because single-hospital data cannot provide complete evidence for precise decision support and comprehensive explanations. New methods are therefore needed for knowledge graph systems to operate in multicenter, information-sensitive medical environments, using fragmented patient records for decision support while maintaining data privacy and security. OBJECTIVE: This study aims to propose an electronic health record (EHR)-oriented knowledge graph system for collaborative reasoning over multicenter fragmented patient medical data while preserving data privacy. METHODS: The study introduced an EHR knowledge graph framework and a novel collaborative reasoning process for utilizing multicenter fragmented information. The system was deployed in each hospital and used a unified semantic structure and the Observational Medical Outcomes Partnership (OMOP) vocabulary to standardize the local EHR data set. The system transforms local EHR data into semantic formats and performs semantic reasoning to generate intermediate reasoning findings. The generated intermediate findings use hypernym concepts to isolate the original medical data. The intermediate findings and hash-encrypted patient identities were synchronized through a blockchain network. The multicenter intermediate findings were then combined for final reasoning and clinical decision support without gathering the original EHR data. RESULTS: The system was evaluated in an application study that used multicenter fragmented EHR data to alert non-nephrology clinicians to overlooked patients with chronic kidney disease (CKD). The study covered 1185 patients in non-nephrology departments from 3 hospitals, each of whom had visited at least two of the hospitals. Of these, 124 patients were identified as meeting CKD diagnostic criteria through collaborative reasoning using multicenter EHR data, whereas data from any individual hospital alone could not identify CKD in these patients. Assessment by clinicians indicated that 78/91 (86%) patients were CKD positive. CONCLUSIONS: The proposed system effectively utilized multicenter fragmented EHR data for a clinical application, and the application study showed its clinical benefits through prompt and comprehensive decision support.


Subject(s)
Clinical Decision Support Systems , Electronic Health Records , Humans
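Editor's note: as a hedged illustration of the collaborative-reasoning idea in entry 1 (not the authors' implementation), the sketch below hashes a patient identity and expresses a local finding at a hypernym level, so that only the hash and the abstracted finding, never the raw EHR record, would be shared across sites. The `IntermediateFinding` structure, the toy eGFR rule, and the concept labels are illustrative assumptions.

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class IntermediateFinding:
    """Abstracted, shareable reasoning output; hypothetical structure."""
    patient_hash: str      # hash-encrypted identity, never the raw identifier
    hypernym_concept: str  # e.g. "reduced kidney function" instead of a raw lab value
    site: str

def hash_identity(national_id: str, salt: str = "site-shared-salt") -> str:
    # One-way hash so that sites can link findings without exchanging identities.
    return hashlib.sha256((salt + national_id).encode()).hexdigest()

def local_reasoning(raw_record: dict, site: str) -> IntermediateFinding:
    # Toy rule standing in for semantic reasoning over the local knowledge graph:
    # a low eGFR is abstracted to a hypernym concept before leaving the hospital.
    concept = ("reduced kidney function"
               if raw_record["egfr"] < 60 else "normal kidney function")
    return IntermediateFinding(hash_identity(raw_record["national_id"]), concept, site)

# Two hospitals derive findings locally; only abstracted findings are exchanged.
f_a = local_reasoning({"national_id": "12345", "egfr": 52}, "Hospital A")
f_b = local_reasoning({"national_id": "12345", "egfr": 55}, "Hospital B")

# A collaborating node could flag a CKD alert candidate when both sites report
# the same hypernym concept for the same hashed identity.
if (f_a.patient_hash == f_b.patient_hash
        and {f_a.hypernym_concept, f_b.hypernym_concept} == {"reduced kidney function"}):
    print("CKD alert candidate:", json.dumps(asdict(f_a), indent=2))
```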
2.
J Med Internet Res ; 26: e54737, 2024 Sep 16.
Article in English | MEDLINE | ID: mdl-39283665

ABSTRACT

BACKGROUND: Despite the emerging application of clinical decision support systems (CDSS) in pregnancy care and the proliferation of artificial intelligence (AI) over the last decade, the role of AI in CDSS specialized for pregnancy care remains understudied. OBJECTIVE: To identify and synthesize AI-augmented CDSS in pregnancy care, including CDSS functionality, AI methodologies, and clinical implementation, we report a systematic review of empirical studies that examined AI-augmented CDSS in pregnancy care. METHODS: We retrieved studies that examined AI-augmented CDSS in pregnancy care using database queries over titles, abstracts, keywords, and MeSH (Medical Subject Headings) terms. Bibliographic records from inception to 2022 were retrieved from PubMed/MEDLINE (n=206), Embase (n=101), and the ACM Digital Library (n=377), followed by eligibility screening and literature review. The eligibility criteria included empirical studies that (1) developed or tested AI methods, (2) developed or tested CDSS or CDSS components, and (3) focused on pregnancy care. Data used for review and appraisal included the title, abstract, keywords, MeSH terms, full text, and supplements of each study. Publications with ancillary information or overlapping outcomes were synthesized as a single study. Reviewers independently reviewed and assessed the quality of the selected studies. RESULTS: We identified 30 distinct studies among the 684 records retrieved from inception to 2022. Clinical applications covered AI-augmented CDSS for prenatal care, early pregnancy, obstetric care, and postpartum care. CDSS functions included diagnostic support, clinical prediction, therapeutics recommendation, and knowledge bases. CONCLUSIONS: Our review acknowledged recent advances in CDSS studies, including early diagnosis of prenatal abnormalities, cost-effective surveillance, prenatal ultrasound support, and ontology development. To recommend future directions, we also noted key gaps in existing studies, including (1) decision support for current childbirth deliveries that does not use observational data on consequential fetal or maternal outcomes in future pregnancies; (2) a scarcity of studies identifying several high-profile biases in CDSS, including the social determinants of health highlighted by the American College of Obstetricians and Gynecologists; and (3) a chasm between internally validated CDSS models, external validity, and clinical implementation.


Subject(s)
Artificial Intelligence , Clinical Decision Support Systems , Humans , Pregnancy , Female , Prenatal Care/methods
3.
Sensors (Basel) ; 24(12)2024 Jun 19.
Article in English | MEDLINE | ID: mdl-38931761

ABSTRACT

This paper concerns the extension of the Heritage Digital Twin Ontology introduced in previous research to describe the reactivity of digital twins used for cultural heritage documentation, including the semantic description of sensors and activators and of the whole process of interacting with the real world. After analysing previous work on the use of digital twins in cultural heritage, a summary description of the Heritage Digital Twin Ontology is provided, and existing applications of digital twins to cultural heritage are reviewed, with references to surveys summarising the large body of scientific contributions on the topic. Then, a novel ontology named the Reactive Digital Twin Ontology is described, in which sensors, activators, and decision processes are also semantically described, turning the previously synchronic approach to cultural heritage documentation into a diachronic one. Some case studies exemplify this approach.
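Editor's note: a minimal rdflib sketch, using an illustrative namespace rather than the published Reactive Digital Twin Ontology IRIs, of how a sensor, a decision process, and an activator might be described as linked resources around a heritage asset.

```python
from rdflib import Graph, Namespace, Literal, RDF, RDFS

# Illustrative namespace; the actual Reactive Digital Twin Ontology IRIs may differ.
RDT = Namespace("http://example.org/reactive-dt#")

g = Graph()
g.bind("rdt", RDT)

# A humidity sensor observes a heritage asset, a decision process evaluates the
# reading, and an activator (e.g. a dehumidifier switch) acts on the real world.
g.add((RDT.HumiditySensor01, RDF.type, RDT.Sensor))
g.add((RDT.HumiditySensor01, RDT.observes, RDT.FrescoRoom))
g.add((RDT.HumidityCheck, RDF.type, RDT.DecisionProcess))
g.add((RDT.HumidityCheck, RDT.hasInput, RDT.HumiditySensor01))
g.add((RDT.HumidityCheck, RDT.triggers, RDT.Dehumidifier01))
g.add((RDT.Dehumidifier01, RDF.type, RDT.Activator))
g.add((RDT.Dehumidifier01, RDFS.label, Literal("Dehumidifier in fresco room")))

print(g.serialize(format="turtle"))
```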

4.
Sensors (Basel) ; 24(8)2024 Apr 19.
Article in English | MEDLINE | ID: mdl-38676238

ABSTRACT

In the highly competitive field of material manufacturing, stakeholders strive for increased end-product quality, reduced operating costs, and timely completion of their business processes. Digital twin (DT) technologies are considered major enablers that can be deployed to assist the development and effective provision of manufacturing processes. Additionally, knowledge graphs (KGs) have emerged as effective tools in the industrial domain, able to represent data from various disciplines in a structured manner while also supporting advanced analytics. This paper proposes a solution that integrates a KG and DTs. Through this synergy, we aimed to develop highly autonomous and flexible DTs that use the semantic knowledge stored in the KG to better support advanced functionalities. The developed KG stores information about materials and their properties as well as details about the processes in which they are involved, following a flexible schema that is not domain specific. The DT comprises smaller Virtual Objects (VOs), each acting as an abstraction of a single step of the Industrial Business Process (IBP) and providing the functionalities that simulate the corresponding real-world process. By executing appropriate queries against the KG, the DT can orchestrate the operation of the VOs and their physical counterparts and configure their parameters accordingly, thereby increasing its self-awareness. In this article, the architecture of such a solution is presented and its application to a real laser glass bending process is showcased.
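Editor's note: to make the KG-to-DT interaction concrete, here is a small sketch, under a hypothetical schema rather than the paper's actual graph, in which a Virtual Object queries an in-memory knowledge graph for a material property and configures itself accordingly.

```python
from rdflib import Graph

# Hypothetical, non-domain-specific schema: materials, properties, process steps.
ttl = """
@prefix ex: <http://example.org/kg#> .
ex:GlassSheetA ex:hasProperty ex:PropA .
ex:PropA ex:propertyName "bendingTemperature" ;
         ex:propertyValue "620" .
ex:LaserBendingStep ex:consumesMaterial ex:GlassSheetA .
"""
kg = Graph()
kg.parse(data=ttl, format="turtle")

query = """
PREFIX ex: <http://example.org/kg#>
SELECT ?value WHERE {
  ex:LaserBendingStep ex:consumesMaterial ?m .
  ?m ex:hasProperty ?p .
  ?p ex:propertyName "bendingTemperature" ;
     ex:propertyValue ?value .
}
"""

class LaserBendingVO:
    """Virtual Object abstracting one step of the industrial business process."""
    def configure(self, graph: Graph) -> None:
        # The VO pulls its operating parameter from the KG instead of hard-coding it.
        for row in graph.query(query):
            self.target_temperature = float(row.value)
            print(f"VO configured: target temperature {self.target_temperature} C")

LaserBendingVO().configure(kg)
```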

5.
Sensors (Basel) ; 24(6)2024 Mar 07.
Article in English | MEDLINE | ID: mdl-38544003

ABSTRACT

The modern healthcare landscape is overwhelmed by data derived from heterogeneous IoT data sources and Electronic Health Record (EHR) systems. Building on advances in data science and Machine Learning (ML), an improved ability to integrate and process so-called primary and secondary data fosters the provision of real-time and personalized decisions. In that direction, this article introduces an innovative mechanism for processing and integrating health-related data. It describes the details of the mechanism, its internal subcomponents and workflows, and the results of its utilization, validation, and evaluation in a real-world scenario. It also highlights the potential derived from integrating primary and secondary data into Holistic Health Records (HHRs) and from utilizing advanced ML-based and Semantic Web techniques to improve the quality, reliability, and interoperability of the examined data. The viability of this approach is evaluated on heterogeneous healthcare datasets pertaining to personalized risk identification and monitoring for pancreatic cancer. The key outcomes and innovations of this mechanism are the introduction of the HHRs, which capture all health determinants in a harmonized way, and a holistic data ingestion mechanism for advanced data processing and analysis.


Subject(s)
Electronic Health Records , Pancreatic Neoplasms , Humans , Holistic Health , Reproducibility of Results , Semantics , Machine Learning
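Editor's note: a hedged sketch of the Holistic Health Record idea from entry 5, harmonizing a wearable (primary) observation and an EHR (secondary) entry into one record. The field names and the codes used here are assumptions for illustration, not the project's actual schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Observation:
    code: str        # standardized code (illustrative placeholder, not a validated code)
    value: float
    unit: str
    source: str      # "primary" (IoT/wearable) or "secondary" (EHR)

@dataclass
class HolisticHealthRecord:
    patient_id: str
    observations: List[Observation] = field(default_factory=list)

    def ingest(self, obs: Observation) -> None:
        self.observations.append(obs)

    def risk_features(self) -> dict:
        # Toy feature extraction that a downstream ML risk model could consume.
        return {o.code: o.value for o in self.observations}

hhr = HolisticHealthRecord("patient-001")
# Primary data: continuous glucose reading from a wearable sensor.
hhr.ingest(Observation(code="glucose", value=118.0, unit="mg/dL", source="primary"))
# Secondary data: tumor-marker laboratory result from the EHR.
hhr.ingest(Observation(code="ca19-9", value=41.0, unit="U/mL", source="secondary"))

print(hhr.risk_features())
```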
6.
Sensors (Basel) ; 24(3)2024 Feb 01.
Article in English | MEDLINE | ID: mdl-38339683

ABSTRACT

Managing modern museum content and visitor data analytics to achieve higher levels of visitor experience and overall museum performance is a complex and multidimensional problem involving several scientific aspects, such as exhibit metadata management, visitor movement tracking and modelling, and location/context-aware content provision. Most prior research has focused on individual aspects and does not provide holistic approaches enhancing both museum performance and visitor experience. This paper proposes an integrated conceptualisation for improving these two aspects, involving four technological components. The first is the adoption and parameterisation of four ontologies for the digital documentation and presentation of exhibits and their conservation methods, spatial management, and evaluation. The second is a tool for capturing visitor movement in near real time, both anonymously (by default) and eponymously (upon visitor consent). The third is a mobile application that delivers personalised content to eponymous visitors based on static (e.g., demographic) and dynamic (e.g., visitor movement) data. The fourth is a platform that assists museum administrators in managing visitor statistics and evaluating exhibits, collections, and routes based on visitors' behaviour and interactions. Preliminary results from a pilot implementation of this holistic approach in a multi-space, high-traffic museum (MELTOPENLAB project) indicate that a cost-efficient, fully functional solution is feasible and that an optimal trade-off between technical performance and cost efficiency is achievable for museum administrators seeking unfragmented approaches that add value to their cultural heritage organisations.


Subject(s)
Data Science , Museums , Documentation
7.
Brief Bioinform ; 22(1): 30-44, 2021 01 18.
Article in English | MEDLINE | ID: mdl-32496509

ABSTRACT

Thousands of new experimental datasets become available every day; in many cases, they are produced within the scope of large cooperative efforts involving laboratories all over the world and are typically open for public use. Although the potential collective amount of available information is huge, the effective combination of such public sources is hindered by data heterogeneity, as the datasets exhibit a wide variety of notations and formats for both experimental values and metadata. Thus, data integration is becoming a fundamental activity, to be performed prior to data analysis and biological knowledge discovery, consisting of successive steps of data extraction, normalization, matching, and enrichment; applied to heterogeneous data sources, it builds multiple perspectives over the genome, leading to the identification of meaningful relationships that could not be perceived using incompatible data formats. In this paper, we first describe a technological pipeline from data production to data integration; we then propose a taxonomy of genomic data players (distinguishing contributors, repository hosts, consortia, integrators, and consumers) and apply the taxonomy to describe about 30 important players in genomic data management. We specifically focus on the integrator players, analyse the issues they face in solving the genomic data integration challenges, and evaluate the computational environments that they provide to follow up data integration with visualization and analysis tools.


Subject(s)
Data Management/methods , Human Genome , Genomics/methods , Humans , Metadata
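Editor's note: a small illustration of the extraction-normalization-matching steps discussed in entry 7, a sketch under assumed metadata fields rather than any specific integrator's pipeline.

```python
# Toy metadata records as they might arrive from two heterogeneous sources.
source_a = {"sample": "TCGA-AB-1234", "assay": "RNA-Seq", "genome_build": "GRCh38"}
source_b = {"SampleID": "tcga_ab_1234", "technique": "rnaseq", "assembly": "hg38"}

# Normalization: map source-specific keys and values onto one controlled vocabulary.
KEY_MAP = {"sample": "sample_id", "SampleID": "sample_id",
           "assay": "assay_type", "technique": "assay_type",
           "genome_build": "assembly", "assembly": "assembly"}
VALUE_MAP = {"rnaseq": "RNA-Seq", "hg38": "GRCh38"}

def normalize(record: dict) -> dict:
    out = {}
    for key, value in record.items():
        norm_key = KEY_MAP.get(key, key)
        norm_value = VALUE_MAP.get(str(value).lower(), value)
        if norm_key == "sample_id":
            norm_value = str(norm_value).upper().replace("_", "-")
        out[norm_key] = norm_value
    return out

a, b = normalize(source_a), normalize(source_b)
# Matching: records describing the same sample can now be joined and enriched.
if a["sample_id"] == b["sample_id"]:
    print("matched sample:", {**a, **b})
```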
8.
J Med Internet Res ; 25: e43617, 2023 04 18.
Article in English | MEDLINE | ID: mdl-37071460

ABSTRACT

BACKGROUND: Digital sensing solutions represent a convenient, objective, relatively inexpensive method that could be leveraged for assessing symptoms of various health conditions. Recent progress in the capabilities of digital sensing products has targeted the measurement of scratching during sleep, traditionally referred to as nocturnal scratching, in patients with atopic dermatitis or other skin conditions. Many solutions measuring nocturnal scratch have been developed; however, a lack of efforts toward standardization of the measure's definition and contextualization of scratching during sleep hampers the ability to compare different technologies for this purpose. OBJECTIVE: We aimed to address this gap and bring forth unified measurement definitions for nocturnal scratch. METHODS: We performed a narrative literature review of definitions of scratching in patients with skin inflammation and a targeted literature review of sleep in the context of the period during which such scratching occurred. Both searches were limited to English language studies in humans. The extracted data were synthesized into themes based on study characteristics: scratch as a behavior, other characterization of the scratching movement, and measurement parameters for both scratch and sleep. We then developed ontologies for the digital measurement of sleep scratching. RESULTS: In all, 29 studies defined inflammation-related scratching between 1996 and 2021. When cross-referenced with the results of search terms describing the sleep period, only 2 of these scratch-related papers also described sleep-related variables. From these search results, we developed an evidence-based and patient-centric definition of nocturnal scratch: an action of rhythmic and repetitive skin contact movement performed during a delimited time period of intended and actual sleep that is not restricted to any specific time of the day or night. Based on the measurement properties identified in the searches, we developed ontologies of relevant concepts that can be used as a starting point to develop standardized outcome measures of scratching during sleep in patients with inflammatory skin conditions. CONCLUSIONS: This work is intended to serve as a foundation for the future development of unified and well-described digital health technologies measuring nocturnal scratching and should enable better communication and sharing of results between various stakeholders taking part in research in atopic dermatitis and other inflammatory skin conditions.


Subject(s)
Atopic Dermatitis , Pruritus , Humans , Atopic Dermatitis/diagnosis , Inflammation , Movement , Pruritus/diagnosis , Sleep , Quality of Life
9.
BMC Med Inform Decis Mak ; 23(Suppl 1): 40, 2023 02 24.
Article in English | MEDLINE | ID: mdl-36829139

ABSTRACT

BACKGROUND: Two years into the COVID-19 pandemic and with more than five million deaths worldwide, the healthcare establishment continues to struggle with every new wave of the pandemic resulting from a new coronavirus variant. Research has demonstrated that there are variations in the symptoms, and even in the order of symptom presentations, in COVID-19 patients infected by different SARS-CoV-2 variants (e.g., Alpha and Omicron). Textual data in the form of admission notes and physician notes in the Electronic Health Records (EHRs) is rich in information regarding the symptoms and their orders of presentation. Unstructured EHR data is often underutilized in research due to the lack of annotations that enable automatic extraction of useful information from the available extensive volumes of textual data. METHODS: We present the design of a COVID Interface Terminology (CIT), not just a generic COVID-19 terminology, but one serving a specific purpose of enabling automatic annotation of EHRs of COVID-19 patients. CIT was constructed by integrating existing COVID-related ontologies and mining additional fine granularity concepts from clinical notes. The iterative mining approach utilized the techniques of 'anchoring' and 'concatenation' to identify potential fine granularity concepts to be added to the CIT. We also tested the generalizability of our approach on a hold-out dataset and compared the annotation coverage to the coverage obtained for the dataset used to build the CIT. RESULTS: Our experiments demonstrate that this approach results in higher annotation coverage compared to existing ontologies such as SNOMED CT and Coronavirus Infectious Disease Ontology (CIDO). The final version of CIT achieved about 20% more coverage than SNOMED CT and 50% more coverage than CIDO. In the future, the concepts mined and added into CIT could be used as training data for machine learning models for mining even more concepts into CIT and further increasing the annotation coverage. CONCLUSION: In this paper, we demonstrated the construction of a COVID interface terminology that can be utilized for automatically annotating EHRs of COVID-19 patients. The techniques presented can identify frequently documented fine granularity concepts that are missing in other ontologies thereby increasing the annotation coverage.


Subject(s)
COVID-19 , Electronic Health Records , Humans , Pandemics , SARS-CoV-2
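Editor's note: a loose sketch of the idea behind the 'anchoring' and 'concatenation' techniques mentioned in entry 9, locating existing concepts in note text and joining them with neighboring words to propose finer-grained candidate concepts. This is an interpretation for illustration, not the authors' algorithm.

```python
import re

# Existing coarse concepts (anchors), e.g. drawn from an existing terminology.
anchors = {"cough", "fever", "shortness of breath"}

notes = [
    "Patient reports persistent dry cough and low grade fever since admission.",
    "Mild shortness of breath on exertion, dry cough at night.",
]

def mine_candidates(text: str, window: int = 1) -> set:
    """Concatenate each anchor with up to `window` preceding words to
    propose finer-granularity candidate concepts."""
    candidates = set()
    tokens = re.findall(r"[a-z]+", text.lower())
    for anchor in anchors:
        a_tokens = anchor.split()
        n = len(a_tokens)
        for i in range(len(tokens) - n + 1):
            if tokens[i:i + n] == a_tokens:
                start = max(0, i - window)
                candidates.add(" ".join(tokens[start:i + n]))
    return candidates

found = set()
for note in notes:
    found |= mine_candidates(note)
# Candidates such as "dry cough" would then be reviewed before being added
# to the interface terminology to increase annotation coverage.
print(sorted(found))
```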
10.
BMC Med Inform Decis Mak ; 23(1): 12, 2023 01 19.
Article in English | MEDLINE | ID: mdl-36658526

ABSTRACT

BACKGROUND: Intensive Care Unit (ICU) readmissions represent both a health risk for patients, with increased mortality rates and overall health deterioration, and a financial burden for healthcare facilities. As healthcare has become more data-driven with the introduction of Electronic Health Records (EHRs), machine learning methods have been applied to predict ICU readmission risk. However, these methods disregard the meaning and relationships of data objects and work blindly over clinical data without taking scientific knowledge and context into account. Ontologies and Knowledge Graphs can help bridge this gap between data and scientific context, as they are computational artefacts that represent the entities of a domain and their relationships to each other in a formalized way. METHODS AND RESULTS: We have developed an approach that enriches EHR data with semantic annotations to ontologies to build a Knowledge Graph. A patient's ICU stay is represented by Knowledge Graph embeddings in a contextualized manner, and these embeddings are used by machine learning models to predict 30-day ICU readmissions. This approach is based on several contributions: (1) an enrichment of the MIMIC-III dataset with patient-oriented annotations to various biomedical ontologies; (2) a Knowledge Graph that defines patient data with biomedical ontologies; (3) a predictive model of ICU readmission risk that uses Knowledge Graph embeddings; (4) a variant of the predictive model that targets different time points during an ICU stay. Our predictive approaches outperformed both a baseline and state-of-the-art works, achieving a mean Area Under the Receiver Operating Characteristic Curve of 0.827 and an Area Under the Precision-Recall Curve of 0.691. The application of this novel approach to help clinicians decide whether a patient can be discharged has the potential to prevent the readmission of [Formula: see text] of Intensive Care Unit patients, without unnecessarily prolonging the stay of those who would not require it. CONCLUSION: The coupling of semantic annotation and Knowledge Graph embeddings affords two clear advantages: it considers scientific context, and it can build representations of EHR information of different types in a common format. This work demonstrates the potential impact of integrating ontologies and Knowledge Graphs into clinical machine learning applications.


Subject(s)
Biological Ontologies , Patient Readmission , Humans , Automated Pattern Recognition , Machine Learning , Intensive Care Units
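Editor's note: a compact sketch, in the spirit of entry 10, of how ICU stays represented as precomputed Knowledge Graph embedding vectors could feed a 30-day readmission classifier evaluated with AUROC/AUPRC. The data here are synthetic placeholders, not MIMIC-III, and the classifier is a generic one rather than the authors' models.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(0)

# Pretend each ICU stay already has a 64-dimensional Knowledge Graph embedding
# (e.g. produced from ontology-annotated EHR data); here they are synthetic.
n_stays, dim = 1000, 64
X = rng.normal(size=(n_stays, dim))
true_w = rng.normal(size=dim)
y = (X @ true_w + rng.normal(scale=2.0, size=n_stays) > 1.0).astype(int)  # readmitted?

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

proba = clf.predict_proba(X_te)[:, 1]
print("AUROC:", round(roc_auc_score(y_te, proba), 3))
print("AUPRC:", round(average_precision_score(y_te, proba), 3))
```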
11.
BMC Med Inform Decis Mak ; 23(Suppl 1): 90, 2023 05 10.
Article in English | MEDLINE | ID: mdl-37165363

ABSTRACT

INTRODUCTION: The Semantic Web community provides a common Resource Description Framework (RDF) that allows resources to be represented such that they can be linked. To maximize the potential of linked data - machine-actionable interlinked resources on the Web - a certain level of quality of RDF resources should be established, particularly in the biomedical domain, where concepts are complex and high-quality biomedical ontologies are in high demand. However, it is unclear which quality metrics for RDF resources can be automated, and automation is required given the multitude of RDF resources. Therefore, we aim to determine such metrics and demonstrate an automated approach to assessing them. METHODS: An initial set of metrics was identified through literature, standards, and existing tooling. From these, metrics were selected that fulfil three criteria: (1) objective, (2) automatable, and (3) foundational. Selected metrics were represented in RDF, semantically aligned to existing standards, and then implemented in an open-source tool. To demonstrate the tool, eight commonly used RDF resources were assessed, including data models in the healthcare domain (HL7 RIM, HL7 FHIR, CDISC CDASH), ontologies (DCT, SIO, FOAF, ORDO), and a metadata profile (GRDDL). RESULTS: Six objective metrics were identified in 3 categories: Resolvability (1), Parsability (1), and Consistency (4), and represented in RDF. The tool demonstrates that these metrics can be automated; application in the healthcare domain showed non-resolvable URIs (ranging from 0.3% to 97%) in all eight resources and undefined URIs in HL7 RIM and FHIR. In the tested resources, no errors were found for parsability or for the other three consistency metrics concerning correct usage of classes and properties. CONCLUSION: We extracted six objective and automatable metrics from the literature as the foundational quality requirements of RDF resources for maximizing the potential of linked data. Automated tooling to assess resources has proven effective in identifying quality issues that must be avoided. This approach can be expanded with further automatable metrics so that the assessment tool reflects additional quality dimensions.


Subject(s)
Biological Ontologies , Humans , Delivery of Health Care
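Editor's note: a minimal sketch of two of the automatable metric categories named in entry 11, URI resolvability and parsability, using requests and rdflib. The thresholds, report format, and example URIs are illustrative and do not reproduce the published tool.

```python
import requests
from rdflib import Graph

def check_resolvability(uris: list, timeout: float = 5.0) -> float:
    """Return the fraction of URIs that do NOT resolve (lower is better)."""
    failures = 0
    for uri in uris:
        try:
            resp = requests.head(uri, allow_redirects=True, timeout=timeout)
            if resp.status_code >= 400:
                failures += 1
        except requests.RequestException:
            failures += 1
    return failures / len(uris)

def check_parsability(data: str, fmt: str = "turtle") -> bool:
    """Return True if the serialized RDF parses without error."""
    try:
        Graph().parse(data=data, format=fmt)
        return True
    except Exception:
        return False

sample_ttl = """
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
<http://example.org/alice> foaf:name "Alice" .
"""
print("parsable:", check_parsability(sample_ttl))
print("non-resolvable fraction:",
      check_resolvability(["http://xmlns.com/foaf/0.1/",
                           "http://example.org/nothing-here"]))
```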
12.
BMC Med Inform Decis Mak ; 23(Suppl 1): 88, 2023 05 09.
Article in English | MEDLINE | ID: mdl-37161560

ABSTRACT

BACKGROUND: The extensive international research on medications and vaccines for the devastating COVID-19 pandemic requires a standard reference ontology. Among the current COVID-19 ontologies, the Coronavirus Infectious Disease Ontology (CIDO) is the largest, and it continues to grow with frequent releases. Researchers using CIDO as a reference ontology need a quick overview of the content added in a recent release to judge how relevant the new concepts are to their research needs. Although CIDO is only a medium-sized ontology, it is still a large knowledge base, posing a challenge for a user interested in obtaining the "big picture" of content changes between releases. Both a theoretical framework and a proper visualization are required to provide such a "big picture". METHODS: The child-of-based layout of the weighted aggregate partial-area taxonomy summarization network (WAT) provides a convenient "big picture" visualization of the content of an ontology. In this paper we address the "big picture" of content changes between two releases of an ontology. We introduce a new DIFF framework named the Diff Weighted Aggregate Taxonomy (DWAT) to display the differences between the WATs of two releases of an ontology. We use a layered approach that consists first of a DWAT of the major subjects in CIDO and then drills down into a major subject of interest in the top-level DWAT to obtain a DWAT of secondary subjects, with further refined layers if needed. RESULTS: A visualization of the Diff Weighted Aggregate Taxonomy is demonstrated on the CIDO ontology. The evolution of CIDO between 2020 and 2022 is demonstrated from two perspectives. Drilling down to DWATs of secondary-subject networks is also demonstrated. We illustrate how the DWAT of CIDO provides insight into its evolution. CONCLUSIONS: The new Diff Weighted Aggregate Taxonomy enables a layered approach to viewing the "big picture" of the changes in content between two releases of an ontology.


Subject(s)
COVID-19 , Humans , Pandemics , Knowledge , Knowledge Bases
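Editor's note: to convey the flavor of a release-to-release "big picture" diff, here is a toy aggregation of concept counts by major subject and a comparison between two releases. This is not the published DWAT algorithm, only an illustration of the weighting-and-diffing idea.

```python
from collections import Counter

# Each release is summarized as (concept, major_subject) pairs; toy data.
release_2020 = [("SARS-CoV-2 spike protein", "Virus component"),
                ("remdesivir", "Drug"),
                ("fever", "Symptom")]
release_2022 = [("SARS-CoV-2 spike protein", "Virus component"),
                ("remdesivir", "Drug"),
                ("molnupiravir", "Drug"),
                ("nirmatrelvir", "Drug"),
                ("fever", "Symptom"),
                ("anosmia", "Symptom")]

def weights(release):
    # Weight of a major subject = number of concepts aggregated under it.
    return Counter(subject for _, subject in release)

def diff(old, new):
    w_old, w_new = weights(old), weights(new)
    return {s: w_new.get(s, 0) - w_old.get(s, 0) for s in set(w_old) | set(w_new)}

# Positive numbers flag subjects where most content was added between releases,
# which is where a researcher might drill down into secondary subjects next.
print(diff(release_2020, release_2022))
```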
13.
Sensors (Basel) ; 23(14)2023 Jul 18.
Article in English | MEDLINE | ID: mdl-37514788

ABSTRACT

Data provenance means recording the origins of data and the history of data generation and processing. In healthcare, data provenance is one of the essential processes that make it possible to track the sources and causes of any problem with a user's data. With the emergence of the General Data Protection Regulation (GDPR), data provenance should be implemented in healthcare systems to give users more control over their data. This systematic literature review (SLR) studies the impact of data provenance in healthcare and GDPR-compliant data provenance through a review of peer-reviewed articles. The SLR discusses the technologies and methodologies used to achieve data provenance, and then explores how the technologies applied in the healthcare domain achieve it. Finally, we identify key research gaps and future research directions.


Subject(s)
Biomedical Research , Delivery of Health Care/methods
14.
Neurocomputing (Amst) ; 528: 160-177, 2023 Apr 01.
Article in English | MEDLINE | ID: mdl-36647510

ABSTRACT

The connection between humans and digital technologies has been documented extensively in past decades but needs to be re-evaluated in light of the current global pandemic. Artificial Intelligence (AI), with its two strands, Machine Learning (ML) and Semantic Reasoning, has proven to be an effective way to prevent, diagnose, and limit the spread of COVID-19. IoT solutions have been widely proposed for COVID-19 disease monitoring, infection geolocation, and social applications. In this paper, we investigate the use of these three technologies for handling the COVID-19 pandemic. For this purpose, we surveyed the ML applications and algorithms proposed during the pandemic to detect COVID-19 using symptom factors and image processing. The survey also covers existing approaches based on semantic technologies and IoT systems for COVID-19. Based on the survey results, we classify the main challenges and the solutions that could address them. The study proposes a conceptual framework for pandemic management and discusses challenges and trends for future research.

15.
Neuroimage ; 263: 119610, 2022 11.
Article in English | MEDLINE | ID: mdl-36064138

ABSTRACT

A deep understanding of the neural architecture of mental function should enable the accurate prediction of a specific pattern of brain activity for any psychological task, based only on the cognitive functions known to be engaged by that task. Encoding models (EMs), which predict neural responses from known features (e.g., stimulus properties), have succeeded in circumscribed domains (e.g., visual neuroscience), but implementing domain-general EMs that predict brain-wide activity for arbitrary tasks has been limited mainly by availability of datasets that 1) sufficiently span a large space of psychological functions, and 2) are sufficiently annotated with such functions to allow robust EM specification. We examine the use of EMs based on a formal specification of psychological function, to predict cortical activation patterns across a broad range of tasks. We utilized the Multi-Domain Task Battery, a dataset in which 24 subjects completed 32 ten-minute fMRI scans, switching tasks every 35 s and engaging in 44 total conditions of diverse psychological manipulations. Conditions were annotated by a group of experts using the Cognitive Atlas ontology to identify putatively engaged functions, and region-wise cognitive EMs (CEMs) were fit, for individual subjects, on neocortical responses. We found that CEMs predicted cortical activation maps of held-out tasks with high accuracy, outperforming a permutation-based null model while approaching the noise ceiling of the data, without being driven solely by either cognitive or perceptual-motor features. Hierarchical clustering on the similarity structure of CEM generalization errors revealed relationships amongst psychological functions. Spatial distributions of feature importances systematically overlapped with large-scale resting-state functional networks (RSNs), supporting the hypothesis of functional specialization within RSNs while grounding their function in an interpretable data-driven manner. Our implementation and validation of CEMs provides a proof of principle for the utility of formal ontologies in cognitive neuroscience and motivates the use of CEMs in the further testing of cognitive theories.


Subject(s)
Brain , Cognition , Humans , Brain/diagnostic imaging , Brain/physiology , Cognition/physiology , Brain Mapping , Magnetic Resonance Imaging
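Editor's note: a toy encoding-model sketch in the spirit of entry 15. Task conditions are annotated with binary cognitive-function features, a linear model is fit per region, and activation for a held-out condition is predicted from its annotations alone. The feature names and data are synthetic placeholders, not the Cognitive Atlas annotations or the Multi-Domain Task Battery data.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)

# Rows = task conditions, columns = annotated cognitive functions (binary).
functions = ["working memory", "language", "motor planning", "visual perception"]
F = rng.integers(0, 2, size=(40, len(functions)))            # condition-by-function
true_weights = rng.normal(size=(len(functions), 100))        # function-by-region
Y = F @ true_weights + rng.normal(scale=0.5, size=(40, 100)) # condition-by-region activation

# Fit on 39 conditions, predict the held-out condition from its annotations only.
train, test = slice(0, 39), slice(39, 40)
cem = Ridge(alpha=1.0).fit(F[train], Y[train])
pred = cem.predict(F[test])

r = np.corrcoef(pred.ravel(), Y[test].ravel())[0, 1]
print(f"spatial correlation with held-out activation map: {r:.2f}")
```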
16.
Brief Bioinform ; 21(2): 473-485, 2020 03 23.
Article in English | MEDLINE | ID: mdl-30715146

ABSTRACT

The development and application of biological ontologies have increased significantly in recent years. These ontologies can be retrieved from different repositories, which do not provide much information about quality aspects of the ontologies. In past years, a number of ontology structural metrics have been proposed, but their validity as measurement instruments has not been sufficiently studied to date. In this work, we evaluate a set of reproducible and objective ontology structural metrics. Given the lack of standard methods for this purpose, we applied an evaluation method based on the stability and goodness of the classifications of ontologies produced by each metric on an ontology corpus. The evaluation was done using ontology repositories as corpora; more concretely, we used 119 ontologies from the OBO Foundry repository and 78 ontologies from AgroPortal. First, we study the correlations between the metrics. Second, we study whether the clusters produced for a given metric are stable and well structured. The results show that the existing correlations do not bias the evaluation, that no metric generates unstable clusterings, and that all the metrics evaluated provide at least a reasonable clustering structure. Furthermore, our work makes it possible to review and suggest the most reliable ontology structural metrics in terms of the stability and goodness of their classifications. Availability: http://sele.inf.um.es/ontology-metrics.


Subject(s)
Biological Ontologies , Database Management Systems , Public Sector
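Editor's note: a small sketch of the "goodness of classification" idea from entry 16: compute one structural metric per ontology, cluster the ontologies on it, and score the clustering, here with a silhouette coefficient as an illustrative goodness measure. The metric values are made up, and the paper's exact metrics and procedure may differ.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Toy structural metric values (e.g. mean depth) for a small ontology corpus.
metric_values = np.array([[2.1], [2.3], [2.2], [7.8], [8.1], [7.9], [15.2], [14.8]])

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(metric_values)
goodness = silhouette_score(metric_values, labels)

print("cluster labels:", labels.tolist())
print("silhouette (clustering goodness):", round(goodness, 2))
```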
17.
Brief Bioinform ; 21(1): 355-367, 2020 Jan 17.
Article in English | MEDLINE | ID: mdl-30452543

ABSTRACT

Coeliac disease (CD) is a complex, multifactorial pathology driven by several factors, such as nutrition, immunological response, and genetics. Many autoimmune diseases are comorbidities of CD, and a comprehensive and integrated analysis with bioinformatics approaches can help in evaluating the interconnections among the selected pathologies. We first performed a detailed survey of gene expression data available in public repositories on CD and its less commonly considered comorbidities. We then developed an innovative pipeline that integrates gene expression, cell-type data, and online resources (e.g., a list of comorbidities from the literature) using bioinformatics methods such as gene set enrichment analysis and semantic similarity. Our pipeline is written in the R language and is available at the following link: http://bioinformatica.isa.cnr.it/COELIAC_DISEASE/SCRIPTS/. We found a list of differentially expressed genes, gene ontology terms, and pathways shared between CD and its comorbidities, as well as the closeness among the selected pathologies in terms of disease ontology terms. Physicians and other researchers, such as molecular biologists, systems biologists, and pharmacologists, can use it to analyse a pathology in detail, from differentially expressed genes to ontologies, comparing it with its comorbidities or with other diseases.

18.
J Biomed Inform ; 129: 104057, 2022 05.
Article in English | MEDLINE | ID: mdl-35339665

ABSTRACT

It is estimated that oncogenic gene fusions cause about 20% of human cancer morbidity. Identifying potentially oncogenic gene fusions may improve the diagnosis and treatment of affected patients. Previous approaches to this problem have exploited specific gene-related information, such as gene function and regulation. Here we propose a model that builds on previous findings and includes microRNAs in the oncogenic assessment. We present ChimerDriver, a tool to classify gene fusions as oncogenic or not oncogenic. ChimerDriver is based on a specifically designed neural network trained on genetic and post-transcriptional information to obtain a reliable classification. The neural network integrates information on transcription factors, gene ontologies, microRNAs, and other details related to the functions of the genes involved in the fusion and to the gene fusion structure. As a result, performance on the test set reached an F1-score of 0.83 and a recall of 96%. The comparison with state-of-the-art tools returned comparable or better results. Moreover, ChimerDriver performed well in a real-world case in which 21 out of 24 validated gene fusion samples were detected by the gene fusion detection tool Starfusion. ChimerDriver integrates transcriptional and post-transcriptional information in a purpose-designed neural network to effectively discriminate oncogenic gene fusions from passenger ones. The ChimerDriver source code is freely available at https://github.com/martalovino/ChimerDriver.


Subject(s)
MicroRNAs , Gene Fusion , Humans , MicroRNAs/genetics , Neural Networks (Computer) , Oncogene Fusion , Software
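Editor's note: a hedged sketch of the classification setup described for ChimerDriver in entry 18, using synthetic feature vectors and a generic multilayer perceptron rather than the published network, features, or data.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score, recall_score

rng = np.random.default_rng(42)

# Each fusion is a concatenation of feature groups (transcription factors,
# gene ontology terms, microRNAs, structural features); synthetic here.
n_fusions, n_features = 600, 200
X = rng.normal(size=(n_fusions, n_features))
y = (X[:, :10].sum(axis=1)
     + rng.normal(scale=1.5, size=n_fusions) > 0).astype(int)  # 1 = oncogenic

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
print("F1:", round(f1_score(y_te, pred), 2), "recall:", round(recall_score(y_te, pred), 2))
```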
19.
J Biomed Inform ; 136: 104241, 2022 12.
Article in English | MEDLINE | ID: mdl-36375772

ABSTRACT

OBJECTIVE: To describe methods for applying data standards to integrate social determinants of health (SDoH) into EHRs, through the evaluation of a clinical decision support case for pediatric asthma. MATERIALS AND METHODS: We identified a list of environmental factors important for managing pediatric asthma. We identified and integrated data from local outdoor air quality monitors with elements available from the clinic's EHR and self-reported indoor air quality questionnaire data. We assessed existing SDoH frameworks, assessment tools, and terminologies to identify representative data standards for these environmental SDoH measures. RESULTS: We found many-to-many relationships between the multiple framework domains, the environmental exposure measures collected, and existing standards. The majority of concepts did not accurately align with the environmental exposure measurements. We propose an ontology-driven information framework methodology for applying standards to SDoH measurements in order to support measuring, managing, and computing SDoH data. DISCUSSION: To support methods of integrating SDoH data in the EHR via an ontology-driven information framework, a common SDoH ecosystem should be developed, descriptively and prescriptively integrating framework domains, assessment tools, and standard ontologies to support future data sharing, aggregation, and interoperability. A hierarchical, object-oriented information model should be adopted to manage SDoH, extending beyond the patient-centered orientation of EHRs to orient to households and communities. CONCLUSION: SDoH data pose unique challenges and opportunities in collecting, measuring, and managing health information. Future work is needed to define data standards for implementing SDoH in a hierarchical, object-oriented information model representing multiple units of orientation, including individuals, households, and communities.


Subject(s)
Asthma , Clinical Decision Support Systems , Humans , Child , Social Determinants of Health , Ecosystem , Surveys and Questionnaires , Asthma/diagnosis , Asthma/therapy
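Editor's note: a minimal object-oriented sketch of the hierarchical orientation argued for in entry 19, with individuals nested in households and households in communities so that an exposure recorded at one level propagates to its members. Class and attribute names are illustrative assumptions, not a proposed standard.

```python
from dataclasses import dataclass

@dataclass
class Community:
    name: str
    outdoor_pm25: float  # e.g. from a local air-quality monitor, micrograms/m3

@dataclass
class Household:
    address: str
    community: Community
    indoor_smoking: bool  # e.g. from a self-reported questionnaire

@dataclass
class Individual:
    patient_id: str
    household: Household

    def asthma_exposures(self) -> dict:
        # SDoH exposures resolved through the household and community levels.
        return {
            "outdoor_pm25": self.household.community.outdoor_pm25,
            "indoor_smoking": self.household.indoor_smoking,
        }

community = Community("Downtown", outdoor_pm25=14.2)
home = Household("12 Elm St", community, indoor_smoking=True)
child = Individual("patient-42", home)
print(child.asthma_exposures())
```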
20.
J Biomed Inform ; 133: 104150, 2022 09.
Article in English | MEDLINE | ID: mdl-35878822

ABSTRACT

INTRODUCTION: Patient safety classifications/ontologies enable patient safety information systems to receive and analyze patient safety data to improve patient safety. Such classifications/ontologies have been developed and evaluated using a variety of methods. The purpose of this review was to discuss and analyze the methodologies for developing and evaluating patient safety classifications/ontologies. METHODS: Studies that developed or evaluated patient safety classifications, terminologies, taxonomies, or ontologies were searched through Google Scholar, the Google search engine, the National Center for Biomedical Ontology (NCBO) BioPortal, the Open Biological and Biomedical Ontology (OBO) Foundry and World Health Organization (WHO) websites, and Scopus, Web of Science, PubMed, and Science Direct. We updated our search on 30 February 2021 and included all studies published until the end of 2020. Studies were included if they developed or evaluated classifications specifically for patient safety and provided information on how they were developed or evaluated. Systems that cover patient safety terms (such as ICD-10) but were not specifically developed for patient safety were excluded. The quality and risk of bias of the studies were not assessed, because all methodologies and criteria were intended to be covered. We analyzed the data through descriptive narrative synthesis and compared and classified the development and evaluation methods and evaluation criteria according to available development and evaluation approaches for biomedical ontologies. RESULTS: We identified 84 articles that met all of the inclusion criteria, covering 70 classifications/ontologies, nine of which were for the general medical domain. Most papers were published in 2010 and 2011 (8 and 7 papers, respectively). The United States (50) and Australia (23) contributed the most studies. The most commonly used methods for developing classifications/ontologies were the use of existing systems (for expanding or mapping) (44) and qualitative analysis of event reports (39). The most common evaluation methods were coding or classifying samples of safety reports (25), quantitative analysis of incidents based on the developed classification (24), and consensus among physicians (16). The most commonly applied evaluation criteria were reliability (27), content and face validity (9), comprehensiveness (6), usability (5), linguistic clarity (5), and impact (4). CONCLUSIONS: Because of the weaknesses and strengths of the development/evaluation methods, it is advisable to use more than one method for development or evaluation, as well as multiple evaluation criteria. To organize the process of developing classifications/ontologies, well-established approaches such as Methontology are recommended. The most prevalent evaluation methods applied in this domain fit the biomedical ontology evaluation methods well, but it is also advisable to apply logic-, rule-, and natural language processing (NLP)-based evaluation approaches in combination with other approaches. This research can assist domain researchers in developing or evaluating domain ontologies with more complete methodologies. There is also a lack of reporting consistency in the literature, and the same methods or criteria were reported with different terminologies.


Subject(s)
Biological Ontologies , Patient Safety , Humans , Logic , Natural Language Processing , Reproducibility of Results