Results 1 - 20 of 43
1.
Am J Hum Genet ; 103(1): 58-73, 2018 07 05.
Article in English | MEDLINE | ID: mdl-29961570

ABSTRACT

Integration of detailed phenotype information with genetic data is well established as a means of facilitating accurate diagnosis of hereditary disorders. As a rich source of phenotype information, electronic health records (EHRs) promise to empower diagnostic variant interpretation. However, accurately and efficiently extracting phenotypes from heterogeneous EHR narratives remains a challenge. Here, we present EHR-Phenolyzer, a high-throughput EHR framework for extracting and analyzing phenotypes. EHR-Phenolyzer extracts and normalizes Human Phenotype Ontology (HPO) concepts from EHR narratives and then prioritizes genes with causal variants on the basis of the HPO-coded phenotype manifestations. We assessed EHR-Phenolyzer on 28 pediatric individuals with confirmed diagnoses of monogenic diseases and found that the genes with causal variants were ranked among the top 100 genes selected by EHR-Phenolyzer for 16/28 individuals (p < 2.2 × 10⁻¹⁶), supporting the value of phenotype-driven gene prioritization in diagnostic sequence interpretation. To assess generalizability, we replicated this finding on an independent EHR dataset of ten individuals with a positive diagnosis from a different institution. We then assessed the broader utility by examining two additional EHR datasets, including 31 individuals who were suspected of having a Mendelian disease and underwent different types of genetic testing, and 20 individuals with positive diagnoses of specific Mendelian etiologies of chronic kidney disease from exome sequencing. Finally, through several retrospective case studies, we demonstrated how combined analyses of genotype data and deep phenotype data from EHRs can expedite genetic diagnoses. In summary, EHR-Phenolyzer leverages EHR narratives to automate phenotype-driven analysis of clinical exomes or genomes, facilitating the broader implementation of genomic medicine.
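
For illustration, a minimal sketch of the phenotype-driven prioritization idea: rank genes by the overlap between a patient's HPO terms and gene-phenotype annotations. The annotation table, HPO IDs, and scoring rule below are invented toy stand-ins, not EHR-Phenolyzer's actual model.

```python
# Toy phenotype-driven gene ranking: score each gene by the fraction of the
# patient's HPO terms it is annotated with. All annotations are invented.
from collections import Counter

gene_to_hpo = {
    "PKD1":   {"HP:0000107", "HP:0000822"},   # renal cysts, hypertension
    "COL4A5": {"HP:0000790", "HP:0000365"},   # hematuria, hearing loss
    "NPHS1":  {"HP:0000093", "HP:0012211"},   # proteinuria, abnormal renal function
}

def rank_genes(patient_hpo_terms, annotations):
    """Rank genes by overlap with the patient's HPO-coded phenotypes."""
    scores = Counter()
    for gene, terms in annotations.items():
        overlap = terms & patient_hpo_terms
        if overlap:
            scores[gene] = len(overlap) / len(patient_hpo_terms)
    return scores.most_common()

patient = {"HP:0000107", "HP:0000822", "HP:0012211"}
print(rank_genes(patient, gene_to_hpo))
# [('PKD1', 0.666...), ('NPHS1', 0.333...)]
```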


Subjects
Exome/genetics, Adolescent, Child, Preschool Child, Electronic Health Records, Female, Genetic Testing/methods, Genomics/methods, Genotype, Humans, Infant, Newborn Infant, Male, Phenotype, Chronic Renal Insufficiency/genetics, Retrospective Studies
2.
Sensors (Basel) ; 21(6)2021 Mar 17.
Article in English | MEDLINE | ID: mdl-33803046

ABSTRACT

The copper mining industry is increasingly using artificial intelligence methods to improve copper production processes. Recent studies reveal the use of algorithms such as Artificial Neural Networks, Support Vector Machines, and Random Forests, among others, to develop models for predicting product quality. Other studies compare the predictive models developed with these machine learning algorithms in the mining industry as a whole. However, few published copper mining studies compare the results of machine learning techniques for copper recovery prediction. This study makes a detailed comparison between three models for predicting copper recovery by leaching, using four datasets resulting from mining operations in Northern Chile. The algorithms used for developing the models were Random Forest, Support Vector Machine, and Artificial Neural Network. To validate these models, four figures of merit were used: accuracy (acc), precision (p), recall (r), and the Matthews correlation coefficient (mcc). This paper describes the dataset preparation and the refinement of the threshold values used for the predictive variable most influential on the class (the copper recovery). Results show a precision above 98.50% and identify the model whose predictions best match the real values. Finally, the obtained models have the following mean values: acc = 0.943, p = 88.47, r = 0.995, and mcc = 0.232. These values are highly competitive when compared with those obtained in similar studies using other approaches in this context.
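
For reference, the four figures of merit can be computed from a binary confusion matrix as in the sketch below; the labels and the notion of a positive class (recovery above a chosen threshold) are assumptions for illustration.

```python
# Accuracy, precision, recall, and Matthews correlation coefficient from a
# binary confusion matrix; the example labels are invented.
import math

def binary_metrics(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    acc = (tp + tn) / len(y_true)
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return {"acc": acc, "p": p, "r": r, "mcc": mcc}

# 1 = "copper recovery above the chosen threshold" (illustrative)
print(binary_metrics([1, 1, 0, 1, 0, 1], [1, 1, 0, 1, 1, 1]))
```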

3.
BMC Bioinformatics ; 20(Suppl 4): 139, 2019 Apr 18.
Article in English | MEDLINE | ID: mdl-30999867

ABSTRACT

BACKGROUND: Pharmacogenomics (PGx) studies how genomic variations impact variations in drug response phenotypes. Knowledge in pharmacogenomics is typically composed of units that have the form of ternary relationships gene variant - drug - adverse event. Such a relationship states that an adverse event may occur for patients having the specified gene variant and being exposed to the specified drug. State-of-the-art knowledge in PGx is mainly available in reference databases such as PharmGKB and reported in the scientific biomedical literature. However, PGx knowledge can also be discovered from clinical data, such as Electronic Health Records (EHRs), and in this case may either correspond to new knowledge or confirm state-of-the-art knowledge that lacks a "clinical counterpart" or validation. For this reason, there is a need for automatic comparison of knowledge units from distinct sources. RESULTS: In this article, we propose an approach, based on Semantic Web technologies, to represent and compare PGx knowledge units. To this end, we developed PGxO, a simple ontology that represents PGx knowledge units and their components. Combined with PROV-O, an ontology developed by the W3C to represent provenance information, PGxO enables encoding and associating provenance information with PGx relationships. Additionally, we introduce a set of rules to reconcile PGx knowledge, i.e., to identify when two relationships, potentially expressed using different vocabularies and levels of granularity, refer to the same, or to different, knowledge units. We evaluated our ontology and rules by populating PGxO with knowledge units extracted from PharmGKB (2701), the literature (65,720), and discoveries reported in EHR analysis studies (only 10, manually extracted), and by testing their similarity. We called the resulting knowledge base, which represents and reconciles knowledge units of those various origins, PGxLOD (PGx Linked Open Data). CONCLUSIONS: The proposed ontology and reconciliation rules constitute a first step toward a more complete framework for knowledge comparison in PGx. In this direction, the experimental instantiation of PGxO, named PGxLOD, illustrates the ability and the difficulties of reconciling various existing knowledge sources.
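
A hedged sketch of the reconciliation idea: two knowledge units, each a (gene variant, drug, adverse event) triple, are taken to refer to the same knowledge when every component pair matches directly or through a broader/narrower vocabulary mapping. The mini-vocabulary below is invented; PGxO's actual rules are richer.

```python
# Toy reconciliation of PGx knowledge units expressed at different
# granularities. The broader/narrower mappings are invented examples.
broader = {  # (narrower term, broader term)
    ("warfarin", "anticoagulants"),
    ("CYP2C9*3", "CYP2C9 variants"),
}

def component_match(a, b):
    return a == b or (a, b) in broader or (b, a) in broader

def same_knowledge_unit(u1, u2):
    """u = (gene variant, drug, adverse event) triple."""
    return all(component_match(a, b) for a, b in zip(u1, u2))

unit_pharmgkb = ("CYP2C9*3", "warfarin", "bleeding")
unit_ehr      = ("CYP2C9 variants", "anticoagulants", "bleeding")
print(same_knowledge_unit(unit_pharmgkb, unit_ehr))  # True, via broader terms
```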


Subjects
Knowledge Bases, Pharmacogenetics, Data Mining, Factual Databases, Electronic Health Records, Humans, Tissue Banks
4.
J Biomed Inform ; 96: 103239, 2019 08.
Article in English | MEDLINE | ID: mdl-31238109

ABSTRACT

Systematic application of observational data to the understanding of impacts of cancer treatments requires detailed information models allowing meaningful comparisons between treatment regimens. Unfortunately, details of systemic therapies are scarce in registries and data warehouses, primarily due to the complex nature of the protocols and a lack of standardization. Since 2011, we have been creating a curated and semi-structured website of chemotherapy regimens, HemOnc.org. In coordination with the Observational Health Data Sciences and Informatics (OHDSI) Oncology Subgroup, we have transformed a substantial subset of this content into the OMOP common data model, with bindings to multiple external vocabularies, e.g., RxNorm and the National Cancer Institute Thesaurus. Currently, there are >73,000 concepts and >177,000 relationships in the full vocabulary. Content related to the definition and composition of chemotherapy regimens has been released within the ATHENA tool (athena.ohdsi.org) for widespread utilization by the OHDSI membership. Here, we describe the rationale, data model, and initial contents of the HemOnc vocabulary along with several use cases for which it may be valuable.
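
As a rough illustration of how such a vocabulary can be queried, the sketch below follows regimen-to-component relationships in an OMOP-style concept/relationship table; the concept IDs and the relationship label are hypothetical placeholders, not actual HemOnc or ATHENA content.

```python
# Toy OMOP-style vocabulary lookup: given a regimen concept, follow
# relationship rows to its component drug concepts. All IDs are invented.
concepts = {
    35804314: "R-CHOP regimen (hypothetical ID)",
    1344905:  "rituximab",
    1338512:  "cyclophosphamide",
}
relationships = [  # (concept_id_1, relationship, concept_id_2)
    (35804314, "Has component (hypothetical)", 1344905),
    (35804314, "Has component (hypothetical)", 1338512),
]

def components_of(regimen_id):
    return [concepts[c2] for c1, rel, c2 in relationships
            if c1 == regimen_id and rel == "Has component (hypothetical)"]

print(components_of(35804314))  # ['rituximab', 'cyclophosphamide']
```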


Subjects
Antineoplastic Agents/pharmacology, Hematology/standards, Medical Informatics/standards, Medical Oncology/standards, Neoplasms/drug therapy, Algorithms, Factual Databases, Humans, Internet, National Cancer Institute (U.S.), Medical Societies, Software, Terminology as Topic, United States, Vocabulary
5.
BMC Med Inform Decis Mak ; 19(Suppl 4): 152, 2019 08 08.
Article in English | MEDLINE | ID: mdl-31391056

ABSTRACT

BACKGROUND: The existing community-wide bodies of biomedical ontologies are known to contain quality and content problems, and past research has revealed various errors related to their semantics and logical structure. Automated tools may help to ease the ontology construction, maintenance, assessment, and quality assurance processes. However, relatively few tools exist that can provide this support to knowledge engineers. METHOD: We introduce OntoKeeper, a web-based tool that automates quality scoring for ontology developers. We enlisted 5 experienced ontologists to test the tool and then administered the System Usability Scale to measure their assessment. RESULTS: In this paper, we present usability results from the 5 ontologists revealing high system usability of OntoKeeper, and use-cases that demonstrate its capabilities in previously published biomedical ontology research. CONCLUSION: To the best of our knowledge, OntoKeeper is one of the first ontology evaluation tools that offers this functionality to knowledge engineers with good usability.
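
The System Usability Scale mentioned above has a standard scoring rule (odd items contribute response − 1, even items contribute 5 − response, and the sum is scaled by 2.5 to a 0-100 range); a minimal sketch with invented responses:

```python
# Standard SUS scoring; the ten Likert responses below are invented.
def sus_score(responses):
    """responses: ten 1-5 Likert answers, item 1 first."""
    total = sum((r - 1) if i % 2 == 0 else (5 - r)   # odd items at even indexes
                for i, r in enumerate(responses))
    return total * 2.5

print(sus_score([5, 1, 5, 2, 4, 1, 5, 1, 4, 2]))  # 90.0
```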


Subjects
Biological Ontologies, Software, Humans, Knowledge, Semantics
6.
BMC Med Inform Decis Mak ; 18(Suppl 2): 64, 2018 07 23.
Article in English | MEDLINE | ID: mdl-30066654

ABSTRACT

BACKGROUND: Healthcare services, particularly in patient-provider interaction, often involve highly emotional situations, and it is important for physicians to understand and respond to their patients' emotions to best ensure their well-being. METHODS: In order to model the emotion domain, we have created the Visualized Emotion Ontology (VEO) to provide a semantic definition of 25 emotions based on established models, as well as visual representations of emotions utilizing shapes, lines, and colors. RESULTS: As determined by ontology evaluation metrics, VEO exhibited better machine-readability (z=1.12), linguistic quality (z=0.61), and domain coverage (z=0.39) compared to a sample of cognitive ontologies. Additionally, a survey of 1082 participants through Amazon Mechanical Turk revealed that a significantly higher proportion of people agree than disagree with 17 out of our 25 emotion images, validating the majority of our visualizations. CONCLUSION: From the development, evaluation, and serialization of the VEO, we have defined a set of 25 emotions using OWL that linked surveyed visualizations to each emotion. In the future, we plan to use the VEO in patient-facing software tools, such as embodied conversational agents, to enhance interactions between patients and providers in a clinical environment.


Subjects
Cues (Psychology), Emotions, User-Computer Interface, Artificial Intelligence, Crowdsourcing, Delivery of Health Care, Female, Humans, Male, Semantics, Software
7.
J Biomed Inform ; 68: 83-95, 2017 04.
Article in English | MEDLINE | ID: mdl-28232035

ABSTRACT

CREDO is a framework for understanding human expertise and for designing and deploying systems that support cognitive tasks like situation and risk assessment, decision-making, therapy planning and workflow management. The framework has evolved through an extensive program of research on human decision-making and clinical practice. It draws on concepts from cognitive science, and has contributed new results to cognitive theory and understanding of human expertise and knowledge-based AI. These results are exploited in a suite of technologies for designing, implementing and deploying clinical services, early versions of which were reported by Das et al. (1997) [9] and Fox and Das (2000) [26]. A practical outcome of the CREDO program is a technology stack, a key element of which is an agent specification language (PROforma: Sutton and Fox (2003) [55]) which has proved to be a versatile tool for designing point of care applications in many clinical specialties and settings. Since software became available for implementing and deploying PROforma applications many kinds of services have been successfully built and trialed, some of which are in large-scale routine use. This retrospective describes the foundations of the CREDO model, summarizes the main theoretical, technical and clinical contributions, and discusses benefits of the cognitive approach.


Subjects
Cognition, Computer-Assisted Decision Making, Point-of-Care Systems, Clinical Decision Support Systems, Humans, Retrospective Studies
8.
Appl Microbiol Biotechnol ; 101(20): 7427-7434, 2017 Oct.
Article in English | MEDLINE | ID: mdl-28884354

ABSTRACT

Genetically modified microbes have had much industrial success producing protein-based products (such as antibodies and enzymes). However, engineering microbial workhorses for biomanufacturing of commodity compounds remains challenging. First, microbes cannot afford the combined burdens of overexpressing multiple enzymes and draining metabolites for product synthesis. Second, synthetic circuits and introduced heterologous pathways are not yet as "robust and reliable" as native pathways, owing to hosts' innate regulation, especially under suboptimal fermentation conditions. Third, engineered enzymes may lack channeling capabilities for cascade-like transport of metabolites to overcome diffusion barriers or to avoid intermediate toxicity in the cytoplasmic environment. Fourth, moving engineered hosts from the laboratory to industry is unreliable because genetic mutations and non-genetic cell-to-cell variations impair large-scale fermentation outcomes. Therefore, synthetic biology strains often have unsatisfactory industrial performance (titer/yield/productivity). To overcome these problems, many different species are being explored for metabolic strengths that can be leveraged to synthesize specific compounds. Here, we provide examples of non-conventional and genetically amenable species for industrial manufacturing, including the following: Corynebacterium glutamicum for its TCA cycle-derived biosynthesis, Yarrowia lipolytica for its biosynthesis of fatty acids and carotenoids, cyanobacteria for photosynthetic production from their sugar phosphate pathways, and Rhodococcus for its ability to biotransform recalcitrant feedstock. Finally, we discuss emerging technologies (e.g., genome-to-phenome mapping, single-cell methods, and knowledge engineering) that may facilitate the development of novel cell factories.


Subjects
Biotechnology/methods, Corynebacterium glutamicum/metabolism, Cyanobacteria/metabolism, Industrial Microbiology/methods, Rhodococcus/metabolism, Synthetic Biology/methods, Yarrowia/metabolism, Corynebacterium glutamicum/genetics, Cyanobacteria/genetics, Rhodococcus/genetics, Yarrowia/genetics
9.
J Biomed Inform ; 61: 27-33, 2016 06.
Article in English | MEDLINE | ID: mdl-27005590

ABSTRACT

The Radiology Gamuts Ontology (RGO), an ontology of diseases, interventions, and imaging findings, was developed to aid in decision support, education, and translational research in diagnostic radiology. The ontology defines a subsumption (is_a) relation between more general and more specific terms, and a causal relation (may_cause) to express the relationship between disorders and their possible imaging manifestations. RGO incorporated 19,745 terms with their synonyms and abbreviations, 1,768 subsumption relations, and 55,558 causal relations. Transitive closure was computed iteratively; it yielded 2,154 relations over subsumption and 1,594,896 relations over causality. Five causal cycles were discovered, all with path lengths of no more than 5. The graph-theoretic metrics of in-degree and out-degree were explored; the most useful metric to prioritize modification of the ontology was found to be the product of the in-degree of the transitive closure over subsumption and the out-degree of the transitive closure over causality. Two general types of error were identified: (1) causal assertions that used overly general terms because they implicitly assumed an organ-specific context, and (2) subsumption relations in which a site-specific disorder was asserted to be a subclass of the general disorder. Transitive closure helped identify incorrect assertions, prioritized and guided ontology revision, and aided resources that applied the ontology's knowledge.
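
A minimal sketch of the two graph operations described: iterating a relation to its transitive closure, and ranking terms by the product of in-degree (over the subsumption closure) and out-degree (over the causal closure). The edges below are toy examples, not RGO content.

```python
# Transitive closure by iteration to a fixed point, plus the in*out-degree
# product used to prioritize terms for review. All edges are invented.
def transitive_closure(edges):
    """Add (a, d) whenever (a, b) and (b, d) exist, until nothing changes."""
    closure = set(edges)
    while True:
        new = {(a, d) for a, b in closure for c, d in closure
               if b == c and (a, d) not in closure}
        if not new:
            return closure
        closure |= new

is_a = {("renal cyst", "cystic disease"), ("cystic disease", "disorder")}
may_cause = {("cystic disease", "flank pain"), ("disorder", "abnormal imaging")}
sub_tc, cause_tc = transitive_closure(is_a), transitive_closure(may_cause)
print(("renal cyst", "disorder") in sub_tc)  # True (inferred by closure)

def review_priority(term, sub_tc, cause_tc):
    """High in-degree (many descendants) x out-degree (many effects)."""
    in_deg = sum(1 for a, b in sub_tc if b == term)
    out_deg = sum(1 for a, b in cause_tc if a == term)
    return in_deg * out_deg

print(review_priority("cystic disease", sub_tc, cause_tc))  # 1 * 1 = 1
```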


Subjects
Biological Ontologies, Radiography, Radiology, Causality, Decision Support Techniques, Humans, Semantics, Translational Biomedical Research
10.
Sensors (Basel) ; 16(6)2016 May 26.
Article in English | MEDLINE | ID: mdl-27240360

ABSTRACT

The paper examines intelligent sensor and sensor system development according to the Common Criteria methodology, the basic security assurance methodology for IT products and systems. The paper presents how the development process can be supported by software tools, design patterns, and knowledge engineering. The automation of this process brings cost-, quality-, and time-related advantages, because the most difficult and most laborious activities are software-supported and design reusability grows. The paper includes a short introduction to the Common Criteria methodology and its sensor-related applications. In the experimental section, the computer-supported and patterns-based IT security development process is presented using the example of an intelligent methane detection sensor. This process is supported by an ontology-based tool for security modeling and analyses. The verified and justified models are transferred straight to the security target specification representing the security requirements for the IT product. The novelty of the paper is to provide a patterns-based and computer-aided methodology for sensor development with a view to achieving IT security assurance. The paper summarizes the validation experiment focused on this methodology adapted for sensor system development, and presents directions for future research.

11.
J Biomed Inform ; 48: 28-37, 2014 Apr.
Article in English | MEDLINE | ID: mdl-24189161

ABSTRACT

Many medical conditions are only indirectly observed through symptoms and tests. Developing predictive models for such conditions is challenging since they can be thought of as 'latent' variables. They are not present in the data and often get confused with measurements. As a result, building a model that fits data well is not the same as making a prediction that is useful for decision makers. In this paper, we present a methodology for developing Bayesian network (BN) models that predict and reason with latent variables, using a combination of expert knowledge and available data. The method is illustrated by a case study into the prediction of acute traumatic coagulopathy (ATC), a disorder of blood clotting that significantly increases the risk of death following traumatic injuries. There are several measurements for ATC and previous models have predicted one of these measurements instead of the state of ATC itself. Our case study illustrates the advantages of models that distinguish between an underlying latent condition and its measurements, and of a continuing dialogue between the modeller and the domain experts as the model is developed using knowledge as well as data.
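
A toy version of the modeling idea, assuming invented probabilities: a latent condition (here labeled ATC) is never observed directly, two noisy measurements depend on it, and enumeration yields the posterior P(ATC | measurements).

```python
# Latent-variable Bayesian network inference by enumeration. The prior and
# conditional probabilities are invented placeholders, not clinical values.
p_atc = 0.15                      # prior P(ATC present)
p_m_given = {                     # P(measurement positive | ATC state)
    "m1": {True: 0.90, False: 0.10},
    "m2": {True: 0.80, False: 0.20},
}

def posterior_atc(m1, m2):
    def joint(atc):
        p = p_atc if atc else 1 - p_atc
        p *= p_m_given["m1"][atc] if m1 else 1 - p_m_given["m1"][atc]
        p *= p_m_given["m2"][atc] if m2 else 1 - p_m_given["m2"][atc]
        return p
    return joint(True) / (joint(True) + joint(False))

print(round(posterior_atc(True, True), 3))  # 0.864 with these toy numbers
```

The point of distinguishing the latent node from its measurements is visible here: the model predicts the state of ATC itself, not the outcome of any single test.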


Subjects
Blood Coagulation Disorders/therapy, Medical Informatics/methods, Algorithms, Bayes Theorem, Blood Coagulation, Cluster Analysis, Decision Making, Clinical Decision Support Systems, Computer-Assisted Diagnosis, Emergency Medical Services/organization & administration, Humans, Medical Errors/prevention & control, Medical Informatics/trends, Risk Assessment, Sensitivity and Specificity
12.
J Biomed Inform ; 52: 364-72, 2014 Dec.
Article in English | MEDLINE | ID: mdl-25089026

ABSTRACT

BACKGROUND: Information in Electronic Health Records (EHRs) is being promoted for use in clinical decision support, patient registers, measurement and improvement of integration and quality of care, and translational research. To do this, creators of EHR-derived data products need to logically integrate patient data with information and knowledge from diverse sources and contexts. OBJECTIVE: To examine the accuracy of an ontological multi-attribute approach to creating a Type 2 Diabetes Mellitus (T2DM) register to support integrated care. METHODS: Guided by Australian best-practice guidelines, the T2DM diagnosis and management ontology was conceptualized, contextualized, and validated by clinicians; it was then specified, formalized, and implemented. The algorithm was standardized against the domain ontology in SNOMED CT-AU. Accuracy of the implementation was measured in 4 datasets of varying sizes (927-12,057 patients) and an integrated dataset (23,793 patients). Results were cross-checked, with sensitivity and specificity calculated with 95% confidence intervals. RESULTS: Incrementally integrating Reason for Visit (RFV), medication (Rx), and pathology in the algorithm identified nearly 100% of T2DM cases. Incrementally integrating the four datasets improved accuracy, controlling for sample size, data incompleteness, and duplicates. Manual validation confirmed the accuracy of the algorithm. CONCLUSION: Integrating multiple data elements within an EHR using ontology-based case-finding algorithms can improve the accuracy of the diagnosis and compensate for suboptimal data quality, hence creating a dataset that is more fit for purpose. This clinical and pragmatic application of ontologies to EHR data improves the integration of data and the potential for better use of data to improve the quality of care.
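
A schematic of the incremental multi-attribute case-finding rule: a patient is flagged if any of the three data elements (RFV, Rx, pathology) indicates T2DM, so each added element can only increase case capture. The medication list and HbA1c threshold below are illustrative assumptions, not the published algorithm.

```python
# Toy multi-attribute T2DM case-finding over an EHR-like record.
def t2dm_case(patient):
    rfv  = "T2DM" in patient.get("reason_for_visit", [])
    rx   = any(d in {"metformin", "gliclazide"}          # assumed drug list
               for d in patient.get("medications", []))
    path = patient.get("hba1c_percent", 0) >= 6.5        # assumed cutoff
    return rfv or rx or path                             # any element flags

patient = {"reason_for_visit": [], "medications": ["metformin"],
           "hba1c_percent": 6.1}
print(t2dm_case(patient))  # True, captured via the medication element alone
```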


Subjects
Biological Ontologies, Integrated Delivery of Health Care/methods, Type 2 Diabetes Mellitus/diagnosis, Electronic Health Records/classification, Algorithms, Australia, Humans
13.
Risk Anal ; 34(10): 1923-43, 2014 Oct.
Article in English | MEDLINE | ID: mdl-24842516

ABSTRACT

The authors of this article have developed six probabilistic causal models for critical risks in tunnel works. The details of the models' development and evaluation were reported in two earlier publications of this journal. Accordingly, as a remaining step, this article focuses on the use of these models in a real case study project. The use of the models is challenging, given the need to provide information on risks that are usually both project and context dependent. The latter is of particular concern in underground construction projects. Tunnel risks are the consequences of interactions between site- and project-specific factors. Large variations and uncertainties in ground conditions as well as project singularities give rise to particular risk factors with very specific impacts. These circumstances mean that existing risk information, gathered from previous projects, is extremely difficult to use in other projects. This article considers these issues and addresses the extent to which prior risk-related knowledge, in the form of causal models such as those developed for this investigation, can be used to provide useful risk information for the case study project. The identification and characterization of the causes and conditions that lead to failures and their interactions, as well as their associated probabilistic information, is treated as risk-related knowledge in this article. It is shown that, irrespective of existing constraints on using information and knowledge from past experiences, construction risk-related knowledge can be transferred and used from project to project in the form of comprehensive models based on probabilistic-causal relationships. The article also shows that the developed models provide guidance as to the use of specific remedial measures by identifying critical risk factors, and therefore they support risk management decisions. Similarly, a number of limitations of the models are discussed.

14.
PeerJ Comput Sci ; 10: e2097, 2024.
Article in English | MEDLINE | ID: mdl-38983207

ABSTRACT

With the rapid advancement of robotics technology, an increasing number of researchers are exploring the use of natural language as a communication channel between humans and robots. In language-conditioned manipulation grounding scenarios, prevailing methods rely heavily on supervised multimodal deep learning, in which robots assimilate knowledge from both language instructions and visual input. However, these approaches lack external knowledge for comprehending natural language instructions and are hindered by the substantial demand for paired data, where vision and language are usually linked through manual annotation for the creation of realistic datasets. To address these problems, we propose the knowledge-enhanced bottom-up affordance grounding network (KBAG-Net), which enhances natural language understanding through external knowledge, improving accuracy in object grasping affordance segmentation. In addition, we introduce a semi-automatic data generation method aimed at facilitating the quick establishment of language-following manipulation grounding datasets. The experimental results on two standard datasets demonstrate that our method outperforms existing methods thanks to the external knowledge. Specifically, our method outperforms the two-stage method by 12.98% and 1.22% mIoU on the two datasets, respectively. For broader community engagement, we will make the semi-automatic data construction method publicly available at https://github.com/wmqu/Automated-Dataset-Construction4LGM.
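
The reported gains are in mean Intersection-over-Union (mIoU), the usual segmentation metric: per-class IoU averaged over the classes present. A minimal sketch on invented label maps:

```python
# mIoU over a predicted and a ground-truth segmentation map (toy 2x2 maps).
import numpy as np

def miou(pred, target, num_classes):
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union:
            ious.append(inter / union)
    return float(np.mean(ious))

pred   = np.array([[0, 1], [1, 1]])
target = np.array([[0, 1], [0, 1]])
print(miou(pred, target, num_classes=2))  # 0.5833... = mean(0.5, 0.667)
```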

15.
Artif Intell Rev ; : 1-32, 2023 Apr 03.
Article in English | MEDLINE | ID: mdl-37362886

ABSTRACT

With the explosive growth of artificial intelligence (AI) and big data, it has become vitally important to organize and represent the enormous volume of knowledge appropriately. As graph data, knowledge graphs accumulate and convey knowledge of the real world. It is well recognized that knowledge graphs effectively represent complex information; hence, they have rapidly gained the attention of academia and industry in recent years. To develop a deeper understanding of knowledge graphs, this paper presents a systematic overview of the field, focusing on the opportunities and challenges of knowledge graphs. We first review the opportunities of knowledge graphs in terms of two aspects: (1) AI systems built upon knowledge graphs and (2) potential application fields of knowledge graphs. Then, we thoroughly discuss severe technical challenges in this field, such as knowledge graph embeddings, knowledge acquisition, knowledge graph completion, knowledge fusion, and knowledge reasoning. We expect that this survey will shed new light on future research and the development of knowledge graphs.

16.
Biotechnol Adv ; 62: 108069, 2023.
Article in English | MEDLINE | ID: mdl-36442697

ABSTRACT

Metabolic engineering encompasses several widely used strategies and currently holds a prominent place in biotechnology, its potential manifesting through a plethora of research and commercial products with a strong societal impact. The genomic revolution that occurred almost three decades ago initiated the generation of large omics datasets, which has helped in gaining a better understanding of cellular behavior. The metabolic engineering carried out on the basis of these large datasets has allowed researchers to gain detailed insights and a reasonable understanding of the intricacies of biosystems. However, the existing trial-and-error approaches to metabolic engineering are laborious and time-intensive when it comes to producing target compounds with high yields through genetic manipulation of host organisms. Machine learning (ML), coupled with the available metabolic engineering test instances and omics data, brings a comprehensive and multidisciplinary approach that enables scientists to evaluate various parameters for effective strain design. This vast amount of biological data should be standardized through knowledge engineering to train different ML models for providing accurate predictions in gene circuit design, modification of proteins, optimization of bioprocess parameters for scaling up, and screening of hyper-producing robust cell factories. This review outlines the premise of ML, followed by various ML methods and algorithms alongside the numerous omics datasets available to train ML models for predicting metabolic outcomes with high accuracy. The combined interplay between ML algorithms and biological datasets through knowledge engineering has guided recent advancements in applications such as CRISPR/Cas systems, gene circuits, protein engineering, metabolic pathway reconstruction, and bioprocess engineering. Finally, this review addresses the probable challenges of applying ML in metabolic engineering, which will guide researchers toward novel techniques to overcome the limitations.


Subjects
Biotechnology, Metabolic Engineering, Metabolic Engineering/methods, CRISPR-Cas Systems, Protein Engineering, Machine Learning
17.
Int J Soc Robot ; 15(3): 445-472, 2023.
Article in English | MEDLINE | ID: mdl-34804257

ABSTRACT

Social companion robots are receiving growing attention as a means to help elderly people stay independent at home and to decrease their social isolation. When developing solutions, one remaining challenge is to design the right applications so that they are usable by elderly people. For this purpose, co-creation methodologies involving multiple stakeholders and a multidisciplinary research team (e.g., elderly people, medical professionals, and computer scientists such as roboticists or IoT engineers) were designed within the ACCRA (Agile Co-Creation of Robots for Ageing) project. This paper addresses the following research question: how can Internet of Robotic Things (IoRT) technology and co-creation methodologies help to design emotion-based robotic applications? This work is supported by the ACCRA project, which develops advanced social robots to support active and healthy ageing, co-created by various stakeholders such as ageing people and physicians. We demonstrate this with three robots, Buddy, ASTRO, and RoboHon, used for daily life, mobility, and conversation. The three robots understand and convey emotions in real time using Internet of Things and Artificial Intelligence technologies (e.g., knowledge-based reasoning).

18.
Comput Biol Med ; 145: 105313, 2022 06.
Article in English | MEDLINE | ID: mdl-35405400

ABSTRACT

Rare disease data is often fragmented within multiple heterogeneous siloed regional disease registries, each containing a small number of cases. These data are particularly sensitive, as low subject counts make the identification of patients more likely, meaning registries are not inclined to share subject-level data outside their registries. At the same time, access to multiple rare disease datasets is important, as it will lead to new research opportunities and analysis over larger cohorts. To enable this, two major challenges must be overcome. The first is to integrate data at a semantic level, so that it is possible to query over registries and return results which are comparable. The second is to enable queries which do not take subject-level data from the registries. To meet the first challenge, this paper presents the FAIRVASC ontology for managing data related to the rare disease anti-neutrophil cytoplasmic antibody (ANCA)-associated vasculitis (AAV), which is based on the harmonisation of terms in seven European data registries. It has been built upon a set of key clinical questions developed by a team of experts in vasculitis selected from the registry sites, and it makes use of several standard classifications, such as the Systematized Nomenclature of Medicine - Clinical Terms (SNOMED CT) and Orphacodes. The paper also presents the method for adding semantic meaning to AAV data across the registries using the declarative Relational to Resource Description Framework Mapping Language (R2RML). To meet the second challenge, a federated querying approach is presented for accessing aggregated and pseudonymised data, which supports analysis of AAV data in a manner that protects patient privacy. For additional security, the federated querying approach is augmented with a method for auditing queries (and the uplift process) using the provenance ontology (PROV-O) to track when queries and changes occur and by whom. The main contribution of this work is the successful application of semantic web technologies and federated queries to provide a novel infrastructure that can readily incorporate additional registries, thus providing access to harmonised data relating to unprecedented numbers of patients with rare disease, while also meeting data privacy and security concerns.
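
A hedged sketch of the privacy-preserving federation idea: each registry answers only with aggregate counts, optionally suppressing small cells, and the federator sums the answers; no subject-level rows leave a registry. Registry contents and the suppression threshold below are invented.

```python
# Toy federated counting over siloed registries with small-cell suppression.
REGISTRIES = {
    "registry_A": [{"diagnosis": "GPA", "anca": "PR3"} for _ in range(14)],
    "registry_B": [{"diagnosis": "GPA", "anca": "PR3"} for _ in range(3)],
}
SMALL_CELL = 5  # assumed threshold; counts below it are withheld

def local_count(records, **criteria):
    n = sum(all(r.get(k) == v for k, v in criteria.items()) for r in records)
    return n if n >= SMALL_CELL else None  # suppress re-identifiable cells

def federated_count(**criteria):
    counts = [local_count(recs, **criteria) for recs in REGISTRIES.values()]
    return sum(c for c in counts if c is not None)

print(federated_count(diagnosis="GPA", anca="PR3"))  # 14; registry_B suppressed
```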


Subjects
Semantic Web, Vasculitis, Humans, Rare Diseases, Registries, Systematized Nomenclature of Medicine
19.
Artif Intell Med ; 129: 102324, 2022 07.
Article in English | MEDLINE | ID: mdl-35659389

ABSTRACT

BACKGROUND: Traditionally, guideline (GL)-based Decision Support Systems (DSSs) use a centralized infrastructure to generate recommendations to care providers, rather than to patients at home. However, managing patients at home is often preferable, reducing costs and empowering patients. Thus, we wanted to explore an option in which patients, in particular chronic patients, might be assisted by a local DSS, which interacts as needed with the central DSS engine, to manage their disease outside the standard clinical settings. OBJECTIVES: To design, implement, and demonstrate the technical and clinical feasibility of a new architecture for a distributed DSS that provides patients with evidence-based guidance, offered through applications running on the patients' mobile devices, monitoring and reacting to changes in the patient's personal environment, and providing the patients with appropriate GL-based alerts and personalized recommendations; and to increase the overall robustness of the distributed application of the GL. METHODS: We have designed and implemented a novel projection-callback (PCB) model, in which small portions of the evidence-based guideline's procedural knowledge are projected from a projection engine within the central DSS server to a local DSS that resides on each patient's mobile device. The local DSS applies the knowledge using the mobile device's local resources. The GL projections generated by the projection engine are adapted to the patient's previously defined preferences and, implicitly, to the patient's current context, in a manner that is embodied in the projected therapy plans. When appropriate, as defined by a temporal pattern within the projected plan, the local DSS calls back the central DSS, requesting further assistance, possibly another projection. To support the new model, the initial specification of the GL includes two levels: one for the central DSS and one for the local DSS. We implemented a distributed GL-based DSS using the projection-callback model within the MobiGuide EU project, which automatically manages chronic patients at home using sensors on the patients and their mobile phones. We assessed the new GL specification process by specifying two very different, complex GLs: for Gestational Diabetes Mellitus and for Atrial Fibrillation. Then, we evaluated the new computational architecture by applying the two GLs to the automated clinical management, in real time, of patients in two different countries: Spain and Italy, respectively. RESULTS: Specification using the new projection-callback model proved quite feasible. We found significant differences between the distributed versions of the two GLs, suggesting further research directions and possibly additional ways to analyze and characterize GLs. Applying the two GLs to the two patient populations proved highly feasible as well. The mean time between the central and local interactions was quite different for the two GLs: 3.95 ± 1.95 days in the gestational diabetes domain and 23.80 ± 12.47 days in the atrial fibrillation domain, probably corresponding to the difference in the distributed specifications of the two GLs. Most of the interactions were projections to the local DSS (83%); the others were data notifications, mostly to change context (17%). Some of the data notifications were triggered by technical errors. The robustness of the distributed architecture was demonstrated through successful recovery from multiple crashes of the local DSS. CONCLUSIONS: The new projection-callback model has been demonstrated to be feasible, from specification to distributed application. Different GLs might significantly differ, however, in their distributed specification and application characteristics. Distributed medical DSSs can facilitate the remote management of chronic patients by enabling the central DSS to delegate, in a dynamic fashion determined by the patient's context, much of the monitoring and treatment management decisions to the mobile device. Patients can be kept in their home environment while still maintaining, through the projection-callback mechanism, several of the advantages of a central DSS, such as access to the patient's longitudinal record and to an up-to-date evidence-based GL repository.
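
A minimal sketch of the projection-callback control flow under invented plan content: the central DSS projects a small plan to the local DSS, which applies it on-device and calls back when a declared temporal pattern fires. The real MobiGuide architecture is, of course, far richer.

```python
# Toy projection-callback loop; the plan, monitored signal, and callback
# pattern (three glucose readings above 7.8) are invented for illustration.
class CentralDSS:
    def project(self, patient_context):
        # Return a small, context-adapted guideline fragment (adaptation elided).
        return {"monitor": "blood_glucose",
                "callback_if": lambda readings: sum(r > 7.8 for r in readings) >= 3}

    def callback(self, patient_context, readings):
        # Full-record re-planning would happen here; we simply re-project.
        print("central DSS called back with readings:", readings)
        return self.project(patient_context)

class LocalDSS:
    def __init__(self, central, context):
        self.central, self.context = central, context
        self.plan = central.project(context)          # initial projection

    def on_new_readings(self, readings):
        # Apply the projected plan locally; call back on the temporal pattern.
        if self.plan["callback_if"](readings):
            self.plan = self.central.callback(self.context, readings)

local = LocalDSS(CentralDSS(), {"condition": "gestational diabetes"})
local.on_new_readings([8.1, 8.4, 6.9, 9.0])  # three readings > 7.8 -> callback
```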


Subjects
Mobile Applications, Computer-Assisted Decision Making, Humans
20.
Front Psychol ; 13: 996609, 2022.
Article in English | MEDLINE | ID: mdl-36507004

ABSTRACT

Personality disorders, especially those of the dramatic and emotional type, are psychological ailments with a major negative impact on patients, their families, and society in general. Despite all the research, there is still no consensus on the best way to assess and treat them. Traditional assessment of personality disorders has focused on a limited number of psychological constructs or behaviors using structured interviews and questionnaires, without an integrated and holistic approach. We present a novel methodology for the study and assessment of personality disorders consisting of the development of a Bayesian network whose parameters have been obtained by the Delphi method of consensus from a group of experts in the diagnosis and treatment of personality disorders. The result is a probabilistic graphical model that represents the psychological variables related to the personality disorders along with their relations and conditional probabilities, which make it possible to identify the symptoms with the highest diagnostic potential. This model can be used, among other applications, as a decision support system for the assessment and treatment of personality disorders of the dramatic or emotional cluster. In this paper, we discuss the need to validate this model in the clinical population, along with its strengths and limitations.
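
One simple way such a network's "diagnostic potential" can be read off is to rank each symptom by its positive likelihood ratio P(symptom | disorder) / P(symptom | no disorder); the elicited probabilities below are invented placeholders for the Delphi-derived parameters.

```python
# Ranking symptoms by positive likelihood ratio; all numbers are invented.
elicited = {  # symptom: (P(s | disorder), P(s | no disorder))
    "affective instability": (0.85, 0.10),
    "impulsivity":           (0.75, 0.20),
    "fear of abandonment":   (0.60, 0.05),
}

def ranked_by_lr(params):
    lrs = {s: p_d / p_nd for s, (p_d, p_nd) in params.items()}
    return sorted(lrs.items(), key=lambda kv: kv[1], reverse=True)

for symptom, lr in ranked_by_lr(elicited):
    print(f"{symptom}: LR+ = {lr:.1f}")
# fear of abandonment: LR+ = 12.0, affective instability: 8.5, impulsivity: 3.8
```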
