Results 1 - 20 of 27
1.
Sensors (Basel) ; 24(6)2024 Mar 07.
Article in English | MEDLINE | ID: mdl-38544003

ABSTRACT

The modern healthcare landscape is overwhelmed by data derived from heterogeneous IoT data sources and Electronic Health Record (EHR) systems. Building on advances in data science and Machine Learning (ML), an improved ability to integrate and process the so-called primary and secondary data fosters the provision of real-time, personalized decisions. In that direction, this article introduces an innovative mechanism for processing and integrating health-related data. It describes the mechanism and its internal subcomponents and workflows, together with the results of its utilization, validation, and evaluation in a real-world scenario. It also highlights the potential of integrating primary and secondary data into Holistic Health Records (HHRs) and of applying advanced ML-based and Semantic Web techniques to improve the quality, reliability, and interoperability of the examined data. The viability of this approach is evaluated on heterogeneous healthcare datasets pertaining to personalized risk identification and monitoring for pancreatic cancer. The key outcomes and innovations of the mechanism are the introduction of the HHRs, which capture all health determinants in a harmonized way, and a holistic data ingestion mechanism for advanced data processing and analysis.


Subject(s)
Electronic Health Records , Pancreatic Neoplasms , Humans , Holistic Health , Reproducibility of Results , Semantics , Machine Learning
2.
Neural Comput Appl ; : 1-17, 2023 May 08.
Article in English | MEDLINE | ID: mdl-37362579

ABSTRACT

Text categorization and sentiment analysis are two of the most typical natural language processing tasks, with various emerging applications implemented and utilized in different domains, such as health care and policy making. At the same time, the tremendous growth in the popularity and usage of social media, such as Twitter, has resulted in an immense increase in user-generated data, mainly represented by the texts of users' posts. However, analyzing these data and extracting actionable knowledge and added value from them is a challenging task, due to their domain diversity and high multilingualism. This highlights the emerging need for domain-agnostic and multilingual solutions. To investigate a portion of these challenges, this research work performs a comparative analysis of multilingual approaches for classifying both the sentiment and the text of an examined multilingual corpus. In this context, four multilingual BERT-based classifiers and a zero-shot classification approach are compared in terms of their accuracy and applicability to classifying multilingual data. Their comparison unveiled insightful outcomes with a twofold interpretation. Multilingual BERT-based classifiers achieve high performance and transfer inference when trained and fine-tuned on multilingual data, while the zero-shot approach presents a faster, more efficient, and more scalable technique for creating multilingual solutions: it can easily be fitted to new languages and new tasks while achieving relatively good results across many languages. However, when efficiency and scalability matter less than accuracy, this model, and zero-shot models in general, cannot match fine-tuned and trained multilingual BERT-based classifiers.
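The zero-shot idea the abstract contrasts with fine-tuned classifiers can be illustrated with a deliberately tiny, library-free sketch: instead of training on labeled examples, the text is compared against short descriptions of the candidate labels. The label descriptions and the bag-of-words similarity below are hypothetical stand-ins; real zero-shot systems of the kind compared in the paper use pretrained multilingual language models, not token overlap.

```python
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def zero_shot_classify(text: str, label_descriptions: dict) -> str:
    """Pick the label whose description best matches the text.
    No task-specific training data is used, only the label descriptions."""
    doc = Counter(text.lower().split())
    scores = {label: cosine(doc, Counter(desc.lower().split()))
              for label, desc in label_descriptions.items()}
    return max(scores, key=scores.get)

labels = {
    "positive": "good great excellent happy love wonderful",
    "negative": "bad terrible awful sad hate poor",
}
print(zero_shot_classify("what a great and wonderful day", labels))  # positive
```

Adding a new label here is just adding a new description, which mirrors why zero-shot approaches scale to new tasks so cheaply, and also why they trail fine-tuned models in accuracy.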

3.
Stud Health Technol Inform ; 302: 153-154, 2023 May 18.
Article in English | MEDLINE | ID: mdl-37203637

ABSTRACT

Given that healthcare-related data are obtained from various sources and in divergent formats, there is an emerging need for improved, automated techniques and technologies that qualify and standardize these data. The approach presented in this paper introduces a novel mechanism for cleaning, qualifying, and standardizing the collected primary and secondary data types. This is realized through the design and implementation of three (3) integrated subcomponents, the Data Cleaner, the Data Qualifier, and the Data Harmonizer, which are further evaluated by performing data cleaning, qualification, and harmonization on data related to pancreatic cancer, in order to develop enhanced personalized risk assessments and recommendations for individuals.
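The three-stage pipeline named above (Data Cleaner, Data Qualifier, Data Harmonizer) can be sketched as three chained functions. The field names, plausibility range, and target schema below are hypothetical illustrations, not the paper's actual implementation.

```python
def clean(records):
    """Data Cleaner: drop records with missing or malformed core fields."""
    return [r for r in records
            if r.get("patient_id") and isinstance(r.get("value"), (int, float))]

def qualify(records, lo=0.0, hi=300.0):
    """Data Qualifier: flag each record against a plausible value range."""
    for r in records:
        r["quality_ok"] = lo <= r["value"] <= hi
    return records

def harmonize(records):
    """Data Harmonizer: map source-specific fields to one shared schema,
    keeping only records that passed qualification."""
    return [{"subject": r["patient_id"], "measurement": r["value"],
             "quality_ok": r["quality_ok"]}
            for r in records if r["quality_ok"]]

raw = [
    {"patient_id": "p1", "value": 97.0},
    {"patient_id": "p2", "value": "n/a"},  # malformed -> removed by clean()
    {"patient_id": "p3", "value": 999.0},  # implausible -> rejected by qualify()
]
print(harmonize(qualify(clean(raw))))  # only the clean, plausible record survives
```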


Subject(s)
Delivery of Health Care , Technology , Humans , Risk Assessment , Reference Standards
4.
Digit Health ; 9: 20552076231158022, 2023.
Article in English | MEDLINE | ID: mdl-36865772

ABSTRACT

Due to the challenges and restrictions posed by the COVID-19 pandemic, technology and digital solutions played an important role in the delivery of necessary healthcare services, notably in medical education and clinical care. The aim of this scoping review was to analyze and summarize the most recent developments in Virtual Reality (VR) use for therapeutic care and medical education, with a focus on training medical students and patients. We identified 3743 studies, of which 28 were ultimately selected for the review. The search strategy followed the most recent Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) guidelines. Eleven studies (39.3%) in the field of medical education assessed different domains, such as knowledge, skills, attitudes, confidence, self-efficacy, and empathy. Seventeen studies (60.7%) focused on clinical care, particularly in the areas of mental health and rehabilitation; among these, 13 studies also investigated user experience and feasibility in addition to clinical outcomes. Overall, the findings of our review show considerable improvements in both medical education and clinical care, and study participants found VR systems to be safe, engaging, and beneficial. The studies varied widely in their designs, VR content, devices, evaluation methods, and treatment periods. Future studies may focus on creating definitive guidelines that can help improve patient care further. Hence, there is an urgent need for researchers to collaborate with the VR industry and healthcare professionals to foster a better understanding of content and simulation development.

5.
Digit Finance ; 5(1): 29-56, 2023.
Article in English | MEDLINE | ID: mdl-35434526

ABSTRACT

Determining and minimizing risk exposure is one of the biggest challenges in the financial industry, an environment in which multiple factors affect both (non-)identified risks and the corresponding decisions. Various estimation metrics are utilized in robust and efficient risk management frameworks, the most prevalent among them being Value at Risk (VaR). VaR is a valuable risk-assessment approach that offers traders, investors, and financial institutions risk estimates and potential investment insights. Although VaR has been adopted by the financial industry for decades, its predictions lose accuracy in times of economic turmoil, such as the 2008 global financial crisis and the COVID-19 pandemic, which in turn affects the resulting decisions. To address this challenge, the financial community has exploited a variety of well-established variations of VaR models, including data-driven and data analytics models. In this context, this paper introduces a probabilistic deep learning approach that leverages time-series forecasting techniques with high potential for monitoring the risk of a given portfolio efficiently. The proposed approach has been evaluated and compared against the most prominent VaR calculation methods, yielding promising results for 99% VaR on forex-based portfolios. Supplementary Information: The online version contains supplementary material available at 10.1007/s42521-022-00050-0.
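As a point of reference for what the paper's deep learning approach is compared against, the simplest classical estimator is historical-simulation VaR: sort observed losses and read off the loss quantile at the chosen confidence level. The toy return series below is hypothetical; this is a baseline sketch, not the paper's probabilistic model.

```python
import math

def historical_var(returns, confidence=0.99):
    """Historical-simulation Value at Risk: the loss threshold exceeded by
    only (1 - confidence) of the observed return history."""
    losses = sorted(-r for r in returns)           # losses, ascending
    idx = math.ceil(confidence * len(losses)) - 1  # empirical quantile index
    return losses[idx]

# Hypothetical daily portfolio returns, -5.0% .. +4.9%.
rets = [0.001 * k for k in range(-50, 50)]
var99 = historical_var(rets, confidence=0.99)
print(round(var99, 3))  # 0.049, i.e. a 4.9% one-day loss at 99% confidence
```

The weakness the abstract points at is visible here: the estimate is only as good as the historical window, which is exactly what breaks down in turmoil periods such as 2008 or the COVID-19 crash.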

6.
Sensors (Basel) ; 22(22)2022 Nov 08.
Article in English | MEDLINE | ID: mdl-36433212

ABSTRACT

Extracting useful knowledge through proper data analysis is a very challenging task for efficient and timely decision-making. A plethora of machine learning (ML) algorithms exist for this purpose, and in healthcare especially this complexity increases due to the domain's requirements for analytics-based risk prediction. This manuscript proposes a data analysis mechanism, tested across diverse healthcare scenarios, for constructing a catalogue of the most efficient ML algorithms to use, depending on each healthcare scenario's requirements and datasets, for efficiently predicting the onset of a disease. In this context, seven (7) different ML algorithms (Naïve Bayes, K-Nearest Neighbors, Decision Tree, Logistic Regression, Random Forest, Neural Networks, Stochastic Gradient Descent) were executed on diverse healthcare scenarios (stroke, COVID-19, diabetes, breast cancer, kidney disease, heart failure). Based on a variety of performance metrics (accuracy, recall, precision, F1-score, specificity, confusion matrix), a subset of the ML algorithms proved more efficient for timely predictions under specific healthcare scenarios, which is why the envisioned ML catalogue prioritizes the algorithms to use depending on the scenario's nature and the metrics needed. Further evaluation must be performed on additional scenarios, involving state-of-the-art techniques (e.g., cloud deployment, federated ML), to improve the mechanism's efficiency.
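The ranking of algorithms in such a catalogue rests on the metrics listed above, all of which derive from the confusion matrix. A minimal sketch of those derivations (the example counts are hypothetical):

```python
def binary_metrics(tp, fp, fn, tn):
    """Derive the evaluation metrics named in the abstract from a
    binary confusion matrix (true/false positives and negatives)."""
    accuracy    = (tp + tn) / (tp + fp + fn + tn)
    precision   = tp / (tp + fp) if tp + fp else 0.0
    recall      = tp / (tp + fn) if tp + fn else 0.0   # sensitivity
    specificity = tn / (tn + fp) if tn + fp else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision, "recall": recall,
            "specificity": specificity, "f1": f1}

# Hypothetical confusion matrix for one disease-onset classifier.
m = binary_metrics(tp=40, fp=10, fn=5, tn=45)
print(round(m["accuracy"], 3), round(m["f1"], 3))  # 0.85 0.842
```

Which metric should drive the ranking depends on the scenario, e.g. recall matters most when missing a disease onset is costlier than a false alarm, which is precisely why the envisioned catalogue is scenario-dependent.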


Subject(s)
COVID-19 , Humans , Bayes Theorem , Machine Learning , Algorithms , Delivery of Health Care
7.
Stud Health Technol Inform ; 299: 145-150, 2022 Nov 03.
Article in English | MEDLINE | ID: mdl-36325855

ABSTRACT

Sharing personal health data could facilitate and enhance both the quality of care and the conduct of further research studies. However, these data remain underutilized due to legal, technical, and interoperability challenges, while data subjects are not able to manage their data or decide what to share, with whom, and for what purposes. This barrier obstructs continuity of care across the European Union (EU), and neither healthcare providers, nor researchers, nor citizens benefit from more efficient healthcare treatment and research. Despite several national-level EU studies and research activities, cross-border health data exchange and sharing remains a challenging task, addressed only in specific cases and scenarios. This manuscript presents the InteropEHRate research project and its key innovations, which aim to put Electronic Health Records (EHRs) in people's hands across the EU via three (3) protocol families: the Device-to-Device (D2D), Remote-to-Device (R2D), and Research Data Sharing (RDS) protocols. These protocols facilitate efficient, secure, privacy-preserving, and General Data Protection Regulation (GDPR)-compliant health data sharing across the EU, covering different real-world use cases.


Subject(s)
Electronic Health Records , Privacy , Humans , Europe , European Union , Information Dissemination , Computer Security
8.
J Big Data ; 9(1): 100, 2022.
Article in English | MEDLINE | ID: mdl-36213092

ABSTRACT

Small and Medium Enterprises (SMEs) are vital to the global economy and to all societies. However, they face a complex and challenging environment, as in most sectors they are lagging behind in their digital transformation. Banks, which retain a variety of data on their SME customers in order to perform their main activities, could offer a solution by leveraging all available data to provide a Business Financial Management (BFM) toolkit to their customers, offering value-added services on top of their core business. In this direction, this paper revolves around the development of a smart, highly personalized hybrid transaction categorization model, interconnected with a cash flow prediction model based on Recurrent Neural Networks (RNNs). As the classification of transactions is of great significance, this research is extended towards explainable AI, where the LIME and SHAP frameworks are utilized to interpret and illustrate the ML classification results. Our approach shows promising results on a real-world banking use case and acts as the foundation for further BFM banking microservices, such as transaction fraud detection and budget monitoring.
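A common way to structure a hybrid categorization model of the kind described is to apply deterministic rules first and fall back to a trained classifier when no rule fires. The keyword rules and the trivial model stub below are hypothetical; the paper's actual model is richer and is paired with LIME/SHAP explanations.

```python
# Hypothetical keyword rules; a production system would be far richer.
RULES = {
    "SUPERMARKET": "groceries",
    "PHARMACY": "health",
    "PAYROLL": "income",
}

def rule_categorize(description: str):
    """Deterministic pass: return a category if any keyword rule matches."""
    for keyword, category in RULES.items():
        if keyword in description.upper():
            return category
    return None

def ml_categorize(description: str) -> str:
    """Stand-in for the trained classifier; here, a trivial default."""
    return "other"

def hybrid_categorize(description: str) -> str:
    """Hybrid scheme: rules first for precision, model fallback for coverage."""
    return rule_categorize(description) or ml_categorize(description)

print(hybrid_categorize("CARD PAYMENT - CITY SUPERMARKET 0421"))  # groceries
print(hybrid_categorize("ATM WITHDRAWAL"))                        # other
```

One appeal of this split is explainability: rule hits are self-explanatory, so post-hoc interpretation frameworks only need to cover the model-fallback cases.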

9.
J Biomed Inform ; 134: 104199, 2022 10.
Article in English | MEDLINE | ID: mdl-36100164

ABSTRACT

Despite the availability of secure electronic data transfers, most medical information is still stored on paper and is usually shared by mail, by fax, or by the patients themselves. Today's technologies aim to address the challenge of sharing healthcare information, since exchanging inaccurate data leads to inefficiency and errors. Numerous techniques for exchanging data currently exist; however, they require a continuous internet connection and thus lack general applicability in healthcare settings where no connection is available. In this paper, a new Device-to-Device (D2D) protocol is proposed, specifying a series of Bluetooth messages for the healthcare information exchanged over short-range distances between a healthcare practitioner and a citizen. This information covers structured and unstructured data, which can be exchanged directly through a globally used communication protocol, extended to permit HL7 FHIR-structured messages over Bluetooth. Moreover, for high-volume data, the D2D protocol supports lossless compression and decompression, improving its overall efficiency. The protocol is first evaluated by exchanging sample data in a real-world scenario, and an overall comparison of exchanging data of multiple sizes, with and without lossless compression, is provided. According to the evaluation results, the D2D protocol specification was strictly followed, successfully providing the ability to exchange healthcare-related data, with Bluetooth considered the most suitable technology for current needs. For small data, the D2D protocol performs better without the lossless compression mechanism, whereas for large data lossless compression is the only viable option.
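The size-dependent compression finding reported above suggests a simple framing rule: compress a payload only when it is large enough to pay for the overhead. The one-byte flag, the threshold value, and the use of zlib below are hypothetical illustrations of that idea, not the D2D protocol's actual wire format.

```python
import zlib

COMPRESS_THRESHOLD = 512  # bytes; hypothetical cut-off

def pack(payload: bytes) -> bytes:
    """Frame a message, compressing only large payloads, mirroring the
    finding that small messages travel faster uncompressed."""
    if len(payload) > COMPRESS_THRESHOLD:
        return b"Z" + zlib.compress(payload)   # 'Z' = compressed frame
    return b"P" + payload                      # 'P' = plain frame

def unpack(frame: bytes) -> bytes:
    """Reverse pack(): inspect the flag byte and decompress if needed."""
    flag, body = frame[:1], frame[1:]
    return zlib.decompress(body) if flag == b"Z" else body

small = b"HR=72"
large = b'{"resourceType":"Observation"}' * 100
assert unpack(pack(small)) == small and unpack(pack(large)) == large
print(len(pack(large)) < len(large))  # True: compression is lossless and pays off
```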


Subject(s)
Data Compression , Health Information Exchange , Delivery of Health Care , Humans
10.
Stud Health Technol Inform ; 295: 376-379, 2022 Jun 29.
Article in English | MEDLINE | ID: mdl-35773889

ABSTRACT

Big Data has proved to be vast and complex, and cannot be efficiently managed through traditional architectures, while data analysis is considered crucial for both technical and non-technical stakeholders. Current analytics platforms are siloed into specific domains, even as the demands to broaden their use and lower their technical barriers continuously increase. This paper describes a domain-agnostic, single-access, autoscaling Big Data analytics platform, namely Diastema, built as a collection of efficient and scalable components that offer user-friendly analytics through graph data modelling, supporting technical and non-technical stakeholders alike. Diastema's applicability is evaluated in healthcare through a predictive classifier for a COVID-19 dataset, considering real-world constraints.


Subject(s)
COVID-19 , Diastema , Big Data , Data Science , Delivery of Health Care , Humans
11.
Stud Health Technol Inform ; 294: 421-422, 2022 May 25.
Article in English | MEDLINE | ID: mdl-35612114

ABSTRACT

Given the data available in healthcare, healthcare organizations and practitioners require interoperable, efficient, and time-saving data exchange. Current approaches often address the security of the exchanged data without considering its complexity. This paper provides an ontology-driven Data Cleaning mechanism that facilitates lossless healthcare data compression, efficiently compressing healthcare data of different natures (textual, audio, image). The mechanism is evaluated on three datasets of different formats, demonstrating its added value.


Subject(s)
Data Compression , Computer Security , Data Compression/methods , Delivery of Health Care
12.
Stud Health Technol Inform ; 281: 1013-1014, 2021 May 27.
Article in English | MEDLINE | ID: mdl-34042827

ABSTRACT

Every device, organization, and human is affected by Big Data. Analysing these vast amounts of data is of vital importance but surrounded by many challenges. To address a portion of these challenges, a Data Cleaning approach is proposed, designed to filter out non-important data. Its functionality is evaluated on Global Terrorism Data, in order to inform policies on how terrorism affects national healthcare.


Subject(s)
Terrorism , Big Data , Delivery of Health Care , Humans
13.
Stud Health Technol Inform ; 275: 92-96, 2020 Nov 23.
Article in English | MEDLINE | ID: mdl-33227747

ABSTRACT

Current technologies give healthcare practitioners and citizens the ability to share and analyse healthcare information, thus improving the quality of patient care. Nevertheless, European Union (EU) citizens have very limited control over their own health data, despite the fact that several countries use national or regional Electronic Health Records (EHRs) to realize virtual or centralized national repositories of citizens' health records. Health Information Exchange (HIE) can greatly improve the completeness of patients' records. However, most current research deals with exchanging health information among healthcare organizations, without giving citizens the ability to access, manage, or exchange healthcare data with healthcare organizations and thus handle their own data, mainly due to a lack of standardization and security protocols. To address this challenge, this paper specifies a secure Device-to-Device (D2D) protocol that can be used by software applications to facilitate the exchange of health data between citizens and healthcare professionals on top of Bluetooth technologies.


Subject(s)
Delivery of Health Care , Health Information Exchange , Electronic Health Records , European Union , Humans , Software
14.
Stud Health Technol Inform ; 272: 221-224, 2020 Jun 26.
Article in English | MEDLINE | ID: mdl-32604641

ABSTRACT

Healthcare 4.0 demands that healthcare data be shaped into a common, standardized, and interoperable format to achieve more efficient data exchange. This healthcare data also needs to be easily stored and securely accessible from anywhere. Currently, this is achieved through the secure storage of healthcare data in different cloud repositories and infrastructures, which, however, makes it harder for healthcare practitioners, or even the citizens themselves, to access the data in emergency situations: accessing healthcare data in private cloud repositories requires specific credentials, which can be almost impossible to provide in urgent situations where the data must be accessed no matter what. For that reason, in this paper we propose a new health record indexing methodology that facilitates access by non-privileged users (e.g., healthcare practitioners) to the healthcare data stored in the cloud repositories of citizens in need, in emergency cases.
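The indexing idea can be pictured as a lookup table that maps a citizen to record locations, where only entries flagged as emergency-visible are resolvable by a non-privileged responder. Everything below, the index layout, the flag name, and the paths, is a hypothetical sketch of that idea, not the paper's methodology.

```python
# Hypothetical emergency index: citizen -> record locations with visibility flags.
INDEX = {
    "citizen-42": [
        {"location": "cloud-a/records/allergies.json", "emergency_visible": True},
        {"location": "cloud-a/records/notes.json", "emergency_visible": False},
    ],
}

def emergency_lookup(citizen_id: str):
    """Return only the record locations a non-privileged responder may
    resolve, without holding the citizen's cloud credentials."""
    return [entry["location"]
            for entry in INDEX.get(citizen_id, [])
            if entry["emergency_visible"]]

print(emergency_lookup("citizen-42"))  # ['cloud-a/records/allergies.json']
```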


Subject(s)
Delivery of Health Care , Cloud Computing , Computer Security , Electronic Health Records , Medical Records Systems, Computerized
15.
Stud Health Technol Inform ; 270: 13-17, 2020 Jun 16.
Article in English | MEDLINE | ID: mdl-32570337

ABSTRACT

Healthcare 4.0 demands that healthcare data be shaped into a common, standardized, and interoperable format to achieve more efficient data exchange. Most techniques addressing this domain deal only with specific cases of data transformation, translating healthcare data into ontologies in ways that often result in clinical misinterpretations. Ontology alignment techniques are currently used to match different ontologies based on specific string and semantic similarity metrics, but very little systematic analysis has been performed on which semantic similarity techniques behave best. For that reason, in this paper we investigate the most efficient semantic similarity technique, building on an existing approach that can transform any healthcare dataset into HL7 FHIR by translating the data into ontologies and matching them based on syntactic and semantic similarities.


Subject(s)
Biological Ontologies , Health Resources , Semantics , Delivery of Health Care , Electronic Health Records , Systems Integration
16.
Acta Inform Med ; 28(1): 58-64, 2020 Mar.
Article in English | MEDLINE | ID: mdl-32210517

ABSTRACT

INTRODUCTION: Non-communicable diseases (NCDs) are an important social issue and a financial burden on health care systems in the EU, which can be decreased if cost-effective policies are implemented along with proactive interventions. The CrowdHEALTH project recognizes that NCDs pose a burden for the healthcare sector and society, and focuses on NCD-related public health policies. AIM: The aim of this paper is to present the concept of Public Health Policy (PHP), elaborate on the state of the art of PHP development, and propose a first approach to the modeling and evaluation of PHPs used in a decision-support toolkit, the Policy Development Toolkit (PDT). METHODS: The policy creation module is part of the PDT and integrates the results of the other health analytics and policy components. It selects, filters, and aggregates all relevant information to support policy-makers in the decision-making process. The policy creation component is connected to the visualization component to provide end users with visualizations of different PHPs, including outcomes from data-driven models, such as risk stratification, clinical pathway mining, forecasting, and causal analysis models; outcomes from cost-benefit analysis; and suggestions and recommendations derived from different measured KPIs, using data from the Holistic Health Records (HHRs). RESULTS: In the context of the CrowdHEALTH project, a PHP can be defined as the decisions taken for actions by those responsible in the public sector, covering a set of actions or inactions that affect public and private actors of the health care system. In the CrowdHEALTH platform, the Policy Development Toolkit serves as the main interface between the end users and the whole system. The three components related to policy creation are: (i) the policy modeling component, (ii) the population identification component, and (iii) the policy evaluation component. In policy evaluation, KPIs are used as measurable indicators that help prevent ambiguity in the interpretation of the model and its structure. CONCLUSIONS: This initial policy creation component design may be modified during the project life cycle according to the complexity of the concept.

17.
Int J Med Inform ; 132: 104002, 2019 12.
Article in English | MEDLINE | ID: mdl-31629311

ABSTRACT

BACKGROUND AND OBJECTIVE: Healthcare systems face multiple challenges in releasing information from data silos; solutions are nearly impossible to implement, maintain, and upgrade, with difficulties spanning the technical, security, and human interaction fields. At the same time, the increasing availability of health data is demanding data-driven approaches, bringing opportunities to automate healthcare-related tasks and providing better disease detection, more accurate prognosis, faster clinical research advances, and better-tailored patient management. In order to share data with as many stakeholders as possible, interoperability is the only sustainable way for systems to talk with one another and to obtain the complete picture of a patient. It thus becomes clear that an efficient solution to data exchange incompatibility is of extreme importance. Interoperability can provide a communication framework between otherwise non-communicating systems, which can be achieved by transforming healthcare data into ontologies. However, the multidimensionality of the healthcare domain, and the way it is conceptualized, results in different ontologies with contradicting or overlapping parts. An effective solution to this problem is therefore the development of methods for finding matches among the various components of healthcare ontologies, in order to facilitate semantic interoperability. METHODS: The proposed mechanism promotes healthcare interoperability through the transformation of healthcare data into the corresponding HL7 FHIR structure. In more detail, it builds ontologies of healthcare data, which are then stored in a triplestore. Afterwards, for each constructed ontology, the syntactic and semantic similarities with the various HL7 FHIR Resource ontologies are calculated, based on the Levenshtein distance and on semantic fingerprints, respectively. After aggregating these results, the matching to HL7 FHIR Resources takes place, translating the healthcare data into a widely adopted medical standard. RESULTS: The derived results show cases where an ontology was matched to one HL7 FHIR Resource due to its syntactic similarity, while the same ontology was matched to a different HL7 FHIR Resource due to its semantic similarity. Nevertheless, the developed mechanism performed well, as its matching results exactly matched the manual ontology matching results, which are considered a reference of high quality and accuracy. Moreover, to further investigate its quality, the mechanism was also compared with the Alignment API and with the non-dominated sorting genetic algorithm (NSGA-III), both of which provide ontology alignment. In both cases, the results of all implementations were almost identical, demonstrating the developed mechanism's high efficiency, while the comparison with the NSGA-III algorithm showed that the mechanism needs additional improvements, through a potential adoption of the NSGA-III technique. CONCLUSIONS: The developed mechanism creates new opportunities in the field of healthcare interoperability. However, according to the evaluation results, it is almost impossible to create syntactic or semantic patterns for understanding the nature of a healthcare dataset. Hence, additional work should be devoted to evaluating the developed mechanism and updating it with respect to the results of comparisons with similar ontology matching mechanisms and with data of multiple natures.
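The syntactic half of the matching described above rests on the Levenshtein (edit) distance, which can be normalized into a 0..1 similarity score. The example ontology labels below are hypothetical; the semantic-fingerprint half would require an embedding model and is not shown.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def syntactic_similarity(a: str, b: str) -> float:
    """Normalize edit distance into a 0..1 similarity (1 = identical)."""
    longest = max(len(a), len(b)) or 1
    return 1.0 - levenshtein(a.lower(), b.lower()) / longest

# Hypothetical ontology labels compared against an HL7 FHIR Resource name.
print(round(syntactic_similarity("PatientRecord", "Patient"), 3))  # 0.538
```

A matcher would compute this score between each source ontology term and every candidate FHIR Resource ontology term, then aggregate it with the semantic score before choosing the best match.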


Subject(s)
Biological Ontologies , Delivery of Health Care/standards , Electronic Health Records/standards , Information Dissemination/methods , Semantics , Systems Integration , Vocabulary, Controlled , Algorithms , Health Level Seven , Humans
18.
Comput Methods Programs Biomed ; 181: 104967, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31303342

ABSTRACT

BACKGROUND AND OBJECTIVE: Healthcare 4.0 is hailed as the current industrial revolution in the healthcare domain, dealing with billions of heterogeneous IoT data sources that are connected over the Internet and aim at providing real-time health-related information to citizens and patients. It is of major importance to have an automated way of identifying the quality levels of these data sources, in order to obtain reliable health data. METHODS: In this manuscript, we demonstrate an innovative mechanism for assessing the quality of various datasets in correlation with the quality of the corresponding data sources. The mechanism follows a five-step approach through which the available data sources are detected, identified, and connected to health platforms, where their data is finally gathered. Once the data is obtained, the mechanism cleans it and correlates it with the quality measurements captured from each data source, in order to decide whether each data source can be characterized as qualitative, so that its data is kept for further analysis. RESULTS: The proposed mechanism is evaluated through an experiment on a sample of 18 existing heterogeneous medical data sources. Based on the captured results, we were able to identify a data source of unknown type, recognizing that it was a body weight scale. We then found that the API method responsible for gathering data from this source was the getMeasurements() method, and by combining the body weight scale's quality with the quality of its derived data, we could decide that this data source was sufficiently qualitative. CONCLUSIONS: By capturing the quality of a data source through measuring and correlating both the data source's own quality and the quality of its derived data, the proposed mechanism provides efficient results, ensuring end-to-end quality of both the data sources and the data.
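The final decision step, combining a source's own quality score with the quality of its derived data, can be sketched as a weighted cut-off rule. The weights, threshold, and example scores below are hypothetical; the paper does not specify this exact formula.

```python
def source_is_qualitative(source_quality: float, data_quality: float,
                          w_source: float = 0.5, threshold: float = 0.8) -> bool:
    """Hypothetical combination rule: weight the device's own quality score
    against the quality of the data it produced, then apply a cut-off."""
    combined = w_source * source_quality + (1 - w_source) * data_quality
    return combined >= threshold

# E.g. a body-weight scale with strong device and data quality scores passes,
# while a weaker source is rejected and its data discarded.
print(source_is_qualitative(0.9, 0.85))  # True
print(source_is_qualitative(0.6, 0.7))   # False
```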


Subject(s)
Data Accuracy , Data Analysis , Information Storage and Retrieval/standards , Medical Informatics/methods , Body Weight , Data Collection , Decision Making , Delivery of Health Care , Female , Humans , Male , Observer Variation , Registries , Reproducibility of Results
19.
Sensors (Basel) ; 19(9)2019 Apr 27.
Article in English | MEDLINE | ID: mdl-31035612

ABSTRACT

It is an undeniable fact that Internet of Things (IoT) technologies have become a milestone advancement in the digital healthcare domain: the number of IoT medical devices has grown exponentially, and it is anticipated that by 2020 over 161 million of them will be connected worldwide. In this era of continuous growth, IoT healthcare faces various challenges, such as the collection, quality estimation, interpretation, and harmonization of the data deriving from huge numbers of heterogeneous IoT medical devices. Even though various approaches have been developed for solving each of these challenges, none proposes a holistic approach for achieving data interoperability between high-quality data deriving from heterogeneous devices. For that reason, this manuscript presents a mechanism for effectively addressing the intersection of these challenges. Through this mechanism, the different devices' datasets are first collected and then cleaned. Subsequently, the cleaning results are used to capture the overall data quality level of each dataset, in combination with measurements of the availability and reliability of the device that produced it. Consequently, only the high-quality data is kept and translated into a common format, ready for further utilization. The proposed mechanism is evaluated through a specific scenario, producing reliable results and achieving data interoperability with 100% accuracy and data quality with more than 90% accuracy.


Subject(s)
Data Accuracy , Delivery of Health Care/methods , Humans , Internet , Monitoring, Physiologic/methods
20.
Stud Health Technol Inform ; 258: 255-256, 2019.
Article in English | MEDLINE | ID: mdl-30942764

ABSTRACT

The aim of this paper is to present examples of big data techniques that can be applied to Holistic Health Records (HHRs) in the context of the CrowdHEALTH project. Real-time big data analytics can be performed on the stored data (i.e., HHRs), enabling correlations and the extraction of situational factors across laboratory exams, physical activities, biosignals, medical data patterns, and clinical assessments. Based on the outcomes of different analytics (e.g., risk analysis, pathway mining, forecasting, and causal analysis) on these HHR datasets, actionable information can be obtained for the development of efficient health plans and public health policies.


Subject(s)
Big Data , Data Mining , Electronic Health Records , Holistic Health , Records