ABSTRACT
Objective: Research over the past decade has extensively covered the benefits of electronic health records in developing countries, yet the specific impact of limited access to such records on doctors' workload and clinical decision-making, particularly in Bangladesh, remains underexplored. This study investigates how patients' medical histories are currently stored, how doctors in Bangladesh obtain and review those histories, and the challenges they face in doing so. It also examines whether limited access to digital health records is an obstacle to clinical decision-making and explores the factors influencing doctors' willingness to adopt electronic health record systems in such contexts. Method: An online cross-sectional survey of 105 doctors with Bachelor of Medicine, Bachelor of Surgery/Bachelor of Dental Surgery (MBBS/BDS) degrees and at least 2 years of experience was conducted, covering (a) personal information, (b) workload, (c) patient history challenges, and (d) decision-making. Results: Of the 105 participants, 51.4% use paper-based methods, with 56% reporting challenges, versus 20% who use digital methods. Most (94.3%) interview patients directly, and 80.9% are interested in a web-based, comprehensive medical history system. An ordinal regression model identified physicians' discipline, workload, and the efficiency of the current workplace in facilitating patient history-taking as variables that significantly affected willingness to adopt the electronic health record system described in the survey. Conclusion: Doctors in Bangladesh encounter significant challenges related to workload and clinical decision-making, largely attributable to restricted access to patients' past medical histories. Despite the prevalent use of paper-based records, these medical professionals show a notable willingness to embrace electronic health record systems, indicating a potential shift towards more efficient healthcare practices in the region.
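For orientation, the kind of ordinal (proportional-odds) regression reported here can be sketched with statsmodels; the sketch below uses synthetic data and hypothetical variable names (workload, efficiency, discipline), not the authors' actual survey coding.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 200
workload = rng.integers(30, 80, n)                  # weekly hours (synthetic)
efficiency = rng.integers(1, 6, n)                  # 1 = poor ... 5 = excellent
discipline = rng.choice(["medicine", "surgery", "dentistry"], n)

# Synthetic latent willingness driven by workload and (inversely) by workplace efficiency.
latent = 0.05 * workload - 0.6 * efficiency + rng.normal(0, 1, n)
willingness = pd.cut(latent, bins=5, labels=False) + 1   # ordered response 1..5

# Encode the discipline factor as dummy variables (one level dropped as reference).
exog = pd.get_dummies(
    pd.DataFrame({"workload": workload, "efficiency": efficiency,
                  "discipline": discipline}),
    columns=["discipline"], drop_first=True, dtype=float)

model = OrderedModel(willingness, exog, distr="logit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())
```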
ABSTRACT
INTRODUCTION: German and international research networks apply different approaches to patient consent. So far, it has been time-consuming to find out to what extent data from these networks can be used for a specific research project. To make the contents of consents queryable, we aimed for a permission-based approach (opt-in) that can map both the granting and the withdrawal of consent contents and make them queryable beyond project boundaries. MATERIALS AND METHODS: The current state of research was analysed in terms of approach and reusability. Selected process models for defining consent policies were then abstracted. On this basis, a standardised semantic terminology for describing consent policies was developed and initially agreed with experts. In a final step, the resulting code was evaluated with regard to different aspects of applicability. RESULTS: A first, extendable version of a Semantic Consent Code (SCC) based on three axes (CLASS, ACTION, PURPOSE) was developed, consolidated and published. The added value achieved by the SCC was illustrated using real consents from large national research associations (Medical Informatics Initiative and NUM NAPKON/NUKLEUS) as examples. The applicability of the SCC was successfully evaluated in terms of the manual semantic mapping of consents by briefly trained personnel and the automated interpretability of consent policies according to the SCC (and vice versa). In addition, a concept for using the SCC to simplify consent queries in heterogeneous research scenarios was presented. CONCLUSIONS: The Semantic Consent Code has already successfully undergone initial evaluations. The published three-axis SCC is essential preliminary work towards standardising initially diverse consent texts and contents, and it can be iteratively extended in multiple ways with content and technical additions. It should be extended in cooperation with the potential user community.
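As a rough illustration of the three-axis idea (not the published SCC vocabulary), a consent policy can be represented as a (CLASS, ACTION, PURPOSE) triple plus a permit/withdraw flag and queried across projects; all axis values below are invented placeholders.

```python
from dataclasses import dataclass

# Illustrative only: the axis values below are placeholders, not the published SCC terminology.
@dataclass(frozen=True)
class ConsentPolicy:
    clazz: str    # CLASS axis, e.g. the data or biosample category
    action: str   # ACTION axis, e.g. collect, store, transfer
    purpose: str  # PURPOSE axis, e.g. the research context
    permit: bool  # True = permission given, False = withdrawn/denied

policies = [
    ConsentPolicy("IDAT", "STORE", "PROJECT_INTERNAL", True),
    ConsentPolicy("MDAT", "TRANSFER", "EXTERNAL_RESEARCH", True),
    ConsentPolicy("BIOSAMPLE", "TRANSFER", "EXTERNAL_RESEARCH", False),  # withdrawn
]

def query(policies, clazz=None, action=None, purpose=None):
    """Return policies matching the requested axis values (None acts as a wildcard)."""
    return [p for p in policies
            if (clazz is None or p.clazz == clazz)
            and (action is None or p.action == action)
            and (purpose is None or p.purpose == purpose)]

# Which policies cover a transfer for external research?
for p in query(policies, action="TRANSFER", purpose="EXTERNAL_RESEARCH"):
    print(p)
```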
Subjects
Biomedical Research, Documentation, Informed Consent, Semantics, Informed Consent/standards, Humans, Biomedical Research/standards, Documentation/standards, Germany
ABSTRACT
Appointment Scheduling (AS) typically serves as the basis for the majority of non-urgent healthcare services and is a fundamental healthcare-related procedure that, if done correctly and effectively, can lead to significant benefits for the healthcare facility. The main objective of this work is to present ClinApp, an intelligent system able to schedule and manage medical appointments and to collect medical data directly from patients.
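As a minimal sketch of the scheduling core such a system might rest on, assuming a simple in-memory slot model (the abstract does not describe ClinApp's actual data model or interfaces):

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Slot:
    start: datetime
    doctor: str
    patient: str | None = None   # None while the slot is free

@dataclass
class Clinic:
    slots: list[Slot] = field(default_factory=list)

    def free_slots(self, doctor: str) -> list[Slot]:
        return [s for s in self.slots if s.doctor == doctor and s.patient is None]

    def book(self, doctor: str, patient: str) -> Slot:
        candidates = self.free_slots(doctor)
        if not candidates:
            raise ValueError(f"no free slot for {doctor}")
        slot = min(candidates, key=lambda s: s.start)  # earliest available slot
        slot.patient = patient
        return slot

clinic = Clinic([Slot(datetime(2024, 5, 6, 9, 0), "Dr. Rossi"),
                 Slot(datetime(2024, 5, 6, 9, 30), "Dr. Rossi")])
print(clinic.book("Dr. Rossi", "patient-042"))
```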
Subjects
Delivery of Health Care, Patient Acceptance of Health Care, Humans, Appointments and Schedules, Health Facilities
ABSTRACT
Putting real-time medical data processing applications into practice comes with challenges such as scalability and performance. Processing medical images from different collaborators is an example of such an application, in which chest X-ray data are processed to extract knowledge. When the data grow very large, it is not easy to process them and obtain the required information in real time using central processing techniques. In this paper, real-time data are filtered and forwarded to the right processing node by the proposed topic-based hierarchical publish/subscribe messaging middleware, running in a distributed, scalable network of collaborating computation nodes instead of the classical approach of centralized computation. This enables streaming medical data to be processed in near real time and makes a warning system possible. End users can filter and search the data. The returned search results can be images (COVID-19 or non-COVID-19) together with their metadata, gender and age. Here, COVID-19 is detected from chest X-ray images using a novel capsule-network-based model. This middleware allows for a smaller search space as well as shorter times for obtaining search results.
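The routing idea can be sketched with a toy topic-based matcher over hierarchical topics; the topic layout (e.g. xray/covid/female/40-49) and the wildcard syntax are hypothetical examples, not the paper's middleware API.

```python
from collections import defaultdict

class PubSubBroker:
    """Toy topic-based broker: subscribers register hierarchical topic filters
    with '*' wildcards and receive only matching messages."""

    def __init__(self):
        self.subscriptions = defaultdict(list)   # topic filter -> list of callbacks

    def subscribe(self, topic_filter, callback):
        self.subscriptions[topic_filter].append(callback)

    @staticmethod
    def _matches(topic_filter, topic):
        f_parts, t_parts = topic_filter.split("/"), topic.split("/")
        return len(f_parts) == len(t_parts) and all(
            f == "*" or f == t for f, t in zip(f_parts, t_parts))

    def publish(self, topic, message):
        for topic_filter, callbacks in self.subscriptions.items():
            if self._matches(topic_filter, topic):
                for cb in callbacks:
                    cb(topic, message)

broker = PubSubBroker()
# A processing node interested only in COVID-19-positive images of any gender/age group.
broker.subscribe("xray/covid/*/*", lambda t, m: print("covid node got", t, m))
broker.publish("xray/covid/female/40-49", {"image_id": "img-001"})
broker.publish("xray/non-covid/male/20-29", {"image_id": "img-002"})  # filtered out
```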
ABSTRACT
BACKGROUND: The Federal Ministry of Education and Research of Germany (BMBF) funds a network of university medicines (NUM) to support COVID-19 and pandemic research at the national level. The "COVID-19 Data Exchange Platform" (CODEX), as part of NUM, establishes a harmonised infrastructure that supports research use of COVID-19 datasets. The broad consent (BC) of the Medical Informatics Initiative (MII) has been agreed by all German federal states and forms the legal basis for data processing. All 34 participating university hospitals (NUM sites) work on a harmonised infrastructural and legal basis for the data protection-compliant collection and transfer of their research datasets to the central CODEX platform. Each NUM site ensures that the exchanged consent information conforms to the already-balloted HL7 FHIR consent profiles and the interoperability concept of the MII Task Force "Consent Implementation" (TFCI). The independent Trusted Third Party (TTP) of the University Medicine Greifswald supports data protection-compliant data processing and provides the consent management solution gICS. METHODS: Based on a stakeholder dialogue, a required set of FHIR functionalities was identified and technically specified with the support of official FHIR experts. Next, a "TTP-FHIR Gateway" for the HL7 FHIR-compliant exchange of consent information using gICS was implemented. The last step included external integration tests and the development of a pre-configured consent template for the BC for the NUM sites. RESULTS: A FHIR-compliant gICS release and a corresponding consent template for the BC were provided to all NUM sites in June 2021. All FHIR functionalities comply with the already-balloted FHIR consent profiles of the HL7 Working Group Consent Management. The consent template simplifies the technical BC rollout and the corresponding implementation of the TFCI interoperability concept at the NUM sites. CONCLUSIONS: This article shows that an HL7 FHIR-compliant and interoperable nationwide exchange of consent information can be built using the consent management software gICS and the provided TTP-FHIR Gateway. The initial functional scope of the solution covers the requirements identified in the NUM-CODEX setting. The semantic correctness of these functionalities was validated by project partners from the Ludwig Maximilian University of Munich. The production rollout of the solution package to all NUM sites has started successfully.
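For orientation, a heavily simplified FHIR R4 Consent resource is sketched below; it is illustrative only and omits the profiles, extensions and coding systems required by the balloted MII consent profiles and produced by the gICS/TTP-FHIR Gateway.

```python
import json

# Simplified FHIR R4 Consent resource (illustrative only; the MII broad-consent
# profile adds further profiles, extensions and coding systems not shown here).
consent = {
    "resourceType": "Consent",
    "status": "active",
    "scope": {"coding": [{"system": "http://terminology.hl7.org/CodeSystem/consentscope",
                          "code": "research"}]},
    "category": [{"coding": [{"system": "http://terminology.hl7.org/CodeSystem/consentcategorycodes",
                              "code": "research"}]}],
    "patient": {"reference": "Patient/example-123"},
    "dateTime": "2021-06-15",
    "policyRule": {"text": "Broad Consent (MII) - illustrative placeholder"},
    "provision": {
        "type": "permit",
        "period": {"start": "2021-06-15", "end": "2026-06-14"},
        "purpose": [{"system": "http://terminology.hl7.org/CodeSystem/v3-ActReason",
                     "code": "HRESCH"}]   # healthcare research purpose of use
    }
}

print(json.dumps(consent, indent=2))
```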
Subjects
COVID-19, Electronic Health Records, Humans, Software, Informed Consent
ABSTRACT
Cleft lip and palate are among the most frequent craniofacial anomalies. Secondary osteoplasty is usually performed between 7 and 11 years of age, closing the osseous defect with autologous bone. Because the defect is widespread and socially significant due to possible esthetic impairments, the outcome of treatment is of substantial interest. Treatment success is determined by the precise rebuilding of the dental arch using autologous bone from the iliac crest. A detailed analysis of retrospective data disclosed a lack of essential, structured information for identifying success factors for fast regeneration and for specifying the treatment. Moreover, no comparable process monitoring is currently possible during osteoplasty owing to the lack of sensor systems. Therefore, a holistic approach was developed to determine the parameters of successful treatment by incorporating patient data, treatment sequences and sensor data acquired by an attachable sensor module into a newly developed Dental Tech Space (DTS). This approach enables heterogeneous data sets to be linked, archived and analysed inside the DTS, and it also supports future patient-specific treatment planning.
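The data-linkage idea behind the DTS can be illustrated by joining patient master data, treatment steps and sensor readings on a shared patient identifier; the tables and column names below are invented for illustration.

```python
import pandas as pd

# Invented example tables standing in for the three heterogeneous DTS data sources.
patients = pd.DataFrame({"patient_id": [1, 2],
                         "age_years": [9, 10],
                         "cleft_type": ["unilateral", "bilateral"]})

treatments = pd.DataFrame({"patient_id": [1, 1, 2],
                           "step": ["graft harvest", "osteoplasty", "osteoplasty"],
                           "date": pd.to_datetime(["2023-03-01", "2023-03-01", "2023-04-12"])})

sensor = pd.DataFrame({"patient_id": [1, 2],
                       "date": pd.to_datetime(["2023-03-01", "2023-04-12"]),
                       "drill_force_n": [12.4, 15.1]})   # hypothetical sensor channel

# Link the heterogeneous sources on the shared patient identifier (and date).
linked = (treatments.merge(patients, on="patient_id")
                    .merge(sensor, on=["patient_id", "date"], how="left"))
print(linked)
```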
ABSTRACT
BACKGROUND: With the increasing sophistication of the medical industry, various advanced medical services such as medical artificial intelligence, telemedicine, and personalized health care services have emerged. The demand for medical data is also rapidly increasing today because advanced medical services use medical data such as user data and electronic medical records (EMRs) to provide services. As a result, health care institutions and medical practitioners are researching various mechanisms and tools to feed medical data into their systems seamlessly. However, medical data contain sensitive personal information of patients. Therefore, ensuring security while meeting the demand for medical data is a very important problem in the information age for which a solution is required. OBJECTIVE: Our goal is to design a blockchain-based decentralized patient information exchange (PIE) system that can safely and efficiently share EMRs. The proposed system preserves patients' privacy in the EMRs through a medical information exchange process that includes data encryption and access control. METHODS: We propose a blockchain-based EMR-sharing system that allows patients to manage their EMRs scattered across multiple hospitals and share them with other users. Our PIE system protects the patient's EMR from security threats such as counterfeiting and privacy attacks during data sharing. In addition, it provides scalability by using distributed data-sharing methods to quickly share an EMR, regardless of its size or type. We implemented simulation models using Hyperledger Fabric, an open source blockchain framework. RESULTS: We performed a simulation of the EMR-sharing process and compared it with previous works on blockchain-based medical systems to check the proposed system's performance. During the simulation, we found that it takes an average of 0.01014 (SD 0.0028) seconds to download 1 MB of EMR in our proposed PIE system. Moreover, it has been confirmed that data can be freely shared with other users regardless of the size or format of the data to be transmitted through the distributed data-sharing technique using the InterPlanetary File System. We conducted a security analysis to check whether the proposed security mechanism can effectively protect users of the EMR-sharing system from security threats such as data forgery or unauthorized access, and we found that the distributed ledger structure and re-encryption-based data encryption method can effectively protect users' EMRs from forgery and privacy leak threats and provide data integrity. CONCLUSIONS: Blockchain is a distributed ledger technology that provides data integrity to enable patient-centered health information exchange and access control. PIE systems integrate and manage fragmented patient EMRs through blockchain and protect users from security threats during the data exchange process among users. To increase safety and efficiency in the EMR-sharing process, we used access control using security levels, data encryption based on re-encryption, and a distributed data-sharing scheme.
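The storage pattern described (encrypted EMRs kept off-chain, with only a content hash and an access policy recorded on the ledger) can be sketched as below; this toy simulation uses in-memory dictionaries instead of Hyperledger Fabric and the InterPlanetary File System, and symmetric Fernet encryption as a stand-in for the paper's re-encryption scheme.

```python
import hashlib
from cryptography.fernet import Fernet   # pip install cryptography

off_chain_store = {}   # stand-in for IPFS: content hash -> encrypted blob
ledger = []            # stand-in for the blockchain: append-only list of records

def share_emr(patient_id, emr_bytes, key, min_security_level):
    """Encrypt the EMR, store it off-chain and record its hash plus an access policy."""
    encrypted = Fernet(key).encrypt(emr_bytes)
    content_hash = hashlib.sha256(encrypted).hexdigest()
    off_chain_store[content_hash] = encrypted
    ledger.append({"patient": patient_id, "hash": content_hash,
                   "min_security_level": min_security_level})
    return content_hash

def fetch_emr(content_hash, key, requester_level):
    """Enforce the on-ledger access policy, then verify integrity and decrypt."""
    record = next(r for r in ledger if r["hash"] == content_hash)
    if requester_level < record["min_security_level"]:
        raise PermissionError("requester's security level is too low")
    blob = off_chain_store[content_hash]
    assert hashlib.sha256(blob).hexdigest() == content_hash   # integrity check
    return Fernet(key).decrypt(blob)

key = Fernet.generate_key()
h = share_emr("patient-7", b'{"diagnosis": "example"}', key, min_security_level=3)
print(fetch_emr(h, key, requester_level=4))
```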
Subjects
Blockchain, Artificial Intelligence, Computer Security, Confidentiality, Humans, Privacy
ABSTRACT
This paper presents an approach to enable interoperability of the research data management system XNAT by implementing the HL7 standards framework Fast Healthcare Interoperability Resources (FHIR). The FHIR implementation is realized as an XNAT plugin (source code: https://github.com/somnonetz/xnat-fhir-plugin), which allows easy adoption in arbitrary XNAT instances. The approach is demonstrated on patient data exchange between a FHIR reference implementation and XNAT.
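The demonstrated exchange follows the standard FHIR REST interactions; a minimal client-side read of a Patient resource might look like the sketch below (the base URL is a placeholder and this is not the plugin's own code).

```python
import requests

# Placeholder endpoint: replace with the FHIR reference implementation's base URL.
FHIR_BASE = "https://example.org/fhir"

def read_patient(patient_id: str) -> dict:
    """Standard FHIR read interaction: GET [base]/Patient/[id]."""
    response = requests.get(f"{FHIR_BASE}/Patient/{patient_id}",
                            headers={"Accept": "application/fhir+json"},
                            timeout=10)
    response.raise_for_status()
    return response.json()

patient = read_patient("example")
print(patient.get("name"), patient.get("birthDate"))
```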
Subjects
Health Level Seven/organization & administration, Medical Records Systems, Computerized/organization & administration, Neuroimaging/methods, Data Management, Electronic Health Records, Health Level Seven/standards, Humans, Medical Records Systems, Computerized/standards, Systems Integration
ABSTRACT
BACKGROUND: The use of medical data for research purposes requires informed consent from the patient that is compliant with the EU General Data Protection Regulation. In the context of multi-centre research initiatives and a multitude of clinical and epidemiological studies, scalable and automatable measures for digital consent management are required. A modular form, structure and content render a patient's consent reusable across varying project settings, helping to manage and minimise organisational and technical efforts. RESULTS: Within the DFG-funded project "MAGIC" (Grant Number HO 1937/5-1), the digital consent management service tool gICS was enhanced to comply with the recommendations published in the TMF data protection guideline for medical research. In addition, a structured exchange format for modular consent templates was designed, taking into account established standards and formats in the area of digital informed consent management. Using the FHIR standard and the HAPI FHIR library, a first version of the exchange format and the necessary import/export functionalities were successfully implemented. CONCLUSIONS: The proposed exchange format is a "work in progress". It represents a starting point for current discussions concerning digital consent management and attempts to improve interoperability between different approaches within the wider IHE/HL7/FHIR community. Independent of the exchange format, the ability to export, modify and import templates for consents and withdrawals, so that they can be reused in similar clinical and epidemiological studies, is an essential precondition for the sustainable operation of digital consent management.
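The notion of a modular, reusable consent template can be illustrated with a plain JSON export/import round trip; the structure shown is a hypothetical simplification, not the gICS exchange format.

```python
import json

# Hypothetical modular template: reusable modules, each with its own policies.
template = {
    "name": "StudyConsent",
    "version": "1.0",
    "modules": [
        {"title": "Data storage", "mandatory": True,
         "policies": ["store_medical_data"]},
        {"title": "Recontact", "mandatory": False,
         "policies": ["recontact_for_followup"]},
    ],
}

def export_template(template: dict, path: str) -> None:
    with open(path, "w", encoding="utf-8") as fh:
        json.dump(template, fh, indent=2)

def import_template(path: str) -> dict:
    with open(path, encoding="utf-8") as fh:
        return json.load(fh)

export_template(template, "consent_template.json")
reused = import_template("consent_template.json")
reused["name"] = "FollowUpStudyConsent"     # adapt the reusable modules for a new study
print([m["title"] for m in reused["modules"]])
```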
Subjects
Health Information Interoperability, Software, Humans, Informed Consent, Reference Standards
ABSTRACT
BACKGROUND: Epidemiological studies are based on a considerable amount of personal, medical and socio-economic data. To answer research questions with reliable results, epidemiological research projects face the challenge of providing high-quality data. Consequently, the gathered data have to be reviewed continuously during the data collection period. OBJECTIVES: This article describes the development of the mosaicQA library for non-statistical experts, consisting of a set of reusable R functions that support basic data quality assurance for a wide range of application scenarios in epidemiological research. METHODS: To generate valid quality reports for various scenarios and data sets, a general and flexible development approach was needed. As a first step, a set of quality-related questions targeting quality aspects on a more general level was identified. The next step included the design of specific R scripts to produce proper reports for metric and categorical data. For more flexibility, the third development step focused on the generalization of the developed R scripts, e.g. extracting characteristics and parameters. As a last step, the generic characteristics of the developed R functionalities and generated reports were evaluated using different metric and categorical datasets. RESULTS: The developed mosaicQA library generates basic data quality reports for multivariate input data. If needed, more detailed results for single-variable data, including definitions of units, variables, descriptions, code lists and categories of qualified missings, can easily be produced. CONCLUSIONS: The mosaicQA library enables researchers to generate reports for various kinds of metric and categorical data without the need for computational or scripting knowledge. At the moment, the library focuses on data structure quality and supports the assessment of several quality indicators, including the frequency, distribution and plausibility of research variables as well as the occurrence of missing and extreme values. To simplify the installation process, mosaicQA has been released as an official R package.
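The kinds of checks described (frequency, distribution, plausibility, missing and extreme values) can be sketched for a small metric/categorical dataset; this is a generic pandas illustration, not the mosaicQA R functions themselves.

```python
import pandas as pd

# Toy epidemiological extract with one metric and one categorical variable.
df = pd.DataFrame({"age":    [34, 51, 29, 134, None, 45],      # 134 is implausible
                   "smoker": ["yes", "no", "no", "yes", None, "unknown"]})

report = {
    "missing_counts":   df.isna().sum().to_dict(),
    "age_distribution": df["age"].describe().to_dict(),
    "smoker_frequency": df["smoker"].value_counts(dropna=False).to_dict(),
    # Simple plausibility rule for the metric variable (valid range 0-110 years).
    "implausible_age_rows": df.index[(df["age"] < 0) | (df["age"] > 110)].tolist(),
}
for section, values in report.items():
    print(section, values)
```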
Subjects
Data Accuracy, Epidemiologic Studies, Humans, Software
ABSTRACT
With the increasing amount of medical data available on the Web, looking for health information has become one of the most widely searched topics on the Internet. Patients and people from many backgrounds now use Web search engines to acquire medical information, including information about a specific disease, medical treatment or professional advice. Nonetheless, owing to a lack of medical knowledge, many laypeople have difficulty forming appropriate queries to articulate their inquiries, which renders their search queries imprecise due to the use of unclear keywords. The use of these ambiguous and vague queries to describe patients' needs causes Web search engines to fail to retrieve accurate and relevant information. One of the most natural and promising methods to overcome this drawback is query expansion. In this paper, an original approach based on the Bat Algorithm is proposed to improve the retrieval effectiveness of query expansion in the medical field. In contrast to the existing literature, the proposed approach uses the Bat Algorithm to find the best expanded query among a set of expanded query candidates, while maintaining low computational complexity. Moreover, this new approach allows the length of the expanded query to be determined empirically. Numerical results on MEDLINE, the online medical information database, show that the proposed approach is more effective and efficient than the baseline.
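A compact sketch of the idea, selecting expansion terms with a binary Bat Algorithm over a toy candidate pool and a stand-in relevance objective (the paper's actual term weighting and retrieval evaluation on MEDLINE are not reproduced):

```python
import math
import random

random.seed(1)

# Candidate expansion terms with stand-in relevance weights; in a real system these
# weights would come from pseudo-relevance feedback, not hard-coded constants.
candidates = {"cardiac": 0.9, "myocardial": 0.8, "infarction": 0.7, "heart": 0.6,
              "attack": 0.3, "chest": 0.2, "pain": 0.1, "hospital": 0.05}
terms = list(candidates)

def fitness(bits):
    """Stand-in objective: total relevance of selected terms minus a length penalty."""
    return sum(w for (t, w), b in zip(candidates.items(), bits) if b) - 0.15 * sum(bits)

n_bats, n_iter = 12, 60
alpha, gamma, f_min, f_max = 0.9, 0.9, 0.0, 2.0
bats = [[random.randint(0, 1) for _ in terms] for _ in range(n_bats)]
vel = [[0.0] * len(terms) for _ in range(n_bats)]
loudness = [1.0] * n_bats
r0 = [0.5] * n_bats
best = max(bats, key=fitness)

for t in range(1, n_iter + 1):
    for i in range(n_bats):
        freq = f_min + (f_max - f_min) * random.random()
        trial = bats[i][:]
        for d in range(len(terms)):
            vel[i][d] += (bats[i][d] - best[d]) * freq
            if random.random() < 1.0 / (1.0 + math.exp(-vel[i][d])):  # binary transfer function
                trial[d] = 1 - trial[d]
        if random.random() > r0[i] * (1 - math.exp(-gamma * t)):      # local walk near the best bat
            trial = best[:]
            trial[random.randrange(len(terms))] ^= 1
        if fitness(trial) > fitness(bats[i]) and random.random() < loudness[i]:
            bats[i] = trial
            loudness[i] *= alpha                                      # quieter after acceptance
        if fitness(bats[i]) > fitness(best):
            best = bats[i][:]

# Hypothetical original query plus the selected expansion terms.
print("expanded query:", ["heart disease"] + [t for t, b in zip(terms, best) if b])
```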
Subjects
Algorithms, Consumer Health Information/methods, Information Storage and Retrieval/methods, Internet, Search Engine/methods, Information Seeking Behavior
ABSTRACT
The difficulty of disambiguating the sense of the incomplete and imprecise keywords that are extensively used in search queries causes search systems to fail to retrieve the desired information. One of the most powerful and promising methods to overcome this shortcoming and improve the performance of search engines is query expansion, whereby the user's original query is augmented with new keywords that best characterize the user's information needs and produce a more useful query. In this paper, a new Firefly Algorithm-based approach is proposed to enhance the retrieval effectiveness of query expansion while maintaining low computational complexity. In contrast to the existing literature, the proposed approach uses a Firefly Algorithm to find the best expanded query among a set of expanded query candidates. Moreover, this new approach allows the length of the expanded query to be determined empirically. Experimental results on MEDLINE, the online medical information database, show that our proposed approach is more effective and efficient than the state of the art.
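A minimal Firefly Algorithm sketch in the same spirit, optimising a continuous weight per candidate term and selecting the terms whose weight exceeds a threshold; the candidate terms and weights are invented, and the brightness function is a stand-in for the paper's retrieval-based objective.

```python
import math
import random

random.seed(2)

# Stand-in relevance weights for candidate expansion terms (illustrative values only).
candidates = {"diabetes": 0.9, "mellitus": 0.8, "insulin": 0.6,
              "glucose": 0.4, "sugar": 0.2, "clinic": 0.05}
terms = list(candidates)
dim = len(terms)

def brightness(x):
    """Stand-in objective: terms whose weight exceeds 0.5 count as selected."""
    selected = [t for t, w in zip(terms, x) if w > 0.5]
    return sum(candidates[t] for t in selected) - 0.15 * len(selected)

n_fireflies, n_iter = 15, 80
beta0, gamma, alpha = 1.0, 1.0, 0.2
swarm = [[random.random() for _ in range(dim)] for _ in range(n_fireflies)]

for _ in range(n_iter):
    for i in range(n_fireflies):
        for j in range(n_fireflies):
            if brightness(swarm[j]) > brightness(swarm[i]):   # move i towards brighter j
                r2 = sum((a - b) ** 2 for a, b in zip(swarm[i], swarm[j]))
                beta = beta0 * math.exp(-gamma * r2)          # attractiveness decays with distance
                swarm[i] = [min(1.0, max(0.0, xi + beta * (xj - xi)
                                              + alpha * (random.random() - 0.5)))
                            for xi, xj in zip(swarm[i], swarm[j])]

best = max(swarm, key=brightness)
print("expansion terms:", [t for t, w in zip(terms, best) if w > 0.5])
```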
Subjects
Algorithms, Databases, Factual, Information Storage and Retrieval/methods, Artificial Intelligence, MEDLINE
ABSTRACT
We present in this paper a novel approach based on multi-agent technology for Web information foraging. For this purpose, we propose an architecture with two important phases. The first is a learning process for localizing the most relevant pages that might interest the user, performed on a fixed instance of the Web. The second takes into account the openness and dynamicity of the Web: it consists of incremental learning that starts from the result of the first phase and reshapes the outcomes to account for the changes the Web undergoes. The system was implemented using a colony of artificial ants hybridized with tabu search in order to achieve greater effectiveness and efficiency. To validate our proposal, experiments were conducted on MedlinePlus, a real website dedicated to research in the health domain, in contrast to previous works where experiments were performed on Web-log datasets. The main results are promising, both for those related to strong Web regularities and for the response time, which is very short and hence complies with the real-time constraint.
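A toy sketch of the ant-colony-with-tabu idea on an invented link graph: ants walk the graph avoiding already-visited pages (their tabu list), deposit pheromone in proportion to the relevance of the pages they visit, and the most reinforced pages surface as candidates for the user. The graph and relevance scores are placeholders, not MedlinePlus data.

```python
import random

random.seed(3)

# Toy Web graph: page -> outgoing links, with stand-in relevance scores per page.
links = {"home": ["cardiology", "news", "oncology"],
         "cardiology": ["infarction", "news"],
         "oncology": ["news"],
         "infarction": ["treatment"],
         "news": [], "treatment": []}
relevance = {"home": 0.1, "cardiology": 0.6, "oncology": 0.3,
             "infarction": 0.9, "news": 0.05, "treatment": 0.8}

pheromone = {page: 1.0 for page in links}
alpha, beta, rho, n_ants, n_iter = 1.0, 2.0, 0.1, 10, 30

def choose_next(page, tabu):
    options = [p for p in links[page] if p not in tabu]   # tabu list = already-visited pages
    if not options:
        return None
    weights = [pheromone[p] ** alpha * relevance[p] ** beta for p in options]
    return random.choices(options, weights=weights)[0]

for _ in range(n_iter):
    for page in pheromone:                                # pheromone evaporation
        pheromone[page] *= (1 - rho)
    for _ in range(n_ants):
        path, current = ["home"], "home"
        while (nxt := choose_next(current, set(path))) is not None:
            path.append(nxt)
            current = nxt
        quality = sum(relevance[p] for p in path)         # reinforce relevant paths
        for p in path:
            pheromone[p] += quality

print(sorted(pheromone, key=pheromone.get, reverse=True)[:3])
```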
Subjects
Algorithms, Artificial Intelligence, Data Mining/methods, Internet, Humans
ABSTRACT
The present work relates to Web intelligence and, more precisely, to medical information foraging. We present a novel approach based on agent technology for information foraging. An architecture is proposed in which we distinguish two important phases. The first is a learning process for localizing the most relevant pages that might interest the user, performed on a fixed instance of the Web. The second takes into account the openness and dynamicity of the Web: it consists of incremental learning that starts from the result of the first phase and reshapes the outcomes to account for the changes the Web undergoes. The whole system offers a tool to help the user undertake information foraging. We implemented the system using a group of cooperative reactive agents, more precisely a colony of artificial bees. To validate our proposal, experiments were conducted on MedlinePlus, a benchmark dedicated to research in the health domain. The results are promising, both for those related to Web regularities and for the response time, which is very short and hence complies with the real-time constraint.
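A minimal artificial-bee-colony sketch in the same spirit: food sources are candidate pages, employed and onlooker bees move sources towards more relevant linked pages, and scouts restart exhausted sources. The graph and relevance scores are again invented placeholders.

```python
import random

random.seed(4)

# Toy Web graph with stand-in page relevance scores (illustrative values only).
links = {"home": ["cardiology", "news"], "cardiology": ["infarction", "news"],
         "infarction": ["treatment"], "news": ["home"], "treatment": ["infarction"]}
relevance = {"home": 0.1, "cardiology": 0.6, "infarction": 0.9,
             "news": 0.05, "treatment": 0.8}

n_sources, n_iter, limit = 4, 40, 5
sources = [random.choice(list(links)) for _ in range(n_sources)]   # food sources = pages
trials = [0] * n_sources

def try_neighbour(i):
    """Move source i to a linked page if that page is more relevant."""
    neighbour = random.choice(links[sources[i]])
    if relevance[neighbour] > relevance[sources[i]]:
        sources[i], trials[i] = neighbour, 0
    else:
        trials[i] += 1

for _ in range(n_iter):
    for i in range(n_sources):                      # employed bee phase
        try_neighbour(i)
    total = sum(relevance[s] for s in sources)      # onlooker bee phase
    for _ in range(n_sources):
        i = random.choices(range(n_sources),
                           weights=[relevance[s] / total for s in sources])[0]
        try_neighbour(i)
    for i in range(n_sources):                      # scout bee phase
        if trials[i] > limit:
            sources[i], trials[i] = random.choice(list(links)), 0

print("most promising pages:", sorted(set(sources), key=relevance.get, reverse=True))
```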
Subjects
Artificial Intelligence, Data Mining/methods, Health Information Management/methods, Internet, Algorithms, Humans
ABSTRACT
INTRODUCTION: In the context of an increasing number of multi-centric studies providing data from different sites and sources, the necessity for central data management (CDM) becomes undeniable. This is exacerbated by a multiplicity of data types, formats and interfaces. In relation to methodological medical research, the definition of central data management needs to be broadened beyond the simple storage and archiving of research data. OBJECTIVES: This paper highlights typical requirements of CDM for cohort studies and registries and illustrates how orientation for CDM can be provided by addressing selected data management challenges. METHODS: The first part of this paper therefore briefly reviews technical, organisational and legal challenges for CDM in cohort studies and registries, followed by a deduced set of typical requirements of CDM in epidemiological research. RESULTS: The second part introduces the MOSAIC project (a modular systematic approach to implementing CDM). The modular nature of MOSAIC helps to manage both technical and organisational challenges efficiently by providing practical tools. A first set of tools, aimed at selected CDM requirements in cohort studies and registries, comprises a template for the comprehensive documentation of data protection measures and an interactive reference portal for gaining insights and sharing experiences, supplemented by modular software tools for the generation and management of generic pseudonyms, for participant management and for sophisticated consent management. CONCLUSIONS: Altogether, the work within MOSAIC addresses existing challenges in epidemiological research in the context of CDM and facilitates the standardized collection of data with pre-programmed modules and the provided document templates. The necessary effort for in-house programming is reduced, which accelerates the start of data collection.
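The pseudonym-generation idea can be sketched as non-speaking identifiers drawn from an unambiguous alphabet plus a check character to catch transcription errors; this is a generic illustration, not the algorithm of the MOSAIC tools.

```python
import secrets

# Alphabet without easily confused characters (0/O, 1/I/L) to reduce transcription errors.
ALPHABET = "23456789ABCDEFGHJKMNPQRSTUVWXYZ"

def check_char(body: str) -> str:
    """Simple position-weighted checksum over the alphabet (illustrative, not ISO 7064)."""
    total = sum((i + 1) * ALPHABET.index(c) for i, c in enumerate(body))
    return ALPHABET[total % len(ALPHABET)]

def generate_pseudonym(prefix: str = "PSN", length: int = 8) -> str:
    body = "".join(secrets.choice(ALPHABET) for _ in range(length))
    return f"{prefix}-{body}{check_char(body)}"

def is_valid(pseudonym: str) -> bool:
    body, check = pseudonym.split("-")[1][:-1], pseudonym[-1]
    return check_char(body) == check

psn = generate_pseudonym()
print(psn, is_valid(psn))
```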