ABSTRACT
BACKGROUND: The variant call format (VCF) file is a structured and comprehensive text file crucial for researchers and clinicians in interpreting and understanding genomic variation data. It contains essential information about variant positions in the genome, along with alleles, genotype calls, and quality scores. Analyzing and visualizing these files, however, poses significant challenges due to the need for diverse resources and robust features for in-depth exploration. RESULTS: To address these challenges, we introduce variant graph craft (VGC), a VCF file visualization and analysis tool. VGC offers a wide range of features for exploring genetic variations, including extraction of variant data, intuitive visualization, and graphical representation of samples with genotype information. VGC is designed primarily for the analysis of patient cohorts, but it can also be adapted for use with individual probands or families. It integrates seamlessly with external resources, providing insights into gene function and variant frequencies in sample data. VGC includes gene function and pathway information from Molecular Signatures Database (MSigDB) for GO terms, KEGG, Biocarta, Pathway Interaction Database, and Reactome. Additionally, it dynamically links to gnomAD for variant information and incorporates ClinVar data for pathogenic variant information. VGC supports the Human Genome Assembly Hg37 and Hg38, ensuring compatibility with a wide range of data sets, and accommodates various approaches to exploring genetic variation data. It can be tailored to specific user needs with optional phenotype input data. CONCLUSIONS: In summary, VGC provides a comprehensive set of features tailored to researchers working with genomic variation data. Its intuitive interface, rapid filtering capabilities, and the flexibility to perform queries using custom groups make it an effective tool in identifying variants potentially associated with diseases. VGC operates locally, ensuring data security and privacy by eliminating the need for cloud-based VCF uploads, making it a secure and user-friendly tool. It is freely available at https://github.com/alperuzun/VGC .
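To illustrate the kind of record VGC extracts, the following minimal Python sketch parses a single VCF body line into its fixed fields and per-sample genotype calls. The field names follow the VCF specification; the example record and sample labels are hypothetical, and the sketch is not part of the VGC code base.

```python
# Minimal sketch: parse one VCF data line (hypothetical example record).
def parse_vcf_line(line):
    fields = line.rstrip("\n").split("\t")
    chrom, pos, vid, ref, alt, qual, filt, info = fields[:8]
    record = {
        "CHROM": chrom,
        "POS": int(pos),
        "ID": vid,
        "REF": ref,
        "ALT": alt.split(","),                       # alternate alleles
        "QUAL": None if qual == "." else float(qual),
        "FILTER": filt,
        "INFO": dict(kv.split("=", 1) if "=" in kv else (kv, True)
                     for kv in info.split(";")),
        "SAMPLES": {},
    }
    if len(fields) > 9:                              # FORMAT + genotype columns
        keys = fields[8].split(":")
        for i, sample in enumerate(fields[9:]):
            record["SAMPLES"][f"sample_{i+1}"] = dict(zip(keys, sample.split(":")))
    return record

example = "1\t12345\trs999\tA\tG\t50\tPASS\tAF=0.01;DB\tGT:DP\t0/1:30\t1/1:25"
print(parse_vcf_line(example)["SAMPLES"])
```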
Subject(s)
Genetic Variation, Software, Humans, Genetic Variation/genetics, Genetic Databases, Genomics/methods, Genotype
ABSTRACT
This viewpoint article first explores the ethical challenges associated with the future application of large language models (LLMs) in the context of medical education. These challenges include not only ethical concerns related to the development of LLMs, such as artificial intelligence (AI) hallucinations, information bias, privacy and data risks, and deficiencies in terms of transparency and interpretability but also issues concerning the application of LLMs, including deficiencies in emotional intelligence, educational inequities, problems with academic integrity, and questions of responsibility and copyright ownership. This paper then analyzes existing AI-related legal and ethical frameworks and highlights their limitations with regard to the application of LLMs in the context of medical education. To ensure that LLMs are integrated in a responsible and safe manner, the authors recommend the development of a unified ethical framework that is specifically tailored for LLMs in this field. This framework should be based on 8 fundamental principles: quality control and supervision mechanisms; privacy and data protection; transparency and interpretability; fairness and equal treatment; academic integrity and moral norms; accountability and traceability; protection and respect for intellectual property; and the promotion of educational research and innovation. The authors further discuss specific measures that can be taken to implement these principles, thereby laying a solid foundation for the development of a comprehensive and actionable ethical framework. Such a unified ethical framework based on these 8 fundamental principles can provide clear guidance and support for the application of LLMs in the context of medical education. This approach can help establish a balance between technological advancement and ethical safeguards, thereby ensuring that medical education can progress without compromising the principles of fairness, justice, or patient safety and establishing a more equitable, safer, and more efficient environment for medical education.
Subject(s)
Artificial Intelligence, Medical Education, Medical Education/ethics, Humans, Artificial Intelligence/ethics, Language, Privacy
ABSTRACT
To further explore the relationship between aryl substituents and mechanofluorochromic (MFC) behaviors, four salicylaldimine-based difluoroboron complexes (ts-Ph BF2, ts-Ph-NA BF2, ts-2NA BF2, and ts-triphenylamine [TPA] BF2), bearing aromatic substituents with different steric hindrance effects, were designed and successfully synthesized. The four complexes, with twisted molecular conformations, displayed intramolecular charge transfer and aggregation-induced emission properties. Under external mechanical stimuli, the as-synthesized powders of ts-Ph BF2, ts-Ph-NA BF2, and ts-TPA BF2 exhibited red-shifted fluorescence emission; ts-Ph BF2 and ts-TPA BF2 could be recovered to their original emission by fuming, whereas ts-Ph-NA BF2 displayed irreversible switching. ts-2NA BF2 showed no change during grinding or fuming. X-ray diffraction measurements indicated that the MFC behaviors could be attributed to phase transformation between well-defined crystalline and disordered amorphous states. Further work showed that ts-TPA BF2, which exhibited the most pronounced MFC behavior, could be applied to data security protection on ink-free rewritable paper.
Subject(s)
Computer Security, X-Ray Diffraction
ABSTRACT
Existing attribute-based proxy re-encryption schemes suffer from issues such as complex access policies, large ciphertext storage space consumption, and excessive authority concentrated in the authorization center, leading to weak security and controllability of data sharing in cloud storage. This study proposes a Weighted Attribute Authority Multi-Authority Proxy Re-Encryption (WAMA-PRE) scheme that introduces attribute weights to elevate the expression of access policies from binary to multi-valued, simplifying policies and reducing ciphertext storage space. Simultaneously, the multiple attribute authorities and the authorization center construct a joint key, reducing reliance on a single authorization center. The proposed distributed attribute authority network enhances the anti-attack capability of cloud storage. Experimental results show that introducing attribute weights can reduce ciphertext storage space by 50%, proxy re-encryption saves 63% of the time required by repeated encryption, and the joint key construction time is only 1% of that of the benchmark scheme. Security analysis proves that WAMA-PRE achieves CPA security under the decisional q-parallel BDHE assumption in the random oracle model. This study provides an effective solution for secure data sharing in cloud storage.
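The weighted-attribute idea can be illustrated independently of the underlying pairing-based cryptography. The toy Python sketch below is an assumption-laden illustration of how raising attributes from binary to multi-valued weights lets one policy replace several binary ones; it is not the WAMA-PRE construction and performs no encryption.

```python
# Toy illustration of multi-valued (weighted) attribute policies.
# A policy states the minimum weight required per attribute; a user
# satisfies it when every required attribute meets its threshold.
def satisfies(policy, user_attributes):
    return all(user_attributes.get(attr, 0) >= min_weight
               for attr, min_weight in policy.items())

# With binary attributes, "clearance level 3" would need three separate
# attributes (level>=1, level>=2, level>=3); a weight expresses it directly.
policy = {"department:cardiology": 1, "clearance": 3}
print(satisfies(policy, {"department:cardiology": 1, "clearance": 4}))  # True
print(satisfies(policy, {"department:cardiology": 1, "clearance": 2}))  # False
```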
ABSTRACT
Due to their inherent openness, wireless sensor networks (WSNs) are vulnerable to eavesdropping attacks. To address the issue of secure Internet Key Exchange (IKE) in the absence of reliable third parties such as a CA/PKI (Certificate Authority/Public Key Infrastructure) in WSNs, a novel key synchronization method named NDPCS-KS is proposed in this paper. First, through an initial negotiation process, both ends of the main channels generate the same initial key seeds using the Channel State Information (CSI). Subsequently, negotiation keys and a negative database (NDB) are synchronously generated at the two ends based on the initial key seeds. Then, in a second negotiation process, the NDB is employed to filter the negotiation keys to obtain the keys for encryption. NDPCS-KS reduces the risk of information leakage because the keys are never transmitted directly over the network, and eavesdroppers cannot acquire the initial key seeds because their eavesdropping channels are physically isolated from the main channels. Furthermore, because reversing the NDB is NP-hard, deducing the initial key seeds is computationally infeasible even if an attacker obtains the NDB. It therefore becomes exceedingly difficult for attackers to generate legitimate encryption keys without the NDB or the initial key seeds. Moreover, a lightweight anti-replay and identity verification mechanism is designed to deal with replay and forgery attacks. Experimental results show that NDPCS-KS has lower time overhead and stronger randomness in key generation than other methods, and it can effectively counter replay, forgery, and tampering attacks.
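The first step, deriving identical key seeds at both ends from reciprocal channel measurements, can be sketched as follows. The quantization rule and CSI values are hypothetical, and the NDB generation and second-negotiation filtering described in the abstract are not shown.

```python
import hashlib

# Toy sketch: quantize reciprocal CSI amplitude samples into bits and hash
# them into a key seed. Alice's and Bob's measurements of the same channel
# are assumed to be nearly identical; an eavesdropper on a physically
# separate channel observes different values and derives a different seed.
def csi_to_seed(csi_amplitudes):
    median = sorted(csi_amplitudes)[len(csi_amplitudes) // 2]
    bits = "".join("1" if a > median else "0" for a in csi_amplitudes)
    return hashlib.sha256(bits.encode()).hexdigest()

alice_csi = [0.91, 0.40, 0.73, 0.22, 0.85, 0.31, 0.66, 0.12]
bob_csi   = [0.90, 0.41, 0.72, 0.23, 0.86, 0.30, 0.67, 0.13]  # reciprocal channel
print(csi_to_seed(alice_csi) == csi_to_seed(bob_csi))          # True in this toy case
```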
ABSTRACT
In the realm of IoT sensor data security, particularly in areas like agricultural product traceability, the challenges of ensuring product origin and quality are paramount. This research presents a novel blockchain oracle solution integrating an enhanced MTAS signature algorithm derived from the Schnorr signature algorithm. The key improvement lies in the automatic adaptation of flexible threshold values based on the current scenario, catering to diverse security and efficiency requirements. Utilizing the continuously increasing block height of the blockchain as a pivotal blinding parameter, our approach strengthens signature verifiability and security. By combining the block height with signature parameters, we devise a distinctive signing scheme reliant on a globally immutable timestamp. Additionally, this study introduces a reliable oracle reputation mechanism for monitoring and assessing oracle node performance, maintaining both local and global reputations. This mechanism leverages smart contracts to evaluate each oracle's historical service, penalizing or removing nodes engaged in inappropriate behaviors. Experimental results highlight the innovative contributions of our approach to enhancing on-chain efficiency and fortifying security during the on-chain process, offering promising advancements for secure and efficient IoT sensor data transmission.
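The idea of folding a block height into the signature can be illustrated with a plain, single-signer Schnorr signature in which the block height is bound into the challenge hash. The sketch below uses deliberately tiny demo parameters and hypothetical helper names; it does not reproduce the paper's MTAS threshold construction or its adaptive-threshold logic.

```python
import hashlib, secrets

# Toy Schnorr signature in a small prime-order subgroup (demo parameters only;
# real deployments use large groups or elliptic curves).
p, q, g = 2903, 1451, 4          # p = 2q + 1, g generates the order-q subgroup

def h(*parts):
    data = "|".join(str(x) for x in parts).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def sign(x, message, block_height):
    k = secrets.randbelow(q - 1) + 1
    r = pow(g, k, p)
    e = h(r, message, block_height)      # block height bound into the challenge
    s = (k + x * e) % q
    return e, s

def verify(y, message, block_height, sig):
    e, s = sig
    r = pow(g, s, p) * pow(pow(y, e, p), -1, p) % p
    return h(r, message, block_height) == e

x = secrets.randbelow(q - 1) + 1     # private key
y = pow(g, x, p)                     # public key
sig = sign(x, "sensor-reading-42", block_height=1_000_123)
print(verify(y, "sensor-reading-42", 1_000_123, sig))   # True
print(verify(y, "sensor-reading-42", 1_000_124, sig))   # False: height mismatch
```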
ABSTRACT
Background: The COVID-19 pandemic has accelerated the adoption of electronic health (e-Health), leveraging technologies such as telemedicine, electronic health records, artificial intelligence, and patient engagement platforms. This transformation underscores e-Health's role in providing efficient, patient-centered care. Our study explores health care professionals' readiness for these technologies, emphasizing the need for tailored education in this evolving landscape. Methods: In our study, conducted between February and March 2023, we administered a questionnaire-based survey to 500 staff members (82.4% female, 17.6% male) aged 25-70 from medical universities in Tbilisi, Georgia. The structured questionnaire covered topics such as computer literacy, telemedicine awareness, patient data security, and ethical considerations. We employed SPSS v21.0 for data analysis, encompassing descriptive statistics and thematic analysis of open-ended responses. Results: Our study included 500 participants categorized into five age groups. Notably, 31% considered themselves computer "experts," while 69% rated their skills as "intermediate" or "advanced." Furthermore, 85% used computers professionally, with 33% having practical computer training. Interestingly, 59% expressed interest in information technology training. Regarding e-Health, 15% believed it involves remote communication between health care professionals and patients, while 42% considered this "correct" and 37% "might be correct." Opinions varied concerning its application in managing patients, and responses on e-Health's integration into Georgia's health care likewise ranged widely. Participants exhibited diverse views on patient data safety, and opinions on the necessity of informed consent for e-Health applications also differed. Conclusions: Our study explores health care professionals' readiness for e-Health adoption during the COVID-19 pandemic. It reveals varying computer literacy levels, a willingness to learn, differing views on e-Health applications, and mixed opinions on its integration into Georgian health care. These findings emphasize the need for clear e-Health terminology, education, tailored approaches, and a focus on data privacy and informed consent. Overall, e-Health's transformative role in modern health care is underscored.
Subject(s)
COVID-19, Digital Literacy, Health Personnel, SARS-CoV-2, Telemedicine, Humans, COVID-19/epidemiology, Male, Female, Middle Aged, Adult, Aged, Georgia (Republic), Health Personnel/psychology, Pandemics, Attitude of Health Personnel, Surveys and Questionnaires, Computer Security, Attitude to Computers, Electronic Health Records
ABSTRACT
In recent years, with the rapid development of blockchain technology, the issues of storage load and data security have attracted increasing attention. Due to the immutable nature of data on the blockchain, where data can only be added and not deleted, there is a significant increase in storage pressure on blockchain nodes. In order to alleviate this burden, this paper proposes a blockchain data storage strategy based on a hot and cold block mechanism. It employs a block heat evaluation algorithm to assess the historical and correlation-based heat indicators of blocks, enabling the identification of frequently accessed block data for storage within the blockchain nodes. Conversely, less frequently accessed or "cold" block data are offloaded to cloud storage systems. This approach effectively reduces the overall storage pressure on blockchain nodes. Furthermore, in applications such as healthcare and government services that utilize blockchain technology, it is essential to encrypt stored data to safeguard personal privacy and enforce access control measures. To address this need, we introduce a blockchain data encryption storage mechanism based on threshold secret sharing. Leveraging threshold secret sharing technology, the encryption key for blockchain data is fragmented into multiple segments and distributed across network nodes. These encrypted key segments are further secured through additional encryption using public keys before being stored. This method serves to significantly increase attackers' costs associated with accessing blockchain data. Additionally, our proposed encryption scheme ensures that each block has an associated encryption key that is stored alongside its corresponding block data. This design effectively mitigates vulnerabilities such as weak password attacks. Experimental results demonstrate that our approach achieves efficient encrypted storage of data while concurrently reducing the storage pressure experienced by blockchain nodes.
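The key fragmentation described above is typically realized with Shamir's threshold secret sharing. The following minimal sketch shows generic k-of-n sharing over a prime field: any k shares reconstruct the block's encryption key, fewer reveal nothing. The field size, parameters, and variable names are illustrative assumptions, not the paper's exact scheme.

```python
import secrets

# Minimal Shamir k-of-n secret sharing over a prime field: the block's
# encryption key is split into n shares and any k of them reconstruct it.
PRIME = 2**127 - 1   # a Mersenne prime large enough for a ~128-bit key

def split(secret, n, k):
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term (the secret).
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % PRIME
                den = den * (xj - xm) % PRIME
        secret = (secret + yj * num * pow(den, -1, PRIME)) % PRIME
    return secret

key = secrets.randbelow(PRIME)
shares = split(key, n=5, k=3)
print(reconstruct(shares[:3]) == key)   # any 3 of the 5 shares recover the key
```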
ABSTRACT
Telemedicine, defined as the practice of delivering healthcare services remotely using information and communications technologies, raises a plethora of ethical considerations. As telemedicine evolves, its ethical dimensions play an increasingly pivotal role in balancing the benefits of advanced technologies, ensuring responsible healthcare practices within telemedicine environments, and safeguarding patient rights. Healthcare providers, patients, policymakers, and technology developers involved in telemedicine encounter numerous ethical challenges that need to be addressed. Key ethical topics include prioritizing the protection of patient rights and privacy, which entails ensuring equitable access to remote healthcare services and maintaining the doctor-patient relationship in virtual settings. Additional areas of focus encompass data security concerns and the quality of healthcare delivery, underscoring the importance of upholding ethical standards in the digital realm. A critical examination of these ethical dimensions highlights the necessity of establishing binding ethical guidelines and legal regulations. These measures could assist stakeholders in formulating effective strategies and methodologies to navigate the complex telemedicine landscape, ensuring adherence to the highest ethical standards and promoting patient welfare. A balanced approach to telemedicine ethics should integrate the benefits of telemedicine with proactive measures to address emerging ethical challenges and should be grounded in a well-prepared and respected ethical framework.
Subject(s)
Telemedicine, Telemedicine/ethics, Humans, Patient Rights/ethics, Confidentiality/ethics, Computer Security/ethics, Physician-Patient Relations/ethics
ABSTRACT
INTRODUCTION: The integration of high-resolution video into surgical practice has fostered widespread interest in capturing surgical video recordings for the purposes of patient care, medical training, quality improvement, and documentation. The capture, analysis, and storage of such recordings inherently impact operating room (OR) activities and introduce potential harms to patients as well as members of the surgical team, which can be analyzed from both ethical and legal perspectives. METHODS: Following Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, a systematic literature search of PubMed was conducted. The citations of included articles were then reviewed to identify any articles not captured by our initial search. RESULTS: Sixty-two articles were included in the review (52 from the PubMed search and 10 from citation review). Prevalent key issues in the current literature include privacy, consent, ownership, legal use and discoverability, editing, data security, and the impact of recording on the surgical team. CONCLUSIONS: This review aims to spark proactive discussions of the ethical and legal implications of recording in the OR, which will guide transformation as the medical field adapts to new and innovative technologies without compromising its ideals or patient care.
Subject(s)
Operating Rooms, Humans, Video Recording
ABSTRACT
The rapid global spread of infectious diseases, epitomized by the recent COVID-19 pandemic, has highlighted the critical need for effective cross-border pandemic management strategies. Digital health passports (DHPs), which securely store and facilitate the sharing of critical health information, including vaccination records and test results, have emerged as a promising solution to enable safe travel and access to essential services and economic activities during pandemics. However, the implementation of DHPs faces several significant challenges, both related to geographical disparities and practical considerations, necessitating a comprehensive approach for successful global adoption. In this narrative review article, we identify and elaborate on the critical geographical and practical barriers that hinder global adoption and the effective utilization of DHPs. Geographical barriers are complex, encompassing disparities in vaccine access, regulatory inconsistencies, differences across countries in data security and users' privacy policies, challenges related to interoperability and standardization, and inadequacies in technological infrastructure and limited access to digital technologies. Practical challenges include the possibility of vaccine contraindications and breakthrough infections, uncertainties surrounding natural immunity, and limitations of standard tests in assessing infection risk. To address geographical disparities and enhance the functionality and interoperability of DHPs, we propose a framework that emphasizes international collaboration to achieve equitable access to vaccines and testing resources. Furthermore, we recommend international cooperation to establish unified vaccine regulatory frameworks, adopting globally accepted standards for data privacy and protection, implementing interoperability protocols, and taking steps to bridge the digital divide. Addressing practical challenges requires a meticulous approach to assessing individual risk and augmenting DHP implementation with rigorous health screenings and personal infection prevention measures. Collectively, these initiatives contribute to the development of robust and inclusive cross-border pandemic management strategies, ultimately promoting a safer and more interconnected global community in the face of current and future pandemics.
Subject(s)
COVID-19, Vaccines, Humans, COVID-19/epidemiology, COVID-19/prevention & control, Pandemics/prevention & control, Vaccination
ABSTRACT
BACKGROUND: Reference intervals (RIs) for patient test results are in standard use across many medical disciplines, allowing physicians to identify measurements indicating potentially pathological states with relative ease. The process of inferring cohort-specific RIs is, however, often ignored because of the high costs and cumbersome efforts associated with it. Sophisticated analysis tools are required to automatically infer relevant and locally specific RIs directly from routine laboratory data. These tools would effectively connect clinical laboratory databases to physicians and provide personalized target ranges for the respective cohort population. OBJECTIVE: This study aims to describe the BioRef infrastructure, a multicentric governance and IT framework for the estimation and assessment of patient group-specific RIs from routine clinical laboratory data using an innovative decentralized data-sharing approach and a sophisticated, clinically oriented graphical user interface for data analysis. METHODS: A common governance agreement and interoperability standards have been established, allowing the harmonization of multidimensional laboratory measurements from multiple clinical databases into a unified "big data" resource. International coding systems, such as the International Classification of Diseases, Tenth Revision (ICD-10); unique identifiers for medical devices from the Global Unique Device Identification Database; type identifiers from the Global Medical Device Nomenclature; and a universal transfer logic, such as the Resource Description Framework (RDF), are used to align the routine laboratory data of each data provider for use within the BioRef framework. With a decentralized data-sharing approach, the BioRef data can be evaluated by end users from each cohort site following a strict "no copy, no move" principle, that is, only data aggregates for the intercohort analysis of target ranges are exchanged. RESULTS: The TI4Health distributed and secure analytics system was used to implement the proposed federated and privacy-preserving approach and comply with the limitations applied to sensitive patient data. Under the BioRef interoperability consensus, clinical partners enable the computation of RIs via the TI4Health graphical user interface for query without exposing the underlying raw data. The interface was developed for use by physicians and clinical laboratory specialists and allows intuitive and interactive data stratification by patient factors (age, sex, and personal medical history) as well as laboratory analysis determinants (device, analyzer, and test kit identifier). This consolidated effort enables the creation of extremely detailed and patient group-specific queries, allowing the generation of individualized, covariate-adjusted RIs on the fly. CONCLUSIONS: With the BioRef-TI4Health infrastructure, a framework for clinical physicians and researchers to define precise RIs immediately in a convenient, privacy-preserving, and reproducible manner has been implemented, promoting a vital part of practicing precision medicine while streamlining compliance and avoiding transfers of raw patient data. This new approach can provide a crucial update on RIs and improve patient care for personalized medicine.
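The "no copy, no move" principle means only aggregates leave each site. The sketch below illustrates one way such aggregates could be combined into a stratum-specific reference interval: each site exports a fixed-bin histogram and the coordinator reads off the central 95% range. The binning, percentile convention, and data are illustrative assumptions, not the TI4Health implementation.

```python
import bisect

# Toy sketch: each site exports only a histogram of its measurements for one
# patient stratum; the coordinator merges histograms and reports an RI as
# the 2.5th-97.5th percentile range.
BIN_EDGES = [x * 0.5 for x in range(0, 41)]          # 0.0 .. 20.0 in 0.5 steps

def local_histogram(values):                          # runs inside each site
    counts = [0] * (len(BIN_EDGES) - 1)
    for v in values:
        i = min(bisect.bisect_right(BIN_EDGES, v) - 1, len(counts) - 1)
        counts[max(i, 0)] += 1
    return counts

def merged_percentile(histograms, pct):               # runs at the coordinator
    counts = [sum(col) for col in zip(*histograms)]
    target = pct / 100 * sum(counts)
    running = 0
    for i, c in enumerate(counts):
        running += c
        if running >= target:
            return (BIN_EDGES[i] + BIN_EDGES[i + 1]) / 2   # bin midpoint
    return BIN_EDGES[-1]

site_a = local_histogram([4.1, 4.8, 5.0, 5.3, 6.2, 7.1])   # hypothetical values
site_b = local_histogram([4.5, 4.9, 5.6, 5.8, 6.0, 9.4])
ri = (merged_percentile([site_a, site_b], 2.5), merged_percentile([site_a, site_b], 97.5))
print("reference interval:", ri)
```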
Subject(s)
Big Data, Privacy, Humans, Data Collection, Laboratories, Information Dissemination
ABSTRACT
Artificial intelligence (AI) chatbots like ChatGPT and Google Bard are computer programs that use AI and natural language processing to understand customer questions and generate natural, fluid, dialogue-like responses to their inputs. ChatGPT, an AI chatbot created by OpenAI, has rapidly become a widely used tool on the internet. AI chatbots have the potential to improve patient care and public health. However, they are trained on massive amounts of people's data, which may include sensitive patient data and business information. The increased use of chatbots introduces data security issues, which should be handled yet remain understudied. This paper aims to identify the most important security problems of AI chatbots and propose guidelines for protecting sensitive health information. It explores the impact of using ChatGPT in health care. It also identifies the principal security risks of ChatGPT and suggests key considerations for security risk mitigation. It concludes by discussing the policy implications of using AI chatbots in health care.
Subject(s)
Artificial Intelligence, Software, Humans, Natural Language Processing, Commerce, Delivery of Health Care
ABSTRACT
The emergence of Internet of Things (IoT) technology has brought about tremendous possibilities, but at the same time, it has opened up new vulnerabilities and attack vectors that could compromise the confidentiality, integrity, and availability of connected systems. Developing a secure IoT ecosystem is a daunting challenge that requires a systematic and holistic approach to identify and mitigate potential security threats. Cybersecurity research considerations play a critical role in this regard, as they provide the foundation for designing and implementing security measures that can address emerging risks. To achieve a secure IoT ecosystem, scientists and engineers must first define rigorous security specifications that serve as the foundation for developing secure devices, chipsets, and networks. Developing such specifications requires an interdisciplinary approach that involves multiple stakeholders, including cybersecurity experts, network architects, system designers, and domain experts. The primary challenge in IoT security is ensuring the system can defend against both known and unknown attacks. To date, the IoT research community has identified several key security concerns related to the architecture of IoT systems, including issues with connectivity, communication, and management protocols. This research paper provides a comprehensive and clear review of the current state of anomalies and security concepts related to the IoT. We classify and analyze prevalent security concerns regarding the IoT's layered architecture, including connectivity, communication, and management protocols. We establish the foundation of IoT security by examining current attacks, threats, and cutting-edge solutions. Furthermore, we set security goals that will serve as the benchmark for assessing whether a solution satisfies specific IoT use cases.
ABSTRACT
Nowadays, Smart Healthcare Systems (SHS) are frequently used by people for personal healthcare observation through various smart devices. An SHS uses IoT technology and cloud infrastructure for data capture, transmission through smart devices, data storage, processing, and healthcare advice. Processing such a huge amount of data from numerous IoT devices in a short time is quite challenging. Thus, technological frameworks such as edge computing or fog computing can be used as a middle layer between the cloud and the user in an SHS, reducing the response time for data processing at the lower (edge) level. However, the Edge of Things (EoT) also suffers from security and privacy issues. A robust healthcare monitoring framework with secure data storage and access is needed, one that responds quickly when abnormal data are produced and that stores and accesses sensitive data securely. This paper proposes a Secure Framework based on the Edge of Things (SEoT) for smart healthcare systems, designed mainly for real-time health monitoring while maintaining the security and confidentiality of healthcare data in a controlled manner. The framework includes clustering approaches for analyzing bio-signal data for abnormality detection and Attribute-Based Encryption (ABE) for bio-signal data security and secure access. Experimental results show that the proposed framework improves performance while maintaining accuracy of up to 98.5% and preserving data security.
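A minimal version of the clustering step for abnormality detection might look like the following: a simple two-cluster one-dimensional k-means over heart-rate samples, flagging the distant cluster as abnormal. The data and thresholds are hypothetical, and the ABE layer and edge deployment are not shown.

```python
# Toy two-cluster 1-D k-means: readings closer to the high centroid are
# flagged as abnormal bio-signal values (illustrative only).
def kmeans_1d(values, iters=20):
    c_lo, c_hi = min(values), max(values)
    for _ in range(iters):
        lo = [v for v in values if abs(v - c_lo) <= abs(v - c_hi)]
        hi = [v for v in values if abs(v - c_lo) > abs(v - c_hi)]
        if lo:
            c_lo = sum(lo) / len(lo)
        if hi:
            c_hi = sum(hi) / len(hi)
    return c_lo, c_hi

heart_rate = [72, 75, 70, 74, 71, 73, 69, 132, 128, 76]   # hypothetical samples
c_normal, c_abnormal = kmeans_1d(heart_rate)
abnormal = [v for v in heart_rate if abs(v - c_abnormal) < abs(v - c_normal)]
print("flagged readings:", abnormal)    # the tachycardic samples
```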
ABSTRACT
Fragmentation of healthcare systems through limited cross-speciality communication and intermittent, intervention-based care, without insight into follow-up and compliance, results in poor patient experiences and potentially contributes to suboptimal outcomes. Data-driven tools and novel technologies have the capability to address these shortcomings, but insights from all stakeholders in the care continuum remain lacking. A structured online questionnaire was given to respondents (n = 1432) in nine global geographies to investigate attitudes to the use of data and novel technologies in the management of vascular disease. Patients with coronary or peripheral artery disease (n = 961), physicians responsible for their care (n = 345), and administrators/healthcare leaders with responsibility for commissioning/procuring cardiovascular services (n = 126) were included. Narrative themes arising from the survey included patients' desire for more personalized healthcare, shared decision-making, and improved communication. Patients, administrators, and physicians perceived and experienced deficiencies in continuity of care, and all acknowledged the potential for data-driven techniques and novel technologies to address some of these shortcomings. Further, physicians and administrators saw the 'upstream' segment of the care journey-before diagnosis, at point of diagnosis, and when determining treatment-as key to enabling tangible improvements in patient experience and outcomes. Finally, despite acceptance that data sharing is critical to the success of such interventions, there remains persistent issues related to trust and transparency. The current fragmented care continuum could be improved and streamlined through the adoption of advanced data analytics and novel technologies, including diagnostic and monitoring techniques. Such an approach could enable the refocusing of healthcare from intermittent contacts and intervention-only focus to a more holistic patient view.
ABSTRACT
BACKGROUND: With the increasing sophistication of the medical industry, various advanced medical services such as medical artificial intelligence, telemedicine, and personalized health care services have emerged. The demand for medical data is also rapidly increasing today because advanced medical services use medical data such as user data and electronic medical records (EMRs) to provide services. As a result, health care institutions and medical practitioners are researching various mechanisms and tools to feed medical data into their systems seamlessly. However, medical data contain sensitive personal information of patients. Therefore, ensuring security while meeting the demand for medical data is a very important problem in the information age for which a solution is required. OBJECTIVE: Our goal is to design a blockchain-based decentralized patient information exchange (PIE) system that can safely and efficiently share EMRs. The proposed system preserves patients' privacy in the EMRs through a medical information exchange process that includes data encryption and access control. METHODS: We propose a blockchain-based EMR-sharing system that allows patients to manage their EMRs scattered across multiple hospitals and share them with other users. Our PIE system protects the patient's EMR from security threats such as counterfeiting and privacy attacks during data sharing. In addition, it provides scalability by using distributed data-sharing methods to quickly share an EMR, regardless of its size or type. We implemented simulation models using Hyperledger Fabric, an open source blockchain framework. RESULTS: We performed a simulation of the EMR-sharing process and compared it with previous works on blockchain-based medical systems to check the proposed system's performance. During the simulation, we found that it takes an average of 0.01014 (SD 0.0028) seconds to download 1 MB of EMR in our proposed PIE system. Moreover, it has been confirmed that data can be freely shared with other users regardless of the size or format of the data to be transmitted through the distributed data-sharing technique using the InterPlanetary File System. We conducted a security analysis to check whether the proposed security mechanism can effectively protect users of the EMR-sharing system from security threats such as data forgery or unauthorized access, and we found that the distributed ledger structure and re-encryption-based data encryption method can effectively protect users' EMRs from forgery and privacy leak threats and provide data integrity. CONCLUSIONS: Blockchain is a distributed ledger technology that provides data integrity to enable patient-centered health information exchange and access control. PIE systems integrate and manage fragmented patient EMRs through blockchain and protect users from security threats during the data exchange process among users. To increase safety and efficiency in the EMR-sharing process, we used access control using security levels, data encryption based on re-encryption, and a distributed data-sharing scheme.
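Two of the building blocks mentioned above, distributed content-addressed storage of EMRs of arbitrary size and access control by security level, can be sketched in a few lines. The dictionary-based store, chunking scheme, and level check are simplified stand-ins for illustration only; the actual system uses Hyperledger Fabric, the InterPlanetary File System, and re-encryption-based protection.

```python
import hashlib, json

# Toy sketch: an EMR is chunked and stored under content hashes (IPFS-style),
# while an on-chain-like index records the chunk list and a security level.
store = {}                                  # stand-in for distributed storage

def put_emr(emr_bytes, security_level, chunk_size=1024):
    chunk_ids = []
    for i in range(0, len(emr_bytes), chunk_size):
        chunk = emr_bytes[i:i + chunk_size]
        cid = hashlib.sha256(chunk).hexdigest()
        store[cid] = chunk
        chunk_ids.append(cid)
    return {"chunks": chunk_ids, "security_level": security_level}

def get_emr(index_entry, requester_level):
    if requester_level < index_entry["security_level"]:
        raise PermissionError("requester clearance below record security level")
    return b"".join(store[cid] for cid in index_entry["chunks"])

entry = put_emr(json.dumps({"patient": "P-001", "dx": "I25.1"}).encode(), security_level=2)
print(get_emr(entry, requester_level=3))    # succeeds; level 1 would be refused
```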
Subject(s)
Blockchain, Artificial Intelligence, Computer Security, Confidentiality, Humans, Privacy
ABSTRACT
Established Internet of Things (IoT) platforms suffer from their inability to determine whether an IoT app is secure or not. A security analysis system (SAS) is a protective shield against attacks that break down data privacy and security; its main tasks are detecting malware and verifying app behavior. Many SASs have been implemented for various IoT applications, but most rely on static or dynamic analysis alone, whereas hybrid analysis yields the most accurate results. The effectiveness of a SAS depends on many criteria related to the analysis process, such as analysis type, characteristics, sensitivity, and analysis techniques. This paper proposes a new hybrid (static and dynamic) SAS based on the model-checking technique and deep learning, called the HSAS-MD analyzer, which takes a holistic analysis perspective on IoT apps. It analyzes the data of IoT apps by (1) converting the source code of the target applications into a format that a model checker can process; (2) detecting any abnormal behavior in the IoT application; (3) extracting the main static features to be tested and classified using a deep-learning CNN algorithm; and (4) verifying app behavior by using the model-checking technique. HSAS-MD gives the best results in detecting malware from malicious smart-thing applications compared with other SASs. The experimental results of HSAS-MD show that it achieves 95%, 94%, 91%, and 93% for accuracy, precision, recall, and F-measure, respectively. It also gives the best results compared with other analyzers across various criteria.
ABSTRACT
Edge Computing (EC) is a new architecture that extends Cloud Computing (CC) services closer to data sources. EC combined with Deep Learning (DL) is a promising technology and is widely used in several applications. However, in conventional DL architectures with EC enabled, data producers must frequently send and share data with third parties, edge or cloud servers, to train their models. This architecture is often impractical due to high bandwidth requirements, legal restrictions, and privacy vulnerabilities. The Federated Learning (FL) concept has recently emerged as a promising solution for mitigating the problems of unwanted bandwidth loss, data privacy, and legal compliance. FL can co-train models across distributed clients, such as mobile phones, automobiles, hospitals, and more, through a centralized server, while maintaining data localization. FL can therefore be viewed as a stimulating factor in the EC paradigm, as it enables collaborative learning and model optimization. Although existing surveys have considered applications of FL in EC environments, there has not been any systematic survey discussing FL implementation and challenges in the EC paradigm. This paper aims to provide a systematic survey of the literature on the implementation of FL in EC environments, with a taxonomy to identify advanced solutions and other open problems. In this survey, we first review the fundamentals of EC and FL and then survey existing related work on FL in EC. Furthermore, we describe the protocols, architecture, framework, and hardware requirements for FL implementation in the EC environment. Moreover, we discuss the applications, challenges, and related existing solutions in edge FL. Finally, we detail two relevant case studies of applying FL in EC and identify open issues and potential directions for future research. We believe this survey will help researchers better understand the connection between FL and EC enabling technologies and concepts.
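At the core of FL as described here is federated averaging: clients train locally and the server combines their parameters weighted by local data size. The following framework-free sketch shows only that aggregation step, with hypothetical client updates; it is not tied to any specific FL library or to the survey's case studies.

```python
# Minimal FedAvg aggregation: each edge client reports its locally trained
# parameter vector and sample count; the server returns the weighted mean.
def federated_average(client_updates):
    total = sum(n for _, n in client_updates)
    dim = len(client_updates[0][0])
    return [sum(w[i] * n for w, n in client_updates) / total for i in range(dim)]

# Hypothetical updates from three edge devices: (parameter vector, sample count).
updates = [
    ([0.10, -0.30, 0.50], 200),
    ([0.12, -0.28, 0.47], 150),
    ([0.09, -0.33, 0.52], 650),
]
global_model = federated_average(updates)
print(global_model)       # dominated by the client with the most local data
```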
Subject(s)
Cloud Computing, Privacy, Forecasting, Humans
ABSTRACT
In emergency scenarios where on-site information is completely lacking or the original environmental state has completely changed, autonomous mobile robot swarms are used to quickly build a rescue support system, ensuring the safety of follow-up rescuers and improving rescue efficiency. To address the data security problem caused by the complex and changeable topology of the heterogeneous robot swarm network while building the rescue support system, this paper introduces a decentralized data security communication scheme for heterogeneous robot swarms. First, we built a decentralized network topology model using a base robot, communication robots, and business robots, which ensures the stability of the system. Moreover, based on this topology model, we designed a storage model using a master-slave blockchain method. The master chain is composed of the base robot and the communication robots and mainly stores the digests of robot data held in multiple slave chains, establishing the global data consensus of the system. The slave chains are composed of business robots and communication robots and store all data on the slave chains, establishing the local data consensus of the system. The whole data storage system adopts the Delegated Proof of Stake consensus mechanism to elect proxy nodes that participate in the data consensus tasks and to ensure the data consistency of each robot node in the decentralized network. Additionally, a prototype of the heterogeneous robot swarm system based on the master-slave chains was constructed to verify the effectiveness of the proposed model. The experimental results show that the scheme effectively solves the data security problem caused by the unstable communication links of a heterogeneous robot swarm system.
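The master-slave storage model can be illustrated by its digest-anchoring step: a slave chain keeps the full business data locally and commits only a digest of its head block to the master chain. The sketch below is a deliberately simplified illustration, with hash-linked lists standing in for chains and hypothetical robot payloads; the DPoS election of proxy nodes is not modelled.

```python
import hashlib, json

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

# Slave chain: business/communication robots store full sensor payloads locally.
slave_chain = []
for payload in ({"robot": "biz-01", "temp": 21.4}, {"robot": "biz-02", "gas": 0.03}):
    prev = block_hash(slave_chain[-1]) if slave_chain else "0" * 64
    slave_chain.append({"prev": prev, "data": payload})

# Master chain: base/communication robots store only digests of slave-chain heads,
# giving the system a global consensus over compact summaries.
master_chain = []
digest = block_hash(slave_chain[-1])
prev = block_hash(master_chain[-1]) if master_chain else "0" * 64
master_chain.append({"prev": prev, "slave_head_digest": digest})

# Any robot can later verify that a slave chain matches what the master recorded.
print(block_hash(slave_chain[-1]) == master_chain[-1]["slave_head_digest"])  # True
```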