Results 1 - 20 of 132

1.
J Exp Biol; 227(9), 2024 May 01.
Article in English | MEDLINE | ID: mdl-38686556

ABSTRACT

The ease with which scientific data, particularly certain types of raw data in experimental biology, can be fabricated without trace begs urgent attention. This is thought to be a widespread problem across the academic world, where published results are the major currency, incentivizing publication of (usually positive) results at the cost of lax scientific rigor and even fraudulent data. Although solutions to improve data sharing and methodological transparency are increasingly being implemented, the inability to detect dishonesty within raw data remains an inherent flaw in the way in which we judge research. We therefore propose that one solution would be the development of a non-modifiable raw data format that could be published alongside scientific results; a format that would enable data authentication from the earliest stages of experimental data collection. A further extension of this tool could allow changes to the initial original version to be tracked, so every reviewer and reader could follow the logical footsteps of the author and detect unintentional errors or intentional manipulations of the data. Were such a tool to be developed, we would not advocate its use as a prerequisite for journal submission; rather, we envisage that authors would be given the option to provide such authentication. Only authors who did not manipulate or fabricate their data can provide the original data without risking discovery, so the mere choice to do so already increases their credibility (much like 'honest signaling' in animals). We strongly believe that such a tool would enhance data honesty and encourage more reliable science.
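As an illustration of the kind of tamper-evident format the authors envisage (this is a minimal sketch, not their proposed tool), an append-only log can hash-chain each record to its predecessor so that any retroactive edit breaks every subsequent digest:

```python
import hashlib
import json
import time

class AppendOnlyLog:
    """Toy tamper-evident log: each record is hash-chained to its
    predecessor, so retroactive edits invalidate all later digests."""

    def __init__(self):
        self.records = []
        self._last_digest = "0" * 64  # genesis value

    def append(self, payload: dict) -> str:
        record = {"timestamp": time.time(), "payload": payload,
                  "prev": self._last_digest}
        digest = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        self.records.append((record, digest))
        self._last_digest = digest
        return digest

    def verify(self) -> bool:
        prev = "0" * 64
        for record, digest in self.records:
            if record["prev"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(record, sort_keys=True).encode()).hexdigest()
            if recomputed != digest:
                return False
            prev = digest
        return True

log = AppendOnlyLog()
log.append({"sample": 1, "mass_g": 12.4})
log.append({"sample": 2, "mass_g": 11.9})
assert log.verify()
log.records[0][0]["payload"]["mass_g"] = 15.0  # retroactive manipulation
assert not log.verify()
```

A real tool would additionally have to bind the chain to the acquisition instrument and publish the digests externally, but the chaining step above is what makes silent edits detectable.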


Subjects
Scientific Misconduct, Information Dissemination/methods, Publishing/standards
2.
BMC Med Res Methodol; 24(1): 201, 2024 Sep 12.
Article in English | MEDLINE | ID: mdl-39266975

ABSTRACT

BACKGROUND: Clinical trials play a crucial role in biomedical research, and it is important to register them in public registries to ensure transparency and prevent research waste. In this study, we wished to determine what steps need to be taken to identify every clinical trial run in India that has been registered in any of the (non-Indian) World Health Organization-recognised primary registries. Of the 16 registries, we studied all except that of the European Union, which will be studied separately. METHODS: Two methodologies were employed for each registry, except for four that did not facilitate one or the other method. Methodology A involved downloading all the records in a registry and querying them. Methodology B involved conducting a search via the registry website. RESULTS: Only four registries provided consistent results with both methodologies. Seven registries had different results from the two methodologies. Of these, in four cases, in Methodology A one field indicated that the study ran in India, while another indicated otherwise. CONCLUSIONS: The above-mentioned ambiguities should be addressed by the concerned registries. Overall, this study reinforces the need for improved data accuracy and transparency in clinical trial registries and emphasizes the importance of resolving complications faced by users while navigating the registries. Ensuring accurate and comprehensive registration of clinical trials is essential for meta-research and the use of such data by a variety of stakeholders.
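A minimal sketch of Methodology A, assuming a hypothetical bulk export with made-up field names such as recruitment_countries and sponsor_country (real registries differ in file formats and fields):

```python
import pandas as pd

# Hypothetical bulk export; real registries differ in format and field names.
records = pd.read_csv("registry_full_export.csv")

# Methodology A: query the downloaded records for trials run in India.
in_india_by_site = records["recruitment_countries"].str.contains("India", na=False)
in_india_by_sponsor = records["sponsor_country"].eq("India")

# Flag records where the two fields disagree - the ambiguity the study reports.
conflicting = records[in_india_by_site != in_india_by_sponsor]
print(f"{in_india_by_site.sum()} trials list an Indian site; "
      f"{len(conflicting)} records have contradictory fields")
```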


Subjects
Clinical Trials as Topic, Registries, Registries/statistics & numerical data, India, Humans, Clinical Trials as Topic/methods, Clinical Trials as Topic/statistics & numerical data, Clinical Trials as Topic/standards, Cross-Sectional Studies, Biomedical Research/statistics & numerical data, Biomedical Research/methods, Biomedical Research/standards, Data Accuracy
3.
J Public Health (Oxf); 46(3): e483-e493, 2024 Aug 25.
Article in English | MEDLINE | ID: mdl-38693873

ABSTRACT

BACKGROUND: Public health surveillance is vital for monitoring and controlling disease spread. In the Philippines, an effective surveillance system is crucial for managing diverse infectious diseases. The Newcomb-Benford Law (NBL) is a statistical tool known for anomaly detection in various datasets, including those in public health. METHODS: Using Philippine epidemiological data from 2019 to 2023, this study applied NBL analysis. Diseases included acute flaccid paralysis, diphtheria, measles, rubella, neonatal tetanus, pertussis, chikungunya, dengue, leptospirosis and others. The analysis involved Chi-square tests, Mantissa Arc tests, Mean Absolute Deviation (MAD) and Distortion Factor calculations. RESULTS: Most diseases exhibited nonconformity to NBL, except for measles. MAD consistently indicated nonconformity, highlighting potential anomalies. Rabies consistently showed substantial deviations, while leptospirosis exhibited closer alignment, especially in 2021. Annual variations in disease deviations were notable, with acute meningitis encephalitis syndrome in 2019 and influenza-like illness in 2023 having the highest deviations. CONCLUSIONS: The study provides practical insights for improving Philippine public health surveillance. Despite some diseases showing conformity, deviations suggest data quality issues. Enhancing the PIDSR, especially in diseases with consistent nonconformity, is crucial for accurate monitoring and response. The NBL's versatility across diverse domains emphasizes its utility for ensuring data integrity and quality assurance.
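A minimal sketch of two of the study's checks, the chi-square test and MAD against the first-digit Newcomb-Benford distribution, using illustrative counts rather than the study's data:

```python
import numpy as np
from scipy import stats

def benford_check(values):
    """Compare first-digit frequencies of reported counts with the
    Newcomb-Benford distribution via chi-square and MAD."""
    digits = np.array([int(str(abs(v)).lstrip("0.")[0]) for v in values if v != 0])
    observed = np.array([(digits == d).mean() for d in range(1, 10)])
    expected = np.log10(1 + 1 / np.arange(1, 10))   # Benford proportions
    chi2, p = stats.chisquare(observed * len(digits), expected * len(digits))
    mad = np.abs(observed - expected).mean()
    return chi2, p, mad

# Weekly case counts (illustrative numbers only)
cases = [1023, 87, 134, 2, 19, 311, 45, 1207, 66, 93, 158, 27]
chi2, p, mad = benford_check(cases)
# Nigrini's rule of thumb: first-digit MAD above ~0.015 suggests nonconformity
print(f"chi2={chi2:.2f}, p={p:.3f}, MAD={mad:.4f}")
```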


Subjects
Public Health Surveillance, Humans, Philippines/epidemiology, Public Health Surveillance/methods, Communicable Diseases/epidemiology
4.
BMC Med Inform Decis Mak; 24(1): 245, 2024 Sep 03.
Article in English | MEDLINE | ID: mdl-39227951

ABSTRACT

BACKGROUND: The integrity of clinical research and machine learning models in healthcare heavily relies on the quality of underlying clinical laboratory data. However, the preprocessing of this data to ensure its reliability and accuracy remains a significant challenge due to variations in data recording and reporting standards. METHODS: We developed lab2clean, a novel algorithm aimed at automating and standardizing the cleaning of retrospective clinical laboratory results data. lab2clean was implemented as two R functions specifically designed to enhance data conformance and plausibility by standardizing result formats and validating result values. The functionality and performance of the algorithm were evaluated using two extensive electronic medical record (EMR) databases, encompassing various clinical settings. RESULTS: lab2clean effectively reduced the variability of laboratory results and identified potentially erroneous records. Upon deployment, it demonstrated effective and fast standardization and validation of substantial laboratory data records. The evaluation highlighted significant improvements in the conformance and plausibility of lab results, confirming the algorithm's efficacy in handling large-scale data sets. CONCLUSIONS: lab2clean addresses the challenge of preprocessing and cleaning clinical laboratory data, a critical step in ensuring high-quality data for research outcomes. It offers a straightforward, efficient tool for researchers, improving the quality of clinical laboratory data, a major portion of healthcare data, thereby enhancing the reliability and reproducibility of clinical research outcomes and clinical machine learning models. Future developments aim to broaden its functionality and accessibility, solidifying its vital role in healthcare data management.
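lab2clean itself is distributed as R functions; the following Python sketch only illustrates the two steps the abstract names, conformance (format standardization) and plausibility (value validation), with made-up analyte names and bounds:

```python
import re

# Illustrative plausibility bounds; real limits are analyte- and unit-specific.
PLAUSIBLE_RANGE = {"glucose_mg_dl": (10, 2000), "potassium_mmol_l": (1.0, 10.0)}

def standardize_result(raw: str):
    """Conformance step: normalize free-text results to a canonical form."""
    s = raw.strip().upper().replace(",", ".")   # decimal-comma locales
    s = re.sub(r"^[<>]=?\s*", "", s)            # drop censoring flags
    try:
        return f"{float(s):g}"
    except ValueError:
        return {"POS": "positive", "NEG": "negative"}.get(s)  # else unparseable

def is_plausible(test: str, value: float) -> bool:
    """Plausibility step: flag values outside defensible physiological limits."""
    lo, hi = PLAUSIBLE_RANGE[test]
    return lo <= value <= hi

print(standardize_result(" 5,4 "))              # -> '5.4'
print(is_plausible("potassium_mmol_l", 55.0))   # -> False (likely unit error)
```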


Subjects
Algorithms, Electronic Health Records, Humans, Retrospective Studies, Electronic Health Records/standards, Clinical Laboratories/standards
5.
Sensors (Basel); 24(14), 2024 Jul 14.
Article in English | MEDLINE | ID: mdl-39065957

ABSTRACT

Decentralized applications (DApps) built on blockchain technology offer a promising solution to issues caused by centralization. However, traditional DApps leveraging off-chain storage face performance challenges due to factors such as storage location, network speed, and hardware conditions. For example, decentralized storage solutions such as IPFS suffer from diminished download performance due to I/O constraints influenced by data access patterns. Aiming to enhance the Quality of Service (QoS) in DApps built on blockchain technology, this paper proposes a blockchain node-based distributed caching architecture that guarantees real-time responsiveness for users. The proposed architecture ensures data integrity and user data ownership through blockchain while maintaining cache data consistency through local blockchain data. By implementing local cache clusters on blockchain nodes, our system achieves rapid response times. Additionally, attribute-based encryption is applied to stored content, enabling secure content sharing and access control, which prevents data leakage and unauthorized access in unreliable off-chain storage environments. Comparative analysis shows that our proposed system achieves a reduction in request processing latency of over 89% compared to existing off-chain solutions, maintaining cache data consistency and achieving response times within 65 ms. This demonstrates the model's effectiveness in providing secure and high-performance DApp solutions.
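A minimal sketch of the consistency idea, with a plain dict standing in for the on-chain index maintained by the local node and no attribute-based encryption:

```python
import hashlib

# On-chain index (kept by the local node): content ID -> expected digest.
# A plain dict stands in here for the ledger state the paper describes.
chain_index = {}
cache = {}

def publish(content_id: str, data: bytes):
    chain_index[content_id] = hashlib.sha256(data).hexdigest()

def fetch(content_id: str, slow_storage) -> bytes:
    """Serve from the node-local cache; on a miss, pull from off-chain
    storage and verify the payload against the on-chain digest."""
    if content_id in cache:
        return cache[content_id]
    data = slow_storage(content_id)             # e.g., an IPFS gateway call
    if hashlib.sha256(data).hexdigest() != chain_index[content_id]:
        raise ValueError("integrity check failed: off-chain copy was altered")
    cache[content_id] = data
    return data

publish("report-42", b"sensor readings ...")
print(fetch("report-42", lambda cid: b"sensor readings ..."))
```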

6.
Sensors (Basel); 24(7), 2024 Mar 29.
Article in English | MEDLINE | ID: mdl-38610418

ABSTRACT

The technology landscape has been dynamically reshaped by the rapid growth of the Internet of Things, introducing an era where everyday objects, equipped with smart sensors and connectivity, seamlessly interact to create intelligent ecosystems. IoT devices are highly heterogeneous in terms of software and hardware, and many of them are severely constrained. This heterogeneity and potentially constrained nature creates new challenges in terms of security, privacy, and data management. This work proposes a Monitoring-as-a-Service platform for both monitoring and management purposes, offering a comprehensive solution for collecting, storing, and processing monitoring data from heterogeneous IoT networks for the support of diverse IoT-based applications. To ensure a flexible and scalable solution, we leverage the FIWARE open-source framework, also incorporating blockchain and smart contract technologies to establish a robust integrity verification mechanism for aggregated monitoring and management data. Additionally, we apply automated workflows to filter and label the collected data systematically. Moreover, we provide thorough evaluation results in terms of CPU and RAM utilization and average service latency.
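A hedged sketch of anchoring an integrity digest next to a monitoring record in a FIWARE Orion Context Broker via NGSI-v2; the endpoint, entity id, and attribute names are illustrative assumptions, not taken from the paper:

```python
import hashlib
import json
import requests

# Default Orion NGSI-v2 endpoint; adjust host/port for your deployment.
ORION = "http://localhost:1026/v2/entities"

reading = {"deviceId": "sensor-07", "cpu": 41.2, "ram": 67.8}
digest = hashlib.sha256(json.dumps(reading, sort_keys=True).encode()).hexdigest()

entity = {
    "id": "urn:ngsi-ld:Monitor:sensor-07",   # illustrative entity id
    "type": "MonitoringRecord",
    "cpu": {"value": reading["cpu"], "type": "Number"},
    "ram": {"value": reading["ram"], "type": "Number"},
    # Digest to be cross-checked later against the copy anchored on-chain.
    "sha256": {"value": digest, "type": "Text"},
}
resp = requests.post(ORION, json=entity)
resp.raise_for_status()
```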

7.
J Trauma Dissociation; 1-14, 2024 Oct 10.
Article in English | MEDLINE | ID: mdl-39390771

ABSTRACT

The Dissociative Experiences Scale (DES) is the most widely used self-report measure of dissociation but lacks a validity scale. Abu-Rus et al. (2020) created the DES-V by embedding atypical and inconsistency items in the DES, ultimately concluding that atypicality demonstrated the greatest ability to differentiate honest respondents from feigners. Among their study limitations, Abu-Rus et al. noted the homogeneous nature of their clinical group (i.e., largely comprising individuals with PTSD) and the potential need to refine the existing atypicality items for a more heterogeneous dissociation population. The current study aimed to refine the DES-V by enlisting dissociation experts to improve the believability of the atypical items (while simultaneously ensuring they did not betoken any actual dissociative symptomology) and by supplementing the online sample with a clinical sample that included a broad range of dissociative disorders. Data cleaning comprised eight different techniques, to better ensure the validity of the online sample. Honest and Feigning groups completed the assessments through Amazon's Mechanical Turk; the clinical dissociative disorder group completed hard-copy versions. The atypicality scale discriminated the three groups well, with the Feigning group scoring significantly higher than both honest groups (online and clinical). The mean atypicality scores of the two honest groups did not differ significantly. In addition, the scale showed incremental validity over the original DES-V in a logistic regression predicting honest and feigning participants. These robust results suggest that the revised DES-V could provide researchers with a valuable tool for validating online samples with greater precision - an increasingly vital need in light of the growing reliance on online samples.
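A minimal sketch of the incremental-validity comparison described above, using simulated scores (not the study's data) and scikit-learn:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 200
# Simulated scores only. Feigners (y=1) inflate both ordinary DES items
# and the implausible "atypical" items.
y = np.repeat([0, 1], n // 2)
des = rng.normal(loc=np.where(y == 1, 55, 30), scale=15)
atypical = rng.normal(loc=np.where(y == 1, 40, 8), scale=10)

base = LogisticRegression().fit(des.reshape(-1, 1), y)
full = LogisticRegression().fit(np.column_stack([des, atypical]), y)

# Incremental validity: does atypicality improve discrimination over DES alone?
print("AUC, DES only:",
      roc_auc_score(y, base.predict_proba(des.reshape(-1, 1))[:, 1]))
print("AUC, DES + atypicality:",
      roc_auc_score(y, full.predict_proba(np.column_stack([des, atypical]))[:, 1]))
```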

8.
Entropy (Basel); 26(5), 2024 Apr 28.
Article in English | MEDLINE | ID: mdl-38785625

ABSTRACT

Categorical data analysis of 2 × 2 contingency tables is extremely common, not least because they provide risk difference, risk ratio, odds ratio, and log odds statistics in medical research. A χ2 test analysis is most often used, although some researchers use likelihood ratio test (LRT) analysis. Does it matter which test is used? This paper uses a review of the literature, an examination of the theoretical foundations, and analyses of simulations and empirical data to argue that only the LRT should be used when we are interested in testing whether the binomial proportions are equal. This so-called test of independence is by far the most popular, meaning the χ2 test is widely misused. By contrast, the χ2 test should be reserved for cases where the data appear to match a particular hypothesis (e.g., the null hypothesis) too closely, that is, where the variance is of interest and is less than expected. Low variance can be of interest in various scenarios, particularly in investigations of data integrity. Finally, it is argued that the evidential approach provides a consistent and coherent method that avoids the difficulties posed by significance testing. The approach facilitates the calculation of appropriate log likelihood ratios to suit our research aims, whether this is to test the proportions or to test the variance. The conclusions from this paper apply to larger contingency tables, including multi-way tables.
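Both tests are available from scipy's chi2_contingency, which makes the paper's distinction easy to act on; a minimal example on an illustrative 2 × 2 table:

```python
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[30, 70],    # treatment: events / non-events
                  [45, 55]])   # control

# Pearson chi-square (the commonly used test)
chi2, p_chi2, dof, _ = chi2_contingency(table, correction=False)

# Likelihood ratio test (G-test), the one the paper recommends when
# testing whether the two binomial proportions are equal
g, p_g, _, _ = chi2_contingency(table, correction=False,
                                lambda_="log-likelihood")

print(f"Pearson chi2 = {chi2:.3f} (p = {p_chi2:.4f})")
print(f"LRT G = {g:.3f} (p = {p_g:.4f})")
```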

9.
Sensors (Basel); 23(9), 2023 Apr 26.
Article in English | MEDLINE | ID: mdl-37177511

ABSTRACT

With the rapid development of cloud storage and cloud computing technology, users tend to store data in the cloud for more convenient services. To ensure the integrity of cloud data, scholars have proposed cloud data integrity verification schemes to protect users' data security. Internet of Things storage environments, particularly those holding big data and medical big data, place a stronger demand on data integrity verification schemes while also requiring them to be more comprehensive. Existing data integrity verification schemes are mostly designed for cloud storage and cannot be applied successfully to IoT environments storing big data and medical big data. To solve this problem, we combined the characteristics and requirements of IoT data storage and medical data storage and designed an efficient SM2-based offline/online data integrity verification scheme. The scheme uses the SM4 block cipher to protect the privacy of the data content and a dynamic hash table to support dynamic updating of data. Based on the SM2 signature algorithm, it also supports offline tag generation and batch audits, reducing the computational burden on users. Security proofs and efficiency analysis show the scheme to be safe and efficient and usable in a variety of application scenarios.
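A toy sketch of the dynamic hash table idea, with SHA-256 standing in for the SM-series primitives of the actual scheme:

```python
import hashlib

# Toy dynamic hash table (DHT) for dynamic data updates: one entry per data
# block, carrying a version counter and block digest. SHA-256 stands in for
# the SM-series primitives used by the actual scheme.
dht = {}

def upload(index: int, block: bytes):
    dht[index] = {"version": 1, "digest": hashlib.sha256(block).hexdigest()}

def update(index: int, new_block: bytes):
    entry = dht[index]
    entry["version"] += 1    # dynamic update without re-uploading everything
    entry["digest"] = hashlib.sha256(new_block).hexdigest()

def audit(index: int, block_from_cloud: bytes) -> bool:
    return hashlib.sha256(block_from_cloud).hexdigest() == dht[index]["digest"]

upload(0, b"ECG segment 0001")
update(0, b"ECG segment 0001 (corrected)")
print(audit(0, b"ECG segment 0001"))              # False - stale copy detected
print(audit(0, b"ECG segment 0001 (corrected)"))  # True
```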

10.
Sensors (Basel); 23(21), 2023 Nov 03.
Article in English | MEDLINE | ID: mdl-37960646

ABSTRACT

Biomedical Microelectromechanical Systems (BioMEMS) serve as a crucial catalyst in enhancing IoT communication security and safeguarding smart healthcare systems. Situated at the nexus of advanced technology and healthcare, BioMEMS are instrumental in pioneering personalized diagnostics, monitoring, and therapeutic applications. Nonetheless, this integration brings forth a complex array of security and privacy challenges intrinsic to IoT communications within smart healthcare ecosystems, demanding comprehensive scrutiny. In this manuscript, we embark on an extensive analysis of the intricate security terrain associated with IoT communications in the realm of BioMEMS, addressing a spectrum of vulnerabilities that spans cyber threats, data manipulation, and interception of communications. The integration of real-world case studies serves to illuminate the direct repercussions of security breaches within smart healthcare systems, highlighting the imperative to safeguard both patient safety and the integrity of medical data. We delve into a suite of security solutions, encompassing rigorous authentication processes, data encryption, designs resistant to attacks, and continuous monitoring mechanisms, all tailored to fortify BioMEMS in the face of ever-evolving threats within smart healthcare environments. Furthermore, the paper underscores the vital role of ethical and regulatory considerations, emphasizing the need to uphold patient autonomy, ensure the confidentiality of data, and maintain equitable access to healthcare in the context of IoT communication security. Looking forward, we explore the impending landscape of BioMEMS security as it intertwines with emerging technologies such as AI-driven diagnostics, quantum computing, and genomic integration, anticipating potential challenges and strategizing for the future. In doing so, this paper highlights the paramount importance of adopting an integrated approach that seamlessly blends technological innovation, ethical foresight, and collaborative ingenuity, thereby steering BioMEMS towards a secure and resilient future within smart healthcare systems, in the ambit of IoT communication security and protection.


Subjects
Microelectromechanical Systems, Privacy, Humans, Computing Methodologies, Ecosystem, Quantum Theory, Communication, Delivery of Health Care, Computer Security
11.
Sensors (Basel); 23(17), 2023 Aug 28.
Article in English | MEDLINE | ID: mdl-37687931

ABSTRACT

Precision medicine has emerged as a transformative approach to healthcare, aiming to deliver personalized treatments and therapies tailored to individual patients. However, the realization of precision medicine relies heavily on the availability of comprehensive and diverse medical data. In this context, blockchain-enabled federated learning, coupled with electronic medical records (EMRs), presents a groundbreaking solution to unlock revolutionary insights in precision medicine. This abstract explores the potential of blockchain technology to empower precision medicine by enabling secure and decentralized data sharing and analysis. By leveraging blockchain's immutability, transparency, and cryptographic protocols, federated learning can be conducted on distributed EMR datasets without compromising patient privacy. The integration of blockchain technology ensures data integrity, traceability, and consent management, thereby addressing critical concerns associated with data privacy and security. Through the federated learning paradigm, healthcare institutions and research organizations can collaboratively train machine learning models on locally stored EMR data, without the need for data centralization. The blockchain acts as a decentralized ledger, securely recording the training process and aggregating model updates while preserving data privacy at its source. This approach allows the discovery of patterns, correlations, and novel insights across a wide range of medical conditions and patient populations. By unlocking revolutionary insights through blockchain-enabled federated learning and EMRs, precision medicine can revolutionize healthcare delivery. This paradigm shift has the potential to improve diagnosis accuracy, optimize treatment plans, identify subpopulations for clinical trials, and expedite the development of novel therapies. Furthermore, the transparent and auditable nature of blockchain technology enhances trust among stakeholders, enabling greater collaboration, data sharing, and collective intelligence in the pursuit of advancing precision medicine. In conclusion, this abstract highlights the transformative potential of blockchain-enabled federated learning in empowering precision medicine. By unlocking revolutionary insights from diverse and distributed EMR datasets, this approach paves the way for a future where healthcare is personalized, efficient, and tailored to the unique needs of each patient.


Subjects
Blockchain, Precision Medicine, Humans, Electronic Health Records, Information Dissemination, Power, Psychological
12.
Sensors (Basel); 23(17), 2023 Aug 30.
Article in English | MEDLINE | ID: mdl-37687988

ABSTRACT

The trustworthiness of a system is not just about proving the identity or integrity of the hardware but also extends to the data, control, and management planes of communication between devices and the software they are running. This trust in data and device integrity is desirable for Internet of Things (IoT) systems, especially in critical environments. In this study, we developed a security framework, IoTAttest, for building IoT systems that leverage the Trusted Platform Module 2.0 and remote attestation technologies to enable integrity verification of IoT devices' collected data and control-plane traffic. After presenting the features and reference architecture of IoTAttest, we evaluated its privacy preservation and validity through the implementation of two proof-of-concept IoT applications that were designed by two teams of university students based on the reference architecture. After the development, the developers answered open questions regarding their experience and perceptions of the framework's usability, limitations, scalability, extensibility, potential, and security. The results indicate that IoTAttest can be used to develop IoT systems with effective attestation to achieve device and data integrity. The proof-of-concept solutions' outcomes illustrate the functionalities and performance of the IoT framework. The feedback from the proof-of-concept developers affirms that they perceived the framework as usable, scalable, extensible, and secure.

13.
Sensors (Basel); 23(3), 2023 Feb 02.
Article in English | MEDLINE | ID: mdl-36772662

ABSTRACT

Most existing data integrity auditing protocols in cloud storage rely on proofs of probabilistic data possession, so the sampling rate of data integrity verification is kept low to avoid imposing expensive costs on the auditor. In a multi-cloud environment, however, the amount of stored data is huge, so a higher sampling rate is needed, which in turn raises the auditor's costs. Therefore, this paper proposes a blockchain-based distributed data integrity verification protocol for multi-cloud environments that enables data verification using multiple verifiers. The proposed scheme aims to increase the sampling rate of data verification without significantly increasing the costs. The performance analysis shows that the protocol achieves lower time consumption for verification tasks with multiple verifiers than with a single verifier. Furthermore, utilizing multiple verifiers also decreases each verifier's computation and communication costs.
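A minimal sketch of the sampling arithmetic: a random sample of blocks is split across verifiers so that raising the overall sampling rate adds little per-verifier cost. Plain hash checks stand in for the protocol's cryptographic proofs:

```python
import hashlib
import random

def assign_samples(n_blocks: int, sample_rate: float, verifiers):
    """Draw a random sample of block indices and split it across verifiers,
    so a higher overall sampling rate costs each verifier little."""
    sample = random.sample(range(n_blocks), int(n_blocks * sample_rate))
    return {v: sample[i::len(verifiers)] for i, v in enumerate(verifiers)}

def verify(blocks, expected, indices) -> bool:
    """Check each sampled block against its expected digest."""
    return all(hashlib.sha256(blocks[i]).hexdigest() == expected[i]
               for i in indices)

# With 3 verifiers, a 30% sampling rate means ~10% of blocks per verifier.
tasks = assign_samples(n_blocks=10_000, sample_rate=0.30,
                       verifiers=["verifier-A", "verifier-B", "verifier-C"])
print({v: len(ix) for v, ix in tasks.items()})
```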

14.
Sensors (Basel); 24(1), 2023 Dec 23.
Article in English | MEDLINE | ID: mdl-38202948

ABSTRACT

The deployment of Electronic Toll Collection (ETC) gantry systems marks a transformative advancement in the journey toward an interconnected and intelligent highway traffic infrastructure. The integration of these systems signifies a leap forward in streamlining toll collection and minimizing environmental impact through decreased idle times. To solve the problems of missing sensor data in an ETC gantry system with large volumes and insufficient traffic detection among ETC gantries, this study constructs a high-order tensor model based on the analysis of the high-dimensional, sparse, large-volume, and heterogeneous characteristics of ETC gantry data. In addition, a missing data completion method for the ETC gantry data is proposed based on an improved dynamic tensor flow model. This study approximates the decomposition of neighboring tensor blocks in the high-order tensor model of the ETC gantry data based on tensor Tucker decomposition and the Laplacian matrix. This method captures the correlations among space, time, and user information in the ETC gantry data. Case studies demonstrate that our method enhances ETC gantry data quality across various rates of missing data while also reducing computational complexity. For instance, at a less than 5% missing data rate, our approach reduced the RMSE for time vehicle distance by 0.0051, for traffic volume by 0.0056, and for interval speed by 0.0049 compared to the MATRIX method. These improvements not only indicate a potential for more precise traffic data analysis but also add value to the application of ETC systems and contribute to theoretical and practical advancements in the field.
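A minimal sketch of missing-data completion by iterative Tucker decomposition, using the tensorly library; the paper's Laplacian-based refinements and real gantry data are not reproduced here:

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

rng = np.random.default_rng(0)
X = rng.random((30, 24, 10))                  # e.g., gantry x hour x lane
mask = rng.random(X.shape) > 0.05             # ~5% of entries missing
X_obs = np.where(mask, X, np.nan)

# Initialize the gaps with the global mean, then alternate between
# low-rank Tucker reconstruction and refilling only the missing entries.
X_hat = np.where(mask, X_obs, np.nanmean(X_obs))
for _ in range(20):
    core, factors = tucker(tl.tensor(X_hat), rank=[5, 5, 3])
    recon = tl.to_numpy(tl.tucker_to_tensor((core, factors)))
    X_hat = np.where(mask, X, recon)          # keep observed, refill missing

rmse = np.sqrt(np.mean((X_hat[~mask] - X[~mask]) ** 2))
print(f"RMSE on missing entries: {rmse:.4f}")
```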

15.
Sensors (Basel); 23(4), 2023 Feb 07.
Article in English | MEDLINE | ID: mdl-36850446

ABSTRACT

With the increase in low-power wireless communication solutions, the deployment of Wireless Sensor Networks is becoming commonplace, especially to implement Cyber-Physical Systems. The latter can be used for Structural Health Monitoring applications in critical environments. To ensure long-term deployment, battery-free and energy-autonomous wireless sensors are designed and can be powered by ambient energy harvesting or Wireless Power Transfer. Because of the criticality of the applications and the limited resources of the nodes, security is generally relegated to the background, which leads to vulnerabilities in the entire system. In this paper, a security analysis is presented based on an example: the implementation of communicating reinforced concrete using a network of battery-free nodes. First, the employed wireless communication protocols are presented with regard to their native security features, main vulnerabilities, and most common attacks. Then, the security analysis is carried out for the targeted implementation, especially by defining the main attack hypotheses and their consequences. Finally, solutions to secure the data and the network are compared. From a global point of view, this security analysis must be initiated at project definition and continued throughout the deployment to allow the use of adapted, updatable, and upgradable solutions.

16.
J Clin Periodontol; 49(2): 144-152, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34747036

ABSTRACT

AIM: Analysis of the distribution of p-values of differences in continuous baseline variables between test and control groups after randomization provides evidence of unintentional error, non-random sampling, or data fabrication in randomized controlled trials (RCTs). We assessed evidence of highly unusual distributions of baseline characteristics of subjects enrolled in clinical trials in implant dentistry. MATERIALS AND METHODS: RCTs published between 2005 and 2020 were systematically searched in Pubmed, Embase, and Cochrane databases. Baseline patient data were extracted from full-text articles by two independent assessors. The hypothesis of non-random sampling was tested by comparing the expected and the observed distribution of the p-values of differences between test and control groups after randomization. RESULTS: One thousand five hundred and thirty-eight unique RCTs were identified, of which 409 (26.6%) did not report baseline characteristics of the population, and 671 (43.6%) reported data in forms other than mean and standard deviation and could not be used to assess their random sampling. Four hundred and fifty-eight trials with 1449 baseline variables in the form of mean and standard deviation were assessed. The study observed an over-representation of very small p-values [<.001, 1.38%, 95% confidence interval (CI) 0.85-2.12, compared to the expected 0.10%, 95% CI 0.00-0.26]. No evidence of over-representation of larger p-values was observed. Unusual distributions were present in 2.38% of RCTs and were more frequent in non-registered trials, in studies supported by non-industry funding, and in multi-centre RCTs. CONCLUSIONS: The inability to assess random sampling due to insufficient reporting in 26.6% of trials requires attention. In trials reporting suitable baseline data, unusual distributions were uncommon, and no evidence of data fabrication was detected, but there was evidence of non-random sampling. Continued efforts are necessary to ensure high integrity and trust in the evidence base of the field.
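A minimal sketch of the method: recompute baseline p-values from reported summary statistics and compare their distribution with the uniform distribution expected under proper randomization. The rows below are illustrative, not extracted from the review:

```python
import numpy as np
from scipy.stats import ttest_ind_from_stats

baseline = [
    # (mean_t, sd_t, n_t, mean_c, sd_c, n_c) per reported baseline variable
    (4.1, 1.2, 30, 4.0, 1.1, 31),
    (62.5, 9.8, 30, 61.9, 10.2, 31),
    (2.9, 0.8, 30, 2.9, 0.7, 31),
]
pvals = np.array([
    ttest_ind_from_stats(m1, s1, n1, m2, s2, n2).pvalue
    for m1, s1, n1, m2, s2, n2 in baseline
])

# Under uniformity, p < .001 should occur for ~0.1% of variables; a large
# excess (1.38% in the review) signals non-random sampling or errors.
print(f"share with p < .001: {np.mean(pvals < 0.001):.2%}")
```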


Subjects
Clinical Trials as Topic, Dentistry, Research Design, Humans, Risk Factors
17.
Pediatr Radiol; 52(11): 2111-2119, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35790559

ABSTRACT

The integration of human and machine intelligence promises to profoundly change the practice of medicine. The rapidly increasing adoption of artificial intelligence (AI) solutions highlights its potential to streamline physician work and optimize clinical decision-making, also in the field of pediatric radiology. Large imaging databases are necessary for training, validating and testing these algorithms. To better promote data accessibility in multi-institutional AI-enabled radiologic research, these databases centralize the large volumes of data required to effect accurate models and outcome predictions. However, such undertakings must consider the sensitivity of patient information and therefore utilize requisite data governance measures to safeguard data privacy and security, to recognize and mitigate the effects of bias and to promote ethical use. In this article we define data stewardship and data governance, review their key considerations and applicability to radiologic research in the pediatric context, and consider the associated best practices along with the ramifications of poorly executed data governance. We summarize several adaptable data governance frameworks and describe strategies for their implementation in the form of distributed and centralized approaches to data management.


Subjects
Artificial Intelligence, Radiology, Algorithms, Child, Databases, Factual, Humans, Radiologists, Radiology/methods
18.
Sensors (Basel); 22(17), 2022 Aug 29.
Article in English | MEDLINE | ID: mdl-36080969

ABSTRACT

Data integrity is a prerequisite for ensuring the availability of IoT data and has received extensive attention in the field of IoT big data security. Stream computing systems are widely used in the IoT field for real-time data acquisition and computing. However, the real-time nature, volatility, suddenness, and disorder of stream data make data integrity verification difficult, and according to our survey there is no mature and universal solution. To solve this issue, we constructed a data integrity verification scheme for stream computing systems (S-DIV) using homomorphic message authentication codes and the pseudo-random function security assumption. Furthermore, based on S-DIV, an external data integrity tracking and verification system is constructed to track and analyze the message data stream in real time. By verifying message integrity during the whole life cycle, data corruption or data loss can be detected in time, and error alarms and message recovery can be actively implemented. We then conduct a formal security analysis under the standard model and, finally, implement the S-DIV scheme in a simulation environment. Experimental results show that the scheme can guarantee data integrity in acceptable time without affecting the efficiency of the original system.
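A toy linearly homomorphic MAC, the primitive family the scheme builds on; this illustrates only the homomorphic property, not the S-DIV construction or its parameters:

```python
import hashlib

# Toy linearly homomorphic MAC over a prime field:
#   tag(i, m) = k1*m + PRF(k2, i)  (mod p)
# Summing tags authenticates the sum of messages, so an aggregate can be
# checked without re-tagging each message.
P = 2**127 - 1                    # Mersenne prime field
K1, K2 = 1234567, b"stream-key"

def prf(i: int) -> int:
    return int.from_bytes(
        hashlib.sha256(K2 + i.to_bytes(8, "big")).digest(), "big") % P

def tag(i: int, m: int) -> int:
    return (K1 * m + prf(i)) % P

msgs = [17, 42, 99]
tags = [tag(i, m) for i, m in enumerate(msgs)]

# Verifier checks the aggregate against the sum of messages:
agg_ok = (sum(tags) % P ==
          (K1 * sum(msgs) + sum(prf(i) for i in range(len(msgs)))) % P)
print(agg_ok)   # True; any dropped or altered message breaks the identity
```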


Subjects
Big Data, Computer Security, Algorithms
19.
Sensors (Basel); 22(10), 2022 May 12.
Article in English | MEDLINE | ID: mdl-35632091

ABSTRACT

Seismic hazards are typical mining hazards causing dynamic failure of coal and rock mass, which greatly threatens the safety of personnel and equipment. At present, various seismic analysis methods are used to assess seismic risks, but their accuracy is significantly limited by the incompleteness of seismic data. The probability of detecting earthquakes (PDE) method has been proven a powerful means of retrieving missed seismic events and enhancing the integrity of seismic data in mines. However, to date, the reliability of the results of the PDE method has not been assessed, and the highly integrated seismic data have not been linked with the actual hazard potential. To fill these gaps, this paper investigated the impacts of the seismic data volume used for calculation and of modifications to the sensor layout on the reliability and robustness of the PDE method. Event counts and seismic energy were compensated using the PDE method and correlated with strong seismic events. The results indicated that the compensated seismic data located future hazardous events more accurately than before. This research provides references for enhancing the performance of seismic analysis methods in seismic risk assessments.

20.
Sensors (Basel); 22(4), 2022 Feb 13.
Article in English | MEDLINE | ID: mdl-35214350

ABSTRACT

Digital healthcare is a composite infrastructure of networking entities that includes Internet of Medical Things (IoMT)-based Cyber-Physical Systems (CPS), base stations, service providers, and other related components. In the recent decade, demand for this emerging technology has gradually increased, with cost-effective results. Although the technology offers extraordinary results, it also presents multifarious security perils that need to be handled effectively to preserve trust among all engaged stakeholders. The literature proposes several authentication and data preservation schemes, but they fail to tackle this issue effectively. Keeping these constraints in view, in this paper we propose a lightweight authentication and data preservation scheme for IoT-based CPS that utilizes deep learning (DL) to facilitate decentralized authentication among legal devices. With decentralized authentication, we reduce the validation latency between pairing devices and thereby improve communication statistics. The experimental results were compared with benchmark models to establish the significance of our model; during the evaluation phase, the proposed model showed considerable improvement over the benchmarks on the compared parameters.


Subjects
Computer Security, Internet of Things, Communication, Delivery of Health Care, Technology