Results 1 - 20 of 187
1.
Sensors (Basel) ; 24(13)2024 Jul 04.
Article in English | MEDLINE | ID: mdl-39001131

ABSTRACT

Due to the uniqueness of the underwater environment, traditional data aggregation schemes face many challenges. Most existing data aggregation solutions do not fully consider node trustworthiness, which may result in the inclusion of falsified data sent by malicious nodes during the aggregation process, thereby affecting the accuracy of the aggregated results. Additionally, because of the dynamically changing nature of the underwater environment, current solutions often lack sufficient flexibility to handle situations such as node movement and network topology changes, significantly impacting the stability and reliability of data transmission. To address the aforementioned issues, this paper proposes a secure data aggregation algorithm based on a trust mechanism. By dynamically adjusting the number and size of node slices based on node trust values and transmission distances, the proposed algorithm effectively reduces network communication overhead and improves the accuracy of data aggregation. Due to the variability in the number of node slices, even if attackers intercept some slices, it is difficult for them to reconstruct the complete data, thereby ensuring data security.
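The slice-based protection described above can be sketched in a few lines. This is a minimal illustration, not the paper's algorithm: the policy mapping trust and distance to a slice count (`slices_for_node`) and its constants are invented for the example.

```python
import random

def slice_datum(value, n_slices, rng):
    """Split a sensor reading into n_slices random shares that sum to value."""
    shares = [rng.uniform(-1, 1) * value for _ in range(n_slices - 1)]
    shares.append(value - sum(shares))
    return shares

def slices_for_node(trust, distance, min_slices=2, max_slices=5):
    """Hypothetical policy: higher trust and shorter distance -> fewer slices,
    which lowers communication overhead for well-behaved, nearby nodes."""
    score = trust / (1.0 + distance)       # crude combined score in (0, 1]
    n = max_slices - round(score * (max_slices - min_slices))
    return max(min_slices, min(max_slices, n))

rng = random.Random(42)
value = 17.3
n = slices_for_node(trust=0.9, distance=0.2)
shares = slice_datum(value, n, rng)
# An aggregator holding all shares recovers the reading exactly; an attacker
# intercepting a strict subset cannot reconstruct the complete datum.
assert abs(sum(shares) - value) < 1e-9
```

Because the slice count varies per node, an eavesdropper does not even know how many shares make up one reading.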

2.
Front Public Health ; 12: 1408222, 2024.
Article in English | MEDLINE | ID: mdl-39005996

ABSTRACT

Understanding the health outcomes of military exposures is of critical importance for Veterans, their health care team, and national leaders. Approximately 43% of Veterans report military exposure concerns to their VA providers. Understanding the causal influences of environmental exposures on health is a complex exposure science task and often requires interpreting multiple data sources; particularly when exposure pathways and multi-exposure interactions are ill-defined, as is the case for complex and emerging military service exposures. Thus, there is a need to standardize clinically meaningful exposure metrics from different data sources to guide clinicians and researchers with a consistent model for investigating and communicating exposure risk profiles. The Linked Exposures Across Databases (LEAD) framework provides a unifying model for characterizing exposures from different exposure databases with a focus on providing clinically relevant exposure metrics. Application of LEAD is demonstrated through comparison of different military exposure data sources: Veteran Military Occupational and Environmental Exposure Assessment Tool (VMOAT), Individual Longitudinal Exposure Record (ILER) database, and a military incident report database, the Explosive Ordnance Disposal Information Management System (EODIMS). This cohesive method for evaluating military exposures leverages established information with new sources of data and has the potential to influence how military exposure data is integrated into exposure health care and investigational models.


Subject(s)
Databases, Factual; Environmental Exposure; Military Personnel; Humans; Military Personnel/statistics & numerical data; Veterans/statistics & numerical data; Common Data Elements; Occupational Exposure; United States
3.
Can J Public Health ; 2024 May 28.
Article in English | MEDLINE | ID: mdl-38806937

ABSTRACT

SETTING: The potential for exposure to indoor radon varies dramatically across British Columbia (BC) due to varied geology. Individuals may struggle to understand their exposure risk and agencies may struggle to understand the value of population-level programs and policies to mitigate risk. INTERVENTION: The BC Centre for Disease Control (BCCDC) established the BC Radon Data Repository (BCRDR) to facilitate radon research, public awareness, and action in the province. The BCRDR aggregates indoor radon measurements collected by government agencies, industry professionals and organizations, and research and advocacy groups. Participation was formalized with a data sharing agreement, which outlines how the BCCDC anonymizes and manages the shared data integrated into the BCRDR. OUTCOMES: The BCRDR currently holds 38,733 measurements from 18 data contributors. The repository continues to grow with new measurements from existing contributors and the addition of new contributors. A prominent use of the BCRDR was to create the online, interactive BC Radon Map, which includes regional concentration summaries, risk interpretation messaging, and health promotion information. Anonymized BCRDR data are also available for external release upon request. IMPLICATIONS: The BCCDC leverages existing radon measurement programs to create a large and integrated database with wide geographic coverage. The development and application of the BCRDR informs public health research and action beyond the BCCDC, and the repository can serve as a model for other regional or national initiatives.



4.
Sensors (Basel) ; 24(7)2024 Mar 25.
Article in English | MEDLINE | ID: mdl-38610301

ABSTRACT

Existing secure data aggregation protocols are weak at eliminating data redundancy and protecting wireless sensor networks (WSNs). Only some existing approaches have addressed this single issue when aggregating data. However, there is a need for a multi-featured protocol that handles the multiple problems of data aggregation, such as energy efficiency, authentication, authorization, and maintaining network security. Given the significant demand for a multi-featured data aggregation protocol, we propose the secure data aggregation using authentication and authorization (SDAAA) protocol to detect malicious attacks, particularly cyberattacks such as Sybil and sinkhole attacks, and to extend network performance. These attacks are more complex to address through existing cryptographic protocols. The proposed SDAAA protocol comprises a node authorization algorithm that permits only legitimate nodes to communicate within the network. The SDAAA protocol's methods help improve quality of service (QoS) parameters. Furthermore, we introduce a mathematical model to improve accuracy, energy efficiency, data freshness, authorization, and authentication. Finally, our protocol is tested in an intelligent healthcare WSN patient-monitoring application scenario and verified using the OMNET++ simulator. Based on the results, we confirm that our proposed SDAAA protocol attains a throughput of 444 kb/s, representing 98% of the network channel capacity; an energy consumption of 2.6 joules, representing 99% network energy efficiency; an effected-network value of 2.45, representing 99.5% overall network performance; and a time complexity of 0.08 s, representing 98.5% efficiency. By contrast, contending protocols such as SD, EEHA, HAS, IIF, and RHC achieve throughputs between 415 and 443 kb/s, representing 85-90% of the network channel capacity; energy consumption of 3.0-3.6 joules, representing 88-95% network energy efficiency; effected-network values around 2.98, representing 72-89% overall network performance; and time complexity around 0.20 s, representing 72-89% efficiency. Therefore, our proposed SDAAA protocol outperforms known approaches such as SD, EEHA, HAS, IIF, and RHC designed for secure data aggregation in similar environments.

5.
Heliyon ; 10(7): e27177, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38601685

ABSTRACT

The Internet of Things (IoT) is a network of intelligent devices, used especially in healthcare-based systems. The Internet of Medical Things (IoMT) uses wearable sensors to collect data and transmit them to central repositories. The security and privacy of healthcare data are a challenging task. The aim of this study is to provide a secure data sharing mechanism. Existing studies provide secure data sharing schemes but still have limitations in hiding the patient's identity in the messages exchanged to upload data to central repositories. This paper presents Secure Aggregated Data Collection and Transmission (SADCT), which provides anonymity for the identities of the patient's mobile device and the intermediate fog nodes. Our system involves an authenticated server for node registration and authentication that saves security credentials. The proposed scheme presents a novel data aggregation algorithm at the mobile device and a data extraction algorithm at the fog node. The work is validated through extensive simulations in NS, along with a security analysis. The results prove the superiority of SADCT in terms of energy consumption, storage, communication, and computational costs.

6.
PeerJ Comput Sci ; 10: e1932, 2024.
Article in English | MEDLINE | ID: mdl-38660199

ABSTRACT

Data aggregation plays a critical role in sensor networks for efficient data collection. However, the assumption of uniform initial energy levels among sensors in existing algorithms is unrealistic in practical production applications. This discrepancy in initial energy levels significantly impacts data aggregation in sensor networks. To address this issue, we propose Data Aggregation with Different Initial Energy (DADIE), a novel algorithm that aims to enhance energy-saving, privacy-preserving efficiency, and reduce node death rates in sensor networks with varying initial energy nodes. DADIE considers the transmission distance between nodes and their initial energy levels when forming the network topology, while also limiting the number of child nodes. Furthermore, DADIE reconstructs the aggregation tree before each round of data transmission. This allows nodes closer to the receiving end with higher initial energy to undertake more data aggregation and transmission tasks while limiting energy consumption. As a result, DADIE effectively reduces the node death rate and improves the efficiency of data transmission throughout the network. To enhance network security, DADIE establishes secure transmission channels between transmission nodes prior to data transmission, and it employs slice-and-mix technology within the network. Our experimental simulations demonstrate that the proposed DADIE algorithm effectively resolves the data aggregation challenges in sensor networks with varying initial energy nodes. It achieves 5-20% lower communication overhead and energy consumption, 10-20% higher security, and 10-30% lower node mortality than existing algorithms.
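The energy- and distance-aware parent selection described above can be illustrated with a small sketch. The scoring rule and field names are assumptions for illustration, not DADIE's actual formulas; the point is that a candidate parent must be closer to the sink, must be under the child limit, and wins by combining high residual energy with a short hop.

```python
def choose_parent(node, candidates, children_count, max_children=3):
    """Hypothetical DADIE-style parent choice for one round of tree building."""
    def hop(c):
        return ((node["x"] - c["x"]) ** 2 + (node["y"] - c["y"]) ** 2) ** 0.5
    eligible = [c for c in candidates
                if children_count.get(c["id"], 0) < max_children
                and c["dist_to_sink"] < node["dist_to_sink"]]
    if not eligible:
        return None  # no suitable parent: transmit directly to the sink
    # Higher residual energy and a shorter hop both raise the score.
    best = max(eligible, key=lambda c: c["energy"] / (1.0 + hop(c)))
    children_count[best["id"]] = children_count.get(best["id"], 0) + 1
    return best["id"]

node = {"id": 3, "x": 4.0, "y": 0.0, "dist_to_sink": 4.0}
candidates = [
    {"id": 1, "x": 2.0, "y": 0.0, "dist_to_sink": 2.0, "energy": 5.0},
    {"id": 2, "x": 1.0, "y": 0.0, "dist_to_sink": 1.0, "energy": 1.0},
]
children = {}
parent = choose_parent(node, candidates, children)  # high-energy node 1 wins
```

Rebuilding the tree with rules like this before each round is what lets high-energy nodes near the sink absorb more aggregation work.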

7.
Front Plant Sci ; 15: 1265073, 2024.
Article in English | MEDLINE | ID: mdl-38450403

ABSTRACT

Advancements in phenotyping technology have enabled plant science researchers to gather large volumes of information from their experiments, especially those that evaluate multiple genotypes. To fully leverage these complex and often heterogeneous data sets (i.e., those that differ in format and structure), scientists must invest considerable time in data processing, and data management has emerged as a major barrier to downstream application. Here, we propose a pipeline to enhance data collection, processing, and management from plant science studies, comprising two newly developed open-source programs. The first, called AgTC, is a series of programming functions that generates comma-separated values (CSV) file templates to collect data in a standard format using either a lab-based computer or a mobile device. The second series of functions, AgETL, executes steps for an Extract-Transform-Load (ETL) data integration process, in which data are extracted from heterogeneously formatted files, transformed to meet standard criteria, and loaded into a database. There, data are stored and can be accessed for data analysis-related processes, including dynamic data visualization through web-based tools. Both AgTC and AgETL are flexible enough for application across plant science experiments without programming knowledge on the part of the domain scientist, and their functions are executed in Jupyter Notebook, a browser-based interactive development environment. Additionally, all parameters are easily customized from central configuration files written in the human-readable YAML format. Using three experiments from research laboratories in university and non-governmental organization (NGO) settings as test cases, we demonstrate the utility of AgTC and AgETL for streamlining critical steps from data collection to analysis in the plant sciences.
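The Extract-Transform-Load flow that AgETL performs can be approximated in a short, self-contained sketch. The file contents, header mappings, and schema below are hypothetical, and AgETL itself is driven by YAML configuration files rather than inline dictionaries; the sketch only shows the extract-rename-load pattern.

```python
import csv, io, sqlite3

# Two hypothetical plot files that differ in header naming and column order.
lab_a = "plot,genotype,height_cm\n101,G1,82\n102,G2,77\n"
lab_b = "Height;Genotype;Plot\n90;G3;201\n"

def extract_transform(text, delimiter, mapping):
    """Extract rows and rename heterogeneous headers to a standard schema."""
    rows = csv.DictReader(io.StringIO(text), delimiter=delimiter)
    return [{std: r[src] for src, std in mapping.items()} for r in rows]

records = (
    extract_transform(lab_a, ",", {"plot": "plot", "genotype": "genotype",
                                   "height_cm": "height"})
    + extract_transform(lab_b, ";", {"Plot": "plot", "Genotype": "genotype",
                                     "Height": "height"})
)

# Load into a database so the data can feed analysis and visualization tools.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE trial (plot TEXT, genotype TEXT, height REAL)")
db.executemany("INSERT INTO trial VALUES (:plot, :genotype, :height)", records)
count = db.execute("SELECT COUNT(*) FROM trial").fetchone()[0]
```

In the real pipeline the per-source mappings would live in the central YAML configuration, so a domain scientist edits a config file rather than code.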

8.
bioRxiv ; 2024 Jan 23.
Article in English | MEDLINE | ID: mdl-38328080

ABSTRACT

Background: Gene co-expression networks (GCNs) describe relationships among expressed genes that are key to maintaining cellular identity and homeostasis. However, the sample size of a typical RNA-seq experiment, several orders of magnitude smaller than the number of genes, is too low to infer GCNs reliably. recount3, a publicly available dataset comprising 316,443 uniformly processed human RNA-seq samples, provides an opportunity to improve power for accurate network reconstruction and to obtain biological insight from the resulting networks. Results: We compared alternative aggregation strategies to identify an optimal workflow for GCN inference by data aggregation and inferred three consensus networks (a universal network, a non-cancer network, and a cancer network) in addition to 27 tissue context-specific networks. Central network genes from our consensus networks were enriched for evolutionarily constrained genes and ubiquitous biological pathways, whereas central context-specific network genes included tissue-specific transcription factors, and factorization based on the hubs led to clustering of related tissue contexts. We discovered that annotations corresponding to context-specific networks inferred from aggregated data were enriched for trait heritability beyond known functional genomic annotations, and were significantly more enriched when we aggregated over a larger number of samples. Conclusion: This study outlines best practices for GCN inference and evaluation by data aggregation. We recommend estimating and regressing out confounders in each dataset before aggregation and prioritizing large-sample-size studies for GCN reconstruction. Increased statistical power in inferring context-specific networks enabled the derivation of variant annotations enriched for concordant trait heritability independent of context-agnostic functional genomic annotations. While we observed strictly increasing held-out log-likelihood with data aggregation, we noted diminishing marginal improvements. Future work on alternative methods for estimating confounders and on integrating orthogonal information from modalities such as Hi-C and ChIP-seq could further improve GCN inference.
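A minimal version of co-expression network inference, reduced to thresholded Pearson correlation on a toy expression matrix, is sketched below. The study's consensus-network workflow (confounder regression, aggregation across hundreds of thousands of samples) is far richer than this; the sketch only shows the basic edge rule.

```python
def pearson(x, y):
    """Pearson correlation of two equal-length expression profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def coexpression_edges(expr, threshold=0.8):
    """Connect gene pairs whose profiles correlate strongly across samples."""
    genes = sorted(expr)
    return {(g, h) for i, g in enumerate(genes) for h in genes[i + 1:]
            if abs(pearson(expr[g], expr[h])) >= threshold}

# Toy expression matrix: genes are rows, the four samples are columns.
expr = {"geneA": [1, 2, 3, 4], "geneB": [2, 4, 6, 8], "geneC": [5, 1, 4, 2]}
edges = coexpression_edges(expr)   # only the perfectly coupled pair survives
```

Aggregating more samples makes these pairwise estimates, and hence the network, far more stable, which is the study's central point.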

10.
J Appl Stat ; 50(15): 3062-3087, 2023.
Article in English | MEDLINE | ID: mdl-37969541

ABSTRACT

Goodness-of-fit tests for two probabilistic multigraph models are presented. The first model is random stub matching given fixed degrees (RSM), so that edge assignments to vertex pair sites are dependent; the second is independent edge assignments (IEA) according to a common probability distribution. Tests are performed using goodness-of-fit measures between the edge multiplicity sequence of an observed multigraph and the expected sequence according to a simple or composite hypothesis. Test statistics of Pearson type and of likelihood-ratio type are used, and the expected values of the Pearson statistic under the different models are derived. Test performance based on simulations indicates that even for a small number of edges, the null distributions of both statistics are well approximated by their asymptotic χ2-distribution. The non-null distributions of the test statistics can be well approximated by proposed adjusted χ2-distributions used for power approximations. The influence of RSM on both test statistics is substantial for a small number of edges and implies a shift of their distributions towards smaller values compared with what holds true for the null distributions under IEA. Two applications on social networks are included to illustrate how the tests can guide the analysis of social structure.
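The Pearson-type statistic over an edge multiplicity sequence can be computed directly. The toy multigraph below assumes IEA with a uniform distribution over three vertex-pair sites; the degrees of freedom and the paper's adjusted distributions are not reproduced here.

```python
def pearson_chi2(observed, expected):
    """Pearson goodness-of-fit statistic over an edge multiplicity sequence."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected) if e > 0)

# Ten edges assigned to three vertex-pair sites; under the simple IEA
# hypothesis with p = 1/3 per site, each expected multiplicity is 10/3.
observed = [5, 3, 2]
expected = [10 / 3] * 3
x2 = pearson_chi2(observed, expected)
# Compare x2 against a chi-square quantile with (sites - 1) degrees of
# freedom to accept or reject the hypothesised edge distribution.
```

Here x2 = 1.4, well below the 5% critical value of the χ2-distribution with 2 degrees of freedom (about 5.99), so this toy observation is consistent with IEA.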

11.
Behav Res Methods ; 2023 Nov 29.
Article in English | MEDLINE | ID: mdl-38030927

ABSTRACT

Threatened species monitoring can produce enormous quantities of acoustic and visual recordings which must be searched for animal detections. Data coding is extremely time-consuming for humans and even though machine algorithms are emerging as useful tools to tackle this task, they too require large amounts of known detections for training. Citizen scientists are often recruited via crowd-sourcing to assist. However, the results of their coding can be difficult to interpret because citizen scientists lack comprehensive training and typically each codes only a small fraction of the full dataset. Competence may vary between citizen scientists, but without knowing the ground truth of the dataset, it is difficult to identify which citizen scientists are most competent. We used a quantitative cognitive model, cultural consensus theory, to analyze both empirical and simulated data from a crowdsourced analysis of audio recordings of Australian frogs. Several hundred citizen scientists were asked whether the calls of nine frog species were present on 1260 brief audio recordings, though most only coded a fraction of these recordings. Through modeling, characteristics of both the citizen scientist cohort and the recordings were estimated. We then compared the model's output to expert coding of the recordings and found agreement between the cohort's consensus and the expert evaluation. This finding adds to the evidence that crowdsourced analyses can be utilized to understand large-scale datasets, even when the ground truth of the dataset is unknown. The model-based analysis provides a promising tool to screen large datasets prior to investing expert time and resources.
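A one-step approximation to the consensus analysis described above can be sketched as follows: take the per-item majority label as provisional truth, then score each coder by agreement with it. The full cultural consensus model infers truth and competence jointly via a cognitive model, so this is only a first-pass illustration; the coders and labels are invented.

```python
from collections import Counter

def consensus_and_competence(codings):
    """codings: {coder: {item: label}}, where each coder labels only a
    subset of items. Returns a majority-vote consensus per item and each
    coder's agreement rate with that consensus."""
    by_item = {}
    for labels in codings.values():
        for item, label in labels.items():
            by_item.setdefault(item, []).append(label)
    consensus = {item: Counter(v).most_common(1)[0][0]
                 for item, v in by_item.items()}
    competence = {
        coder: sum(consensus[i] == lab for i, lab in labels.items()) / len(labels)
        for coder, labels in codings.items()
    }
    return consensus, competence

codings = {
    "ann": {1: "frog", 2: "silence", 3: "frog"},
    "bob": {1: "frog", 2: "frog"},
    "cat": {1: "frog", 2: "silence"},
}
consensus, competence = consensus_and_competence(codings)
```

Even without ground truth, coders who systematically disagree with the crowd's consensus (here "bob") surface with lower estimated competence, which is the screening idea the study exploits at scale.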

12.
BMC Med Imaging ; 23(1): 134, 2023 09 18.
Article in English | MEDLINE | ID: mdl-37718458

ABSTRACT

The continuous release of image databases with fully or partially identical inner categories dramatically hampers the production of autonomous Computer-Aided Diagnostics (CAD) systems for truly comprehensive medical diagnostics. The first challenge is the frequent massive bulk release of medical image databases, which often suffer from two common drawbacks: image duplication and corruption. The many subsequent releases of the same data with the same classes or categories come with no clear evidence of success in the concatenation of those identical classes among image databases. This issue stands as a stumbling block in the path of hypothesis-based experiments for the production of a single learning model that can classify all of them correctly. Removing redundant data, enhancing performance, and optimizing energy resources are among the most challenging aspects. In this article, we propose a global data aggregation scale model that incorporates six image databases selected from specific global resources. The proposed valid learner is based on training all the unique patterns within any given data release, thereby creating a unique dataset hypothetically. The MD5 hashing algorithm generates a unique hash value for each image, making it suitable for duplication removal. T-Distributed Stochastic Neighbor Embedding (t-SNE), with a tunable perplexity parameter, is used to represent the data dimensions. Both the MD5 and t-SNE algorithms are applied recursively, producing a balanced and uniform database containing equal samples per category: normal, pneumonia, and Coronavirus Disease 2019 (COVID-19). We evaluated the performance of all proposed data and the new automated version using the Inception V3 pre-trained model with various evaluation metrics. The proposed scale model showed more respectable results than traditional data aggregation, achieving a high accuracy of 98.48%, along with high precision, recall, and F1-score. The results were supported by a statistical t-test; all t-values were significant, and the p-values provided strong evidence against the null hypothesis. Furthermore, the final dataset outperformed all other datasets across all metric values when diagnosing various lung infections under the same factors.


Subject(s)
COVID-19; Pneumonia; Humans; COVID-19/diagnostic imaging; X-Rays; Pneumonia/diagnostic imaging; Algorithms; Lung/diagnostic imaging
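The MD5-based duplicate removal described in this record can be sketched directly with the standard library; the image bytes below are placeholders. MD5 is adequate as a content fingerprint for deduplication, though it should not be used for security purposes.

```python
import hashlib

def dedupe_images(images):
    """Drop byte-identical duplicates by MD5 digest, keeping the first
    occurrence of each distinct image."""
    seen, unique = set(), []
    for name, data in images:
        digest = hashlib.md5(data).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append((name, data))
    return unique

images = [
    ("a.png", b"\x89PNG-payload-1"),
    ("b.png", b"\x89PNG-payload-2"),
    ("a_copy.png", b"\x89PNG-payload-1"),   # byte-identical to a.png
]
unique = dedupe_images(images)              # a_copy.png is discarded
```

Applying this pass before training prevents the same image from appearing in both training and test splits, which would otherwise inflate accuracy.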
13.
SSM Popul Health ; 24: 101511, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37711359

ABSTRACT

Stakeholders need data on health and drivers of health parsed to the boundaries of essential policy-relevant geographies. US Congressional Districts are an example of a policy-relevant geography that generally lacks health data. One strategy for generating Congressional District health metric estimates is to aggregate estimates from other geographies, for example from counties or census tracts to Congressional Districts. Doing so requires several methodological decisions. We refine a method to aggregate health metric estimates from one geography to another using a population-weighted approach. The method's accuracy is evaluated by comparing three aggregated metric estimates (Broadband Access, High School Completion, and Unemployment) to metric estimates from the US Census American Community Survey for the same years. We then conducted four sensitivity analyses testing the effect of aggregating counts vs. percentages; the impacts of component geography size and data missingness; and the extent of population overlap between component and target geographies. Aggregated estimates were very similar to estimates for identical metrics drawn directly from the data source. The sensitivity analyses suggest the following best practices for Congressional District-based metrics: utilize smaller, more plentiful geographies such as census tracts rather than larger, less plentiful geographies such as counties, despite the potential for less stable estimates in smaller geographies; and favor geographies with a higher percentage of population overlap.
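The population-weighted aggregation at the core of the method can be sketched as follows; the field names and overlap fractions are illustrative. Weighting a percentage by population and overlap share, as here, is equivalent to aggregating the underlying counts and dividing once, which is why the counts-vs-percentages choice matters mainly when counts are unavailable.

```python
def aggregate_rate(components):
    """Population-weighted aggregation of a percentage metric from component
    geographies (e.g. census tracts) to a target geography (e.g. a district).
    'overlap' is the share of the component's population inside the target."""
    num = sum(c["rate"] * c["pop"] * c["overlap"] for c in components)
    den = sum(c["pop"] * c["overlap"] for c in components)
    return num / den

tracts = [
    {"rate": 80.0, "pop": 4000, "overlap": 1.0},   # fully inside the district
    {"rate": 60.0, "pop": 2000, "overlap": 0.5},   # half inside
]
broadband = aggregate_rate(tracts)   # (80*4000 + 60*1000) / 5000
```

Using many small components (tracts) rather than a few large ones (counties) keeps each `overlap` close to 0 or 1, which is exactly the high-population-overlap best practice the study reports.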

14.
Sensors (Basel) ; 23(18)2023 Sep 11.
Article in English | MEDLINE | ID: mdl-37765857

ABSTRACT

The Internet of Things (IoT) is an advanced technology that comprises numerous devices carrying sensors to collect, send, and receive data. Due to its vast popularity and efficiency, it is employed in collecting crucial data for the health sector. As the sensors generate huge amounts of data, it is better for the data to be aggregated before being transmitted further. These sensors frequently generate redundant data, transmitting the same values again and again when there is no variation in the data. The base scheme has no mechanism to detect duplicate data. This problem has a negative effect on the performance of heterogeneous networks: it increases energy consumption, requires high control overhead, and demands additional transmission slots to send data. To address these challenges posed by duplicate data in the IoT-based health sector, this paper presents a fuzzy data aggregation system (FDAS) that aggregates data proficiently and reduces the volume of redundant normal data to increase network performance and decrease energy consumption. The appropriate parent node is selected by implementing fuzzy logic, considering input parameters that are crucial from the parent-node selection perspective, and the Boolean digit 0 is shared for redundant values, which are stored in a repository for future use. This increases the network lifespan by reducing the energy consumption of sensors in heterogeneous environments, so when the complexity of the environment surges, the efficiency of FDAS remains stable. The performance of the proposed scheme has been validated using a network simulator and compared with base schemes. According to the findings, FDAS dominates in terms of reducing energy consumption in both phases, achieves better aggregation, reduces control overhead, and requires the fewest transmission slots.
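The redundancy-suppression idea, sending a 0 marker in place of an unchanged reading so the sink can restore it from the last real transmission, can be sketched as below. The fuzzy parent-selection logic is omitted, and a real implementation would need a marker that cannot collide with a genuine 0 reading (e.g. a separate flag bit).

```python
def suppress_redundant(readings, tolerance=0.0):
    """Replace repeated sensor values with 0 markers so unchanged values
    need not be retransmitted in full."""
    out, last = [], None
    for r in readings:
        if last is not None and abs(r - last) <= tolerance:
            out.append(0)          # marker: value unchanged since last send
        else:
            out.append(r)
            last = r
    return out

def restore(stream):
    """Sink-side reconstruction from markers and real transmissions."""
    out, last = [], None
    for v in stream:
        last = last if v == 0 else v
        out.append(last)
    return out

body_temp = [36.6, 36.6, 36.6, 37.1, 37.1]
sent = suppress_redundant(body_temp)        # [36.6, 0, 0, 37.1, 0]
assert restore(sent) == body_temp
```

Three of the five transmissions shrink to a single marker digit, which is the source of the energy and transmission-slot savings the abstract reports.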

15.
Sci Total Environ ; 899: 165981, 2023 Nov 15.
Article in English | MEDLINE | ID: mdl-37572898

ABSTRACT

Groundwater quality management, crucial for ensuring sustainable water resources and public health, is the focus of this study. Our objective is to demonstrate the significance of secondary data analysis for the spatiotemporal characterization of groundwater quality. To this end, we develop and employ a robust trend analysis method, in tandem with a spatiotemporal data aggregation method, to accurately identify shifts in groundwater quality over time, even in the face of inflection points or breakpoints. The methods and results reveal diverse trends and characteristics in water quality over space and time across the entire dataset from selected regions in South Korea, emphasizing the importance of analyzing aggregated data beyond individual business locations. The conclusions indicate that this study contributes to the development of more reliable and effective groundwater quality management strategies by addressing gaps in traditional monitoring methods and the challenges of limited monitoring resources and uneven data quality. Future research directions include the application of the developed methods to other regions and data sources, opening avenues for further advances in groundwater quality management.

16.
Sensors (Basel) ; 23(13)2023 Jul 06.
Article in English | MEDLINE | ID: mdl-37448038

ABSTRACT

By definition, the aggregating methodology ensures that transmitted data remain visible in clear text at the aggregating units or nodes. Data transmission without encryption raises security issues concerning data confidentiality, integrity, and authentication, and leaves the data open to attacks by adversaries. On the other hand, encryption at each hop requires extra computation for decrypting, aggregating, and then re-encrypting the data, which increases complexity, not only in terms of computation but also because of the required key sharing; sharing the same key across various nodes makes the security more vulnerable. An alternative solution for securing the aggregation process is an end-to-end security protocol wherein intermediary nodes combine the data without decoding them. As a consequence, the intermediary aggregating nodes do not have to hold confidential key values, enabling end-to-end security between sensor devices and base stations. This research presents End-to-End Homomorphic Encryption (EEHE)-based safe and secure data gathering in IoT-based Wireless Sensor Networks (WSNs), which protects end-to-end security and enables the use of aggregation functions such as COUNT, SUM, and AVERAGE on encrypted messages. Such an approach can also employ message authentication codes (MACs) to validate data integrity throughout data aggregation and transmission, allowing fraudulent content to be identified as early as feasible. Additionally, when data are communicated across a WSN, there is a higher likelihood of a wormhole attack within the data aggregation process; the proposed solution also ensures the early detection of wormhole attacks during data aggregation.


Subject(s)
Computer Security; Data Aggregation; Computer Communication Networks; Algorithms; Confidentiality
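The end-to-end principle, aggregators summing ciphertexts they cannot decrypt, can be illustrated with additive masking. This is a toy stand-in for the paper's EEHE scheme, not its actual cryptosystem: each node shares a one-time key with the base station only, and the MAC-based integrity checks are not shown.

```python
import random

M = 2**61 - 1          # public modulus; readings must stay far below it
rng = random.Random(7)

def encrypt(value, key):
    """Additively homomorphic masking: sums of ciphertexts decrypt to sums."""
    return (value + key) % M

# Each sensor shares a secret one-time key with the base station only.
readings = [21, 23, 19, 22]
keys = [rng.randrange(M) for _ in readings]
ciphertexts = [encrypt(v, k) for v, k in zip(readings, keys)]

# The intermediary aggregator holds no keys and never decrypts:
# it just sums the ciphertexts (the SUM aggregation function).
agg = sum(ciphertexts) % M

# The base station removes the combined mask to recover SUM and AVERAGE.
total = (agg - sum(keys)) % M
average = total / len(readings)
```

A compromised aggregator here learns nothing about individual readings, which is precisely the property that hop-by-hop encryption fails to provide.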
17.
Heliyon ; 9(6): e16297, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37346350

ABSTRACT

Background: Daily monitoring of physiological parameters is essential for tracking health status and preventing health problems. This has become possible thanks to the democratization of numerous types of medical devices, promoted by the interconnection between these devices and smartphones. Nevertheless, medical devices that connect to smartphones are typically limited to their manufacturers' applications. Objectives: This paper proposes an intelligent scanning system to simplify the collection of data displayed on different medical device screens, recognizing the values and optionally integrating them, through open protocols, with centralized databases. Methods: To develop this system, a dataset comprising 1614 images of medical devices was created, obtained from manufacturer catalogs, photographs, and other public datasets. Three object-detection algorithms (YOLOv3, Single-Shot Detector [SSD] 320 × 320, and SSD 640 × 640) were then trained to detect the digits and acronyms/units of measurement presented by medical devices. These models were tested under three different conditions: detecting digits and acronyms/units as a single object (one label), as independent objects (two labels), and individually (fifteen labels). Models trained for one and two labels were complemented with a convolutional neural network (CNN) to identify the detected objects. To group the recognized digits, a condition-tree strategy based on density-based spatial clustering was used. Results: The most promising approach was the SSD 640 × 640 with fifteen labels. Conclusion: As future work, it is intended to port this system to a mobile environment to accelerate and streamline the process of inserting data into mobile health (mHealth) applications.

18.
Heliyon ; 9(6): e16116, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37265623

ABSTRACT

The digitalisation of healthcare services is a major resource for informing policy-makers. However, the availability of data and the establishment of a data flow present new issues to address, such as data anonymisation, record reliability, and data linkage. The veterans' population in the UK presents complex needs, and many organisations provide social and healthcare support, but their databases are not linked or aggregated to provide a comprehensive overview for service planning. This study aims to test the sensitivity and specificity of a Secure Hashing Algorithm used to generate a unique anonymous identifier for data linkage across different organisations in the veterans' population. A Secure Hashing Algorithm was applied to two input variables from two different datasets. The uniqueness of the identifier was compared against the single personal key adopted as the current standard identifier. Chi-square, sensitivity, and specificity were calculated. The results demonstrated that the unique identifier generated by the Secure Hashing Algorithm detected more unique records than the current gold standard. The identifier demonstrated optimal sensitivity and specificity and enabled enhanced data linkage between different datasets. The adoption of a Secure Hashing Algorithm improved the uniqueness of records. Moreover, it ensured data anonymity by transforming personal information into an encrypted identifier. This approach is beneficial for big-data management and for creating an aggregated system linking different organisations, thereby providing a more comprehensive overview of healthcare provision and a foundation for precision public health strategies.
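The derivation of an anonymous identifier from two input variables can be sketched with a keyed hash. The input fields, normalisation, and shared secret below are assumptions for illustration; the study applied a Secure Hashing Algorithm to its own pair of variables. A keyed HMAC (rather than a bare hash) is used here because it resists dictionary attacks on guessable inputs.

```python
import hashlib
import hmac

SECRET = b"org-shared-pepper"   # hypothetical key agreed by the linking parties

def linkage_id(id_number: str, date_of_birth: str) -> str:
    """Derive a deterministic anonymous identifier from two input variables.
    The same person yields the same identifier in every organisation's
    dataset, enabling linkage without exposing the raw identifiers."""
    msg = f"{id_number.strip()}|{date_of_birth.strip()}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

a = linkage_id("4857773456", "1976-03-02")
b = linkage_id("4857773456 ", "1976-03-02")   # stray whitespace normalised away
assert a == b and len(a) == 64
assert a != linkage_id("4857773457", "1976-03-02")
```

Consistent input normalisation (trimming, casing, date formats) across organisations is what determines the sensitivity of such an identifier in practice.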

20.
Stud Health Technol Inform ; 301: 121-122, 2023 May 02.
Article in English | MEDLINE | ID: mdl-37172164

ABSTRACT

The just-in-time adaptive intervention (JITAI) is an intervention design for supporting health behavior change. We designed a multi-level modeling framework for JITAIs and developed a proof-of-concept prototype (POC). This study aimed to investigate the usability of the POC by conducting two usability tests with students, in which we assessed usability, the students' workload, and their success in completing tasks. In the second usability test, the students faced difficulties in completing the tasks. We will work on hiding the complexity of the framework as well as improving the frontend and the instructions.


Subject(s)
Telemedicine; User-Centered Design; Humans; User-Computer Interface; Health Behavior; Workload