Results 1 - 20 of 8,571
1.
BMC Med Ethics ; 25(1): 88, 2024 Aug 10.
Article in English | MEDLINE | ID: mdl-39127660

ABSTRACT

BACKGROUND: Personal Health Monitoring (PHM) has the potential to enhance soldier health outcomes. To promote the morally responsible development, implementation, and use of PHM in the armed forces, it is important to be aware of the inherent ethical dimension of PHM. To improve understanding of this dimension, a scoping review of the existing academic literature on the ethical dimension of PHM was conducted. METHODS: Four bibliographical databases (Ovid/Medline, Embase.com, Clarivate Analytics/Web of Science Core Collection, and Elsevier/SCOPUS) were searched for relevant literature from their inception to June 1, 2023. Studies were included if they sufficiently addressed the ethical dimension of PHM and were related to, or claimed relevance for, the military. After selection and extraction, the data were analysed using a qualitative thematic approach. RESULTS: A total of 9,071 references were screened. After eligibility screening, 19 articles were included in this review. The review identifies and describes three categories reflecting the ethical dimension of PHM in the military: (1) utilitarian considerations, (2) value-based considerations, and (3) regulatory responsibilities. The four main values identified as being of concern are privacy, security, trust, and autonomy. CONCLUSIONS: This review demonstrates that PHM in the armed forces is primarily approached from a utilitarian perspective, with a focus on its benefits and without explicit critical deliberation on PHM's potential moral downsides. The review also highlights a significant research gap: a lack of empirical studies focusing specifically on the ethical dimension of PHM. Awareness of the inherent ethical dimension of PHM in the military, including value conflicts and how to balance them, can contribute to the morally responsible development, implementation, and use of PHM in the armed forces.


Subject(s)
Military Personnel, Humans, Privacy, Personal Autonomy
2.
J Empir Res Hum Res Ethics ; 19(3): 113-123, 2024 Jul.
Article in English | MEDLINE | ID: mdl-39096208

ABSTRACT

This research identifies the circumstances in which Human Research Ethics Committees (HRECs) are trusted by Australians to approve the use of genomic data - without express consent - and considers the impact of genomic data sharing settings, and respondent attributes, on public trust. Survey results (N = 3013) show some circumstances are more conducive to public trust than others, with waivers endorsed when future research is beneficial and when privacy is protected, but receiving less support in other instances. Still, results imply attitudes are influenced by more than these specific circumstances, with different data sharing settings, and participant attributes, affecting views. Ultimately, this research raises questions and concerns in relation to the criteria HRECs use when authorising waivers of consent in Australia.


Subject(s)
Attitude, Ethics Committees, Research, Genomics, Information Dissemination, Informed Consent, Trust, Humans, Australia, Genomics/ethics, Male, Female, Adult, Surveys and Questionnaires, Middle Aged, Ethics, Research, Privacy, Aged, Young Adult, Public Opinion, Adolescent, Confidentiality
3.
PLoS One ; 19(8): e0309075, 2024.
Article in English | MEDLINE | ID: mdl-39159171

ABSTRACT

Pre-exposure prophylaxis (PrEP) is being scaled up to prevent HIV acquisition among adolescent girls and young women (AGYW) in Eastern and Southern Africa. In a prior study, more than one-third of AGYW 'mystery shoppers' stated they would not return to care based on interactions with health providers. We examined the experiences of AGYW in this study to identify the main barriers to effective PrEP services. Unannounced patient actors (USPs, 'mystery shoppers') posed as AGYW seeking PrEP using standardized scenarios 8 months after providers had received training to improve PrEP services. We conducted targeted debriefings with USPs immediately following their visit, using open-ended questions to assess PrEP service provision and counseling quality. Debriefings were audio-recorded and transcribed. Transcripts were analyzed using thematic analysis to explore why USPs reported either positive or negative encounters. We conducted 91 USP debriefings at 24 facilities and identified three primary influences on PrEP service experiences: (1) privacy improved the likelihood of continuing care, (2) respectful attitudes created a safe environment for USPs, and (3) patient-centered communication improved the experience and increased confidence for PrEP initiation among USPs. Privacy and provider attitudes were the primary drivers of decision-making around PrEP in USP debriefs. Access to privacy and improved provider attitudes are important for scale-up of PrEP to AGYW.


Subject(s)
Counseling, HIV Infections, Pre-Exposure Prophylaxis, Humans, Female, Adolescent, Kenya, HIV Infections/prevention & control, Young Adult, Privacy, Adult, Anti-HIV Agents/therapeutic use
4.
Medicine (Baltimore) ; 103(33): e39370, 2024 Aug 16.
Article in English | MEDLINE | ID: mdl-39151500

ABSTRACT

With the rapid development of emerging information technologies such as artificial intelligence, cloud computing, and the Internet of Things, the world has entered the era of big data. In the face of growing medical big data, research on the privacy protection of personal information has attracted increasing attention, but few studies have analyzed or forecast the research hotspots and future development trends in privacy protection. To systematically and comprehensively summarize the privacy protection literature in the context of big healthcare data, a bibliometric analysis was conducted with the information visualization software CiteSpace to clarify the spatial and temporal distribution and research hotspots of privacy protection. Papers related to privacy protection indexed in the Web of Science from 2012 to 2023 were collected. Analysis of the temporal, author, and country distribution of the relevant publications shows that privacy protection research has received increasing attention since 2013, that universities are the core research institutions, and that cooperation between countries remains weak. Additionally, keywords such as privacy, big data, internet, challenge, care, and information have high centrality and frequency, indicating the research hotspots and trends in the field of privacy protection. These findings provide a comprehensive knowledge structure of privacy protection research in the context of health big data, helping scholars quickly grasp the research hotspots and choose future research projects.
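For illustration only (not part of the record above), the kind of keyword frequency and centrality analysis the abstract summarises could be sketched in Python with networkx; CiteSpace was the tool actually used, and the three records below are invented placeholders.

from collections import Counter
from itertools import combinations
import networkx as nx

# Toy stand-in for bibliographic records; each list is one paper's keywords.
records = [
    ["privacy", "big data", "internet"],
    ["privacy", "care", "information"],
    ["big data", "challenge", "privacy"],
]
freq = Counter(kw for rec in records for kw in rec)      # keyword frequency

# Build a keyword co-occurrence graph: an edge links keywords appearing together.
G = nx.Graph()
for rec in records:
    for a, b in combinations(sorted(set(rec)), 2):
        w = G.get_edge_data(a, b, {"weight": 0})["weight"] + 1
        G.add_edge(a, b, weight=w)

centrality = nx.betweenness_centrality(G)                # structural importance
for kw in sorted(centrality, key=centrality.get, reverse=True):
    print(kw, freq[kw], round(centrality[kw], 3))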


Subject(s)
Big Data, Computer Security, Confidentiality, Privacy, Humans, Bibliometrics
5.
BMC Med Res Methodol ; 24(1): 181, 2024 Aug 14.
Article in English | MEDLINE | ID: mdl-39143466

ABSTRACT

BACKGROUND: Synthetic Electronic Health Records (EHRs) are becoming increasingly popular as a privacy-enhancing technology. However, for longitudinal EHRs specifically, little research has been done into how to properly evaluate synthetically generated samples. In this article, we discuss existing methods and recommendations for evaluating the quality of synthetic longitudinal EHRs. METHODS: We recommend assessing synthetic EHR quality through similarity to real EHRs in low-dimensional projections, the accuracy of a classifier discriminating synthetic from real samples, the performance of algorithms trained on synthetic versus real data in clinical tasks, and privacy risk via the risk of attribute inference. For each metric, we discuss strengths and weaknesses and show how it can be applied to a longitudinal dataset. RESULTS: To support the discussion of evaluation metrics, we apply the discussed metrics to a dataset of synthetic EHRs generated from the Medical Information Mart for Intensive Care-IV (MIMIC-IV) repository. CONCLUSIONS: The discussion of evaluation metrics provides guidance for researchers on how to use and interpret different metrics when evaluating the quality of synthetic longitudinal EHRs.
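As an illustrative aside, one of the recommended metrics, the accuracy (here, AUC) of a classifier that tries to discriminate synthetic from real samples, might be sketched as below. The feature matrices are assumed to be pre-flattened tabular arrays; the paper's longitudinal MIMIC-IV pipeline is more involved. An AUC near 0.5 indicates the synthetic data are hard to tell apart from real data.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def discriminator_auc(real_X: np.ndarray, synth_X: np.ndarray, seed: int = 0) -> float:
    """Train a real-vs-synthetic classifier and report its held-out AUC."""
    X = np.vstack([real_X, synth_X])
    y = np.concatenate([np.ones(len(real_X)), np.zeros(len(synth_X))])  # 1 = real
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=seed)
    clf = GradientBoostingClassifier(random_state=seed).fit(X_tr, y_tr)
    return roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])

# Toy check: two samples from the same distribution should give an AUC near 0.5.
rng = np.random.default_rng(0)
print(discriminator_auc(rng.normal(size=(500, 10)), rng.normal(size=(500, 10))))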


Subject(s)
Algorithms, Electronic Health Records, Electronic Health Records/statistics & numerical data, Electronic Health Records/standards, Humans, Longitudinal Studies, Privacy
6.
Comput Biol Med ; 179: 108792, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38964242

ABSTRACT

BACKGROUND AND OBJECTIVE: Concerns about patient privacy have limited the application of medical deep learning models in certain real-world scenarios. Differential privacy (DP) can alleviate this problem by injecting random noise into the model. However, naively applying DP to medical models will not achieve a satisfactory balance between privacy and utility, owing to the high dimensionality of medical models and the limited number of labeled samples. METHODS: This work proposes DP-SSLoRA, a privacy-preserving classification model for medical images that combines differential privacy with self-supervised low-rank adaptation. A self-supervised pre-training method is used to obtain enhanced representations from unlabeled, publicly available medical data. A low-rank decomposition method is then employed to mitigate the impact of differentially private noise and is combined with the pre-trained features to perform classification on private datasets. RESULTS: In classification experiments on three real chest X-ray datasets, DP-SSLoRA achieves good performance with strong privacy guarantees: at ɛ = 2, it attains an AUC of 0.942 on RSNA, 0.9658 on Covid-QU-mini, and 0.9886 on Chest X-ray 15k. CONCLUSION: Extensive experiments on real chest X-ray datasets show that DP-SSLoRA can achieve satisfactory performance with stronger privacy guarantees. This study provides guidance for research on privacy preservation in the medical field. Source code is publicly available at https://github.com/oneheartforone/DP-SSLoRA.
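To make the two ingredients named above concrete, the following rough sketch pairs a low-rank (LoRA-style) trainable head on top of frozen, self-supervised features with a DP-SGD-style update (per-sample gradient clipping plus Gaussian noise). This is not the authors' released code; shapes, rank, learning rate, and noise scale are illustrative.

import torch
import torch.nn as nn

class LowRankHead(nn.Module):
    """Classification head whose trainable update lives in a rank-r subspace."""
    def __init__(self, dim: int = 512, num_classes: int = 2, rank: int = 8):
        super().__init__()
        self.base = nn.Linear(dim, num_classes)
        for p in self.base.parameters():                 # frozen base weights
            p.requires_grad_(False)
        self.A = nn.Parameter(torch.zeros(num_classes, rank))
        self.B = nn.Parameter(torch.randn(rank, dim) * 0.01)
    def forward(self, feats):                            # feats: frozen encoder output
        return self.base(feats) + feats @ (self.A @ self.B).T

def dp_sgd_step(model, feats, labels, lr=0.1, clip=1.0, noise_mult=1.0):
    """One DP-SGD-style step: clip each per-sample gradient, sum, add noise, average."""
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]
    for x, y in zip(feats, labels):
        loss = nn.functional.cross_entropy(model(x[None]), y[None])
        grads = torch.autograd.grad(loss, params)
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = torch.clamp(clip / (norm + 1e-8), max=1.0)
        for s, g in zip(summed, grads):
            s += g * scale
    with torch.no_grad():
        for p, s in zip(params, summed):
            p -= lr * (s + torch.randn_like(s) * noise_mult * clip) / len(feats)

model = LowRankHead()
dp_sgd_step(model, torch.randn(16, 512), torch.randint(0, 2, (16,)))  # toy batch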


Subject(s)
Privacy, Humans, Deep Learning, COVID-19, SARS-CoV-2, Algorithms
7.
Radiother Oncol ; 198: 110419, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38969106

ABSTRACT

OBJECTIVES: This work aims to explore the impact of multicenter data heterogeneity on deep learning brain metastases (BM) autosegmentation performance, and to assess the efficacy of an incremental transfer learning technique, namely learning without forgetting (LWF), for improving model generalizability without sharing raw data. MATERIALS AND METHODS: A total of six BM datasets from University Hospital Erlangen (UKER), University Hospital Zurich (USZ), Stanford, UCSF, New York University (NYU), and the BraTS Challenge 2023 were used. First, the performance of the DeepMedic network for BM autosegmentation was established for exclusive single-center training and for mixed multicenter training, respectively. Subsequently, privacy-preserving bilateral collaboration was evaluated, in which a pretrained model is shared with another center for further training using transfer learning (TL), either with or without LWF. RESULTS: For single-center training, average F1 scores of BM detection range from 0.625 (NYU) to 0.876 (UKER) on the respective single-center test data. Mixed multicenter training notably improves F1 scores at Stanford and NYU, with negligible improvement at the other centers. When the UKER pretrained model is applied to USZ, LWF achieves a higher average F1 score (0.839) than naive TL (0.570) and single-center training (0.688) on combined UKER and USZ test data. Naive TL improves sensitivity and contouring accuracy but compromises precision, whereas LWF demonstrates commendable sensitivity, precision, and contouring accuracy. Similar performance was observed when the model was applied to Stanford. CONCLUSION: Data heterogeneity (e.g., variations in metastases density, spatial distribution, and image spatial resolution across centers) results in varying BM autosegmentation performance, posing challenges to model generalizability. LWF is a promising approach to peer-to-peer privacy-preserving model training.
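As a generic illustration of the learning-without-forgetting (LWF) objective referenced above (not the DeepMedic segmentation pipeline), fine-tuning on the new center's data can add a distillation term that keeps the adapted model's predictions close to those of a frozen copy of the shared pretrained model, so performance at the original center is not forgotten. The weighting lambda_lwf and temperature T below are illustrative.

import copy
import torch
import torch.nn.functional as F

def lwf_loss(model, frozen_old_model, images, labels, lambda_lwf=1.0, T=2.0):
    logits_new = model(images)
    with torch.no_grad():
        logits_old = frozen_old_model(images)
    task = F.cross_entropy(logits_new, labels)                 # new-center supervision
    distill = F.kl_div(F.log_softmax(logits_new / T, dim=1),   # keep old behaviour
                       F.softmax(logits_old / T, dim=1),
                       reduction="batchmean") * T * T
    return task + lambda_lwf * distill

# Usage sketch: freeze a copy of the shared pretrained model before fine-tuning.
# frozen = copy.deepcopy(pretrained_model).eval()
# for p in frozen.parameters():
#     p.requires_grad_(False)
# loss = lwf_loss(pretrained_model, frozen, batch_images, batch_labels)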


Subject(s)
Brain Neoplasms, Deep Learning, Humans, Brain Neoplasms/secondary, Brain Neoplasms/radiotherapy, Privacy
8.
Sci Eng Ethics ; 30(4): 28, 2024 Jul 16.
Article in English | MEDLINE | ID: mdl-39012561

ABSTRACT

The rapidly advancing fields of brain-computer interfaces (BCI) and brain-to-brain interfaces (BBI) are stimulating interest across various sectors, including medicine, entertainment, research, and the military. The developers of large-scale brain-computer networks, sometimes dubbed 'Mindplexes' or 'Cloudminds', aim to enhance cognitive functions by distributing them across expansive networks. A key technical challenge is the efficient transmission and storage of information. One proposed solution is employing blockchain technology over Web 3.0 to create decentralised cognitive entities. This paper explores the potential of a decentralised web for coordinating large brain-computer constellations, and its associated benefits, focusing in particular on the conceptual and ethical challenges this innovation may pose pertaining to (1) Identity, (2) Sovereignty (encompassing Autonomy, Authenticity, and Ownership), (3) Responsibility and Accountability, and (4) Privacy, Safety, and Security. We suggest that while a decentralised web can address some concerns and mitigate certain risks, underlying ethical issues persist. Fundamental questions about entity definition within these networks, the distinctions between individuals and collectives, and responsibility distribution within and between networks demand further exploration.


Subject(s)
Brain-Computer Interfaces, Internet, Personal Autonomy, Privacy, Humans, Brain-Computer Interfaces/ethics, Social Responsibility, Blockchain/ethics, Computer Security/ethics, Ownership/ethics, Politics, Cognition, Safety, Technology/ethics
9.
Front Public Health ; 12: 1414076, 2024.
Article in English | MEDLINE | ID: mdl-39022418

ABSTRACT

While healthcare big data brings great opportunities and convenience to the healthcare industry, it also inevitably raises the issue of privacy leakage. The whole world now faces security threats to healthcare big data, and a sound policy framework can help reduce its privacy risks. In recent years, the Chinese government and industry self-regulatory organizations have issued a series of policy documents to reduce the privacy risks of healthcare big data. However, China's policy framework suffers from a mismatched operational model, an inappropriate operational method, and poorly actionable operational content. Drawing on the experiences of the European Union, Australia, the United States, and other extra-territorial regions, strategies are proposed for China to amend the operational model of the policy framework, improve its operational method, and enhance the operability of its operational content. This study enriches the research on China's policy framework for reducing the privacy risks of healthcare big data and provides some inspiration for other countries.


Subject(s)
Big Data, Health Policy, China, Humans, Privacy, Confidentiality, Computer Security
10.
PLoS One ; 19(7): e0306420, 2024.
Article in English | MEDLINE | ID: mdl-39038028

ABSTRACT

The widespread adoption of cloud computing necessitates privacy-preserving techniques that allow information to be processed without disclosure. This paper proposes a method to increase the accuracy and performance of privacy-preserving Convolutional Neural Networks with Homomorphic Encryption (CNN-HE) using Self-Learning Activation Functions (SLAFs). SLAFs are polynomials whose trainable coefficients are updated during training together with the synaptic weights, independently for each polynomial, to learn task-specific and CNN-specific features. We theoretically prove that SLAFs can approximate any continuous activation function to a desired error that depends on the SLAF degree. Two CNN-HE models are proposed: CNN-HE-SLAF and CNN-HE-SLAF-R. In the first model, all activation functions are replaced by SLAFs and the CNN is trained to find both weights and coefficients. In the second, the CNN is trained with the original activation, the weights are then fixed, the activation is substituted by SLAF, and the CNN is briefly re-trained to adapt the SLAF coefficients. We show that such self-learning can achieve the same accuracy (99.38%) as a non-polynomial ReLU over non-homomorphic CNNs and leads to higher accuracy (99.21%) and higher performance (6.26 times faster) than the state-of-the-art CNN-HE CryptoNets on the MNIST optical character recognition benchmark dataset.
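To illustrate what a self-learning activation function of the kind described above looks like in code, the sketch below defines a polynomial activation with trainable coefficients (HE-friendly because it needs only additions and multiplications) and drops it into a toy CNN. Degree, initialisation, and the architecture are illustrative; the homomorphic-encryption inference pipeline and the CryptoNets comparison are not shown.

import torch
import torch.nn as nn

class SLAF(nn.Module):
    """y = c0 + c1*x + ... + cd*x^d, coefficients learned jointly with the weights."""
    def __init__(self, degree: int = 3):
        super().__init__()
        coeffs = torch.zeros(degree + 1)
        coeffs[1] = 1.0                      # start close to the identity function
        self.coeffs = nn.Parameter(coeffs)
    def forward(self, x):
        y = torch.zeros_like(x)
        xp = torch.ones_like(x)
        for c in self.coeffs:                # evaluate the polynomial term by term
            y = y + c * xp
            xp = xp * x
        return y

# Drop-in replacement for ReLU in a small CNN (toy architecture, MNIST-sized input).
model = nn.Sequential(nn.Conv2d(1, 8, 3), SLAF(), nn.Flatten(), nn.Linear(8 * 26 * 26, 10))
print(model(torch.randn(2, 1, 28, 28)).shape)   # torch.Size([2, 10])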


Subject(s)
Computer Security, Neural Networks, Computer, Privacy, Humans, Algorithms, Cloud Computing
11.
BMC Med Ethics ; 25(1): 79, 2024 Jul 22.
Article in English | MEDLINE | ID: mdl-39034385

ABSTRACT

BACKGROUND: Historically, epidemics have been accompanied by the concurrent emergence of stigma, prejudice, and xenophobia. This scoping review aimed to describe and map published research targeting ethical values concerning monkeypox (mpox). In addition, it aimed to understand the research gaps related to mpox associated stigma. METHODS: We comprehensively searched databases (PubMed Central, PubMed Medline, Scopus, Web of Science, Ovid, and Google Scholar) to identify published literature concerning mpox ethical issues and stigma from May 6, 2022, to February 15, 2023. The key search terms used were "monkeypox", "ethics", "morals", "social stigma", "privacy", "confidentiality", "secrecy", "privilege", "egoism", and "metaethics". This scoping review followed the framework proposed by Arksey and O'Malley in 2005 and was further improved by the recommendations of Levac et al. in 2010. RESULTS: The search strategies employed in the scoping review yielded a total of 454 articles. We analyzed the sources, types, and topics of the retrieved articles/studies. The authors were able to identify 32 studies that met inclusion criteria. Six of the 32 included studies were primary research. The study revealed that the ongoing mpox outbreak is contending with a notable surge in misinformation and societal stigma. It highlights the adverse impacts of stigma and ethical concerns associated with mpox, which can negatively affect people with the disease. CONCLUSION: The study's findings underscore the imperative need to enhance public awareness; involve civil society; and promote collaboration among policymakers, medical communities, and social media platforms. These collective endeavors are crucial for mitigating stigma, averting human-to-human transmission, tackling racism, and dispelling misconceptions associated with the outbreak.


Subject(s)
Disease Outbreaks, Mpox, Social Stigma, Humans, Disease Outbreaks/ethics, Mpox/epidemiology, Confidentiality/ethics, Privacy, Morals
12.
PLoS One ; 19(7): e0307686, 2024.
Article in English | MEDLINE | ID: mdl-39078999

ABSTRACT

To ensure optimal use of images while preserving privacy, it is necessary to partition the shared image into public and private areas, with public areas being openly accessible and private areas being shared in a controlled and privacy-preserving manner. Current works only facilitate image-level sharing and use common cryptographic algorithms. To ensure efficient, controlled, and privacy-preserving image sharing at the area level, this paper proposes an image partition security-sharing mechanism based on blockchain and chaotic encryption, which mainly includes a fine-grained access control method based on Attribute-Based Access Control (ABAC) and an image-specific chaotic encryption scheme. The proposed fine-grained access control method employs smart contracts based on the ABAC model to achieve automatic access control for private areas. It employs a Cuckoo filter-based transaction retrieval technique to enhance the efficiency of smart contracts in retrieving security attributes and policies on the blockchain. The proposed chaotic encryption scheme generates keys based on the private areas' security attributes, largely reducing the number of keys required. It also provides efficient encryption with vector operation acceleration. The security analysis and performance evaluation were conducted comprehensively. The results show that the proposed mechanism has lower time overhead than current works as the number of images increases.
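As a rough illustration of the chaotic-encryption component described above, the sketch below XORs a logistic-map keystream over a private rectangular area of an image. In the proposed mechanism the key would be derived from the area's security attributes and access would be mediated by ABAC smart contracts on the blockchain; neither is shown here, and the map parameters are illustrative.

import numpy as np

def logistic_keystream(n: int, x0: float, r: float = 3.99) -> np.ndarray:
    """Byte keystream from iterating the logistic map x <- r*x*(1-x)."""
    x, out = x0, np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) % 256
    return out

def xor_region(image: np.ndarray, top: int, left: int, h: int, w: int,
               x0: float, r: float) -> np.ndarray:
    """Encrypt (or decrypt, since XOR is symmetric) one private area of the image."""
    out = image.copy()
    region = out[top:top + h, left:left + w]
    ks = logistic_keystream(region.size, x0, r).reshape(region.shape)
    out[top:top + h, left:left + w] = region ^ ks
    return out

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
enc = xor_region(img, 8, 8, 16, 16, x0=0.31, r=3.97)
dec = xor_region(enc, 8, 8, 16, 16, x0=0.31, r=3.97)
assert np.array_equal(img, dec)                    # round-trip recovers the region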


Subject(s)
Algorithms, Blockchain, Computer Security, Privacy
13.
J Law Med Ethics ; 52(S1): 70-74, 2024.
Article in English | MEDLINE | ID: mdl-38995251

ABSTRACT

Here, we analyze the public health implications of recent legal developments - including privacy legislation, intergovernmental data exchange, and artificial intelligence governance - with a view toward the future of public health informatics and the potential of diverse data to inform public health actions and drive population health outcomes.


Subject(s)
Artificial Intelligence, Humans, Artificial Intelligence/legislation & jurisprudence, United States, Confidentiality/legislation & jurisprudence, Public Health Informatics/legislation & jurisprudence, Public Health/legislation & jurisprudence, Privacy/legislation & jurisprudence
15.
Sensors (Basel) ; 24(14)2024 Jul 09.
Article in English | MEDLINE | ID: mdl-39065842

ABSTRACT

This paper presents an on-device semi-supervised human activity detection system that can learn and predict human activity patterns in real time. The clinical objective is to monitor and detect the unhealthy sedentary lifestyle of a user. The proposed semi-supervised learning (SSL) framework uses sparsely labelled user activity events acquired from Inertial Measurement Unit sensors installed as wearable devices. The proposed cluster-based learning model in this approach is trained with data from the same target user, thus preserving data privacy while providing personalized activity detection services. Two different cluster labelling strategies, namely a population-based and a distance-based strategy, are employed to achieve the desired classification performance. The proposed system is shown to be highly accurate and computationally efficient across different algorithmic parameters, which is relevant given the limited computing resources of typical wearable devices. Extensive experimentation and simulation studies were conducted on multi-user human activity data from the public domain to analyze the trade-off between classification accuracy and computational complexity of the proposed learning paradigm under different algorithmic hyper-parameters. With 4.17 h of training time for 8,000 activity episodes, the proposed SSL approach consumes at most 20 KB of CPU memory while providing a maximum accuracy of 90% and 100% classification rates.
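To illustrate the two cluster-labelling strategies named above, the sketch below clusters (mostly unlabelled) activity feature windows and propagates the sparse labels either by majority vote within each cluster ("population-based") or from the labelled point nearest the centroid ("distance-based"). IMU feature extraction and the on-device optimisations are not shown; the data are synthetic.

import numpy as np
from sklearn.cluster import KMeans

def cluster_labels(X, y_sparse, n_clusters=4, strategy="population", seed=0):
    """y_sparse uses -1 for unlabelled windows; returns a propagated label per window."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(X)
    out = np.full(len(X), -1, dtype=int)
    for c in range(n_clusters):
        idx = np.where(km.labels_ == c)[0]
        labelled = idx[y_sparse[idx] >= 0]
        if len(labelled) == 0:
            continue                                              # nothing to propagate
        if strategy == "population":                              # majority vote
            vals, counts = np.unique(y_sparse[labelled], return_counts=True)
            out[idx] = vals[np.argmax(counts)]
        else:                                                     # label of point nearest the centroid
            d = np.linalg.norm(X[labelled] - km.cluster_centers_[c], axis=1)
            out[idx] = y_sparse[labelled[np.argmin(d)]]
    return out

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=m, size=(100, 6)) for m in (0, 3, 6, 9)])
truth = np.repeat(np.arange(4), 100)
y = np.full(len(X), -1)
y[::40] = truth[::40]                                             # ~2.5% labelled
print((cluster_labels(X, y) == truth).mean())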


Subject(s)
Algorithms, Wearable Electronic Devices, Humans, Monitoring, Physiologic/methods, Monitoring, Physiologic/instrumentation, Privacy, Supervised Machine Learning, Human Activities, Precision Medicine/methods
16.
Sci Rep ; 14(1): 15763, 2024 07 09.
Article in English | MEDLINE | ID: mdl-38982129

ABSTRACT

The timely identification of autism spectrum disorder (ASD) in children is imperative to prevent potential challenges as they grow. When sharing data related to autism for an accurate diagnosis, safeguarding its security and privacy is a paramount concern to fend off unauthorized access, modification, or theft during transmission. Researchers have devised diverse security and privacy models and frameworks, most of which leverage proprietary algorithms or adapt existing ones to address data leakage. However, conventional anonymization methods, although effective in the sanitization process, have proved inadequate for the restoration process. Furthermore, despite numerous scholarly contributions aimed at refining the restoration process, restoration accuracy remains notably deficient. Based on these problems, this paper presents a novel approach to data restoration for sanitized sensitive autism datasets with improved performance. In a prior study, we constructed an optimal key for the sanitization process using the proposed Enhanced Combined PSO-GWO framework. This key was used to conceal sensitive autism data in the database, thus avoiding information leakage. In this research, the same key was employed during the data restoration process to enhance the accuracy of original data recovery. The study thus enhances the security and privacy of ASD data restoration by utilizing an optimal key produced via the Enhanced Combined PSO-GWO framework. Compared with existing meta-heuristic algorithms, the simulation results from the autism data restoration experiments demonstrated highly competitive accuracies of 99.90%, 99.60%, 99.50%, 99.25%, and 99.70%. Among the four types of datasets used, the method outperforms existing methods most clearly on the 30-month autism children dataset.
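For orientation only, the sketch below shows a generic hybrid PSO-GWO step of the metaheuristic family named above: particles keep a PSO velocity term while also being pulled toward the three best solutions (alpha, beta, delta), as in grey wolf optimisation. The authors' Enhanced Combined PSO-GWO variant and the key-based sanitisation objective are not reproduced; the sphere function stands in for the real fitness function.

import numpy as np

def hybrid_pso_gwo(obj, dim=8, n=30, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5, 5, (n, dim))              # candidate solutions (e.g., keys)
    V = np.zeros_like(X)                          # PSO velocities
    pbest, pbest_f = X.copy(), np.apply_along_axis(obj, 1, X)
    for t in range(iters):
        order = np.argsort(pbest_f)
        alpha, beta, delta = pbest[order[:3]]     # three best solutions (GWO leaders)
        a = 2.0 * (1 - t / iters)                 # GWO exploration coefficient decays
        r1, r2, r3 = rng.random((3, n, dim))
        gwo_target = (alpha + beta + delta) / 3.0
        V = 0.7 * V + 1.5 * r1 * (pbest - X) + 1.5 * r2 * (gwo_target - X)
        X = X + V + a * (2 * r3 - 1)              # small GWO-style random exploration
        f = np.apply_along_axis(obj, 1, X)
        better = f < pbest_f
        pbest[better], pbest_f[better] = X[better], f[better]
    best = np.argmin(pbest_f)
    return pbest[best], pbest_f[best]

key, score = hybrid_pso_gwo(lambda x: np.sum(x ** 2))
print(score)                                      # approaches 0 on the sphere objective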


Subject(s)
Algorithms, Autism Spectrum Disorder, Databases, Factual, Humans, Autistic Disorder/diagnosis, Computer Security, Child, Privacy
17.
J Med Internet Res ; 26: e60083, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-38971715

ABSTRACT

This viewpoint article first explores the ethical challenges associated with the future application of large language models (LLMs) in the context of medical education. These challenges include not only ethical concerns related to the development of LLMs, such as artificial intelligence (AI) hallucinations, information bias, privacy and data risks, and deficiencies in terms of transparency and interpretability but also issues concerning the application of LLMs, including deficiencies in emotional intelligence, educational inequities, problems with academic integrity, and questions of responsibility and copyright ownership. This paper then analyzes existing AI-related legal and ethical frameworks and highlights their limitations with regard to the application of LLMs in the context of medical education. To ensure that LLMs are integrated in a responsible and safe manner, the authors recommend the development of a unified ethical framework that is specifically tailored for LLMs in this field. This framework should be based on 8 fundamental principles: quality control and supervision mechanisms; privacy and data protection; transparency and interpretability; fairness and equal treatment; academic integrity and moral norms; accountability and traceability; protection and respect for intellectual property; and the promotion of educational research and innovation. The authors further discuss specific measures that can be taken to implement these principles, thereby laying a solid foundation for the development of a comprehensive and actionable ethical framework. Such a unified ethical framework based on these 8 fundamental principles can provide clear guidance and support for the application of LLMs in the context of medical education. This approach can help establish a balance between technological advancement and ethical safeguards, thereby ensuring that medical education can progress without compromising the principles of fairness, justice, or patient safety and establishing a more equitable, safer, and more efficient environment for medical education.


Subject(s)
Artificial Intelligence, Education, Medical, Education, Medical/ethics, Humans, Artificial Intelligence/ethics, Language, Privacy
18.
Soc Sci Med ; 356: 117137, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39059129

ABSTRACT

This study investigates the factors influencing the comfort level of the general public when disclosing personal information for the coronavirus disease 2019 contact tracing. This is a secondary analysis of the American Trends Panel, a national probability-based online panel of American adults, with data collected by the Pew Research Center between July 13 and 19, 2020. Grounded in privacy management theories, ordered logistic regression analyses examined three types of information disclosure: places visited, names of contacts, and location data from cell phones. Key independent variables measured trust in the stakeholders' ability to protect data and perceived risks to health and finances. The findings suggest that higher levels of trust in entities' data security capabilities can predict individuals' comfort levels when disclosing personal data. Additionally, the participants were more comfortable with noncommercial data use, especially when it was used by researchers and state and local officials. However, financial threats showed variations in sharing certain types of data. Individuals were less likely to feel at ease sharing contact tracing data as concerns about personal finances increased. Similarly, when individuals perceived threats to the U.S. economy, they were less likely to feel comfortable sharing their location data from cell phones, which might have been perceived as intrusive. Public health outreach efforts should account for individual differences and the nature of the information requested in commercial and noncommercial contexts. Future studies can enhance the explanatory capacity of data disclosure models by incorporating additional relevant contextual and environmental variables.
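For illustration, the ordered (ordinal) logistic regression used in the study can be sketched with statsmodels on synthetic stand-in data: the outcome is a 4-level comfort rating and the predictors are invented trust and perceived-risk scores (the actual American Trends Panel items, weights, and coefficients are not reproduced here).

import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "trust_data_security": rng.integers(1, 5, n),       # 1-4 scale (assumed coding)
    "perceived_financial_risk": rng.integers(1, 5, n),
})
latent = (0.8 * df["trust_data_security"]
          - 0.5 * df["perceived_financial_risk"]
          + rng.logistic(size=n))
df["comfort"] = pd.cut(latent, bins=[-np.inf, 0, 1.5, 3, np.inf], labels=[0, 1, 2, 3])

model = OrderedModel(df["comfort"],
                     df[["trust_data_security", "perceived_financial_risk"]],
                     distr="logit")
res = model.fit(method="bfgs", disp=False)
print(res.summary())   # a positive trust coefficient means higher comfort disclosing data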


Subject(s)
COVID-19, Contact Tracing, Privacy, Trust, Humans, Contact Tracing/methods, COVID-19/epidemiology, COVID-19/prevention & control, Female, Male, Adult, Middle Aged, United States, Trust/psychology, Aged, Confidentiality, Young Adult, Disclosure
20.
Comput Methods Programs Biomed ; 254: 108289, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38905988

ABSTRACT

BACKGROUND AND OBJECTIVE: Cardiovascular disease (CD) is a major global health concern, affecting millions with symptoms like fatigue and chest discomfort. Timely identification is crucial due to its significant contribution to global mortality. In healthcare, artificial intelligence (AI) holds promise for advancing disease risk assessment and treatment outcome prediction. However, the evolution of machine learning (ML) raises concerns about data privacy and biases, especially in sensitive healthcare applications. The objective is to develop and implement a responsible AI model for CD prediction that prioritizes patient privacy and security while ensuring transparency, explainability, fairness, and ethical adherence in healthcare applications. METHODS: To predict CD while prioritizing patient privacy, our study employed data anonymization, which involved adding Laplace noise to sensitive features such as age and gender. The anonymized dataset was analysed within a differential privacy (DP) framework to preserve data privacy while extracting insights. The methodology, compared against Logistic Regression (LR), Gaussian Naïve Bayes (GNB), and Random Forest (RF), integrated feature selection, statistical analysis, and SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME) for interpretability. This approach facilitates transparent and interpretable AI decision-making, aligning with responsible AI development principles. Overall, it combines privacy preservation, interpretability, and ethical considerations for accurate CD predictions. RESULTS: Results from the DP framework with LR were promising, with an area under the curve (AUC) of 0.848 ± 0.03, an accuracy of 0.797 ± 0.02, precision of 0.789 ± 0.02, recall of 0.797 ± 0.02, and an F1 score of 0.787 ± 0.02, performance comparable to the non-private framework. The SHAP- and LIME-based results support the clinical findings, demonstrate a commitment to transparent and interpretable AI decision-making, and align with the principles of responsible AI development. CONCLUSIONS: Our study endorses a novel approach to predicting CD that combines data anonymization, privacy-preserving methods, the interpretability tools SHAP and LIME, and ethical considerations. This responsible AI framework ensures accurate predictions, privacy preservation, and user trust, underscoring the significance of comprehensive and transparent ML models in healthcare. This research therefore strengthens the ability to forecast CD, providing a vital lifeline to millions of CD patients globally and potentially preventing numerous fatalities.
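To make the anonymisation step concrete, the sketch below adds Laplace noise calibrated to sensitivity/epsilon to sensitive numeric columns such as age; the epsilon, sensitivity, and toy feature set are illustrative, and the downstream DP modelling and SHAP/LIME analyses are not shown.

import numpy as np
import pandas as pd

def laplace_anonymise(df: pd.DataFrame, columns, epsilon: float = 1.0,
                      sensitivity: float = 1.0, seed: int = 0) -> pd.DataFrame:
    """Return a copy of df with Laplace(0, sensitivity/epsilon) noise added to columns."""
    rng = np.random.default_rng(seed)
    out = df.copy()
    scale = sensitivity / epsilon                  # Laplace mechanism scale b
    for col in columns:
        out[col] = out[col] + rng.laplace(0.0, scale, size=len(out))
    return out

raw = pd.DataFrame({"age": [54, 61, 47, 39], "sex": [1, 0, 1, 1], "chol": [230, 180, 250, 210]})
print(laplace_anonymise(raw, ["age", "sex"], epsilon=2.0))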


Subject(s)
Artificial Intelligence, Cardiovascular Diseases, Machine Learning, Humans, Cardiovascular Diseases/diagnosis, Bayes Theorem, Female, Male, Privacy, Logistic Models, Confidentiality, Algorithms, Middle Aged, Data Anonymization, Risk Assessment/methods
...