Results 1-20 of 8,331
1.
Sci Rep ; 14(1): 8690, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38622216

ABSTRACT

In the era of artificial intelligence, privacy empowerment illusion has become a crucial means for digital enterprises and platforms to "manipulate" users and create an illusion of control, and it has become a pressing concern for current research. However, existing studies are limited in their perspectives and methodologies, making it challenging to fully explain why users express concerns about privacy empowerment illusion yet repeatedly disclose their personal information. This study combines the associative-propositional evaluation (APE) model and cognitive load theory, using event-related potential (ERP) technology to investigate how the comprehensibility and interpretability of privacy empowerment illusion cues affect users' immediate attitudes and privacy disclosure behaviours, with these effects mediated by differences in psychological processing and cognitive load. Behavioural results indicate that under privacy empowerment illusion cues with low comprehensibility, users are more inclined to disclose private information when interpretability is high than when it is low. EEG results show that under low-comprehensibility cues, high interpretability induces greater P2 amplitudes than low interpretability, whereas low interpretability induces greater N2 amplitudes than high interpretability. This study extends the scope of the APE model and cognitive load theory to privacy research, providing new insights into privacy attitudes and offering a framework through which digital enterprises can better understand users' genuine privacy attitudes and immediate reactions under privacy empowerment illusion. This understanding can help strengthen user privacy protection and improve users' overall online experience.


Subjects
Hominidae, Illusions, Humans, Animals, Privacy/psychology, Disclosure, Cues (Psychology), Artificial Intelligence, Cognition
2.
JAMA Netw Open ; 7(4): e245861, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38602678

ABSTRACT

Importance: Hospital websites frequently use tracking technologies that transfer user information to third parties. It is not known whether hospital websites include privacy policies that disclose relevant details regarding tracking. Objective: To determine whether hospital websites have accessible privacy policies and whether those policies contain key information related to third-party tracking. Design, Setting, and Participants: In this cross-sectional content analysis of website privacy policies of a nationally representative sample of nonfederal acute care hospitals, hospital websites were first assessed to determine whether they included tracking technologies that transferred user information to third parties. Hospital website privacy policies were then identified using standardized searches. Policies were assessed for length and readability. Policy content was analyzed using a data abstraction form. Tracking measurement and privacy policy retrieval and analysis took place from November 2023 to January 2024. The prevalence of privacy policy characteristics was analyzed using standard descriptive statistics. Main Outcomes and Measures: The primary study outcome was the availability of a website privacy policy. Secondary outcomes were the length and readability of privacy policies and the inclusion of privacy policy content addressing user information collected by the website, potential uses of user information, third-party recipients of user information, and user rights regarding tracking and information collection. Results: Of 100 hospital websites, 96 (96.0%; 95% CI, 90.1%-98.9%) transferred user information to third parties. Privacy policies were found on 71 websites (71.0%; 95% CI, 61.6%-79.4%). Policies were a mean length of 2527 words (95% CI, 2058-2997 words) and were written at a mean grade level of 13.7 (95% CI, 13.4-14.1). Among 71 privacy policies, 69 (97.2%; 95% CI, 91.4%-99.5%) addressed types of user information automatically collected by the website, 70 (98.6%; 95% CI, 93.8%-99.9%) addressed how collected information would be used, 66 (93.0%; 95% CI, 85.3%-97.5%) addressed categories of third-party recipients of user information, and 40 (56.3%; 95% CI, 44.5%-67.7%) named specific third-party companies or services receiving user information. Conclusions and Relevance: In this cross-sectional study of hospital website privacy policies, a substantial number of hospital websites did not present users with adequate information about the privacy implications of website use, either because they lacked a privacy policy or had a privacy policy that contained limited content about third-party recipients of user information.
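
As a hedged aside (not from the article, which does not state its interval method), the reported proportions are consistent with exact Clopper-Pearson binomial intervals; a short Python sketch reproducing the headline figure of 96/100 websites (95% CI, 90.1%-98.9%):

```python
from scipy.stats import beta

def clopper_pearson(k: int, n: int, alpha: float = 0.05):
    """Exact (Clopper-Pearson) two-sided confidence interval for a
    binomial proportion of k successes out of n trials."""
    lower = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lower, upper

# 96 of 100 sampled hospital websites transferred user data to third parties.
lo, hi = clopper_pearson(96, 100)
print(f"96.0% (95% CI, {lo:.1%}-{hi:.1%})")  # approx. 90.1%-98.9%, as reported
```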


Subjects
Hospitals, Privacy, Humans, Cross-Sectional Studies, Information Dissemination, Policies
3.
BMC Health Serv Res ; 24(1): 439, 2024 Apr 08.
Article in English | MEDLINE | ID: mdl-38589922

ABSTRACT

BACKGROUND: Electronic health records (EHR) are becoming an integral part of the health system in many developed countries, though implementations and settings vary. Some countries have adopted an opt-out policy, in which patients are enrolled in the EHR system following a default nudge, while others apply an opt-in policy, in which patients must take action to opt into the system. Opt-in systems may exhibit lower levels of active user requests for access, whereas in opt-out systems a notable percentage of users may passively retain access. This study therefore explores the facilitators and barriers that help explain EHR usage (i.e., actively accessing the EHR system) in two countries with either an opt-in or an opt-out setting, exemplified by France and Austria. METHODS: A qualitative exploratory approach using a semi-structured interview guideline was undertaken in both countries: 1) in Austria, with four homogeneously composed group discussions, and 2) in France, with 19 individual patient interviews. The data were collected from October 2020 to January 2021. RESULTS: Influencing factors were categorized into twelve subcategories. Patients have similar experiences in both countries with regard to all facilitating categories, for instance, the role of health providers, awareness of the EHR, and social norms. However, we highlighted important differences between the two systems regarding the hurdles impeding EHR usage, namely a lack of communication, transparency, and information security surrounding the EHR. CONCLUSION: Implementing additional safeguards to enhance privacy protection and supporting patients in improving their digital skills may help diminish perceived EHR-induced barriers and improve patients' health and commitment in the long term. PRACTICAL IMPLICATIONS: Understanding the differences and similarities will help develop practical measures to tackle the problem of low EHR usage rates in the long run, a problem prevalent in countries with both types of EHR default settings.


Subjects
Communication, Electronic Health Records, Humans, Austria, Privacy, Patients
4.
AJOB Neurosci ; 15(2): 146-148, 2024.
Article in English | MEDLINE | ID: mdl-38568702
9.
AJOB Neurosci ; 15(2): 136-138, 2024.
Article in English | MEDLINE | ID: mdl-38568711
12.
IEEE Trans Image Process ; 33: 2714-2729, 2024.
Article in English | MEDLINE | ID: mdl-38557629

ABSTRACT

Billions of people share images from their daily lives on social media every day. However, their biometric information (e.g., fingerprints) can easily be stolen from these images. The threat of fingerprint leakage from social media has created a strong desire to anonymize shared images while maintaining image quality, since fingerprints act as a lifelong individual biometric password. To guard against fingerprint leakage, adversarial attacks that add imperceptible perturbations to fingerprint images have emerged as a feasible solution. However, existing works of this kind are either weak in black-box transferability or give images an unnatural appearance. Motivated by the visual perception hierarchy (i.e., high-level perception exploits model-shared semantics that transfer well across models, while low-level perception extracts primitive stimuli that produce high visual sensitivity when a suspicious stimulus is present), we propose FingerSafe, a hierarchical perceptual protective noise injection framework that addresses the above-mentioned problems. For black-box transferability, we inject protective noise into the fingerprint orientation field to perturb the model-shared high-level semantics (i.e., fingerprint ridges). For visual naturalness, we suppress the low-level local contrast stimulus by regularizing the response of the lateral geniculate nucleus. FingerSafe is the first to provide feasible fingerprint protection in both digital (up to 94.12%) and realistic scenarios (Twitter and Facebook, up to 68.75%). Our code can be found at https://github.com/nlsde-safety-team/FingerSafe.
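
The abstract names the general technique (imperceptible adversarial perturbations) without implementation detail; below is a minimal, hypothetical PGD-style sketch of that generic idea in PyTorch. The `model` and `true_id` placeholders are assumptions, and FingerSafe's actual orientation-field and lateral-geniculate-nucleus components are not reproduced here (see the linked repository):

```python
import torch
import torch.nn.functional as F

def protective_noise(model, image, true_id, eps=4 / 255, alpha=1 / 255, steps=10):
    """Hypothetical PGD-style loop: add a bounded perturbation that raises the
    recognizer's loss on the true identity, so the fingerprint no longer
    matches. FingerSafe itself perturbs the orientation field and regularizes
    an LGN-like contrast response; neither is reproduced in this sketch."""
    x_adv = image.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), true_id)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Gradient ascent on the loss, projected to an imperceptible L-inf ball.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = image + (x_adv - image).clamp(-eps, eps)
        x_adv = x_adv.clamp(0, 1)
    return x_adv
```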


Subjects
Social Media, Humans, Dermatoglyphics, Privacy, Visual Perception
13.
Science ; 384(6691): eado9298, 2024 Apr 05.
Article in English | MEDLINE | ID: mdl-38574154

ABSTRACT

Concerns about the ethical use of data, privacy, and data harms are front of mind in many jurisdictions as regulators move to impose tighter controls on data privacy and protection, and the use of artificial intelligence (AI). Although efforts to hold corporations to account for their deployment of data and data-driven technologies have been largely welcomed by academics and civil society, there is a growing recognition of the limits to individual data rights, given the capacity of tech giants to link, surveil, target, and make inferences about groups. Questions about whether collective data rights exist, and how they can be recognized and protected, have provided fertile ground for researchers but have yet to penetrate the broader discourse on data rights and regulation.


Subjects
Artificial Intelligence, Fertility, New Zealand, Privacy, Recognition (Psychology)
14.
Sci Eng Ethics ; 30(2): 13, 2024 Apr 04.
Article in English | MEDLINE | ID: mdl-38575812

ABSTRACT

Controversies surrounding social media platforms have provided opportunities for institutional reflexivity amongst users and regulators on how to understand and govern platforms. Amidst contestation, platform companies have continued to enact projects that draw upon existing modes of privatized governance. We investigate how social media companies have attempted to achieve closure by continuing to set the terms around platform governance. We examine two projects implemented by Facebook (Meta), authenticity regulation and privacy controls, in response to the Russian interference and Cambridge Analytica controversies surrounding the 2016 U.S. Presidential Election. Drawing on Goffman's metaphor of stage management, we analyze the techniques deployed by Facebook to reinforce a division between what is visible and invisible to the user experience. These platform governance projects propose to act upon front-stage data relations: information that users can see from other users, whether that is content from "bad actors" or information that other users can see about oneself. At the same time, these projects relegate back-stage data relations, the information flows between users constituted by recommendation and targeted advertising systems, to invisibility and inaction. As such, Facebook renders the user experience actionable for governance while foreclosing governance of the back-stage data relations central to the economic value of the platform. As social media companies continue to perform platform governance projects following controversies, our paper invites reflection on the politics of these projects. By destabilizing the boundaries drawn by platform companies, we open space for continuous reflexivity on how platforms should be understood and governed.


Subjects
Social Media, Humans, Politics, Privacy
16.
Swiss Med Wkly ; 154: 3538, 2024 Jan 22.
Article in English | MEDLINE | ID: mdl-38579329

ABSTRACT

BACKGROUND: While health data sharing for research purposes is strongly supported in principle, it can be challenging to implement in practice. Little is known about the actual bottlenecks to health data sharing in Switzerland. AIMS OF THE STUDY: This study aimed to assess the obstacles to Swiss health data sharing, including legal, ethical, and logistical bottlenecks. METHODS: We identified 37 key stakeholders in data sharing via the Swiss Personalised Health Network ecosystem, defined as being an expert on sharing sensitive health data for research purposes at a Swiss university hospital (or a Swiss disease cohort) or being a stakeholder in data sharing at a public or private institution that uses such data. We conducted semi-structured interviews, which were transcribed, translated when necessary, and de-identified. The entire research team discussed the transcripts and notes taken during each interview before an inductive coding process occurred. RESULTS: Eleven semi-structured interviews were conducted (primarily in English) with 17 individuals representing lawyers, data protection officers, ethics committee members, scientists, project managers, bioinformaticians, clinical trials unit members, and biobank stakeholders. Most respondents felt that the bottleneck was not the actual data transfer but rather the processes and systems around it, which were considered time-intensive and confusing. The templates developed by the Swiss Personalised Health Network and the Swiss general consent process were generally felt to have streamlined processes significantly. However, logistics and data quality issues remain practical bottlenecks in Swiss health data sharing. Areas of legal uncertainty include privacy laws when sharing data internationally, questions of "who owns the data", inconsistencies created because the Swiss general consent is perceived as being implemented differently across institutions, and the definition and operationalisation of anonymisation and pseudonymisation. Many participants desired to create a "culture of data sharing" and to recognise that data sharing is a process with many steps, not an event, requiring sustained effort and personnel. Some participants also stressed a desire to move away from data sharing and the current privacy focus towards processes that facilitate data access. CONCLUSIONS: Facilitating a data access culture in Switzerland may require legal clarifications, further education about the process, resources to support data sharing, and further investment in sustainable infrastructure by funders and institutions.


Subjects
Ecosystem, Privacy, Humans, Switzerland, Information Dissemination, Qualitative Research
17.
Comput Biol Med ; 173: 108351, 2024 May.
Article in English | MEDLINE | ID: mdl-38520921

ABSTRACT

Single-cell transcriptomics data provide crucial insights into patients' health yet pose significant privacy concerns. Genomic data privacy attacks can have deep implications, compromising not only patients' health information but also that of their family members. Moreover, the permanence of leaked data exacerbates the challenge, making retraction impossible. While extensive efforts have been directed towards clustering single-cell transcriptomics data, addressing critical challenges, especially privacy, remains pivotal. This paper introduces an efficient, fast, privacy-preserving approach for clustering single-cell RNA-sequencing (scRNA-seq) datasets. The key contributions include ensuring data privacy, achieving high-quality clustering, accommodating the high dimensionality inherent in the datasets, and maintaining reasonable computation time for big-scale datasets. Our proposed approach uses the map-reduce scheme to parallelize clustering, addressing intensive calculation challenges. Intel Software Guard eXtension (SGX) processors are used to ensure the security of sensitive code and data during processing. Additionally, the approach incorporates a logarithm transformation as a preprocessing step, employs non-negative matrix factorization for dimensionality reduction, and uses parallel k-means for clustering. The approach fully leverages the computing capabilities of all processing resources within a secure private cloud environment. Experimental results demonstrate the efficacy of our approach in preserving patient privacy while surpassing state-of-the-art methods in both clustering quality and computation time. Our method consistently achieves an Adjusted Rand Index (ARI) at least 7% higher than existing approaches, depending on dataset size. Additionally, owing to parallel computation and dimensionality reduction, our approach converges to very good results in less than 10 seconds for a scRNA-seq dataset with 5000 genes and 6000 cells when prioritizing privacy, and in under two seconds without privacy considerations. Availability and implementation: code and datasets are available at https://github.com/University-of-Windsor/PPPCT.
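
Leaving aside the SGX enclave and map-reduce parallelization, the pipeline steps the abstract names (log transformation, non-negative matrix factorization, k-means, ARI evaluation) can be sketched with scikit-learn; the toy matrix and parameter values below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

# Toy cells x genes count matrix; a real run would load an scRNA-seq dataset.
rng = np.random.default_rng(seed=0)
counts = rng.poisson(lam=2.0, size=(600, 500)).astype(float)

X = np.log1p(counts)                               # logarithm transformation
nmf = NMF(n_components=20, init="nndsvda", max_iter=500, random_state=0)
W = nmf.fit_transform(X)                           # dimensionality reduction
labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(W)

# Given ground-truth cell types, clustering quality is scored as in the paper:
# adjusted_rand_score(true_cell_types, labels)
```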


Subjects
Privacy, Software, Humans, Algorithms, Gene Expression Profiling, Cluster Analysis, RNA Sequence Analysis
18.
BMC Med Inform Decis Mak ; 24(1): 67, 2024 Mar 06.
Article in English | MEDLINE | ID: mdl-38448921

ABSTRACT

Deep learning has been increasingly utilized in the medical field and has achieved many goals. Since the amount of data dominates the performance of deep learning, several medical institutions are conducting joint research to obtain as much data as possible. However, sharing data is usually prohibited owing to the risk of privacy invasion. Federated learning is a reasonable way to train on distributed multicenter data without direct access; however, it requires a central server to merge and distribute models, which is expensive and rarely approved under various legal regulations. This paper proposes a continual learning framework for multicenter studies that does not require a central server and can prevent catastrophic forgetting of previously trained knowledge. The proposed framework contains a continual learning method selection process, assuming that no single method is omnipotent for all involved datasets in a real-world setting and that a proper method can be selected for specific data. We used fake data based on a generative adversarial network to evaluate methods prospectively, not ex post facto. We used four independent electrocardiogram datasets for a multicenter study and trained an arrhythmia detection model. Our proposed framework was evaluated against supervised and federated learning methods, as well as fine-tuning approaches that do not include any regularization to preserve previous knowledge. Even without a central server and access to past data, our framework achieved stable performance (AUROC 0.897) across all involved datasets, comparable to federated learning (AUROC 0.901).
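
As a rough illustration of the kind of method such a framework selects among (the paper evaluates several; this is not its specific algorithm), here is a minimal PyTorch sketch of sequential multicenter training with an L2 penalty anchoring each center's weights to the previously learned ones, a simplified EWC-style regularization; all names are illustrative:

```python
import torch
import torch.nn.functional as F

def train_on_center(model, loader, prev_params=None, lam=10.0, lr=1e-3):
    """One sequential training round on a single center's data. An L2 penalty
    anchors the weights to those learned at previous centers, a simplified
    EWC-style guard against catastrophic forgetting."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for x, y in loader:
        loss = F.cross_entropy(model(x), y)
        if prev_params is not None:  # preserve previously trained knowledge
            loss = loss + lam * sum(
                ((p - q) ** 2).sum()
                for p, q in zip(model.parameters(), prev_params)
            )
        opt.zero_grad()
        loss.backward()
        opt.step()
    # Snapshot weights to anchor the next center's training round.
    return [p.detach().clone() for p in model.parameters()]
```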


Subjects
Electrocardiography, Multicenter Studies as Topic, Humans, Knowledge, Privacy
19.
JMIR Mhealth Uhealth ; 12: e48986, 2024 Mar 07.
Article in English | MEDLINE | ID: mdl-38451602

ABSTRACT

BACKGROUND: Contact tracing technology has been adopted in many countries to aid in identifying, evaluating, and handling individuals who have had contact with those infected with COVID-19. Singapore was among the countries that actively implemented the government-led contact tracing program known as TraceTogether. Despite the benefits the contact tracing program could provide to individuals and the community, privacy issues were a significant barrier to individuals' acceptance of the program. OBJECTIVE: Building on the privacy calculus model, this study investigates how the perceptions of the 2 key groups involved in digital contact tracing (ie, the government and community members) factor into individuals' privacy calculus. METHODS: Using a mixed methods approach, we conducted (1) a 2-wave survey (n=674) and (2) in-depth interviews (n=12) with TraceTogether users in Singapore. Using structural equation modeling, this study investigated how trust in the government and the sense of community exhibited by individuals during the early stage of implementation (time 1) predicted privacy concerns, perceived benefits, and future use intentions, measured after the program was fully implemented (time 2). Expanding on the survey results, we conducted one-on-one interviews to gain in-depth insights into the privacy considerations involved in digital contact tracing. RESULTS: The survey results showed that trust in the government increased perceived benefits while decreasing privacy concerns regarding the use of TraceTogether. Furthermore, individuals who felt a connection to community members by participating in the program (ie, a sense of community) were more inclined to believe in its benefits. The sense of community also moderated the influence of government trust on perceived benefits. Follow-up in-depth interviews highlighted that a sense of control over information and transparency in the government's data management were crucial factors in privacy considerations. The interviews also highlighted surveillance as the most prevalent privacy concern regarding TraceTogether use. In addition, our findings revealed that trust in the government, particularly the perceived transparency of government actions, was most strongly associated with concerns regarding the secondary use of data. CONCLUSIONS: Using a mixed methods approach involving a 2-wave survey and in-depth interview data, we expanded our understanding of privacy decisions and the privacy calculus in the context of digital contact tracing. The opposing influences of privacy concerns and perceived benefits on use intention suggest that the privacy calculus in TraceTogether might be viewed as a rational process of weighing privacy risks against use benefits to make an uptake decision. However, our study demonstrated that existing perceptions of the provider and the government in the contact tracing context, as well as the perception of community triggered by TraceTogether use, may bias user appraisals of privacy risks and the benefits of contact tracing.


Subjects
COVID-19, Contact Tracing, Trust, Humans, COVID-19/epidemiology, COVID-19/prevention & control, Government, Privacy, Social Cohesion
20.
J Med Internet Res ; 26: e53008, 2024 Mar 08.
Article in English | MEDLINE | ID: mdl-38457208

ABSTRACT

As advances in artificial intelligence (AI) continue to transform and revolutionize the field of medicine, understanding the potential uses of generative AI in health care becomes increasingly important. Generative AI, including models such as generative adversarial networks and large language models, shows promise in transforming medical diagnostics, research, treatment planning, and patient care. However, these data-intensive systems pose new threats to protected health information. This Viewpoint paper aims to explore various categories of generative AI in health care, including medical diagnostics, drug discovery, virtual health assistants, medical research, and clinical decision support, while identifying security and privacy threats within each phase of the life cycle of such systems (ie, data collection, model development, and implementation phases). The objectives of this study were to analyze the current state of generative AI in health care, identify opportunities and privacy and security challenges posed by integrating these technologies into existing health care infrastructure, and propose strategies for mitigating security and privacy risks. This study highlights the importance of addressing the security and privacy threats associated with generative AI in health care to ensure the safe and effective use of these systems. The findings of this study can inform the development of future generative AI systems in health care and help health care organizations better understand the potential benefits and risks associated with these systems. By examining the use cases and benefits of generative AI across diverse domains within health care, this paper contributes to theoretical discussions surrounding AI ethics, security vulnerabilities, and data privacy regulations. In addition, this study provides practical insights for stakeholders looking to adopt generative AI solutions within their organizations.


Subjects
Artificial Intelligence, Biomedical Research, Humans, Privacy, Data Collection, Language