Results 1 - 20 of 8,684
1.
Sci Rep ; 14(1): 8690, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38622216

ABSTRACT

In the era of artificial intelligence, the privacy empowerment illusion has become a key means by which digital enterprises and platforms "manipulate" users by creating an illusion of control, and it has become a pressing concern for current research. However, existing studies are limited in perspective and methodology, making it difficult to fully explain why users who express concern about the privacy empowerment illusion nevertheless repeatedly disclose their personal information. This study combines the associative-propositional evaluation (APE) model with cognitive load theory and uses event-related potential (ERP) technology to investigate how the comprehensibility and interpretability of privacy empowerment illusion cues affect users' immediate attitudes and privacy disclosure behaviours, mechanisms mediated by differences in psychological processing and cognitive load. Behavioural results indicate that when privacy empowerment illusion cues have low comprehensibility, users are more inclined to disclose private information under high interpretability than under low interpretability. EEG results show that under low-comprehensibility cues, high interpretability induces greater P2 amplitudes than low interpretability, whereas low interpretability induces greater N2 amplitudes than high interpretability. This study extends the APE model and cognitive load theory to the field of privacy research, providing new insights into privacy attitudes and a framework through which digital enterprises can better understand users' genuine privacy attitudes and immediate reactions in privacy empowerment illusion situations. This understanding can help strengthen user privacy protection and improve the overall online experience.


Subjects
Hominidae, Illusions, Humans, Animals, Privacy/psychology, Disclosure, Cues (Psychology), Artificial Intelligence, Cognition
2.
Rev Med Suisse ; 20(870): 808-812, 2024 Apr 17.
Article in French | MEDLINE | ID: mdl-38630042

ABSTRACT

Health and risk of disease are determined by exposure to the physical, socio-economic, and political environment and to this has been added exposure to the digital environment. Our increasingly digital lives have major implications for people's health and its monitoring, as well as for prevention and care. Digital health, which encompasses the use of health applications, connected devices and artificial intelligence medical tools, is transforming medical and healthcare practices. Used properly, it could facilitate patient-centered, inter-professional and data-driven care. However, its implementation raises major concerns and ethical issues, particularly in relation to privacy, equity, and the therapeutic relationship.




Subjects
Artificial Intelligence, Population Health, Humans, 60713, Physical Examination, Privacy
3.
JAMA Netw Open ; 7(4): e245861, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38602678

ABSTRACT

Importance: Hospital websites frequently use tracking technologies that transfer user information to third parties. It is not known whether hospital websites include privacy policies that disclose relevant details regarding tracking. Objective: To determine whether hospital websites have accessible privacy policies and whether those policies contain key information related to third-party tracking. Design, Setting, and Participants: In this cross-sectional content analysis of website privacy policies of a nationally representative sample of nonfederal acute care hospitals, hospital websites were first measured to determine whether they included tracking technologies that transferred user information to third parties. Hospital website privacy policies were then identified using standardized searches. Policies were assessed for length and readability. Policy content was analyzed using a data abstraction form. Tracking measurement and privacy policy retrieval and analysis took place from November 2023 to January 2024. The prevalence of privacy policy characteristics was analyzed using standard descriptive statistics. Main Outcomes and Measures: The primary study outcome was the availability of a website privacy policy. Secondary outcomes were the length and readability of privacy policies and the inclusion of privacy policy content addressing user information collected by the website, potential uses of user information, third-party recipients of user information, and user rights regarding tracking and information collection. Results: Of 100 hospital websites, 96 (96.0%; 95% CI, 90.1%-98.9%) transferred user information to third parties. Privacy policies were found on 71 websites (71.0%; 95% CI, 61.6%-79.4%). Policies were a mean length of 2527 words (95% CI, 2058-2997 words) and were written at a mean grade level of 13.7 (95% CI, 13.4-14.1). 
Among 71 privacy policies, 69 (97.2%; 95% CI, 91.4%-99.5%) addressed types of user information automatically collected by the website, 70 (98.6%; 95% CI, 93.8%-99.9%) addressed how collected information would be used, 66 (93.0%; 95% CI, 85.3%-97.5%) addressed categories of third-party recipients of user information, and 40 (56.3%; 95% CI, 44.5%-67.7%) named specific third-party companies or services receiving user information. Conclusions and Relevance: In this cross-sectional study of hospital website privacy policies, a substantial number of hospital websites did not present users with adequate information about the privacy implications of website use, either because they lacked a privacy policy or had a privacy policy that contained limited content about third-party recipients of user information.
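The mean grade level of 13.7 reported above is a readability score. The abstract does not state which formula the authors used, but grade-level estimates of this kind are commonly computed with the Flesch-Kincaid formula; a minimal sketch (with a deliberately crude syllable counter, so scores are approximate):

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: each run of consecutive vowels counts as one syllable.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text: str) -> float:
    """Flesch-Kincaid grade: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / sentences + 11.8 * syllables / len(words) - 15.59
```

A grade of 13.7 means the text demands roughly two years of college-level reading ability, well above the typical recommendation for patient-facing material.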


Subjects
Hospitals, Privacy, Humans, Cross-Sectional Studies, Information Dissemination, Policies
4.
Swiss Med Wkly ; 154: 3538, 2024 Jan 22.
Article in English | MEDLINE | ID: mdl-38579329

ABSTRACT

BACKGROUND: While health data sharing for research purposes is strongly supported in principle, it can be challenging to implement in practice. Little is known about the actual bottlenecks to health data sharing in Switzerland. AIMS OF THE STUDY: This study aimed to assess the obstacles to Swiss health data sharing, including legal, ethical and logistical bottlenecks. METHODS: We identified 37 key stakeholders in data sharing via the Swiss Personalised Health Network ecosystem, defined as being an expert on sharing sensitive health data for research purposes at a Swiss university hospital (or a Swiss disease cohort) or being a stakeholder in data sharing at a public or private institution that uses such data. We conducted semi-structured interviews, which were transcribed, translated when necessary, and de-identified. The entire research team discussed the transcripts and notes taken during each interview before an inductive coding process occurred. RESULTS: Eleven semi-structured interviews were conducted (primarily in English) with 17 individuals representing lawyers, data protection officers, ethics committee members, scientists, project managers, bioinformaticians, clinical trials unit members, and biobank stakeholders. Most respondents felt that it was not the actual data transfer that was the bottleneck but rather the processes and systems around it, which were considered time-intensive and confusing. The templates developed by the Swiss Personalised Health Network and the Swiss General Consent process were generally felt to have streamlined processes significantly. However, these logistics and data quality issues remain practical bottlenecks in Swiss health data sharing. 
Areas of legal uncertainty include privacy laws when sharing data internationally, questions of "who owns the data", inconsistencies created because the Swiss general consent is perceived as being implemented differently across institutions, and the definition and operationalisation of anonymisation and pseudo-anonymisation. Many participants desired to create a "culture of data sharing" and to recognise that data sharing is a process with many steps, not an event, requiring sustained effort and personnel. Some participants also stressed a desire to move away from data sharing and the current privacy focus towards processes that facilitate data access. CONCLUSIONS: Facilitating a data access culture in Switzerland may require legal clarifications, further education about the process, resources to support data sharing, and further investment in sustainable infrastructure by funders and institutions.


Subjects
Privacy, Humans, Information Dissemination, Qualitative Research, Switzerland
5.
AJOB Neurosci ; 15(2): 146-148, 2024.
Article in English | MEDLINE | ID: mdl-38568702
10.
AJOB Neurosci ; 15(2): 136-138, 2024.
Article in English | MEDLINE | ID: mdl-38568711
13.
BMC Health Serv Res ; 24(1): 439, 2024 Apr 08.
Article in English | MEDLINE | ID: mdl-38589922

ABSTRACT

BACKGROUND: Electronic health records (EHR) are becoming an integral part of the health system in many developed countries, though implementations and settings vary. Some countries have adopted an opt-out policy, in which patients are enrolled in the EHR system by default, while others apply an opt-in policy, in which patients must take action to join the system. Opt-in systems may see fewer active user requests for access, whereas in opt-out systems a notable percentage of users may passively retain access. Our research therefore explores the facilitators and barriers that explain EHR usage (i.e., actively accessing the EHR system) in two countries with opposing default settings, exemplified by France and Austria. METHODS: A qualitative exploratory approach using a semi-structured interview guideline was undertaken in both countries: 1) in Austria, with four homogeneously composed group discussions, and 2) in France, with 19 individual patient interviews. The data were collected from October 2020 to January 2021. RESULTS: Influencing factors were categorized into twelve subcategories. Patients had similar experiences in both countries with regard to all facilitating categories, for instance the role of health providers, awareness of EHR, and social norms. However, we highlight important differences between the two systems regarding hurdles impeding EHR usage, namely a lack of communication as well as of transparency or information security regarding EHR. CONCLUSION: Implementing additional safeguards to enhance privacy protection and supporting patients in improving their digital skills may help diminish perceived EHR-related barriers and improve patients' health and engagement in the long term.
PRACTICAL IMPLICATIONS: Understanding these differences and similarities will help in developing practical measures to tackle the problem of low EHR usage rates in the long run, a problem prevalent in countries with both types of EHR default settings.


Subjects
Communication, Electronic Health Records, Humans, Austria, Privacy, Patients
14.
Sci Data ; 11(1): 397, 2024 Apr 18.
Article in English | MEDLINE | ID: mdl-38637602

ABSTRACT

Modeling and predicting human mobility trajectories in urban areas is an essential task for applications including transportation modeling, disaster management, and urban planning. The recent availability of large-scale human movement data collected from mobile devices has enabled the development of complex human mobility prediction models. However, such methods are often trained and tested on different datasets, owing to the lack of open-source large-scale human mobility datasets amid privacy concerns, which makes transparent performance comparisons between methods difficult. To this end, we created YJMob100K, an open-source, anonymized, metropolitan-scale, longitudinal (90-day) dataset of the mobility trajectories of 100,000 individuals, built from mobile phone location data provided by Yahoo Japan Corporation (now LY Corporation). The location pings are spatially and temporally discretized, and the metropolitan area is undisclosed to protect users' privacy. The 90-day period comprises 75 days of business-as-usual and 15 days during an emergency, allowing human mobility predictability to be tested in both normal and anomalous situations.
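The spatial and temporal discretization described above can be sketched as follows. The cell size (roughly 500 m, i.e., about 0.005 degrees at mid-latitudes) and the 30-minute slots are illustrative assumptions, not details taken from the dataset description:

```python
from datetime import datetime

def discretize_ping(lat: float, lon: float, ts: datetime,
                    lat0: float, lon0: float,
                    cell_deg: float = 0.005, slot_min: int = 30):
    """Map a raw GPS ping to (cell_x, cell_y, time_slot) grid indices.

    (lat0, lon0) is the south-west corner of the grid; indices assume
    lat >= lat0 and lon >= lon0. slot_min = 30 gives 48 slots per day."""
    x = int((lon - lon0) / cell_deg)
    y = int((lat - lat0) / cell_deg)
    slot = (ts.hour * 60 + ts.minute) // slot_min  # slot within the day
    return x, y, slot
```

Discretizing pings this way is what allows the published trajectories to remain useful for prediction benchmarks while obscuring exact positions and timestamps.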


Subjects
Cell Phone, Humans, Movement, Cities, Privacy, Japan
15.
PLoS One ; 19(4): e0301897, 2024.
Article in English | MEDLINE | ID: mdl-38630709

ABSTRACT

With the continuous development of vehicular ad hoc network (VANET) security, using federated learning (FL) to deploy intrusion detection models in VANETs has attracted considerable attention. Compared to conventional centralized learning, FL keeps training data local, thus protecting privacy. However, sensitive information about the training data can still be inferred from the model parameters shared in FL. Differential privacy (DP) is a sophisticated technique for mitigating such attacks. A key challenge in applying DP to FL is that indiscriminately adding DP noise can degrade model accuracy, while perturbing many parameters also increases the privacy budget consumption and communication costs of detection models. To address this challenge, we propose FFIDS, an FL algorithm that integrates model parameter pruning with differential privacy. It employs a pruning technique based on the Fisher Information Matrix to reduce the privacy budget consumed per iteration while avoiding accuracy loss. Specifically, FFIDS evaluates parameter importance and prunes unimportant parameters to generate compact sub-models, while recording the position of each parameter in each sub-model. This reduces model size, lowering communication costs, and keeps accuracy stable. DP noise is then added to the sub-models. Because unimportant parameters are not perturbed, more of the budget can be reserved to retain important parameters over more iterations. Finally, the server can promptly recover the sub-models using the parameter position information and complete aggregation. Extensive experiments on two public datasets and two F2MD simulation datasets validate the utility and superior performance of FFIDS.
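The core FFIDS idea, importance-based pruning before DP noising, can be sketched as below. The diagonal Fisher approximation via squared gradients, the keep-fraction rule, and the function name are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def fisher_prune_and_noise(params, grads, keep_frac=0.5, sigma=0.1, rng=None):
    """Keep the most informative parameters and noise only those.

    The diagonal of the Fisher Information Matrix is approximated by
    squared gradients; unimportant parameters are pruned, Gaussian
    DP-style noise is added to the rest, and the kept positions are
    returned so a server could reassemble the sub-model."""
    rng = rng if rng is not None else np.random.default_rng(0)
    fisher = grads ** 2                          # diagonal Fisher approximation
    k = max(1, int(keep_frac * params.size))
    keep_idx = np.sort(np.argsort(fisher)[-k:])  # positions of kept parameters
    sub_model = params[keep_idx] + rng.normal(0.0, sigma, size=k)
    return keep_idx, sub_model
```

Noising only the kept parameters is what lets the privacy budget stretch over more training rounds: pruned coordinates consume no budget because they are never released.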


Subjects
Mustelidae, Privacy, Animals, Learning, Algorithms, Budgets, Communication
16.
IEEE Trans Image Process ; 33: 2714-2729, 2024.
Article in English | MEDLINE | ID: mdl-38557629

ABSTRACT

Billions of people share images from their daily lives on social media every day. However, their biometric information (e.g., fingerprints) can easily be stolen from these images. Because fingerprints act as a lifelong individual biometric password, the threat of fingerprint leakage from social media has created a strong desire to anonymize shared images while maintaining image quality. To guard against fingerprint leakage, adversarial attacks, which add imperceptible perturbations to fingerprint images, have emerged as a feasible solution. However, existing works of this kind either transfer poorly to black-box models or give images an unnatural appearance. Motivated by the visual perception hierarchy (i.e., high-level perception exploits model-shared semantics that transfer well across models, while low-level perception extracts primitive stimuli that produce high visual sensitivity when a suspicious stimulus appears), we propose FingerSafe, a hierarchical perceptual protective noise injection framework that addresses both problems. For black-box transferability, we inject protective noise into the fingerprint orientation field to perturb the model-shared high-level semantics (i.e., fingerprint ridges). For visual naturalness, we suppress the low-level local contrast stimulus by regularizing the response of the lateral geniculate nucleus. FingerSafe is the first to provide feasible fingerprint protection in both digital (up to 94.12%) and realistic scenarios (Twitter and Facebook, up to 68.75%). Our code can be found at https://github.com/nlsde-safety-team/FingerSafe.


Subjects
Social Media, Humans, Dermatoglyphics, Privacy, Visual Perception
17.
PLoS One ; 19(4): e0297534, 2024.
Article in English | MEDLINE | ID: mdl-38635816

ABSTRACT

The secret keys produced by current image cryptosystems that rely on chaotic sequences exhibit a direct correlation with the size of the image: as the image dimensions grow, generating the long chaotic sequences needed for encryption and decryption becomes more computationally intensive. A further common problem in existing image encryption schemes is the trade-off between privacy and efficiency: some lightweight schemes reveal patterns in encrypted images, while others impose heavy computational burdens during encryption and decryption because they require long chaotic sequences. In this study, we introduce a lightweight image encryption scheme that partitions the image into uniformly sized tiles and generates a chaotic sequence per tile. The chaotic sequence therefore needs only to match the tile size, which is significantly smaller than the original image, alleviating the processing burden of generating sequences as long as the image itself. The results confirm that our proposed scheme is lightweight and secure compared to the latest state-of-the-art image encryption schemes. Additionally, sensitivity analysis, with a UACI value of 33.48 and an NPCR value of 99.96, demonstrates that the proposed technique resists differential attacks.
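The tile-wise idea can be illustrated with a toy XOR cipher driven by the logistic map. This is a sketch of the general approach only: the choice of map, its parameters, and the per-tile seeding rule are assumptions, not the paper's scheme, and a toy like this offers no real security:

```python
import numpy as np

def logistic_keystream(n: int, x0: float, r: float = 3.99) -> np.ndarray:
    """Generate n keystream bytes from the logistic map x -> r*x*(1-x)."""
    x, out = x0, np.empty(n)
    for i in range(n):
        x = r * x * (1 - x)
        out[i] = x
    return (out * 255).astype(np.uint8)

def xor_tiles(img: np.ndarray, tile: int = 8, key: float = 0.54321) -> np.ndarray:
    """Tile-wise XOR cipher: each keystream spans one tile, so the chaotic
    sequence stays small regardless of image size. Because XOR is its own
    inverse, the same function both encrypts and decrypts."""
    h, w = img.shape
    out = img.copy()
    idx = 0
    for i in range(0, h, tile):
        for j in range(0, w, tile):
            block = out[i:i + tile, j:j + tile]
            seed = (key + 0.0007 * idx) % 1.0  # per-tile seed (illustrative)
            ks = logistic_keystream(block.size, seed).reshape(block.shape)
            out[i:i + tile, j:j + tile] = block ^ ks
            idx += 1
    return out
```

Varying the seed per tile avoids the ECB-like pattern leakage that a single reused keystream would cause, which is exactly the privacy/efficiency compromise the abstract describes.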


Subjects
Privacy, Psychological Resilience
18.
PLoS One ; 19(4): e0297958, 2024.
Article in English | MEDLINE | ID: mdl-38625866

ABSTRACT

It is well known that any classification model performs effectively only if the datasets used for training and testing satisfy certain requirements: the larger, more balanced, and more representative the dataset, the more one can trust the model's effectiveness and, consequently, the obtained results. Unfortunately, large anonymous datasets are generally not publicly available in biomedical applications, especially those dealing with pathological human face images. This makes deep-learning-based approaches challenging to deploy and makes some published results difficult to reproduce or verify. In this paper, we propose an efficient method to generate a realistic, anonymous, synthetic dataset of human faces with attributes related to acne disorders at three distinct severity levels (Mild, Moderate, and Severe). Our approach starts from a small dataset of facial acne images and leverages generative techniques to augment and diversify it, ensuring comprehensive coverage of acne severity levels while keeping the synthetic data anonymous and realistic. To this end, a hierarchical StyleGAN-based algorithm trained at the distinct severity levels is used. Moreover, using generative adversarial networks for augmentation circumvents potential privacy and legal concerns associated with acquiring medical datasets: the generated data are synthetic and contain no actual subjects, ensuring compliance with privacy regulations. To evaluate the proposed scheme, we consider a CNN-based classification system trained on the generated synthetic acneic face images and tested on authentic face images, achieving an accuracy of 97.6% with InceptionResNetv2.
As a result, this work allows the scientific community to employ the generated synthetic dataset in any data-processing application without legal or ethical restrictions. The approach can also be extended to other applications requiring the generation of synthetic medical images.


Subjects
Acne Vulgaris, Humans, Algorithms, Privacy, Trust
19.
Sci Eng Ethics ; 30(2): 13, 2024 Apr 04.
Article in English | MEDLINE | ID: mdl-38575812

ABSTRACT

Controversies surrounding social media platforms have provided opportunities for institutional reflexivity amongst users and regulators on how to understand and govern platforms. Amidst contestation, platform companies have continued to enact projects that draw upon existing modes of privatized governance. We investigate how social media companies have attempted to achieve closure by continuing to set the terms around platform governance. We investigate two projects implemented by Facebook (Meta)-authenticity regulation and privacy controls-in response to the Russian Interference and Cambridge Analytica controversies surrounding the 2016 U.S. Presidential Election. Drawing on Goffman's metaphor of stage management, we analyze the techniques deployed by Facebook to reinforce a division between what is visible and invisible to the user experience. These platform governance projects propose to act upon front-stage data relations: information that users can see from other users-whether that is content that users can see from "bad actors", or information that other users can see about oneself. At the same time, these projects relegate back-stage data relations-information flows between users constituted by recommendation and targeted advertising systems-to invisibility and inaction. As such, Facebook renders the user experience actionable for governance, while foreclosing governance of back-stage data relations central to the economic value of the platform. As social media companies continue to perform platform governance projects following controversies, our paper invites reflection on the politics of these projects. By destabilizing the boundaries drawn by platform companies, we open space for continuous reflexivity on how platforms should be understood and governed.


Subjects
Social Media, Humans, Politics, Privacy