Results 1 - 20 of 8,570
1.
J Clin Ethics ; 35(2): 85-92, 2024.
Article in English | MEDLINE | ID: mdl-38728697

ABSTRACT

Despite broad ethical consensus supporting developmentally appropriate disclosure of health information to older children and adolescents, cases in which parents and caregivers request nondisclosure continue to pose moral dilemmas for clinicians. State laws vary considerably regarding adolescents' rights to autonomy, privacy, and confidentiality, with many states not specifically addressing adolescents' right to their own healthcare information. The requirements of the 21st Century Cures Act have raised important ethical concerns for pediatricians and adolescent healthcare professionals regarding the protection of adolescent privacy and confidentiality, given requirements that chart notes and results be made readily available to patients via electronic portals. Less addressed have been the implications of the act for adolescents' access to their health information, since many healthcare systems' electronic portals are available to patients beginning at age 12, sometimes requiring that the patients themselves authorize their parents' access to the same information. In this article, we present a challenging case of protracted disagreement about an adolescent's right to honest information regarding his devastating prognosis. We then review the legal framework governing adolescents' rights to their own healthcare information, the limitations of ethics consultation to resolve such disputes, and the potential for the Cures Act's impact on electronic medical record systems to provide one form of resolution. We conclude that although parents in cases like the one presented here have the legal right to consent to medical treatment on their children's behalf, they do not have a corresponding right to direct the withholding of medical information from the patient.


Subjects
Confidentiality, Parents, Humans, Adolescent, Confidentiality/legislation & jurisprudence, Confidentiality/ethics, Male, United States, Disclosure/legislation & jurisprudence, Disclosure/ethics, Personal Autonomy, Parental Consent/legislation & jurisprudence, Parental Consent/ethics, Patient Rights/legislation & jurisprudence, Child, Privacy/legislation & jurisprudence, Electronic Health Records/ethics, Electronic Health Records/legislation & jurisprudence, Access to Information/legislation & jurisprudence, Access to Information/ethics
2.
Cien Saude Colet ; 29(5): e15552022, 2024 May.
Article in English | MEDLINE | ID: mdl-38747777

ABSTRACT

The conceptions, values, and experiences of students from public and private high schools in two Brazilian state capitals, Vitória-ES and Campo Grande-MS, were analyzed regarding digital control and monitoring between intimate partners and the unauthorized exposure of intimate material on the Internet. Data from eight focus groups with 77 adolescents were submitted to thematic analysis, complemented by a questionnaire answered by a sample of 530 students. Most students affirmed that they do not tolerate the control/monitoring and unauthorized exposure of intimate materials but recognized that such activity is routine. They point out jealousy, insecurity, and "curiosity" as their main reasons. They detail the various dynamics of unauthorized exposure of intimate material and see it as a severe invasion of privacy and a breach of trust between partners. Their accounts suggest that such practices are gender violence. They also reveal that each platform has its cultural appropriation and that platforms used by the family, such as Facebook, cause more significant damage to the victim's reputation.


Subjects
Focus Groups, Sexual Partners, Students, Humans, Brazil, Adolescent, Female, Male, Surveys and Questionnaires, Students/psychology, Sexual Partners/psychology, Internet, Intimate Partner Violence/statistics & numerical data, Privacy, Gender-Based Violence, Interpersonal Relations, Jealousy, Schools, Young Adult
3.
Sci Eng Ethics ; 30(3): 19, 2024 May 15.
Article in English | MEDLINE | ID: mdl-38748085

ABSTRACT

This study investigated people's ethical concerns about surveillance technology. By adopting the spectrum of technological utopian and dystopian narratives, it explored how people perceive a society constructed through the compulsory use of surveillance technology. The study empirically examined the anonymous online expression of attitudes toward the society-wide, compulsory adoption of a contact tracing app that affected almost every aspect of all people's everyday lives at a societal level. By applying the structural topic modeling approach to analyze comments on four Hong Kong anonymous discussion forums, topics concerning the technological utopian, dystopian, and pragmatic views on the surveillance app were discovered. The findings showed that people with a technological utopian view on this app believed that the implementation of compulsory app use can facilitate social good and maintain social order. In contrast, individuals who had a technological dystopian view expressed privacy concerns and distrust of this surveillance technology. Techno-pragmatists took a balanced approach and evaluated its implementation practically.


Subjects
Attitude, Mobile Applications, Privacy, Humans, Hong Kong, Contact Tracing/ethics, Contact Tracing/methods, Trust, Confidentiality, Technology/ethics, Internet, Female, Male, Adult, Narration
4.
JMIR Nurs ; 7: e53592, 2024 May 09.
Article in English | MEDLINE | ID: mdl-38723253

ABSTRACT

BACKGROUND: Health monitoring technologies help patients and older adults live better and stay longer in their own homes. However, there are many factors influencing their adoption of these technologies. Privacy is one of them. OBJECTIVE: The aim of this study was to provide an overview of the privacy barriers in health monitoring from current research, analyze the factors that influence patients to adopt assisted living technologies, provide a social psychological explanation, and propose suggestions for mitigating these barriers in future research. METHODS: A scoping review was conducted, and web-based literature databases were searched for published studies to explore the available research on privacy barriers in a health monitoring environment. RESULTS: In total, 65 articles met the inclusion criteria and were selected and analyzed. Contradictory findings and results were found in some of the included articles. We analyzed the contradictory findings and provided possible explanations for current barriers, such as demographic differences, information asymmetry, researchers' conceptual confusion, inducible experiment design and its psychological impacts on participants, researchers' confirmation bias, and a lack of distinction among different user roles. We found that few exploratory studies have been conducted so far to collect privacy-related legal norms in a health monitoring environment. Four research questions related to privacy barriers were raised, and an attempt was made to provide answers. CONCLUSIONS: This review highlights the problems of some research, summarizes patients' privacy concerns and legal concerns from the studies conducted, and lists the factors that should be considered when gathering and analyzing people's privacy attitudes.


Subjects
Privacy, Humans, Privacy/legislation & jurisprudence, Monitoring, Physiologic/methods
5.
Pediatr Crit Care Med ; 25(5): e258-e262, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38695704

ABSTRACT

Caring for children and their families at the end-of-life is an essential but challenging aspect of care in the PICU. During and following a child's death, families often report a simultaneous need for protected privacy and ongoing supportive presence from staff. Balancing these seemingly paradoxical needs can be difficult for PICU staff and can often lead to the family feeling intruded upon or abandoned during their end-of-life experience. In this "Pediatric Critical Care Medicine Perspectives" piece, we reframe provision of privacy at the end-of-life in the PICU and describe an essential principle that aims to help the interprofessional PICU team simultaneously meet these two opposing family needs: "Supported Privacy." In addition, we offer concrete recommendations to actualize "Supported Privacy" in the PICU, focusing on environmental considerations, practical needs, and emotional responses. By incorporating the principles of "Supported Privacy" into end-of-life care practices, clinicians can support the delivery of high-quality care that meets the needs of children and families navigating the challenges and supports of end-of-life in the PICU.


Subjects
Intensive Care Units, Pediatric, Privacy, Terminal Care, Humans, Terminal Care/ethics, Terminal Care/psychology, Intensive Care Units, Pediatric/organization & administration, Child, Professional-Family Relations, Family/psychology
6.
Sci Adv ; 10(18): eadl2524, 2024 May 03.
Article in English | MEDLINE | ID: mdl-38691613

ABSTRACT

The U.S. Census Bureau faces a difficult trade-off between the accuracy of Census statistics and the protection of individual information. We conduct an independent evaluation of bias and noise induced by the Bureau's two main disclosure avoidance systems: the TopDown algorithm used for the 2020 Census and the swapping algorithm implemented for the three previous Censuses. Our evaluation leverages the Noisy Measurement File (NMF) as well as two independent runs of the TopDown algorithm applied to the 2010 decennial Census. We find that the NMF contains too much noise to be directly useful without measurement error modeling, especially for Hispanic and multiracial populations. TopDown's postprocessing reduces the NMF noise and produces data whose accuracy is similar to that of swapping. While the estimated errors for both TopDown and swapping algorithms are generally no greater than other sources of Census error, they can be relatively substantial for geographies with small total populations.
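The disclosure-avoidance trade-off evaluated above comes from injecting calibrated random noise into true counts before publication. A minimal sketch of that building block, using the classic Laplace mechanism for a count query, is shown below; this is an illustration of the general idea only, not the Bureau's actual TopDown implementation, which uses different noise distributions and extensive postprocessing.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Draw Laplace(0, scale) noise via inverse-transform sampling."""
    u = random.random()
    while u == 0.0:          # guard against log(0)
        u = random.random()
    u -= 0.5                  # now u is in (-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def noisy_count(true_count: int, epsilon: float) -> float:
    """Release a count with epsilon-differential privacy.

    A count query has L1 sensitivity 1 (adding or removing one person
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon suffices.
    """
    return true_count + laplace_noise(1.0 / epsilon)

# The same absolute noise distorts small populations far more in
# relative terms, echoing the finding above about geographies with
# small total populations.
random.seed(0)
print(noisy_count(12, epsilon=0.5))      # noise scale 2: large relative error
print(noisy_count(120000, epsilon=0.5))  # same noise scale: tiny relative error
```

The `epsilon` values here are illustrative; smaller budgets mean larger noise scales and therefore noisier published statistics.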


Subjects
Algorithms, Bias, Censuses, United States, Humans, Privacy
7.
Trends Genet ; 40(5): 383-386, 2024 May.
Article in English | MEDLINE | ID: mdl-38637270

ABSTRACT

Artificial intelligence (AI) in omics analysis raises privacy threats to patients. Here, we briefly discuss risk factors to patient privacy in data sharing, model training, and release, as well as methods to safeguard and evaluate patient privacy in AI-driven omics methods.


Subjects
Artificial Intelligence, Genomics, Humans, Genomics/methods, Privacy, Information Dissemination
8.
JAMA ; 331(18): 1527-1528, 2024 May 14.
Article in English | MEDLINE | ID: mdl-38619831

ABSTRACT

This Viewpoint summarizes existing federal regulations aimed at protecting research data, describes the challenges of enforcing these regulations, and discusses how evolving privacy technologies could be used to reduce health disparities and advance health equity among pregnant and LGBTQ+ research participants.


Subjects
Confidentiality, Research Subjects, Sexual and Gender Minorities, Humans, Female, Pregnancy, Confidentiality/legislation & jurisprudence, Privacy/legislation & jurisprudence, United States, Informed Consent
9.
Sci Data ; 11(1): 397, 2024 Apr 18.
Article in English | MEDLINE | ID: mdl-38637602

ABSTRACT

Modeling and predicting human mobility trajectories in urban areas is an essential task for various applications including transportation modeling, disaster management, and urban planning. The recent availability of large-scale human movement data collected from mobile devices has enabled the development of complex human mobility prediction models. However, human mobility prediction methods are often trained and tested on different datasets, due to the lack of open-source large-scale human mobility datasets amid privacy concerns, posing a challenge towards conducting transparent performance comparisons between methods. To this end, we created an open-source, anonymized, metropolitan scale, and longitudinal (90 days) dataset of 100,000 individuals' human mobility trajectories, using mobile phone location data provided by Yahoo Japan Corporation (now LY Corporation), named YJMob100K. The location pings are spatially and temporally discretized, and the metropolitan area is undisclosed to protect users' privacy. The 90-day period is composed of 75 days of business-as-usual and 15 days during an emergency, to test human mobility predictability during both normal and anomalous situations.
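The spatial and temporal discretization described above (raw GPS pings mapped to grid cells and time slots) can be sketched as follows. The grid resolution, time-bin width, and function name here are illustrative assumptions, not the dataset's actual parameters.

```python
from datetime import datetime

def discretize_ping(lat: float, lon: float, ts: datetime,
                    lat0: float, lon0: float,
                    cells_per_deg: int = 200, bin_minutes: int = 30):
    """Map a raw GPS ping to a (grid_x, grid_y, time_slot) triple.

    lat0/lon0: south-west corner of the (undisclosed) study area.
    cells_per_deg: grid cells per degree (200 -> 0.005-degree cells).
    bin_minutes: width of each time slot within the day.
    """
    x = int((lon - lon0) * cells_per_deg)
    y = int((lat - lat0) * cells_per_deg)
    slot = (ts.hour * 60 + ts.minute) // bin_minutes
    return x, y, slot

# Example: a ping inside a hypothetical metropolitan bounding box,
# recorded at 09:45 (the 20th half-hour slot of the day, index 19).
cell = discretize_ping(35.75, 139.75, datetime(2024, 1, 1, 9, 45),
                       lat0=35.5, lon0=139.5)
print(cell)  # → (50, 50, 19)
```

Because only cell indices and slot numbers are released, the exact coordinates and the metropolitan area itself stay hidden, which is the privacy property the abstract describes.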


Subjects
Cell Phone, Movement, Humans, Cities, Japan, Privacy
10.
Sci Eng Ethics ; 30(2): 13, 2024 Apr 04.
Article in English | MEDLINE | ID: mdl-38575812

ABSTRACT

Controversies surrounding social media platforms have provided opportunities for institutional reflexivity amongst users and regulators on how to understand and govern platforms. Amidst contestation, platform companies have continued to enact projects that draw upon existing modes of privatized governance. We investigate how social media companies have attempted to achieve closure by continuing to set the terms around platform governance. We investigate two projects implemented by Facebook (Meta)-authenticity regulation and privacy controls-in response to the Russian Interference and Cambridge Analytica controversies surrounding the 2016 U.S. Presidential Election. Drawing on Goffman's metaphor of stage management, we analyze the techniques deployed by Facebook to reinforce a division between what is visible and invisible to the user experience. These platform governance projects propose to act upon front-stage data relations: information that users can see from other users-whether that is content that users can see from "bad actors", or information that other users can see about oneself. At the same time, these projects relegate back-stage data relations-information flows between users constituted by recommendation and targeted advertising systems-to invisibility and inaction. As such, Facebook renders the user experience actionable for governance, while foreclosing governance of back-stage data relations central to the economic value of the platform. As social media companies continue to perform platform governance projects following controversies, our paper invites reflection on the politics of these projects. By destabilizing the boundaries drawn by platform companies, we open space for continuous reflexivity on how platforms should be understood and governed.


Subjects
Social Media, Humans, Politics, Privacy
11.
Sci Rep ; 14(1): 8690, 2024 04 15.
Article in English | MEDLINE | ID: mdl-38622216

ABSTRACT

In the era of artificial intelligence, privacy empowerment illusion has become a crucial means for digital enterprises and platforms to "manipulate" users and create an illusion of control, and it has become a pressing concern for current research. However, existing studies are limited in their perspectives and methodologies, making it challenging to fully explain why users express concerns about privacy empowerment illusion yet repeatedly disclose their personal information. This study combines the associative-propositional evaluation (APE) model and cognitive load theory, using event-related potential (ERP) technology to investigate how the comprehensibility and interpretability of privacy empowerment illusion cues affect users' immediate attitudes and privacy disclosure behaviours, mechanisms mediated by psychological processing and cognitive load differences. Behavioural results indicate that, for privacy empowerment illusion cues with low comprehensibility, users are more inclined to disclose their private information under high interpretability than under low interpretability. EEG results show that, for cues with low comprehensibility, high interpretability induces greater P2 amplitudes than low interpretability, while low interpretability induces greater N2 amplitudes than high interpretability. This study extends the scope of the APE model and cognitive load theory in privacy research, providing new insights into privacy attitudes and offering a framework through which digital enterprises can better understand users' genuine privacy attitudes and immediate reactions in privacy empowerment illusion situations. This understanding can help strengthen user privacy protection and improve the overall online experience.


Subjects
Hominidae, Illusions, Humans, Animals, Privacy/psychology, Disclosure, Cues, Artificial Intelligence, Cognition
12.
PLoS One ; 19(4): e0297958, 2024.
Article in English | MEDLINE | ID: mdl-38625866

ABSTRACT

It is well known that the performance of any classification model depends on the datasets used for training and testing satisfying some specific requirements. In other words, the larger, more balanced, and more representative the dataset, the more one can trust the proposed model's effectiveness and, consequently, the results obtained. Unfortunately, large anonymous datasets are generally not publicly available in biomedical applications, especially those dealing with pathological human face images. This makes deep-learning-based approaches challenging to deploy and published results difficult to reproduce or verify. In this paper, we propose an efficient method to generate a realistic, anonymous, synthetic dataset of human faces, focusing on attributes related to acne disorders at three distinct levels of severity (Mild, Moderate, and Severe). Notably, our approach starts from a small dataset of facial acne images and leverages generative techniques to augment and diversify it, ensuring comprehensive coverage of acne severity levels while maintaining anonymity and realism in the synthetic data. To this end, a hierarchical StyleGAN-based algorithm trained at distinct levels is used. Moreover, using generative adversarial networks for augmentation circumvents potential privacy or legal concerns associated with acquiring medical datasets: because the generated data are synthetic, no actual subjects are present, ensuring compliance with privacy regulations and legal considerations. To evaluate the proposed scheme, we consider a CNN-based classification system trained on the generated synthetic acneic face images and tested on authentic face images, achieving an accuracy of 97.6% with InceptionResNetv2. As a result, this work allows the scientific community to employ the generated synthetic dataset in any data processing application without legal or ethical restrictions. The approach can also be extended to other applications requiring the generation of synthetic medical images.


Subjects
Acne Vulgaris, Humans, Algorithms, Privacy, Trust
13.
PLoS One ; 19(4): e0297534, 2024.
Article in English | MEDLINE | ID: mdl-38635816

ABSTRACT

The secret keys produced by current image cryptosystems, which rely on chaotic sequences, exhibit a direct correlation with the size of the image: as the image dimensions expand, generating the chaotic sequences required for encryption and decryption becomes more computationally intensive. A second common problem in existing image encryption schemes is the compromise between privacy and efficiency: some lightweight schemes reveal patterns in encrypted images, while others impose heavy computational burdens during encryption and decryption because they need large chaotic sequences. In this study, we introduce a lightweight image encryption scheme that partitions the image into uniformly sized tiles and generates a chaotic sequence per tile. This removes the need to create chaotic sequences as long as the original image; only sequences of the tile size, which is significantly smaller, are required, alleviating the processing burden. The results confirm that our proposed scheme is lightweight and secure compared with the latest state-of-the-art image encryption schemes. Additionally, sensitivity analysis demonstrates that the proposed image encryption technique, with a UACI of 33.48% and an NPCR of 99.96%, resists differential attacks.
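The two sensitivity metrics reported, NPCR (number of pixels change rate) and UACI (unified average changing intensity), are standard measures of how strongly two cipher images differ after a one-pixel change in the plaintext. A minimal implementation over 8-bit grayscale images stored as nested lists (this is the standard definition of the metrics, not code from the paper):

```python
def npcr_uaci(c1, c2):
    """Compute (NPCR, UACI) in percent between two equal-sized 8-bit images.

    NPCR: percentage of pixel positions whose values differ.
    UACI: mean absolute pixel difference normalized by the maximum
    intensity 255, expressed as a percentage.
    """
    total = diff_count = 0
    intensity_sum = 0.0
    for row1, row2 in zip(c1, c2):
        for p1, p2 in zip(row1, row2):
            total += 1
            if p1 != p2:
                diff_count += 1
            intensity_sum += abs(p1 - p2) / 255.0
    npcr = 100.0 * diff_count / total
    uaci = 100.0 * intensity_sum / total
    return npcr, uaci

# Two toy 2x2 "cipher images": two of the four pixels differ.
a = [[0, 255], [128, 64]]
b = [[0, 0], [130, 64]]
print(npcr_uaci(a, b))  # NPCR is 50.0 (2 of 4 pixels differ)
```

For a strong cipher, NPCR is expected to approach 100% and UACI roughly 33%, which is why the reported values of 99.96 and 33.48 indicate resistance to differential attacks.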


Subjects
Privacy, Resilience, Psychological
14.
Swiss Med Wkly ; 154: 3538, 2024 Jan 22.
Article in English | MEDLINE | ID: mdl-38579329

ABSTRACT

BACKGROUND: While health data sharing for research purposes is strongly supported in principle, it can be challenging to implement in practice. Little is known about the actual bottlenecks to health data sharing in Switzerland. AIMS OF THE STUDY: This study aimed to assess the obstacles to Swiss health data sharing, including legal, ethical and logistical bottlenecks. METHODS: We identified 37 key stakeholders in data sharing via the Swiss Personalised Health Network ecosystem, defined as being an expert on sharing sensitive health data for research purposes at a Swiss university hospital (or a Swiss disease cohort) or being a stakeholder in data sharing at a public or private institution that uses such data. We conducted semi-structured interviews, which were transcribed, translated when necessary, and de-identified. The entire research team discussed the transcripts and notes taken during each interview before an inductive coding process occurred. RESULTS: Eleven semi-structured interviews were conducted (primarily in English) with 17 individuals representing lawyers, data protection officers, ethics committee members, scientists, project managers, bioinformaticians, clinical trials unit members, and biobank stakeholders. Most respondents felt that it was not the actual data transfer that was the bottleneck but rather the processes and systems around it, which were considered time-intensive and confusing. The templates developed by the Swiss Personalised Health Network and the Swiss General Consent process were generally felt to have streamlined processes significantly. However, logistics and data quality issues remain practical bottlenecks in Swiss health data sharing. Areas of legal uncertainty include privacy laws when sharing data internationally, questions of "who owns the data", inconsistencies created because the Swiss general consent is perceived as being implemented differently across institutions, and the definition and operationalisation of anonymisation and pseudo-anonymisation. Many participants wished to create a "culture of data sharing" and to recognise that data sharing is a process with many steps, not an event, that requires sustainability efforts and personnel. Some participants also stressed a desire to move away from the current privacy focus towards processes that facilitate data access. CONCLUSIONS: Facilitating a data access culture in Switzerland may require legal clarifications, further education about the process, resources to support data sharing, and further investment in sustainable infrastructure by funders and institutions.


Subjects
Privacy, Humans, Information Dissemination, Qualitative Research, Switzerland
15.
Rev Med Suisse ; 20(870): 808-812, 2024 Apr 17.
Article in French | MEDLINE | ID: mdl-38630042

ABSTRACT

Health and risk of disease are determined by exposure to the physical, socio-economic, and political environment and to this has been added exposure to the digital environment. Our increasingly digital lives have major implications for people's health and its monitoring, as well as for prevention and care. Digital health, which encompasses the use of health applications, connected devices and artificial intelligence medical tools, is transforming medical and healthcare practices. Used properly, it could facilitate patient-centered, inter-professional and data-driven care. However, its implementation raises major concerns and ethical issues, particularly in relation to privacy, equity, and the therapeutic relationship.




Subjects
Artificial Intelligence, Population Health, Humans, Digital Health, Physical Examination, Privacy
16.
PLoS One ; 19(4): e0301897, 2024.
Article in English | MEDLINE | ID: mdl-38630709

ABSTRACT

With the continuous development of vehicular ad hoc networks (VANET) security, using federated learning (FL) to deploy intrusion detection models in VANET has attracted considerable attention. Compared to conventional centralized learning, FL retains local training private data, thus protecting privacy. However, sensitive information about the training data can still be inferred from the shared model parameters in FL. Differential privacy (DP) is a sophisticated technique for mitigating such attacks. A key challenge of implementing DP in FL is that non-selectively adding DP noise can adversely affect model accuracy, while having many perturbed parameters also increases privacy budget consumption and communication costs for detection models. To address this challenge, we propose FFIDS, an FL algorithm integrating model parameter pruning with differential privacy. It employs a parameter pruning technique based on the Fisher Information Matrix to reduce the privacy budget consumption per iteration while ensuring no accuracy loss. Specifically, FFIDS evaluates parameter importance and prunes unimportant parameters to generate compact sub-models, while recording the positions of parameters in each sub-model. This not only reduces model size to lower communication costs, but also maintains accuracy stability. DP noise is then added to the sub-models. By not perturbing unimportant parameters, more budget can be reserved to retain important parameters for more iterations. Finally, the server can promptly recover the sub-models using the parameter position information and complete aggregation. Extensive experiments on two public datasets and two F2MD simulation datasets have validated the utility and superior performance of the FFIDS algorithm.
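The core idea above — score parameters by Fisher information, keep only the important ones, perturb only those, and let the server reassemble the sub-model from recorded positions — can be sketched as follows. This is an illustrative toy, not the authors' FFIDS implementation; all function names, the diagonal Fisher estimate, and the Gaussian noise are simplifying assumptions.

```python
import random

def fisher_scores(per_sample_grads):
    """Diagonal Fisher estimate: mean squared gradient per parameter."""
    n = len(per_sample_grads)
    dim = len(per_sample_grads[0])
    return [sum(g[i] ** 2 for g in per_sample_grads) / n for i in range(dim)]

def prune_and_perturb(params, scores, keep_ratio, noise_std, rng):
    """Keep the highest-Fisher parameters and add Gaussian noise to them.

    Returns (positions, noisy_values): the compact sub-model plus the
    index list the server needs to place values back into the full model.
    Unimportant parameters are never perturbed, so no budget is spent
    on them.
    """
    k = max(1, int(len(params) * keep_ratio))
    positions = sorted(range(len(params)),
                       key=lambda i: scores[i], reverse=True)[:k]
    positions.sort()
    noisy_values = [params[i] + rng.gauss(0.0, noise_std) for i in positions]
    return positions, noisy_values

def reassemble(full_size, positions, values, fill=0.0):
    """Server side: expand the sub-model back to a full parameter vector."""
    out = [fill] * full_size
    for pos, val in zip(positions, values):
        out[pos] = val
    return out

rng = random.Random(42)
params = [0.8, -0.1, 0.5, 0.02, -0.6]
grads = [[0.9, 0.01, 0.4, 0.001, -0.7],
         [1.1, 0.02, 0.5, 0.002, -0.6]]
scores = fisher_scores(grads)
pos, vals = prune_and_perturb(params, scores, keep_ratio=0.6,
                              noise_std=0.01, rng=rng)
print(pos)  # indices of the three highest-Fisher parameters
print(reassemble(len(params), pos, vals))
```

Only the kept positions and their noisy values would be uploaded, which is how pruning simultaneously lowers communication cost and per-iteration privacy budget consumption.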


Subjects
Mustelidae, Privacy, Animals, Learning, Algorithms, Budgets, Communication
17.
AJOB Neurosci ; 15(2): 146-148, 2024.
Article in English | MEDLINE | ID: mdl-38568702