Results 1 - 20 of 251
1.
J Med Internet Res ; 26: e58939, 2024 Sep 09.
Article in English | MEDLINE | ID: mdl-39250796

ABSTRACT

Digital mental health interventions are routinely integrated into mental health services internationally and can contribute to reducing the global mental health treatment gap identified by the World Health Organization. Research teams designing and delivering evaluations frequently invest substantial effort in deliberating on ethical and legal challenges around digital mental health interventions. In this article, we reflect on our own research experience with digital mental health intervention design and evaluation to identify 8 of the most critical challenges that we or others have faced, and that have ethical or legal consequences. These include: (1) harm caused by online recruitment work; (2) monitoring of intervention safety; (3) exclusion of specific demographic or clinical groups; (4) inadequate robustness of effectiveness and cost-effectiveness findings; (5) adequately conceptualizing and supporting engagement and adherence; (6) structural barriers to implementation; (7) data protection and intellectual property; and (8) regulatory ambiguity relating to digital mental health interventions that are medical devices. As we describe these challenges, we have highlighted serious consequences that can or have occurred, such as substantial delays to studies if regulations around Software as a Medical Device (SaMD) are not fully understood, or if regulations change substantially during the study lifecycle. Collectively, the challenges we have identified highlight a substantial body of required knowledge and expertise, either within the team or through access to external experts. Ensuring access to knowledge requires careful planning and adequate financial resources (for example, paying public contributors to engage in debate on critical ethical issues or paying for legal opinions on regulatory issues). Access to such resources can be planned for on a per-study basis and enabled through funding proposals. However, organizations regularly engaged in the development and evaluation of digital mental health interventions should consider creating or supporting structures such as advisory groups that can retain necessary competencies, such as in medical device regulation.


Subject(s)
Mental Health , Humans , Retrospective Studies , Mental Health Services/legislation & jurisprudence , Mental Health Services/ethics , Telemedicine/ethics , Telemedicine/legislation & jurisprudence , Digital Health
2.
Medicine (Baltimore) ; 103(33): e39136, 2024 Aug 16.
Article in English | MEDLINE | ID: mdl-39151529

ABSTRACT

The accelerated adoption of digital health technologies in recent decades has raised important ethical and safety concerns. Despite the power and usefulness of digital health technologies, addressing safety and ethical considerations needs to take greater prominence. This review paper focuses on ethical and safety facets, including health technology-related risks, risks to users' safety and well-being, security and privacy concerns, and risks of reduced transparency and diminished accountability associated with the use of digital health technologies. To maximize the potential benefits of health technology, awareness of safety risks and ethical concerns should be increased, and appropriate strategies and measures should be considered.


Subject(s)
Digital Health , Digital Technology , Humans , Computer Security/ethics , Confidentiality/ethics , Digital Health/ethics , Digital Technology/ethics , Patient Safety , Telemedicine/ethics
3.
Cuad Bioet ; 35(114): 125-141, 2024.
Article in Spanish | MEDLINE | ID: mdl-39135282

ABSTRACT

During the COVID-19 pandemic, bioethical concerns were raised and there was even a "resurgence of bioethics." In this work, we review the scientific articles published by Spanish authors on bioethical issues in the three years following the declaration of the pandemic. Seventy publications were selected. Among them, the topic that generated the most debate was prioritization in the use of health resources. A consensus was reached ruling out age as a sole exclusion criterion for healthcare or for possible admission to the ICU, and the importance of taking special care of the most vulnerable and adapting care to the conditions of each patient, without excluding anyone, was recalled. Other relevant topics were the contrast between autonomy and the common good, the immune passport, vaccination, rigor in research and the publication of results, the professionalism of health personnel, misinformation, care in nursing homes, telemedicine, and the importance of the exercise of virtues. After the experience of both vulnerability and the need to exercise solidarity, many works express the hope and the possibility of emerging from the pandemic better.


Subject(s)
Bioethical Issues , COVID-19 , Pandemics , COVID-19/epidemiology , Humans , Spain , Pandemics/ethics , Personal Autonomy , SARS-CoV-2 , Telemedicine/ethics , Vulnerable Populations , Age Factors , Vaccination/ethics , Nursing Homes/ethics
4.
Cuad Bioet ; 35(114): 143-155, 2024.
Article in Spanish | MEDLINE | ID: mdl-39135283

ABSTRACT

The digitization of mental health enables significant shifts in clinical practice by harnessing vast amounts of data derived from the use of apps and wearables to enhance medical research, patient care, and health system efficiency. However, this process brings forth pertinent ethical and legal risks. Ethically, concerns primarily revolve around safeguarding the privacy and confidentiality of sensitive data, alongside the transformation of the doctor-patient relationship through technological interaction. Within the regulatory realm, issues encompass the classification of these tools as medical products, ensuring normative assurance of effective protection of mental health data, and addressing potential legal risks within this domain. This article aims to provide an overarching view of this landscape, serving as a catalyst for the technological, ethical, and legal discourse necessitated by digital mental health.


Subject(s)
Confidentiality , Mental Health , Mobile Applications , Humans , Confidentiality/legislation & jurisprudence , Confidentiality/ethics , Mobile Applications/ethics , Mobile Applications/legislation & jurisprudence , Physician-Patient Relations/ethics , Telemedicine/ethics , Telemedicine/legislation & jurisprudence , Wearable Electronic Devices/ethics , Computer Security/legislation & jurisprudence
5.
Stud Health Technol Inform ; 316: 2-6, 2024 Aug 22.
Article in English | MEDLINE | ID: mdl-39176659

ABSTRACT

Currently, there are no adequate methods for dealing with changes in the healthcare system brought about by electronic health applications (eHealth) or the associated ethical implications in practice. This can be attributed to the lack of comprehensive interdisciplinary approaches that could support teams in integrating ethical considerations into the agile software development process. To close this gap, the DARE approach has been developed and tested in interdisciplinary collaborative research. The DARE method is a modular system designed to improve the development of ethically sound software in a deliberative, agile, and responsive manner.


Subject(s)
Codes of Ethics , Telemedicine , Telemedicine/ethics , Software Design , Software , Humans , Electronic Health Records/ethics
6.
Cas Lek Cesk ; 163(3): 106-114, 2024.
Article in English | MEDLINE | ID: mdl-38981731

ABSTRACT

Telemedicine, defined as the practice of delivering healthcare services remotely using information and communications technologies, raises a plethora of ethical considerations. As telemedicine evolves, its ethical dimensions play an increasingly pivotal role in balancing the benefits of advanced technologies, ensuring responsible healthcare practices within telemedicine environments, and safeguarding patient rights. Healthcare providers, patients, policymakers, and technology developers involved in telemedicine encounter numerous ethical challenges that need to be addressed. Key ethical topics include prioritizing the protection of patient rights and privacy, which entails ensuring equitable access to remote healthcare services and maintaining the doctor-patient relationship in virtual settings. Additional areas of focus encompass data security concerns and the quality of healthcare delivery, underscoring the importance of upholding ethical standards in the digital realm. A critical examination of these ethical dimensions highlights the necessity of establishing binding ethical guidelines and legal regulations. These measures could assist stakeholders in formulating effective strategies and methodologies to navigate the complex telemedicine landscape, ensuring adherence to the highest ethical standards and promoting patient welfare. A balanced approach to telemedicine ethics should integrate the benefits of telemedicine with proactive measures to address emerging ethical challenges and should be grounded in a well-prepared and respected ethical framework.


Subject(s)
Telemedicine , Telemedicine/ethics , Humans , Patient Rights/ethics , Confidentiality/ethics , Computer Security/ethics , Physician-Patient Relations/ethics
7.
Medicine (Baltimore) ; 103(28): e38834, 2024 Jul 12.
Article in English | MEDLINE | ID: mdl-38996110

ABSTRACT

Epidemic outbreaks of infectious diseases in conflict zones are complex threats to public health and humanitarian activities that require creative approaches to reducing their damage. This narrative review focuses on the intersection of technology with infectious disease response in conflict zones, and on the complexity of healthcare infrastructure, population displacement, and security risks. It explores how conflict-related destruction harms healthcare systems and impedes disease surveillance and response activities. In this regard, the review also considers the contributions of technological innovations, such as improved epidemiological surveillance, mobile health (mHealth) technologies, genomic sequencing, and surveillance technologies, to strengthening infectious disease management in conflict settings. Ethical issues related to data privacy, security, and fairness are also covered. Through policy recommendations focused on investment in surveillance systems, diagnostic capacity, capacity building, collaboration, and ethical governance, stakeholders can leverage technology to enhance the response to infectious diseases in conflict settings and thus protect global health security. This review offers information for researchers, policymakers, and practitioners dealing with infectious disease outbreaks in conflict-torn areas.


Subject(s)
Communicable Diseases , Humans , Communicable Diseases/epidemiology , Armed Conflicts , Disease Outbreaks/prevention & control , Health Policy , Communicable Disease Control/methods , Telemedicine/ethics
10.
JMIR Ment Health ; 11: e57155, 2024 May 08.
Article in English | MEDLINE | ID: mdl-38717799

ABSTRACT

BACKGROUND: Digital approaches may be helpful in augmenting care to address unmet mental health needs, particularly for schizophrenia and severe mental illness (SMI). OBJECTIVE: An international multidisciplinary group was convened to reach a consensus on the challenges and potential solutions regarding collecting data, delivering treatment, and the ethical challenges in digital mental health approaches for schizophrenia and SMI. METHODS: The consensus development panel method was used, with an in-person meeting of 2 groups: the expert group and the panel. Membership was multidisciplinary including those with lived experience, with equal participation at all stages and coproduction of the consensus outputs and summary. Relevant literature was shared in advance of the meeting, and a systematic search of the recent literature on digital mental health interventions for schizophrenia and psychosis was completed to ensure that the panel was informed before the meeting with the expert group. RESULTS: Four broad areas of challenge and proposed solutions were identified: (1) user involvement for real coproduction; (2) new approaches to methodology in digital mental health, including agreed standards, data sharing, measuring harms, prevention strategies, and mechanistic research; (3) regulation and funding issues; and (4) implementation in real-world settings (including multidisciplinary collaboration, training, augmenting existing service provision, and social and population-focused approaches). Examples are provided with more detail on human-centered research design, lived experience perspectives, and biomedical ethics in digital mental health approaches for SMI. CONCLUSIONS: The group agreed by consensus on a number of recommendations: (1) a new and improved approach to digital mental health research (with agreed reporting standards, data sharing, and shared protocols), (2) equal emphasis on social and population research as well as biological and psychological approaches, (3) meaningful collaborations across varied disciplines that have previously not worked closely together, (4) increased focus on the business model and product with planning and new funding structures across the whole development pathway, (5) increased focus and reporting on ethical issues and potential harms, and (6) organizational changes to allow for true communication and coproduction with those with lived experience of SMI. This study approach, combining an international expert meeting with patient and public involvement and engagement throughout the process, consensus methodology, discussion, and publication, is a helpful way to identify directions for future research and clinical implementation in rapidly evolving areas and can be combined with measurements of real-world clinical impact over time. Similar initiatives will be helpful in other areas of digital mental health and similarly fast-evolving fields to focus research and organizational change and effect improved real-world clinical implementation.


Subject(s)
Consensus , Schizophrenia , Humans , Schizophrenia/therapy , Telemedicine/ethics , Telemedicine/methods , Mental Health Services/organization & administration , Mental Disorders/therapy
11.
Theor Med Bioeth ; 45(3): 199-209, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38789701

ABSTRACT

In the management of the COVID-19 crisis, digital technologies were used in a major way. This article defends the hypothesis that these technologies took the form of a "tacit social experimentation" and justifies this concept on three levels. The first part uses the concept to qualify the form of biopolitics that was implemented to manage the crisis: digital technologies were used to discipline the population and, literally, as instruments of knowledge about the population. Uncertainty forced experts to make preliminary observations and to act in order to produce knowledge. Second, the article shows that the use of digital technologies during the crisis was experimental in a second sense: by promoting telemedicine within a more flexible legal framework, the authorities authorised an experimental use of telemedicine without knowledge or control of its side effects. Finally, the article defends the use of the concept of "tacit social experimentation" for ethical and political purposes, since understanding the experiments carried out during the crisis raises the question of the involvement of the participants and of their democratic steering.


Subject(s)
COVID-19 , Digital Technology , SARS-CoV-2 , Telemedicine , COVID-19/prevention & control , COVID-19/epidemiology , Humans , Telemedicine/ethics , Pandemics/ethics , Politics
12.
Stud Health Technol Inform ; 314: 147-148, 2024 May 23.
Article in English | MEDLINE | ID: mdl-38785021

ABSTRACT

This paper explores the security, privacy, and ethical implications of e-health data in Iran's healthcare network. A framework is proposed to ensure security and privacy in electronic health information processing across various institutions. The framework addresses aspects such as software/hardware, communication networks, patient safety, privacy, confidentiality, online health service regulations, commercial and judicial exploitation, and education/research. The study categorizes these requirements into seven main categories to safeguard health-oriented service recipients' security and privacy.


Subject(s)
Computer Security , Confidentiality , Electronic Health Records , Iran , Computer Security/ethics , Confidentiality/ethics , Electronic Health Records/ethics , Telemedicine/ethics , Humans
13.
J Am Med Inform Assoc ; 31(9): 2125-2136, 2024 Sep 01.
Article in English | MEDLINE | ID: mdl-38441296

ABSTRACT

OBJECTIVE: This scoping review aims to assess the current research landscape of the application and use of large language models (LLMs) and generative Artificial Intelligence (AI), through tools such as ChatGPT in telehealth. Additionally, the review seeks to identify key areas for future research, with a particular focus on AI ethics considerations for responsible use and ensuring trustworthy AI. MATERIALS AND METHODS: Following the scoping review methodological framework, a search strategy was conducted across 6 databases. To structure our review, we employed AI ethics guidelines and principles, constructing a concept matrix for investigating the responsible use of AI in telehealth. Using the concept matrix in our review enabled the identification of gaps in the literature and informed future research directions. RESULTS: Twenty studies were included in the review. Among the included studies, 5 were empirical, and 15 were reviews and perspectives focusing on different telehealth applications and healthcare contexts. Benefit and reliability concepts were frequently discussed in these studies. Privacy, security, and accountability were peripheral themes, with transparency, explainability, human agency, and contestability lacking conceptual or empirical exploration. CONCLUSION: The findings emphasized the potential of LLMs, especially ChatGPT, in telehealth. They provide insights into understanding the use of LLMs, enhancing telehealth services, and taking ethical considerations into account. By proposing three future research directions with a focus on responsible use, this review further contributes to the advancement of this emerging phenomenon of healthcare AI.


Subject(s)
Artificial Intelligence , Telemedicine , Telemedicine/ethics , Artificial Intelligence/ethics , Humans
14.
Aten Primaria ; 56(7): 102901, 2024 Jul.
Article in Spanish | MEDLINE | ID: mdl-38452658

ABSTRACT

The history of medicine underscores the significance of ethics in each advancement, with bioethics playing a pivotal role in addressing emerging ethical challenges in digital health (DH). This article examines the ethical dilemmas raised by innovations in DH, focusing on the healthcare system, professionals, and patients. Artificial intelligence (AI) raises concerns such as confidentiality and algorithmic bias. Mobile applications (apps) empower patients but pose challenges of access and digital literacy. Telemedicine (TM) democratizes care and reduces healthcare costs, but requires addressing the digital divide and interconsultation dilemmas; it necessitates high-quality standards, protection of patient information, and attention to equity in access. Wearables and the Internet of Things (IoT) transform healthcare but face ethical challenges such as privacy and equity. 21st-century bioethics must be adaptable, as DH tools demand constant review and consensus, and health science faculties must be prepared for the forthcoming changes.


Subject(s)
Artificial Intelligence , Telemedicine , Telemedicine/ethics , Humans , Artificial Intelligence/ethics , Bioethical Issues , Bioethics , Confidentiality/ethics , Mobile Applications/ethics , Digital Technology/ethics , Internet of Things/ethics , Digital Health
15.
Br J Dermatol ; 190(6): 789-797, 2024 May 17.
Article in English | MEDLINE | ID: mdl-38330217

ABSTRACT

The field of dermatology is experiencing the rapid deployment of artificial intelligence (AI), from mobile applications (apps) for skin cancer detection to large language models like ChatGPT that can answer generalist or specialist questions about skin diagnoses. With these new applications, ethical concerns have emerged. In this scoping review, we aimed to identify the applications of AI to the field of dermatology and to understand their ethical implications. We used a multifaceted search approach, searching PubMed, MEDLINE, Cochrane Library and Google Scholar for primary literature, following the PRISMA Extension for Scoping Reviews guidance. Our advanced query included terms related to dermatology, AI and ethical considerations. Our search yielded 202 papers. After initial screening, 68 studies were included. Thirty-two were related to clinical image analysis and raised ethical concerns for misdiagnosis, data security, privacy violations and replacement of dermatologist jobs. Seventeen discussed limited skin of colour representation in datasets leading to potential misdiagnosis in the general population. Nine articles about teledermatology raised ethical concerns, including the exacerbation of health disparities, lack of standardized regulations, informed consent for AI use and privacy challenges. Seven addressed inaccuracies in the responses of large language models. Seven examined attitudes toward and trust in AI, with most patients requesting supplemental assessment by a physician to ensure reliability and accountability. Benefits of AI integration into clinical practice include increased patient access, improved clinical decision-making, efficiency and many others. However, safeguards must be put in place to ensure the ethical application of AI.


The use of artificial intelligence (AI) in dermatology is rapidly increasing, with applications in dermatopathology, medical dermatology, cutaneous surgery, microscopy/spectroscopy and the identification of prognostic biomarkers (characteristics that provide information on likely patient health outcomes). However, with the rise of AI in dermatology, ethical concerns have emerged. We reviewed the existing literature to identify applications of AI in the field of dermatology and understand the ethical implications. Our search initially identified 202 papers, and after we went through them (screening), 68 were included in our review. We found that ethical concerns are related to the use of AI in the areas of clinical image analysis, teledermatology, natural language processing models, privacy, skin of colour representation, and patient and provider attitudes toward AI. We identified nine ethical principles to facilitate the safe use of AI in dermatology. These ethical principles include fairness, inclusivity, transparency, accountability, security, privacy, reliability, informed consent and conflict of interest. Although there are many benefits of integrating AI into clinical practice, our findings highlight how safeguards must be put in place to reduce rising ethical concerns.


Subject(s)
Artificial Intelligence , Dermatology , Humans , Artificial Intelligence/ethics , Dermatology/ethics , Dermatology/methods , Telemedicine/ethics , Informed Consent/ethics , Confidentiality/ethics , Diagnostic Errors/ethics , Diagnostic Errors/prevention & control , Computer Security/ethics , Skin Diseases/diagnosis , Skin Diseases/therapy , Mobile Applications/ethics
17.
J Int Bioethique Ethique Sci ; 33(2): 15-25, 2023.
Article in French | MEDLINE | ID: mdl-36894337

ABSTRACT

The practice of telemedicine is likely to raise ethical and legal problems that affect the doctor-patient relationship. Respect for ethical principles is therefore necessary, along with the involvement of the legislator, who must enact specific instruments capable of identifying the problems caused by telemedicine and contributing to a certain humanization of the doctor-patient relationship.


Subject(s)
Physician-Patient Relations , Telemedicine , Humans , Physician-Patient Relations/ethics , Telemedicine/ethics , Telemedicine/legislation & jurisprudence , Telemedicine/methods
19.
Rev. Hosp. Clin. Univ. Chile ; 33(3): 234-241, 2022.
Article in Spanish | LILACS | ID: biblio-1417240

ABSTRACT

The paper proposes, as its topic of analysis, the emergence of telemedicine, a tool that has been used intensively by doctors and other professionals during the COVID-19 pandemic. The essay, divided into two parts, first describes the current situation of telemedicine and then proposes a few precautionary theses on telemedicine and the doctor-patient relationship, as the latter has been understood and transmitted by medical anthropology and the medical humanities. (AU)


Subject(s)
Humans , Physician-Patient Relations/ethics , Telemedicine/ethics , Information Technology/ethics
20.
S Afr Med J ; 111(5): 416-420, 2021 04 30.
Article in English | MEDLINE | ID: mdl-34852881

ABSTRACT

Digital technologies continue to penetrate the South African (SA) healthcare sector at an increasing rate. Clinician-to-clinician diagnostic and management assistance through mHealth is expanding rapidly, reducing professional isolation and unnecessary referrals, and promoting better patient outcomes and more equitable healthcare systems. However, the widespread uptake of mHealth use raises ethical concerns around patient autonomy and safety, and guidance for healthcare workers around the ethical use of mHealth is needed. This article presents the results of a multi-stakeholder workshop at which the 'dos and don'ts' pertaining to mHealth ethics in the SA context were formulated and aligned to seven basic recommendations derived from the literature and previous multi-stakeholder, multi-country meetings.


Subject(s)
Delivery of Health Care/organization & administration , Health Personnel/organization & administration , Telemedicine/organization & administration , Delivery of Health Care/ethics , Humans , Personal Autonomy , Referral and Consultation , South Africa , Telemedicine/ethics