Results 1 - 12 of 12
1.
Nat Med; 29(11): 2929-2938, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37884627

ABSTRACT

Artificial intelligence as a medical device is increasingly being applied to healthcare for diagnosis, risk stratification and resource allocation. However, a growing body of evidence has highlighted the risk of algorithmic bias, which may perpetuate existing health inequity. This problem arises in part because of systemic inequalities in dataset curation, unequal opportunity to participate in research and inequalities of access. This study aims to explore existing standards, frameworks and best practices for ensuring adequate data diversity in health datasets. Exploring the body of existing literature and expert views is an important step towards the development of consensus-based guidelines. The study comprises two parts: a systematic review of existing standards, frameworks and best practices for healthcare datasets; and a survey and thematic analysis of stakeholder views of bias, health equity and best practices for artificial intelligence as a medical device. We found that the need for dataset diversity was well described in the literature, and experts generally favored the development of a robust set of guidelines, but there were mixed views about how these could be implemented in practice. The outputs of this study will be used to inform the development of standards for transparency of data diversity in health datasets (the STANDING Together initiative).


Subjects
Artificial Intelligence, Delivery of Health Care, Humans, Consensus, Systematic Reviews as Topic
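As a loose illustration of what reporting on data diversity can look like in practice, the sketch below tabulates subgroup shares in a dataset against a reference population; the attribute names and reference shares are invented for the example and are not drawn from the study or the STANDING Together outputs.

```python
# Illustrative only: the "sex" column and the 50/50 reference shares are
# hypothetical placeholders, not values from the paper.
import pandas as pd

def diversity_report(df: pd.DataFrame, attribute: str, reference: dict) -> pd.DataFrame:
    """Compare subgroup shares in a health dataset against a reference population."""
    observed = df[attribute].value_counts(normalize=True)
    report = pd.DataFrame({
        "observed_share": observed,
        "reference_share": pd.Series(reference),
    })
    report["gap"] = report["observed_share"] - report["reference_share"]
    return report.sort_values("gap")

# Toy stand-in for a curated health dataset.
data = pd.DataFrame({"sex": ["F", "M", "M", "M", "F", "M", "M", "M"]})
print(diversity_report(data, "sex", reference={"F": 0.5, "M": 0.5}))
```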
2.
JMIR AI; 2: e47353, 2023 Oct 31.
Article in English | MEDLINE | ID: mdl-38875571

ABSTRACT

BACKGROUND: Artificial intelligence (AI) is often promoted as a potential solution for many challenges health care systems face worldwide. However, its implementation in clinical practice lags behind its technological development. OBJECTIVE: This study aims to gain insights into the current state and prospects of AI technology from the stakeholders most directly involved in its adoption in the health care sector, whose perspectives have received limited attention in research to date. METHODS: For this purpose, the perspectives of AI researchers and health care IT professionals in North America and Western Europe were collected and compared for profession-specific and regional differences. In this preregistered, mixed methods, cross-sectional study, 23 experts were interviewed using a semistructured guide. Data from the interviews were analyzed using deductive and inductive qualitative methods for the thematic analysis along with topic modeling to identify latent topics. RESULTS: Through our thematic analysis, four major categories emerged: (1) the current state of AI systems in health care, (2) the criteria and requirements for implementing AI systems in health care, (3) the challenges in implementing AI systems in health care, and (4) the prospects of the technology. Experts discussed the capabilities and limitations of current AI systems in health care in addition to their prevalence and regional differences. Several criteria and requirements deemed necessary for the successful implementation of AI systems were identified, including the technology's performance and security, smooth system integration and human-AI interaction, costs, stakeholder involvement, and employee training. However, regulatory, logistical, and technical issues were identified as the most critical barriers to an effective technology implementation process. Looking ahead, our experts predicted both threats and opportunities related to AI technology in the health care sector. CONCLUSIONS: Our work provides new insights into the current state, criteria, challenges, and outlook for implementing AI technology in health care from the perspective of AI researchers and IT professionals in North America and Western Europe. For the full potential of AI-enabled technologies to be exploited and for them to contribute to solving current health care challenges, critical implementation criteria must be met, and all groups involved in the process must work together.
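The methods above mention topic modeling to surface latent topics in the interview data. A minimal sketch of that kind of analysis, assuming scikit-learn's LDA implementation and placeholder transcripts (the study's actual corpus, preprocessing and parameters are not reproduced here):

```python
# Minimal topic-modeling sketch (LDA via scikit-learn); the documents are
# invented placeholders, not the study's interview transcripts.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

transcripts = [
    "integration of the ai system into the clinical workflow was difficult",
    "data security and regulatory approval delayed deployment",
    "training staff to trust model outputs took considerable time",
    "costs and vendor contracts shaped which systems were adopted",
]

vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(transcripts)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(counts)

# Print the top words per latent topic.
terms = vectorizer.get_feature_names_out()
for i, weights in enumerate(lda.components_):
    top = [terms[j] for j in weights.argsort()[::-1][:5]]
    print(f"topic {i}: {', '.join(top)}")
```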

3.
Gut; 71(9): 1909-1915, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35688612

ABSTRACT

Artificial intelligence (AI) and machine learning (ML) systems are increasingly used in medicine to improve clinical decision-making and healthcare delivery. In gastroenterology and hepatology, studies have explored a myriad of opportunities for AI/ML applications, which are already making the transition to the bedside. Despite these advances, there is a risk that biases and health inequities can be introduced or exacerbated by these technologies. If unrecognised, these technologies could generate or worsen systematic racial, ethnic and sex disparities when deployed on a large scale. There are several mechanisms through which AI/ML could contribute to health inequities in gastroenterology and hepatology, including diagnosis of oesophageal cancer, management of inflammatory bowel disease (IBD), liver transplantation, colorectal cancer screening and many others. This review adapts a framework for ethical AI/ML development and application to gastroenterology and hepatology such that clinical practice is advanced while minimising bias and optimising health equity.


Subjects
Gastroenterology, Health Equity, Artificial Intelligence, Clinical Decision-Making, Humans, Machine Learning
4.
Patterns (N Y); 3(1): 100392, 2022 Jan 14.
Article in English | MEDLINE | ID: mdl-35079713

ABSTRACT

Machine learning has traditionally operated in a space where data and labels are assumed to be anchored in objective truths. Unfortunately, much evidence suggests that the "embodied" data acquired from and about human bodies does not create systems that function as desired. The complexity of health care data can be linked to a long history of discrimination, and research in this space forbids naive applications. To improve health care, machine learning models must strive to recognize, reduce, or remove such biases from the start. We aim to enumerate many examples to demonstrate the depth and breadth of biases that exist and that have been present throughout the history of medicine. We hope that outrage over algorithms automating biases will lead to changes in the underlying practices that generated such data, leading to reduced health disparities.

6.
Annu Rev Biomed Data Sci; 4: 123-144, 2021 Jul.
Article in English | MEDLINE | ID: mdl-34396058

ABSTRACT

The use of machine learning (ML) in healthcare raises numerous ethical concerns, especially as models can amplify existing health inequities. Here, we outline ethical considerations for equitable ML in the advancement of healthcare. Specifically, we frame ethics of ML in healthcare through the lens of social justice. We describe ongoing efforts and outline challenges in a proposed pipeline of ethical ML in health, ranging from problem selection to postdeployment considerations. We close by summarizing recommendations to address these challenges.


Subjects
Delivery of Health Care, Social Justice, Health Facilities, Machine Learning, Morals
9.
AMA J Ethics; 21(2): E167-179, 2019 Feb 1.
Article in English | MEDLINE | ID: mdl-30794127

ABSTRACT

Background: As machine learning becomes increasingly common in health care applications, concerns have been raised about bias in these systems' data, algorithms, and recommendations. Simply put, as health care improves for some, it might not improve for all. Methods: Two case studies are examined using a machine learning algorithm on unstructured clinical and psychiatric notes to predict intensive care unit (ICU) mortality and 30-day psychiatric readmission with respect to race, gender, and insurance payer type as a proxy for socioeconomic status. Results: Clinical note topics and psychiatric note topics were heterogeneous with respect to race, gender, and insurance payer type, which reflects known clinical findings. Differences in prediction accuracy, and therefore machine bias, are shown with respect to gender and insurance payer type for ICU mortality, and with respect to insurance payer type for psychiatric 30-day readmission. Conclusions: This analysis can provide a framework for assessing and identifying disparate impacts of artificial intelligence in health care.


Subjects
Artificial Intelligence, Delivery of Health Care/organization & administration, Healthcare Disparities/organization & administration, Healthcare Disparities/statistics & numerical data, Intensive Care Units/statistics & numerical data, Mental Health Services/organization & administration, Patient Readmission/statistics & numerical data, Adult, Aged, Aged, 80 and over, Delivery of Health Care/statistics & numerical data, Female, Humans, Male, Mental Health Services/statistics & numerical data, Middle Aged, Mortality, Sex Factors
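The bias assessment described above amounts to comparing prediction performance across patient subgroups. A minimal sketch of such a subgroup audit, using synthetic predictions and group labels rather than the study's clinical data or its actual model:

```python
# Sketch of a subgroup performance audit; labels, groups and scores are
# synthetic stand-ins, not the study's ICU or psychiatric data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000
group = rng.choice(["private", "public"], size=n)   # e.g. insurance payer type
y_true = rng.integers(0, 2, size=n)                 # e.g. ICU mortality label

# Simulate a model that is noisier (less accurate) for one group.
noise = np.where(group == "private", 0.2, 0.45)
y_score = np.clip(y_true + rng.normal(0, noise, size=n), 0, 1)

for g in np.unique(group):
    mask = group == g
    print(f"{g}: AUC = {roc_auc_score(y_true[mask], y_score[mask]):.3f}")
```

Per-group AUC gaps of this kind are one way to surface the disparate impacts the abstract refers to; calibration and error-rate differences can be audited in the same manner.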
11.
PLoS One; 13(12): e0209017, 2018.
Article in English | MEDLINE | ID: mdl-30571719

ABSTRACT

Phonotraumatic vocal hyperfunction (PVH) is associated with chronic misuse and/or abuse of voice that can result in lesions such as vocal fold nodules. Clinical aerodynamic assessment of vocal function has recently been shown to differentiate between patients with PVH and healthy controls, providing meaningful insight into pathophysiological mechanisms associated with these disorders. However, current clinical assessment of PVH is incomplete because of its inability to objectively identify the type and extent of detrimental phonatory function that is associated with PVH during daily voice use. The current study sought to address this issue by incorporating, for the first time in a comprehensive ambulatory assessment, glottal airflow parameters estimated from a neck-mounted accelerometer and recorded to a smartphone-based voice monitor. We tested this approach on 48 patients with vocal fold nodules and 48 matched healthy-control subjects who each wore the voice monitor for a week. Seven glottal airflow features were estimated every 50 ms using an impedance-based inverse filtering scheme, and seven high-order summary statistics of each feature were computed every 5 minutes over voiced segments. Based on univariate hypothesis testing, eight glottal airflow summary statistics were found to be statistically different between patient and healthy-control groups. L1-regularized logistic regression for a supervised classification task yielded a mean (standard deviation) area under the ROC curve of 0.82 (0.25) and an accuracy of 0.83 (0.14). These results outperform the state of the art for the same classification task and provide a new avenue to improve the assessment and treatment of hyperfunctional voice disorders.


Subjects
Glottis/physiopathology, Point-of-Care Testing, Voice Disorders/diagnosis, Voice Disorders/physiopathology, Accelerometry, Adult, Air Movements, Computer-Assisted Diagnosis, Female, Humans, Middle Aged, Smartphone, Vocal Cords/physiopathology, Voice, Voice Disorders/etiology, Young Adult
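The classifier reported in this abstract is an L1-regularized logistic regression over glottal airflow summary statistics. A minimal sketch of that model family, assuming scikit-learn and a synthetic feature matrix in place of the real ambulatory accelerometer features:

```python
# Sketch of L1-regularized logistic regression for a patients-vs-controls task;
# the feature matrix is synthetic, not the study's glottal airflow features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_subjects, n_features = 96, 49          # e.g. 7 airflow features x 7 summary statistics
X = rng.normal(size=(n_subjects, n_features))
y = np.repeat([0, 1], n_subjects // 2)   # 0 = healthy control, 1 = patient
X[y == 1, :8] += 0.8                     # make a few features weakly informative

clf = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear", C=1.0),
)
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {scores.mean():.2f} (+/- {scores.std():.2f})")
```

The L1 penalty drives uninformative coefficients to zero, which is useful when, as here, many summary statistics are candidates but only a subset separates the groups.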
12.
Proc AAAI Conf Artif Intell; 2015: 446-453, 2015 Jan.
Article in English | MEDLINE | ID: mdl-27182460

ABSTRACT

The ability to determine patient acuity (or severity of illness) has immediate practical use for clinicians. We evaluate the use of multivariate time-series modeling with multi-task Gaussian process (GP) models using noisy, incomplete, sparse, heterogeneous and unevenly sampled clinical data, including both physiological signals and clinical notes. The learned multi-task GP (MTGP) hyperparameters are then used to assess and forecast patient acuity. Experiments were conducted with two real clinical data sets acquired from ICU patients: first, estimating cerebrovascular pressure reactivity, an important indicator of secondary damage for traumatic brain injury patients, by learning the interactions between intracranial pressure and mean arterial blood pressure signals; and second, mortality prediction using clinical progress notes. In both cases, MTGPs provided improved results: an MTGP model provided better results than single-task GP models for signal interpolation and forecasting (0.91 vs 0.69 RMSE), and the use of MTGP hyperparameters obtained improved results when used as additional classification features (0.812 vs 0.788 AUC).
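Multi-task GPs couple related signals through a shared covariance. One standard construction is the intrinsic coregionalization model, in which the joint kernel is a task covariance matrix B combined with an input kernel k via a Kronecker product. A toy numpy sketch of that construction on synthetic signals (not the paper's ICU data or its exact kernel and hyperparameter choices):

```python
# Toy intrinsic-coregionalization multi-task GP regression in numpy;
# the two signals are synthetic, not the paper's ICU waveforms.
import numpy as np

def rbf(a, b, lengthscale=1.0):
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 25)                              # shared, sparse time grid
y1 = np.sin(t) + 0.1 * rng.normal(size=t.size)          # task 1 (e.g. one pressure signal)
y2 = 0.8 * np.sin(t) + 0.1 * rng.normal(size=t.size)    # correlated task 2
y = np.concatenate([y1, y2])                            # task-major stacking

B = np.array([[1.0, 0.8],                               # task (coregionalization) covariance
              [0.8, 1.0]])
K = np.kron(B, rbf(t, t)) + 1e-2 * np.eye(2 * t.size)   # joint kernel plus noise

# Predict task 2 on a denser grid, borrowing strength from both tasks.
t_star = np.linspace(0, 10, 100)
K_star = np.kron(B[1], rbf(t_star, t))                  # cross-covariance: task 2 vs both tasks
mean_task2 = K_star @ np.linalg.solve(K, y)
print(mean_task2[:5])
```

In the paper, the learned MTGP hyperparameters themselves (rather than the posterior predictions) were additionally fed to a classifier as features for mortality prediction.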
