Results 1-2 of 2
1.
Radiographics. 2024 May;44(5):e230067.
Article in English | MEDLINE | ID: mdl-38635456

ABSTRACT

Artificial intelligence (AI) algorithms are prone to bias at multiple stages of model development, with potential for exacerbating health disparities. However, bias in imaging AI is a complex topic that encompasses multiple coexisting definitions. Bias may refer to unequal preference for a person or group owing to preexisting attitudes or beliefs, either intentional or unintentional. In contrast, cognitive bias refers to systematic deviation from objective judgment due to reliance on heuristics, and statistical bias refers to differences between true and expected values, commonly manifesting as systematic error in model prediction (ie, a model with output unrepresentative of real-world conditions). Clinical decisions informed by biased models may lead to patient harm due to action on inaccurate AI results or may exacerbate health inequities due to differing performance among patient populations. However, while inequitable bias can harm patients in this context, a mindful approach leveraging equitable bias can address underrepresentation of minority groups or rare diseases. Radiologists should also be aware of bias after AI deployment, such as automation bias, a tendency to agree with automated decisions despite contrary evidence. Understanding common sources of imaging AI bias and the consequences of using biased models can guide preventive measures to mitigate its impact. Accordingly, the authors focus on sources of bias at stages along the imaging machine learning life cycle, attempting to simplify potentially intimidating technical terminology for general radiologists using AI tools in practice or collaborating with data scientists and engineers for AI tool development. The authors review definitions of bias in AI, describe common sources of bias, and present recommendations to guide quality control measures to mitigate the impact of bias in imaging AI. Understanding the terms featured in this article will enable a proactive approach to identifying and mitigating bias in imaging AI. Published under a CC BY 4.0 license. Test Your Knowledge questions for this article are available in the supplemental material. See the invited commentary by Rouzrokh and Erickson in this issue.
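
One practical way to act on the statistical-bias definition above is a subgroup performance audit before deployment. The sketch below is illustrative, not from the article: all data, group labels, and sizes are synthetic assumptions, and it simply compares AUC across two hypothetical demographic subgroups to surface a performance disparity.

```python
# Minimal sketch: auditing a model's predictions for subgroup performance
# gaps, one way to surface the "statistical bias" the article describes.
# All labels, scores, and subgroup assignments here are synthetic.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic example: true labels, model scores, and a demographic
# subgroup indicator for 1,000 patients (all hypothetical).
y_true = rng.integers(0, 2, size=1000)
y_score = np.clip(y_true * 0.3 + rng.normal(0.5, 0.25, size=1000), 0, 1)
subgroup = rng.choice(["group_a", "group_b"], size=1000, p=[0.8, 0.2])

# Compare AUC per subgroup; a large gap flags a performance disparity
# worth investigating before clinical use.
for g in np.unique(subgroup):
    mask = subgroup == g
    auc = roc_auc_score(y_true[mask], y_score[mask])
    print(f"{g}: n={mask.sum()}, AUC={auc:.3f}")
```

A gap between the per-group AUCs, especially for the smaller group, is the kind of differing performance among patient populations the abstract warns about.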


Subject(s)
Algorithms; Artificial Intelligence; Humans; Automation; Machine Learning; Bias
2.
Am J Clin Pathol. 2024 Apr 20.
Article in English | MEDLINE | ID: mdl-38642073

ABSTRACT

OBJECTIVES: Iron-deficiency anemia (IDA) is a common health problem worldwide, and up to 10% of adult patients with incidental IDA may have gastrointestinal cancer. A diagnosis of IDA can be established through a combination of laboratory tests, but it is often underrecognized until a patient becomes symptomatic. Based on advances in machine learning, we hypothesized that we could reduce the time to diagnosis by developing an IDA prediction model. Our goal was to develop 3 neural networks by using retrospective longitudinal outpatient laboratory data to predict the risk of IDA 3 to 6 months before traditional diagnosis. METHODS: We analyzed retrospective outpatient electronic health record data between 2009 and 2020 from an academic medical center in northern Texas. We included laboratory features from 30,603 patients to develop 3 types of neural networks: artificial neural networks, long short-term memory cells, and gated recurrent units. The classifiers were trained using the Adam optimizer across 200 random training-validation splits. We calculated accuracy, area under the receiver operating characteristic curve, sensitivity, and specificity in the testing split. RESULTS: Although all models demonstrated comparable performance, the gated recurrent unit model outperformed the other 2, achieving an accuracy of 0.83, an area under the receiver operating characteristic curve of 0.89, a sensitivity of 0.75, and a specificity of 0.85 across 200 epochs. CONCLUSIONS: Our results showcase the feasibility of employing deep learning techniques for early prediction of IDA in the outpatient setting based on sequences of laboratory data, offering a substantial lead time for clinical intervention.
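
To make the gated-recurrent-unit approach concrete, below is a minimal PyTorch sketch of a GRU classifier over per-visit laboratory sequences, trained with the Adam optimizer as in the study. The layer sizes, sequence length, feature count, and learning rate are illustrative assumptions, not the authors' published architecture.

```python
# Minimal sketch of a GRU classifier over sequences of laboratory results,
# in the spirit of the study's best-performing model. All hyperparameters
# and the synthetic batch below are assumptions for illustration.
import torch
import torch.nn as nn

class LabGRU(nn.Module):
    def __init__(self, n_features: int = 10, hidden: int = 64):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # binary IDA-risk logit

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time steps, lab features per visit)
        _, h_n = self.gru(x)               # h_n: (1, batch, hidden)
        return self.head(h_n.squeeze(0))   # (batch, 1) logits

model = LabGRU()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# One training step on a synthetic batch: 32 patients, 12 time points,
# 10 lab values per visit (all hypothetical dimensions).
x = torch.randn(32, 12, 10)
y = torch.randint(0, 2, (32, 1)).float()
loss = loss_fn(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.3f}")
```

Using the final hidden state as the patient representation is the standard choice for sequence classification with recurrent networks; accuracy, AUC, sensitivity, and specificity would then be computed on a held-out testing split as the abstract describes.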
