Results 1 - 20 of 222
1.
Neural Netw ; 178: 106461, 2024 Jun 12.
Article in English | MEDLINE | ID: mdl-38906054

ABSTRACT

Hard-label black-box textual adversarial attacks present a highly challenging task due to the discrete and non-differentiable nature of text data and the fact that only the model's predicted label, not its scores or gradients, is observable. Research on this problem is still in its early stages, and the performance and efficiency of existing methods have room for improvement. For instance, exchange-based and gradient-based attacks may become trapped in local optima and require excessive queries, hindering the generation of adversarial examples with high semantic similarity and low perturbation under limited query budgets. To address these issues, we propose a novel framework called HyGloadAttack (adversarial Attacks via Hybrid optimization and Global random initialization) for crafting high-quality adversarial examples. HyGloadAttack utilizes a perturbation matrix in the word embedding space to find nearby adversarial examples after global initialization and selects synonyms that maximize similarity while maintaining adversarial properties. Furthermore, we introduce a gradient-based quick-search method to accelerate the optimization. Extensive experiments on five text classification and natural language inference datasets, as well as two real APIs, demonstrate the significant superiority of the proposed HyGloadAttack over state-of-the-art baseline methods.
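The hard-label setting this abstract describes (only the predicted label is observable, so the attacker must search over discrete synonym substitutions) can be illustrated with a minimal random-search sketch. The toy victim classifier, vocabulary, and synonym table below are assumptions for illustration only, not part of HyGloadAttack:

```python
import random

# Toy hard-label "victim": only the predicted class is observable (no scores),
# mimicking the hard-label black-box setting.
def victim_predict(tokens):
    return "negative" if "bad" in tokens or "awful" in tokens else "positive"

# Illustrative synonym table (an assumption for this sketch).
SYNONYMS = {"bad": ["poor", "mediocre"], "movie": ["film", "picture"]}

def similarity(a, b):
    # Crude proxy for semantic similarity: fraction of unchanged tokens.
    return sum(x == y for x, y in zip(a, b)) / len(a)

def hard_label_attack(tokens, max_queries=200):
    """Randomly sample synonym substitutions; keep the candidate that flips
    the predicted label while maximizing similarity to the original text."""
    original = victim_predict(tokens)
    best = None
    rng = random.Random(0)
    for _ in range(max_queries):
        cand = [rng.choice([t] + SYNONYMS.get(t, [])) for t in tokens]
        if victim_predict(cand) != original:  # still adversarial?
            if best is None or similarity(tokens, cand) > similarity(tokens, best):
                best = cand
    return best

adv = hard_label_attack(["a", "bad", "movie"])
```

Real methods such as the one summarized above replace the blind random sampling with embedding-space optimization so that far fewer queries are spent on non-adversarial candidates.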

2.
Front Med (Lausanne) ; 11: 1343456, 2024.
Article in English | MEDLINE | ID: mdl-38887675

ABSTRACT

Artificial intelligence (AI) is a multidisciplinary field at the intersection of computer science, cognitive science, and other disciplines, concerned with creating systems that perform tasks generally requiring human intelligence. It consists of algorithms and computational methods that allow machines to learn from data, make decisions, and perform complex tasks, with the aim of developing intelligent systems that can work independently or collaboratively with humans. Since AI technologies may help physicians prevent and diagnose life-threatening diseases and make treatment smarter and more targeted, they are spreading through health services. Indeed, humans and machines have unique strengths and weaknesses and can complement each other in providing and optimizing healthcare. However, the implementation of these technologies in healthcare raises emerging ethical and deontological issues regarding a feared reduction of doctors' decision-making autonomy and discretion, which are generally strongly conditioned by cognitive elements of the specific clinical case. Moreover, this new operational dimension also modifies the usual allocation of responsibilities in case of adverse events due to healthcare malpractice, probably imposing a redefinition of the established medico-legal criteria for assessing medical professional liability. This article outlines the new challenges arising from the integration of AI into healthcare and possible ways to overcome them, with a focus on the Italian legal framework. In this evolving, transitional context, the need emerges to balance the human dimension with the artificial one, without mutual exclusion, toward a new concept of medicine "with" machines rather than "of" machines.

3.
Expert Rev Anti Infect Ther ; : 1-9, 2024 Jun 20.
Article in English | MEDLINE | ID: mdl-38881100

ABSTRACT

BACKGROUND: In 2017 and 2021, the National Medical Products Administration (NMPA) announced revisions to the drug labels of fluoroquinolones. We aimed to evaluate the association of fluoroquinolone prescribing with the NMPA announcements of label changes. RESEARCH DESIGN AND METHODS: Monthly prevalence of fluoroquinolone prescriptions for uncomplicated urinary tract infections (uUTI), acute exacerbation of chronic bronchitis (AECB), and acute sinusitis (AS) between 2016 and 2022 was calculated, and interrupted time series analysis was applied to assess the impact of the NMPA label changes on fluoroquinolone use. RESULTS: Prevalence of fluoroquinolone prescriptions decreased by 2.39% (95% CI, -4.72% to -0.07%) for uUTI but increased by 3.02% (95% CI, 1.71% to 4.34%) for AS immediately after the 2017 label change. Moreover, after the 2021 label change, fluoroquinolone use decreased briefly for all three indications. However, a significant increasing trend was observed in fluoroquinolone use for AECB episodes, and at the end of 2022 fluoroquinolones were still used for 61.4% of treated uUTI, 31.6% of treated AECB, and 5.42% of treated AS. CONCLUSIONS: The label changes issued by the NMPA had no substantial impact on fluoroquinolone prescribing in the study region in China. Fluoroquinolone prescribing was still highly prevalent for uUTI and AECB and thus requires further antimicrobial stewardship.
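Interrupted time series designs like the one described here are typically analyzed with segmented regression: an intercept, a baseline trend, an immediate level change at the intervention, and a post-intervention trend change. A minimal self-contained sketch (the monthly prevalence series is synthetic, not the study's data):

```python
def ols(X, y):
    """Least squares via normal equations with Gaussian elimination."""
    k = len(X[0])
    A = [[sum(X[r][i] * X[r][j] for r in range(len(X))) for j in range(k)]
         for i in range(k)]
    b = [sum(X[r][i] * y[r] for r in range(len(X))) for i in range(k)]
    for i in range(k):  # forward elimination with partial pivoting
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            for c in range(i, k):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    beta = [0.0] * k
    for i in reversed(range(k)):  # back substitution
        beta[i] = (b[i] - sum(A[i][c] * beta[c] for c in range(i + 1, k))) / A[i][i]
    return beta

months = list(range(24))
change = 12  # month of the (hypothetical) label change
# Synthetic prevalence: baseline trend, then an immediate -2.0 level drop.
y = [10 + 0.1 * t + (-2.0 if t >= change else 0.0) for t in months]
X = [[1.0,                                   # intercept
      t,                                     # baseline trend
      1.0 if t >= change else 0.0,           # immediate level change
      float(t - change) if t >= change else 0.0]  # post-change trend change
     for t in months]
b0, trend, level_change, trend_change = ols(X, y)
```

On this noise-free series the fit recovers the level change of -2.0 with no change in trend, which is the pattern the abstract reports for the short-lived 2021 effect.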

4.
Epilepsy Res ; 203: 107382, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38761467

ABSTRACT

BACKGROUND: Pharmacovigilance systems such as the FDA Adverse Event Reporting System (FAERS) are established models for detecting adverse events that may have been missed during clinical trials. We aimed to analyze twenty-five anti-seizure medications (ASMs) in FAERS to assess for increased reporting of suicidal and self-injurious behavior. METHODS: Twenty-five ASMs were analyzed: brivaracetam, cannabidiol, carbamazepine, clobazam, clonazepam, diazepam, eslicarbazepine, felbamate, gabapentin, lacosamide, lamotrigine, levetiracetam, oxcarbazepine, perampanel, phenobarbital, phenytoin, pregabalin, primidone, rufinamide, stiripentol, tiagabine, topiramate, valproate, vigabatrin, and zonisamide. Reports of "suicidal and self-injurious behavior" were collected from January 1, 2004, to December 31, 2020, using the OpenVigil 2.1 tool with the indication "Epilepsy". The relative reporting ratio, proportional reporting ratio, and reporting odds ratio were calculated using all other drug reports for epilepsy patients as a control. RESULTS: Significant reporting odds ratios (ROR greater than 1, p<0.05) were observed for diazepam (2.909), pregabalin (2.739), brivaracetam (2.462), gabapentin (2.185), clonazepam (1.649), zonisamide (1.462), lacosamide (1.333), and levetiracetam (1.286). CONCLUSIONS: Of the 25 ASMs analyzed in this study, 4 (16%) were identified as linked with a likely true adverse event: diazepam, brivaracetam, gabapentin, and pregabalin. Although the FAERS database has several limitations, it is imperative to closely monitor patient comorbidities for increased risk of suicidality with the use of several ASMs.
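The disproportionality metrics named in the methods are simple functions of a 2x2 contingency table (event reports vs. all other reports, for the drug of interest vs. all other drugs). A worked sketch with hypothetical counts, not FAERS data:

```python
import math

# Hypothetical 2x2 table for a drug of interest vs. all other drugs.
a, b = 40, 960     # drug of interest: event reports / all other reports
c, d = 200, 9800   # all other drugs:  event reports / all other reports

ror = (a / b) / (c / d)              # reporting odds ratio
prr = (a / (a + b)) / (c / (c + d))  # proportional reporting ratio
n = a + b + c + d
rrr = a / ((a + b) * (a + c) / n)    # relative reporting ratio (observed/expected)

# 95% CI of the ROR on the log scale (standard Woolf approximation).
se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci = (math.exp(math.log(ror) - 1.96 * se),
      math.exp(math.log(ror) + 1.96 * se))
```

A signal is typically flagged when the ROR exceeds 1 and the lower CI bound stays above 1, matching the "ROR greater than 1, p<0.05" criterion in the results.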


Asunto(s)
Sistemas de Registro de Reacción Adversa a Medicamentos , Anticonvulsivantes , Conducta Autodestructiva , United States Food and Drug Administration , Humanos , Anticonvulsivantes/efectos adversos , Conducta Autodestructiva/inducido químicamente , Conducta Autodestructiva/epidemiología , Estados Unidos/epidemiología , Masculino , Femenino , Sistemas de Registro de Reacción Adversa a Medicamentos/estadística & datos numéricos , Adulto , Adolescente , Persona de Mediana Edad , Suicidio/estadística & datos numéricos , Adulto Joven , Bases de Datos Factuales , Farmacovigilancia , Niño , Anciano
5.
Cureus ; 16(4): e57597, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38706997

ABSTRACT

A black box warning, signaling potential life-threatening adverse effects of medications or medical devices, is crucial for public and healthcare professional awareness. Comprehending and adhering to these warnings can prevent serious harm. This review aims to elucidate their significance. Data on drugs with black box warnings were collected from the Food and Drug Administration's (FDA's) official website using the search term 'Boxed warnings' from January 1, 2015, to January 31, 2024. A Microsoft Excel spreadsheet (Microsoft Corporation, Redmond, WA, USA) containing black box warnings for this period was downloaded from the FDA's website. Additional parameters, such as drug class and whether the warnings were new or existing, were added to the downloaded spreadsheet. The collected data were organized by year, categorizing new and existing warnings, along with details on the evidence source, system-wise classification, and black box warnings for commonly used drugs, including their clinical significance. Results show that, of the black box warnings issued in the past decade, 40% were issued in 2023, followed by 12% in 2022. Most warnings (67%) were existing ones with minor revisions, while 29% were new. Nine existing warnings were removed during the period. Post-marketing studies predominantly provided the evidence for these warnings. Neuropsychiatric concerns, such as addiction potential (31%) and suicidal tendency (7%), along with hypersensitivity reactions (12%), were the most frequently encountered black box warnings. Black box warnings play a crucial role in highlighting the serious adverse effects of medications. Neuropsychiatric warnings have been frequent over the past decade. Awareness of these warnings is essential to prevent adverse effects and enhance patient care, especially concerning drugs commonly encountered in clinical practice such as guaifenesin/hydrocodone bitartrate, zolpidem, and montelukast.

6.
J Pak Med Assoc ; 74(4 (Supple-4)): S85-S89, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38712414

ABSTRACT

The Operating Room Black Box (ORBB) is a relatively recent technology that provides a comprehensive solution for assessing the technical and non-technical skills of the operating team. Inspired by aviation, the ORBB enables real-time observation and continuous recording of intraoperative events, allowing an in-depth analysis of efficiency, safety, and adverse events. Its dual role as a teaching tool enhances transparency and patient safety in surgical training. In comparison to traditional methods such as checklists, which have limitations, the ORBB offers a holistic understanding of the clinical and non-clinical performances that determine intraoperative patient outcomes. It facilitates systematic observation without additional personnel, allowing review of numerous surgical cases. This review highlights the potential benefits of the ORBB in enhancing patient safety, its role as a surgical training tool, and the barriers to its adoption, especially in resource-constrained settings. It represents a transformative step for global surgical practice, emphasizing transparency and improved surgical outcomes.


Subject(s)
Operating Rooms , Patient Safety , Humans , Operating Rooms/standards , Checklist , Clinical Competence , General Surgery/education
7.
Forensic Sci Int Synerg ; 8: 100472, 2024.
Article in English | MEDLINE | ID: mdl-38737990

ABSTRACT

In recent years, there has been discussion and controversy relating to the treatment of inconclusive decisions in forensic feature comparison disciplines when considering the reliability of examination methods and results. In this article, we offer a brief review of the various viewpoints and suggestions that have been recently put forth, followed by a solution that we believe addresses the treatment of inconclusive decisions. We consider the issues in the context of method conformance and method performance as two distinct concepts, both of which are necessary for the determination of reliability. Method conformance relates to an assessment of whether the outcome of a method is the result of the analyst's adherence to the procedures that define the method. Method performance reflects the capacity of a method to discriminate between different propositions of interest (e.g., mated and non-mated comparisons). We then discuss implications of these issues for the forensic science community.

8.
Neural Netw ; 175: 106310, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38663301

ABSTRACT

Thermal infrared detectors have a vast array of potential applications in pedestrian detection and autonomous driving, and their safety performance is of great concern. Recent works use bulb plates, "QR" suits, and infrared patches as physical perturbations to perform white-box attacks on thermal infrared detectors, which are effective but not practical for real-world scenarios. Some researchers have tried to utilize hot and cold blocks as physical perturbations for black-box attacks on thermal infrared detectors. However, these attempts have not yielded robust, multi-view physical attacks, indicating limitations in the approach. To overcome the limitations of existing approaches, we introduce a novel black-box physical attack method called adversarial infrared blocks (AdvIB). By optimizing the physical parameters of the infrared blocks and deploying them on pedestrians from multiple views, including the front, side, and back, AdvIB can execute robust, multi-view attacks on thermal infrared detectors. Our physical tests show that the proposed method achieves a success rate of over 80% under most distance and view conditions, validating its effectiveness. For stealthiness, our method attaches the adversarial infrared block to the inside of clothing. Additionally, we perform comprehensive experiments and compare the results with baselines to verify the robustness of our method. In summary, AdvIB enables potent multi-view black-box attacks, with profound implications for today's society. Potential consequences, including disasters from technology misuse and attackers' legal liability, highlight crucial ethical and security issues associated with AdvIB. Considering these concerns, we urge heightened attention to the proposed AdvIB. Our code can be accessed from the following link: https://github.com/ChengYinHu/AdvIB.git.


Subject(s)
Infrared Rays , Humans , Computer Security , Algorithms , Pedestrians , Neural Networks, Computer , Automobile Driving
9.
Forensic Sci Int ; 358: 112009, 2024 May.
Article in English | MEDLINE | ID: mdl-38581823

ABSTRACT

Tire impression evidence can be a valuable tool during a crime scene investigation: it can link vehicles to scenes or secondary locations, and reveal information about the series of events surrounding a crime. The interpretation of tire impression evidence relies on the expertise of forensic tire examiners. To date, there have not been any studies published that empirically evaluate the accuracy and reproducibility of decisions made by tire impression examiners. This paper presents the results of a study in which 17 tire impression examiners and trainees conducted 238 comparisons on 77 distinct questioned impression-known tire comparison sets (QKsets). This study was conducted digitally and addressed examinations based solely upon the characteristics of the tire impression images provided. The quality and characteristics of the impressions were selected to be broadly representative of those encountered in casework. Participants reported their decisions using a multi-level conclusion scale: 68% of responses were class associations (Association of Class Characteristics or Limited Association of Class), 21% were definitive decisions (ID or Exclusion), 8% were probable decisions (High Degree of Association or Indications of Non-Association), and 3% were neutral responses (Not Suitable or Inconclusive). Although class associations were the most reported response type, when definitive decisions were reported, they were often correct: 96% of IDs and 89% of Exclusions were consistent with ground truth regarding the source of the known tire in the QKset. Overall, we observed 4 erroneous definitive decisions (3 Exclusions on mated QKsets; 1 ID on a nonmated QKset) and 1 incorrect probable decision (Indications of Non-Association on a mated QKset).
Decision rates were notably associated with both quality (lower quality questioned impressions were more likely to result in class associations) and dimensionality (2D questioned impressions were more likely to result in definitive decisions), which were correlated factors. Although the study size limits the precision of the measured rates, the results of this study remain valuable to the forensic science and legal communities and provide empirical data regarding examiner performance for a discipline that previously did not have any such estimates.


Subject(s)
Forensic Sciences , Humans , Reproducibility of Results , Forensic Sciences/methods , Decision Making , Observer Variation
11.
Sensors (Basel) ; 24(5)2024 Feb 28.
Article in English | MEDLINE | ID: mdl-38475085

ABSTRACT

Sensor degradation and failure often undermine users' confidence in adopting a new data-driven decision-making model, especially in risk-sensitive scenarios. A risk assessment framework tailored to classification algorithms is introduced to evaluate the decision-making risks arising from sensor degradation and failures in such scenarios. The framework encompasses several steps, including on-site fault-free data collection, sensor failure data collection, fault data generation, simulated data-driven decision-making, risk identification, quantitative risk assessment, and risk prediction. Leveraging this risk assessment framework, users can evaluate the potential risk of decision errors under the current data collection status. Before model adoption, ranking risk sensitivity to sensor data provides a basis for optimizing data collection. During the use of decision algorithms, considering the expected lifespan of sensors enables prediction of the potential risks the system might face, offering comprehensive information for sensor maintenance. The method has been validated through a case study involving an access control system.

12.
Cureus ; 16(1): e51631, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38318552

ABSTRACT

Artificial intelligence (AI) is the capability of a machine to execute cognitive processes that are typically considered functions of the human brain. It is the study of algorithms that enable machines to reason and perform mental tasks, including problem-solving, object and word recognition, and decision-making. Once considered science fiction, AI today is a fact and an increasingly prevalent subject in both academic and popular literature. It is expected to reshape medicine, benefiting both healthcare professionals and patients. Machine learning (ML) is a subset of AI that allows machines to learn and make predictions by recognizing patterns, thus empowering the medical team to deliver better care to patients through accurate diagnosis and treatment. ML is expanding its footprint in a variety of surgical specialties, including general surgery, ophthalmology, cardiothoracic surgery, and vascular surgery, to name a few. In recent years, we have seen AI make its way into the operating theatre. Though it has not yet been able to replace the surgeon, it has the potential to become a highly valuable surgical tool. The day may not be far off when AI plays a significant intraoperative role, though that projection is currently tempered by safety concerns. This review explores the present application of AI in various surgical disciplines, how it benefits both patients and physicians, and the current obstacles and limitations facing its seemingly unstoppable rise.

13.
Med Health Care Philos ; 27(2): 227-240, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38353801

ABSTRACT

This manuscript draws on the moral norms arising from nuanced accounts of epistemic (in)justice and social identity in relational autonomy to normatively assess and articulate the ethical problems associated with using AI in patient care in light of the black box problem. The article also describes how black-boxed AI may be used within the healthcare system, and highlights what needs to happen to align AI with the moral norms it draws on. Deeper thinking about the impact of AI on the human experience, from backgrounds beyond decolonial scholarship and relational autonomy, is needed to appreciate any other barriers that may exist. Future studies can take up this task.


Subject(s)
Philosophy, Medical , Social Identification , Social Justice , Humans , Artificial Intelligence/ethics , Morals , Patient Care/ethics
14.
Eur J Radiol ; 173: 111393, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38417186

ABSTRACT

Artificial intelligence (AI) is taking nearly all fields of science by storm. One notorious property of AI algorithms is their so-called black box character: in particular, they are said to be inherently unexplainable. Such a characteristic would, of course, pose a problem for the medical world, including radiology. The patient journey is filled with explanations along the way, from diagnosis to treatment, follow-up, and more. If we were to replace part of these steps with non-explanatory algorithms, we could lose our grip on vital aspects such as finding mistakes, patient trust, and even the creation of new knowledge. In this article, we argue that, even for the darkest of black boxes, there is hope of understanding them. In particular, we compare the situation of understanding black box models to that of understanding the laws of nature in physics. In the case of physics, we are given a 'black box' law of nature, about which there is no upfront explanation. However, as current physical theories show, we can learn plenty about them. In this discussion, we present the process by which such explanations are made and the human role therein, keeping a solid focus on radiological AI situations. We outline the AI developers' role in this process, as well as the critical role fulfilled by the practitioners, the radiologists, in providing a healthy system of continuous improvement of AI models. Furthermore, we explore the role of the explainable AI (XAI) research program in the broader context we describe.


Subject(s)
Algorithms , Artificial Intelligence , Humans , Learning , Physical Examination , Radiologists
15.
JMIR Hum Factors ; 11: e53378, 2024 Jan 25.
Article in English | MEDLINE | ID: mdl-38271086

ABSTRACT

BACKGROUND: Adverse events refer to incidents with potential or actual harm to patients in hospitals. These events are typically documented through patient safety event (PSE) reports, which consist of detailed narratives providing contextual information on the occurrences. Accurate classification of PSE reports is crucial for patient safety monitoring. However, this process faces challenges due to inconsistencies in classifications and the sheer volume of reports. Recent advancements in text representation, particularly contextual text representation derived from transformer-based language models, offer a promising solution for more precise PSE report classification. Integrating the machine learning (ML) classifier necessitates a balance between human expertise and artificial intelligence (AI). Central to this integration is the concept of explainability, which is crucial for building trust and ensuring effective human-AI collaboration. OBJECTIVE: This study aims to investigate the efficacy of ML classifiers trained using contextual text representation in automatically classifying PSE reports. Furthermore, the study presents an interface that integrates the ML classifier with the explainability technique to facilitate human-AI collaboration for PSE report classification. METHODS: This study used a data set of 861 PSE reports from a large academic hospital's maternity units in the Southeastern United States. Various ML classifiers were trained with both static and contextual text representations of PSE reports. The trained ML classifiers were evaluated with multiclass classification metrics and the confusion matrix. The local interpretable model-agnostic explanations (LIME) technique was used to provide the rationale for the ML classifier's predictions. An interface that integrates the ML classifier with the LIME technique was designed for incident reporting systems. 
RESULTS: The top-performing classifier using contextual representation was able to obtain an accuracy of 75.4% (95/126) compared to an accuracy of 66.7% (84/126) by the top-performing classifier trained using static text representation. A PSE reporting interface has been designed to facilitate human-AI collaboration in PSE report classification. In this design, the ML classifier recommends the top 2 most probable event types, along with the explanations for the prediction, enabling PSE reporters and patient safety analysts to choose the most suitable one. The LIME technique showed that the classifier occasionally relies on arbitrary words for classification, emphasizing the necessity of human oversight. CONCLUSIONS: This study demonstrates that training ML classifiers with contextual text representations can significantly enhance the accuracy of PSE report classification. The interface designed in this study lays the foundation for human-AI collaboration in the classification of PSE reports. The insights gained from this research enhance the decision-making process in PSE report classification, enabling hospitals to more efficiently identify potential risks and hazards and enabling patient safety analysts to take timely actions to prevent patient harm.
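The word-level rationale that LIME provides can be approximated in a self-contained way: perturb a report's words, query the classifier on each perturbation, and score each word by the average prediction shift when it is present versus absent. The toy classifier and report below are illustrative assumptions, not the study's model or data:

```python
import random

def toy_classifier(tokens):
    # Toy stand-in for a PSE report classifier: returns P("medication error").
    # Known ground truth: "dose" contributes 0.3 and "wrong" contributes 0.2.
    score = 0.5 + 0.3 * ("dose" in tokens) + 0.2 * ("wrong" in tokens)
    return min(score, 1.0)

def lime_like_weights(tokens, n_samples=2000, seed=0):
    """LIME-style local attribution: random word masks, then per-word
    mean prediction with the word present minus mean with it absent."""
    rng = random.Random(seed)
    present = {t: [0.0, 0] for t in tokens}  # [sum of probs, count]
    absent = {t: [0.0, 0] for t in tokens}
    for _ in range(n_samples):
        mask = [rng.random() < 0.5 for _ in tokens]
        p = toy_classifier([t for t, m in zip(tokens, mask) if m])
        for t, m in zip(tokens, mask):
            (present if m else absent)[t][0] += p
            (present if m else absent)[t][1] += 1
    return {t: present[t][0] / present[t][1] - absent[t][0] / absent[t][1]
            for t in tokens}

weights = lime_like_weights(["patient", "given", "wrong", "dose"])
```

On this toy model the estimated weights recover the built-in contributions, while filler words score near zero, which is exactly the kind of rationale the study surfaces to reporters; it also shows how arbitrary high-scoring words would be visible to a human reviewer.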


Subject(s)
Artificial Intelligence , Calcium Compounds , Oxides , Patient Safety , Female , Pregnancy , Humans , Algorithms , Machine Learning
16.
Camb Q Healthc Ethics ; : 1-10, 2024 Jan 15.
Article in English | MEDLINE | ID: mdl-38220470

ABSTRACT

In the ethics of algorithms, a specifically epistemological analysis is rarely undertaken to ground a critique (or a defense) of the handling of, or trust in, medical black box algorithms (BBAs). This article aims to begin to fill this research gap. Specifically, we examine the thesis that such algorithms should be regarded as epistemic authorities (EAs) and that the result of a medical algorithm must completely replace other convictions that patients have (preemptionism). If this were true, it would be a reason to distrust medical BBAs. First, the author describes what EAs are and why BBAs can be considered EAs. Then, preemptionism is outlined and criticized as an answer to the question of how to deal with an EA. The discussion leads to some requirements for dealing with a BBA as an EA.

17.
ChemMedChem ; 19(3): e202300586, 2024 02 01.
Article in English | MEDLINE | ID: mdl-37983655

ABSTRACT

The use of black box machine learning models whose decisions cannot be understood limits the acceptance of predictions in interdisciplinary research and conceals learned artifacts that produce predictions for other than the anticipated reasons. Consequently, there is increasing interest in explainable artificial intelligence to rationalize predictions and uncover potential pitfalls. Relevant approaches include, among others, feature attribution methods that identify the molecular structures determining predictions, and counterfactuals (CFs), or contrastive explanations. CFs are defined as variants of test instances with minimal modifications that lead to opposing predictions. In medicinal chemistry, CFs have thus far been little investigated, although they are particularly intuitive from a chemical perspective. We introduce a new methodology for the systematic generation of CFs that is centered on well-defined structural analogues of test compounds. The approach is transparent, computationally straightforward, and shown to provide a wealth of CFs for test sets. The method is made freely available.


Subject(s)
Artificial Intelligence , Machine Learning , Chemistry, Pharmaceutical , Recombination, Genetic
18.
Hum Reprod ; 39(2): 285-292, 2024 Feb 01.
Article in English | MEDLINE | ID: mdl-38061074

ABSTRACT

With the exponential growth of computing power and the accumulation of embryo image data in recent years, artificial intelligence (AI) is starting to be utilized for embryo selection in IVF. Among AI technologies, machine learning (ML) has the potential to reduce operator-related subjectivity in embryo selection while saving labor time on this task. However, as modern deep learning (DL) techniques, a subcategory of ML, are increasingly used, their inherent black box attracts growing concern owing to well-recognized issues regarding the lack of interpretability. Currently, there is a lack of randomized controlled trials confirming the effectiveness of such black-box models. Recently, emerging evidence has shown underperformance of black-box models compared to more interpretable traditional ML models in embryo selection. Meanwhile, glass-box AI, such as interpretable ML, is being increasingly promoted across a wide range of fields, supported by its ethical advantages and technical feasibility. In this review, we propose a novel classification system for traditional and AI-driven systems from an embryology standpoint, defining different morphology-based selection approaches with an emphasis on subjectivity, explainability, and interpretability.


Subject(s)
Artificial Intelligence , Machine Learning , Humans , Embryo, Mammalian
19.
Graefes Arch Clin Exp Ophthalmol ; 262(3): 975-982, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37747539

ABSTRACT

PURPOSE: This narrative review aims to provide an overview of the dangers, controversial aspects, and implications of artificial intelligence (AI) use in ophthalmology and other medical-related fields. METHODS: We conducted a decade-long comprehensive search (January 2013-May 2023) of both academic and grey literature, focusing on the application of AI in ophthalmology and healthcare. This search included key web-based academic databases, non-traditional sources, and targeted searches of specific organizations and institutions. We reviewed and selected documents for relevance to AI, healthcare, ethics, and guidelines, aiming for a critical analysis of ethical, moral, and legal implications of AI in healthcare. RESULTS: Six main issues were identified, analyzed, and discussed. These include bias and clinical safety, cybersecurity, health data and AI algorithm ownership, the "black-box" problem, medical liability, and the risk of widening inequality in healthcare. CONCLUSION: Solutions to address these issues include collecting high-quality data of the target population, incorporating stronger security measures, using explainable AI algorithms and ensemble methods, and making AI-based solutions accessible to everyone. With careful oversight and regulation, AI-based systems can be used to supplement physician decision-making and improve patient care and outcomes.


Subject(s)
Artificial Intelligence , Ophthalmology , Humans , Algorithms , Artificial Intelligence/ethics , Databases, Factual , Morals
20.
Forensic Sci Int ; 354: 111909, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38104395

ABSTRACT

Forensic science disciplines such as latent print examination, bullet and cartridge case comparisons, and shoeprint analysis involve subjective decisions by forensic experts throughout the examination process. Most of these decisions involve ordinal categories. Examples include a three-category outcome for latent print comparisons (exclusion, inconclusive, identification) and a seven-category outcome for footwear comparisons (exclusion, indications of non-association, inconclusive, limited association of class characteristics, association of class characteristics, high degree of association, identification). As the results of forensic examinations of evidence can heavily influence the outcomes of court proceedings, it is important to assess the reliability and accuracy of the underlying decisions. "Black box" studies are the most common approach for assessing the reliability and accuracy of subjective decisions. In these studies, researchers produce evidence samples consisting of a questioned-source sample and a known-source sample where the ground truth (same source or different source) is known. Examiners assess selected samples using the same approach they would use in actual casework. These studies often have two phases: the first comprises decisions on samples of varying complexity by different examiners, and the second involves repeated decisions by the same examiner on a (usually small) subset of samples encountered in the first phase. We provide a statistical method to analyze ordinal decisions from black-box trials, with the objective of obtaining inferences for the reliability of these decisions and quantifying the variation in decisions attributable to the examiners, the samples, and statistical interaction effects between examiners and samples.
We present simulation studies to judge the performance of the model on data with known parameter values and apply the model to data from a handwritten signature complexity study, a latent fingerprint examination black-box study, and a handwriting comparisons black-box study.
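The hierarchical model itself is beyond a short snippet, but the basic repeatability quantities that a black-box study's second phase yields, i.e. how often an examiner repeats their own decision on an ordinal scale, can be illustrated in a few lines. The seven-point scale follows the footwear example in the abstract; the paired decisions are hypothetical:

```python
# Seven-category ordinal conclusion scale from the footwear example.
SCALE = ["exclusion", "indications of non-association", "inconclusive",
         "limited association of class characteristics",
         "association of class characteristics",
         "high degree of association", "identification"]
RANK = {c: i for i, c in enumerate(SCALE)}

# Hypothetical (first-phase decision, repeat decision) pairs per sample.
pairs = [
    ("identification", "identification"),
    ("inconclusive", "limited association of class characteristics"),
    ("exclusion", "exclusion"),
    ("high degree of association", "identification"),
]

# Exact repeatability: the same category chosen both times.
exact = sum(a == b for a, b in pairs) / len(pairs)
# On an ordinal scale near-misses are meaningful, so also report
# agreement within one step of the scale.
within_one = sum(abs(RANK[a] - RANK[b]) <= 1 for a, b in pairs) / len(pairs)
```

Treating the conclusions as ordered ranks, as here, is what lets the paper's model quantify examiner and sample variance rather than collapsing everything to agree/disagree.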


Subject(s)
Dermatoglyphics , Forensic Sciences , Reproducibility of Results , Computer Simulation , Handwriting