Results 1 - 20 of 352
1.
Proc Natl Acad Sci U S A ; 120(41): e2301842120, 2023 10 10.
Article in English | MEDLINE | ID: mdl-37782786

ABSTRACT

One of the most troubling trends in criminal investigations is the growing use of "black box" technology, in which law enforcement rely on artificial intelligence (AI) models or algorithms that are either too complex for people to understand or whose workings are simply concealed. In criminal cases, black box systems have proliferated in forensic areas such as DNA mixture interpretation, facial recognition, and recidivism risk assessment. Both champions and critics of AI mistakenly argue that we face a catch-22: black box AI is not understandable by people, yet it is assumed to produce more accurate forensic evidence. In this Article, we question this assertion, which has so powerfully affected judges, policymakers, and academics. We describe a mature body of computer science research showing how "glass box" AI, designed to be interpretable, can be more accurate than black box alternatives. Indeed, black box AI performs predictably worse in settings like the criminal system. Debunking the black box performance myth has implications for forensic evidence, constitutional criminal procedure rights, and legislative policy. Absent some compelling, or even credible, government interest in keeping AI a black box, and given the constitutional rights and public safety interests at stake, we argue that a substantial burden rests on the government to justify the use of black box AI in criminal cases. We conclude by calling for judicial rulings and legislation to safeguard a right to interpretable forensic AI.


Subject(s)
Artificial Intelligence , Criminals , Humans , Forensic Medicine , Law Enforcement , Algorithms
2.
Neuroimage ; 298: 120771, 2024 Aug 05.
Article in English | MEDLINE | ID: mdl-39111376

ABSTRACT

Modeling dynamic interactions among network components is crucial to uncovering the evolution mechanisms of complex networks. Recently, spatio-temporal graph learning methods have achieved noteworthy results in characterizing the dynamic changes of inter-node relations (INRs). However, challenges remain: the spatial neighborhood of an INR is underexploited, and the spatio-temporal dependencies in INRs' dynamic changes are overlooked, ignoring the influence of historical states and local information. In addition, the explainability of such models has been understudied. To address these issues, we propose an explainable spatio-temporal graph evolution learning (ESTGEL) model to capture the dynamic evolution of INRs. Specifically, an edge attention module is proposed to exploit the spatial neighborhood of an INR at multiple levels, i.e., through a hierarchy of nested subgraphs derived by decomposing the initial node-relation graph. Subsequently, a dynamic relation learning module is proposed to capture the spatio-temporal dependencies of INRs. The INRs are then used as adjacency information to improve the node representations, resulting in a comprehensive delineation of the network's dynamic evolution. Finally, the approach is validated with real data from a brain development study. Experimental results on dynamic brain network analysis reveal that brain functional networks transition from dispersed to more convergent and modular structures throughout development. Significant changes are observed in the dynamic functional connectivity (dFC) associated with functions including emotional control, decision-making, and language processing.

3.
Cancer ; 130(12): 2101-2107, 2024 Jun 15.
Article in English | MEDLINE | ID: mdl-38554271

ABSTRACT

Modern artificial intelligence (AI) tools built on high-dimensional patient data are reshaping oncology care, helping to improve goal-concordant care, decrease cancer mortality rates, and increase workflow efficiency and scope of care. However, data-related concerns and human biases that seep into algorithms during development and post-deployment phases affect performance in real-world settings, limiting the utility and safety of AI technology in oncology clinics. The authors review the current potential and limitations of predictive AI for cancer diagnosis and prognostication, as well as of generative AI, specifically modern chatbots, which interface with patients and clinicians. They conclude the review with a discussion of ongoing challenges and regulatory opportunities in the field.


Subject(s)
Artificial Intelligence , Medical Oncology , Neoplasms , Humans , Medical Oncology/methods , Neoplasms/therapy , Neoplasms/diagnosis , Algorithms , Prognosis
4.
Brief Bioinform ; 23(3)2022 05 13.
Article in English | MEDLINE | ID: mdl-35511112

ABSTRACT

MOTIVATION: Drug-drug interactions (DDIs) occur when drugs are combined. Identifying potential DDIs helps us study the mechanisms behind combination medication or adverse reactions and thereby avoid side effects. Although many artificial intelligence methods predict and mine potential DDIs, they ignore the 3D structural information of drug molecules and do not fully consider the contribution of molecular substructures to DDIs. RESULTS: We propose a new deep learning architecture, 3DGT-DDI, a model composed of a 3D graph neural network and a pre-trained text attention mechanism. We used 3D molecular graph structure and position information to enhance the model's ability to predict DDIs, which enabled us to deeply explore the effect of drug substructures on DDI relationships. The results showed that 3DGT-DDI outperforms other state-of-the-art baselines, achieving an 84.48% macro F1 score on the DDIExtraction 2013 shared task dataset. Our 3D graph model also demonstrates its performance and explainability through weight visualization on the DrugBank dataset. 3DGT-DDI can help us better understand and identify potential DDIs, thereby helping to avoid the side effects of drug combinations. AVAILABILITY: The source code and data are available at https://github.com/hehh77/3DGT-DDI.
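As a minimal, hedged illustration of the reported metric (not the authors' pipeline), the macro F1 score is the unweighted mean of per-class F1 scores over the DDI relation classes; with scikit-learn and made-up labels it could be computed as follows:

```python
from sklearn.metrics import f1_score

# Hypothetical gold and predicted DDI relation labels (class names are illustrative,
# roughly following the DDIExtraction 2013 relation types); not the actual task data.
y_true = ["none", "effect", "mechanism", "advise", "effect", "none", "int"]
y_pred = ["none", "effect", "effect", "advise", "effect", "none", "none"]

# Macro averaging gives every relation class equal weight, so rare DDI types
# count as much as frequent ones.
print(f1_score(y_true, y_pred, average="macro", zero_division=0))
```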


Subject(s)
Artificial Intelligence , Drug-Related Side Effects and Adverse Reactions , Drug Interactions , Humans , Neural Networks, Computer , Software
5.
Hum Reprod ; 39(2): 285-292, 2024 Feb 01.
Article in English | MEDLINE | ID: mdl-38061074

ABSTRACT

With the exponential growth of computing power and the accumulation of embryo image data in recent years, artificial intelligence (AI) is starting to be utilized for embryo selection in IVF. Among different AI technologies, machine learning (ML) has the potential to reduce operator-related subjectivity in embryo selection while saving labor time on this task. However, as modern deep learning (DL) techniques, a subcategory of ML, are increasingly used, their black-box nature attracts growing concern owing to well-recognized issues regarding lack of interpretability. Currently, there is a lack of randomized controlled trials confirming the effectiveness of such black-box models. Recently, emerging evidence has shown underperformance of black-box models compared with more interpretable traditional ML models in embryo selection. Meanwhile, glass-box AI, such as interpretable ML, is being increasingly promoted across a wide range of fields, supported by its ethical advantages and technical feasibility. In this review, we propose a novel classification system for traditional and AI-driven systems from an embryology standpoint, defining different morphology-based selection approaches with an emphasis on subjectivity, explainability, and interpretability.


Subject(s)
Artificial Intelligence , Machine Learning , Humans , Embryo, Mammalian
6.
J Nucl Cardiol ; : 101889, 2024 Jun 08.
Article in English | MEDLINE | ID: mdl-38852900

ABSTRACT

BACKGROUND: We developed an explainable deep-learning (DL)-based classifier to identify flow-limiting coronary artery disease (CAD) from O-15 H2O perfusion positron emission tomography/computed tomography (PET/CT) and coronary CT angiography (CTA) imaging. The classifier uses polar map images together with numerical data and visualizes its data findings. METHODS: A DL model was implemented and evaluated on 138 individuals, consisting of a combined image- and data-based classifier considering 35 clinical, CTA, and PET variables. Data from invasive coronary angiography were used as reference. Performance was compared with clinical classification using accuracy (ACC), area under the receiver operating characteristic curve (AUC), F1 score (F1S), sensitivity (SEN), specificity (SPE), precision (PRE), net benefit, and Cohen's kappa. Statistical testing was conducted using McNemar's test. RESULTS: The DL model had a median ACC = 0.8478, AUC = 0.8481, F1S = 0.8293, SEN = 0.8500, SPE = 0.8846, and PRE = 0.8500. It showed improved detection of true-positive and false-negative cases, increased net benefit at thresholds up to 34%, and a comparable Cohen's kappa, reaching performance similar to clinical reading. Statistical testing revealed no significant differences between the DL model and clinical reading. CONCLUSIONS: The combined DL model is a feasible and effective method for detecting CAD, and it highlights important data findings for individual patients in an interpretable manner.
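For readers unfamiliar with the reported metrics, the sketch below shows one way such a metric panel and a McNemar comparison against clinical reading could be computed in Python; the labels are random placeholders, not the study data.

```python
import numpy as np
from sklearn.metrics import (accuracy_score, roc_auc_score, f1_score,
                             recall_score, precision_score,
                             cohen_kappa_score, confusion_matrix)
from statsmodels.stats.contingency_tables import mcnemar

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=138)      # placeholder ground truth (flow-limiting CAD yes/no)
p_model = rng.random(138)                  # placeholder model probabilities
y_model = (p_model >= 0.5).astype(int)
y_clin = rng.integers(0, 2, size=138)      # placeholder clinical reading

tn, fp, fn, tp = confusion_matrix(y_true, y_model).ravel()
metrics = {
    "ACC": accuracy_score(y_true, y_model),
    "AUC": roc_auc_score(y_true, p_model),
    "F1S": f1_score(y_true, y_model),
    "SEN": recall_score(y_true, y_model),  # sensitivity = recall of the positive class
    "SPE": tn / (tn + fp),                 # specificity
    "PRE": precision_score(y_true, y_model),
    "Kappa": cohen_kappa_score(y_true, y_model),
}
print(metrics)

# McNemar's test compares two paired classifiers (model vs. clinical reading) on the same cases.
model_correct = y_model == y_true
clin_correct = y_clin == y_true
table = [[np.sum(model_correct & clin_correct), np.sum(model_correct & ~clin_correct)],
         [np.sum(~model_correct & clin_correct), np.sum(~model_correct & ~clin_correct)]]
print("McNemar p-value:", mcnemar(table, exact=True).pvalue)
```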

7.
J Biomed Inform ; 156: 104681, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38960273

ABSTRACT

The multimorbidity problem involves the identification and mitigation of adverse interactions that occur when multiple computer-interpretable guidelines are applied concurrently to develop a treatment plan for a patient diagnosed with multiple diseases. Solving this problem requires decision support approaches whose outputs can be difficult for physicians to comprehend; as such, the rationale for the treatment plans generated by these approaches needs to be provided. OBJECTIVE: To develop an explainability component for an automated planning-based approach to the multimorbidity problem, and to assess the fidelity and interpretability of the generated explanations using a clinical case study. METHODS: The explainability component leverages the task-network model for representing computer-interpretable guidelines. It generates post-hoc explanations composed of three aspects that answer why specific clinical actions are in a treatment plan, why specific revisions were applied, and how factors such as medication cost and patient adherence influence the selection of specific actions. The explainability component is implemented as part of MitPlan, where we revised our planning-based approach to support explainability. We developed an evaluation instrument based on the system causability scale and other vetted surveys to evaluate the fidelity and interpretability of its explanations using a two-dimensional comparison study design. RESULTS: The explainability component was implemented for MitPlan and tested in the context of a clinical case study. The fidelity and interpretability of the generated explanations were assessed in a physician-focused evaluation study involving 21 participants from two specialties and two levels of experience. Results show that the explanations provided by the explainability component in MitPlan are of acceptable fidelity and interpretability, and that the clinical justification of the actions in a treatment plan is important to physicians. CONCLUSION: We created an explainability component that enriches an automated planning-based approach to solving the multimorbidity problem with meaningful explanations for actions in a treatment plan. This component relies on the task-network model to represent computer-interpretable guidelines and as such can be ported to other approaches that also use the task-network model representation. Our evaluation study demonstrated that explanations that support a physician's understanding of the clinical reasons for the actions in a treatment plan are useful and important.


Subject(s)
Multimorbidity , Humans , Decision Support Systems, Clinical , Patient Care Planning
8.
J Biomed Inform ; 154: 104650, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38701887

ABSTRACT

BACKGROUND: Partitioning diseases into distinct subtypes is crucial for study and for effective treatment strategies. The Open Targets Platform (OT) integrates biomedical, genetic, and biochemical datasets to empower disease ontologies, classifications, and potential gene targets. Nevertheless, many disease annotations are incomplete, requiring laborious expert medical input. This challenge is especially pronounced for rare and orphan diseases, where resources are scarce. METHODS: We present a machine learning approach to identifying diseases with potential subtypes, using the approximately 23,000 diseases documented in OT. We derive novel features for predicting diseases with subtypes using direct evidence. Machine learning models were applied to analyze feature importance and evaluate predictive performance for discovering both known and novel disease subtypes. RESULTS: Our model achieves a high ROC AUC (area under the receiver operating characteristic curve) of 89.4% in identifying known disease subtypes. We integrated pre-trained deep-learning language models and showed their benefits. Moreover, we identify 515 disease candidates predicted to possess previously unannotated subtypes. CONCLUSIONS: Our models can partition diseases into distinct subtypes. This methodology enables a robust, scalable approach to improving knowledge-based annotations and a comprehensive assessment of disease ontology tiers. Our candidates are attractive targets for further study and personalized medicine, potentially aiding in the unveiling of new therapeutic indications for sought-after targets.
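A rough, hedged sketch of the kind of evaluation described (cross-validated ROC AUC for a binary "has subtypes" label plus feature-importance inspection), with synthetic features standing in for the Open Targets-derived ones:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for disease-level features; the real study derives
# features from Open Targets evidence, which is not reproduced here.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=8,
                           weights=[0.8, 0.2], random_state=42)

clf = GradientBoostingClassifier(random_state=42)
auc_scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"5-fold ROC AUC: {auc_scores.mean():.3f} +/- {auc_scores.std():.3f}")

# Fit once on the full data to inspect which features drive the predictions.
clf.fit(X, y)
top = np.argsort(clf.feature_importances_)[::-1][:5]
print("Top features by importance:", top)
```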


Subject(s)
Machine Learning , Humans , Disease/classification , ROC Curve , Computational Biology/methods , Algorithms , Deep Learning
9.
Sensors (Basel) ; 24(12)2024 Jun 07.
Article in English | MEDLINE | ID: mdl-38931500

ABSTRACT

Cybersecurity has become a major concern in the modern world due to our heavy reliance on cyber systems. Advanced automated systems utilize many sensors for intelligent decision-making, and any malicious activity in these sensors could potentially lead to a system-wide collapse. To ensure safety and security, it is essential to have a reliable system that can automatically detect and prevent any malicious activity, and modern detection systems are built on machine learning (ML) models. Most often, the dataset generated from sensor nodes for detecting malicious activity is highly imbalanced, because the malicious class is significantly smaller than the non-malicious class. To address this issue, we propose a hybrid data balancing technique that combines cluster-based undersampling with the Synthetic Minority Oversampling Technique (SMOTE). We also propose an ensemble machine learning model that outperforms other standard ML models, achieving 99.7% accuracy. Additionally, we identify the critical features that pose security risks to the sensor nodes through an extensive explainability analysis of the proposed model. In brief, we have explored a hybrid data balancing method, developed a robust ensemble machine learning model for detecting malicious sensor nodes, and conducted a thorough analysis of the model's explainability.
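A hedged sketch of how such a hybrid balancing step (cluster-based undersampling of the majority class followed by SMOTE) could feed a soft-voting ensemble using imbalanced-learn; class ratios, models, and parameters are illustrative, not the paper's configuration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from imblearn.pipeline import Pipeline
from imblearn.under_sampling import ClusterCentroids
from imblearn.over_sampling import SMOTE

# Synthetic imbalanced data: roughly 5% "malicious" samples.
X, y = make_classification(n_samples=5000, n_features=15, weights=[0.95, 0.05],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

ensemble = VotingClassifier([
    ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
    ("gb", GradientBoostingClassifier(random_state=0)),
], voting="soft")

# Undersample the majority class toward the minority, then oversample with SMOTE,
# applying both only inside the training data.
pipe = Pipeline([
    ("under", ClusterCentroids(sampling_strategy=0.5, random_state=0)),
    ("smote", SMOTE(random_state=0)),
    ("clf", ensemble),
])
pipe.fit(X_tr, y_tr)
print(classification_report(y_te, pipe.predict(X_te)))
```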

10.
J Environ Manage ; 351: 119866, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38147770

ABSTRACT

Loktak Lake, one of the largest freshwater lakes in Manipur, India, is critical for the eco-hydrology and economy of the region but faces deteriorating water quality due to urbanisation, anthropogenic activities, and domestic sewage. Addressing the urgent need for effective pollution management, this study aims to assess the lake's water quality status using the water quality index (WQI) and to develop advanced machine learning (ML) tools for WQI assessment and ML model interpretation to improve pollution management decision making. The WQI was assessed using an entropy-based weighting arithmetic method, and three ML models, Gradient Boosting Machine (GBM), Random Forest (RF), and Deep Neural Network (DNN), were optimised using a grid search algorithm in the H2O Application Programming Interface (API). These models were validated with various metrics and interpreted globally and locally via Partial Dependence Plots (PDP), Accumulated Local Effects (ALE), and SHapley Additive exPlanations (SHAP). The results show a WQI range of 72.38-100, with 52.7% of samples categorised as very poor. The RF model outperformed GBM and DNN, showing the highest accuracy and generalisation ability, reflected in superior R2 values (0.97 in training, 0.90 in test) and lower root mean square error (RMSE). RF's minimal margin of error and reliable feature interpretation contrasted with DNN's larger margin of error and inconsistency, which limited its usefulness for decision making. Turbidity was found to be a critical predictive feature in all models, significantly influencing WQI, with other variables such as pH and temperature also playing an important role. SHAP dependence plots illustrated the direct relationship between key water quality parameters such as turbidity and WQI predictions. The novelty of this study lies in its comprehensive approach to the evaluation and interpretation of ML models for WQI estimation, which provides a nuanced understanding of water quality dynamics in Loktak Lake. By identifying the most effective ML models and key predictive features, this study provides invaluable insights for water quality management and paves the way for targeted strategies to monitor and improve water quality in this vital freshwater ecosystem.
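The study's models were built in H2O; as a hedged, generic sketch of the same workflow pattern (grid-searched random forest regression of WQI, R2/RMSE validation, SHAP interpretation), with placeholder feature names and synthetic data:

```python
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import r2_score, mean_squared_error

# Placeholder water-quality predictors and WQI target (synthetic, for illustration only).
rng = np.random.default_rng(1)
X = pd.DataFrame(rng.random((400, 4)), columns=["turbidity", "pH", "temperature", "DO"])
y = 70 + 25 * X["turbidity"] + 3 * X["pH"] + rng.normal(0, 2, 400)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

grid = GridSearchCV(RandomForestRegressor(random_state=1),
                    {"n_estimators": [200, 500], "max_depth": [None, 10]},
                    cv=5, scoring="r2")
grid.fit(X_tr, y_tr)
best = grid.best_estimator_

pred = best.predict(X_te)
print("R2:", r2_score(y_te, pred), "RMSE:", mean_squared_error(y_te, pred) ** 0.5)

# Global and local interpretation with SHAP (the tree explainer handles forest models).
explainer = shap.TreeExplainer(best)
shap_values = explainer.shap_values(X_te)
print("Mean |SHAP| per feature:", dict(zip(X.columns, np.abs(shap_values).mean(axis=0))))
```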


Subject(s)
Deep Learning , Water Quality , Lakes , Environmental Monitoring/methods , Ecosystem , India
11.
J Med Syst ; 48(1): 25, 2024 Feb 23.
Article in English | MEDLINE | ID: mdl-38393660

ABSTRACT

Precise neurosurgical guidance is critical for successful brain surgeries and plays a vital role in all phases of image-guided neurosurgery (IGN). Neuronavigation software enables real-time tracking of surgical tools and displays them with high precision relative to a virtual patient model. This work therefore focuses on the development of a novel multimodal IGN system, leveraging deep learning and explainable AI to enhance brain tumor surgery outcomes. The study establishes the clinical and technical requirements of the system for brain tumor surgeries. NeuroIGN adopts a modular architecture, including brain tumor segmentation, patient registration, and explainable output prediction, and integrates open-source packages into an interactive neuronavigational display. The NeuroIGN system components underwent validation and evaluation in both laboratory and simulated operating room (OR) settings. Experimental results demonstrated its accuracy in tumor segmentation and the success of ExplainAI in increasing the trust of medical professionals in deep learning. The proposed system was successfully assembled and set up within 11 min in a pre-clinical OR setting, with a tracking accuracy of 0.5 (± 0.1) mm. NeuroIGN was also rated as highly useful, with a high frame rate (19 FPS) and real-time ultrasound imaging capabilities. In conclusion, this paper describes not only the development of an open-source multimodal IGN system but also the innovative application of deep learning and explainable AI algorithms to enhance neuronavigation for brain tumor surgeries. By seamlessly integrating pre- and intra-operative patient image data with cutting-edge interventional devices, our experiments underscore the potential for deep learning models to improve the surgical treatment of brain tumors and long-term post-operative outcomes.


Subject(s)
Brain Neoplasms , Surgery, Computer-Assisted , Humans , Brain Neoplasms/diagnostic imaging , Brain Neoplasms/surgery , Brain Neoplasms/pathology , Neuronavigation/methods , Surgery, Computer-Assisted/methods , Neurosurgical Procedures/methods , Ultrasonography , Magnetic Resonance Imaging/methods
12.
Hum Brain Mapp ; 44(15): 5113-5124, 2023 10 15.
Article in English | MEDLINE | ID: mdl-37647214

ABSTRACT

Diffusion tensor imaging (DTI) and diffusion kurtosis imaging (DKI) have previously been used to explore white matter changes related to human immunodeficiency virus (HIV) infection. While DTI and DKI suffer from low specificity, the Combined Hindered and Restricted Model of Diffusion (CHARMED) provides additional microstructural specificity. We used these three models to evaluate microstructural differences between 35 HIV-positive patients without neurological impairment and 20 healthy controls who underwent diffusion-weighted imaging using three b-values. While significant group effects were found in all diffusion metrics, CHARMED and DKI analyses uncovered wider involvement of white matter tracts in HIV infection (80% vs. 20%) compared with DTI. In the restricted fraction (FR) analysis, we found significant differences in the left corticospinal tract, middle cerebellar peduncle, right inferior cerebellar peduncle, right corticospinal tract, splenium of the corpus callosum, left superior cerebellar peduncle, pontine crossing tract, left posterior limb of the internal capsule, and left/right medial lemniscus. These tracts are involved in language, motor function, equilibrium, behavior, and proprioception, supporting the functional integration that is frequently impaired in HIV-positive individuals. Additionally, we employed a machine learning algorithm (XGBoost) to discriminate HIV-positive patients from healthy controls using DTI and CHARMED metrics on an ROI-wise basis, and unique contributions to this discrimination were examined using Shapley explanation values. The CHARMED and DKI estimates produced the best performance. Our results suggest that biophysical multishell imaging, combining additional sensitivity with built-in specificity, provides further information about brain microstructural changes in multimodal areas involved in attention, emotion, and memory networks that are often impaired in HIV patients.
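A hedged, schematic sketch of the classification-plus-attribution step (XGBoost separating groups from ROI-wise diffusion metrics, SHAP quantifying per-feature contributions); the data and tract names are synthetic placeholders.

```python
import numpy as np
import pandas as pd
import shap
import xgboost as xgb
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for ROI-wise diffusion metrics (e.g., restricted fraction per tract).
rng = np.random.default_rng(7)
n_pos, n_ctl = 35, 20
cols = ["CST_L_FR", "CST_R_FR", "SCP_L_FR", "splenium_FR"]
X = pd.DataFrame(rng.normal(size=(n_pos + n_ctl, len(cols))), columns=cols)
y = np.array([1] * n_pos + [0] * n_ctl)  # 1 = HIV-positive, 0 = control

model = xgb.XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss")
print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean())

# Fit on all data and attribute the decision to individual ROI metrics with SHAP.
model.fit(X, y)
shap_values = shap.TreeExplainer(model).shap_values(X)
print("Mean |SHAP| per ROI metric:",
      dict(zip(cols, np.abs(shap_values).mean(axis=0))))
```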


Subject(s)
Diffusion Tensor Imaging , HIV Infections , White Matter , Humans , Male , Female , Young Adult , Adult , Middle Aged , Aged , HIV Infections/diagnostic imaging , White Matter/diagnostic imaging
13.
Hum Brain Mapp ; 44(7): 2921-2935, 2023 05.
Article in English | MEDLINE | ID: mdl-36852610

ABSTRACT

Brain decoding, which aims to identify brain states from neural activity, is important for cognitive neuroscience and neural engineering. However, existing machine learning methods for fMRI-based brain decoding suffer from either low classification performance or poor explainability. Here, we address this issue by proposing a biologically inspired architecture, the Spatial Temporal-pyramid Graph Convolutional Network (STpGCN), to capture the spatial-temporal graph representation of functional brain activities. By designing multi-scale spatial-temporal pathways and bottom-up pathways that mimic information processing and temporal integration in the brain, STpGCN explicitly utilizes the multi-scale temporal dependency of brain activities via graphs, thereby achieving high brain decoding performance. Additionally, we propose a sensitivity analysis method called BrainNetX to better explain the decoding results by automatically annotating task-related brain regions from a brain-network standpoint. We conduct extensive experiments on fMRI data under 23 cognitive tasks from the Human Connectome Project (HCP) S1200. The results show that STpGCN significantly improves brain-decoding performance compared with competing baseline models, and BrainNetX successfully annotates task-relevant brain regions. Post hoc analysis based on these regions further validates that the hierarchical structure in STpGCN contributes significantly to the explainability, robustness, and generalization of the model. Our methods not only provide insights into information representation in the brain under multiple cognitive tasks but also indicate a bright future for fMRI-based brain decoding.
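As background, the building block underlying graph convolutional architectures such as STpGCN is the symmetric-normalized graph convolution H' = ReLU(D^{-1/2}(A+I)D^{-1/2} H W); a minimal NumPy sketch of that single operation (not the STpGCN architecture itself) is:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph convolution: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)                     # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt  # symmetric normalization
    return np.maximum(A_norm @ H @ W, 0.0)    # linear transform + ReLU

# Toy example: 4 brain regions, 3 input features per region, 2 output features.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)     # toy functional-connectivity adjacency (binary)
H = rng.normal(size=(4, 3))                   # node features (e.g., regional time-series summaries)
W = rng.normal(size=(3, 2))                   # learnable weights (fixed here for illustration)
print(gcn_layer(A, H, W))
```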


Subject(s)
Connectome , Magnetic Resonance Imaging , Humans , Magnetic Resonance Imaging/methods , Brain , Connectome/methods , Cognition , Machine Learning
14.
Clin Transplant ; 37(1): e14845, 2023 01.
Article in English | MEDLINE | ID: mdl-36315983

ABSTRACT

BACKGROUND: Machine learning (ML) is increasingly being applied in cardiology to predict outcomes and assist in clinical decision-making. We sought to develop and validate an ML model for the prediction of mortality after heart transplantation (HT) in adults with congenital heart disease (ACHD). METHODS: The United Network for Organ Sharing (UNOS) database was queried from 2000 to 2020 for ACHD patients who underwent isolated HT. The study cohort was randomly split into derivation (70%) and validation (30%) datasets that were used to train and test a CatBoost ML model. Feature selection was performed using SHapley Additive exPlanations (SHAP). Recipient, donor, procedural, and post-transplant characteristics were tested for their ability to predict mortality. We additionally used SHAP for explainability analysis as well as individualized mortality risk assessment. RESULTS: The study cohort included 1033 recipients (median age 34 years, 61% male). At 1 year after HT, there were 205 deaths (19.9%). Out of a total of 49 variables, 10 were selected as highly predictive of 1-year mortality and were used to train the ML model. The area under the curve (AUC) and predictive accuracy were 0.80 and 75.2% for the 1-year model and 0.69 and 74.2% for the 3-year model, respectively. Based on the SHAP analysis, post-HT hemodialysis of the recipient had the strongest overall relative impact on 1-year mortality, followed by the recipient's estimated glomerular filtration rate, age, and ischemic time. CONCLUSIONS: ML models showed satisfactory predictive accuracy for mortality after HT in ACHD and allowed for individualized mortality risk assessment.
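A hedged sketch of the modeling pattern described (a CatBoost classifier with SHAP-based feature ranking and AUC/accuracy on a 30% hold-out); the variables are synthetic placeholders rather than UNOS fields.

```python
import numpy as np
import pandas as pd
import shap
from catboost import CatBoostClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, accuracy_score

# Synthetic stand-in for recipient/donor/procedural features and 1-year mortality.
X, y = make_classification(n_samples=1033, n_features=49, n_informative=10,
                           weights=[0.8, 0.2], random_state=0)
X = pd.DataFrame(X, columns=[f"var_{i}" for i in range(49)])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

model = CatBoostClassifier(iterations=300, depth=4, verbose=False, random_seed=0)
model.fit(X_tr, y_tr)

proba = model.predict_proba(X_te)[:, 1]
print("AUC:", roc_auc_score(y_te, proba), "ACC:", accuracy_score(y_te, model.predict(X_te)))

# SHAP values rank candidate predictors; the top-k could then be used to retrain a compact model.
shap_values = shap.TreeExplainer(model).shap_values(X_tr)
top10 = pd.Series(np.abs(shap_values).mean(axis=0), index=X.columns).nlargest(10)
print(top10)
```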


Subject(s)
Heart Defects, Congenital , Heart Failure , Heart Transplantation , Humans , Male , Adult , Female , Risk Assessment , Heart Defects, Congenital/surgery , Machine Learning
15.
BMC Med Res Methodol ; 23(1): 102, 2023 04 24.
Article in English | MEDLINE | ID: mdl-37095430

ABSTRACT

BACKGROUND: The use of machine learning is becoming increasingly popular in many disciplines, but there is still an implementation gap for machine learning models in clinical settings. Lack of trust in models is one of the issues that needs to be addressed in an effort to close this gap. No model is perfect, and it is crucial to know in which use cases we can trust a model and for which cases it is less reliable. METHODS: Four different algorithms are trained on the eICU Collaborative Research Database, using features similar to those of the APACHE IV severity-of-disease scoring system, to predict hospital mortality in the ICU. The training and testing procedure is repeated 100 times on the same dataset to investigate whether predictions for single patients change with small changes in the models. Features are then analysed separately to investigate potential differences between patients consistently classified correctly and incorrectly. RESULTS: A total of 34 056 patients (58.4%) are classified as true negatives, 6 527 patients (11.3%) as false positives, 3 984 patients (6.8%) as true positives, and 546 patients (0.9%) as false negatives. The remaining 13 108 patients (22.5%) are inconsistently classified across models and rounds. Histograms and distributions of feature values are compared visually to investigate differences between the groups. CONCLUSIONS: It is impossible to distinguish the groups using single features alone. Considering a combination of features, the difference between the groups becomes clearer. Incorrectly classified patients have features more similar to those of patients with the same prediction than to those of patients with the same outcome.
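A hedged sketch of the repetition scheme described: retrain and retest the same model family many times and track, per patient, how consistently they are classified correctly. The model, data, and split sizes are illustrative only.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for ICU features and hospital mortality labels.
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.9, 0.1], random_state=0)

n_rounds = 100
times_tested = np.zeros(len(y))     # how often each patient landed in a test set
times_correct = np.zeros(len(y))    # how often it was then classified correctly

for r in range(n_rounds):
    idx = np.arange(len(y))
    idx_tr, idx_te = train_test_split(idx, test_size=0.3, stratify=y, random_state=r)
    clf = LogisticRegression(max_iter=1000).fit(X[idx_tr], y[idx_tr])
    pred = clf.predict(X[idx_te])
    times_tested[idx_te] += 1
    times_correct[idx_te] += (pred == y[idx_te])

rate = times_correct / np.maximum(times_tested, 1)
consistent_correct = np.sum((rate == 1.0) & (times_tested > 0))
consistent_wrong = np.sum((rate == 0.0) & (times_tested > 0))
print(f"always correct: {consistent_correct}, always wrong: {consistent_wrong}, "
      f"inconsistent: {len(y) - consistent_correct - consistent_wrong}")
```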


Subject(s)
Intensive Care Units , Machine Learning , Humans , Hospital Mortality , APACHE , Algorithms
16.
J Pathol ; 257(1): 1-4, 2022 05.
Article in English | MEDLINE | ID: mdl-34928523

ABSTRACT

The use of artificial intelligence methods in the image-based diagnostic assessment of hematological diseases has been a growing trend in recent years. In these methods, the selection of quantitative features that describe cytological characteristics plays a key role. Such features are expected to add objectivity and inter-observer consistency to the geometric, color, or texture variables that pathologists usually interpret by visual inspection. In a recent paper in The Journal of Pathology, El Hussein, Chen et al. proposed an algorithmic procedure to assist pathologists in the diagnostic evaluation of chronic lymphocytic leukemia (CLL) progression using whole-slide image analysis of tissue samples. The core of the procedure was a set of quantitative descriptors (biomarkers) calculated from the segmentation of cell nuclei, which was performed using a convolutional neural network. These biomarkers were based on clinical practice and easily calculated with reproducible tools. They were used as input to a machine learning algorithm that classified samples into one of the stages of CLL progression. Work like this can contribute to integrating automated diagnostic systems based on the morphological analysis of histological slides and blood smears into the workflow of clinical laboratories. © 2021 The Pathological Society of Great Britain and Ireland.


Subject(s)
Artificial Intelligence , Leukemia, Lymphocytic, Chronic, B-Cell , Humans , Image Processing, Computer-Assisted , Leukemia, Lymphocytic, Chronic, B-Cell/diagnosis , Machine Learning , Neural Networks, Computer
17.
Environ Sci Technol ; 57(46): 17671-17689, 2023 Nov 21.
Article in English | MEDLINE | ID: mdl-37384597

ABSTRACT

Machine learning (ML) is increasingly used in environmental research to process large data sets and decipher complex relationships between system variables. However, owing to a lack of familiarity and methodological rigor, inadequate ML studies may lead to spurious conclusions. In this study, we synthesized literature analysis with our own experience to provide a tutorial-like compilation of common pitfalls along with best-practice guidelines for environmental ML research. We identified more than 30 key items and provided evidence-based data analysis drawing on 148 highly cited research articles to illustrate misconceptions about terminology, appropriate sample and feature sizes, data enrichment and feature selection, randomness assessment, data leakage management, data splitting, method selection and comparison, model optimization and evaluation, and model explainability and causality. By analyzing good examples of supervised learning and reference modeling paradigms, we hope to help researchers adopt more rigorous data preprocessing and model development standards for more accurate, robust, and practicable model use in environmental research and applications.
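One pitfall listed, data leakage during preprocessing, has a compact illustration: fitting a scaler (or, worse, a supervised feature selector) on the full dataset before cross-validation leaks test-fold information into training. A hedged sketch of the leaky versus the leak-free pattern using scikit-learn pipelines:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=500, n_features=30, random_state=0)

# Leaky pattern: the scaler sees all samples, including future test folds.
# (For plain scaling the effect is usually small; for supervised feature
# selection or target-aware transforms it can be large.)
X_leaky = StandardScaler().fit_transform(X)
leaky_scores = cross_val_score(LogisticRegression(max_iter=1000), X_leaky, y, cv=5)

# Leak-free pattern: scaling happens inside the pipeline, refit on each training fold.
pipe = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clean_scores = cross_val_score(pipe, X, y, cv=5)

print("leaky CV accuracy:", leaky_scores.mean())
print("leak-free CV accuracy:", clean_scores.mean())
```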


Subject(s)
Environmental Science , Machine Learning
18.
J Med Internet Res ; 25: e43838, 2023 06 12.
Article in English | MEDLINE | ID: mdl-37307043

ABSTRACT

BACKGROUND: Health professionals are often faced with the need to identify women at risk of manifesting poor psychological resilience following the diagnosis and treatment of breast cancer. Machine learning algorithms are increasingly used in clinical decision support (CDS) tools that help health professionals identify women at risk of adverse well-being outcomes and plan customized psychological interventions for them. Clinical flexibility, cross-validated performance accuracy, and model explainability permitting person-specific identification of risk factors are highly desirable features of such tools. OBJECTIVE: This study aimed to develop and cross-validate machine learning models designed to identify breast cancer survivors at risk of poor overall mental health and global quality of life and to identify potential targets of personalized psychological interventions according to an extensive set of clinical recommendations. METHODS: A set of 12 alternative models was developed to improve the clinical flexibility of the CDS tool. All models were validated using longitudinal data from a prospective, multicenter clinical pilot at 5 major oncology centers in 4 countries (Italy, Finland, Israel, and Portugal; the Predicting Effective Adaptation to Breast Cancer to Help Women to BOUNCE Back [BOUNCE] project). A total of 706 patients with highly treatable breast cancer were enrolled shortly after diagnosis and before the onset of oncological treatments and were followed up for 18 months. An extensive set of demographic, lifestyle, clinical, psychological, and biological variables measured within 3 months after enrollment served as predictors. Rigorous feature selection isolated key predictors of psychological resilience outcomes that could be incorporated into future clinical practice. RESULTS: Balanced random forest classifiers were successful at predicting well-being outcomes, with accuracies ranging between 78% and 82% (for 12-month end points after diagnosis) and between 74% and 83% (for 18-month end points after diagnosis). Explainability and interpretability analyses built on the best-performing models were used to identify potentially modifiable psychological and lifestyle characteristics that, if addressed systematically in the context of personalized psychological interventions, would be most likely to promote resilience for a given patient. CONCLUSIONS: Our results highlight the clinical utility of the BOUNCE modeling approach by focusing on resilience predictors that can be readily available to practicing clinicians at major oncology centers. The BOUNCE CDS tool paves the way for personalized risk assessment methods to identify patients at high risk of adverse well-being outcomes and to direct valuable resources toward those most in need of specialized psychological interventions.
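A hedged sketch of the core classifier type named in the abstract, a balanced random forest as implemented in imbalanced-learn, evaluated with cross-validated accuracy; the features and outcome are synthetic placeholders for the BOUNCE variables.

```python
from imblearn.ensemble import BalancedRandomForestClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for demographic/lifestyle/clinical/psychological predictors
# and a binary 12-month well-being outcome.
X, y = make_classification(n_samples=706, n_features=40, n_informative=12,
                           weights=[0.7, 0.3], random_state=0)

# Each tree in a balanced random forest is grown on a bootstrap sample that is
# rebalanced by undersampling the majority class.
clf = BalancedRandomForestClassifier(n_estimators=500, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {scores.mean():.2%}")
```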


Subject(s)
Breast Neoplasms , Decision Support Systems, Clinical , Resilience, Psychological , Humans , Female , Prospective Studies , Quality of Life , Risk Assessment , Machine Learning
19.
Sensors (Basel) ; 23(10)2023 May 17.
Article in English | MEDLINE | ID: mdl-37430758

ABSTRACT

With the massive worldwide smart-metering roll-out, both energy suppliers and users are starting to tap into the potential of higher-resolution energy readings for accurate billing, improved demand response, tariffs better tuned to users and the grid, and empowering end users to know, via nonintrusive load monitoring (NILM), how much their individual appliances contribute to their electricity bills. A number of NILM approaches based on machine learning (ML) have been proposed over the years, focusing on improving NILM model performance. However, the trustworthiness of the NILM model itself has hardly been addressed. It is important to explain the underlying model and its reasoning in order to understand why the model underperforms, to satisfy user curiosity, and to enable model improvement. This can be done by leveraging naturally interpretable or explainable models as well as explainability tools. This paper adopts a naturally interpretable decision tree (DT)-based approach for a NILM multiclass classifier. Furthermore, it leverages explainability tools to determine local and global feature importance and designs a methodology that informs feature selection for each appliance class, which can determine how well a trained model will predict an appliance on any unseen test data, minimising testing time on target datasets. We explain how one or more appliances can negatively impact the classification of other appliances and predict appliance and model performance of the REFIT-data-trained models on unseen data from the same house and on unseen houses from the UK-DALE dataset. Experimental results confirm that models trained with the explainability-informed local feature importance can improve toaster classification performance from 65% to 80%. Additionally, instead of a single five-appliance classifier, a three-classifier approach comprising a kettle, microwave, and dishwasher and a two-classifier approach comprising a toaster and washing machine improves classification performance for the dishwasher from 72% to 94% and for the washing machine from 56% to 80%.
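A hedged, simplified sketch of an interpretable decision-tree NILM classifier over hand-crafted window features, with the tree's own feature importances standing in for the explainability tooling; the feature names and data are invented, not REFIT or UK-DALE.

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Invented per-window features of an aggregate power signal and appliance labels.
rng = np.random.default_rng(3)
n = 1200
X = pd.DataFrame({
    "mean_power_w": rng.uniform(0, 3000, n),
    "peak_power_w": rng.uniform(0, 3500, n),
    "on_duration_s": rng.uniform(10, 7200, n),
    "rise_time_s": rng.uniform(0.1, 30, n),
})
appliances = ["kettle", "microwave", "dishwasher", "toaster", "washing_machine"]
y = rng.choice(appliances, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=3)
tree = DecisionTreeClassifier(max_depth=4, random_state=3).fit(X_tr, y_tr)

print("test accuracy:", tree.score(X_te, y_te))                 # near chance on these random labels
print(dict(zip(X.columns, tree.feature_importances_)))          # global feature importance
print(export_text(tree, feature_names=list(X.columns))[:500])   # the tree itself is readable
```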

20.
Sensors (Basel) ; 23(17)2023 Sep 02.
Article in English | MEDLINE | ID: mdl-37688069

ABSTRACT

Brain cancer is widely recognised as one of the most aggressive types of tumors; approximately 70% of patients diagnosed with this malignant cancer do not survive. In this paper, we propose a method aimed at detecting and localising brain cancer through the analysis of magnetic resonance images. The proposed method exploits deep learning, in particular convolutional neural networks and class activation mapping, to provide explainability by highlighting the areas of the medical image related to brain cancer (from the model's point of view). We evaluate the proposed method on 3000 magnetic resonance images from a freely available dataset. The results we obtained are encouraging: we reach an accuracy ranging from 97.83% to 99.67% in brain cancer detection using four different models (VGG16, ResNet50, Alex_Net, and MobileNet), thus showing the effectiveness of the proposed method.
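A hedged sketch of class activation mapping in the Grad-CAM style on a ResNet50 backbone (untrained here, with a random tensor standing in for a preprocessed MRI slice; in practice the network would be fine-tuned on the brain MRI dataset first):

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet50

model = resnet50(weights=None)   # untrained placeholder; a real run would load fine-tuned weights
model.eval()

activations, gradients = {}, {}
def fwd_hook(module, inp, out):
    activations["feat"] = out.detach()
def bwd_hook(module, grad_in, grad_out):
    gradients["feat"] = grad_out[0].detach()

# Hook the last convolutional block so its feature maps can be weighted by their gradients.
model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)              # placeholder for a preprocessed MRI slice
logits = model(x)
target_class = int(logits.argmax(dim=1))
logits[0, target_class].backward()

# Grad-CAM: global-average-pool the gradients to get per-channel weights,
# take a weighted sum of the feature maps, then ReLU and upsample to image size.
weights = gradients["feat"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["feat"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
print(cam.shape)   # (1, 1, 224, 224) heatmap highlighting regions driving the prediction
```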


Subject(s)
Brain Neoplasms , Brain , Humans , Brain Neoplasms/diagnostic imaging , Aggression , Neural Networks, Computer , Records