Results 1 - 20 of 398

1.
Brief Bioinform ; 25(6)2024 Sep 23.
Article in English | MEDLINE | ID: mdl-39325460

ABSTRACT

Drug repurposing has emerged as an effective and efficient strategy to identify new treatments for a variety of diseases. One of the most effective approaches for discovering potential new drug candidates involves the utilization of Knowledge Graphs (KGs). This review comprehensively explores some of the most prominent KGs, detailing their structure, data sources, and how they facilitate the repurposing of drugs. In addition to KGs, this paper delves into various artificial intelligence techniques that enhance the process of drug repurposing. These methods not only accelerate the identification of viable drug candidates but also improve the precision of predictions by leveraging complex datasets and advanced algorithms. Furthermore, the importance of explainability in drug repurposing is emphasized. Explainability methods are crucial as they provide insights into the reasoning behind AI-generated predictions, thereby increasing the trustworthiness and transparency of the repurposing process. We also discuss several techniques that can be employed to validate these predictions, ensuring that they are both reliable and understandable.


Subjects
Drug Repositioning; Drug Repositioning/methods; Humans; Algorithms; Artificial Intelligence; Databases, Factual; Computational Biology/methods
2.
Proc Natl Acad Sci U S A ; 120(41): e2301842120, 2023 10 10.
Article in English | MEDLINE | ID: mdl-37782786

ABSTRACT

One of the most troubling trends in criminal investigations is the growing use of "black box" technology, in which law enforcement relies on artificial intelligence (AI) models or algorithms that are either too complex for people to understand or that simply conceal how they function. In criminal cases, black box systems have proliferated in forensic areas such as DNA mixture interpretation, facial recognition, and recidivism risk assessments. The champions and critics of AI argue, mistakenly, that we face a catch-22: black box AI may not be understandable by people, but, they assume, it produces more accurate forensic evidence. In this article, we question this assertion, which has so powerfully affected judges, policymakers, and academics. We describe a mature body of computer science research showing how "glass box" AI, designed to be interpretable, can be more accurate than black box alternatives. Indeed, black box AI performs predictably worse in settings like the criminal system. Debunking the black box performance myth has implications for forensic evidence, constitutional criminal procedure rights, and legislative policy. Absent some compelling, or even credible, government interest in keeping AI as a black box, and given the constitutional rights and public safety interests at stake, we argue that a substantial burden rests on the government to justify black box AI in criminal cases. We conclude by calling for judicial rulings and legislation to safeguard a right to interpretable forensic AI.


Subjects
Artificial Intelligence; Criminals; Humans; Forensic Medicine; Law Enforcement; Algorithms
3.
Neuroimage ; 298: 120771, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39111376

ABSTRACT

Modeling dynamic interactions among network components is crucial to uncovering the evolution mechanisms of complex networks. Recently, spatio-temporal graph learning methods have achieved noteworthy results in characterizing the dynamic changes of inter-node relations (INRs). However, challenges remain: the spatial neighborhood of an INR is underexploited, and the spatio-temporal dependencies in the dynamic changes of INRs are overlooked, ignoring the influence of historical states and local information. In addition, model explainability has been understudied. To address these issues, we propose an explainable spatio-temporal graph evolution learning (ESTGEL) model of the dynamic evolution of INRs. Specifically, an edge attention module is proposed to exploit the spatial neighborhood of an INR at multiple levels, i.e., through a hierarchy of nested subgraphs derived from decomposing the initial node-relation graph. Subsequently, a dynamic relation learning module is proposed to capture the spatio-temporal dependencies of INRs. The INRs are then used as adjacency information to improve the node representations, resulting in a comprehensive delineation of the network's dynamic evolution. Finally, the approach is validated with real data from a brain development study. Experimental results on dynamic brain network analysis reveal that brain functional networks transition from dispersed to more convergent and modular structures throughout development. Significant changes are observed in the dynamic functional connectivity (dFC) associated with functions including emotional control, decision-making, and language processing.


Assuntos
Encéfalo , Rede Nervosa , Humanos , Encéfalo/crescimento & desenvolvimento , Encéfalo/fisiologia , Encéfalo/diagnóstico por imagem , Rede Nervosa/crescimento & desenvolvimento , Rede Nervosa/fisiologia , Rede Nervosa/diagnóstico por imagem , Aprendizado de Máquina , Imageamento por Ressonância Magnética/métodos , Conectoma/métodos
4.
Cancer ; 130(12): 2101-2107, 2024 Jun 15.
Article in English | MEDLINE | ID: mdl-38554271

ABSTRACT

Modern artificial intelligence (AI) tools built on high-dimensional patient data are reshaping oncology care, helping to improve goal-concordant care, decrease cancer mortality rates, and increase workflow efficiency and scope of care. However, data-related concerns and human biases that seep into algorithms during development and post-deployment phases affect performance in real-world settings, limiting the utility and safety of AI technology in oncology clinics. To this end, the authors review the current potential and limitations of predictive AI for cancer diagnosis and prognostication, as well as of generative AI, specifically modern chatbots, which interface with patients and clinicians. They conclude the review with a discussion of ongoing challenges and regulatory opportunities in the field.


Assuntos
Inteligência Artificial , Oncologia , Neoplasias , Humanos , Oncologia/métodos , Neoplasias/terapia , Neoplasias/diagnóstico , Algoritmos , Prognóstico
5.
Brief Bioinform ; 23(3)2022 05 13.
Article in English | MEDLINE | ID: mdl-35511112

ABSTRACT

MOTIVATION: Drug-drug interactions (DDIs) occur when drugs are combined. Identifying potential DDIs helps us study the mechanisms behind combination medication and adverse reactions so that side effects can be avoided. Although many artificial intelligence methods predict and mine potential DDIs, they ignore the 3D structural information of drug molecules and do not fully consider the contribution of molecular substructures to DDIs. RESULTS: We proposed a new deep learning architecture, 3DGT-DDI, a model composed of a 3D graph neural network and a pre-trained text attention mechanism. We used 3D molecular graph structure and position information to enhance the model's ability to predict DDIs, which enabled us to explore in depth the effect of drug substructures on the DDI relationship. The results showed that 3DGT-DDI outperforms other state-of-the-art baselines, achieving an 84.48% macro F1 score on the DDIExtraction 2013 shared task dataset. Our 3D graph model also demonstrated its performance and explainability through weight visualization on the DrugBank dataset. 3DGT-DDI can help us better understand and identify potential DDIs, thereby helping to avoid the side effects of drug combinations. AVAILABILITY: The source code and data are available at https://github.com/hehh77/3DGT-DDI.


Assuntos
Inteligência Artificial , Efeitos Colaterais e Reações Adversas Relacionados a Medicamentos , Interações Medicamentosas , Humanos , Redes Neurais de Computação , Software
6.
Hum Reprod ; 39(2): 285-292, 2024 Feb 01.
Article in English | MEDLINE | ID: mdl-38061074

ABSTRACT

With the exponential growth of computing power and the accumulation of embryo image data in recent years, artificial intelligence (AI) is starting to be utilized for embryo selection in IVF. Among different AI technologies, machine learning (ML) has the potential to reduce operator-related subjectivity in embryo selection while saving labor time on this task. However, as modern deep learning (DL) techniques, a subcategory of ML, are increasingly used, their black-box nature attracts growing concern owing to well-recognized issues regarding the lack of interpretability. Currently, there is a lack of randomized controlled trials to confirm the effectiveness of such black-box models. Recently, emerging evidence has shown underperformance of black-box models compared with more interpretable traditional ML models in embryo selection. Meanwhile, glass-box AI, such as interpretable ML, is increasingly being promoted across a wide range of fields, supported by its ethical advantages and technical feasibility. In this review, we propose a novel classification system for traditional and AI-driven systems from an embryology standpoint, defining different morphology-based selection approaches with an emphasis on subjectivity, explainability, and interpretability.


Assuntos
Inteligência Artificial , Aprendizado de Máquina , Humanos , Embrião de Mamíferos
7.
J Nucl Cardiol ; 38: 101889, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38852900

ABSTRACT

BACKGROUND: We developed an explainable deep-learning (DL)-based classifier to identify flow-limiting coronary artery disease (CAD) from O-15 H2O perfusion positron emission tomography/computed tomography (PET/CT) and coronary CT angiography (CTA) imaging. The classifier uses polar map images together with numerical data and visualizes the data findings. METHODS: A DL model was implemented and evaluated on 138 individuals, consisting of a combined image- and data-based classifier considering 35 clinical, CTA, and PET variables. Data from invasive coronary angiography were used as the reference. Performance was compared with clinical classification using accuracy (ACC), area under the receiver operating characteristic curve (AUC), F1 score (F1S), sensitivity (SEN), specificity (SPE), precision (PRE), net benefit, and Cohen's kappa. Statistical testing was conducted using McNemar's test. RESULTS: The DL model had a median ACC = 0.8478, AUC = 0.8481, F1S = 0.8293, SEN = 0.8500, SPE = 0.8846, and PRE = 0.8500. Improved detection of true-positive and false-negative cases, increased net benefit at thresholds up to 34%, and comparable Cohen's kappa were seen, reaching performance similar to clinical reading. Statistical testing revealed no significant differences between the DL model and clinical reading. CONCLUSIONS: The combined DL model is a feasible and effective method for detecting CAD, and it can highlight important data findings for individual cases in an interpretable manner.
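A minimal sketch of how the reported classifier metrics and the paired McNemar comparison against clinical reading could be computed. This is not the authors' code; the label and prediction arrays below are hypothetical placeholders, not study data.

```python
# Hypothetical sketch: classifier metrics plus McNemar's test (DL model vs. clinical reading).
import numpy as np
from sklearn.metrics import (accuracy_score, roc_auc_score, f1_score,
                             recall_score, precision_score, confusion_matrix)
from statsmodels.stats.contingency_tables import mcnemar

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])               # reference (placeholder)
dl_prob = np.array([.9, .2, .8, .6, .4, .1, .7, .3, .85, .25])   # DL probabilities (placeholder)
dl_pred = (dl_prob >= 0.5).astype(int)
clin_pred = np.array([1, 0, 1, 0, 0, 0, 1, 1, 1, 0])             # clinical reading (placeholder)

acc = accuracy_score(y_true, dl_pred)
auc = roc_auc_score(y_true, dl_prob)
f1s = f1_score(y_true, dl_pred)
sen = recall_score(y_true, dl_pred)                              # sensitivity
tn, fp, fn, tp = confusion_matrix(y_true, dl_pred).ravel()
spe = tn / (tn + fp)                                             # specificity
pre = precision_score(y_true, dl_pred)

# McNemar's test on the paired correctness of the two readers
dl_ok, clin_ok = dl_pred == y_true, clin_pred == y_true
table = [[np.sum(dl_ok & clin_ok), np.sum(dl_ok & ~clin_ok)],
         [np.sum(~dl_ok & clin_ok), np.sum(~dl_ok & ~clin_ok)]]
print(acc, auc, f1s, sen, spe, pre, mcnemar(table, exact=True).pvalue)
```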


Assuntos
Doença da Artéria Coronariana , Aprendizado Profundo , Radioisótopos de Oxigênio , Tomografia por Emissão de Pósitrons combinada à Tomografia Computadorizada , Humanos , Masculino , Feminino , Pessoa de Meia-Idade , Doença da Artéria Coronariana/diagnóstico por imagem , Idoso , Tomografia por Emissão de Pósitrons combinada à Tomografia Computadorizada/métodos , Angiografia Coronária/métodos , Imagem de Perfusão do Miocárdio/métodos , Angiografia por Tomografia Computadorizada/métodos , Isquemia Miocárdica/diagnóstico por imagem , Sensibilidade e Especificidade
8.
J Biomed Inform ; 156: 104681, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38960273

ABSTRACT

The multimorbidity problem involves the identification and mitigation of adverse interactions that occur when multiple computer-interpretable guidelines are applied concurrently to develop a treatment plan for a patient diagnosed with multiple diseases. Solving this problem requires decision-support approaches that physicians find difficult to comprehend. As such, the rationale for treatment plans generated by these approaches needs to be provided. OBJECTIVE: To develop an explainability component for an automated planning-based approach to the multimorbidity problem, and to assess the fidelity and interpretability of the generated explanations using a clinical case study. METHODS: The explainability component leverages the task-network model for representing computer-interpretable guidelines. It generates post-hoc explanations composed of three aspects that answer why specific clinical actions are in a treatment plan, why specific revisions were applied, and how factors such as medication cost and patient adherence influence the selection of specific actions. The explainability component is implemented as part of MitPlan, where we revised our planning-based approach to support explainability. We developed an evaluation instrument based on the system causability scale and other vetted surveys to evaluate the fidelity and interpretability of its explanations using a two-dimensional comparison study design. RESULTS: The explainability component was implemented for MitPlan and tested in the context of a clinical case study. The fidelity and interpretability of the generated explanations were assessed in a physician-focused evaluation study involving 21 participants from two specialties and two levels of experience. Results show that the explanations provided by the explainability component in MitPlan are of acceptable fidelity and interpretability, and that the clinical justification of the actions in a treatment plan is important to physicians. CONCLUSION: We created an explainability component that enriches an automated planning-based approach to solving the multimorbidity problem with meaningful explanations for the actions in a treatment plan. This component relies on the task-network model to represent computer-interpretable guidelines and as such can be ported to other approaches that also use the task-network model representation. Our evaluation study demonstrated that explanations supporting a physician's understanding of the clinical reasons for the actions in a treatment plan are useful and important.


Assuntos
Multimorbidade , Humanos , Sistemas de Apoio a Decisões Clínicas , Planejamento de Assistência ao Paciente
9.
J Biomed Inform ; 154: 104650, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38701887

ABSTRACT

BACKGROUND: Partitioning diseases into distinct subtypes is crucial for their study and for effective treatment strategies. The Open Targets Platform (OT) integrates biomedical, genetic, and biochemical datasets to empower disease ontologies, classifications, and potential gene targets. Nevertheless, many disease annotations are incomplete, requiring laborious expert medical input. This challenge is especially pronounced for rare and orphan diseases, where resources are scarce. METHODS: We present a machine learning approach to identifying diseases with potential subtypes, using the approximately 23,000 diseases documented in OT. We derive novel features for predicting diseases with subtypes using direct evidence. Machine learning models were applied to analyze feature importance and to evaluate predictive performance for discovering both known and novel disease subtypes. RESULTS: Our model achieves a high (89.4%) ROC AUC (area under the receiver operating characteristic curve) in identifying known disease subtypes. We integrated pre-trained deep learning language models and showed their benefits. Moreover, we identify 515 disease candidates predicted to possess previously unannotated subtypes. CONCLUSIONS: Our models can partition diseases into distinct subtypes. This methodology enables a robust, scalable approach to improving knowledge-based annotations and a comprehensive assessment of disease ontology tiers. Our candidates are attractive targets for further study and personalized medicine, potentially aiding in the unveiling of new therapeutic indications for sought-after targets.
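A rough illustration of the general recipe described above (train a classifier on engineered disease features, report ROC AUC, and inspect feature importance). The data and feature indices are synthetic placeholders, not Open Targets features.

```python
# Hypothetical sketch: binary "has subtypes" classifier with ROC AUC and feature ranking.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=2000, n_features=12, weights=[0.8, 0.2],
                           random_state=0)  # y = 1: disease has annotated subtypes (simulated)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("ROC AUC:", round(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]), 3))

# Rank features by impurity-based importance
for rank, idx in enumerate(np.argsort(clf.feature_importances_)[::-1][:5], 1):
    print(rank, f"feature_{idx}", round(clf.feature_importances_[idx], 3))
```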


Assuntos
Aprendizado de Máquina , Humanos , Doença/classificação , Curva ROC , Biologia Computacional/métodos , Algoritmos , Aprendizado Profundo
10.
BMC Med Ethics ; 25(1): 104, 2024 Oct 01.
Article in English | MEDLINE | ID: mdl-39354512

ABSTRACT

BACKGROUND: Despite continuous performance improvements, especially in clinical contexts, a major challenge of Artificial Intelligence-based Decision Support Systems (AI-DSS) remains their degree of epistemic opacity. The conditions for, and the solutions to, the justified use of this occasionally unexplainable technology in healthcare are an active field of research. In March 2024, the European Union agreed upon the Artificial Intelligence Act (AIA), requiring medical AI-DSS to be ad-hoc explainable or to use post-hoc explainability methods. The ethical debate does not yet seem to have settled on this requirement. This systematic review aims to outline and categorize the positions and arguments in the ethical debate. METHODS: We conducted a literature search on PubMed, BASE, and Scopus for English-language, peer-reviewed scientific publications from 2016 to 2024. The inclusion criterion was that a publication state explicit explainability requirements for AI-DSS in healthcare and provide reasons for them. Non-domain-specific documents, as well as surveys, reviews, and meta-analyses, were excluded. The ethical requirements for explainability outlined in the documents were qualitatively analyzed with respect to the arguments for the requirement of explainability and the required level of explainability. RESULTS: The literature search yielded 1662 documents; 44 documents were included in the review after eligibility screening of the remaining full texts. Our analysis showed that 17 records argue in favor of requiring explainable AI methods (xAI) or ad-hoc explainable models, providing 9 categories of arguments. The other 27 records argued against a general requirement, providing 11 categories of arguments. We also found that 14 works advocate context-dependent levels of explainability, as opposed to 30 documents arguing for context-independent, absolute standards. CONCLUSIONS: The systematic review of reasons shows no clear agreement on the requirement of post-hoc explainability methods or ad-hoc explainable models for AI-DSS in healthcare. The arguments found in the debate were referenced and responded to from different perspectives, demonstrating an interactive discourse. Policymakers and researchers should watch the development of the debate closely. Conversely, ethicists should be well informed by empirical and technical research, given the frequency of advancements in the field.


Assuntos
Inteligência Artificial , Atenção à Saúde , Humanos , Inteligência Artificial/ética , Atenção à Saúde/ética , Sistemas de Apoio a Decisões Clínicas/ética , União Europeia
11.
BMC Med Inform Decis Mak ; 24(Suppl 4): 318, 2024 Oct 29.
Article in English | MEDLINE | ID: mdl-39472842

ABSTRACT

BACKGROUND: Amyotrophic lateral sclerosis (ALS) is a progressive neurodegenerative disease that results in death within a short time span (3-5 years). One of the major challenges in treating ALS is its highly heterogeneous disease progression and the lack of effective prognostic tools to forecast it. The main aim of this study was therefore to test the feasibility of predicting relevant clinical outcomes that characterize the progression of ALS with a two-year prediction horizon, via artificial intelligence techniques applied to routine visit data. METHODS: Three classification problems were considered: predicting death (binary problem), predicting death or percutaneous endoscopic gastrostomy (PEG) (multiclass problem), and predicting death or non-invasive ventilation (NIV) (multiclass problem). Two supervised learning models, a logistic regression (LR) and a deep learning multilayer perceptron (MLP), were trained while ensuring technical robustness and reproducibility. Moreover, to provide insight into model explainability and result interpretability, model coefficients for LR and Shapley values for both LR and MLP were used to characterize the relationship between each variable and the outcome. RESULTS: On the one hand, predicting death was successful, as both models yielded F1 scores and accuracy well above 0.7. The model explainability analysis performed for this outcome allowed for an understanding of how the different methodological approaches consider the input variables when performing the prediction. On the other hand, predicting death alongside PEG or NIV proved to be much more challenging (F1 scores and accuracy in the 0.4-0.6 range). CONCLUSIONS: Predicting death due to ALS proved to be feasible. However, predicting PEG or NIV in a multiclass fashion proved to be unfeasible with these data, regardless of the complexity of the methodological approach. The observed results suggest a potential ceiling on the amount of information extractable from the database, e.g., due to the intrinsic difficulty of the prediction tasks at hand, or to the absence of crucial predictors that are not currently collected during routine practice.
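A minimal sketch of the two supervised models and the coefficient-based explainability mentioned above. The data are synthetic placeholders (not ALS registry data), and the three simulated classes only stand in for the study's outcome categories.

```python
# Hypothetical sketch: LR and MLP on a multiclass outcome, with macro F1/accuracy
# and LR coefficients as a simple per-class explainability view.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import f1_score, accuracy_score

X, y = make_classification(n_samples=1500, n_features=20, n_informative=8,
                           n_classes=3, random_state=0)  # simulated outcome classes
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

lr = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)
mlp = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                    random_state=0).fit(X_tr, y_tr)

for name, model in [("LR", lr), ("MLP", mlp)]:
    pred = model.predict(X_te)
    print(name, accuracy_score(y_te, pred), f1_score(y_te, pred, average="macro"))

# LR coefficients: one weight per (class, feature) pair, showing how each
# standardized variable pushes the prediction toward each class.
print("LR coefficient matrix shape (classes x features):", lr.coef_.shape)
```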


Assuntos
Esclerose Lateral Amiotrófica , Progressão da Doença , Estudos de Viabilidade , Aprendizado de Máquina , Esclerose Lateral Amiotrófica/terapia , Humanos , Masculino , Pessoa de Meia-Idade , Feminino , Idoso , Prognóstico , Ventilação não Invasiva
12.
BMC Med Inform Decis Mak ; 24(1): 317, 2024 Oct 29.
Article in English | MEDLINE | ID: mdl-39472925

ABSTRACT

BACKGROUND: Ageing is one of the most important challenges in our society. Evaluating how one is ageing is important in many respects, from giving personalized recommendations to providing insight for long-term care eligibility. Machine learning can be utilized for that purpose; however, user reservations towards "black-box" predictions call for increased transparency and explainability of results. This study aimed to explore the potential of developing a machine learning-based healthy ageing scale that provides explainable results that can be trusted and understood by informal carers. METHODS: In this study, we used data from 696 older adults collected via personal field interviews as part of independent research. Exploratory factor analysis was used to find candidate healthy ageing aspects. A web annotation application was developed for visualization of the key aspects. Key aspects were selected by gerontologists, who then used the web annotation application to evaluate healthy ageing for each older adult on a Likert scale. Logistic Regression, Decision Tree Classifier, Random Forest, KNN, SVM, and XGBoost were used for multi-class machine learning. AUC OvO, AUC OvR, F1, Precision, and Recall were used for evaluation. Finally, SHAP was applied to the best model's predictions to make them explainable. RESULTS: The experimental results show that human annotations of healthy ageing can be modelled using machine learning, with XGBoost showing superior performance among the tested algorithms. The use of XGBoost resulted in a 0.92 macro-averaged AUC OvO and a 0.76 macro-averaged F1. SHAP was applied to generate local explanations for predictions, showing how each feature influences the prediction. CONCLUSION: The resulting explainable predictions are a step toward practical implementation of the scale in decision support systems. The development of such a decision support system incorporating an explainable model could reduce user reluctance towards the utilization of AI in healthcare and provide explainable and trusted insights to informal carers or healthcare providers as a basis for shaping tangible actions to improve ageing. Furthermore, the cooperation with gerontology specialists throughout the process also indicates that expert knowledge is integrated into the model.
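A minimal sketch of the XGBoost-plus-SHAP workflow described above, with macro-averaged AUC OvO and F1 as in the evaluation. The healthy-ageing annotations are simulated placeholders; the snippet assumes the xgboost and shap packages are installed.

```python
# Hypothetical sketch: multi-class XGBoost, macro AUC OvO / macro F1, and local SHAP explanations.
import numpy as np
import shap
from xgboost import XGBClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, f1_score

X, y = make_classification(n_samples=696, n_features=15, n_informative=8,
                           n_classes=3, random_state=0)  # simulated Likert-style classes
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

model = XGBClassifier(n_estimators=300, max_depth=4, eval_metric="mlogloss",
                      random_state=0).fit(X_tr, y_tr)
proba = model.predict_proba(X_te)
print("macro AUC OvO:", roc_auc_score(y_te, proba, multi_class="ovo", average="macro"))
print("macro F1:", f1_score(y_te, model.predict(X_te), average="macro"))

# Local explanation for one prediction: per-class, per-feature SHAP contributions
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_te[:1])
print(np.shape(shap_values))
```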


Assuntos
Envelhecimento Saudável , Aprendizado de Máquina , Humanos , Idoso , Feminino , Masculino , Idoso de 80 Anos ou mais , Pessoa de Meia-Idade
13.
Risk Anal ; 2024 Sep 20.
Article in English | MEDLINE | ID: mdl-39301866

ABSTRACT

There is growing interest in leveraging advanced analytics, including artificial intelligence (AI) and machine learning (ML), for disaster risk analysis (RA) applications. These emerging methods offer unprecedented abilities to assess risk in settings where threats can emerge and transform quickly, by relying on "learning" from datasets. There is a need to understand these emerging methods in comparison with the more established set of risk assessment methods commonly used in practice. The existing methods are generally accepted by the risk community and are grounded in use across various risk application areas. The next frontier in RA with emerging methods is to develop insights for evaluating the compatibility of those risk methods with more recent advancements in AI/ML, particularly with consideration of usefulness, trust, explainability, and other factors. This article leverages inputs from RA and AI experts to investigate the compatibility of various risk assessment methods, including both established methods and an example of a commonly used AI-based method, for disaster RA applications. It draws on empirical evidence from expert perspectives to support key insights on those methods and their compatibility. This article will be of interest to researchers and practitioners in risk-analytics disciplines who leverage AI/ML methods.

14.
Sensors (Basel) ; 24(18)2024 Sep 23.
Article in English | MEDLINE | ID: mdl-39338875

ABSTRACT

Road surface quality is essential for driver comfort and safety, making it crucial to monitor pavement conditions and detect defects in real time. However, the diversity of defects and the complexity of ambient conditions make it challenging to develop an effective and robust classification and detection algorithm. In this study, we adopted a semi-supervised learning approach to train ResNet-18 for image feature retrieval and the subsequent classification and detection of pavement defects. The resulting feature embedding vectors from image patches were retrieved, concatenated, and randomly sampled to model a multivariate normal distribution based on the one-class (defect-free) training pavement image dataset. The calibration pavement image dataset was used to determine the defect score threshold from the receiver operating characteristic curve, with the Mahalanobis distance employed as the metric for evaluating differences between normal and defective pavement images. Finally, a heatmap derived from the defect score map for the testing dataset was overlaid on the original pavement images to provide insight into the network's decisions and to guide measures for improving its performance. The results demonstrate that the model's classification accuracy improved from 0.868 to 0.887 when the pavement image data were expanded and augmented based on the analysis of the heatmaps.
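A minimal sketch of the defect-scoring step described above: fit a multivariate normal to embeddings of defect-free patches, score new patches by Mahalanobis distance, and pick a threshold from the ROC curve on a calibration set. The embeddings here are random placeholders, not ResNet-18 features.

```python
# Hypothetical sketch: one-class Mahalanobis defect score with an ROC-derived threshold.
import numpy as np
from numpy.linalg import inv
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
train_emb = rng.normal(size=(500, 64))                       # normal-pavement embeddings (placeholder)
mu = train_emb.mean(axis=0)
cov = np.cov(train_emb, rowvar=False) + 1e-6 * np.eye(64)    # regularized covariance
cov_inv = inv(cov)

def defect_score(emb):
    """Mahalanobis distance of each embedding from the normal-pavement distribution."""
    d = emb - mu
    return np.sqrt(np.einsum("ij,jk,ik->i", d, cov_inv, d))

# Calibration set: 0 = normal, 1 = defect (placeholder embeddings and labels)
calib_emb = np.vstack([rng.normal(size=(100, 64)), rng.normal(loc=1.0, size=(100, 64))])
calib_lbl = np.array([0] * 100 + [1] * 100)
fpr, tpr, thr = roc_curve(calib_lbl, defect_score(calib_emb))
threshold = thr[np.argmax(tpr - fpr)]                        # e.g. Youden's J criterion
print("defect-score threshold:", round(threshold, 2))
```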

15.
Sensors (Basel) ; 24(12)2024 Jun 07.
Article in English | MEDLINE | ID: mdl-38931500

ABSTRACT

Cybersecurity has become a major concern in the modern world due to our heavy reliance on cyber systems. Advanced automated systems utilize many sensors for intelligent decision-making, and any malicious activity affecting these sensors could potentially lead to a system-wide collapse. To ensure safety and security, it is essential to have a reliable system that can automatically detect and prevent any malicious activity, and modern detection systems are built on machine learning (ML) models. Most often, the dataset generated from the sensor nodes for detecting malicious activity is highly imbalanced because the Malicious class is significantly smaller than the Non-Malicious class. To address this issue, we proposed a hybrid data balancing technique that combines cluster-based undersampling with the Synthetic Minority Oversampling Technique (SMOTE). We also proposed an ensemble machine learning model that outperforms other standard ML models, achieving 99.7% accuracy. Additionally, we identified the critical features that pose security risks to the sensor nodes through an extensive explainability analysis of the proposed machine learning model. In brief, we explored a hybrid data balancing method, developed a robust ensemble machine learning model for detecting malicious sensor nodes, and conducted a thorough analysis of the model's explainability.
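A minimal sketch of the hybrid balancing plus ensemble idea, using imbalanced-learn and scikit-learn. The sensor dataset is simulated, and the balancing ratios and base learners are illustrative assumptions, not the paper's exact configuration.

```python
# Hypothetical sketch: SMOTE oversampling + cluster-based undersampling feeding a soft-voting ensemble.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from imblearn.pipeline import Pipeline
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import ClusterCentroids

# Highly imbalanced "malicious vs. non-malicious" sensor data (placeholder)
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

ensemble = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("gb", GradientBoostingClassifier(random_state=0)),
                ("lr", LogisticRegression(max_iter=1000))],
    voting="soft")

# Oversample the minority class partway, then undersample the majority via cluster centroids
pipe = Pipeline([("smote", SMOTE(sampling_strategy=0.5, random_state=0)),
                 ("cc", ClusterCentroids(random_state=0)),
                 ("clf", ensemble)])
pipe.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, pipe.predict(X_te)))
```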

16.
J Environ Manage ; 351: 119866, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38147770

ABSTRACT

Loktak Lake, one of the largest freshwater lakes in Manipur, India, is critical for the eco-hydrology and economy of the region but faces deteriorating water quality due to urbanisation, anthropogenic activities, and domestic sewage. Addressing the urgent need for effective pollution management, this study aims to assess the lake's water quality status using the water quality index (WQI) and to develop advanced machine learning (ML) tools for WQI assessment and ML model interpretation to improve pollution management decision-making. The WQI was computed using entropy-based arithmetic weighting, and three ML models - Gradient Boosting Machine (GBM), Random Forest (RF), and Deep Neural Network (DNN) - were optimised using a grid search algorithm in the H2O Application Programming Interface (API). These models were validated with various metrics and interpreted globally and locally via Partial Dependence Plots (PDP), Accumulated Local Effects (ALE), and SHapley Additive exPlanations (SHAP). The results show a WQI range of 72.38-100, with 52.7% of samples categorised as very poor. The RF model outperformed GBM and DNN, showing the highest accuracy and generalisation ability, reflected in superior R2 values (0.97 in training, 0.9 in testing) and a lower root mean square error (RMSE). RF's minimal margin of error and reliable feature interpretation contrasted with DNN's larger margin of error and inconsistency, which limited its usefulness for decision-making. Turbidity was found to be a critical predictive feature in all models, significantly influencing the WQI, with other variables such as pH and temperature also playing an important role. SHAP dependence plots illustrated the direct relationship between key water quality parameters such as turbidity and the WQI predictions. The novelty of this study lies in its comprehensive approach to the evaluation and interpretation of ML models for WQI estimation, which provides a nuanced understanding of water quality dynamics in Loktak Lake. By identifying the most effective ML models and the key predictive features, this study provides invaluable insights for water quality management and paves the way for targeted strategies to monitor and improve water quality in this vital freshwater ecosystem.
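A minimal sketch of the model-building and global-interpretation steps (not the H2O grid-search workflow used in the study): fit a Random Forest regressor to a few water-quality features, report R2/RMSE, and draw a partial dependence curve for turbidity. The data, feature names, and WQI formula below are placeholders, not Loktak Lake measurements.

```python
# Hypothetical sketch: RF regression for WQI with partial dependence on turbidity.
# Plotting requires matplotlib (a soft dependency of sklearn.inspection displays).
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error
from sklearn.inspection import PartialDependenceDisplay

rng = np.random.default_rng(0)
df = pd.DataFrame({"turbidity": rng.uniform(1, 200, 800),
                   "pH": rng.uniform(6, 9, 800),
                   "temperature": rng.uniform(15, 32, 800)})
# Simulated WQI response (illustrative only)
wqi = 70 + 0.15 * df["turbidity"] - 2 * (df["pH"] - 7) ** 2 + rng.normal(0, 3, 800)

X_tr, X_te, y_tr, y_te = train_test_split(df, wqi, random_state=0)
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
pred = rf.predict(X_te)
print("R2:", round(r2_score(y_te, pred), 3),
      "RMSE:", round(np.sqrt(mean_squared_error(y_te, pred)), 3))

# Global interpretation: partial dependence of predicted WQI on turbidity
PartialDependenceDisplay.from_estimator(rf, X_te, ["turbidity"])
```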


Assuntos
Aprendizado Profundo , Qualidade da Água , Lagos , Monitoramento Ambiental/métodos , Ecossistema , Índia
17.
J Med Syst ; 48(1): 25, 2024 Feb 23.
Article in English | MEDLINE | ID: mdl-38393660

ABSTRACT

Precise neurosurgical guidance is critical for successful brain surgeries and plays a vital role in all phases of image-guided neurosurgery (IGN). Neuronavigation software enables real-time tracking of surgical tools, ensuring their presentation with high precision in relation to a virtual patient model. Therefore, this work focuses on the development of a novel multimodal IGN system, leveraging deep learning and explainable AI to enhance brain tumor surgery outcomes. The study establishes the clinical and technical requirements of the system for brain tumor surgeries. NeuroIGN adopts a modular architecture, including brain tumor segmentation, patient registration, and explainable output prediction, and integrates open-source packages into an interactive neuronavigational display. The NeuroIGN system components underwent validation and evaluation in both laboratory and simulated operating room (OR) settings. Experimental results demonstrated its accuracy in tumor segmentation and the success of ExplainAI in increasing the trust of medical professionals in deep learning. The proposed system was successfully assembled and set up within 11 min in a pre-clinical OR setting with a tracking accuracy of 0.5 (± 0.1) mm. NeuroIGN was also evaluated as highly useful, with a high frame rate (19 FPS) and real-time ultrasound imaging capabilities. In conclusion, this paper not only describes the development of an open-source multimodal IGN system but also demonstrates the innovative application of deep learning and explainable AI algorithms in enhancing neuronavigation for brain tumor surgeries. By seamlessly integrating pre- and intra-operative patient image data with cutting-edge interventional devices, our experiments underscore the potential for deep learning models to improve the surgical treatment of brain tumors and long-term post-operative outcomes.


Assuntos
Neoplasias Encefálicas , Cirurgia Assistida por Computador , Humanos , Neoplasias Encefálicas/diagnóstico por imagem , Neoplasias Encefálicas/cirurgia , Neoplasias Encefálicas/patologia , Neuronavegação/métodos , Cirurgia Assistida por Computador/métodos , Procedimentos Neurocirúrgicos/métodos , Ultrassonografia , Imageamento por Ressonância Magnética/métodos
18.
Hum Brain Mapp ; 44(15): 5113-5124, 2023 10 15.
Article in English | MEDLINE | ID: mdl-37647214

ABSTRACT

Diffusion tensor imaging (DTI) and diffusion kurtosis imaging (DKI) have previously been used to explore white matter changes related to human immunodeficiency virus (HIV) infection. While DTI and DKI suffer from low specificity, the Combined Hindered and Restricted Model of Diffusion (CHARMED) provides additional microstructural specificity. We used these three models to evaluate microstructural differences between 35 HIV-positive patients without neurological impairment and 20 healthy controls who underwent diffusion-weighted imaging with three b-values. While significant group effects were found in all diffusion metrics, CHARMED and DKI analyses uncovered wider involvement (80% vs. 20%) of white matter tracts in HIV infection compared with DTI. In the restricted fraction (FR) analysis, we found significant differences in the left corticospinal tract, middle cerebellar peduncle, right inferior cerebellar peduncle, right corticospinal tract, splenium of the corpus callosum, left superior cerebellar peduncle, pontine crossing tract, left posterior limb of the internal capsule, and left/right medial lemniscus. These tracts are involved in language, motor, equilibrium, behavior, and proprioception, supporting the functional integration that is frequently impaired in HIV-positive individuals. Additionally, we employed a machine learning algorithm (XGBoost) to discriminate HIV-positive patients from healthy controls using DTI and CHARMED metrics on an ROI-wise basis, and the unique contributions to this discrimination were examined using Shapley explanation values. The CHARMED and DKI estimates produced the best performance. Our results suggest that biophysical multishell imaging, combining additional sensitivity and built-in specificity, provides further information about brain microstructural changes in multimodal areas involved in attention, emotional, and memory networks that are often impaired in HIV patients.


Assuntos
Imagem de Tensor de Difusão , Infecções por HIV , Substância Branca , Humanos , Masculino , Feminino , Adulto Jovem , Adulto , Pessoa de Meia-Idade , Idoso , Infecções por HIV/diagnóstico por imagem , Substância Branca/diagnóstico por imagem
19.
Hum Brain Mapp ; 44(7): 2921-2935, 2023 05.
Article in English | MEDLINE | ID: mdl-36852610

ABSTRACT

Brain decoding, which aims to identify brain states from neural activity, is important for cognitive neuroscience and neural engineering. However, existing machine learning methods for fMRI-based brain decoding suffer from either low classification performance or poor explainability. Here, we address this issue by proposing a biologically inspired architecture, the Spatial-Temporal Pyramid Graph Convolutional Network (STpGCN), to capture the spatial-temporal graph representation of functional brain activities. By designing multi-scale spatial-temporal pathways and bottom-up pathways that mimic the information processing and temporal integration in the brain, STpGCN is capable of explicitly utilizing the multi-scale temporal dependency of brain activities via graphs, thereby achieving high brain decoding performance. Additionally, we propose a sensitivity analysis method called BrainNetX to better explain the decoding results by automatically annotating task-related brain regions from a brain-network standpoint. We conducted extensive experiments on fMRI data under 23 cognitive tasks from the Human Connectome Project (HCP) S1200. The results show that STpGCN significantly improves brain decoding performance compared with competing baseline models, and that BrainNetX successfully annotates task-relevant brain regions. Post hoc analysis based on these regions further validates that the hierarchical structure in STpGCN significantly contributes to the explainability, robustness, and generalization of the model. Our methods not only provide insights into information representation in the brain under multiple cognitive tasks but also indicate a bright future for fMRI-based brain decoding.


Assuntos
Conectoma , Imageamento por Ressonância Magnética , Humanos , Imageamento por Ressonância Magnética/métodos , Encéfalo , Conectoma/métodos , Cognição , Aprendizado de Máquina
20.
Clin Transplant ; 37(1): e14845, 2023 01.
Article in English | MEDLINE | ID: mdl-36315983

ABSTRACT

BACKGROUND: Machine learning (ML) is increasingly being applied in cardiology to predict outcomes and assist in clinical decision-making. We sought to develop and validate an ML model for the prediction of mortality after heart transplantation (HT) in adults with congenital heart disease (ACHD). METHODS: The United Network for Organ Sharing (UNOS) database was queried from 2000 to 2020 for ACHD patients who underwent isolated HT. The study cohort was randomly split into derivation (70%) and validation (30%) datasets that were used to train and test a CatBoost ML model. Feature selection was performed using SHapley Additive exPlanations (SHAP). Recipient, donor, procedural, and post-transplant characteristics were tested for their ability to predict mortality. We additionally used SHAP for explainability analysis, as well as for individualized mortality risk assessment. RESULTS: The study cohort included 1033 recipients (median age 34 years, 61% male). At 1 year after HT, there were 205 deaths (19.9%). Out of a total of 49 variables, 10 were selected as highly predictive of 1-year mortality and were used to train the ML model. The area under the curve (AUC) and predictive accuracy were 0.80 and 75.2% for the 1-year model and 0.69 and 74.2% for the 3-year model, respectively. Based on the SHAP analysis, post-HT hemodialysis of the recipient had the strongest relative impact on 1-year mortality after HT, followed by recipient estimated glomerular filtration rate, age, and ischemic time. CONCLUSIONS: The ML models showed satisfactory predictive accuracy for mortality after HT in ACHD and allowed for individualized mortality risk assessment.
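A minimal sketch of the CatBoost-plus-SHAP feature-selection recipe described above. UNOS variables are not reproduced; the features, labels, and the top-10 cutoff are simulated or assumed, and the snippet requires the catboost and shap packages.

```python
# Hypothetical sketch: CatBoost classifier, SHAP-based selection of the top 10 predictors,
# then retraining on the reduced feature set and reporting AUC/accuracy on a 30% hold-out.
import numpy as np
import shap
from catboost import CatBoostClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, accuracy_score

X, y = make_classification(n_samples=1033, n_features=49, n_informative=10,
                           weights=[0.8, 0.2], random_state=0)  # simulated 1-year mortality
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, test_size=0.3,
                                           random_state=0)      # 70/30 split

model = CatBoostClassifier(iterations=500, depth=4, verbose=False,
                           random_seed=0).fit(X_tr, y_tr)

# Rank candidate predictors by mean |SHAP| and keep the top 10
shap_values = shap.TreeExplainer(model).shap_values(X_tr)
top10 = np.argsort(np.abs(shap_values).mean(axis=0))[::-1][:10]

slim = CatBoostClassifier(iterations=500, depth=4, verbose=False,
                          random_seed=0).fit(X_tr[:, top10], y_tr)
proba = slim.predict_proba(X_te[:, top10])[:, 1]
print("AUC:", round(roc_auc_score(y_te, proba), 3),
      "accuracy:", round(accuracy_score(y_te, slim.predict(X_te[:, top10])), 3))
```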


Assuntos
Cardiopatias Congênitas , Insuficiência Cardíaca , Transplante de Coração , Humanos , Masculino , Adulto , Feminino , Medição de Risco , Cardiopatias Congênitas/cirurgia , Aprendizado de Máquina