Results 1 - 2 of 2
1.
Brief Bioinform. 2023 Jul 20;24(4).
Article in English | MEDLINE | ID: mdl-37253690

ABSTRACT

Great efforts have been made to develop precision medicine-based treatments using machine learning. In this field, where the goal is to provide each patient with the optimal treatment based on their medical history and genomic characteristics, excellent predictions alone are not sufficient. The challenge is to understand and trust the model's decisions while also being able to implement it easily. However, one issue with machine learning algorithms, particularly deep learning, is their lack of interpretability. This review compares six machine learning methods to provide guidance for defining interpretability, focusing on accuracy, multi-omics capability, explainability and implementability. Our selection of algorithms includes tree-, regression- and kernel-based methods, chosen for their ease of interpretation by clinicians. We also included two novel explainable methods in the comparison. No significant differences in accuracy were observed between the methods, but accuracy improved when gene expression was used as input instead of mutational status. We concentrated on the current intriguing challenge: model comprehension and ease of use. Our comparison suggests that the tree-based methods are the most interpretable of those tested.
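Why tree-based methods rank as the most interpretable can be illustrated with a minimal sketch: the decision rule of a (depth-one) tree is itself the explanation a clinician reads. The gene names, expression values and threshold below are synthetic illustrations, not data from the review.

```python
# Hedged sketch: interpretability of tree-based models.
# A one-node "decision stump" -- its readable rule IS the explanation,
# unlike a deep network whose decision path is opaque.

def stump_predict(expression, gene, threshold):
    """Classify a patient from one gene-expression rule.

    expression: dict mapping gene name -> expression level.
    The full model is the single rule "expression[gene] > threshold",
    which a clinician can inspect and question directly.
    """
    return "responder" if expression[gene] > threshold else "non-responder"

# Synthetic patient profile (hypothetical gene names and values).
patient = {"EGFR": 7.2, "TP53": 3.1}

label = stump_predict(patient, gene="EGFR", threshold=5.0)
print(label)  # the rule "EGFR > 5.0 -> responder" explains the output
```

Deeper trees extend this to a short chain of such rules, which is still human-readable; that readability is what the comparison above measures as interpretability.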


Subjects
Medical Oncology, Neoplasms, Female, Humans, Male, Neoplasms/genetics, Algorithms, Genomics, Machine Learning
2.
EBioMedicine. 2023 Sep;95:104767.
Article in English | MEDLINE | ID: mdl-37633093

ABSTRACT

BACKGROUND: Although Deep Neural Networks (DNNs) have been successful in predicting the efficacy of cancer drugs, the lack of explainability in their decision-making process is a significant challenge. Previous research proposed mimicking the Gene Ontology structure so that each neuron in the network can be interpreted. However, those approaches require a huge amount of GPU resources, which hinders their extension to genome-wide models. METHODS: We developed SparseGO, a sparse and interpretable neural network, for predicting drug response in cancer cell lines and the drugs' Mechanism of Action (MoA). To ensure model generalization, we trained it on multiple datasets and evaluated its performance using three cross-validation schemes. Its efficiency allows it to be used with gene expression. In addition, SparseGO integrates an eXplainable Artificial Intelligence (XAI) technique, DeepLIFT, with Support Vector Machines to computationally discover the MoA of drugs. FINDINGS: SparseGO's sparse implementation significantly reduced GPU memory usage and training time compared to other methods, allowing it to process gene expression instead of mutations as input data. Using expression improved accuracy and enabled the application of SparseGO to drug repositioning. Furthermore, gene expression allows the prediction of MoA, using 265 drugs for training. It was validated on understudied drugs such as parbendazole and PD153035. INTERPRETATION: SparseGO is an effective XAI method for predicting, and more importantly understanding, drug response. FUNDING: The Accelerator Award Programme funded by Cancer Research UK [C355/A26819], Fundación Científica de la AECC and Fondazione AIRC, Project PIBA_2020_1_0055 funded by the Basque Government and the Synlethal Project (RETOS Investigacion, Spanish Government).
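The ontology-guided sparsity the abstract describes can be sketched as a linear layer in which a neuron stands for a Gene Ontology term and receives input only from the genes annotated to it. The GO terms, annotations, expression values and weights below are hypothetical placeholders, not the actual SparseGO implementation.

```python
# Hedged sketch: an ontology-masked sparse layer (illustrative only).
# Each output neuron is a GO term; its inputs are restricted to the
# genes annotated to that term, so every activation is interpretable.

# Hypothetical annotation mask: GO term -> annotated genes.
go_mask = {
    "GO:0006915": ["TP53", "CASP3"],   # apoptotic process (example)
    "GO:0007165": ["EGFR", "KRAS"],    # signal transduction (example)
}

def sparse_layer(expression, weights, mask):
    """Compute each GO-term neuron from only its annotated genes.

    A dense layer would store |terms| x |genes| weights; storing only
    the annotated (term, gene) edges is where the GPU-memory saving
    of a sparse implementation comes from.
    """
    return {
        term: sum(weights[(term, g)] * expression[g] for g in genes)
        for term, genes in mask.items()
    }

# Synthetic expression profile and edge weights.
expression = {"TP53": 1.0, "CASP3": 2.0, "EGFR": 0.5, "KRAS": 1.5}
weights = {("GO:0006915", "TP53"): 0.3, ("GO:0006915", "CASP3"): 0.2,
           ("GO:0007165", "EGFR"): 0.4, ("GO:0007165", "KRAS"): 0.1}

acts = sparse_layer(expression, weights, go_mask)
# GO:0006915 = 0.3*1.0 + 0.2*2.0 = 0.7
# GO:0007165 = 0.4*0.5 + 0.1*1.5 = 0.35
```

Because each activation is tied to a named biological process, an attribution method such as DeepLIFT can then score which GO-term neurons drive a drug-response prediction, which is the kind of signal the MoA analysis builds on.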


Subjects
Artificial Intelligence, Drug Repositioning, Humans, Cell Line, Gene Ontology, Mutation