Results 1 - 15 of 15
1.
Brief Bioinform ; 24(1)2023 01 19.
Article in English | MEDLINE | ID: mdl-36416116

ABSTRACT

DNA-binding proteins (DBPs) play crucial roles in numerous cellular processes, including nucleotide recognition, transcriptional control and the regulation of gene expression. The majority of existing computational techniques for identifying DBPs are mainly applicable to human and mouse datasets. Even though some models have been tested on Arabidopsis, they produce poor accuracy when applied to other plant species. Therefore, it is imperative to develop an effective computational model for predicting plant DBPs. In this study, we developed a comprehensive computational model for plant-specific DBP identification. Five shallow learning and six deep learning models were initially used for prediction, where the shallow learning methods outperformed the deep learning algorithms. In particular, the support vector machine achieved the highest repeated 5-fold cross-validation performance, with a 94.0% area under the receiver operating characteristic curve (AUC-ROC) and a 93.5% area under the precision-recall curve (AUC-PR). On an independent dataset, the developed approach secured 93.8% AUC-ROC and 94.6% AUC-PR. When compared with existing state-of-the-art tools on an independent dataset, the proposed model achieved much higher accuracy. Overall, the results suggest that the developed computational model is more efficient and reliable than the existing models for the prediction of DBPs in plants. For the convenience of experimental scientists, the developed prediction server PlDBPred is publicly accessible at https://iasri-sg.icar.gov.in/pldbpred/. The source code is also provided at https://iasri-sg.icar.gov.in/pldbpred/source_code.php for prediction on large datasets.
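The two reported metrics, AUC-ROC and AUC-PR, can be computed from raw classifier scores in a few lines of NumPy. A minimal sketch of both (an illustration of the metrics, not the PlDBPred code):

```python
import numpy as np

def auc_roc(y_true, scores):
    """Area under the ROC curve via the rank (Mann-Whitney U) formulation."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[y_true == 1], scores[y_true == 0]
    # Fraction of (positive, negative) pairs ranked correctly; ties count half.
    diff = pos[:, None] - neg[None, :]
    return ((diff > 0).sum() + 0.5 * (diff == 0).sum()) / (len(pos) * len(neg))

def auc_pr(y_true, scores):
    """Area under the precision-recall curve (computed as average precision)."""
    y_true = np.asarray(y_true)
    order = np.argsort(-np.asarray(scores, dtype=float))
    hits = y_true[order] == 1
    tp = np.cumsum(hits)
    precision = tp / np.arange(1, len(hits) + 1)
    # Average precision: mean of the precision at each true-positive rank.
    return precision[hits].sum() / max(hits.sum(), 1)
```

Unlike AUC-ROC, AUC-PR ignores true negatives, which is why it is the more informative of the two on imbalanced protein datasets.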


Subjects
Arabidopsis, DNA-Binding Proteins, Algorithms, Arabidopsis/genetics, Arabidopsis/metabolism, Computational Biology/methods, Computer Simulation, DNA-Binding Proteins/genetics, DNA-Binding Proteins/metabolism, ROC Curve, Software
2.
Brief Bioinform ; 23(6)2022 11 19.
Article in English | MEDLINE | ID: mdl-36215083

ABSTRACT

Antimicrobial peptides (AMPs) have received a great deal of attention given their potential to become a plausible option for fighting multi-drug-resistant bacteria as well as other pathogens. Quantitative sequence-activity models (QSAMs) have been helpful in discovering new AMPs because they allow a large universe of peptide sequences to be explored and help reduce the number of wet-lab experiments. A main aspect in building QSAMs based on shallow learning is determining an optimal set of protein descriptors (features) required to discriminate between sequences with different antimicrobial activities. These features are generally handcrafted from peptide sequence datasets that are labeled with specific antimicrobial activities. However, recent developments have shown that unsupervised approaches can be used to determine features that outperform human-engineered (handcrafted) features. Thus, knowing which of these two approaches contributes to a better classification of AMPs is a fundamental question for designing more accurate models. Here, we present a systematic and rigorous study comparing both types of features. Experimental outcomes show that non-handcrafted features achieve better performance than handcrafted features. However, the experiments also prove that a further improvement in performance is achieved when both types of features are merged. A relevance analysis reveals that non-handcrafted features have higher information content than handcrafted features, while an interaction-based importance analysis reveals that handcrafted features are more important. These findings suggest that the two types of features are complementary. Comparisons with state-of-the-art deep models show that shallow models yield better performance both when fed with non-handcrafted features alone and when fed with non-handcrafted and handcrafted features together.
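Merging the two feature families is, mechanically, a matter of scaling each block and concatenating. A sketch under assumed shapes (the 40/128 dimensions and the random data are hypothetical, not from the paper):

```python
import numpy as np

# Hypothetical shapes: n peptides, 40 handcrafted descriptors (e.g. composition,
# charge) and 128 embedding dimensions from an unsupervised encoder.
rng = np.random.default_rng(0)
n = 200
handcrafted = rng.normal(size=(n, 40))
learned = rng.normal(size=(n, 128))

def merge_features(a, b):
    """Z-score each block independently, then concatenate, so that neither
    feature family dominates the merged representation by scale alone."""
    za = (a - a.mean(axis=0)) / (a.std(axis=0) + 1e-12)
    zb = (b - b.mean(axis=0)) / (b.std(axis=0) + 1e-12)
    return np.hstack([za, zb])

merged = merge_features(handcrafted, learned)  # shape (200, 168)
```

The per-block standardization matters in practice: handcrafted physicochemical descriptors and learned embeddings typically live on very different scales.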


Subjects
Anti-Infective Agents, Antimicrobial Peptides, Humans, Antimicrobial Cationic Peptides/pharmacology, Anti-Infective Agents/pharmacology, Anti-Infective Agents/chemistry, Amino Acid Sequence
3.
Brief Bioinform ; 23(3)2022 05 13.
Article in English | MEDLINE | ID: mdl-35380616

ABSTRACT

In the last few decades, antimicrobial peptides (AMPs) have been explored as an alternative to classical antibiotics, which in turn has motivated the development of machine learning models to predict antimicrobial activity in peptides. The first generation of these predictors was filled with what are now known as shallow learning-based models. These models require the computation and selection of molecular descriptors to characterize each peptide sequence and to train the models. The second generation, known as deep learning-based models, which no longer requires the explicit computation and selection of those descriptors, started to be used for AMP prediction just four years ago. The superior performance claimed for deep models over shallow models has created a prevalent inertia toward using deep learning to identify AMPs. However, methodological flaws and/or modeling biases in the building of deep models do not support such superiority. Here, we analyze the main pitfalls that led to biased conclusions about the leading performance of deep models. We also analyze whether deep models truly achieve better predictions than shallow models by performing fair studies on different state-of-the-art benchmarking datasets. The experiments reveal that deep models do not outperform shallow models in the classification of AMPs, and that both types of models codify similar chemical information, since their predictions are highly similar. Thus, according to the currently available datasets, we conclude that deep learning may not be the most suitable approach for developing models to identify AMPs, mainly because shallow models achieve comparable-to-superior performance and are simpler (Ockham's razor principle). Even so, we suggest using deep learning only when its capabilities lead to significantly better performance gains worth the additional computational cost.
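The claim that two model families "codify similar chemical information since their predictions are highly similar" is easy to quantify on a shared test set. A minimal sketch of two such similarity measures (illustrative, not the authors' analysis code):

```python
import numpy as np

def agreement(labels_a, labels_b):
    """Fraction of test peptides on which two classifiers assign the same label."""
    a, b = np.asarray(labels_a), np.asarray(labels_b)
    return float((a == b).mean())

def score_correlation(scores_a, scores_b):
    """Pearson correlation between two classifiers' predicted probabilities."""
    return float(np.corrcoef(scores_a, scores_b)[0, 1])
```

High label agreement together with a high score correlation on the same benchmark is what would justify treating the two models as redundant rather than complementary.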


Subjects
Deep Learning, Amino Acid Sequence, Antimicrobial Peptides, Machine Learning, Peptides/chemistry
4.
J Med Internet Res ; 25: e46934, 2023 10 27.
Article in English | MEDLINE | ID: mdl-37889530

ABSTRACT

BACKGROUND: Sensitive and interpretable machine learning (ML) models can provide valuable assistance to clinicians in managing patients with heart failure (HF) at discharge by identifying individual factors associated with a high risk of readmission. In this cohort study, we delve into the factors driving the potential utility of classification models as decision support tools for predicting readmissions in patients with HF. OBJECTIVE: The primary objective of this study is to assess the trade-off between using deep learning (DL) and traditional ML models to identify the risk of 100-day readmissions in patients with HF. Additionally, the study aims to provide explanations for the model predictions by highlighting important features, both on a global scale across the patient cohort and on a local level for individual patients. METHODS: The retrospective data for this study were obtained from the Regional Health Care Information Platform in Region Halland, Sweden. The study cohort consisted of patients diagnosed with HF who were over 40 years old and had been hospitalized at least once between 2017 and 2019. Data analysis encompassed the period from January 1, 2017, to December 31, 2019. Two ML models, built on decision trees and a recurrent neural architecture, were developed and validated to predict 100-day readmissions, with a focus on the explainability of their decisions. Model explainability was obtained using an ML explainer. The predictive performance of these models was compared against 2 risk assessment tools using multiple performance metrics. RESULTS: The retrospective data set included a total of 15,612 admissions, of which 5597 were followed by a readmission, a readmission rate of 35.85%. Notably, a traditional and explainable model informed by clinical knowledge exhibited performance comparable to the DL model and surpassed conventional scoring methods in predicting readmission among patients with HF. The evaluation of predictive performance was based on commonly used metrics, with an area under the precision-recall curve of 66% for the deep model and 68% for the traditional model on the holdout data set. Importantly, the explanations provided by the traditional model offer actionable insights with the potential to enhance care planning. CONCLUSIONS: This study found that a widely used deep prediction model did not outperform an explainable ML model when predicting readmissions among patients with HF. The results suggest that model transparency does not necessarily compromise performance, which could facilitate the clinical adoption of such models.
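When judging the reported AUC-PR values, the class prevalence is the natural baseline: a no-skill classifier attains an AUC-PR equal to the positive rate. A quick sanity check using only the counts reported in the abstract:

```python
# Cohort counts as reported in the abstract.
admissions = 15612
readmissions = 5597

prevalence = readmissions / admissions          # no-skill AUC-PR baseline
print(f"readmission rate: {prevalence:.2%}")    # matches the reported 35.85%

# Reported model AUC-PR values, expressed as lift over the no-skill baseline.
for name, auc_pr in [("deep model", 0.66), ("traditional model", 0.68)]:
    print(f"{name}: AUC-PR {auc_pr:.2f} ({auc_pr / prevalence:.2f}x baseline)")
```

Both models therefore sit well above chance (roughly 1.8-1.9x the prevalence baseline), and the two-point gap between them is small relative to that lift.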


Subjects
Heart Failure, Patient Readmission, Humans, Adult, Retrospective Studies, Cohort Studies, Machine Learning, Heart Failure/therapy, Heart Failure/diagnosis
5.
Sensors (Basel) ; 23(16)2023 Aug 09.
Article in English | MEDLINE | ID: mdl-37631573

ABSTRACT

Electroencephalography (EEG) is increasingly being used in pediatric neurology and provides opportunities to diagnose various brain illnesses more accurately and precisely. It is considered one of the most effective tools for identifying neonatal seizures, especially in Neonatal Intensive Care Units (NICUs). However, EEG interpretation is time-consuming and requires specialists with extensive training. Distinguishing seizures can be challenging, since they may have a wide range of clinical characteristics and etiologies. Technological advancements such as Machine Learning (ML) approaches for the rapid and automated diagnosis of neonatal seizures have increased in recent years. This work proposes a novel optimized ML framework to overcome the constraints of conventional seizure detection techniques. Moreover, we modified a novel meta-heuristic optimization algorithm (MHOA), named Aquila Optimization (AO), to develop an optimized model and make our proposed framework more efficient and robust. For a comparison-based study, we also examined the performance of our optimized model against other classifiers, including the Decision Tree (DT), Random Forest (RF) and Gradient Boosting Classifier (GBC). The framework was validated on a public dataset from Helsinki University Hospital, in which EEG signals were collected from 79 neonates. Our proposed model achieved encouraging results: 93.38% accuracy, 93.9% Area Under the Curve (AUC), 92.72% F1 score, 65.17% Kappa, 93.38% sensitivity and 77.52% specificity. It thus outperforms most present shallow ML architectures, showing improvements in accuracy and AUC scores. We believe these results represent a major advance in neonatal seizure detection, which will benefit the medical community by increasing the reliability of the detection process.
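The gap between the reported 93.38% accuracy and the 65.17% Kappa reflects chance correction on an imbalanced dataset. As an illustrative sketch (not the authors' pipeline), all four of these metrics follow from binary confusion-matrix counts:

```python
def binary_metrics(tp, fp, fn, tn):
    """Accuracy, sensitivity, specificity and Cohen's kappa from
    binary confusion-matrix counts."""
    n = tp + fp + fn + tn
    acc = (tp + tn) / n
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    # Chance agreement: expected accuracy if predictions were drawn
    # independently with the same marginal rates as the data.
    chance = ((tp + fn) / n) * ((tp + fp) / n) + ((tn + fp) / n) * ((tn + fn) / n)
    kappa = (acc - chance) / (1 - chance)
    return acc, sens, spec, kappa
```

Because seizure-free epochs dominate neonatal EEG recordings, Kappa (and specificity) are often the more honest summaries than raw accuracy.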


Subjects
Eagles, Infant, Newborn, Child, Animals, Humans, Reproducibility of Results, Seizures/diagnosis, Brain, Algorithms
6.
Sensors (Basel) ; 23(21)2023 Nov 01.
Article in English | MEDLINE | ID: mdl-37960599

ABSTRACT

Short QT syndrome (SQTS) is an inherited cardiac ion-channel disease associated with an increased risk of sudden cardiac death (SCD) in young and otherwise healthy individuals. SCD is often the first clinical presentation in patients with SQTS. However, arrhythmia risk stratification is presently unsatisfactory in asymptomatic patients. In this context, artificial intelligence-based electrocardiogram (ECG) analysis has never been applied to refine risk stratification in patients with SQTS. The purpose of this study was to analyze ECGs from SQTS patients with the aid of different AI algorithms to evaluate their ability to discriminate between subjects with and without documented life-threatening arrhythmic events. The study group included 104 SQTS patients, 37 of whom had a documented major arrhythmic event at presentation and/or during follow-up. Thirteen ECG features were measured independently by three expert cardiologists; the dataset was then randomly divided into three subsets (training, validation and testing). Five shallow neural networks were trained, validated and tested to predict subject-specific class (non-event/event) using different subsets of ECG features. Additionally, several deep learning and machine learning algorithms, such as Vision Transformer, Swin Transformer, MobileNetV3, EfficientNetV2, ConvNextTiny, Capsule Networks and logistic regression, were trained, validated and tested directly on the scanned ECG images, without any manual feature extraction. Furthermore, a shallow neural network, a 1-D transformer classifier and a 1-D CNN were trained, validated and tested on ECG signals extracted from the aforementioned scanned images. Classification performance was evaluated by means of sensitivity, specificity, positive and negative predictive values, accuracy and area under the curve. The results show that artificial intelligence can help clinicians better stratify the risk of arrhythmia in patients with SQTS. In particular, shallow neural networks processing the measured ECG features showed the best performance in identifying patients who will not suffer a potentially lethal event. This could pave the way for refined ECG-based risk stratification in this group of patients, potentially helping to save the lives of young and otherwise healthy individuals.


Subjects
Cardiac Arrhythmias, Artificial Intelligence, Humans, Cardiac Arrhythmias/diagnosis, Cardiac Arrhythmias/complications, Neural Networks (Computer), Electrocardiography/methods, Sudden Cardiac Death/etiology
7.
Sensors (Basel) ; 20(12)2020 Jun 21.
Article in English | MEDLINE | ID: mdl-32575909

ABSTRACT

Detecting cognitive profiles is critical to efficient adaptive learning systems that automatically adjust the delivered content depending on the learner's cognitive states and skills. This study explores electroencephalography (EEG) and facial expressions as physiological monitoring tools to build models that detect two cognitive states, namely engagement and instantaneous attention, and three cognitive skills, namely focused attention, planning and shifting. First, data were collected from 127 subjects taking two scientifically validated cognitive assessments while wearing a 14-channel EEG headset and being videotaped. Second, labeling was performed based on the scores obtained from the assessment tools. Third, different shallow and deep models were tested on the two modalities of EEG and facial expressions. Finally, the best-performing models for the analyzed states were determined. According to the performance measure used, the f-beta score with beta = 2, the best results for engagement, instantaneous attention and focused attention were achieved by EEG-based models, with scores of 0.86, 0.82 and 0.63, respectively. For planning and shifting, the best-performing models were facial expression-based, with scores of 0.78 and 0.81, respectively. These results show that EEG and facial expressions contain important and distinct cues about the analyzed cognitive states, and can therefore be used to detect them automatically and non-intrusively.
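The f-beta score with beta = 2 used above weights recall twice as heavily as precision. A minimal implementation from confusion-matrix counts (illustrative, not the study's code):

```python
def fbeta(tp, fp, fn, beta=2.0):
    """F-beta score from confusion-matrix counts; beta > 1 weights recall
    more heavily than precision (beta = 2, as in the study's evaluation)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)
```

With beta = 2, a model that trades precision for recall scores higher than one making the opposite trade, which suits detectors where missing a disengaged learner is costlier than a false alarm.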


Subjects
Cognition, Electroencephalography, Facial Expression, Pattern Recognition (Automated), Attention, Cues (Psychology), Humans
8.
Eur Radiol Exp ; 8(1): 26, 2024 Mar 05.
Article in English | MEDLINE | ID: mdl-38438821

ABSTRACT

An increasingly strong connection between artificial intelligence and medicine has enabled the development of predictive models capable of supporting physicians' decision-making. Artificial intelligence encompasses much more than machine learning, which is nevertheless its most cited and used sub-branch of the last decade. Since most clinical problems can be modeled through machine learning classifiers, it is essential to discuss their main elements. This review aims to give primary educational insights into the most accessible and widely employed classifiers in the radiology field, distinguishing between "shallow" learning (i.e., traditional machine learning) algorithms, including support vector machines, random forests and XGBoost, and "deep" learning architectures, including convolutional neural networks and vision transformers. In addition, the paper outlines the key steps of classifier training and highlights the differences between the most common algorithms and architectures. Although the choice of an algorithm depends on the task and the dataset at hand, general guidelines for classifier selection are proposed in relation to task analysis, dataset size, explainability requirements and available computing resources. Considering the enormous interest in these innovative models and architectures, the problem of machine learning algorithm interpretability is finally discussed, providing a future perspective on trustworthy artificial intelligence. Relevance statement: The growing synergy between artificial intelligence and medicine fosters predictive models aiding physicians. Machine learning classifiers, from shallow to deep learning, are offering crucial insights for the development of clinical decision support systems in healthcare. Explainability is a key feature of models that leads systems toward integration into clinical practice. Key points: • Training a shallow classifier requires extracting disease-related features from regions of interest (e.g., radiomics). • Deep classifiers implement automatic feature extraction and classification. • Classifier selection is based on data and computational resource availability, the task, and explanation needs.


Subjects
Artificial Intelligence, Deep Learning, Algorithms, Machine Learning, Neural Networks (Computer)
9.
Brief Funct Genomics ; 22(5): 401-410, 2023 11 10.
Article in English | MEDLINE | ID: mdl-37158175

ABSTRACT

RNA-binding proteins (RBPs) are essential for post-transcriptional gene regulation in eukaryotes, including splicing control, mRNA transport and decay. Thus, accurate identification of RBPs is important for understanding gene expression and the regulation of cell state. A number of computational models have been developed to detect RBPs. These methods made use of datasets from several eukaryotic species, specifically from mice and humans. Although some models have been tested on Arabidopsis, these techniques fall short of correctly identifying RBPs in other plant species. Therefore, a powerful computational model for identifying plant-specific RBPs is needed. In this study, we present a novel computational model for locating RBPs in plants. Five deep learning models and ten shallow learning algorithms were utilized for prediction with 20 sequence-derived and 20 evolutionary feature sets. The highest repeated five-fold cross-validation performance, 91.24% AUC-ROC and 91.91% AUC-PR, was achieved by the light gradient boosting machine. When evaluated on an independent dataset, the developed approach achieved 94.00% AUC-ROC and 94.50% AUC-PR. The proposed model achieved significantly higher accuracy for predicting plant-specific RBPs than the currently available state-of-the-art RBP prediction models. Although certain models have already been trained and assessed on the model organism Arabidopsis, this is the first comprehensive computational model for the discovery of plant-specific RBPs. The web server RBPLight, publicly accessible at https://iasri-sg.icar.gov.in/rbplight/, was also developed for the convenience of researchers identifying RBPs in plants.
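The repeated five-fold protocol behind the reported cross-validation figures reshuffles the data before each repeat, so every protein is tested several times under different splits. A minimal index-generator sketch (an illustration of the protocol, not the RBPLight code):

```python
import numpy as np

def repeated_kfold_indices(n, k=5, repeats=5, seed=0):
    """Yield (train_idx, test_idx) pairs for repeated k-fold CV: the data
    are reshuffled before each repeat, so every sample is tested exactly
    `repeats` times overall."""
    rng = np.random.default_rng(seed)
    for _ in range(repeats):
        perm = rng.permutation(n)
        folds = np.array_split(perm, k)
        for i in range(k):
            test = folds[i]
            train = np.concatenate([folds[j] for j in range(k) if j != i])
            yield train, test
```

Averaging a metric such as AUC-ROC over all k × repeats test folds reduces the variance that a single random split would introduce.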


Subjects
Arabidopsis, Humans, Animals, Mice, Arabidopsis/genetics, Arabidopsis/metabolism, Algorithms, Biological Evolution, RNA-Binding Proteins/genetics, RNA-Binding Proteins/chemistry, RNA-Binding Proteins/metabolism, Computational Biology/methods, Binding Sites
10.
Environ Sci Pollut Res Int ; 30(5): 12317-12347, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36107302

ABSTRACT

The stability of the power grid and the operational security of the power system depend on precise wind speed prediction. In consideration of the nonlinear and non-stationary characteristics of wind speed in different seasons, this paper employs the wind resource index weights calculated by the triangular fuzzy analytic hierarchy process (TF-AHP), criteria importance through inter-criteria correlation (CRITIC) and the entropy weight method (EWM) to improve gray correlation analysis (GRA) and obtain the gray correlation degree of each season. In addition, a wind speed prediction model is proposed that includes single-layer and two-layer weighting and is based on both deep and shallow machine learning models. First, we establish each quarter's wind resource characteristics at typical monthly intervals of 10, 30, 60 and 120 min. The TF-AHP-CRITIC-EWM-enhanced GRA, combining subjective and objective weights, is used to assess the available wind resources in each season and to compute the forecast combination of wind speeds for each season. The prediction values of each layer model are evaluated independently as the final prediction results. For intervals with considerable errors, we apply wavelet denoising and replacement combination. The simulation findings show that the proposed combined model surpasses earlier benchmark models in terms of goodness of fit, prediction accuracy and generalizability.
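Of the three weighting schemes, the entropy weight method is the simplest to state: criteria whose values spread more across alternatives carry more information and so receive larger objective weights. A minimal sketch (illustrative, not the paper's implementation; assumes a positive criteria matrix):

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method: columns (criteria) whose values vary more
    across rows (alternatives) have lower entropy and get larger weights.
    X must be a positive matrix, one row per alternative."""
    X = np.asarray(X, dtype=float)
    P = X / X.sum(axis=0)                  # column-normalized proportions
    with np.errstate(divide="ignore", invalid="ignore"):
        logs = np.where(P > 0, np.log(P), 0.0)
    m = X.shape[0]
    entropy = -(P * logs).sum(axis=0) / np.log(m)
    d = 1.0 - entropy                      # degree of divergence per criterion
    return d / d.sum()                     # weights sum to 1
```

A criterion that is identical for every alternative has maximal entropy and receives zero weight, which is exactly the behavior that makes EWM a purely objective counterweight to the subjective TF-AHP weights.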


Subjects
Machine Learning, Wind, Seasons, Computer Simulation, Forecasting
11.
Metabolites ; 12(5)2022 May 18.
Article in English | MEDLINE | ID: mdl-35629959

ABSTRACT

Optical microscopy has long been the gold standard for analysing tissue samples for the diagnosis of various diseases, such as cancer. The current diagnostic workflow is time-consuming and labour-intensive, and manual annotation by a qualified pathologist is needed. With the ever-increasing number of tissue blocks and the complexity of molecular diagnostics, new approaches have been developed as complementary or alternative solutions to the current workflow, such as digital pathology and mass spectrometry imaging (MSI). This study compares the performance of a digital pathology workflow using deep learning for tissue recognition and an MSI approach utilising shallow learning to annotate formalin-fixed and paraffin-embedded (FFPE) breast cancer tissue microarrays (TMAs). Results show that both deep learning algorithms based on conventional optical images and MSI-based shallow learning can provide automated diagnostics with F1 scores higher than 90%, with the latter intrinsically built on biochemical information that can be used for further analysis.

12.
Neural Comput Appl ; 34(2): 1135-1159, 2022.
Article in English | MEDLINE | ID: mdl-34483495

ABSTRACT

The process of tagging a given text or document with suitable labels is known as text categorization or classification. The aim of this work is to automatically tag a news article based on its vocabulary features. To accomplish this objective, two large datasets were constructed from various Arabic news portals. The first dataset contains 90k single-labeled articles from four domains (Business, Middle East, Technology and Sports). The second dataset has over 290k multi-tagged articles. To examine the single-label dataset, we employed an array of ten shallow learning classifiers. Furthermore, we added an ensemble model that adopts the majority-voting technique over all studied classifiers. The performance of the classifiers on the first dataset ranged between 87.7% (AdaBoost) and 97.9% (SVM). Analyzing some of the misclassified articles confirmed the need for multi-label as opposed to single-label categorization for better classification results. For the second dataset, we tested both shallow learning and deep learning multi-labeling approaches. A custom accuracy metric, designed for the multi-labeling task, was developed for performance evaluation, along with the Hamming loss metric. First, we used classifiers compatible with multi-labeling tasks, such as Logistic Regression and XGBoost, by wrapping each in a OneVsRest classifier. XGBoost gave the higher accuracy, scoring 84.7%, while Logistic Regression scored 81.3%. Second, ten neural networks were constructed (CNN, CLSTM, LSTM, BILSTM, GRU, CGRU, BIGRU, HANGRU, CRF-BILSTM and HANLSTM). CGRU proved to be the best multi-labeling classifier, scoring an accuracy of 94.85%, higher than the rest of the classifiers.
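The abstract names Hamming loss but does not specify its custom accuracy metric; for orientation, here is a sketch of Hamming loss alongside subset accuracy, the strictest standard multi-label accuracy (illustrative, not the paper's metric):

```python
import numpy as np

def hamming_loss(Y_true, Y_pred):
    """Fraction of individual label assignments that are wrong,
    averaged over all samples and labels (lower is better)."""
    Y_true, Y_pred = np.asarray(Y_true), np.asarray(Y_pred)
    return float((Y_true != Y_pred).mean())

def subset_accuracy(Y_true, Y_pred):
    """Fraction of samples whose entire label vector is exactly correct."""
    Y_true, Y_pred = np.asarray(Y_true), np.asarray(Y_pred)
    return float((Y_true == Y_pred).all(axis=1).mean())
```

The gap between the two explains why multi-label work often reports a softer custom accuracy: subset accuracy gives no credit for an article tagged correctly on four of five labels, while Hamming loss penalizes only the one wrong tag.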

13.
Sci Total Environ ; 741: 140338, 2020 Nov 01.
Article in English | MEDLINE | ID: mdl-32610233

ABSTRACT

Machine learning (ML) models are increasingly used to study complex environmental phenomena with high variability in time and space. In this study, the potential of three categories of ML regression models, classical regression, shallow learning and deep learning, was explored for predicting soil greenhouse gas (GHG) emissions from an agricultural field. Carbon dioxide (CO2) and nitrous oxide (N2O) fluxes, as well as various environmental, agronomic and soil data, were measured at the site over a five-year period in Quebec, Canada. The rigorous analysis, which included statistical comparison and cross-validation for the prediction of CO2 and N2O fluxes, confirmed that the LSTM model performed best among the considered ML models, with the highest R coefficient and the lowest root mean squared error (RMSE) values (R = 0.87 and RMSE = 30.3 mg·m-2·hr-1 for CO2 flux prediction, and R = 0.86 and RMSE = 0.19 mg·m-2·hr-1 for N2O flux prediction). The predictions of the LSTM were more accurate than those simulated in a previous study with the biophysical Root Zone Water Quality Model (RZWQM2). The classical regression models (namely RF, SVM and LASSO) satisfactorily simulated cyclical and seasonal variations of CO2 fluxes (R = 0.75, 0.71 and 0.68, respectively); however, they failed to reasonably predict the peak values of N2O fluxes (R < 0.25). Shallow ML was found to be less effective in predicting GHG fluxes than the other considered ML models (R < 0.7 for CO2 fluxes and R < 0.3 for N2O fluxes) and was the most sensitive to hyperparameter tuning. Based on this comprehensive comparison study, the LSTM model can be employed successfully in simulating GHG emissions from agricultural soils, providing a new perspective on the application of machine learning modeling for predicting GHG emissions to the environment.
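The two metrics that rank the models above, the Pearson R coefficient and RMSE, can be computed directly from paired observed and simulated flux series. A minimal sketch (illustrative, not the study's code):

```python
import numpy as np

def r_and_rmse(y_obs, y_sim):
    """Pearson correlation coefficient (R) and root mean squared error
    between observed and simulated flux series."""
    y_obs = np.asarray(y_obs, dtype=float)
    y_sim = np.asarray(y_sim, dtype=float)
    r = float(np.corrcoef(y_obs, y_sim)[0, 1])
    rmse = float(np.sqrt(np.mean((y_obs - y_sim) ** 2)))
    return r, rmse
```

The two are complementary: R rewards tracking the shape of the seasonal cycle, while RMSE penalizes bias, so a model can score a high R yet a poor RMSE if it is systematically offset.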

14.
Sleep Med Rev ; 48: 101204, 2019 12.
Article in English | MEDLINE | ID: mdl-31491655

ABSTRACT

Clinical sleep scoring involves a tedious visual review of overnight polysomnograms by a human expert, according to official standards. It would therefore appear to be a suitable task for modern artificial intelligence algorithms. Indeed, machine learning algorithms have been applied to sleep scoring for many years, and several software products nowadays offer automated or semi-automated scoring services. However, the vast majority of sleep physicians do not use them. Very recently, thanks to increased computational power, deep learning has also been employed, with promising results. Machine learning algorithms can undoubtedly reach high accuracy in specific situations, but there are many difficulties in introducing them into the daily routine. In this review, the latest approaches applying deep learning to facilitate and accelerate sleep scoring are thoroughly analyzed and compared with state-of-the-art methods. The obstacles to introducing automated sleep scoring into clinical practice are then examined. The capability of deep learning algorithms to learn from highly heterogeneous datasets, in terms of both human data and scorers, is very promising and should be further investigated.


Subjects
Data Analysis, Machine Learning, Sleep Stages/physiology, Sleep Wake Disorders/diagnosis, Algorithms, Computer-Assisted Diagnosis, Humans, Polysomnography/instrumentation
15.
Article in English | MEDLINE | ID: mdl-34408917

ABSTRACT

Despite the linear relation between the number of observed spectra and the search time, current protein search engines, even the parallel versions, can take several hours to search a large set of MS/MS spectra, which can be generated in a short time. After a laborious search process, some (and at times, the majority) of the observed spectra are labeled as non-identifiable. We evaluate the role of machine learning in building an efficient MS/MS filter to remove non-identifiable spectra. We compare and evaluate a deep learning algorithm against 9 shallow learning algorithms with different configurations. Using 10 different datasets generated from two different search engines, different instruments, different sizes and different species, we experimentally show that deep learning models are powerful in filtering MS/MS spectra. We also show that our simple feature list is significant, as the shallow learning algorithms likewise showed encouraging results in filtering the MS/MS spectra. Our deep learning model can exclude around 50% of the non-identifiable spectra while losing, on average, only 9% of the identifiable ones. As for shallow learning, the Random Forest, Support Vector Machine and Neural Network algorithms showed encouraging results, eliminating, on average, 70% of the non-identifiable spectra while losing around 25% of the identifiable ones. The deep learning algorithm may be especially useful when the protein(s) of interest are at lower cellular or tissue concentrations, while the other algorithms may be more useful for concentrated or more highly expressed proteins.
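The trade-off the abstract describes (50% of junk removed at a 9% cost in identifiable spectra) translates directly into search-time savings. A sketch of the arithmetic under hypothetical spectrum counts (the 100k total and 30% identifiable rate are assumptions for illustration; the removal rates are the paper's reported averages for the deep model):

```python
def filter_tradeoff(n_identifiable, n_unidentifiable,
                    frac_junk_removed, frac_good_lost):
    """Spectra left to search and identifiable spectra kept after filtering,
    given a filter's removal rates (illustrative arithmetic only)."""
    kept_good = n_identifiable * (1 - frac_good_lost)
    kept_junk = n_unidentifiable * (1 - frac_junk_removed)
    remaining = kept_good + kept_junk
    return remaining, kept_good

# Hypothetical run: 100k spectra, 30% identifiable, deep-model rates
# (50% of non-identifiable removed, 9% of identifiable lost).
remaining, kept_good = filter_tradeoff(30_000, 70_000, 0.50, 0.09)
```

Since search time scales roughly linearly with spectrum count, the filtered run searches about 62% of the original spectra while retaining 91% of the identifiable ones, which is the trade-off being weighed against the shallow models' harsher 70%/25% split.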
