Results 1 - 20 of 441
1.
Comput Methods Programs Biomed ; 255: 108337, 2024 Jul 19.
Article in English | MEDLINE | ID: mdl-39067139

ABSTRACT

BACKGROUND AND OBJECTIVE: Recent studies point out that the dynamics and interaction of cell populations within their environment are related to several biological processes in immunology. Hence, single-cell analysis in immunology now relies on spatial omics. Moreover, recent literature suggests that immunology scenarios are hierarchically organized, including unknown cell behaviors appearing in different proportions across some observable control and therapy groups. These dynamic behaviors play a crucial role in identifying the causes of processes such as inflammation, aging, and fighting off pathogens or cancerous cells. In this work, we use a self-supervised learning approach to discover these behaviors associated with cell dynamics in an immunology scenario. MATERIALS AND METHODS: Specifically, we study the different responses of the control and therapy groups in a scenario involving inflammation due to infarct, with a focus on neutrophil migration within blood vessels. Starting from a set of hand-crafted spatio-temporal features, we use a recurrent neural network to generate embeddings that properly describe the dynamics of the migration processes. The network is trained using a novel multi-task contrastive loss that, on the one hand, models the hierarchical structure of our scenario (groups-behaviors-samples) and, on the other, ensures temporal consistency within the embedding, enforcing that subsequent temporal samples obtained from a given cell stay close in the latent space. RESULTS: Our experimental results demonstrate that the resulting embeddings improve the separability of cell behaviors and the log-likelihood of the therapies, when compared to the hand-crafted feature extraction and recent methods from the state of the art, even with dimensionality reduction (16 vs. 21 hand-crafted features). CONCLUSIONS: Our approach enables single-cell analyses at a population level, being able to automatically discover shared behaviors among different groups. This, in turn, enables the prediction of therapy effectiveness based on the proportions of these behaviors within a study group.
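As an illustration of the training objective described above, the following minimal PyTorch sketch (not the authors' implementation; the feature count, embedding size, loss weighting, and the simple group-contrast term are assumptions) pairs an RNN embedding of cell tracks with a temporal-consistency term that keeps consecutive samples of the same cell close in the latent space.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TrackEncoder(nn.Module):
    """GRU over hand-crafted spatio-temporal features -> per-timestep embeddings."""
    def __init__(self, n_features=21, emb_dim=16):
        super().__init__()
        self.rnn = nn.GRU(n_features, emb_dim, batch_first=True)

    def forward(self, x):                 # x: (batch, time, n_features)
        z, _ = self.rnn(x)                # z: (batch, time, emb_dim)
        return F.normalize(z, dim=-1)     # unit-norm embeddings for cosine similarity

def temporal_consistency(z):
    """Pull embeddings of consecutive timesteps from the same cell together."""
    return (1 - F.cosine_similarity(z[:, :-1], z[:, 1:], dim=-1)).mean()

def group_contrast(z, group, tau=0.1):
    """Simple supervised contrastive term on time-averaged embeddings per group label."""
    v = F.normalize(z.mean(dim=1), dim=-1)            # (batch, emb_dim)
    sim = v @ v.T / tau
    same = (group[:, None] == group[None, :]).float()
    same.fill_diagonal_(0)
    log_p = F.log_softmax(sim, dim=1)
    return -(log_p * same).sum(1).div(same.sum(1).clamp(min=1)).mean()

enc = TrackEncoder()
x = torch.randn(8, 30, 21)                 # 8 cell tracks, 30 timesteps, 21 features
group = torch.randint(0, 2, (8,))          # control vs. therapy label
z = enc(x)
loss = group_contrast(z, group) + 0.5 * temporal_consistency(z)
loss.backward()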

2.
Cell Rep ; 43(7): 114412, 2024 Jul 23.
Article in English | MEDLINE | ID: mdl-38968075

ABSTRACT

A stimulus held in working memory is perceived as contracted toward the average stimulus. This contraction bias has been extensively studied in psychophysics, but little is known about its origin from neural activity. By training recurrent networks of spiking neurons to discriminate temporal intervals, we explored the causes of this bias and how behavior relates to population firing activity. We found that the trained networks exhibited animal-like behavior. Various geometric features of neural trajectories in state space encoded warped representations of the durations of the first interval modulated by sensory history. Formulating a normative model, we showed that these representations conveyed a Bayesian estimate of the interval durations, thus relating activity and behavior. Importantly, our findings demonstrate that Bayesian computations already occur during the sensory phase of the first stimulus and persist throughout its maintenance in working memory, until the time of stimulus comparison.


Subject(s)
Bayes Theorem; Animals; Models, Neurological; Neurons/physiology; Action Potentials/physiology; Nerve Net/physiology; Memory, Short-Term/physiology; Neural Networks, Computer
3.
Sci Rep ; 14(1): 16800, 2024 Jul 22.
Article in English | MEDLINE | ID: mdl-39039237

ABSTRACT

Handwritten Text Recognition (HTR) is a challenging task due to the complex structures and variations present in handwritten text. In recent years, the application of gated mechanisms, such as Long Short-Term Memory (LSTM) networks, has brought significant advancements to HTR systems. This paper presents an overview of HTR using a gated mechanism and highlights its novelty and advantages. The gated mechanism enables the model to capture long-term dependencies, retain relevant context, handle variable-length sequences, mitigate error propagation, and adapt to contextual variations. The pipeline involves preprocessing the handwritten text images, extracting features, modeling the sequential dependencies using the gated mechanism, and decoding the output into readable text. The training process utilizes annotated datasets and optimization techniques to minimize transcription discrepancies. HTR using a gated mechanism has found applications in digitizing historical documents, automatic form processing, and real-time transcription. The results show improved accuracy and robustness compared to traditional HTR approaches. The advancements in HTR using a gated mechanism open up new possibilities for effectively recognizing and transcribing handwritten text in various domains. Across five handwriting datasets (Washington, Saint Gall, RIMES, Bentham, and IAM), this research outperforms the most recent iteration of the HTR system. Low-cost computing devices such as smartphones and robots can benefit from this research.
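For illustration only, a compact gated-RNN HTR pipeline of the kind described above (CNN features per image column, a bidirectional LSTM, and CTC training) might look like the following sketch; the layer sizes, image height, and character-set size are placeholders rather than the paper's configuration.

import torch
import torch.nn as nn

class CRNN(nn.Module):
    def __init__(self, n_chars=80):
        super().__init__()
        self.cnn = nn.Sequential(                     # (B,1,32,W) -> (B,64,8,W/4)
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.rnn = nn.LSTM(64 * 8, 128, bidirectional=True, batch_first=True)
        self.fc = nn.Linear(256, n_chars + 1)         # +1 for the CTC blank symbol

    def forward(self, img):                           # img: (B, 1, 32, W)
        f = self.cnn(img)                             # (B, 64, 8, W/4)
        f = f.permute(0, 3, 1, 2).flatten(2)          # one feature vector per image column
        out, _ = self.rnn(f)
        return self.fc(out).log_softmax(-1)           # (B, T, n_chars+1)

model = CRNN()
img = torch.randn(4, 1, 32, 128)                      # 4 text-line images
logits = model(img).permute(1, 0, 2)                  # CTC expects (T, B, C)
targets = torch.randint(1, 81, (4, 10))               # dummy transcriptions
loss = nn.CTCLoss(blank=0)(logits, targets,
                           torch.full((4,), logits.size(0)),
                           torch.full((4,), 10))
loss.backward()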

4.
Heliyon ; 10(12): e32639, 2024 Jun 30.
Article in English | MEDLINE | ID: mdl-38988581

ABSTRACT

The objective of this study is to investigate methodologies concerning enterprise financial sharing and risk identification to mitigate concerns associated with the sharing and safeguarding of financial data. Initially, the analysis examines security vulnerabilities inherent in conventional financial information sharing practices. Subsequently, blockchain technology is introduced to transition various entity nodes within centralized enterprise financial networks into a decentralized blockchain framework, culminating in the formulation of a blockchain-based model for enterprise financial data sharing. Concurrently, the study integrates the Bi-directional Long Short-Term Memory (BiLSTM) algorithm with the transformer model, presenting an enterprise financial risk identification model referred to as the BiLSTM-fused transformer model. This model amalgamates multimodal sequence modeling with comprehensive understanding of both textual and visual data. It stratifies financial values into levels 1 to 5, where level 1 signifies the most favorable financial condition, followed by relatively good (level 2), average (level 3), high risk (level 4), and severe risk (level 5). Subsequent to model construction, experimental analysis is conducted, revealing that, in comparison to the Byzantine Fault Tolerance (BFT) algorithm mechanism, the proposed model achieves a throughput exceeding 80 with a node count of 146. Both data message leakage and average packet loss rates remain below 10%. Moreover, when juxtaposed with the recurrent neural networks (RNNs) algorithm, this model demonstrates a risk identification accuracy surpassing 94%, an AUC value exceeding 0.95, and a reduction in the time required for risk identification by approximately 10 s. Consequently, this study facilitates the more precise and efficient identification of potential risks, thereby furnishing crucial support for enterprise risk management and strategic decision-making endeavors.
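A minimal sketch of one way to fuse a BiLSTM with a transformer encoder for five-level risk classification is shown below; the concatenation-based fusion, feature count, and layer sizes are assumptions for illustration, not the paper's architecture.

import torch
import torch.nn as nn

class BiLSTMTransformerRisk(nn.Module):
    def __init__(self, n_features=12, d_model=64, n_levels=5):
        super().__init__()
        self.proj = nn.Linear(n_features, d_model)
        self.bilstm = nn.LSTM(d_model, d_model // 2, bidirectional=True, batch_first=True)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.head = nn.Linear(2 * d_model, n_levels)   # levels 1 (best) .. 5 (severe risk)

    def forward(self, x):                  # x: (batch, reporting periods, indicators)
        h = self.proj(x)
        lstm_out, _ = self.bilstm(h)       # (B, T, d_model)
        trf_out = self.transformer(h)      # (B, T, d_model)
        fused = torch.cat([lstm_out.mean(1), trf_out.mean(1)], dim=-1)
        return self.head(fused)            # logits over the 5 risk levels

model = BiLSTMTransformerRisk()
x = torch.randn(8, 16, 12)                 # 8 firms, 16 reporting periods, 12 indicators
logits = model(x)                          # (8, 5)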

5.
AIMS Public Health ; 11(2): 432-458, 2024.
Article in English | MEDLINE | ID: mdl-39027393

ABSTRACT

Recurrent Neural Networks (RNNs), a type of machine learning technique, have recently drawn a lot of interest in numerous fields, including epidemiology. Implementing public health interventions in the field of epidemiology depends on efficient modeling and outbreak prediction. Because RNNs can capture sequential dependencies in data, they have become highly effective tools in this field. In this paper, the use of RNNs in epidemic modeling is examined, with a focus on the extent to which they can handle the inherent temporal dynamics in the spread of diseases. The mathematical representation of epidemics requires taking time-dependent variables into account, such as the rate at which infections spread and the long-term effects of interventions. The goal of this study is to use an intelligent computing solution based on RNNs to provide numerical performances and interpretations for the SEIR nonlinear system based on the propagation of the Zika virus (SEIRS-PZV) model. The four patient dynamics, namely susceptible patients S(y), exposed patients admitted in a hospital E(y), the fraction of infective individuals I(y), and recovered patients R(y), are represented by the epidemic version of the nonlinear system, or the SEIR model. SEIRS-PZV is represented by ordinary differential equations (ODEs), which are then solved by the Adams method using the Mathematica software to generate a dataset. The dataset was used as an output for the RNN to train the model and examine results such as regressions, correlations, error histograms, etc. For the RNN, we used 100% of the data to train the model, with 15 hidden layers and a delay of 2 seconds. The input for the RNN is a time series sequence from 0 to 5, with a step size of 0.05. In the end, we compared the approximated solution with the exact solution by plotting them on the same graph and generating the absolute error plot for each of the 4 cases of SEIRS-PZV. Predictions made by the model became more accurate as the mean squared error (MSE) decreased; this decrease in the MSE indicated a closer fit to the observed data, with the variance between the model's predicted values and the actual values dropping. A minimal absolute error almost equal to zero was obtained, which further supports the usefulness of the suggested strategy. A small absolute error shows the degree to which the model's predictions match the ground truth values, thus indicating the level of accuracy and precision for the model's output.
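The overall workflow (solve the epidemic ODEs on a dense time grid, then fit an RNN to the resulting trajectories) can be sketched as follows; the rate constants, initial conditions, and network size are placeholders rather than the SEIRS-PZV parameters, and scipy stands in for the Mathematica solver used in the paper.

import numpy as np
import torch
import torch.nn as nn
from scipy.integrate import solve_ivp

def seir(t, x, beta=0.9, sigma=0.5, gamma=0.3):
    S, E, I, R = x
    return [-beta * S * I, beta * S * I - sigma * E, sigma * E - gamma * I, gamma * I]

t_eval = np.arange(0, 5.0, 0.05)                       # time grid 0..5, step 0.05
sol = solve_ivp(seir, (0, 5.0), [0.96, 0.02, 0.02, 0.0], t_eval=t_eval)
traj = torch.tensor(sol.y.T, dtype=torch.float32)      # (T, 4) = S, E, I, R

class StateRNN(nn.Module):
    def __init__(self, hidden=15):                      # hidden size chosen arbitrarily here
        super().__init__()
        self.rnn = nn.RNN(4, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 4)

    def forward(self, x):
        h, _ = self.rnn(x)
        return self.out(h)

model = StateRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
x, y = traj[None, :-1], traj[None, 1:]                 # predict the next state from the current one
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()
print(f"final MSE: {loss.item():.2e}")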

6.
Sensors (Basel) ; 24(11)2024 May 29.
Article in English | MEDLINE | ID: mdl-38894286

ABSTRACT

Research on transformers in remote sensing (RS), which started to increase after 2021, is facing the problem of a relative lack of review. To understand the trends of transformers in RS, we undertook a quantitative analysis of the major research on transformers over the past two years by dividing the application of transformers into eight domains: land use/land cover (LULC) classification, segmentation, fusion, change detection, object detection, object recognition, registration, and others. Quantitative results show that transformers achieve a higher accuracy in LULC classification and fusion, with more stable performance in segmentation and object detection. Combining the analysis results on LULC classification and segmentation, we have found that transformers need more parameters than convolutional neural networks (CNNs). Additionally, further research is also needed regarding inference speed to improve transformers' performance. It was determined that the most common application scenes for transformers in our database are urban, farmland, and water bodies. We also found that transformers are employed in the natural sciences such as agriculture and environmental protection rather than the humanities or economics. Finally, this work summarizes the analysis results of transformers in remote sensing obtained during the research process and provides a perspective on future directions of development.

7.
bioRxiv ; 2024 Jun 08.
Article in English | MEDLINE | ID: mdl-38895477

ABSTRACT

How do biological neural systems efficiently encode, transform and propagate information between the sensory periphery and the sensory cortex about sensory features evolving at different time scales? Are these computations efficient in normative information processing terms? While previous work has suggested that biologically plausible models of such neural information processing may be implemented efficiently within a single processing layer, how such computations extend across several processing layers is less clear. Here, we model propagation of multiple time-varying sensory features across a sensory pathway, by extending the theory of efficient coding with spikes to efficient encoding, transformation and transmission of sensory signals. These computations are optimally realized by a multilayer spiking network with feedforward networks of spiking neurons (receptor layer) and recurrent excitatory-inhibitory networks of generalized leaky integrate-and-fire neurons (recurrent layers). Our model efficiently realizes a broad class of feature transformations, including positive and negative interaction across features, through specific and biologically plausible structures of feedforward connectivity. We find that mixing of sensory features in the activity of single neurons is beneficial because it lowers the metabolic cost at the network level. We apply the model to the somatosensory pathway by constraining it with parameters measured empirically and include in its last node, analogous to the primary somatosensory cortex (S1), two types of inhibitory neurons: parvalbumin-positive neurons realizing lateral inhibition, and somatostatin-positive neurons realizing winner-take-all inhibition. By implementing a negative interaction across stimulus features, this model captures several intriguing empirical observations from the somatosensory system of the mouse, including a decrease of sustained responses from subcortical networks to S1, a non-linear effect of the knock-out of receptor neuron types on the activity in S1, and amplification of weak signals from sensory neurons across the pathway.

8.
Cogn Neurodyn ; 18(3): 1323-1335, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38826641

ABSTRACT

To understand and enhance models that describe various brain regions, it is important to study the dynamics of trained recurrent neural networks. Including Dale's law in such models usually presents several challenges; however, it is an important aspect that allows computational models to better capture the characteristics of the brain. Here we present a framework to train networks under this constraint, and we used it to train them on simple decision-making tasks. We characterized the eigenvalue distributions of the recurrent weight matrices of such networks. Interestingly, we discovered that the non-dominant eigenvalues of the recurrent weight matrix are distributed in a circle with a radius less than 1 for networks whose initial condition before training was random normal, and in a ring for those whose initial condition was random orthogonal. In both cases, the radius depends neither on the fraction of excitatory and inhibitory units nor on the size of the network. The diminution of the radius, compared to networks trained without the constraint, has implications for the activity and dynamics that we discuss here. Supplementary Information: The online version contains supplementary material available at 10.1007/s11571-023-09956-w.
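One common way to impose Dale's law during training, in the spirit of what is described above (though not necessarily the authors' exact parameterization), is to factor the recurrent matrix into nonnegative magnitudes and a fixed sign per presynaptic unit, as in this sketch; unit counts and inputs are placeholders.

import torch
import torch.nn as nn

class DaleRNNCell(nn.Module):
    def __init__(self, n_units=100, frac_exc=0.8):
        super().__init__()
        n_exc = int(frac_exc * n_units)
        sign = torch.ones(n_units)
        sign[n_exc:] = -1.0                           # remaining units are inhibitory
        self.register_buffer("sign", sign)
        self.w_raw = nn.Parameter(torch.randn(n_units, n_units) / n_units ** 0.5)
        self.w_in = nn.Linear(2, n_units)             # e.g. 2 task inputs

    def recurrent_weight(self):
        # |w| times the presynaptic column sign: each unit's outgoing weights share its E/I sign
        return torch.relu(self.w_raw) * self.sign[None, :]

    def forward(self, x, h):
        return torch.tanh(h @ self.recurrent_weight().T + self.w_in(x))

cell = DaleRNNCell()
h = torch.zeros(1, 100)
for t in range(50):                                    # unroll on a dummy input stream
    h = cell(torch.randn(1, 2), h)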

9.
Sci Rep ; 14(1): 12666, 2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38831089

ABSTRACT

In the paper, a new evolutionary technique called Linear Matrix Genetic Programming (LMGP) is proposed. It is a matrix extension of Linear Genetic Programming and its application is data-driven black-box control-oriented modeling in conditions of limited access to training data. In LMGP, the model is in the form of an evolutionarily-shaped program which is a sequence of matrix operations. Since the program has a hidden state, running it for a sequence of input data has a similar effect to using well-known recurrent neural networks such as Long Short-Term Memory (LSTM) or Gated Recurrent Unit (GRU). To verify the effectiveness of the LMGP, it was compared with different types of neural networks. The task of all the compared techniques was to reproduce the behavior of a nonlinear model of an underwater vehicle. The results of the comparative tests are reported in the paper and they show that the LMGP can quickly find an effective and very simple solution to the given problem. Moreover, a detailed comparison of models, generated by LMGP and LSTM/GRU, revealed that the former are up to four times more accurate than the latter in reproducing vehicle behavior.
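To convey the core idea, the following toy interpreter runs a fixed "program" of matrix instructions over a register bank that persists across input samples, which is what gives an LMGP-style program its recurrent, hidden-state-like behavior; the instruction set, register count, and matrix sizes are illustrative assumptions, not the paper's representation.

import numpy as np

def run_program(program, inputs, n_regs=4, dim=3):
    """program: list of (op, dst, src1, src2) acting on dim x dim register matrices."""
    regs = [np.zeros((dim, dim)) for _ in range(n_regs)]   # hidden state persists across samples
    outputs = []
    for x in inputs:                                        # x: (dim, dim) input matrix
        regs[0] = x
        for op, d, a, b in program:
            if op == "add":
                regs[d] = regs[a] + regs[b]
            elif op == "mul":
                regs[d] = regs[a] @ regs[b]
            elif op == "tanh":
                regs[d] = np.tanh(regs[a])
        outputs.append(regs[-1].copy())                     # last register is the model output
    return outputs

program = [("mul", 1, 0, 1), ("add", 2, 1, 2), ("tanh", 3, 2, 2)]
stream = [np.random.randn(3, 3) for _ in range(10)]
ys = run_program(program, stream)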

10.
Heliyon ; 10(11): e32077, 2024 Jun 15.
Article in English | MEDLINE | ID: mdl-38912510

ABSTRACT

Early diagnosis of oral cancer is a critical task in the field of medical science, and one of the most necessary things is to develop sound and effective strategies for early detection. The current research investigates a new strategy to diagnose oral cancer, based on a combination of effective learning and medical imaging, using Gated Recurrent Unit (GRU) networks optimized by an improved model of the NGO (Northern Goshawk Optimization) algorithm. The proposed approach has several advantages over existing methods, including its ability to analyze large and complex datasets, its high accuracy, and its capacity to detect oral cancer at a very early stage. The improved NGO algorithm is utilized to tune the GRU network, which helps to improve the performance of the network and increase the accuracy of the diagnosis. The paper describes the proposed approach and evaluates its performance using a dataset of oral cancer patients. The findings of the study demonstrate the efficiency of the suggested approach in accurately diagnosing oral cancer.
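As a hedged illustration of the overall scheme (a GRU classifier whose hyperparameters are chosen by a population-based search), the sketch below uses a simple random-search population as a stand-in for the improved Northern Goshawk Optimization algorithm, whose update rules are not reproduced here; data shapes and search ranges are placeholders.

import torch
import torch.nn as nn

def build_gru(hidden, layers, n_features=32, n_classes=2):
    class GRUClassifier(nn.Module):
        def __init__(self):
            super().__init__()
            self.gru = nn.GRU(n_features, hidden, num_layers=layers, batch_first=True)
            self.fc = nn.Linear(hidden, n_classes)
        def forward(self, x):
            h, _ = self.gru(x)
            return self.fc(h[:, -1])
    return GRUClassifier()

def fitness(hidden, layers, x, y):
    model = build_gru(hidden, layers)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(20):                                   # a few quick training steps
        opt.zero_grad()
        loss = nn.functional.cross_entropy(model(x), y)
        loss.backward()
        opt.step()
    return (model(x).argmax(1) == y).float().mean().item()   # accuracy (on training data, for brevity)

x, y = torch.randn(64, 10, 32), torch.randint(0, 2, (64,))   # dummy imaging-feature sequences
population = [(torch.randint(16, 129, (1,)).item(), torch.randint(1, 4, (1,)).item())
              for _ in range(5)]                              # candidate (hidden units, layers) pairs
best = max(population, key=lambda p: fitness(*p, x, y))
print("selected (hidden units, layers):", best)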

11.
Biomimetics (Basel) ; 9(6)2024 Jun 14.
Article in English | MEDLINE | ID: mdl-38921237

ABSTRACT

Recurrent neural networks (RNNs) transmit information over time through recurrent connections. In contrast, biological neural networks use many other temporal processing mechanisms. One of these mechanisms is the inter-neuron delays caused by varying axon properties. Recently, this feature was implemented in echo state networks (ESNs), a type of RNN, by assigning spatial locations to neurons and introducing distance-dependent inter-neuron delays. These delays were shown to significantly improve ESN task performance. However, thus far, it is still unclear why distance-based delay networks (DDNs) perform better than ESNs. In this paper, we show that by optimizing inter-node delays, the memory capacity of the network matches the memory requirements of the task. As such, networks concentrate their memory capabilities to the points in the past which contain the most information for the task at hand. Moreover, we show that DDNs have a greater total linear memory capacity, with the same amount of non-linear processing power.
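The distance-based delay idea can be illustrated with the following reservoir sketch, in which neurons receive random 2-D positions and each recurrent connection is delayed in proportion to inter-neuron distance; the delay quantization, sparsity, and scaling are assumptions, not the paper's exact setup.

import numpy as np

rng = np.random.default_rng(0)
N, max_delay = 100, 5
pos = rng.uniform(0, 1, (N, 2))                              # spatial location of each neuron
dist = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
delay = 1 + (dist / dist.max() * (max_delay - 1)).astype(int)  # delays of 1..max_delay steps
W = rng.normal(0, 1, (N, N)) * (rng.random((N, N)) < 0.1)    # sparse reservoir
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()                # scale spectral radius to 0.9
w_in = rng.normal(0, 0.5, N)

def run(u):
    """u: 1-D input signal; returns reservoir states with delayed recurrence."""
    history = np.zeros((max_delay, N))                       # ring buffer of past states
    states = []
    for t, ut in enumerate(u):
        # connection i<-j reads neuron j's state delay[i, j] steps in the past
        delayed = history[(t - delay) % max_delay, np.arange(N)[None, :]]
        x = np.tanh((W * delayed).sum(axis=1) + w_in * ut)
        history[t % max_delay] = x
        states.append(x)
    return np.array(states)

states = run(np.sin(np.linspace(0, 8 * np.pi, 200)))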

12.
Syst Biol ; 2024 Jun 25.
Article in English | MEDLINE | ID: mdl-38916476

ABSTRACT

Models have always been central to inferring molecular evolution and to reconstructing phylogenetic trees. Their use typically involves the development of a mechanistic framework reflecting our understanding of the underlying biological processes, such as nucleotide substitutions, and the estimation of model parameters by maximum likelihood or Bayesian inference. However, deriving and optimizing the likelihood of the data is not always possible under complex evolutionary scenarios or even tractable for large datasets, often leading to unrealistic simplifying assumptions in the fitted models. To overcome this issue, we coupled stochastic simulations of genome evolution with a new supervised deep learning model to infer key parameters of molecular evolution. Our model is designed to directly analyze multiple sequence alignments and estimate per-site evolutionary rates and divergence, without requiring a known phylogenetic tree. The accuracy of our predictions matched that of likelihood-based phylogenetic inference, when rate heterogeneity followed a simple gamma distribution, but it strongly exceeded it under more complex patterns of rate variation, such as codon models. Our approach is highly scalable and can be efficiently applied to genomic data, as we showed on a dataset of 26 million nucleotides from the clownfish clade. Our simulations also showed that the integration of per-site rates obtained by deep learning within a Bayesian framework led to significantly more accurate phylogenetic inference, particularly with respect to the estimated branch lengths. We thus propose that future advancements in phylogenetic analysis will benefit from a semi-supervised learning approach that combines deep-learning estimation of substitution rates, which allows for more flexible models of rate variation, and probabilistic inference of the phylogenetic tree, which guarantees interpretability and a rigorous assessment of statistical support.

13.
Math Biosci Eng ; 21(5): 5996-6018, 2024 May 20.
Article in English | MEDLINE | ID: mdl-38872567

ABSTRACT

Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has been evolving rapidly after causing havoc worldwide in 2020. Since then, it has been very hard to contain the virus owing to its frequently mutating nature. Changes in its genome lead to viral evolution, rendering it more resistant to existing vaccines and drugs. Predicting viral mutations beforehand will help in gearing up against more infectious and virulent versions of the virus, in turn decreasing the damage caused by them. In this paper, we have proposed different NMT (neural machine translation) architectures based on RNNs (recurrent neural networks) to predict mutations in the SARS-CoV-2-selected non-structural proteins (NSP), i.e., NSP1, NSP3, NSP5, NSP8, NSP9, NSP13, and NSP15. First, we created and pre-processed the pairs of sequences from two languages using k-means clustering and nearest neighbors for training a neural translation machine. We also provided insights for training NMTs on long biological sequences. In addition, we evaluated and benchmarked our models to demonstrate their efficiency and reliability.
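Conceptually, the translation setup resembles the following toy encoder-decoder sketch, which "translates" a parent protein fragment into a mutated one using 3-mer tokens; the sequences, vocabulary handling, and network sizes are illustrative placeholders, not the paper's NMT architectures.

import torch
import torch.nn as nn

def kmers(seq, k=3):
    return [seq[i:i + k] for i in range(0, len(seq) - k + 1, k)]

vocab = {kmer: i for i, kmer in enumerate(
    {km for s in ["MKLVF" * 6, "MKLIF" * 6] for km in kmers(s)})}

class Seq2Seq(nn.Module):
    def __init__(self, v, emb=32, hid=64):
        super().__init__()
        self.embed = nn.Embedding(v, emb)
        self.enc = nn.GRU(emb, hid, batch_first=True)
        self.dec = nn.GRU(emb, hid, batch_first=True)
        self.out = nn.Linear(hid, v)

    def forward(self, src, tgt):
        _, h = self.enc(self.embed(src))        # context from the parent sequence
        dec_out, _ = self.dec(self.embed(tgt), h)
        return self.out(dec_out)                # next-token logits for the mutated sequence

def encode(seq):
    return torch.tensor([[vocab[km] for km in kmers(seq)]])

model = Seq2Seq(len(vocab))
src, tgt = encode("MKLVF" * 6), encode("MKLIF" * 6)    # toy parent / mutated fragments
logits = model(src, tgt[:, :-1])
loss = nn.functional.cross_entropy(logits.reshape(-1, len(vocab)), tgt[:, 1:].reshape(-1))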


Subject(s)
COVID-19; Genome, Viral; Mutation; Neural Networks, Computer; SARS-CoV-2; Viral Nonstructural Proteins; SARS-CoV-2/genetics; Humans; COVID-19/virology; COVID-19/transmission; Viral Nonstructural Proteins/genetics; Algorithms
14.
Comput Biol Med ; 177: 108665, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38820775

ABSTRACT

BACKGROUND: Longitudinal data in health informatics studies often present challenges due to sparse observations from each subject, limiting the application of contemporary deep learning for prediction. This issue is particularly relevant in predicting birthweight, a crucial factor in identifying conditions such as macrosomia and large-for-gestational age (LGA). Previous approaches have relied on empirical formulas for estimated fetal weights (EFWs) from ultrasound measurements and mixed-effects models for interim predictions. METHOD: The proposed novel supervised longitudinal learning procedure features a three-step approach. First, EFWs are generated using empirical formulas from ultrasound measurements. Second, nonlinear mixed-effects models are applied to create augmented sequences of EFWs, spanning daily gestational timepoints. This augmentation transforms sparse longitudinal data into a dense parallel sequence suitable for training recurrent neural networks (RNNs). A tailored RNN architecture is then devised to incorporate the augmented sequential EFWs along with non-sequential maternal characteristics. RESULTS: The RNNs are trained on augmented data to predict birthweights, which are further classified for macrosomia and LGA. Application of this supervised longitudinal learning procedure to the Successive Small-for-Gestational-Age Births study yields improved performance in classification metrics. Specifically, sensitivity, area under the receiver operating characteristic curve, and Youden's Index demonstrate enhanced results, indicating the effectiveness of the proposed approach in overcoming sparsity challenges in longitudinal health informatics data. CONCLUSIONS: The integration of mixed-effects models for temporal data augmentation and RNNs on augmented sequences proves effective in accurately predicting birthweights, particularly in the context of identifying excessive fetal growth conditions.
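The two-stage idea (densify sparse ultrasound-based EFW measurements into a daily sequence, then feed the augmented sequence plus static maternal covariates to an RNN) can be sketched as below; the simple per-subject quadratic curve fit stands in for the nonlinear mixed-effects model, and all values and sizes are toy placeholders.

import numpy as np
import torch
import torch.nn as nn

def augment(ga_days, efw, grid):
    """Fit log(EFW) as a quadratic in gestational age and evaluate on a dense daily grid."""
    coef = np.polyfit(ga_days, np.log(efw), deg=2)
    return np.exp(np.polyval(coef, grid))

grid = np.arange(140, 281)                             # daily grid, roughly 20 to 40 weeks
ga, efw = np.array([150, 200, 250]), np.array([220., 900., 2600.])   # 3 sparse ultrasounds
dense = augment(ga, efw, grid)                         # daily EFW sequence

class BirthweightRNN(nn.Module):
    def __init__(self, n_static=4, hid=32):
        super().__init__()
        self.rnn = nn.GRU(1, hid, batch_first=True)
        self.head = nn.Linear(hid + n_static, 1)

    def forward(self, seq, static):
        _, h = self.rnn(seq.unsqueeze(-1))             # seq: (B, T) EFW values
        return self.head(torch.cat([h[-1], static], dim=-1)).squeeze(-1)

model = BirthweightRNN()
seq = torch.tensor(dense, dtype=torch.float32)[None] / 1000.0   # scale grams to kg
static = torch.randn(1, 4)                             # e.g. maternal age, BMI, parity, height
pred_bw = model(seq, static)                           # predicted birthweight (toy units)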


Subject(s)
Fetal Macrosomia; Neural Networks, Computer; Humans; Fetal Macrosomia/diagnostic imaging; Female; Pregnancy; Infant, Newborn; Birth Weight; Gestational Age; Adult; Supervised Machine Learning; Ultrasonography, Prenatal/methods
15.
Heliyon ; 10(9): e30351, 2024 May 15.
Article in English | MEDLINE | ID: mdl-38726158

ABSTRACT

In the context of the burgeoning progression of wireless network technology and the corresponding escalation in the demand for mobile Internet-based multimedia transmission services, the task of preserving and augmenting user satisfaction has emerged as an imperative concern. This necessitates a sophisticated and accurate evaluation of multimedia service quality within the sphere of wireless networks. To systematically address the nuanced issue of user experience quality, the present study introduces a novel method for evaluating multimedia Quality of Experience (QoE) in wireless networks, employing an advanced deep learning model as the underlying analytical framework. Initially, the research undertakes the task of modeling the video session process, giving due consideration to the status of each temporal interval within the session's architecture. Subsequently, the challenge of QoE prediction is dissected and investigated through the lens of recurrent neural networks (RNNs), culminating in the proposition of an all-encompassing QoE prediction model that harmoniously integrates video information, Quality of Service (QoS) data, user behavior analytics, and facial expression analysis. The empirical segment of this research serves to validate the efficacy of the suggested video QoE evaluation method, engaging both quantitative and qualitative comparison metrics with contemporaneous state-of-the-art QoE models, employing the RTVCQoE dataset as the empirical foundation. The experimental findings illuminate that the QoE model elucidated in this study transcends competing models in performance metrics such as PLCC, SRCC, and KRCC. Consequently, this investigation stands as a seminal contribution to academic literature, furnishing an exacting and dependable QoE evaluation methodology. Such a contribution augments the user experience landscape in multimedia services within wireless networks, and instigates further scholarly exploration and technological innovation in the mobile Internet domain.

16.
J Insur Med ; 51(1): 35-40, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38802088

ABSTRACT

METHODOLOGY: A keyword search for artificial intelligence, artificial intelligence in medicine, and artificial intelligence models in PubMed and Google Scholar yielded more than 100 articles, which were reviewed for summation in this article.


Subject(s)
Artificial Intelligence; Humans
17.
Network ; : 1-38, 2024 May 08.
Article in English | MEDLINE | ID: mdl-38717192

ABSTRACT

Generally, financial investments are necessary for portfolio management. However, portfolio prediction becomes complicated, and several processing techniques may cause certain issues while predicting the portfolio. Moreover, the error analysis needs to be validated with efficient performance measures. To solve the problems of portfolio optimization, a new portfolio prediction framework is developed. Initially, a dataset is collected from a standard database that accumulates various companies' portfolios. For forecasting the benefits of companies, a Multi-serial Cascaded Network (MCNet) is employed, which consists of an Autoencoder, a 1D Convolutional Neural Network (1DCNN), and a Recurrent Neural Network (RNN). The prediction output for the different companies is stored using the developed MCNet model for further use. After predicting the benefits, the best company with the highest profit is selected by the Integration of Artificial Rabbit and Hummingbird Algorithm (IARHA). The major contribution of our work is to increase the accuracy of prediction and to choose the optimal portfolio. The implementation is conducted on the Python platform. The result analysis shows that the developed model achieves 0.89% and 0.56% for the RMSE and MAE measures, respectively. Throughout the analysis, the experimentation with the developed model shows enriched performance.
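An illustrative cascade in the spirit of MCNet (autoencoder, 1D CNN, then RNN, trained with a prediction-plus-reconstruction objective) is sketched below; the exact cascade, layer sizes, and loss weighting in the paper may differ.

import torch
import torch.nn as nn

class MCNetSketch(nn.Module):
    def __init__(self, n_features=8, latent=16, hid=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, latent), nn.ReLU())
        self.decoder = nn.Linear(latent, n_features)          # reconstruction branch
        self.cnn = nn.Sequential(nn.Conv1d(latent, hid, kernel_size=3, padding=1), nn.ReLU())
        self.rnn = nn.GRU(hid, hid, batch_first=True)
        self.head = nn.Linear(hid, 1)                          # next-period benefit estimate

    def forward(self, x):                                      # x: (B, T, n_features)
        z = self.encoder(x)                                    # per-timestep latent codes
        recon = self.decoder(z)
        c = self.cnn(z.transpose(1, 2)).transpose(1, 2)        # Conv1d over the time axis
        h, _ = self.rnn(c)
        return self.head(h[:, -1]).squeeze(-1), recon

model = MCNetSketch()
x = torch.randn(16, 60, 8)                                     # 16 companies, 60 days, 8 indicators
pred, recon = model(x)
loss = nn.functional.mse_loss(pred, torch.randn(16)) + \
       0.1 * nn.functional.mse_loss(recon, x)                  # prediction + reconstruction terms
loss.backward()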

18.
Cureus ; 16(4): e58364, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38756254

ABSTRACT

Artificial intelligence (AI) simulates intelligent behavior using computers with minimum human intervention. Recent advances in AI, especially deep learning, have made significant progress in perceptual operations, enabling computers to convey and comprehend complicated input more accurately. Worldwide, fractures affect people of all ages and in all regions of the planet. One of the most prevalent causes of inaccurate diagnosis and medical lawsuits is overlooked fractures on radiographs taken in the emergency room, which can range from 2% to 9%. The workforce will soon be under a great deal of strain due to the growing demand for fracture detection on multiple imaging modalities. A dearth of radiologists worsens this rise in demand as a result of a delay in hiring and a significant percentage of radiologists close to retirement. Additionally, the process of interpreting diagnostic images can sometimes be challenging and tedious. Integrating orthopedic radio-diagnosis with AI presents a promising solution to these problems. There has recently been a noticeable rise in the application of deep learning techniques, namely convolutional neural networks (CNNs), in medical imaging. In the field of orthopedic trauma, CNNs are being documented to operate at the proficiency of expert orthopedic surgeons and radiologists in the identification and categorization of fractures. CNNs can analyze vast amounts of data at a rate that surpasses that of human observations. In this review, we discuss the use of deep learning methods in fracture detection and classification, the integration of AI with various imaging modalities, and the benefits and disadvantages of integrating AI with radio-diagnostics.

19.
PeerJ Comput Sci ; 10: e1947, 2024.
Article in English | MEDLINE | ID: mdl-38699206

ABSTRACT

Diabetic retinopathy (DR) is the leading cause of visual impairment globally. It occurs due to long-term diabetes with fluctuating blood glucose levels. It has become a significant concern for people in the working age group as it can lead to vision loss in the future. Manual examination of fundus images is time-consuming and requires much effort and expertise to determine the severity of the retinopathy. To diagnose and evaluate the disease, deep learning-based technologies have been used, which analyze blood vessels, microaneurysms, exudates, macula, optic discs, and hemorrhages also used for initial detection and grading of DR. This study examines the fundamentals of diabetes, its prevalence, complications, and treatment strategies that use artificial intelligence methods such as machine learning (ML), deep learning (DL), and federated learning (FL). The research covers future studies, performance assessments, biomarkers, screening methods, and current datasets. Various neural network designs, including recurrent neural networks (RNNs), generative adversarial networks (GANs), and applications of ML, DL, and FL in the processing of fundus images, such as convolutional neural networks (CNNs) and their variations, are thoroughly examined. The potential research methods, such as developing DL models and incorporating heterogeneous data sources, are also outlined. Finally, the challenges and future directions of this research are discussed.

20.
Int J Mol Sci ; 25(10)2024 May 18.
Article in English | MEDLINE | ID: mdl-38791544

ABSTRACT

Antimicrobial peptides (AMPs) are promising candidates for new antibiotics due to their broad-spectrum activity against pathogens and reduced susceptibility to resistance development. Deep-learning techniques, such as deep generative models, offer a promising avenue to expedite the discovery and optimization of AMPs. A remarkable example is the Feedback Generative Adversarial Network (FBGAN), a deep generative model that incorporates a classifier during its training phase. Our study aims to explore the impact of enhanced classifiers on the generative capabilities of FBGAN. To this end, we introduce two alternative classifiers for the FBGAN framework, both surpassing the accuracy of the original classifier. The first classifier utilizes the k-mers technique, while the second applies transfer learning from the large protein language model Evolutionary Scale Modeling 2 (ESM2). Integrating these classifiers into FBGAN not only yields notable performance enhancements compared to the original FBGAN but also enables the proposed generative models to achieve comparable or even superior performance to established methods such as AMPGAN and HydrAMP. This achievement underscores the effectiveness of leveraging advanced classifiers within the FBGAN framework, enhancing its computational robustness for AMP de novo design and making it comparable to existing literature.
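The k-mer classifier idea can be illustrated with the sketch below, which featurizes peptides as dipeptide-frequency vectors and trains a small feedback classifier of the kind an FBGAN-style loop could use; the peptide examples, labels, and thresholding comment are toy placeholders.

import torch
import torch.nn as nn
from itertools import product

AA = "ACDEFGHIKLMNPQRSTVWY"
K = 2
KMER_INDEX = {"".join(p): i for i, p in enumerate(product(AA, repeat=K))}   # 400 dipeptides

def kmer_counts(seq):
    v = torch.zeros(len(KMER_INDEX))
    for i in range(len(seq) - K + 1):
        v[KMER_INDEX[seq[i:i + K]]] += 1
    return v / max(len(seq) - K + 1, 1)

peptides = ["GIGKFLHSAKKFGKAFVGEIMNS", "ACDEFGHIKLMNPQRSTVWY"]   # toy AMP / non-AMP examples
labels = torch.tensor([1.0, 0.0])
X = torch.stack([kmer_counts(p) for p in peptides])

clf = nn.Sequential(nn.Linear(len(KMER_INDEX), 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(clf.parameters(), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.binary_cross_entropy_with_logits(clf(X).squeeze(-1), labels)
    loss.backward()
    opt.step()

# In a feedback loop, generated sequences scoring above a threshold would be
# fed back into the generator's training set on the next epoch.
scores = torch.sigmoid(clf(X).squeeze(-1))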


Subject(s)
Antimicrobial Peptides; Antimicrobial Peptides/chemistry; Antimicrobial Peptides/pharmacology; Drug Design/methods; Neural Networks, Computer; Deep Learning; Algorithms