1 - 20 of 317,983
1.
Rev. esp. patol ; 57(2): 77-83, Apr-Jun 2024. tab, ilus
Article Es | IBECS | ID: ibc-232410

Introduction: In a pathological anatomy service, the workload in medical time is analyzed based on the complexity of the samples received, and its distribution among pathologists is assessed; a new computer algorithm that favors an equitable distribution is presented. Methods: Following the second edition of the Spanish guidelines for the estimation of workload in cytopathology and histopathology (medical time) according to the Spanish Pathology Society-International Academy of Pathology (SEAP-IAP) catalog of samples and procedures, we determined the workload units (UCL) per pathologist and the overall UCL of the service, the average workload of the service (MU factor), the time dedicated by each pathologist to healthcare activity, and the optimal number of pathologists according to the workload of the service. Results: We determined 12,197 total annual UCL for the chief pathologist, and 14,702 and 13,842 UCL for the associate pathologists, for an overall total of 40,742 UCL for the whole service. The calculated MU factor is 4.97. The chief pathologist devoted 72.25% of his working day to healthcare activity, while the associate pathologists dedicated 87.09% and 82.01% of their working hours. The optimal number of pathologists for the service is 3.55. Conclusions: The results demonstrate medical work overload and an inequitable distribution of UCL among pathologists. We propose a computer algorithm, linked to the laboratory information system, capable of distributing the workload equitably by taking into account the type of specimen, its complexity, and each pathologist's dedication to healthcare activity.
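As a rough illustration of the kind of assignment rule such an algorithm implies (a hypothetical sketch, not the authors' implementation), the following Python snippet assigns each incoming case, weighted by its UCL, to the pathologist with the lowest accumulated workload relative to their healthcare-dedication fraction; the UCL values and dedication fractions are illustrative, not SEAP-IAP figures.

```python
# Hypothetical greedy assignment: balance accumulated UCL per unit of assistential dedication.
def assign_cases(cases_ucl, dedication):
    """cases_ucl: per-case workload units; dedication: {pathologist: fraction of time on care}."""
    load = {name: 0.0 for name in dedication}
    assignment = []
    for ucl in sorted(cases_ucl, reverse=True):          # hand out the heaviest cases first
        target = min(load, key=lambda p: load[p] / dedication[p])
        load[target] += ucl
        assignment.append((ucl, target))
    return load, assignment

load, _ = assign_cases([3.5, 1.0, 8.0, 2.0, 5.5, 0.5, 4.0],
                       {"chief": 0.72, "adjunct_1": 0.87, "adjunct_2": 0.82})
print(load)   # accumulated UCL per pathologist, roughly proportional to dedication
```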


Humans , Male , Female , Pathology , Workload , Pathologists , Pathology Department, Hospital , Algorithms
2.
Article En | MEDLINE | ID: mdl-38862427

Since its establishment in 2013, BioLiP has become one of the most widely used resources for protein-ligand interactions. Nevertheless, several known issues have emerged over the past decade. For example, the protein-ligand interactions are represented in the form of single chain-based tertiary structures, which may be inappropriate as many interactions involve multiple protein chains (known as quaternary structures). We sought to address these issues, resulting in Q-BioLiP, a comprehensive resource for quaternary structure-based protein-ligand interactions. The major features of Q-BioLiP include: (1) representing protein structures in the form of quaternary structures rather than single chain-based tertiary structures; (2) pairing DNA/RNA chains properly rather than treating them separately; (3) providing both experimental and predicted binding affinities; (4) retaining both biologically relevant and irrelevant interactions to reduce misjudgment of ligands' biological relevance; and (5) developing a new quaternary structure-based algorithm for modelling protein-ligand complex structures. With these new features, Q-BioLiP is expected to be a valuable resource for studying biomolecule interactions, including protein-small molecule interaction, protein-metal ion interaction, protein-peptide interaction, protein-protein interaction, protein-DNA/RNA interaction, and RNA-small molecule interaction. Q-BioLiP is freely available at https://yanglab.qd.sdu.edu.cn/Q-BioLiP/.


Protein Binding , Proteins , Ligands , Proteins/chemistry , Proteins/metabolism , Protein Structure, Quaternary , DNA/metabolism , DNA/chemistry , Databases, Protein , RNA/metabolism , RNA/chemistry , Algorithms
3.
Article En | MEDLINE | ID: mdl-38862430

Tandem duplication (TD) is a major type of structural variation (SV) that plays an important role in novel gene formation and human diseases. However, TDs are often missed or incorrectly classified as insertions by most modern SV detection methods due to the lack of specialized handling of TD-related mutational signals. Herein, we developed a TD detection module for the Pindel tool, referred to as Pindel-TD, based on a TD-specific pattern growth approach. Pindel-TD is capable of detecting TDs across a wide size range at single-nucleotide resolution. Using simulated and real read data from HG002, we demonstrated that Pindel-TD outperforms other leading methods in terms of precision, recall, F1-score, and robustness. Furthermore, by applying Pindel-TD to data generated from the K562 cancer cell line, we identified a TD located at the seventh exon of SAGE1, providing an explanation for its high expression. Pindel-TD is available for non-commercial use at https://github.com/xjtu-omics/pindel.


Software , Humans , K562 Cells , Gene Duplication , Tandem Repeat Sequences/genetics , Algorithms
4.
Chaos ; 34(6)2024 Jun 01.
Article En | MEDLINE | ID: mdl-38838102

This paper introduces two novel scores for detecting local perturbations in networks. For this, we consider a non-Euclidean representation of networks, namely, their embedding onto the Poincaré disk model of hyperbolic geometry. We numerically evaluate the performances of these scores for the detection and localization of perturbations on homogeneous and heterogeneous network models. To illustrate our approach, we study latent geometric representations of real brain networks to identify and quantify the impact of epilepsy surgery on brain regions. Results suggest that our approach can provide a powerful tool for representing and analyzing changes in brain networks following surgical intervention, marking the first application of geometric network embedding in epilepsy research.


Brain , Nerve Net , Humans , Nerve Net/physiology , Brain/physiology , Epilepsy/physiopathology , Models, Neurological , Algorithms , Computer Simulation
5.
Curr Sports Med Rep ; 23(6): 237-244, 2024 Jun 01.
Article En | MEDLINE | ID: mdl-38838687

ABSTRACT: Achilles tendinopathy (AT) is a common overuse injury that is traditionally managed with activity modification and a progressive eccentric strengthening program. This narrative review describes the available evidence for adjunctive procedural interventions in the management of midportion and insertional AT, specifically in the athletic population. Safety and efficacy data from the available literature on extracorporeal shockwave therapy, platelet-rich plasma, high-volume injectate with or without tendon scraping, and percutaneous needle tenotomy are used to propose an algorithm for the treatment of AT in the in-season athlete.


Achilles Tendon , Athletic Injuries , Platelet-Rich Plasma , Tendinopathy , Humans , Tendinopathy/therapy , Achilles Tendon/injuries , Athletic Injuries/therapy , Extracorporeal Shockwave Therapy , Tenotomy/methods , Athletes , Algorithms
6.
Front Public Health ; 12: 1406566, 2024.
Article En | MEDLINE | ID: mdl-38827615

Background: Emerging infectious diseases pose a significant threat to global public health. Timely detection and response are crucial in mitigating the spread of such epidemics. Inferring the onset time and epidemiological characteristics is vital for accelerating early interventions, but accurately predicting these parameters in the early stages remains challenging. Methods: We introduce a Bayesian inference method to fit epidemic models to time series data based on state-space modeling, employing a stochastic Susceptible-Exposed-Infectious-Removed (SEIR) model for transmission dynamics analysis. Our approach uses the particle Markov chain Monte Carlo (PMCMC) method to estimate key epidemiological parameters, including the onset time, the transmission rate, and the recovery rate. The PMCMC algorithm integrates the advantageous aspects of both MCMC and particle filtering to yield a feasible and effective means of approximating the likelihood function, especially when it is intractable to compute directly. Results: To validate the proposed method, we conduct case studies on COVID-19 outbreaks in Wuhan, Shanghai and Nanjing, China. Using early-stage case reports, the PMCMC algorithm accurately predicted the onset time, key epidemiological parameters, and the basic reproduction number. These findings are consistent with empirical studies and the literature. Conclusion: This study presents a robust Bayesian inference method for the timely investigation of emerging infectious diseases. By accurately estimating the onset time and essential epidemiological parameters, our approach is versatile and efficient, extending its utility beyond COVID-19.
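The likelihood inside PMCMC is typically approximated with a bootstrap particle filter over the stochastic model. The sketch below (Python, not the study's code) shows that core step for daily case counts; the binomial transition noise, the Poisson observation model, and the parameter values are illustrative assumptions. In a full PMCMC run, this log-likelihood estimate would feed a Metropolis-Hastings update over the onset time and the transmission and recovery rates.

```python
import numpy as np
from scipy.stats import poisson

def seir_step(state, beta, sigma, gamma, n_pop, rng):
    """One day of a stochastic SEIR model with binomial transition noise (illustrative)."""
    s, e, i, r = state
    new_e = rng.binomial(s, 1.0 - np.exp(-beta * i / n_pop))
    new_i = rng.binomial(e, 1.0 - np.exp(-sigma))
    new_r = rng.binomial(i, 1.0 - np.exp(-gamma))
    return (s - new_e, e + new_e - new_i, i + new_i - new_r, r + new_r), new_i

def particle_loglik(cases, beta, sigma, gamma, n_pop, n_particles=300, seed=0):
    """Bootstrap particle filter estimate of log p(cases | beta, sigma, gamma)."""
    rng = np.random.default_rng(seed)
    particles = [(n_pop - 1, 1, 0, 0)] * n_particles   # one exposed person at onset
    loglik = 0.0
    for y in cases:
        states, incidence = zip(*(seir_step(p, beta, sigma, gamma, n_pop, rng)
                                  for p in particles))
        weights = poisson.pmf(y, np.maximum(incidence, 1e-9))  # observation density
        loglik += np.log(weights.mean() + 1e-300)
        w_sum = weights.sum()
        p_resample = weights / w_sum if w_sum > 0 else None    # None -> uniform resampling
        idx = rng.choice(n_particles, size=n_particles, p=p_resample)
        particles = [states[j] for j in idx]
    return loglik

print(particle_loglik([1, 2, 3, 5, 8, 12], beta=0.6, sigma=1/5, gamma=1/7, n_pop=1_000_000))
```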


Algorithms , Bayes Theorem , COVID-19 , Communicable Diseases, Emerging , Markov Chains , Humans , Communicable Diseases, Emerging/epidemiology , COVID-19/epidemiology , COVID-19/transmission , China/epidemiology , Monte Carlo Method , SARS-CoV-2 , Disease Outbreaks/statistics & numerical data , Time Factors , Epidemiological Models
7.
Front Public Health ; 12: 1326178, 2024.
Article En | MEDLINE | ID: mdl-38827621

Background: Using algorithms and machine learning (ML) techniques, this research aimed to determine the impact of the following factors on the development of Problematic Internet Use (PIU): sociodemographic factors, intensity of Internet use, the different contents adolescents access online, adolescents' online activities, life habits, and different affective temperament types. Methods: The sample included 2,113 adolescents. The following instruments were used: a questionnaire covering socio-demographic characteristics, intensity of Internet use, content categories and online activities, Facebook (FB) usage, and life habits; the Internet Use Disorder Scale (IUDS), on the basis of which subjects were divided into two groups (with or without PIU); and the Temperament Evaluation of Memphis, Pisa, Paris, and San Diego scale for adolescents (A-TEMPS-A). Results: Various ML classification models were trained on our data set. Binary classification models were created (the class-label attribute was the PIU value). Model hyperparameters were optimized using grid search, and models were validated using k-fold cross-validation. Random forest was the model with the best overall results, and time spent on FB and cyclothymic temperament were the variables of highest importance for this model. We also applied the ML techniques Lasso and ElasticNet. With both techniques, the three most important variables for the development of PIU were cyclothymic temperament, longer Internet use, and the desire to use the Internet more than at present. A group of variables with a protective effect (regarding prevention of PIU) was also found with both techniques; the three most important were achievement, searching for content related to art and culture, and hyperthymic temperament. Next, 34 important variables that explain 0.76% of the variance were detected using genetic algorithms. Finally, the binary classification model (with or without PIU) with the best characteristics was trained using an artificial neural network. Conclusion: Variables related to the temporal determinants of Internet usage, cyclothymic temperament, the desire for increased Internet usage, anxious and irritable temperament, online gaming, pornography, and some variables related to FB usage consistently appear as important variables for the development of PIU.
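A minimal sketch of the modelling pipeline described above (not the study's code): a random forest tuned by grid search with stratified k-fold cross-validation on a binary PIU label, followed by inspection of feature importances. The synthetic feature matrix stands in for the questionnaire and temperament variables.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, StratifiedKFold

rng = np.random.default_rng(1)
X = rng.standard_normal((2113, 40))               # 2113 adolescents, 40 predictors (toy data)
y = (rng.random(2113) < 0.2).astype(int)          # 1 = PIU, 0 = no PIU (toy labels)

grid = {"n_estimators": [200, 500], "max_depth": [None, 8], "min_samples_leaf": [1, 5]}
search = GridSearchCV(RandomForestClassifier(random_state=0), grid,
                      cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0),
                      scoring="roc_auc", n_jobs=-1)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))

# feature importances indicate which predictors drive the fitted model
top = np.argsort(search.best_estimator_.feature_importances_)[::-1][:5]
print("most important feature indices:", top)
```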


Internet Addiction Disorder , Machine Learning , Temperament , Humans , Adolescent , Male , Female , Surveys and Questionnaires , Internet Addiction Disorder/psychology , Algorithms , Internet , Adolescent Behavior/psychology , Internet Use/statistics & numerical data , Social Media/statistics & numerical data
8.
Sci Rep ; 14(1): 12823, 2024 06 04.
Article En | MEDLINE | ID: mdl-38834839

The prevalence of cardiovascular disease (CVD) has surged in recent years, making it the foremost cause of mortality among humans. The Electrocardiogram (ECG), being one of the pivotal diagnostic tools for cardiovascular diseases, is increasingly gaining prominence in the field of machine learning. However, prevailing neural network models frequently disregard the spatial dimension features inherent in ECG signals. In this paper, we propose an ECG autoencoder network architecture incorporating low-rank attention (LRA-autoencoder). It is designed to capture potential spatial features of ECG signals by interpreting the signals from a spatial perspective and extracting correlations between different signal points. Additionally, the low-rank attention block (LRA-block) obtains spatial features of electrocardiogram signals through singular value decomposition, and then assigns these spatial features as weights to the electrocardiogram signals, thereby enhancing the differentiation of features among different categories. Finally, we utilize the ResNet-18 network classifier to assess the performance of the LRA-autoencoder on both the MIT-BIH Arrhythmia and PhysioNet Challenge 2017 datasets. The experimental results reveal that the proposed method demonstrates superior classification performance. The mean accuracy on the MIT-BIH Arrhythmia dataset is as high as 0.997, and the mean accuracy and F1-score on the PhysioNet Challenge 2017 dataset are 0.850 and 0.843, respectively.
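One possible reading of the SVD-based re-weighting idea (a hypothetical sketch, not the paper's LRA-block) is to build a rank-k map of an ECG segment via singular value decomposition and use it to re-weight the signal; the rank, the softmax weighting, and the rescaling below are illustrative choices.

```python
import numpy as np

def low_rank_attention(x, k=4):
    """x: (n_leads, n_samples) ECG segment; re-weight it with a rank-k SVD map."""
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    low_rank = (u[:, :k] * s[:k]) @ vt[:k, :]            # rank-k reconstruction
    # a stable softmax over time turns the reconstruction into positive attention weights
    z = low_rank - low_rank.max(axis=-1, keepdims=True)
    weights = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return x * weights * x.shape[-1]                     # rescale to preserve overall magnitude

rng = np.random.default_rng(0)
ecg = rng.standard_normal((12, 500))                     # toy 12-lead, 500-sample segment
print(low_rank_attention(ecg).shape)                     # (12, 500)
```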


Electrocardiography , Neural Networks, Computer , Electrocardiography/methods , Humans , Arrhythmias, Cardiac/diagnosis , Arrhythmias, Cardiac/physiopathology , Machine Learning , Signal Processing, Computer-Assisted , Algorithms , Cardiovascular Diseases/diagnosis
9.
BMC Bioinformatics ; 25(1): 205, 2024 Jun 04.
Article En | MEDLINE | ID: mdl-38834962

BACKGROUND: Although RNA-seq data are traditionally used for quantifying gene expression levels, the same data could be useful in an integrated approach to compute genetic distances as well. Challenges to using mRNA sequences for computing genetic distances include the relatively high conservation of coding sequences and the presence of paralogous and, in some species, homeologous genes. RESULTS: We developed a new computational method, RNA-clique, for calculating genetic distances using assembled RNA-seq data and assessed the efficacy of the method using biological and simulated data. The method employs reciprocal BLASTn followed by graph-based filtering to ensure that only orthologous genes are compared. Each vertex in the graph constructed for filtering represents a gene in a specific sample under comparison, and an edge connects a pair of vertices if the genes they represent are best matches for each other in their respective samples. The distance computation is a function of the BLAST alignment statistics and the constructed graph and incorporates only those genes that are present in some complete connected component of this graph. As a biological testbed we used RNA-seq data of tall fescue (Lolium arundinaceum), an allohexaploid plant (2n = 14 Gb), and bluehead wrasse (Thalassoma bifasciatum), a teleost fish. RNA-clique reliably distinguished individual tall fescue plants by genotype and distinguished bluehead wrasse RNA-seq samples by individual. In tests with simulated RNA-seq data, the ground truth phylogeny was accurately recovered from the computed distances. Moreover, tests of the algorithm parameters indicated that, even with stringent filtering for orthologs, sufficient sequence data were retained for the distance computations. Although comparisons with an alternative method revealed that RNA-clique has relatively high time and memory requirements, the comparisons also showed that RNA-clique's results were at least as reliable as the alternative's for tall fescue data and were much more reliable for the bluehead wrasse data. CONCLUSION: Results of this work indicate that RNA-clique works well as a way of deriving genetic distances from RNA-seq data, thus providing a methodological integration of functional and genetic diversity studies.
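The graph-based filtering step can be sketched as follows (Python with networkx; not the RNA-clique implementation): keep only orthologue groups whose reciprocal-best-hit graph forms a complete connected component spanning all samples, then turn a per-edge similarity into a pairwise distance. The input format and the use of mean fractional identity as the distance are illustrative assumptions; RNA-clique's actual distance is a function of the BLAST alignment statistics.

```python
import networkx as nx

def filter_complete_components(best_hits, n_samples):
    """best_hits: ((sample_a, gene_a), (sample_b, gene_b), fractional_identity) triples,
    each pair already a reciprocal best BLASTn match (assumed input format)."""
    g = nx.Graph()
    for u, v, ident in best_hits:
        g.add_edge(u, v, identity=ident)
    kept = []
    for comp in nx.connected_components(g):
        sub = g.subgraph(comp)
        n = len(comp)
        spans_all_samples = len({sample for sample, _ in comp}) == n_samples
        is_complete = sub.number_of_edges() == n * (n - 1) // 2   # clique check
        if spans_all_samples and is_complete:
            kept.append(sub)
    return kept

def pairwise_distance(components, sample_a, sample_b):
    """Illustrative distance: 1 - mean fractional identity over the kept orthologues."""
    idents = [d["identity"] for sub in components
              for u, v, d in sub.edges(data=True)
              if {u[0], v[0]} == {sample_a, sample_b}]
    return 1.0 - sum(idents) / len(idents) if idents else float("nan")

hits = [(("A", "g1"), ("B", "g7"), 0.975), (("A", "g1"), ("C", "g3"), 0.960),
        (("B", "g7"), ("C", "g3"), 0.981)]
comps = filter_complete_components(hits, n_samples=3)
print(round(pairwise_distance(comps, "A", "B"), 3))   # 0.025
```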


RNA-Seq , RNA-Seq/methods , Sequence Analysis, RNA/methods , Computational Biology/methods , Algorithms
10.
BMC Musculoskelet Disord ; 25(1): 438, 2024 Jun 04.
Article En | MEDLINE | ID: mdl-38834975

BACKGROUND: Machine learning (ML) has shown exceptional promise in various domains of medical research. However, its application in predicting subsequent fragility fractures is still largely unknown. In this study, we aim to evaluate the predictive power of different ML algorithms in this area and identify key features associated with the risk of subsequent fragility fractures in osteoporotic patients. METHODS: We retrospectively analyzed data from patients presented with fragility fractures at our Fracture Liaison Service, categorizing them into index fragility fracture (n = 905) and subsequent fragility fracture groups (n = 195). We independently trained ML models using 27 features for both male and female cohorts. The algorithms tested include Random Forest, XGBoost, CatBoost, Logistic Regression, LightGBM, AdaBoost, Multi-Layer Perceptron, and Support Vector Machine. Model performance was evaluated through 10-fold cross-validation. RESULTS: The CatBoost model outperformed other models, achieving 87% accuracy and an AUC of 0.951 for females, and 93.4% accuracy with an AUC of 0.990 for males. The most significant predictors for females included age, serum C-reactive protein (CRP), 25(OH)D, creatinine, blood urea nitrogen (BUN), parathyroid hormone (PTH), femoral neck Z-score, menopause age, number of pregnancies, phosphorus, calcium, and body mass index (BMI); for males, the predictors were serum CRP, femoral neck T-score, PTH, hip T-score, BMI, BUN, creatinine, alkaline phosphatase, and spinal Z-score. CONCLUSION: ML models, especially CatBoost, offer a valuable approach for predicting subsequent fragility fractures in osteoporotic patients. These models hold the potential to enhance clinical decision-making by supporting the development of personalized preventative strategies.
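A minimal sketch of the evaluation protocol described above (not the study's code): 10-fold cross-validated AUC for a gradient-boosting classifier on a 27-feature tabular matrix. scikit-learn's HistGradientBoostingClassifier stands in for CatBoost, and the synthetic data are purely illustrative.

```python
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(42)
X = rng.standard_normal((1100, 27))          # 27 clinical/lab features, as in the study
y = (rng.random(1100) < 0.18).astype(int)    # toy labels: ~18% subsequent-fracture rate

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
model = HistGradientBoostingClassifier(max_iter=300, learning_rate=0.05)
aucs = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
print(f"mean AUC over 10 folds: {aucs.mean():.3f}")   # near 0.5 on random data; meaningful only on real cohorts
```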


Machine Learning , Osteoporotic Fractures , Humans , Male , Female , Aged , Retrospective Studies , Osteoporotic Fractures/epidemiology , Osteoporotic Fractures/diagnosis , Middle Aged , Aged, 80 and over , Predictive Value of Tests , Risk Assessment/methods , Risk Factors , Osteoporosis/epidemiology , Osteoporosis/diagnosis , Algorithms
11.
BMC Med Imaging ; 24(1): 130, 2024 Jun 04.
Article En | MEDLINE | ID: mdl-38834987

In this study, we propose a novel method for quantifying tortuosity in 3D voxelized objects. As a shape characteristic, tortuosity has been widely recognized as a valuable feature in image analysis, particularly in the field of medical imaging. Our proposed method extends the two-dimensional approach of the Slope Chain Code (SCC) which creates a one-dimensional representation of curves. The utility of 3D tortuosity ( τ 3 D ) as a shape descriptor was investigated by characterizing brain structures. The results of the τ 3 D computation on the central sulcus and the main lobes revealed significant differences between Alzheimer's disease (AD) patients and control subjects, suggesting its potential as a biomarker for AD. We found a p < 0.05 for the left central sulcus and the four brain lobes.


Alzheimer Disease , Brain , Imaging, Three-Dimensional , Humans , Imaging, Three-Dimensional/methods , Alzheimer Disease/diagnostic imaging , Alzheimer Disease/pathology , Brain/diagnostic imaging , Female , Aged , Male , Algorithms , Magnetic Resonance Imaging/methods , Case-Control Studies
12.
Brief Bioinform ; 25(4)2024 May 23.
Article En | MEDLINE | ID: mdl-38842509

Peptide- and protein-based therapeutics are becoming a promising treatment regimen for myriad diseases. Toxicity of proteins is the primary hurdle for protein-based therapies. Thus, there is an urgent need for accurate in silico methods for determining toxic proteins to filter the pool of potential candidates. At the same time, it is imperative to precisely identify non-toxic proteins to expand the possibilities for protein-based biologics. To address this challenge, we proposed an ensemble framework, called VISH-Pred, comprising models built by fine-tuning ESM2 transformer models on a large, experimentally validated, curated dataset of protein and peptide toxicities. The primary steps in the VISH-Pred framework are to efficiently estimate protein toxicities taking just the protein sequence as input, employing an undersampling technique to handle the severe class imbalance in the data and learning representations from fine-tuned ESM2 protein language models, which are then fed to machine learning techniques such as LightGBM and XGBoost. The VISH-Pred framework is able to correctly identify both peptides/proteins with potential toxicity and non-toxic proteins, achieving a Matthews correlation coefficient of 0.737, 0.716 and 0.322 and an F1-score of 0.759, 0.696 and 0.713 on three non-redundant blind tests, respectively, outperforming other methods by over 10% on these quality metrics. Moreover, VISH-Pred achieved the best accuracy and area under the receiver operating characteristic curve on these independent test sets, highlighting the robustness and generalization capability of the framework. By making VISH-Pred available as an easy-to-use web server, we expect it to serve as a valuable asset for future endeavors aimed at discerning the toxicity of peptides and enabling efficient protein-based therapeutics.
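A minimal sketch of the undersampling-plus-boosting stage (not VISH-Pred itself): the majority non-toxic class is randomly undersampled to match the minority class, and a gradient-boosting classifier is trained on fixed-length embeddings. The 1280-dimensional vectors stand in for fine-tuned ESM2 representations, which are assumed to be precomputed; scikit-learn's booster replaces LightGBM/XGBoost here.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import f1_score, matthews_corrcoef
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.standard_normal((5000, 1280))            # toy ESM2-sized embeddings
y = (rng.random(5000) < 0.1).astype(int)         # ~10% toxic: a strong class imbalance

# undersample the majority (non-toxic) class to the size of the minority class
pos = np.flatnonzero(y == 1)
neg = rng.choice(np.flatnonzero(y == 0), size=len(pos), replace=False)
idx = np.concatenate([pos, neg])
X_bal, y_bal = X[idx], y[idx]

X_tr, X_te, y_tr, y_te = train_test_split(X_bal, y_bal, test_size=0.2,
                                          stratify=y_bal, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("MCC:", matthews_corrcoef(y_te, pred), "F1:", f1_score(y_te, pred))
```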


Proteins , Proteins/metabolism , Proteins/chemistry , Machine Learning , Databases, Protein , Computational Biology/methods , Humans , Peptides/toxicity , Peptides/chemistry , Computer Simulation , Algorithms , Software
13.
PLoS One ; 19(6): e0302327, 2024.
Article En | MEDLINE | ID: mdl-38843122

Existing adversarial attack schemes based on unsupervised graph contrastive learning share a common issue: the discreteness of graph structures reduces the reliability of structural gradients, so attacks tend to get trapped in local optima. An adversarial attack method based on momentum gradient candidates is proposed in this research. First, the gradients obtained by back-propagation are transformed into momentum gradients; the gradient update is guided by overlaying the previous gradient information in a certain proportion, which accelerates convergence and improves the accuracy of the gradient update. Second, candidate generation and evaluation are carried out by summing the momentum gradients of the two views and ranking them in descending order of saliency; selecting adversarial samples with stronger perturbation effects in this process effectively improves the success rate of adversarial attacks. Finally, extensive experiments were conducted on three different datasets, and the generated adversarial samples were evaluated against contrastive learning models across two downstream tasks. The results demonstrate that the proposed attack strategy outperforms existing methods and significantly improves convergence speed. In the link prediction task on the Cora dataset, with perturbation rates of 0.05 and 0.1, the attack performance surpasses all baselines, including the supervised baseline methods. The attack method also transfers to other graph representation models, validating its strong transferability.
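The momentum step can be sketched generically as follows (a hypothetical illustration, not the paper's attack code): gradients are accumulated with a decay factor so that previous gradient information is overlaid on the current update, and candidate edge flips are then ranked by the saliency of the accumulated gradient. The grad_fn callback, the adjacency encoding, and the decay factor mu are assumptions.

```python
import numpy as np

def momentum_candidates(grad_fn, adj, steps=10, mu=0.9, top_k=5):
    """grad_fn(adj) -> gradient of the contrastive loss w.r.t. the adjacency matrix (assumed)."""
    momentum = np.zeros_like(adj, dtype=float)
    for _ in range(steps):
        g = grad_fn(adj)
        momentum = mu * momentum + g / (np.abs(g).sum() + 1e-12)   # overlay previous gradients
    saliency = np.abs(momentum)
    iu = np.triu_indices_from(saliency, k=1)          # candidate edges from the upper triangle
    order = np.argsort(saliency[iu])[::-1][:top_k]    # descending order of saliency
    return [(int(iu[0][i]), int(iu[1][i])) for i in order]

adj = np.zeros((6, 6))
adj[0, 1] = adj[1, 0] = adj[2, 3] = adj[3, 2] = 1.0
dummy_grad = lambda a: np.random.default_rng(0).standard_normal(a.shape)  # stand-in gradient oracle
print(momentum_candidates(dummy_grad, adj, top_k=3))
```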


Algorithms , Humans , Machine Learning
14.
PLoS One ; 19(6): e0304284, 2024.
Article En | MEDLINE | ID: mdl-38843129

Agricultural pests and diseases cause major losses in agricultural productivity, leading to significant economic losses and food safety risks. However, accurately identifying and controlling these pests remains very challenging due to the scarcity of labeled data for agricultural pests and the wide variety of pest species with different morphologies. To this end, we propose a two-stage target detection method that combines the Cascade R-CNN and Swin Transformer models. To address the scarcity of labeled data, we employ random cut-and-paste and traditional online augmentation techniques to expand the pest dataset and use the Swin Transformer for basic feature extraction. Subsequently, we designed the SCF-FPN module to enhance the basic features and extract richer pest features. Specifically, the SCF component provides a self-attention mechanism with a flexible sliding window to enable adaptive feature extraction for different pest characteristics, while the feature pyramid network (FPN) enriches features at multiple levels and enhances the discriminative ability of the whole network. Finally, to further improve detection results, we incorporated soft non-maximum suppression (Soft-NMS) and Cascade R-CNN's cascade structure into the optimization process to ensure more accurate and reliable predictions. In a detection task involving 28 pest species, our algorithm achieves 92.5% accuracy, 91.8% recall, and 93.7% mean average precision (mAP), improvements of 12.1%, 5.4%, and 7.6% over the original baseline model. The results demonstrate that our method can accurately identify and localize farmland pests, which can help improve the farmland ecological environment.


Algorithms , Animals , Agriculture/methods , Pest Control/methods , Neural Networks, Computer , Farms , Crops, Agricultural/parasitology
15.
PLoS One ; 19(6): e0300036, 2024.
Article En | MEDLINE | ID: mdl-38843145

With the continuous development of large-scale engineering projects such as construction, relief support, and large-scale relocation in various countries, engineering logistics has attracted much attention. This paper addresses a multimodal material route planning problem (MMRPP), which considers the transportation of engineering material from suppliers to work zones using multiple transport modes. Because of the overall relevance and technical complexity of engineering logistics, we introduce the key processes at the work zones when generating a transport solution, which is more realistic for various real-life applications. We propose a multi-objective multimodal transport route planning model that minimizes total transport cost and total transport time. The model is solved with the ε-constraint method, which transforms the objective of minimizing total transportation cost into a constraint, yielding Pareto-optimal solutions. This addresses the lack of existing research combining engineering logistics with multimodal transportation; the feasibility of the model and algorithm is then verified by examples. The results show that introducing the key processes at the work zones produces more efficient, less time-consuming route plans, and that the results obtained using the ε-constraint method are more reliable than those of traditional methods for solving multi-objective planning problems and better match the decision maker's needs.
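The ε-constraint idea itself is standard and easy to illustrate (the toy problem below is not the paper's model): minimize total transport time while total transport cost, originally a second objective, is moved into a constraint cost ≤ ε; sweeping ε traces an approximate Pareto front. The three modes, their unit costs and times, and the demand are made-up numbers.

```python
import numpy as np
from scipy.optimize import linprog

cost = np.array([4.0, 2.5, 1.2])     # per-unit cost of road, rail, waterway (illustrative)
time = np.array([1.0, 1.8, 3.0])     # per-unit transit time of each mode (illustrative)
demand = 100.0

for eps in (180, 250, 350):
    res = linprog(c=time,                           # objective: total transport time
                  A_ub=[cost], b_ub=[eps],          # epsilon-constraint on total cost
                  A_eq=[[1, 1, 1]], b_eq=[demand],  # all freight must be shipped
                  bounds=[(0, None)] * 3, method="highs")
    print(f"eps={eps}: total time={res.fun:.1f}, tonnage per mode={np.round(res.x, 1)}")
```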


Algorithms , Models, Theoretical , Transportation , Transportation/methods , Engineering/methods , Humans , Workplace
16.
PLoS One ; 19(6): e0303160, 2024.
Article En | MEDLINE | ID: mdl-38843160

One of the primary challenges for an autonomous vehicle (AV) is planning a collision-free path in a dynamic environment, and achieving high-performance obstacle avoidance with velocity-varying obstacles is particularly difficult. To solve this problem, a highly smooth and parameter-independent obstacle avoidance method for autonomous vehicles with velocity-varying obstacles (HSPI-OAM) is presented in this work. The proposed method uses a virtual collision point model to accurately design the desired acceleration, which makes the obtained path highly smooth. At the same time, the method does not depend on parameter tuning and adapts well to different environments. The simulation is implemented on the Matlab-Carsim co-simulation platform, and the results show that the path planned by HSPI-OAM performs well for obstacles with acceleration.


Accidents, Traffic , Accidents, Traffic/prevention & control , Computer Simulation , Automobile Driving , Algorithms , Acceleration , Humans , Models, Theoretical , Automobiles
17.
PLoS One ; 19(6): e0303642, 2024.
Article En | MEDLINE | ID: mdl-38843194

In this manuscript, we present a novel concept known as the fuzzy Sehgal contraction, specifically designed for self-mappings defined in the context of a fuzzy metric space. Our primary objective is to explore the existence and uniqueness of fixed points for such self-mappings in fuzzy metric spaces. To support our conclusions, we present a detailed illustrative example demonstrating that the convergence obtained with our suggested method is superior to those currently recorded in the literature. Moreover, we provide graphical depictions of the convergence behavior, which makes our study more understandable and transparent. Additionally, we extend the application of our results to address the existence and uniqueness of solutions for Volterra integral equations.


Fuzzy Logic , Algorithms , Models, Theoretical
18.
PLoS One ; 19(6): e0304531, 2024.
Article En | MEDLINE | ID: mdl-38843235

With the rapid development of modern communication technology, finding new ways to modulate signals effectively and to classify and recognize the results of automatic modulation has become a core problem in the field of communication. To further improve communication quality and system processing efficiency, this study combines two different neural network algorithms to optimize the traditional automatic modulation classification method. This paper first discusses the basic technologies involved in the communication process, including automatic signal modulation and signal classification. Then, combining parallel convolution and a simple cyclic unit network, automatic modulation classification models with three different connection paths are constructed. The performance tests show that the classification model reaches a stable training and validation state when the two networks are connected, with loss values of 0.13 and 0.18 after 20 and 29 iterations, respectively. In addition, when the signal-to-noise ratio (SNR) is 25 dB, the classification accuracy of the parallel convolutional neural network and simple cyclic unit network model is as high as 0.99. Finally, the classification models of parallel convolutional neural networks and simple cyclic unit networks maintain stable correct classification probabilities when Doppler shift is introduced as interference in a practical application environment. In summary, the designed neural network fusion classification model significantly mitigates the shortcomings of traditional automatic modulation classification methods and further improves the classification accuracy of modulated signals.


Algorithms , Neural Networks, Computer , Signal-To-Noise Ratio , Signal Processing, Computer-Assisted , Humans
19.
PLoS One ; 19(6): e0303890, 2024.
Article En | MEDLINE | ID: mdl-38843255

Anomaly detection in time series data is essential for fraud detection and intrusion monitoring applications. However, it poses challenges due to data complexity and high dimensionality. Industrial applications struggle to process high-dimensional, complex data streams in real time despite existing solutions. This study introduces deep ensemble models to improve traditional time series analysis and anomaly detection methods. Recurrent Neural Networks (RNNs) and Long Short-Term Memory (LSTM) networks effectively handle variable-length sequences and capture long-term relationships. Convolutional Neural Networks (CNNs) are also investigated, especially for univariate or multivariate time series forecasting. The Transformer, an architecture based on Artificial Neural Networks (ANN), has demonstrated promising results in various applications, including time series prediction and anomaly detection. Graph Neural Networks (GNNs) identify time series anomalies by capturing temporal connections and interdependencies between periods, leveraging the underlying graph structure of time series data. A novel feature selection approach is proposed to address challenges posed by high-dimensional data, improving anomaly detection by selecting different or more critical features from the data. This approach outperforms previous techniques in several aspects. Overall, this research introduces state-of-the-art algorithms for anomaly detection in time series data, offering advancements in real-time processing and decision-making across various industrial sectors.


Neural Networks, Computer , Algorithms , Multivariate Analysis , Deep Learning , Time Factors
20.
PLoS One ; 19(6): e0303764, 2024.
Article En | MEDLINE | ID: mdl-38843249

In this paper, we propose a heuristic method that uses network centralities to construct small-weight Steiner trees. The Steiner tree problem in graphs is one of the practical NP-hard combinatorial optimization problems: given a graph and a set of vertices called terminals, the objective is to find a minimum-weight Steiner tree, i.e., a tree containing all the terminals. Conventional construction methods build a Steiner tree from the shortest paths between terminals; if these shortest paths overlap as much as possible, a small-weight Steiner tree is obtained. We therefore propose to use network centralities to decide which edges should be included in order to build a small-weight Steiner tree. Experimental results revealed that using the vertex or edge betweenness centralities contributes to making small-weight Steiner trees.
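One plausible realization of this heuristic (a sketch under assumptions, not the paper's algorithm) is to discount edge weights in proportion to edge betweenness so that terminal-to-terminal shortest paths overlap, take the union of those paths, and prune it to a tree; the discount factor alpha is an illustrative choice.

```python
import itertools
import networkx as nx

def centrality_steiner_tree(g, terminals, alpha=0.5):
    ebc = nx.edge_betweenness_centrality(g, weight="weight")
    max_c = max(ebc.values()) or 1.0
    h = g.copy()
    for u, v, d in h.edges(data=True):
        c = ebc.get((u, v), ebc.get((v, u), 0.0))
        # discount high-betweenness edges so shortest paths prefer (and share) them
        d["biased"] = d["weight"] * (1.0 - alpha * c / max_c)
    union = nx.Graph()
    for s, t in itertools.combinations(terminals, 2):
        nx.add_path(union, nx.shortest_path(h, s, t, weight="biased"))
    for u, v in union.edges():
        union[u][v]["weight"] = g[u][v]["weight"]       # restore original edge weights
    tree = nx.minimum_spanning_tree(union, weight="weight")
    # repeatedly prune non-terminal leaves left over after taking the spanning tree
    leaves = [n for n in tree if tree.degree(n) == 1 and n not in terminals]
    while leaves:
        tree.remove_nodes_from(leaves)
        leaves = [n for n in tree if tree.degree(n) == 1 and n not in terminals]
    return tree

g = nx.grid_2d_graph(4, 4)
nx.set_edge_attributes(g, 1.0, "weight")
t = centrality_steiner_tree(g, [(0, 0), (3, 3), (0, 3)])
print(sum(d["weight"] for _, _, d in t.edges(data=True)))   # total weight of the Steiner tree
```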


Algorithms , Heuristics , Models, Theoretical
...