Results 1 - 20 of 136
1.
Accid Anal Prev ; 207: 107737, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39186914

ABSTRACT

The Pedestrian Collision Avoidance System (PCAS) of an Intelligent Vehicle (IV) can be effective in preventing traffic accidents. However, complicated operating environments pose great challenges to the camera used by the PCAS, so a camera-based PCAS should be thoroughly tested and evaluated before deployment. Traditional simulation tests for camera-based PCAS rely on geometric or physical simulation models, which have low fidelity and are suitable only for the early stages of PCAS development. The Camera-in-the-Loop (CIL) test is a Hardware-in-the-Loop method that embeds the real camera hardware into a virtual simulation system, exploiting the real hardware response while overcoming the fidelity weakness common to pure simulation. In this paper, we construct a CIL test platform and propose a CIL-based method for test scenario generation and scenario parameter impact evaluation for PCAS. First, we construct the CIL test platform and validate both its image quality and functional fidelity to establish CIL credibility. Second, we analyze the PCAS under test and design the corresponding test scenario parameters. To accelerate test scenario generation, a Greedy-Based Combination test method (GBC) built on the CIL is proposed. Chi-square analysis and two-factor analysis of variance are used to quantify the influence of individual and combined scenario parameters on PCAS performance. The experimental results show that GBC improves test speed by a factor of 12 compared with the exhaustive traversal test, while the frequency ratio of each scenario parameter differs from that of the traversal test by no more than 3%. GBC also matches the traversal test in its ability to find scenario parameter combinations that lead to PCAS collisions.


Subjects
Accidents, Traffic; Pedestrians; Humans; Accidents, Traffic/prevention & control; Computer Simulation; Automobiles; Photography/instrumentation
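The abstract does not spell out GBC itself; below is a minimal sketch of a greedy combination-testing loop in the same spirit, assuming discrete scenario parameters and pairwise coverage as the greedy objective (the parameter names and values are illustrative, not the paper's).

```python
import itertools
import random

# Hypothetical scenario parameters for a pedestrian-avoidance test (illustrative).
parameters = {
    "light": ["day", "dusk", "night"],
    "weather": ["clear", "rain", "fog"],
    "ped_speed_mps": [1.0, 1.5, 2.0],
    "ego_speed_kph": [20, 40, 60],
}
keys = list(parameters)

def new_pairs(combo, covered):
    """Parameter-value pairs in `combo` not yet covered by the suite."""
    return {
        (keys[i], combo[i], keys[j], combo[j])
        for i, j in itertools.combinations(range(len(keys)), 2)
    } - covered

def greedy_pairwise_suite(candidates_per_step=50, seed=0):
    rng = random.Random(seed)
    total = sum(
        len(parameters[a]) * len(parameters[b])
        for a, b in itertools.combinations(keys, 2)
    )
    covered, suite = set(), []
    while len(covered) < total:
        # Greedy step: among random candidate combos, keep the one that
        # covers the most still-uncovered pairs.
        cands = [tuple(rng.choice(parameters[k]) for k in keys)
                 for _ in range(candidates_per_step)]
        best = max(cands, key=lambda c: len(new_pairs(c, covered)))
        gained = new_pairs(best, covered)
        if gained:                        # otherwise resample and retry
            suite.append(dict(zip(keys, best)))
            covered |= gained
    return suite

suite = greedy_pairwise_suite()
full = 1
for vals in parameters.values():
    full *= len(vals)
print(f"{len(suite)} greedy tests vs. {full} exhaustive combinations")
```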
2.
Risk Anal ; 2024 Aug 21.
Article in English | MEDLINE | ID: mdl-39166706

ABSTRACT

As urbanization accelerates worldwide, urban flooding is becoming increasingly destructive, making it important to improve emergency scheduling capabilities. Compared with other scheduling problems, urban flood emergency rescue scheduling is more complicated: because a disaster degrades road network passability, a single vehicle type cannot complete all rescue tasks, whereas a reasonable combination of multiple vehicle types working cooperatively can improve rescue efficiency. This study focuses on the urban flood emergency rescue scheduling problem under the actual road network inundation situation. First, the progress and shortcomings of related research are analyzed. Then, a four-level emergency transportation network based on a collaborative water-ground multimodal transshipment mode is established, in which the locations and number of transshipment points vary with the actual inundation situation. Subsequently, an interactive model based on hierarchical optimization is constructed, with travel length, travel time, and waiting time as the hierarchical optimization objectives. Next, an improved A* algorithm based on the number of specific expansion nodes is proposed, and a scheduling decision-making algorithm is built from the improved A* and greedy algorithms. Finally, the proposed decision-making algorithm is applied to a practical example for solution and comparative analysis; the results show that the improved A* algorithm is faster and more accurate, verify the effectiveness of the scheduling model and decision-making algorithm, and yield a scheduling scheme with the shortest travel time for the proposed emergency scheduling problem.
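The paper's improvement concerns how expansion nodes are counted and limited, which the abstract does not detail; as a point of reference, here is a minimal textbook A* on a grid with impassable (inundated) cells, instrumented to report the number of expanded nodes, the quantity such an improvement would aim to reduce.

```python
import heapq

def a_star(grid, start, goal):
    """A* on a 4-connected grid. grid[r][c] is the entry cost of a cell,
    None marks an inundated (impassable) cell. Returns (path, expanded),
    where `expanded` counts expanded nodes."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
    frontier = [(h(start), 0.0, start, None)]
    parents, expanded = {}, 0
    while frontier:
        _, g, node, parent = heapq.heappop(frontier)
        if node in parents:              # already expanded via a cheaper path
            continue
        parents[node] = parent
        expanded += 1
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = parents[node]
            return path[::-1], expanded
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] is not None \
                    and (nr, nc) not in parents:
                ng = g + grid[nr][nc]
                heapq.heappush(frontier, (ng + h((nr, nc)), ng, (nr, nc), node))
    return None, expanded

grid = [[1, 1, 1],
        [None, None, 1],   # partly flooded row
        [1, 1, 1]]
path, expanded = a_star(grid, (0, 0), (2, 0))
print(path, "expanded:", expanded)
```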

3.
Sensors (Basel) ; 24(13)2024 Jun 24.
Article in English | MEDLINE | ID: mdl-39000883

ABSTRACT

In an integrated space-air-ground emergency communication network, users face the challenge of rapidly identifying the optimal network node amid uncertain and stochastically fluctuating network states. This study introduces a Multi-Armed Bandit (MAB) model and proposes an optimization algorithm based on dynamic variance sampling (DVS). The algorithm assumes that the prior distribution of each node's network state is normal and, by constructing the distribution's expected value and variance, makes maximal use of the sample data, thereby balancing exploitation of observed data against exploration of the unknown. A theoretical analysis shows that the algorithm's Bayesian regret grows sublinearly. Simulations confirm that the algorithm outperforms the classical ε-greedy, Upper Confidence Bound (UCB), and Thompson sampling algorithms, achieving higher cumulative rewards, lower total regret, faster convergence, and greater system throughput.
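A sketch of the sampling idea, under the assumption that DVS behaves like Thompson sampling with empirical normal posteriors, scoring each node by a draw from N(sample mean, sample variance / n); the reward stream below is simulated, not from the paper.

```python
import math
import random

class DynamicVarianceSampler:
    """Per-arm Gaussian sampling: poorly sampled nodes keep wide score
    distributions (exploration) while well-sampled good nodes win most
    draws (exploitation)."""

    def __init__(self, n_arms):
        self.counts = [0] * n_arms
        self.means = [0.0] * n_arms
        self.m2 = [0.0] * n_arms   # running sum of squared deviations (Welford)

    def select(self):
        scores = []
        for i, n in enumerate(self.counts):
            if n < 2:
                return i           # play each arm twice to seed the variance
            var = self.m2[i] / (n - 1)
            scores.append(random.gauss(self.means[i], math.sqrt(var / n)))
        return max(range(len(scores)), key=scores.__getitem__)

    def update(self, arm, reward):
        self.counts[arm] += 1
        delta = reward - self.means[arm]
        self.means[arm] += delta / self.counts[arm]
        self.m2[arm] += delta * (reward - self.means[arm])

# Three candidate network nodes with unknown, noisy throughput.
true_means = [0.4, 0.55, 0.7]
bandit = DynamicVarianceSampler(len(true_means))
for _ in range(2000):
    arm = bandit.select()
    bandit.update(arm, random.gauss(true_means[arm], 0.1))
print("pulls per node:", bandit.counts)   # the best node should dominate
```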

4.
Math Med Biol ; 41(3): 157-168, 2024 Sep 16.
Article in English | MEDLINE | ID: mdl-38978123

ABSTRACT

The experimental and theoretical properties of amino acids, the building blocks of peptides and proteins, have been extensively researched. Each characterization method assigns a number to each amino acid; one such assignment is called an amino-acid scale. These scales are of essential value in bioinformatics for explaining and predicting the behaviour of peptides and proteins, but their number is very large: more than a hundred scales relate to hydrophobicity alone. Such a large number of scales can be a computational burden for algorithms that define peptide descriptors by combining several of them, so it is of interest to construct a smaller but still representative set of scales. Here, we present software that does this. We test it on the set of scales in the database constructed by Kawashima and collaborators and show that the number of scales can be reduced significantly without losing much information. The algorithm is implemented in C#. As a result, we provide a smaller database that may be a very useful tool for the analysis and construction of new peptides. Another interesting application of this database would be to compare artificial-intelligence construction of peptides using the complete Kawashima database as input against using the reduced one; similar results in both cases would lend considerable credibility to the constructs of such AI algorithms.


Subjects
Algorithms; Amino Acids; Computational Biology; Software; Peptides; Databases, Protein; Proteins/chemistry; Hydrophobic and Hydrophilic Interactions
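The abstract does not give the reduction procedure itself; one simple greedy redundancy filter in the same spirit, assuming absolute Pearson correlation as the similarity measure and a hypothetical r_max threshold:

```python
import numpy as np

def reduce_scales(scales, r_max=0.95):
    """Greedily keep a representative subset of amino-acid scales: a scale is
    kept only if its absolute Pearson correlation with every already-kept
    scale stays below r_max (a hypothetical threshold)."""
    kept = {}
    for name, values in scales.items():
        v = np.asarray(values, dtype=float)
        if all(abs(np.corrcoef(v, kv)[0, 1]) < r_max for kv in kept.values()):
            kept[name] = v
    return list(kept)

# Toy example: three scales (20 values each, one per amino acid), two of
# them nearly identical.
rng = np.random.default_rng(0)
base = rng.normal(size=20)
scales = {
    "hydrophobicity_A": base,
    "hydrophobicity_B": 2.0 * base + 0.01 * rng.normal(size=20),  # redundant
    "volume": rng.normal(size=20),
}
print(reduce_scales(scales))  # expect one hydrophobicity scale plus 'volume'
```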
5.
Cogn Neurodyn ; 18(3): 907-918, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38826653

ABSTRACT

EEG is the most common test for diagnosing seizures, as it captures the electrical activity of the brain. Automatic seizure detection remains challenging because conventional methods suffer from inefficient feature selection, high computational complexity and runtime, and limited accuracy. This situation calls for a practical framework that detects seizures effectively. Hence, this study proposes a modified Blackman bandpass filter-greedy particle swarm optimization (MBBF-GPSO) approach with a convolutional neural network (CNN) for effective seizure detection. Unwanted signals (noise) are eliminated by the MBBF, which possesses superior stopband attenuation, and only the optimized features are selected using GPSO. To enhance GPSO's ability to find optimal solutions, complementary time- and frequency-domain features are extracted. Through this process, optimized features are obtained by MBBF-GPSO. A CNN layer is then employed to produce the classification output via the objective function; the CNN is chosen for its ability to automatically learn distinctive features for each class. These advantages enable the proposed system to achieve better seizure detection performance, as confirmed by the performance and comparative analyses.
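The modification in MBBF is not specified in the abstract; for orientation, here is a plain Blackman-window bandpass FIR filter (the component MBBF modifies), built with SciPy's firwin, the Blackman window being credited above for its strong stopband attenuation.

```python
import numpy as np
from scipy.signal import firwin, filtfilt

def blackman_bandpass(signal, fs=256.0, low=0.5, high=40.0, numtaps=257):
    """Zero-phase bandpass filtering of an EEG trace with a Blackman-window
    FIR design (a plain Blackman bandpass, not the modified MBBF variant)."""
    taps = firwin(numtaps, [low, high], pass_zero=False,
                  window="blackman", fs=fs)
    return filtfilt(taps, [1.0], signal)   # forward-backward: zero phase

fs = 256.0
t = np.arange(0, 4.0, 1.0 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 60 * t)  # 60 Hz hum
filtered = blackman_bandpass(eeg, fs)
spectrum = np.abs(np.fft.rfft(filtered))
print(spectrum[240] < spectrum[40])   # 60 Hz bin attenuated vs. 10 Hz bin
```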

6.
Sensors (Basel) ; 24(9)2024 May 01.
Article in English | MEDLINE | ID: mdl-38733003

ABSTRACT

With the rapid development of the Internet of Vehicles, virtual reality, autonomous driving, and the industrial Internet, the number of terminal devices in the network is growing explosively. As a result, more and more information is generated at the network edge, dramatically increasing data throughput in the mobile communication network. Mobile edge caching, a key technology of the fifth-generation mobile network, caches popular data on edge servers deployed at the network edge, avoiding the data transmission delay of the backhaul link and the onset of network congestion. As networks grow, however, distributing hot data from cloud servers to edge servers generates huge energy consumption. To support the green and sustainable development of the communication industry and reduce the energy consumed in distributing data to edge caches, we make the first attempt to formulate and solve the problem of edge caching data distribution with minimum energy consumption (ECDDMEC). We model the problem as a constrained optimization problem, prove its NP-hardness, and design a greedy algorithm with O(n²) computational complexity to solve it approximately. Experimental results show that, compared with a distribution strategy in which every edge server requests data directly from the cloud server, the strategy obtained by the algorithm significantly reduces the energy consumption of data distribution.
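The paper's model is not reproduced in the abstract; a minimal greedy sketch under the assumption that each edge server may receive the data either from the cloud or from an already-served edge server, each link carrying a known energy cost (all costs below are made up):

```python
def greedy_distribution(cloud_cost, edge_cost):
    """Greedy minimum-energy cache distribution sketch.

    cloud_cost[i]   : energy to push the data from the cloud to server i
    edge_cost[i][j] : energy to relay the data from server i to server j
    Runs in O(n^2), matching the complexity quoted in the abstract.
    """
    n = len(cloud_cost)
    best = list(cloud_cost)          # cheapest known source per server
    source = {i: "cloud" for i in range(n)}
    served, total, plan = set(), 0.0, {}
    for _ in range(n):
        # Greedy step: serve the cheapest not-yet-served server.
        i = min((j for j in range(n) if j not in served), key=best.__getitem__)
        served.add(i)
        total += best[i]
        plan[i] = source[i]
        for j in range(n):           # serving i may open cheaper relay links
            if j not in served and edge_cost[i][j] < best[j]:
                best[j], source[j] = edge_cost[i][j], i
    return total, plan

cloud = [10.0, 9.0, 12.0]
relay = [[0.0, 2.0, 8.0], [2.5, 0.0, 3.0], [7.0, 3.5, 0.0]]
print(greedy_distribution(cloud, relay))  # cloud feeds server 1, which relays
```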

7.
Sensors (Basel) ; 24(6)2024 Mar 14.
Article in English | MEDLINE | ID: mdl-38544141

ABSTRACT

The last-mile logistics in cities have become an indispensable part of the urban logistics system. This study aims to explore the effective selection of last-mile logistics nodes to enhance the efficiency of logistics distribution, strengthen the image of corporate distribution, further reduce corporate operating costs, and alleviate urban traffic congestion. This paper proposes a clustering-based approach to identify urban logistics nodes from the perspective of geographic information fusion. This method comprehensively considers several key indicators, including the coverage, balance, and urban traffic conditions of logistics distribution. Additionally, we employed a greedy algorithm to identify secondary nodes around primary nodes, thus constructing an effective nodal network. To verify the practicality of this model, we conducted an empirical simulation study using the logistics demand and traffic conditions in the Xianlin District of Nanjing. This research not only identifies the locations of primary and secondary logistics nodes but also provides a new perspective for constructing urban last-mile logistics systems, enriching the academic research related to the construction of logistics nodes. The results of this study are of significant theoretical and practical importance for optimizing urban logistics networks, enhancing logistics efficiency, and promoting the improvement of urban traffic conditions.
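A compact sketch of the two-step idea as described, clustering for primary nodes and greedy coverage for secondary nodes, assuming k-means as the clustering method and synthetic coordinates; the paper's fuller indicator set (balance, traffic conditions) is omitted.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
demand = rng.uniform(0, 10, size=(300, 2))   # synthetic demand points (km)

# Step 1 (clustering): primary nodes as cluster centres of demand.
primaries = KMeans(n_clusters=3, n_init=10, random_state=0).fit(demand).cluster_centers_

# Step 2 (greedy): secondary nodes picked from candidate sites, each step
# taking the site that covers the most still-uncovered demand within `radius`.
candidates = rng.uniform(0, 10, size=(40, 2))
radius, uncovered, secondaries = 1.5, np.ones(len(demand), bool), []
for _ in range(5):
    cover = (np.linalg.norm(demand[None, :, :] - candidates[:, None, :], axis=2)
             <= radius)                       # candidate x demand coverage
    gains = (cover & uncovered).sum(axis=1)
    best = int(gains.argmax())
    if gains[best] == 0:
        break                                 # nothing left to gain
    secondaries.append(candidates[best])
    uncovered &= ~cover[best]

print(len(secondaries), "secondary nodes; uncovered demand:", int(uncovered.sum()))
```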

8.
Proc Natl Acad Sci U S A ; 121(8): e2314228121, 2024 Feb 20.
Article in English | MEDLINE | ID: mdl-38363866

ABSTRACT

In problems such as variable selection and graph estimation, models are characterized by Boolean logical structure such as the presence or absence of a variable or an edge. Consequently, false-positive error or false-negative error can be specified as the number of variables/edges that are incorrectly included or excluded in an estimated model. However, there are several other problems such as ranking, clustering, and causal inference in which the associated model classes do not admit transparent notions of false-positive and false-negative errors due to the lack of an underlying Boolean logical structure. In this paper, we present a generic approach to endow a collection of models with partial order structure, which leads to a hierarchical organization of model classes as well as natural analogs of false-positive and false-negative errors. We describe model selection procedures that provide false-positive error control in our general setting, and we illustrate their utility with numerical experiments.

9.
Stat Med ; 43(9): 1726-1742, 2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38381059

ABSTRACT

Current status data are a type of failure time data that arise when the failure time of a study subject cannot be determined precisely but is known only to occur before or after a random monitoring time. Variable selection methods for failure time data have been discussed extensively in the literature; however, statistical inference on the model chosen by a variable selection method ignores the uncertainty introduced by model selection. To enhance the prediction accuracy of risk quantities such as the survival probability, we propose two optimal model averaging methods under semiparametric additive hazards models. Specifically, based on martingale residual processes, a delete-one cross-validation (CV) process is defined, and two new CV functional criteria are derived for choosing the model weights. Furthermore, we present a greedy algorithm for implementing the techniques, and we establish the asymptotic optimality of the proposed model averaging approaches along with the convergence of the greedy averaging algorithms. A series of simulation experiments demonstrates the effectiveness and superiority of the proposed methods, and a real-data example is provided as an illustration.


Subjects
Algorithms; Models, Statistical; Humans; Proportional Hazards Models; Computer Simulation; Probability
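The paper's criteria are built on martingale residuals for additive hazards models; as a generic stand-in, here is the classical greedy model-averaging loop on held-out predictions, where weights emerge as selection frequencies (squared error replaces the paper's CV functionals):

```python
import numpy as np

def greedy_model_average(cv_preds, y, n_steps=100):
    """Greedy model averaging on cross-validated predictions.

    cv_preds: (n_models, n_samples) held-out predictions of each model.
    Each step adds (with replacement) the model that most reduces the
    squared error of the running average; weights = selection frequencies.
    """
    n_models, n = cv_preds.shape
    counts = np.zeros(n_models)
    avg = np.zeros(n)
    for step in range(1, n_steps + 1):
        # Candidate running averages if model m were added at this step.
        cand = (avg * (step - 1) + cv_preds) / step
        losses = ((cand - y) ** 2).mean(axis=1)
        m = int(losses.argmin())
        counts[m] += 1
        avg = cand[m]
    return counts / n_steps

rng = np.random.default_rng(1)
y = rng.normal(size=200)
preds = np.stack([y + rng.normal(0, s, 200) for s in (0.3, 0.6, 1.2)])
print(greedy_model_average(preds, y).round(2))  # most weight on the best model
```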
10.
Sci Rep ; 14(1): 4694, 2024 Feb 26.
Article in English | MEDLINE | ID: mdl-38409331

ABSTRACT

Community detection, the recognition of groups of densely connected nodes in a network, is one of the fundamental procedures in network analysis. This research improves on the standard, but locally optimized, Greedy Modularity algorithm for community detection. We introduce exploration techniques based on a variety of node and community disassembly strategies: for nodes, non-triad-creating, feeble, random, and inadequate-embeddedness strategies; for communities, low internal edge density, low triad participation ratio, weak, low conductance, and random strategies. We demonstrate the resulting improvement in modularity over the standard approaches across a wide variety of real-world and synthetic networks, and a detailed comparison against other well-known community detection algorithms further illustrates the better performance of our improved method. By addressing the limitations inherent in existing community detection algorithms, this study not only optimizes the detection process but also opens the way to a more nuanced and effective analysis of the structure and dynamics of networks.
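A runnable baseline using NetworkX's implementation of the standard greedy modularity algorithm, followed by one illustrative node-level perturbation in the spirit of the weak-embeddedness strategy named above (the paper's actual disassembly strategies are more varied and are not reproduced here):

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

# Baseline: the locally optimal greedy modularity partition the paper starts from.
G = nx.karate_club_graph()
partition = [set(c) for c in greedy_modularity_communities(G)]
print("communities:", len(partition),
      "modularity:", round(modularity(G, partition), 3))

# Illustrative perturbation: detach weakly embedded nodes (few intra-community
# neighbours) and reassign each to the community where it has the most edges.
for node in list(G):
    home = next(c for c in partition if node in c)
    if sum(1 for nb in G[node] if nb in home) <= 1:   # weak embeddedness
        home.discard(node)
        best = max(partition, key=lambda c: sum(1 for nb in G[node] if nb in c))
        best.add(node)
partition = [c for c in partition if c]               # drop emptied communities
print("after reassignment, modularity:", round(modularity(G, partition), 3))
```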

11.
Med Phys ; 51(3): 1997-2006, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37523254

ABSTRACT

PURPOSE: To clarify the causal relationships among factors contributing to the postoperative survival of patients with esophageal cancer. METHODS: A cohort of 195 patients who underwent surgery for esophageal cancer between 2008 and 2021 was used in the study. All patients had preoperative chest computed tomography (CT) and positron emission tomography-CT (PET-CT) scans before receiving any treatment. From these images, high-throughput quantitative radiomic features, tumor features, and various body composition features were automatically extracted. Causal relationships among these image features, patient demographics, and other clinicopathological variables were analyzed and visualized using a novel score-based directed graph called "Grouped Greedy Equivalence Search" (GGES) while taking prior knowledge into consideration. After supplementing and screening the causal variables, intervention do-calculus adjustment (IDA) scores were calculated to determine the degree of impact of each variable on survival, and a GGES prediction formula was generated from these scores. Ten-fold cross-validation was used to assess the performance of the models, and the prediction results were evaluated using the R-squared (R²) score. RESULTS: The final causal graphical model comprised two PET-based image variables, ten body composition variables, four pathological variables, four demographic variables, two tumor variables, and one radiological variable (Percentile 10). Intramuscular fat mass was found to have the greatest impact on overall survival (months). Percentile 10 and overall TNM (T: tumor, N: nodes, M: metastasis) stage were identified as direct causes of overall survival (months). The GGES causal model outperformed GES in regression prediction (R² = 0.251) (p < 0.05) and avoided implausible causal links that contradict common sense. CONCLUSION: The GGES causal model provides a reliable and straightforward representation of the intricate causal relationships among the variables that affect the postoperative survival of patients with esophageal cancer.


Subjects
Esophageal Neoplasms; Positron Emission Tomography Computed Tomography; Humans; Positron Emission Tomography Computed Tomography/methods; Fluorodeoxyglucose F18; Esophageal Neoplasms/diagnostic imaging; Esophageal Neoplasms/surgery; Positron-Emission Tomography; Tomography, X-Ray Computed; Retrospective Studies
12.
Heliyon ; 9(9): e20133, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37809602

ABSTRACT

Gene Selection (GS) is a strategy aimed at reducing the redundancy, limited expressiveness, and low informativeness of gene expression datasets obtained by DNA microarray technology. These datasets contain a plethora of diverse, high-dimensional samples and genes, with a significant discrepancy between the numbers of samples and genes. The complexities of GS are especially noticeable in microarray expression data analysis owing to the inherent data imbalance. The main goal of this study is to offer a simplified and computationally effective approach to the conundrum of attribute selection in microarray gene expression data. To this end, we apply the Black Widow Optimization algorithm (BWO) to GS using two distinct methodologies: the unaltered BWO variant and a BWO variant hybridized with the Iterated Greedy algorithm (BWO-IG). By improving the local search capability of BWO, this hybridization aims at more efficient gene selection. For empirical validation, a series of tests was carried out on nine benchmark datasets obtained from the gene expression data repository. The results of these tests conclusively show that the BWO-IG technique outperforms the traditional BWO algorithm. Notably, the hybridized BWO-IG technique excels in local search efficiency, facilitating the identification of relevant genes and producing more reliable results in terms of accuracy and the degree of gene pruning. Additionally, a comparative analysis against five modern wrapper Feature Selection (FS) methodologies, namely BIMFOHHO, BMFO, BHHO, BCS, and BBA, puts the effectiveness of the proposed BWO-IG method into context. The comparison highlights BWO-IG's clear superiority in reducing the number of selected genes while attaining remarkably high classification accuracy. The key findings were an average classification accuracy of 94.426, an average fitness value of 0.061, and an average of 2933.767 selected genes.
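A sketch of the Iterated Greedy component on its own (the BWO swarm part is omitted), assuming a k-NN wrapper fitness and a fixed destruction size; the dataset and parameters are synthetic stand-ins.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def fitness(X, y, mask):
    """Wrapper fitness: CV accuracy of k-NN on the selected columns."""
    if not mask.any():
        return 0.0
    return cross_val_score(KNeighborsClassifier(5), X[:, mask], y, cv=3).mean()

def iterated_greedy(X, y, n_iter=30, destroy=5, seed=0):
    """Destruction/reconstruction local search: drop `destroy` random genes
    from the selection mask, then greedily re-decide each one."""
    rng = np.random.default_rng(seed)
    mask = rng.random(X.shape[1]) < 0.5
    best = fitness(X, y, mask)
    for _ in range(n_iter):
        cand = mask.copy()
        for j in rng.choice(X.shape[1], size=destroy, replace=False):
            cand[j] = True
            with_j = fitness(X, y, cand)
            cand[j] = False
            if with_j > fitness(X, y, cand):   # re-include only if it helps
                cand[j] = True
        f = fitness(X, y, cand)
        if f >= best:                          # accept non-worsening solutions
            mask, best = cand, f
    return mask, best

X, y = make_classification(n_samples=120, n_features=40, n_informative=6,
                           random_state=0)
mask, acc = iterated_greedy(X, y)
print(int(mask.sum()), "genes selected, CV accuracy", round(acc, 3))
```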

13.
Front Physiol ; 14: 1264690, 2023.
Article in English | MEDLINE | ID: mdl-37745249

ABSTRACT

Introduction: The inverse problem of electrocardiography noninvasively localizes the origin of undesired cardiac activity, such as a premature ventricular contraction (PVC), from potential recordings from multiple torso electrodes. However, the optimal number and placement of electrodes for an accurate solution of the inverse problem remain undetermined. This study presents a two-step inverse solution for a single dipole cardiac source, which investigates the significance of the torso electrodes on a patient-specific level. Furthermore, the impact of the significant electrodes on the accuracy of the inverse solution is studied. Methods: Body surface potential recordings from 128 electrodes of 13 patients with PVCs and their corresponding homogeneous and inhomogeneous torso models were used. The inverse problem using a single dipole was solved in two steps: First, using information from all electrodes, and second, using a subset of electrodes sorted in descending order according to their significance estimated by a greedy algorithm. The significance of electrodes was computed for three criteria derived from the singular values of the transfer matrix that correspond to the inversely estimated origin of the PVC computed in the first step. The localization error (LE) was computed as the Euclidean distance between the ground truth and the inversely estimated origin of the PVC. The LE obtained using the 32 and 64 most significant electrodes was compared to the LE obtained when all 128 electrodes were used for the inverse solution. Results: The average LE calculated for both torso models and using all 128 electrodes was 28.8 ± 11.9 mm. For the three tested criteria, the average LEs were 32.6 ± 19.9 mm, 29.6 ± 14.7 mm, and 28.8 ± 14.5 mm when 32 electrodes were used. When 64 electrodes were used, the average LEs were 30.1 ± 16.8 mm, 29.4 ± 12.0 mm, and 29.5 ± 12.6 mm. Conclusion: The study found inter-patient variability in the significance of torso electrodes and demonstrated that an accurate localization by the inverse solution with a single dipole could be achieved using a carefully selected reduced number of electrodes.
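A minimal version of the greedy ranking step, assuming one of the singular-value criteria is "keep the smallest singular value of the selected sub-matrix large", i.e. keep the dipole well observable (the paper evaluates three such criteria; the transfer matrix below is random, standing in for a patient-specific one):

```python
import numpy as np

def greedy_electrodes(A, k):
    """Greedily rank electrodes (rows of transfer matrix A) for a
    single-dipole inverse problem by maximizing, at each step, the smallest
    singular value of the sub-matrix of selected rows."""
    remaining, chosen = list(range(A.shape[0])), []
    while len(chosen) < k:
        def score(r):
            sub = A[chosen + [r], :]
            return np.linalg.svd(sub, compute_uv=False).min()
        best = max(remaining, key=score)
        chosen.append(best)
        remaining.remove(best)
    return chosen

rng = np.random.default_rng(0)
A = rng.normal(size=(128, 3))        # 128 electrodes x 3 dipole components
order = greedy_electrodes(A, 32)     # the 32 most significant electrodes
print(order[:8])
```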

14.
Stat Sin ; 33(SI): 1343-1364, 2023 May.
Article in English | MEDLINE | ID: mdl-37455685

ABSTRACT

High-dimensional classification is an important statistical problem with applications in many areas. One widely used classifier is Linear Discriminant Analysis (LDA). In recent years, many regularized LDA classifiers have been proposed for high-dimensional classification. However, these methods rely on inverting a large matrix or solving large-scale optimization problems to render classification rules, which is computationally prohibitive when the dimension is ultra-high. With the emergence of big data, it is increasingly important to develop more efficient algorithms for the high-dimensional LDA problem. In this paper, we propose an efficient greedy search algorithm that depends solely on closed-form formulae to learn a high-dimensional LDA rule. We establish theoretical guarantees of its statistical properties in terms of variable selection and error rate consistency; in addition, we provide an explicit interpretation of the extra information brought by an additional feature in an LDA problem under some mild distributional assumptions. We demonstrate that this new algorithm drastically improves computational speed compared with other high-dimensional LDA methods, while maintaining comparable or even better classification performance.
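The paper's algorithm advances by closed-form update formulae; a naive forward-greedy LDA search that simply refits at each step conveys the search structure, at the cost of the speed the closed forms buy:

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def greedy_lda(X, y, max_features=10):
    """Forward greedy search for an LDA rule: repeatedly add the feature
    that most improves CV accuracy; stop when no feature helps."""
    selected, best_acc = [], 0.0
    for _ in range(max_features):
        scores = {
            j: cross_val_score(LinearDiscriminantAnalysis(),
                               X[:, selected + [j]], y, cv=5).mean()
            for j in range(X.shape[1]) if j not in selected
        }
        j, acc = max(scores.items(), key=lambda kv: kv[1])
        if acc <= best_acc:
            break
        selected.append(j)
        best_acc = acc
    return selected, best_acc

X, y = make_classification(n_samples=200, n_features=50, n_informative=5,
                           random_state=0)
print(greedy_lda(X, y))
```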

15.
Sensors (Basel) ; 23(13)2023 Jun 28.
Article in English | MEDLINE | ID: mdl-37447839

ABSTRACT

Vehicular ad hoc networks (VANETs) can provide technical support and solutions for building intelligent and efficient transportation systems, and the routing protocol directly affects VANET efficiency. Rapid node movement and uneven density distribution undermine routing stability and data transmission efficiency in VANETs. To address the local-optimum and routing-loop problems of the path-aware greedy perimeter stateless routing protocol (PA-GPSR) in sparse urban networks, a weight-based path-aware greedy perimeter stateless routing protocol (W-PAGPSR) is proposed. The protocol operates in two stages. First, in the route establishment stage, node distance, reliable node density, cumulative communication duration, and node movement direction are combined into a weight expressing the communication reliability of a node, and the next-hop node is selected using a weighted greedy forwarding strategy to achieve reliable packet transmission. Second, in the route maintenance stage, the next-hop node is selected using a weighted perimeter forwarding strategy based on the packet delivery angle and reliable node density, achieving route repair. Simulation results show that, compared with the greedy perimeter stateless routing protocol (GPSR), the maximum distance-minimum angle greedy perimeter stateless routing protocol (MM-GPSR), and PA-GPSR, the proposed protocol reduces the packet loss rate by an average of 24.47%, 25.02%, and 14.12%, respectively; reduces the average end-to-end delay by an average of 48.34%, 79.96%, and 21.45%, respectively; and increases network throughput by an average of 47.68%, 58.39%, and 20.33%, respectively. The protocol thus improves network throughput while reducing the average end-to-end delay and packet loss rate.


Subjects
Algorithms; Wireless Technology; Reproducibility of Results; Computer Simulation; Computer Communication Networks
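A sketch of the weighted greedy forwarding decision, with the four listed factors folded into one score; the weights, normalizations, and tie-breaking below are assumptions, not the protocol's actual definitions.

```python
import math

def next_hop(current, neighbors, dest, w=(0.4, 0.2, 0.2, 0.2)):
    """Weight-based greedy forwarding sketch: score each neighbor that makes
    geographic progress toward `dest` by distance progress, reliable-neighbor
    density, cumulative link duration, and heading alignment (hypothetical
    weights `w`); fall back to perimeter/repair mode if none qualifies."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def score(nb):
        progress = dist(current["pos"], dest) - dist(nb["pos"], dest)
        dx, dy = dest[0] - nb["pos"][0], dest[1] - nb["pos"][1]
        heading = (nb["vel"][0] * dx + nb["vel"][1] * dy) / (
            (math.hypot(*nb["vel"]) or 1e-9) * (math.hypot(dx, dy) or 1e-9))
        return (w[0] * progress + w[1] * nb["density"]
                + w[2] * nb["link_time"] + w[3] * heading)

    forward = [nb for nb in neighbors
               if dist(nb["pos"], dest) < dist(current["pos"], dest)]
    return max(forward, key=score, default=None)   # None -> repair mode

me = {"pos": (0.0, 0.0)}
nbs = [
    {"id": 1, "pos": (50, 10), "vel": (10, 0), "density": 0.8, "link_time": 12},
    {"id": 2, "pos": (40, -5), "vel": (-8, 1), "density": 0.3, "link_time": 3},
]
print(next_hop(me, nbs, dest=(300.0, 0.0))["id"])  # picks the reliable node 1
```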
16.
Sensors (Basel) ; 23(13)2023 Jul 04.
Article in English | MEDLINE | ID: mdl-37447983

ABSTRACT

Network lifetime and localization are critical design factors for many wireless sensor network (WSN) applications. These networks may be randomly deployed and left unattended for prolonged periods, so node localization must be performed after deployment, and mechanisms are needed to extend the network lifetime: sensor nodes are usually constrained, battery-powered devices, and replacing them can be costly or sometimes impossible, e.g., in hostile environments. To this end, this work proposes the energy-aware connected k-neighborhood (ECKN): a joint position estimation, packet routing, and sleep scheduling mechanism. To the best of our knowledge, such integrated solutions for WSNs are lacking. The proposed localization algorithm performs trilateration using the positions of a mobile sink and already-localized neighbor nodes to estimate the positions of sensor nodes. A routing protocol based on the well-known greedy geographic forwarding (GGF) is also introduced. Like GGF, it considers the positions of neighbors to decide the best forwarding node, but it also considers node residual energy to guarantee that the forwarding node can deliver the packet. A sleep scheduler based on the connected k-neighborhood (CKN) is introduced as well; it decides which nodes switch to sleep mode while keeping the network connected, extending the network lifetime. An extensive set of performance evaluation experiments shows that ECKN not only extends the network lifetime and localizes nodes, but does so while sustaining an acceptable packet delivery ratio and reducing network overhead.


Subjects
Computer Communication Networks; Wireless Technology; Computer Simulation; Physical Phenomena; Algorithms
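The localization step admits a compact sketch: standard least-squares trilateration from the positions of the mobile sink and already-localized neighbors, using the usual linearization of the range equations (subtracting the first sphere equation from the others yields a linear system).

```python
import numpy as np

def trilaterate(anchors, dists):
    """Estimate a node position from >=3 anchor positions and ranged
    distances. Subtracting the first sphere equation |x - a_0|^2 = d_0^2
    from the others linearizes the system, solved here by least squares."""
    anchors, dists = np.asarray(anchors, float), np.asarray(dists, float)
    x0, d0 = anchors[0], dists[0]
    A = 2.0 * (anchors[1:] - x0)
    b = (d0 ** 2 - dists[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(x0 ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

true = np.array([3.0, 4.0])
anchors = [[0, 0], [10, 0], [0, 10], [10, 10]]   # sink + localized neighbors
dists = [np.linalg.norm(true - a) for a in anchors]
print(trilaterate(anchors, dists))   # ~ [3, 4]
```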
17.
J Comb Optim ; 45(5): 117, 2023.
Article in English | MEDLINE | ID: mdl-37304048

ABSTRACT

Thanks to the mass adoption of the internet and mobile devices, users of social media can seamlessly and spontaneously connect with their friends, followers, and followees. Social media networks have consequently become a major venue for broadcasting and relaying information, and they exert great influence on many aspects of people's daily lives. Locating influential users in social media has therefore become crucially important for the success of many viral marketing, cyber security, politics, and safety-related applications. In this study, we address the problem by solving the tiered influence and activation thresholds target set selection problem, which is to find the seed nodes that can influence the most users within a limited time frame. Both the minimum influential seeds and the maximum influence within budget problems are considered. The study also proposes several models exploiting different requirements on seed node selection, such as maximum activation, early activation, and dynamic thresholds. These time-indexed integer programming models are computationally difficult because of the large number of binary variables needed to model influence actions at each time epoch. To address this challenge, this paper designs and leverages several efficient algorithms, namely Graph Partition, Node Selection, a Greedy algorithm, a recursive threshold back algorithm, and a two-stage approach in time, aimed especially at large-scale networks. Computational results show that it is beneficial to apply either the breadth-first-search or depth-first-search greedy algorithm on large instances, while algorithms based on node selection methods perform better on long-tailed networks.
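A minimal greedy baseline for the maximum-influence-within-budget variant, assuming a deterministic activation-threshold diffusion with a bounded time horizon (the paper's tiered thresholds and integer programming models are beyond this sketch):

```python
def spread(graph, thresholds, seeds, horizon):
    """Deterministic threshold diffusion: a node activates once its number of
    active neighbors reaches its threshold; runs at most `horizon` rounds
    (the limited time frame)."""
    active = set(seeds)
    for _ in range(horizon):
        new = {v for v in graph if v not in active
               and sum(u in active for u in graph[v]) >= thresholds[v]}
        if not new:
            break
        active |= new
    return active

def greedy_seeds(graph, thresholds, k, horizon):
    """Greedy seed selection: each step adds the node with the largest
    marginal activation gain (a standard baseline for this problem class)."""
    seeds = set()
    for _ in range(k):
        best = max((v for v in graph if v not in seeds),
                   key=lambda v: len(spread(graph, thresholds,
                                            seeds | {v}, horizon)))
        seeds.add(best)
    return seeds

graph = {1: [2, 3], 2: [1, 3, 4], 3: [1, 2, 5], 4: [2, 5], 5: [3, 4]}
thresholds = {1: 1, 2: 1, 3: 2, 4: 1, 5: 2}
seeds = greedy_seeds(graph, thresholds, k=1, horizon=3)
print(seeds, "->", spread(graph, thresholds, seeds, horizon=3))
```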

18.
Comput Biol Chem ; 104: 107878, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37167861

ABSTRACT

RNA (ribonucleic acid) structure prediction has many applications in health science and drug discovery, owing to RNA's importance in several life-regulatory processes. Yet despite significant advances in the closely related field of protein structure prediction, RNA 3D structure remains tremendously challenging to predict, especially for long sequences. In this regard, the approach taken by Rosetta FARFAR2 (Fragment Assembly of RNA with Full-Atom Refinement, version 2) has shown promising results, but the algorithm is non-deterministic by nature. In this paper, we develop P-FARFAR2: a parallel enhancement of FARFAR2 that increases its ability to assemble low-energy structures through multithreaded, greedy exploration of random configurations. This strategy, which appears in the literature under the term "parallel mechanism", is made viable through two measures: first, the synchronization window is coarsened to several Monte Carlo cycles; second, all but one of the threads are designated as auxiliary and perform a weakened version of the problem. In empirical analysis on a diverse range of RNA structures, we achieve statistically significant reductions in the energy levels of the resulting samples. Consequently, despite the moderate-to-weak correlation between energy levels and prediction accuracy, this improvement carries over to accuracy measurements.


Subjects
RNA; Software; RNA/chemistry; Algorithms; Proteins/chemistry; Monte Carlo Method
19.
SLAS Technol ; 28(4): 264-277, 2023 08.
Article in English | MEDLINE | ID: mdl-36997066

ABSTRACT

In laboratory automation of life science experiments, coordinating specialized instruments and human experimenters across various experimental procedures is important for minimizing execution time. In particular, scheduling life science experiments requires the consideration of time constraints by mutual boundaries (TCMB) and can be formulated as the "scheduling for laboratory automation in biology" (S-LAB) problem. However, existing scheduling methods for S-LAB problems have difficulty obtaining feasible solutions to large scheduling problems quickly enough for real-time use. In this study, we propose SAGAS (Simulated annealing and greedy algorithm scheduler), a fast schedule-finding method for S-LAB problems. SAGAS combines simulated annealing with a greedy algorithm to find a scheduling solution with the shortest possible execution time. Scheduling real experimental protocols shows that SAGAS can find feasible or optimal solutions in practicable computation time for various S-LAB problems. Furthermore, the reduced computation time enables a systematic search for the laboratory configuration with minimum execution time by simulating schedules for various configurations. This study provides a convenient scheduling method for life science automation laboratories and presents a new possibility for designing laboratory configurations.


Subjects
Algorithms; Automation, Laboratory; Humans; Laboratories
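A toy version of the SA-plus-greedy pairing: simulated annealing searches over job orders while a greedy list-scheduling decoder assigns jobs to instruments; the TCMB time-window constraints that make real S-LAB instances hard are omitted.

```python
import math
import random

def makespan(order, durations, n_machines):
    """Greedy list-scheduling decoder: dispatch jobs in the given order,
    each to the instrument that frees up earliest."""
    free = [0.0] * n_machines
    for job in order:
        m = min(range(n_machines), key=free.__getitem__)
        free[m] += durations[job]
    return max(free)

def anneal(durations, n_machines, t0=10.0, cooling=0.995, steps=5000, seed=0):
    """Simulated annealing over job orders, greedy decoder inside."""
    rng = random.Random(seed)
    order = list(range(len(durations)))
    cur = best = makespan(order, durations, n_machines)
    t = t0
    for _ in range(steps):
        i, j = rng.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]        # propose a swap
        cand = makespan(order, durations, n_machines)
        if cand <= cur or rng.random() < math.exp(-(cand - cur) / t):
            cur = cand
            best = min(best, cur)
        else:
            order[i], order[j] = order[j], order[i]    # revert
        t *= cooling
    return best

rng = random.Random(1)
durations = [rng.uniform(1, 9) for _ in range(20)]
print(round(anneal(durations, n_machines=3), 2),
      "vs. lower bound", round(sum(durations) / 3, 2))
```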
20.
Biomed Signal Process Control ; 84: 104718, 2023 Jul.
Article in English | MEDLINE | ID: mdl-36811003

ABSTRACT

Feature Selection (FS) techniques extract the most recognizable features for improving the performance of classification methods for medical applications. In this paper, two intelligent wrapper FS approaches based on a new metaheuristic algorithm named the Snake Optimizer (SO) are introduced. The binary SO, called BSO, is built based on an S-shape transform function to handle the binary discrete values in the FS domain. To improve the exploration of the search space by BSO, three evolutionary crossover operators (i.e., one-point crossover, two-point crossover, and uniform crossover) are incorporated and controlled by a switch probability. The two newly developed FS algorithms, BSO and BSO-CV, are implemented and assessed on a real-world COVID-19 dataset and 23 disease benchmark datasets. According to the experimental results, the improved BSO-CV significantly outperformed the standard BSO in terms of accuracy and running time in 17 datasets. Furthermore, it shrinks the COVID-19 dataset's dimension by 89% as opposed to the BSO's 79%. Moreover, the adopted operator on BSO-CV improved the balance between exploitation and exploration capabilities in the standard BSO, particularly in searching and converging toward optimal solutions. The BSO-CV was compared against the most recent wrapper-based FS methods; namely, the hyperlearning binary dragonfly algorithm (HLBDA), the binary moth flame optimization with Lévy flight (LBMFO-V3), the coronavirus herd immunity optimizer with greedy crossover operator (CHIO-GC), as well as four filter methods with an accuracy of more than 90% in most benchmark datasets. These optimistic results reveal the great potential of BSO-CV in reliably searching the feature space.
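Two of the building blocks named above admit short sketches: an S-shaped transfer function binarizing a continuous position, and a one-point crossover applied with a switch probability; parameter values are illustrative, not the paper's.

```python
import math
import random

def s_shape(x):
    """S-shaped transfer function mapping a continuous position to [0, 1]."""
    return 1.0 / (1.0 + math.exp(-x))

def binarize(position, rng):
    """Binary position: bit j is 1 with probability S(x_j), as in binary
    metaheuristics of the kind described above."""
    return [1 if rng.random() < s_shape(x) else 0 for x in position]

def one_point_crossover(a, b, rng, p_switch=0.5):
    """One of the three evolutionary operators added in BSO-CV, applied
    with a switch probability (value here is a placeholder)."""
    if rng.random() > p_switch:
        return a[:], b[:]
    cut = rng.randrange(1, len(a))
    return a[:cut] + b[cut:], b[:cut] + a[cut:]

rng = random.Random(0)
parent1 = binarize([rng.gauss(0, 1) for _ in range(10)], rng)
parent2 = binarize([rng.gauss(0, 1) for _ in range(10)], rng)
print(one_point_crossover(parent1, parent2, rng))
```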
