Results 1 - 20 of 35
1.
Expert Syst ; 36(5)2019 Oct.
Article in English | MEDLINE | ID: mdl-33162636

ABSTRACT

In this paper, the problem of mining complex temporal patterns in the context of multivariate time series is considered. A new method called the Fast Temporal Pattern Mining with Extended Vertical Lists is introduced. The method is based on an extension of the level-wise property, which requires a more complex pattern to start at positions within a record where all of the subpatterns of the pattern start. The approach is built around a novel data structure called the Extended Vertical List that tracks positions of the first state of the pattern inside records and links them to appropriate positions of a specific subpattern of the pattern called the prefix. Extensive computational results indicate that the new method performs significantly faster than the previous version of the algorithm for Temporal Pattern Mining; however, the increase in speed comes at the expense of increased memory usage.
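The Extended Vertical List structure itself is not specified in this abstract; as a rough, hypothetical illustration of the underlying vertical-format idea (per-symbol occurrence lists that are joined so an extended pattern keeps the start position of its first state, per the level-wise property), consider this simplified sketch over symbolic sequences:

```python
from collections import defaultdict

def vertical_lists(records):
    """Vertical representation: for each symbol, the positions at which
    it occurs in every record (record index -> sorted position list)."""
    vl = defaultdict(lambda: defaultdict(list))
    for rid, record in enumerate(records):
        for pos, symbol in enumerate(record):
            vl[symbol][rid].append(pos)
    return vl

def extend(prefix_positions, symbol_vl):
    """Join step: extend a prefix pattern by one symbol. An occurrence is
    keyed by the start position of the pattern's first state, so a more
    complex pattern can only start where its subpatterns start."""
    out = defaultdict(list)
    for rid, starts in prefix_positions.items():
        sym_pos = symbol_vl.get(rid, [])
        for s in starts:
            # the extension must occur strictly after the start position
            if any(p > s for p in sym_pos):
                out[rid].append(s)
    return out

records = ["abcab", "acbba"]
vl = vertical_lists(records)
# start positions of 'a' that are followed later by a 'b', per record
ab = extend(vl['a'], vl['b'])
```

In record "abcab" both occurrences of 'a' (positions 0 and 3) are later followed by a 'b'; in "acbba" only the first one is.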

2.
IEEE Trans Cybern ; 53(7): 4619-4629, 2023 Jul.
Article in English | MEDLINE | ID: mdl-34910659

ABSTRACT

Realistic epidemic spreading is usually driven by traffic flow in networks, which is not captured in classic diffusion models. Moreover, the progress of a node's infection from mild to severe phase has not been particularly addressed in previous epidemic modeling. To address these issues, we propose a novel traffic-driven epidemic spreading model by introducing a new epidemic state, that is, the severe state, which characterizes the serious infection of a node different from the initial mild infection. We derive the dynamic equations of our model with the tools of individual-based mean-field approximation and continuous-time Markov chain. We find that, besides infection and recovery rates, the epidemic threshold of our model is determined by the largest real eigenvalue of a communication frequency matrix we construct. Finally, we study how the epidemic spreading is influenced by representative distributions of infection control resources. In particular, we observe that the uniform and Weibull distributions of control resources, which have very close performance, are much better than the Pareto distribution in suppressing the epidemic spreading.
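The paper's communication frequency matrix is not reproduced in the abstract; the sketch below only illustrates the kind of spectral threshold condition described, using a made-up symmetric 4-node frequency matrix and the classical mean-field condition that an epidemic persists when the effective spreading rate exceeds the inverse of the largest eigenvalue (rates are illustrative):

```python
import numpy as np

# Hypothetical 4-node communication-frequency matrix (entry f_ij = how
# often node i exchanges packets with node j); values are invented.
F = np.array([[0., 2., 1., 0.],
              [2., 0., 3., 1.],
              [1., 3., 0., 2.],
              [0., 1., 2., 0.]])

lam_max = max(np.linalg.eigvals(F).real)  # largest real eigenvalue

beta, delta = 0.2, 0.5       # assumed infection and recovery rates
threshold = 1.0 / lam_max    # mean-field epidemic threshold
spreads = beta / delta > threshold
```

For a nonnegative matrix the largest real eigenvalue lies between the minimum and maximum row sums (here between 3 and 6), so the threshold is well below the chosen effective rate of 0.4.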


Subjects
Epidemics, Markov Chains, Communication, Diffusion
3.
Ann Math Artif Intell ; 91(2-3): 349-372, 2023.
Article in English | MEDLINE | ID: mdl-36721866

ABSTRACT

In this paper, we investigate a novel physician scheduling problem in the Mobile Cabin Hospitals (MCH) constructed in Wuhan, China during the outbreak of the COVID-19 pandemic. The shortage of physicians and the surge of patients posed great challenges for physician scheduling in MCH. The purpose of the studied problem is to obtain an approximately optimal schedule that minimizes the physicians' workload while satisfying the service requirements of patients as much as possible. We propose a novel hybrid algorithm integrating particle swarm optimization (PSO) and variable neighborhood descent (VND), named PSO-VND, to find an approximate global optimum. A self-adaptive mechanism is developed to choose the updating operators dynamically during the search. Based on the special features of the problem, three neighborhood structures are designed and searched in VND to improve the solution. The experimental comparisons show that the proposed PSO-VND significantly outperforms the other competitors.
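The PSO component and the paper's three problem-specific neighborhoods are not detailed in the abstract; the sketch below shows only the generic variable neighborhood descent loop, applied to a toy assignment objective (the neighborhoods and cost function are illustrative, not those of the paper):

```python
def vnd(solution, neighborhoods, cost):
    """Variable neighborhood descent: scan neighborhood structures in
    order; restart from the first one whenever an improvement is found."""
    k = 0
    while k < len(neighborhoods):
        best = min(neighborhoods[k](solution), key=cost, default=solution)
        if cost(best) < cost(solution):
            solution, k = best, 0   # improvement: back to first structure
        else:
            k += 1                  # no improvement: try next structure
    return solution

# Toy problem: order values so that large weights meet small values.
def swap_neighbors(perm):
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            q = list(perm)
            q[i], q[j] = q[j], q[i]
            yield tuple(q)

def reverse_neighbors(perm):
    for i in range(len(perm)):
        for j in range(i + 2, len(perm) + 1):
            yield tuple(perm[:i]) + tuple(reversed(perm[i:j])) + tuple(perm[j:])

weights = [4, 3, 2, 1]
cost = lambda p: sum(w * x for w, x in zip(weights, p))
best = vnd((4, 3, 2, 1), [swap_neighbors, reverse_neighbors], cost)
```

On this toy instance the descent reaches the global optimum (1, 2, 3, 4), since any remaining inversion admits an improving swap.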

4.
Neural Netw ; 166: 379-395, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37549607

ABSTRACT

Support vector machines (SVMs) are powerful statistical learning tools, but applying them to large datasets incurs time-consuming training. To address this issue, various instance selection (IS) approaches have been proposed, which choose a small fraction of critical instances and screen out the others before training. However, existing methods have not balanced accuracy and efficiency well. Some methods miss critical instances, while others use complicated selection schemes that require even more execution time than training with all original instances, thus defeating the initial intention of IS. In this work, we present a newly developed IS method called Valid Border Recognition (VBR). VBR selects the closest heterogeneous neighbors as valid border instances and incorporates this process into the creation of a reduced Gaussian kernel matrix, thus minimizing the execution time. To improve reliability, we propose a strengthened version of VBR (SVBR). Building on VBR, SVBR gradually adds farther heterogeneous neighbors as complements until the Lagrange multipliers of the already selected instances become stable. In numerical experiments, the effectiveness of our proposed methods is verified on benchmark and synthetic datasets in terms of accuracy, execution time and inference time.
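As a simplified reading of the "closest heterogeneous neighbor" idea (not the paper's full VBR procedure, which also folds the selection into a reduced Gaussian kernel matrix), one can mark, for each instance, its nearest opposite-class neighbor as a border instance and train only on those:

```python
def valid_border(X, y):
    """Return the indices of instances that are the nearest heterogeneous
    (opposite-class) neighbour of at least one instance."""
    border = set()
    for xi, yi in zip(X, y):
        nearest, best_d = None, float("inf")
        for j, (xj, yj) in enumerate(zip(X, y)):
            if yj != yi:
                d = sum((a - b) ** 2 for a, b in zip(xi, xj))
                if d < best_d:
                    nearest, best_d = j, d
        border.add(nearest)
    return sorted(border)

# Two well-separated clusters: only the facing points are selected.
X = [[0, 0], [0, 1], [1, 0],      # class 0
     [4, 0], [4, 1], [5, 0]]      # class 1
y = [0, 0, 0, 1, 1, 1]
border = valid_border(X, y)
```

Here the interior points (0, 0), (0, 1) and (5, 0) are screened out; only the three points facing the opposite cluster remain.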


Subjects
Algorithms, Support Vector Machine, Reproducibility of Results
5.
Nat Commun ; 14(1): 2217, 2023 Apr 18.
Article in English | MEDLINE | ID: mdl-37072418

ABSTRACT

Understanding diffusive processes in networks is a significant challenge in complexity science. Networks possess a diffusive potential that depends on their topological configuration, but diffusion also relies on the process and initial conditions. This article presents Diffusion Capacity, a concept that measures a node's potential to diffuse information based on a distance distribution that considers both geodesic and weighted shortest paths and dynamical features of the diffusion process. Diffusion Capacity thoroughly describes the role of individual nodes during a diffusion process and can identify structural modifications that may improve diffusion mechanisms. The article defines Diffusion Capacity for interconnected networks and introduces Relative Gain, which compares the performance of a node in a single structure versus an interconnected one. The method applies to a global climate network constructed from surface air temperature data, revealing a significant change in diffusion capacity around the year 2000, suggesting a loss of the planet's diffusion capacity that could contribute to the emergence of more frequent climatic events.

6.
Comput Intell Neurosci ; 2022: 5699472, 2022.
Article in English | MEDLINE | ID: mdl-35535198

ABSTRACT

Human Learning Optimization (HLO) is an efficient metaheuristic algorithm in which three learning operators, i.e., the random learning operator, the individual learning operator, and the social learning operator, search for optima by mimicking the learning behaviors of humans. In real life, however, people learn not only from the global best solution but also from the best solutions of other individuals, and the operators of Differential Evolution are updated based on the optima of other individuals. Inspired by these facts, this paper proposes two novel differential human learning optimization algorithms (DEHLOs) that introduce the Differential Evolution strategy to enhance the optimization ability of the algorithm. The two algorithms, which improve HLO at the individual and population levels, are named DEHLO1 and DEHLO2, respectively. Multidimensional knapsack problems are adopted as benchmark problems to validate the performance of the DEHLOs, and the results are compared with the standard HLO and Modified Binary Differential Evolution (MBDE) as well as other state-of-the-art metaheuristics. The experimental results demonstrate that the developed DEHLOs significantly outperform the other algorithms, with DEHLO2 achieving the best overall performance on various problems.
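The abstract does not give operator details; a minimal sketch of the three HLO learning operators acting bit by bit on binary strings, with invented selection probabilities, might look as follows (the differential extension of the DEHLOs is only indicated by a comment):

```python
import random

def hlo_step(individual, personal_best, global_best,
             p_random=0.1, p_individual=0.4, seed=7):
    """Build a new candidate with the three HLO operators: random
    learning, individual learning (copy one's own best bit), and
    social learning (copy the population best bit)."""
    rng = random.Random(seed)
    child = []
    for i in range(len(individual)):
        r = rng.random()
        if r < p_random:                    # random learning operator
            child.append(rng.randint(0, 1))
        elif r < p_random + p_individual:   # individual learning operator
            child.append(personal_best[i])
        else:                               # social learning operator
            child.append(global_best[i])
        # (a DEHLO-style variant would add a differential operator here,
        #  mixing in the best solutions of other individuals)
    return child

child = hlo_step([0, 1, 0, 1], [1, 1, 0, 0], [1, 0, 1, 1])
```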


Subjects
Algorithms, Humans
7.
Ann Oper Res ; 316(1): 699-721, 2022.
Article in English | MEDLINE | ID: mdl-35531563

ABSTRACT

Global vaccine revenues are projected at $59.2 billion, yet large-scale vaccine distribution remains challenging for many diseases in countries around the world. Poor management of the vaccine supply chain can lead to a disease outbreak or, at worst, a pandemic. Fortunately, many of these challenges, such as decision-making for optimal resource allocation, vaccination strategy, and inventory management, can be improved through optimization approaches. This work aims to understand how optimization has been applied to the vaccine supply chain and logistics. To achieve this, we conducted a rapid review and searched for peer-reviewed journal articles, published between 2009 and March 2020, in four scientific databases. The search resulted in 345 articles, of which 25 unique studies met our inclusion criteria. Our analysis focused on the identification of article characteristics such as research objectives, the vaccine supply chain stage addressed, the optimization method used, and whether outbreak scenarios were considered. Approximately 64% of the studies dealt with vaccination strategy, and the remainder dealt with logistics and inventory management. Only one addressed market competition (4%). There were 14 different types of optimization methods used, but control theory, linear programming, mathematical modeling and mixed integer programming were the most common (12% each). Uncertainties were considered in the models of 44% of the studies. One resulting observation was the lack of studies using optimization for vaccine inventory management and logistics. The results provide an understanding of how optimization models have been used to address challenges in large-scale vaccine supply chains.

8.
Epilepsia ; 51(2): 243-50, 2010 Feb.
Article in English | MEDLINE | ID: mdl-19732132

ABSTRACT

PURPOSE: Distinguishing nonconvulsive status epilepticus (NCSE) from some nonepileptic encephalopathies is a challenging problem. In many situations, NCSE and nonepileptic encephalopathies are indistinguishable by clinical symptoms and can produce very similar electroencephalography (EEG) patterns. Misdiagnosis or delay to diagnosis of NCSE may increase the rate of morbidity and mortality. METHODS: We developed a fast-differentiating algorithm using quantitative EEG analysis to distinguish NCSE patients from patients with toxic/metabolic encephalopathy (TME). EEG recordings were collected from 11 patients, including 6 with NCSE and 5 with TME. Three nonlinear dynamic measures were used in the proposed algorithm: the maximum short-term Lyapunov exponent (STLmax), phase of attractor (phase/angular frequency), and approximate entropy (ApEn). A further refined metric derived from STLmax and phase of attractor (the mean distance to EEG epoch samples from their centroid in the feature space) was also utilized as a criterion. Paired t tests were carried out to further clarify the separation between the EEG patterns of NCSE and TME. RESULTS: Computational results showed that the performance of the proposed algorithm was sufficient to distinguish NCSE from TME. The results were consistent in all subjects in our study. CONCLUSIONS: The study presents evidence that the maximum short-term Lyapunov exponents (STLmax) and phase of attractors (phase/angular frequency) can be useful in assisting clinical diagnosis of NCSE. Findings presented in this article provide a promising indication that the proposed algorithm may correctly distinguish NCSE from TME. Although the exact mechanism of this association remains unknown, the authors suggest that epileptic activity is highly associated with and can be modeled by dynamic systems.
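Of the three nonlinear measures, approximate entropy has a compact standard formulation (Pincus' ApEn); a direct implementation with the conventional embedding dimension m = 2 and tolerance r is sketched below (STLmax and the phase of attractor are far more involved and are not shown):

```python
import math

def apen(series, m=2, r=0.2):
    """Approximate entropy ApEn(m, r): regular signals score near 0,
    irregular signals score higher."""
    def phi(m):
        n = len(series) - m + 1
        templates = [series[i:i + m] for i in range(n)]
        counts = []
        for t in templates:
            # fraction of templates within Chebyshev distance r of t
            c = sum(1 for u in templates
                    if max(abs(a - b) for a, b in zip(t, u)) <= r)
            counts.append(c / n)
        return sum(math.log(c) for c in counts) / n
    return phi(m) - phi(m + 1)

regular = [0, 1] * 30                                   # perfectly periodic
irregular = [((i * 37) % 11) / 10 for i in range(60)]   # pseudo-random
```

A perfectly periodic signal yields ApEn close to zero, while the scrambled sequence scores visibly higher, which is the kind of separation such features exploit between EEG patterns.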


Subjects
Electroencephalography/statistics & numerical data, Status Epilepticus/diagnosis, Adult, Aged, Algorithms, Metabolic Brain Diseases/diagnosis, Differential Diagnosis, Diagnostic Errors, Electroencephalography/methods, Entropy, Female, Humans, Male, Nonlinear Dynamics, Pilot Projects, Status Epilepticus/classification
9.
Patterns (N Y) ; 1(1): 100003, 2020 Apr 10.
Article in English | MEDLINE | ID: mdl-33205080

ABSTRACT

Traditionally, networks have been studied in an independent fashion. With the emergence of novel smart city technologies, coupling among networks has been strengthened. To capture the ever-increasing coupling, we explain the notion of interdependent networks, i.e., multi-layered networks with shared decision-making entities, and shared sensing infrastructures with interdisciplinary applications. The main challenge is how to develop data analytics solutions that are capable of enabling interdependent decision making. One of the emerging solutions is agent-based distributed decision making among heterogeneous agents and entities when their decisions are affected by multiple networks. We first provide a big picture of real-world interdependent networks in the context of smart city infrastructures. We then provide an outline of potential challenges and solutions from a data science perspective. We discuss potential hindrances to ensure reliable communication among intelligent agents from different networks. We explore future research directions at the intersection of network science and data science.

10.
IEEE Trans Cybern ; 50(5): 2274-2287, 2020 May.
Article in English | MEDLINE | ID: mdl-30530345

ABSTRACT

Over the last few decades, decomposition-based multiobjective evolutionary algorithms (DMOEAs) have become one of the mainstream approaches to multiobjective optimization. However, there has so far been little research on applying DMOEAs to uncertain problems. Usually, the uncertainty is modeled as additive noise in the objective space, which is the case this paper concentrates on. This paper first carries out experiments to examine the impact of noisy environments on DMOEAs. Then, four noise-handling techniques based on the analyses of the empirical results are proposed. First, a Pareto-based nadir point estimation strategy is put forward to provide a good normalization of each objective. Next, we introduce two adaptive sampling strategies that vary the number of samples used per solution based on the differences among neighboring solutions and their variance, to control the tradeoff between exploration and exploitation. Finally, a mixed objective evaluation strategy and a mixed repair mechanism are proposed to alleviate the effects of noise and remedy the loss of diversity in the decision space, respectively. These features are embedded in two popular DMOEAs (i.e., MOEA/D and DMOEA- [Formula: see text]), and DMOEAs with these features are called noise-tolerant DMOEAs (NT-DMOEAs). The NT-DMOEAs are compared with their variants and four noise-tolerant multiobjective algorithms, including the improved NSGA-II, the classical Bayesian (1+1)-ES (BES), and the state-of-the-art algorithms MOP-EA and the rolling tide evolutionary algorithm, to show the superiority of the proposed features on 17 benchmark problems with different levels of noise. Experimental studies demonstrate that the two NT-DMOEAs, especially NT-DMOEA- [Formula: see text], show remarkable advantages over competitors in the majority of test instances.

11.
Environ Sci Pollut Res Int ; 26(18): 17918-17926, 2019 Jun.
Article in English | MEDLINE | ID: mdl-29238924

ABSTRACT

This paper shifts the discussion of low-carbon technology from science to the economy, especially the reactions of a manufacturer to government regulations. One major concern in this paper is uncertainty about the effects of government regulation on the manufacturing industry. On the trust side, will manufacturers trust the government's commitment to strictly supervise carbon emission reduction? Will a manufacturer in a traditional industry consciously follow a low-carbon policy? On the profit side, does an equilibrium exist between a manufacturer and a government in deciding which strategy to undertake to meet a profit maximization objective under carbon emission reduction? To identify the best solutions to these problems, this paper estimates the economic benefits to manufacturers associated with policy regulations in a low-carbon technology market. The conflict of interest between the government and the manufacturer is formalized as a game-theoretic model, and a mixed strategy Nash equilibrium is derived and analyzed. The experimental results indicate that when the punishment levied on the manufacturer or the loss to the government is sizable, the manufacturer will be prone to developing innovative technology and the government will be unlikely to supervise the manufacturer.
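The paper's payoff structure is not given in the abstract; the sketch below computes the interior mixed-strategy Nash equilibrium of a generic 2x2 government-manufacturer game via the indifference conditions, with purely illustrative payoffs:

```python
# Hypothetical payoffs (rows: manufacturer innovates / pollutes;
# columns: government supervises / is lax). Values are invented.
A = [[3.0, 3.0],   # manufacturer's payoffs
     [1.0, 5.0]]
B = [[2.0, 4.0],   # government's payoffs
     [3.0, 0.0]]

def mixed_equilibrium(A, B):
    """Interior mixed-strategy Nash equilibrium of a 2x2 game: each
    player mixes so as to make the other player indifferent."""
    # q = P(government supervises) equalises the manufacturer's rows
    q = (A[1][1] - A[0][1]) / (A[0][0] - A[0][1] - A[1][0] + A[1][1])
    # p = P(manufacturer innovates) equalises the government's columns
    p = (B[1][1] - B[1][0]) / (B[0][0] - B[0][1] - B[1][0] + B[1][1])
    return p, q

p, q = mixed_equilibrium(A, B)
```

With these payoffs the manufacturer innovates with probability 0.6 and the government supervises with probability 0.5; raising the pollution penalty (lowering A[1][0]) pushes q down, mirroring the abstract's finding that heavy punishment lets the government supervise less.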


Subjects
Carbon, Environmental Pollution/legislation & jurisprudence, Government Regulation, Manufacturing Industry/legislation & jurisprudence, China, Decision Making, Technology
12.
Sci Rep ; 9(1): 4511, 2019 03 14.
Article in English | MEDLINE | ID: mdl-30872604

ABSTRACT

Diversity, understood as the variety of different elements or configurations that an extensive system has, is a crucial property that allows a system to maintain its functionality in a changing environment, where failures, random events or malicious attacks are often unavoidable. Despite the relevance of preserving diversity in the context of ecology, biology, transport, finance, etc., the elements or configurations that contribute most to the diversity are often unknown, and thus they cannot be protected against failures or environmental crises. This is because no generic framework exists for identifying which elements or configurations play crucial roles in preserving the diversity of the system. Existing methods treat the level of heterogeneity of a system as a measure of its diversity, making them unsuitable when systems are composed of a large number of elements with different attributes and types of interactions. Besides, with limited resources, one needs to find the best preservation policy, i.e., one needs to solve an optimization problem. Here we aim to bridge this gap by developing a metric between labeled graphs to compute the diversity of the system, which allows identifying the most relevant components based on their contribution to a global diversity value. The proposed framework is suitable for large multiplex structures, which are constituted by a set of elements represented as nodes that have different types of interactions, represented as layers. The proposed method allows us to find, in a genetic network (HIV-1), the elements with the highest diversity values, while in a European airline network we systematically identify the companies that maximize (and those that least compromise) the variety of options for routes connecting different airports.

15.
Phys Rev E ; 95(1-1): 012322, 2017 Jan.
Article in English | MEDLINE | ID: mdl-28208369

ABSTRACT

For many power-limited networks, such as wireless sensor networks and mobile ad hoc networks, maximizing the network lifetime is the first concern in the related design and maintenance activities. We study the network lifetime from the perspective of network science. In our model, nodes are initially assigned a fixed amount of energy, move in a square area, and consume energy when delivering packets. We obtain four different traffic regimes: the no, slow, fast, and absolute congestion regimes, which depend primarily on the packet generation rate. We derive the network lifetime by considering the specific regime of the traffic flow. We find that traffic congestion inversely affects network lifetime, in the sense that high traffic congestion results in a short network lifetime. We also discuss the impacts of factors such as communication radius, node moving speed, and routing strategy on network lifetime and traffic congestion.
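A toy version of such a model (parameters, routing and mobility are invented for illustration; the paper's congestion regimes are not reproduced) already exhibits the inverse relation between traffic load and lifetime:

```python
import random

def network_lifetime(n_nodes, init_energy, gen_rate, cost_per_hop,
                     hops_per_packet=3, seed=0):
    """Toy model: at every step each node generates a packet with
    probability gen_rate; delivering a packet consumes cost_per_hop
    energy at each of hops_per_packet randomly chosen relay nodes.
    Lifetime = number of steps until the first node runs out of energy."""
    rng = random.Random(seed)
    energy = [init_energy] * n_nodes
    steps = 0
    while min(energy) > 0:
        steps += 1
        for _ in range(n_nodes):
            if rng.random() < gen_rate:
                for relay in rng.sample(range(n_nodes), hops_per_packet):
                    energy[relay] -= cost_per_hop
    return steps

# Heavier traffic (higher generation rate, i.e. congestion) drains
# relay nodes sooner and shortens the lifetime.
light = network_lifetime(20, 100.0, 0.1, 1.0)
heavy = network_lifetime(20, 100.0, 0.8, 1.0)
```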

16.
Nat Commun ; 8: 13928, 2017 01 09.
Article in English | MEDLINE | ID: mdl-28067266

ABSTRACT

Identifying and quantifying dissimilarities among graphs is a fundamental and challenging problem of practical importance in many fields of science. Current methods of network comparison are limited to extract only partial information or are computationally very demanding. Here we propose an efficient and precise measure for network comparison, which is based on quantifying differences among distance probability distributions extracted from the networks. Extensive experiments on synthetic and real-world networks show that this measure returns non-zero values only when the graphs are non-isomorphic. Most importantly, the measure proposed here can identify and quantify structural topological differences that have a practical impact on the information flow through the network, such as the presence or absence of critical links that connect or disconnect connected components.
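A simplified sketch of the idea (comparing networks through their shortest-path distance distributions; the paper's actual dissimilarity measure is considerably more refined) can be built from BFS histograms and the Jensen-Shannon divergence:

```python
import math
from collections import deque

def distance_distribution(adj):
    """Normalised histogram of finite shortest-path lengths over all
    ordered node pairs, computed by BFS from every source."""
    n = len(adj)
    hist = [0] * n
    for s in range(n):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        for u, d in dist.items():
            if u != s:
                hist[d] += 1
    total = sum(hist)
    return [h / total for h in hist]

def js_divergence(p, q):
    """Jensen-Shannon divergence between two distributions (base e)."""
    m = [(a + b) / 2 for a, b in zip(p, q)]
    kl = lambda x, y: sum(a * math.log(a / b) for a, b in zip(x, y) if a > 0)
    return (kl(p, m) + kl(q, m)) / 2

ring = [[1, 3], [0, 2], [1, 3], [2, 0]]   # 4-cycle
star = [[1, 2, 3], [0], [0], [0]]         # star with hub 0
d_same = js_divergence(distance_distribution(ring), distance_distribution(ring))
d_diff = js_divergence(distance_distribution(ring), distance_distribution(star))
```

As the abstract requires of a useful measure, identical graphs score zero while structurally different ones score a strictly positive value.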

17.
J Clin Neurophysiol ; 23(6): 509-20, 2006 Dec.
Article in English | MEDLINE | ID: mdl-17143139

ABSTRACT

Epileptic seizures of mesial temporal origin are preceded by changes in signal properties detectable in the intracranial EEG. A series of computer algorithms designed to detect the changes in spatiotemporal dynamics of the EEG signals and to warn of impending seizures have been developed. In this study, we evaluated the performance of a novel adaptive threshold seizure warning algorithm (ATSWA), which detects the convergence in Short-Term Maximum Lyapunov Exponent (STLmax) values among critical intracranial EEG electrode sites, as a function of different seizure warning horizons (SWHs). The ATSWA algorithm was compared to two statistical based naïve prediction algorithms (periodic and random) that do not employ EEG information. For comparison purposes, three performance indices "area above ROC curve" (AAC), "predictability power" (PP) and "fraction of time under false warnings" (FTF) were defined and the effect of SWHs on these indices was evaluated. The results demonstrate that this EEG based seizure warning method performed significantly better (P < 0.05) than both naïve prediction schemes. Our results also show that the performance indexes are dependent on the length of the SWH. These results suggest that the EEG based analysis has the potential to be a useful tool for seizure warning.


Subjects
Algorithms, Electroencephalography/methods, Electronic Data Processing/methods, Seizures/diagnosis, Seizures/physiopathology, Adult, Brain Mapping, Computer-Assisted Diagnosis, Electrodes, Electroencephalography/statistics & numerical data, Female, Humans, Longitudinal Studies, Male, Middle Aged, Predictive Value of Tests, ROC Curve, Retrospective Studies, Sensitivity and Specificity, Time Factors
18.
IEEE Trans Biomed Eng ; 51(3): 493-506, 2004 Mar.
Article in English | MEDLINE | ID: mdl-15000380

ABSTRACT

Epileptic seizures occur intermittently as a result of complex dynamical interactions among many regions of the brain. By applying signal processing techniques from the theory of nonlinear dynamics and global optimization to the analysis of long-term (3.6 to 12 days) continuous multichannel electroencephalographic recordings from four epileptic patients, we present evidence that epileptic seizures appear to serve as dynamical resetting mechanisms of the brain; that is, brain areas that are dynamically entrained before seizures disentrain faster and more frequently (p < 0.05) at epileptic seizures than during any other period. We expect these results to shed light on the mechanisms of epileptogenesis, seizure intervention and control, as well as on investigations of intermittent spatiotemporal state transitions in other complex biological and physical systems.


Subjects
Algorithms, Brain/physiopathology, Computer-Assisted Diagnosis/methods, Electroencephalography/methods, Epilepsy/physiopathology, Neurological Models, Nonlinear Dynamics, Computer-Assisted Signal Processing, Physiological Adaptation, Brain Mapping/methods, Computer Simulation, Epilepsy/diagnosis, Humans, Stochastic Processes
19.
IEEE Trans Biomed Eng ; 50(5): 616-27, 2003 May.
Article in English | MEDLINE | ID: mdl-12769437

ABSTRACT

Current epileptic seizure "prediction" algorithms are generally based on the knowledge of seizure occurring time and analyze the electroencephalogram (EEG) recordings retrospectively. It is then obvious that, although these analyses provide evidence of brain activity changes prior to epileptic seizures, they cannot be applied to develop implantable devices for diagnostic and therapeutic purposes. In this paper, we describe an adaptive procedure to prospectively analyze continuous, long-term EEG recordings when only the occurring time of the first seizure is known. The algorithm is based on the convergence and divergence of short-term maximum Lyapunov exponents (STLmax) among critical electrode sites selected adaptively. A warning of an impending seizure is then issued. Global optimization techniques are applied for selecting the critical groups of electrode sites. The adaptive seizure prediction algorithm (ASPA) was tested in continuous 0.76 to 5.84 days intracranial EEG recordings from a group of five patients with refractory temporal lobe epilepsy. A fixed parameter setting applied to all cases predicted 82% of seizures with a false prediction rate of 0.16/h. Seizure warnings occurred an average of 71.7 min before ictal onset. Similar results were produced by dividing the available EEG recordings into half training and testing portions. Optimizing the parameters for individual patients improved sensitivity (84% overall) and reduced false prediction rate (0.12/h overall). These results indicate that ASPA can be applied to implantable devices for diagnostic and therapeutic purposes.


Subjects
Algorithms, Implanted Electrodes, Electroencephalography/methods, Seizures/diagnosis, Brain Mapping/methods, Epilepsy/diagnosis, Epilepsy/physiopathology, False-Positive Reactions, Feedback, Frontal Lobe/physiopathology, Hippocampus/physiopathology, Humans, Ambulatory Monitoring/methods, Quality Control, Reproducibility of Results, Seizures/physiopathology, Sensitivity and Specificity, Temporal Lobe/physiopathology
20.
Int J Bioinform Res Appl ; 10(1): 59-74, 2014.
Article in English | MEDLINE | ID: mdl-24449693

ABSTRACT

Identification of targets, generally viruses or bacteria, in a biological sample is a relevant problem in medicine. Biologists can use hybridisation experiments to determine whether a specific DNA fragment, which represents the virus, is present in a DNA solution. A probe is a segment of DNA or RNA, labelled with a radioactive isotope, dye or enzyme, used to find a specific target sequence on a DNA molecule by hybridisation. Selecting unique probes through hybridisation experiments is a difficult task, especially when targets have a high degree of similarity, for instance in the case of closely related viruses. After preliminary experiments performed with a canonical Monte Carlo method with Heuristic Reduction (MCHR), a new combinatorial optimisation approach, the Space Pruning Monotonic Search (SPMS) method, is introduced. The experiments show that SPMS provides high-quality solutions and outperforms the current state-of-the-art algorithms.


Subjects
Algorithms, DNA Probes/genetics, Viral DNA/genetics, Statistical Data Interpretation, In Situ Hybridization/methods, DNA Sequence Analysis/methods, Base Sequence, Molecular Sequence Data