1.
Neural Netw ; 166: 379-395, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37549607

ABSTRACT

Support vector machines (SVMs) are powerful statistical learning tools, but training them on large datasets can be prohibitively time-consuming. To address this issue, various instance selection (IS) approaches have been proposed, which choose a small fraction of critical instances and screen out the rest before training. However, existing methods have not balanced accuracy and efficiency well. Some miss critical instances, while others use selection schemes so complicated that they require even more execution time than training with all original instances, defeating the initial intention of IS. In this work, we present a newly developed IS method called Valid Border Recognition (VBR). VBR selects the closest heterogeneous neighbors as valid border instances and incorporates this process into the creation of a reduced Gaussian kernel matrix, thus minimizing execution time. To improve reliability, we propose a strengthened version of VBR (SVBR). Building on VBR, SVBR gradually adds farther heterogeneous neighbors as complements until the Lagrange multipliers of the already selected instances become stable. Numerical experiments on benchmark and synthetic datasets verify the effectiveness of the proposed methods in terms of accuracy, execution time and inference time.
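The core selection step can be sketched in a few lines (a toy 1-D reconstruction of the valid-border idea under my own assumptions, not the paper's implementation, which folds the selection into building the reduced Gaussian kernel matrix):

```python
# Toy sketch: each instance nominates its closest heterogeneous (opposite-class)
# neighbor, and only nominated instances are kept as border instances for training.
def closest_heterogeneous(points, labels):
    border = set()
    for x, y in zip(points, labels):
        best_j, best_d = None, float("inf")
        for j, (x2, y2) in enumerate(zip(points, labels)):
            if y2 != y and abs(x - x2) < best_d:
                best_j, best_d = j, abs(x - x2)
        if best_j is not None:
            border.add(best_j)
    return border

points = [0.0, 0.5, 1.0, 2.0, 2.5, 3.0]  # two well-separated classes on a line
labels = [0, 0, 0, 1, 1, 1]
print(sorted(closest_heterogeneous(points, labels)))  # [2, 3]
```

Only the facing pair (1.0 and 2.0) survives, which is exactly where an SVM decision boundary would lie; the interior instances are screened out before training.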


Subjects
Algorithms, Support Vector Machine, Reproducibility of Results
3.
Nat Commun ; 14(1): 2217, 2023 Apr 18.
Article in English | MEDLINE | ID: mdl-37072418

ABSTRACT

Understanding diffusive processes in networks is a significant challenge in complexity science. Networks possess a diffusive potential that depends on their topological configuration, but diffusion also depends on the process and its initial conditions. This article presents Diffusion Capacity, a measure of a node's potential to diffuse information based on a distance distribution that considers both geodesic and weighted shortest paths together with dynamical features of the diffusion process. Diffusion Capacity thoroughly describes the role of individual nodes during a diffusion process and can identify structural modifications that may improve diffusion mechanisms. The article also defines Diffusion Capacity for interconnected networks and introduces Relative Gain, which compares the performance of a node in a single structure versus an interconnected one. Applied to a global climate network constructed from surface air temperature data, the method reveals a significant change in diffusion capacity around the year 2000, suggesting a loss of the planet's diffusion capacity that could contribute to the emergence of more frequent extreme climatic events.
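The geodesic ingredient of such a per-node distance distribution is easy to extract; a minimal BFS sketch assuming an unweighted graph (the paper additionally incorporates weighted paths and dynamical features):

```python
from collections import deque

def distance_distribution(adj, src):
    """Count how many nodes sit at each geodesic distance from src."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    counts = {}
    for v, d in dist.items():
        if v != src:
            counts[d] = counts.get(d, 0) + 1
    return counts

path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}  # path graph 0-1-2-3
print(distance_distribution(path, 0))  # {1: 1, 2: 1, 3: 1}
print(distance_distribution(path, 1))  # {1: 2, 2: 1}
```

An end node sees the rest of the graph at larger distances on average, so its distribution is more spread out; intuitively, that is one ingredient of a lower diffusion potential.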

4.
Ann Math Artif Intell ; 91(2-3): 349-372, 2023.
Article in English | MEDLINE | ID: mdl-36721866

ABSTRACT

In this paper, we investigate a novel physician scheduling problem in the Mobile Cabin Hospitals (MCH) constructed in Wuhan, China, during the outbreak of the COVID-19 pandemic. The shortage of physicians and the surge of patients posed great challenges for physician scheduling in MCH. The goal of the studied problem is to find an approximately optimal schedule that minimizes physician workload while satisfying the service requirements of patients as much as possible. We propose a novel hybrid algorithm integrating particle swarm optimization (PSO) and variable neighborhood descent (VND), named PSO-VND, to find an approximate global optimum. A self-adaptive mechanism chooses the updating operators dynamically during the search. Based on the special features of the problem, three neighborhood structures are designed and searched in VND to improve the solution. Experimental comparisons show that the proposed PSO-VND significantly outperforms the competing algorithms.
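The VND component can be illustrated on a stripped-down workload-balancing toy: two "physicians", hypothetical shift durations (the paper's model has far richer constraints). VND searches a first neighborhood (reassign one shift) and only when that is exhausted tries a second (swap two shifts), restarting whenever either one improves the incumbent:

```python
def makespan(assign, times):
    """Heaviest workload when shift i (duration times[i]) goes to person assign[i]."""
    loads = [0, 0]
    for person, t in zip(assign, times):
        loads[person] += t
    return max(loads)

def vnd(assign, times):
    """Variable neighborhood descent over two neighborhoods."""
    improved = True
    while improved:
        improved = False
        # Neighborhood 1: move a single shift to the other person.
        for i in range(len(assign)):
            cand = assign[:]
            cand[i] = 1 - cand[i]
            if makespan(cand, times) < makespan(assign, times):
                assign, improved = cand, True
                break
        if improved:
            continue  # restart from the first neighborhood
        # Neighborhood 2: swap two shifts held by different people.
        for i in range(len(assign)):
            for j in range(i + 1, len(assign)):
                if assign[i] != assign[j]:
                    cand = assign[:]
                    cand[i], cand[j] = cand[j], cand[i]
                    if makespan(cand, times) < makespan(assign, times):
                        assign, improved = cand, True
                        break
            if improved:
                break
    return assign

times = [3, 3, 2, 2, 2]                 # total work 12, so 6 is a perfect split
best = vnd([0, 0, 0, 0, 0], times)      # start with everything on one person
print(makespan(best, times))            # 6
```

In PSO-VND this local descent refines the candidate schedules that the particle swarm proposes.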

5.
IEEE Trans Cybern ; 53(7): 4619-4629, 2023 Jul.
Article in English | MEDLINE | ID: mdl-34910659

ABSTRACT

Realistic epidemic spreading is usually driven by traffic flow in networks, which is not captured in classic diffusion models. Moreover, the progression of a node's infection from a mild to a severe phase has not been specifically addressed in previous epidemic modeling. To address these issues, we propose a novel traffic-driven epidemic spreading model by introducing a new epidemic state, the severe state, which characterizes serious infection of a node as distinct from the initial mild infection. We derive the dynamic equations of our model with the tools of individual-based mean-field approximation and continuous-time Markov chains. We find that, besides the infection and recovery rates, the epidemic threshold of our model is determined by the largest real eigenvalue of a communication frequency matrix we construct. Finally, we study how epidemic spreading is influenced by representative distributions of infection control resources. In particular, we observe that the uniform and Weibull distributions of control resources, which perform very similarly, are much better than the Pareto distribution at suppressing epidemic spreading.
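The threshold computation has a classic analogue: for SIS-type spreading, the epidemic threshold of the effective infection rate is the reciprocal of the largest real eigenvalue of the relevant matrix. A power-iteration sketch on a small adjacency matrix, standing in for the paper's communication frequency matrix (whose construction is model-specific):

```python
def largest_eigenvalue(M, iters=200):
    """Power iteration: dominant eigenvalue of a nonnegative square matrix."""
    n = len(M)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)   # infinity-norm estimate of the eigenvalue
        v = [x / lam for x in w]
    return lam

K3 = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]  # complete graph on 3 nodes
lam = largest_eigenvalue(K3)
print(lam)        # 2.0 (the adjacency spectrum of K_n peaks at n - 1)
print(1.0 / lam)  # threshold: effective infection rates above this can spread
```

Denser communication (a larger dominant eigenvalue) lowers the threshold, which is why the matrix, not just the rates, governs whether an outbreak takes off.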


Subjects
Epidemics, Markov Chains, Communication, Diffusion
8.
Ann Oper Res ; 316(1): 699-721, 2022.
Article in English | MEDLINE | ID: mdl-35531563

ABSTRACT

Global vaccine revenues are projected at $59.2 billion, yet large-scale vaccine distribution remains challenging for many diseases in countries around the world. Poor management of the vaccine supply chain can lead to a disease outbreak or, at worst, a pandemic. Fortunately, many of those challenges, such as decision-making for optimal allocation of resources, vaccination strategy, and inventory management, can be improved through optimization approaches. This work aims to understand how optimization has been applied to the vaccine supply chain and its logistics. To this end, we conducted a rapid review, searching four scientific databases for peer-reviewed journal articles published between 2009 and March 2020. The search returned 345 articles, of which 25 unique studies met our inclusion criteria. Our analysis focused on identifying article characteristics such as research objectives, the vaccine supply chain stage addressed, the optimization method used, and whether outbreak scenarios were considered. Approximately 64% of the studies dealt with vaccination strategy, and the remainder dealt with logistics and inventory management; only one (4%) addressed market competition. Fourteen different types of optimization methods were used, with control theory, linear programming, mathematical modeling and mixed integer programming the most common (12% each). Uncertainty was considered in the models of 44% of the studies. One notable observation was the lack of studies using optimization for vaccine inventory management and logistics. The results provide an understanding of how optimization models have been used to address challenges in large-scale vaccine supply chains.
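As a flavor of the allocation problems the review catalogs, here is a deliberately simple sketch: spread a limited vaccine stock across regions to maximize doses weighted by a per-dose benefit. All figures are hypothetical; greedy-by-benefit is optimal for this relaxed, divisible version, whereas the reviewed models use LP/MIP formulations for the constrained cases:

```python
def allocate(stock, regions):
    """regions: list of (name, demand, benefit_per_dose).
    Fill the highest-benefit regions first until the stock runs out."""
    plan = {}
    for name, demand, benefit in sorted(regions, key=lambda r: -r[2]):
        doses = min(demand, stock)
        plan[name] = doses
        stock -= doses
        if stock == 0:
            break
    return plan

regions = [("A", 500, 0.8), ("B", 400, 1.2), ("C", 300, 0.5)]
print(allocate(700, regions))  # {'B': 400, 'A': 300}
```

Region B is saturated first because each of its doses averts more infections; region C receives nothing once the stock is exhausted.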

9.
Comput Intell Neurosci ; 2022: 5699472, 2022.
Article in English | MEDLINE | ID: mdl-35535198

ABSTRACT

Human Learning Optimization (HLO) is an efficient metaheuristic in which three learning operators, i.e., the random learning operator, the individual learning operator, and the social learning operator, search for optima by mimicking the learning behaviors of humans. In real life, however, people learn not only from the global best solution but also from the best solutions of other individuals, and the operators of Differential Evolution are likewise updated based on the optima of other individuals. Inspired by these facts, this paper proposes two novel differential human learning optimization algorithms (DEHLOs) that introduce the Differential Evolution strategy to enhance the optimization ability of the algorithm. The two algorithms, which improve HLO at the individual and population levels respectively, are named DEHLO1 and DEHLO2. Multidimensional knapsack problems are adopted as benchmarks to validate the performance of the DEHLOs, and the results are compared with the standard HLO and Modified Binary Differential Evolution (MBDE) as well as other state-of-the-art metaheuristics. The experimental results demonstrate that the developed DEHLOs significantly outperform the other algorithms, with DEHLO2 achieving the best overall performance across problems.
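For reference, the benchmark family itself is easy to state: in its simplest single-constraint 0/1 form, the knapsack problem can be solved exactly by dynamic programming, which provides the yardstick that such metaheuristics are measured against (the DEHLOs themselves are stochastic searches and are not sketched here):

```python
def knapsack(values, weights, capacity):
    """Exact 0/1 knapsack via DP over capacities (single constraint)."""
    best = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        for c in range(capacity, w - 1, -1):  # descend so each item is used once
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

print(knapsack([60, 100, 120], [10, 20, 30], 50))  # 220
```

The multidimensional variant adds several simultaneous weight constraints, which makes exact DP impractical and motivates binary metaheuristics like HLO and the DEHLOs.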


Subjects
Algorithms, Humans
10.
BMJ Health Care Inform ; 28(1)2021 Dec.
Article in English | MEDLINE | ID: mdl-34876451

ABSTRACT

OBJECTIVES: Acute kidney injury (AKI) affects up to one-quarter of hospitalised patients and 60% of patients in the intensive care unit (ICU). We aim to understand the baseline characteristics of patients who will develop distinct AKI trajectories, determine the impact of persistent AKI and renal non-recovery on clinical outcomes and resource use, and assess the relative importance of AKI severity, duration and recovery on survival. METHODS: In this retrospective, longitudinal cohort study, 156 699 patients admitted to a quaternary care hospital between January 2012 and August 2019 were staged and classified (no AKI, rapidly reversed AKI, persistent AKI with and without renal recovery). Clinical outcomes, resource use, and short-term and long-term survival adjusting for AKI severity were compared among AKI trajectories in the full cohort and in subcohorts with and without ICU admission. RESULTS: Fifty-eight per cent (31 500/54 212) had AKI that rapidly reversed within 48 hours; among patients with persistent AKI, two-thirds (14 122/22 712) did not have renal recovery by discharge. One-year mortality was significantly higher among patients with persistent AKI (35%, 7856/22 712) than among patients with rapidly reversed AKI (15%, 4714/31 500) or no AKI (7%, 22 117/301 466). Persistent AKI without renal recovery was associated with an approximately fivefold increase in hazard rates compared with no AKI in the full cohort and in the ICU and non-ICU subcohorts, independent of AKI severity. DISCUSSION: Among hospitalised, ICU and non-ICU patients, persistent AKI and the absence of renal recovery are associated with reduced long-term survival, independent of AKI severity. CONCLUSIONS: It is essential to identify patients at risk of developing persistent AKI without renal recovery in order to guide treatment-related decisions.
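The trajectory labels can be illustrated with a toy staging rule; the 1.5x-baseline threshold is a KDIGO-style simplification and the 24-hour sampling grid is hypothetical, not the study's full staging procedure:

```python
def classify_aki(baseline, series):
    """series: creatinine values at consecutive 24 h intervals.
    AKI onset = first value >= 1.5x baseline; 'rapidly reversed' if the value
    falls back below that threshold within 48 h of onset."""
    threshold = 1.5 * baseline
    onset = next((i for i, c in enumerate(series) if c >= threshold), None)
    if onset is None:
        return "no AKI"
    for i in range(onset + 1, min(onset + 3, len(series))):  # next two 24 h steps
        if series[i] < threshold:
            return "rapidly reversed AKI"
    return "persistent AKI"

print(classify_aki(1.0, [1.1, 1.2, 1.3]))        # no AKI
print(classify_aki(1.0, [1.1, 1.6, 1.2]))        # rapidly reversed AKI
print(classify_aki(1.0, [1.1, 1.6, 1.7, 1.8]))   # persistent AKI
```

The study further splits persistent AKI by whether discharge creatinine returned to baseline (renal recovery), the distinction its survival analysis turns on.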


Subjects
Acute Kidney Injury, Cohort Studies, Humans, Intensive Care Units, Longitudinal Studies, Retrospective Studies
11.
Patterns (N Y) ; 1(1): 100003, 2020 Apr 10.
Article in English | MEDLINE | ID: mdl-33205080

ABSTRACT

Traditionally, networks have been studied in an independent fashion. With the emergence of novel smart city technologies, coupling among networks has been strengthened. To capture the ever-increasing coupling, we explain the notion of interdependent networks, i.e., multi-layered networks with shared decision-making entities, and shared sensing infrastructures with interdisciplinary applications. The main challenge is how to develop data analytics solutions that are capable of enabling interdependent decision making. One of the emerging solutions is agent-based distributed decision making among heterogeneous agents and entities when their decisions are affected by multiple networks. We first provide a big picture of real-world interdependent networks in the context of smart city infrastructures. We then provide an outline of potential challenges and solutions from a data science perspective. We discuss potential hindrances to ensure reliable communication among intelligent agents from different networks. We explore future research directions at the intersection of network science and data science.

12.
IEEE Trans Cybern ; 50(5): 2274-2287, 2020 May.
Article in English | MEDLINE | ID: mdl-30530345

ABSTRACT

Over the last few decades, decomposition-based multiobjective evolutionary algorithms (DMOEAs) have become one of the mainstream approaches to multiobjective optimization. However, little research to date has applied DMOEAs to uncertain problems. Usually, uncertainty is modeled as additive noise in the objective space, which is the case this paper concentrates on. This paper first carries out experiments to examine the impact of noisy environments on DMOEAs. Then, four noise-handling techniques are proposed based on analyses of the empirical results. First, a Pareto-based nadir-point estimation strategy is put forward to provide a good normalization of each objective. Next, we introduce two adaptive sampling strategies that vary the number of samples used per solution, based on the differences among neighboring solutions and their variance, to control the tradeoff between exploration and exploitation. Finally, a mixed objective evaluation strategy and a mixed repair mechanism are proposed to alleviate the effects of noise and to remedy the loss of diversity in the decision space, respectively. These features are embedded in two popular DMOEAs (i.e., MOEA/D and DMOEA- [Formula: see text]), and DMOEAs with these features are called noise-tolerant DMOEAs (NT-DMOEAs). NT-DMOEAs are compared with their variants and with four noise-tolerant multiobjective algorithms, including the improved NSGA-II, the classical Bayesian (1+1)-ES (BES), and the state-of-the-art MOP-EA and rolling tide evolutionary algorithm, to show the superiority of the proposed features on 17 benchmark problems with different levels of noise. Experimental studies demonstrate that the two NT-DMOEAs, especially NT-DMOEA- [Formula: see text], show remarkable advantages over competitors in the majority of test instances.
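The adaptive sampling idea (re-evaluating a noisy objective more often while its estimate is still unstable) can be sketched as follows; the objective, noise level, and stopping rule are all hypothetical stand-ins for the paper's neighborhood-based criteria:

```python
import random

def noisy_eval(x, rng):
    """True objective x*x corrupted by additive Gaussian noise."""
    return x * x + rng.gauss(0, 0.5)

def adaptive_mean(x, rng, base=5, max_n=50, tol=0.05):
    """Re-sample until the standard error of the mean drops below tol,
    up to max_n samples per solution."""
    samples = [noisy_eval(x, rng) for _ in range(base)]
    while len(samples) < max_n:
        mean = sum(samples) / len(samples)
        var = sum((s - mean) ** 2 for s in samples) / (len(samples) - 1)
        if (var / len(samples)) ** 0.5 < tol:
            break  # estimate stable enough; spend the budget elsewhere
        samples.append(noisy_eval(x, rng))
    return sum(samples) / len(samples)

rng = random.Random(42)
print(adaptive_mean(2.0, rng))  # close to the true value 4.0
```

Spending more evaluations only where the estimate is noisy is exactly the exploration/exploitation budget tradeoff the abstract describes.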

13.
Sci Rep ; 9(1): 4511, 2019 03 14.
Article in English | MEDLINE | ID: mdl-30872604

ABSTRACT

Diversity, understood as the variety of different elements or configurations in an extensive system, is a crucial property that allows a system to maintain its functionality in a changing environment, where failures, random events or malicious attacks are often unavoidable. Despite the relevance of preserving diversity in ecology, biology, transport, finance and other contexts, the elements or configurations that contribute most to diversity are often unknown, and thus cannot be protected against failures or environmental crises. This is because no generic framework exists for identifying which elements or configurations play crucial roles in preserving the diversity of a system. Existing methods treat the level of heterogeneity of a system as a measure of its diversity, making them unsuitable when systems are composed of a large number of elements with different attributes and types of interactions. Moreover, with limited resources, one needs to find the best preservation policy, i.e., one needs to solve an optimization problem. Here we aim to bridge this gap by developing a metric between labeled graphs to compute the diversity of a system, which allows identifying the most relevant components based on their contribution to a global diversity value. The proposed framework is suitable for large multiplex structures, which are constituted by a set of elements represented as nodes with different types of interactions represented as layers. The method allows us to find, in a genetic network (HIV-1), the elements with the highest diversity values, while in a European airline network we systematically identify the companies that maximize (and those that least compromise) the variety of options for routes connecting different airports.
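The notion of ranking elements by their contribution to a global diversity value can be miniaturized with plain entropy over element types; this is a crude stand-in for the paper's metric between labeled graphs, and the route types and counts below are made up:

```python
import math

def entropy(items):
    """Shannon entropy of the type distribution of a list of labels."""
    total = len(items)
    counts = {}
    for it in items:
        counts[it] = counts.get(it, 0) + 1
    return -sum((c / total) * math.log(c / total) for c in counts.values())

def contributions(items):
    """Leave-one-out drop in entropy: how much each element sustains variety."""
    base = entropy(items)
    return {i: base - entropy(items[:i] + items[i + 1:]) for i in range(len(items))}

routes = ["hub", "hub", "hub", "regional", "charter"]
contrib = contributions(routes)
print(contrib)
```

Removing one of the three redundant "hub" routes actually evens out the distribution (a negative contribution), while the unique route types score highest, which is the intuition behind protecting the components that most preserve diversity.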

14.
Environ Sci Pollut Res Int ; 26(18): 17918-17926, 2019 Jun.
Article in English | MEDLINE | ID: mdl-29238924

ABSTRACT

This paper shifts the discussion of low-carbon technology from science to the economy, especially the reactions of a manufacturer to government regulations. One major concern is uncertainty about the effects of government regulation on the manufacturing industry. On the trust side, will manufacturers trust the government's commitment to strictly supervise carbon emission reduction? Will a manufacturer in a traditional industry consciously follow a low-carbon policy? On the profit side, does an equilibrium exist between a manufacturer and a government in deciding which strategy to adopt to maximize profit under carbon emission reduction? To address these questions, this paper estimates the economic benefits to manufacturers associated with policy regulations in a low-carbon technology market. The conflict of interest between the government and the manufacturer is formalized as a game-theoretic model, and a mixed-strategy Nash equilibrium is derived and analyzed. The experimental results indicate that when the punishment levied on the manufacturer or the loss to the government is sizable, the manufacturer will be prone to developing innovative technology and the government will be unlikely to supervise the manufacturer.
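For a 2x2 game like this manufacturer-versus-government interaction, the mixed-strategy Nash equilibrium follows from the indifference conditions. The paper's payoff matrices encode supervision costs and penalties and are not reproduced here; the sketch is instead checked on matching pennies, whose equilibrium is known:

```python
def mixed_nash_2x2(A, B):
    """A: row player's payoff matrix, B: column player's.
    Returns (p, q): the row player plays row 0 with probability p,
    the column player plays column 0 with probability q.
    Assumes an interior (fully mixed) equilibrium exists."""
    # q makes the row player indifferent between its two rows:
    # q*A[0][0] + (1-q)*A[0][1] = q*A[1][0] + (1-q)*A[1][1]
    q = (A[1][1] - A[0][1]) / (A[0][0] - A[0][1] - A[1][0] + A[1][1])
    # p makes the column player indifferent between its two columns.
    p = (B[1][1] - B[1][0]) / (B[0][0] - B[0][1] - B[1][0] + B[1][1])
    return p, q

A = [[1, -1], [-1, 1]]   # matching pennies, row player
B = [[-1, 1], [1, -1]]   # zero-sum: column player's payoffs
print(mixed_nash_2x2(A, B))  # (0.5, 0.5)
```

The same indifference logic, applied to payoffs built from punishment and supervision-loss parameters, yields the equilibrium probabilities of innovating and of supervising that the paper analyzes.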


Subjects
Carbon, Environmental Pollution/legislation & jurisprudence, Government Regulation, Manufacturing Industry/legislation & jurisprudence, China, Decision Making, Technology
15.
Expert Syst ; 36(5)2019 Oct.
Article in English | MEDLINE | ID: mdl-33162636

ABSTRACT

In this paper, the problem of mining complex temporal patterns in multivariate time series is considered. A new method, Fast Temporal Pattern Mining with Extended Vertical Lists, is introduced. The method is based on an extension of the level-wise property, which requires that a more complex pattern start only at positions within a record where all of its subpatterns start. The approach is built around a novel data structure called the Extended Vertical List, which tracks the positions of the first state of a pattern inside records and links them to the corresponding positions of a specific subpattern called the prefix. Extensive computational results indicate that the new method is significantly faster than the previous Temporal Pattern Mining algorithm; however, the gain in speed comes at the expense of increased memory usage.
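The vertical-list idea can be miniaturized: keep a list of positions per event type and answer "A before B" queries by scanning those lists. This is a drastic simplification of the Extended Vertical Lists, which additionally link prefixes and richer temporal relations:

```python
def vertical_lists(sequence):
    """Map each event type to the sorted list of positions where it occurs."""
    lists = {}
    for pos, event in enumerate(sequence):
        lists.setdefault(event, []).append(pos)
    return lists

def pattern_starts(lists, first, second):
    """Positions where `first` occurs and is later followed by `second`."""
    seconds = lists.get(second, [])
    return [p for p in lists.get(first, []) if any(s > p for s in seconds)]

seq = ["A", "B", "A", "C", "B"]
lists = vertical_lists(seq)
print(pattern_starts(lists, "A", "B"))  # [0, 2]
```

Because each query touches only the position lists of the symbols involved, longer patterns can be grown without rescanning the whole record, which is the source of the method's speed (and of its memory cost).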

16.
Phys Rev E ; 95(1-1): 012322, 2017 Jan.
Article in English | MEDLINE | ID: mdl-28208369

ABSTRACT

For many power-limited networks, such as wireless sensor networks and mobile ad hoc networks, maximizing the network lifetime is the first concern in design and maintenance. We study network lifetime from the perspective of network science. In our model, nodes moving in a square area are initially assigned a fixed amount of energy, which they consume when delivering packets. We obtain four different traffic regimes: no, slow, fast, and absolute congestion, depending primarily on the packet generation rate. We derive the network lifetime by considering the specific regime of the traffic flow. We find that traffic congestion inversely affects network lifetime, in the sense that high traffic congestion results in a short network lifetime. We also discuss the impact of factors such as communication radius, node moving speed and routing strategy on network lifetime and traffic congestion.
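In caricature, the lifetime notion reduces to "rounds until the busiest relay exhausts its energy budget". A toy sketch with made-up per-round loads and energies (the paper instead derives lifetime analytically from the congestion regime):

```python
def lifetime(load_per_round, energy):
    """load_per_round: packets each node forwards per round (1 unit energy each).
    Returns the number of full rounds before some node cannot keep forwarding."""
    rounds = 0
    e = dict(energy)
    while all(e[n] >= load_per_round[n] for n in e):
        for n in e:
            e[n] -= load_per_round[n]
        rounds += 1
    return rounds

# A relay near the sink forwards more traffic, so it dies first and
# determines the whole network's lifetime.
load = {"edge": 1, "relay": 3}
energy = {"edge": 30, "relay": 30}
print(lifetime(load, energy))  # 10
```

Congestion concentrates forwarding load on a few nodes, which is exactly why high congestion shortens lifetime even when total energy is unchanged.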

17.
Nat Commun ; 8: 13928, 2017 01 09.
Article in English | MEDLINE | ID: mdl-28067266

ABSTRACT

Identifying and quantifying dissimilarities among graphs is a fundamental and challenging problem of practical importance in many fields of science. Current methods of network comparison are limited to extracting only partial information or are computationally very demanding. Here we propose an efficient and precise measure for network comparison, based on quantifying differences among the distance probability distributions extracted from the networks. Extensive experiments on synthetic and real-world networks show that this measure returns non-zero values only when the graphs are non-isomorphic. Most importantly, the proposed measure can identify and quantify structural topological differences that have a practical impact on information flow through the network, such as the presence or absence of critical links that connect or disconnect connected components.
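A minimal version of comparing graphs through their distance probability distributions: build each graph's histogram of shortest-path distances and take the Jensen-Shannon divergence between them. The published measure is more refined, but this captures the ingredient it is built from:

```python
import math
from collections import deque

def distance_histogram(adj):
    """Normalized histogram of shortest-path distances over all node pairs."""
    hist = {}
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        for v, d in dist.items():
            if v != src:
                hist[d] = hist.get(d, 0) + 1
    total = sum(hist.values())
    return {d: c / total for d, c in hist.items()}

def js_divergence(p, q):
    """Jensen-Shannon divergence between two discrete distributions."""
    keys = set(p) | set(q)
    m = {k: 0.5 * (p.get(k, 0) + q.get(k, 0)) for k in keys}
    def kl(a):
        return sum(a.get(k, 0) * math.log(a.get(k, 0) / m[k])
                   for k in keys if a.get(k, 0) > 0)
    return 0.5 * kl(p) + 0.5 * kl(q)

path4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
cycle4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(js_divergence(distance_histogram(path4), distance_histogram(path4)))  # 0.0
print(js_divergence(distance_histogram(path4), distance_histogram(cycle4)) > 0)  # True
```

Adding the single edge that closes the path into a cycle shifts the whole distance distribution, so the divergence registers exactly the kind of critical-link difference the abstract highlights.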

18.
PLoS One ; 11(5): e0155705, 2016.
Article in English | MEDLINE | ID: mdl-27232332

ABSTRACT

OBJECTIVE: To compare the performance of risk prediction models for forecasting postoperative sepsis and acute kidney injury. DESIGN: Retrospective single-center cohort study of adult surgical patients admitted between 2000 and 2010. PATIENTS: 50,318 adult patients undergoing major surgery. MEASUREMENTS: We evaluated the performance of logistic regression, generalized additive models, naïve Bayes and support vector machines for forecasting postoperative sepsis and acute kidney injury, and assessed the impact of feature reduction techniques on predictive performance. Model performance was determined using the area under the receiver operating characteristic curve (AUC), accuracy, and positive predictive value. Results were reported based on a 70/30 validation procedure in which the data were randomly split into 70% for training and 30% for validation. MAIN RESULTS: The AUC for the different models ranged between 0.797 and 0.858 for acute kidney injury and between 0.757 and 0.909 for severe sepsis. Logistic regression, the generalized additive model, and support vector machines performed better than the naïve Bayes model. Generalized additive models additionally accounted for the non-linearity of continuous clinical variables, as depicted in their risk pattern plots. Reducing the input feature space with LASSO had minimal effect on prediction performance, while feature extraction using principal component analysis improved the performance of the models. CONCLUSIONS: Generalized additive models and support vector machines performed well as risk prediction models for postoperative sepsis and AKI. Feature extraction using principal component analysis improved the predictive performance of all models.
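The comparison metric itself is compact enough to sketch: AUC equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one (the labels and scores below are made up):

```python
def auc(labels, scores):
    """Area under the ROC curve via the pairwise-comparison definition;
    ties between a positive and a negative score count as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [0, 0, 1, 1]
scores = [0.1, 0.4, 0.35, 0.8]
print(auc(labels, scores))  # 0.75
```

An AUC of 0.5 is chance-level ranking and 1.0 is perfect separation, which puts the study's 0.757-0.909 range in context.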


Subjects
Computational Biology/methods, Machine Learning, Postoperative Complications/diagnosis, Acute Kidney Injury/diagnosis, Acute Kidney Injury/etiology, Adult, Aged, Cohort Studies, Female, Humans, Male, Middle Aged, Statistical Models, Postoperative Complications/etiology, Retrospective Studies, Risk, Sepsis/diagnosis, Sepsis/etiology
19.
Ann Surg ; 263(6): 1219-1227, 2016 06.
Article in English | MEDLINE | ID: mdl-26181482

ABSTRACT

OBJECTIVE: To calculate mortality risk that accounts for both the severity and the recovery of postoperative kidney dysfunction using the pattern of longitudinal change in creatinine. BACKGROUND: Although the importance of renal recovery after acute kidney injury (AKI) is increasingly recognized, the complex association between longitudinal creatinine changes and mortality is not fully described. METHODS: We used routinely collected clinical information for 46,299 adult patients undergoing major surgery to develop a multivariable probabilistic model, optimized for the nonlinearity of serum creatinine time series, that calculates the risk function for 90-day mortality. We performed a 70/30 split validation analysis to assess the accuracy of the model. RESULTS: All creatinine time series exhibited a nonlinear risk function in relation to 90-day mortality, and their addition to other clinical factors improved model discrimination. For any given severity of AKI, patients with complete renal recovery, as manifested by the return of the discharge creatinine to the baseline value, experienced a significant decrease in the odds of dying within 90 days of admission compared with patients with partial recovery. Yet, for any severity of AKI, even complete renal recovery did not entirely mitigate the increased odds of dying: patients with mild AKI and complete renal recovery still had significantly increased odds of dying compared with patients without AKI (odds ratio: 1.48, 95% confidence interval: 1.30-1.68). CONCLUSIONS: We demonstrate the nonlinear relationship between both severity and recovery of renal dysfunction and 90-day mortality after major surgery, and we have developed an easily applicable computer algorithm that calculates this complex relationship.


Subjects
Acute Kidney Injury/blood, Acute Kidney Injury/mortality, Creatinine/blood, Postoperative Complications/blood, Postoperative Complications/mortality, Surgical Procedures, Operative, Aged, Aged, 80 and over, Biomarkers/blood, Female, Florida/epidemiology, Humans, Male, Middle Aged, Risk Factors, Severity of Illness Index
20.
PLoS One ; 10(8): e0137012, 2015.
Article in English | MEDLINE | ID: mdl-26317983

ABSTRACT

Functional magnetic resonance imaging (fMRI) data can be used to depict functional connectivity of the brain. Standard techniques have been developed to construct brain networks from these data; typically, nodes are voxels or sets of voxels, with weighted edges between them representing measures of correlation. Identifying cognitive states from fMRI data involves recording voxel activity over a certain time interval. Using this information, network and machine learning techniques can be applied to discriminate the cognitive states of the subjects by exploring different features of the data. In this work we describe and seek to understand the organization of brain connectivity networks under cognitive tasks. In particular, we use a regularity partitioning algorithm that finds clusters of vertices such that they all behave with each other almost like random bipartite graphs. Based on the random approximation of the graph, we calculate a lower bound on the number of triangles as well as the expected distribution of the edges for each subject and state. We compare the results with state-of-the-art algorithms for exploring connectivity and argue that, during epochs in which the subject is exposed to a stimulus, the inspected part of the brain is organized in an efficient way that enables enhanced functionality.


Subjects
Brain Mapping/methods, Cognition, Image Processing, Computer-Assisted/methods, Magnetic Resonance Imaging, Algorithms, Brain/physiology, Cluster Analysis, Computer Graphics, Nerve Net/physiology