Results 1 - 20 of 45
1.
Front Immunol ; 14: 1223464, 2023.
Article in English | MEDLINE | ID: mdl-37622119

ABSTRACT

Objectives: This study aimed to investigate the association between the neutrophil percentage to albumin ratio (NPAR) on the day of admission and mortality 1 year after surgery in elderly patients with hip fractures. Methods: Clinical characteristics and blood markers of inflammation were retrospectively collected from October 2016 to January 2022 for elderly patients with hip fractures at two regional tertiary medical centers, and the data were divided into a training set and an external validation set. Multivariate nomogram models incorporating NPAR were constructed using least absolute shrinkage and selection operator (LASSO) regression and multivariate logistic regression analysis. In addition, multivariate Cox regression analysis and Kaplan-Meier survival curves were used to explore the relationship between NPAR values and mortality within 1 year in elderly patients with hip fractures. The predictive performance of the nomogram was evaluated using the concordance index (C-index) and receiver operating characteristic (ROC) curve and validated by bootstrapping, the Hosmer-Lemeshow goodness-of-fit test, calibration curves, decision curves, and clinical impact curve analysis. Results: The study included data from 1179 patients (mean age, 80.34 ± 8.06 years; 614 [52.1%] male) from the Guangzhou Red Cross Hospital affiliated with Jinan University and 476 patients (mean age, 81.18 ± 8.33 years; 233 [48.9%] male) from the Xiaogan Central Hospital affiliated with Wuhan University of Science and Technology. The results showed that NPAR has good sensitivity and specificity in assessing patients' prognosis 1 year after surgery. Multivariate logistic regression models based on influencing factors such as NPAR had good discrimination and calibration ability (AUC = 0.942, 95% CI: 0.927-0.955; Hosmer-Lemeshow test: P > 0.05). Kaplan-Meier survival curves for the training and validation sets showed that patients in the high-NPAR group had a higher mortality rate at 1 year than those in the low-NPAR group (P < 0.001). Multivariate Cox regression showed that a high NPAR value was an independent risk factor for death within 1 year in elderly hip fracture patients (P < 0.001, HR = 2.38, 95% CI: 1.84-3.08). Conclusion: Our study showed that NPAR levels were significantly higher in patients who died within 1 year after surgery in both the training and validation sets. NPAR has good clinical value in assessing 1-year postoperative prognosis in elderly patients with hip fractures.


Subjects
Hip Fractures, Neutrophils, Aged, Humans, Male, Aged 80 and over, Female, Retrospective Studies, Hip Fractures/surgery, Albumins, Calibration
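
As a rough illustration of the ratio at the core of this study, the sketch below computes NPAR from admission labs and flags patients above a cutoff. The column names, units, and the cutoff value are hypothetical, not taken from the paper.

```python
import pandas as pd

def npar(neutrophil_pct: float, albumin: float) -> float:
    """Neutrophil percentage to albumin ratio: neutrophil percentage of
    white cells divided by serum albumin (assumed here in g/L)."""
    return neutrophil_pct / albumin

# Hypothetical admission data for three patients.
df = pd.DataFrame({
    "neutrophil_pct": [78.0, 65.2, 84.1],  # % of white blood cells
    "albumin_g_l":    [32.5, 40.1, 28.3],  # serum albumin, g/L
})
df["NPAR"] = df["neutrophil_pct"] / df["albumin_g_l"]
df["high_NPAR"] = df["NPAR"] > 2.0  # illustrative cutoff, not the paper's
print(df.round(2))
```
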
2.
Front Immunol ; 14: 1168780, 2023.
Article in English | MEDLINE | ID: mdl-37503333

ABSTRACT

Background: Osteoarthritis (OA) is a degenerative disease closely related to aging. Nevertheless, the role and mechanisms of aging in osteoarthritis remain unclear. This study aims to identify potential aging-related biomarkers in OA and to explore the role and mechanisms of aging-related genes and the immune microenvironment in OA synovial tissue. Methods: Normal and OA synovial gene expression profile microarrays were obtained from the Gene Expression Omnibus (GEO) database, and aging-related genes (ARGs) were obtained from the Human Aging Genomic Resources (HAGR) database. Gene Ontology (GO), Kyoto Encyclopedia of Genes and Genomes (KEGG), Disease Ontology (DO), and gene set variation analysis (GSVA) enrichment analyses were used to uncover the underlying mechanisms. To identify hub aging-related differentially expressed genes highly correlated with OA features (hub OA-ARDEGs), weighted gene co-expression network analysis (WGCNA) and machine learning methods were used. Furthermore, we created diagnostic nomograms and receiver operating characteristic (ROC) curves to assess the ability of the hub OA-ARDEGs to diagnose OA and to predict which miRNAs and TFs they might act on. The single-sample gene set enrichment analysis (ssGSEA) algorithm was applied to examine the immune infiltration characteristics of OA and their relationship with the hub OA-ARDEGs. Results: We discovered 87 ARDEGs in normal and OA synovium samples. According to functional enrichment, ARDEGs are primarily associated with inflammatory regulation, cellular stress response, cell cycle regulation, and transcriptional regulation. MCL1, SIK1, JUND, NFKBIA, and JUN were identified as hub OA-ARDEGs with excellent OA diagnostic ability. The Wilcoxon test showed that the hub OA-ARDEGs were all significantly downregulated in OA, which was validated in the validation set and by qRT-PCR. Using the ssGSEA algorithm, we discovered that 15 types of immune cell infiltration and six types of immune cell activation were significantly increased in OA synovial samples and correlated well with the hub OA-ARDEGs. Conclusion: Synovial aging may promote the progression of OA by inducing immune inflammation. MCL1, SIK1, JUND, NFKBIA, and JUN can be used as novel diagnostic biomolecular markers and potential therapeutic targets for OA.


Subjects
Computational Biology, Osteoarthritis, Humans, Myeloid Cell Leukemia Sequence 1 Protein, Osteoarthritis/diagnosis, Osteoarthritis/genetics, Aging, Biomarkers
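
The ARDEG screen described above boils down to intersecting differentially expressed genes with a curated aging-gene list. A minimal sketch with made-up expression statistics and illustrative thresholds (the abstract does not state the actual cutoffs):

```python
import pandas as pd

# Hypothetical inputs: a differential-expression table (gene, log2FC,
# adjusted p) from GEO, and an aging-related gene list from HAGR.
deg = pd.DataFrame({
    "gene":   ["MCL1", "SIK1", "JUND", "NFKBIA", "JUN", "COL1A1"],
    "log2FC": [-1.4,   -2.1,   -1.1,   -1.8,     -2.4,   0.9],
    "adj_p":  [1e-4,    3e-5,   2e-3,   8e-6,     1e-6,   0.2],
})
aging_genes = {"MCL1", "SIK1", "JUND", "NFKBIA", "JUN", "FOXO3"}

# Differentially expressed: |log2FC| > 1 and adjusted p < 0.05 (illustrative).
is_deg = (deg["log2FC"].abs() > 1) & (deg["adj_p"] < 0.05)
ardegs = deg[is_deg & deg["gene"].isin(aging_genes)]
print(ardegs["gene"].tolist())  # aging-related DEGs (ARDEGs)
```
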
3.
IEEE Trans Cybern ; 53(3): 1790-1801, 2023 Mar.
Article in English | MEDLINE | ID: mdl-34936563

ABSTRACT

Designing effective and efficient classifiers is a challenging task given that data may exhibit different geometric structures and complex intrarelationships. As a fundamental component of granular computing, information granules play a key role in human cognition. Therefore, it is of great interest to develop classifiers based on information granules such that highly interpretable, human-centric models with higher accuracy can be constructed. In this study, we elaborate on a novel design methodology for granular classifiers in which information granules play a fundamental role. First, information granules are formed on the basis of labeled patterns following the principle of justifiable granularity. The diversity of the samples embraced by each information granule is quantified and controlled in terms of an entropy criterion. This design implies that the information granules constructed in this way form sound, homogeneous descriptors characterizing the structure and diversity of the available experimental data. Next, granular classifiers are built in the presence of the formed information granules. The classification result for any input instance is determined by summing the contents of the related information granules weighted by membership degrees. Experiments concerning both synthetic data and publicly available datasets demonstrate that the proposed models exhibit better prediction abilities than some commonly encountered classifiers (namely, linear regression, support vector machine, naïve Bayes, decision tree, and neural networks) and come with enhanced interpretability.
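
The principle of justifiable granularity invoked above is commonly instantiated in one dimension by choosing interval bounds that trade coverage (how much data the granule embraces) against specificity (how narrow it is). Below is a sketch of that baseline construction only, not the paper's entropy-controlled variant:

```python
import numpy as np

def justifiable_interval(data, n_grid=100):
    """One-dimensional granule [a, b] around the median maximizing
    coverage * specificity - a common reading of the principle of
    justifiable granularity (the paper adds entropy-based diversity
    control on top of this idea)."""
    data = np.sort(np.asarray(data, dtype=float))
    med, span = np.median(data), data.max() - data.min()
    best_v, best_ab = -1.0, (med, med)
    for a in np.linspace(data.min(), med, n_grid):
        for b in np.linspace(med, data.max(), n_grid):
            coverage = np.mean((data >= a) & (data <= b))
            specificity = 1.0 - (b - a) / span  # narrower = more specific
            if coverage * specificity > best_v:
                best_v, best_ab = coverage * specificity, (a, b)
    return best_ab

samples = np.random.default_rng(0).normal(0.0, 1.0, 300)
print(np.round(justifiable_interval(samples), 3))
```
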

4.
IEEE Trans Cybern ; 53(6): 3818-3828, 2023 Jun.
Article in English | MEDLINE | ID: mdl-35468071

ABSTRACT

A distributed flow-shop scheduling problem with lot-streaming that considers completion time and total energy consumption is addressed. It requires optimally assigning jobs to multiple distributed factories and, at the same time, sequencing them. A bi-objective mathematical model is first developed to describe the considered problem. Then, an improved Jaya algorithm is proposed to solve it. The Nawaz-Enscore-Ham (NEH) initialization rule, a job-factory assignment strategy, and improvement strategies for makespan and energy efficiency are designed based on the problem's characteristics to improve Jaya's performance. Finally, experiments are carried out on 120 instances of 12 scales. The performance of the improved strategies is verified. Comparisons and discussions show that the Jaya algorithm improved by the designed strategies is highly competitive for solving the considered problem under the makespan and total energy consumption criteria.
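
For reference, the NEH rule cited above is the classic constructive heuristic for permutation flow shops: sort jobs by decreasing total processing time, then insert each job into the best position of the partial sequence. The sketch below covers only the single-factory case; the paper's distributed, lot-streaming variant is more involved.

```python
import numpy as np

def makespan(seq, p):
    """Completion time of the last job on the last machine for a permutation
    flow shop; p[j, m] = processing time of job j on machine m."""
    c = np.zeros(p.shape[1])
    for j in seq:
        for m in range(p.shape[1]):
            c[m] = max(c[m], c[m - 1] if m > 0 else 0.0) + p[j, m]
    return c[-1]

def neh(p):
    """NEH: order jobs by decreasing total processing time, then insert each
    job at the position of the partial sequence that minimizes makespan."""
    seq = []
    for j in np.argsort(-p.sum(axis=1)):
        candidates = [seq[:i] + [j] + seq[i:] for i in range(len(seq) + 1)]
        seq = min(candidates, key=lambda s: makespan(s, p))
    return seq

p = np.array([[5, 9, 8], [9, 3, 10], [9, 4, 5], [4, 8, 8]])  # 4 jobs x 3 machines
best = neh(p)
print(best, makespan(best, p))
```
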

5.
Sci Rep ; 12(1): 21572, 2022 12 14.
Article in English | MEDLINE | ID: mdl-36517648

ABSTRACT

Due to the proliferation of contemporary computer-integrated systems and communication networks, there is more concern than ever regarding privacy, given the potential for sensitive data exploitation. A recent cyber-security research trend is to focus on security principles and develop the foundations for designing safety-critical systems. In this work, we investigated the problem of verifying current-state opacity in discrete event systems using labeled Petri nets. A system is current-state opaque provided that the current-state estimate cannot be revealed as a subset of secret states. We introduced a new sub-model of the system, named an observer net. The observer net has the same structure as the plant but is distinguished by the use of colored markers as well as simultaneous and recursive transition enabling and firing, which offer efficient state estimation. We considered two settings of the proposed approach: an on-line setting, in which a current-state opacity algorithm waits for the occurrence of an observable event and determines whether the current observation of the plant reveals the secret behaviour, and an off-line setting, in which the verification problem is solved based on a state estimator called a colored estimator. In this context, necessary and sufficient conditions for verifying opacity are developed, with illustrative examples to demonstrate the presented approach.


Subjects
Algorithms, Computer Security, Computer Systems
6.
IEEE Trans Cybern ; PP2022 Dec 01.
Article in English | MEDLINE | ID: mdl-36455086

ABSTRACT

Since a noisy image has inferior characteristics, the direct use of Fuzzy C-Means (FCM) to segment it often produces poor segmentation results. Intuitively, using its ideal value (the noise-free image) benefits FCM's robustness. Therefore, realizing accurate noise estimation in FCM is a new and important task. To date, only two noise-estimation-based FCM algorithms have been proposed for image segmentation: 1) deviation-sparse FCM (DSFCM) and 2) our earlier proposed residual-driven FCM (RFCM). In this article, we make a thorough comparative study of DSFCM and RFCM. We demonstrate that the RFCM framework can realize more accurate noise estimation than DSFCM when different types of noise are involved, mainly owing to its use of noise distribution characteristics instead of the noise sparsity used in DSFCM. We show that DSFCM is a particular case of RFCM, which signifies that the two are the same when only impulse noise is involved. With a spatial information constraint, we demonstrate RFCM's superior effectiveness and efficiency over DSFCM through supporting experiments with different levels of single, mixed, and unknown noise.
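
Both DSFCM and RFCM extend plain Fuzzy C-Means with a noise-estimation term. As background, here is a compact implementation of the FCM baseline they share (standard membership and prototype updates, not the residual-driven extension):

```python
import numpy as np

def fcm(X, c=3, m=2.0, n_iter=100, seed=0):
    """Plain Fuzzy C-Means: alternate prototype and membership updates.
    X has shape (n_samples, n_features); returns partition matrix U (c x n)
    and prototypes V (c x n_features)."""
    gen = np.random.default_rng(seed)
    U = gen.random((c, len(X)))
    U /= U.sum(axis=0)                         # fuzzy partition matrix
    for _ in range(n_iter):
        Um = U ** m
        V = (Um @ X) / Um.sum(axis=1, keepdims=True)        # prototypes
        d = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2) + 1e-12
        w = d ** (-2.0 / (m - 1.0))
        U = w / w.sum(axis=0)                  # membership update
    return U, V

gen = np.random.default_rng(1)
X = np.vstack([gen.normal(mu, 0.3, (50, 2)) for mu in (0.0, 2.0, 4.0)])
U, V = fcm(X, c=3)
print(np.round(V, 2))
```
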

7.
Geobiology ; 20(6): 790-809, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36250398

ABSTRACT

Most previous studies focused on the redox state of deep water, leading to an incomplete understanding of the spatiotemporal evolution of the redox-stratified ocean during the Ediacaran-Cambrian transition. To decode the redox conditions of shallow marine environments during the late Ediacaran, this study presents I/(Ca + Mg), carbon and oxygen isotope, major, trace, and rare earth element data of subtidal to peritidal dolomite from the Dengying Formation at Yangba, South China. In combination with reported radiometric and biostratigraphic data, the Dengying Formation and coeval successions worldwide are subdivided into a positive δ13C excursion (up to ~6‰) in the lower part (~551-547 Ma) and a stable δ13C plateau (generally between 0‰ and 3‰) in the middle-upper part (~547-541 Ma). The overall low I/(Ca + Mg) ratios (<0.5 µmol/mol) and slightly negative to absent Ce anomalies (0.80 < (Ce/Ce*)SN < 1.25) point to low oxygen levels in shallow marine environments at Yangba. Moreover, four pulsed negative excursions in (Ce/Ce*)SN (between 0.62 and 0.8) and two associated positive excursions in I/(Ca + Mg) ratios (up to 2.02 µmol/mol) are observed, indicative of weak oxygenation events in the shallow marine environments. Comparison with other upper Ediacaran shallow-water successions worldwide reveals that the (Ce/Ce*)SN and I/(Ca + Mg) values generally fall within the Precambrian range but that their temporal trends differ among successions (e.g., Ce anomaly profiles differ significantly between Yangba and the Yangtze Gorge sections), pointing to low oxygen levels with high redox heterogeneity in the surface ocean. This is consistent with widespread anoxia as revealed by the low δ238U values reported in previous studies. Thus, atmospheric oxygen concentrations during the late Ediacaran are estimated to have been very low, similar to conditions during most of the Mesoproterozoic to early Neoproterozoic.


Subjects
Fossils, Seawater, Carbon, Geologic Sediments, Oceans and Seas, Oxidation-Reduction, Oxygen/analysis, Oxygen Isotopes, Water
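
For readers unfamiliar with the proxy, the shale-normalized Ce anomaly used above can be computed as sketched below. The PAAS normalization values are the commonly cited Taylor & McLennan (1985) ones, and the geometric La-Pr interpolation for Ce* is one of several conventions; the abstract does not state which the authors used.

```python
# PAAS shale values (Taylor & McLennan, 1985), in ppm, used for shale
# normalization (the SN subscript above).
PAAS = {"La": 38.2, "Ce": 79.6, "Pr": 8.83}

def ce_anomaly_sn(la_ppm, ce_ppm, pr_ppm):
    """(Ce/Ce*)SN with Ce* interpolated geometrically from neighbors La and
    Pr; other conventions (e.g., 2Pr - Nd) exist."""
    la = la_ppm / PAAS["La"]
    ce = ce_ppm / PAAS["Ce"]
    pr = pr_ppm / PAAS["Pr"]
    return ce / (la * pr) ** 0.5

# Hypothetical dolomite sample: a value below ~0.8 would register as one of
# the negative Ce excursions discussed above (transient oxygenation).
print(round(ce_anomaly_sn(la_ppm=4.1, ce_ppm=6.8, pr_ppm=0.95), 2))
```
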
8.
J Environ Manage ; 324: 116282, 2022 Dec 15.
Article in English | MEDLINE | ID: mdl-36191506

ABSTRACT

The prediction of air pollution plays an important role in reducing pollutant emissions and supporting early warning and control, and it has therefore attracted extensive modeling research. However, most current research fails to quantify the uncertainty in prediction and relies on traditional fuzzy information granulation to process the data, resulting in the loss of much detailed information. Therefore, this paper proposes a hybrid model based on decomposition and granular fuzzy information to solve these problems. The trend component and the granulated fluctuation component are predicted separately, and the results are combined to obtain the change trend and fluctuation range of the sequence. PM2.5 concentrations from three cities are selected for the experiments. The results show that the evaluation indices of the proposed model are significantly lower than those of the benchmark models, and a variety of statistical methods are used to further verify the effectiveness of the prediction model.


Subjects
Air Pollutants, Air Pollution, Humans, Uncertainty, Environmental Monitoring/methods, Air Pollution/analysis, Air Pollutants/analysis, Particulate Matter/analysis
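
The "traditional fuzzy information granulation" the paper improves upon typically compresses each time window into one coarse granule, losing within-window detail, which is exactly the criticism raised above. A minimal sketch of that baseline on mock hourly PM2.5 data:

```python
import numpy as np

def granulate(series, window=24):
    """Traditional fuzzy information granulation: summarize each window by a
    triangular granule (low, core, up) = (min, mean, max). The hybrid model
    above refines this coarse step to retain more detail."""
    out = []
    for i in range(0, len(series) - window + 1, window):
        w = np.asarray(series[i:i + window], dtype=float)
        out.append((w.min(), w.mean(), w.max()))
    return out

gen = np.random.default_rng(0)
pm25 = 60 + 15 * np.sin(np.arange(96) / 8) + gen.normal(0, 5, 96)  # mock hourly data
for low, core, up in granulate(pm25):
    print(f"low={low:.1f} core={core:.1f} up={up:.1f}")
```
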
9.
Sci Rep ; 12(1): 16302, 2022 09 29.
Article in English | MEDLINE | ID: mdl-36175585

ABSTRACT

In this paper, we consider the problem of joint state estimation under attack in partially observed discrete event systems. An operator observes the evolution of the plant to estimate its current state. The attacker may tamper with the sensor readings received by the operator, inserting dummy events or erasing real events that have occurred in the plant, with the goal of preventing the operator from computing the correct state estimate. An attack function is said to be harmful if the state estimate consistent with the correct observation and the state estimate consistent with the corrupted observation satisfy a given misleading relation. On the basis of an automaton called a joint estimator, we show how to compute a supremal stealthy joint subestimator that allows the attacker to remain stealthy, no matter what the future evolution of the plant is. Finally, we show how to select a stealthy and harmful attack function based on such a subestimator.
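
To make the attack concrete, the toy sketch below performs set-based current-state estimation for a small nondeterministic automaton and shows how a single inserted dummy event changes the operator's estimate. The plant, the event names, and the "harmful" reading of the result are invented for illustration; they are not the paper's joint-estimator construction.

```python
from itertools import chain

# Toy partially observed plant: 4 states, one observable event 'a'.
# delta maps (state, event) -> set of successors (nondeterministic).
delta = {(0, "a"): {1, 2}, (1, "a"): {1}, (2, "a"): {3}}

def estimate(observation, init=frozenset({0})):
    """Set-based current-state estimation from the observed event string."""
    est = set(init)
    for e in observation:
        est = set(chain.from_iterable(delta.get((x, e), set()) for x in est))
    return est

print(estimate("a"))   # honest channel: operator infers {1, 2}
print(estimate("aa"))  # attacker inserts one dummy 'a': operator infers {1, 3}
# If state 3 matters for the misleading relation, this single insertion is
# both stealthy (it yields a valid observation) and harmful.
```
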

10.
IEEE Trans Cybern ; PP2022 Jun 21.
Article in English | MEDLINE | ID: mdl-35727789

ABSTRACT

In this study, we establish a new design methodology for granular models, realized by augmenting existing numeric models through the analysis and modeling of their associated prediction errors. Several novel approaches to the construction of granular architectures that augment existing numeric models by incorporating modeling errors are proposed in order to improve and quantify the numeric models' prediction abilities. The resulting construct is a granular model that produces granular outcomes, generated by aggregating the outputs of the numeric model (or its granular counterpart) with the corresponding error terms. Three different architectural developments are formulated and analyzed. In comparison with numeric models, which strive for the highest accuracy, granular models are developed so that they produce comprehensive prediction outcomes realized as information granules. By virtue of the granular nature of the results, the coverage and specificity of the constructed information granules express the quality of the prediction results in a more descriptive and comprehensive manner. The performance of the granular constructs is evaluated using the criteria of coverage and specificity, which are pertinent to the granular outputs produced by the granular models.
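
The coverage and specificity criteria used to evaluate granular outputs admit a simple numeric reading: coverage is the fraction of targets that fall inside their intervals, while specificity rewards narrow intervals. A sketch with hypothetical predictions and error-model bounds (the paper's exact definitions may differ in detail):

```python
import numpy as np

def coverage(y, lo, hi):
    """Fraction of targets falling inside their predicted intervals."""
    return np.mean((y >= lo) & (y <= hi))

def specificity(lo, hi, y_range):
    """One minus the average relative interval width; narrower intervals
    (more specific granules) score higher."""
    return np.mean(1.0 - (hi - lo) / y_range)

# Hypothetical granular outputs: numeric prediction +/- a modeled error bound.
y    = np.array([3.1, 4.8, 5.2, 6.9])
pred = np.array([3.0, 5.0, 5.5, 6.5])
err  = np.array([0.4, 0.3, 0.2, 0.5])   # per-point error-model output
lo, hi = pred - err, pred + err
print(coverage(y, lo, hi), round(specificity(lo, hi, y.max() - y.min()), 3))
```
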

11.
Sci Rep ; 12(1): 5000, 2022 03 23.
Article in English | MEDLINE | ID: mdl-35322073

ABSTRACT

In recent years, XACML (eXtensible Access Control Markup Language) has been widely used in a variety of research fields, especially in access control. However, when the policy sets defined in XACML become large and complex, the policy evaluation time increases significantly. To improve policy evaluation performance, we propose an optimization algorithm based on the DPCA (Density Peak Cluster Algorithm) to improve the clustering effect on large-scale complex policy sets. Combined with this algorithm, an efficient policy evaluation engine, named DPEngine, is proposed to speed up policy matching and reduce the policy evaluation time. We compared the policy evaluation time of DPEngine with those of Sun PDP, HPEngine, XEngine, and SBA-XACML. The experimental results show that (1) when the number of requests reaches 10,000, the DPEngine evaluation time on a large-scale policy set with 100,000 rules is approximately 2.23%, 3.47%, 3.67%, and 4.06% of that of Sun PDP, HPEngine, XEngine, and SBA-XACML, respectively, and (2) as the number of requests increases, the DPEngine evaluation time grows linearly. Compared with other policy evaluation engines, DPEngine has the advantages of efficiency and stability.


Subjects
Algorithms, Language, Cluster Analysis, Policies
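
DPEngine builds on density peak clustering. The sketch below implements the core DPCA step (Rodriguez & Laio, 2014) on toy 2-D points rather than on XACML policies, whose distance function the abstract does not specify: local density rho within a cutoff, distance delta to the nearest denser point, centers maximizing rho * delta, and the rest following their nearest denser neighbor.

```python
import numpy as np

def density_peaks(X, dc=0.5, n_centers=2):
    """Core of the Density Peak Cluster Algorithm on points X (n x d)."""
    n = len(X)
    D = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    rho = (D < dc).sum(axis=1) - 1                # local density
    order = np.lexsort((np.arange(n), -rho))      # decreasing density
    delta, nearest = np.full(n, D.max()), np.arange(n)
    for rank, i in enumerate(order):
        if rank == 0:
            continue                              # densest point keeps max delta
        denser = order[:rank]
        j = denser[np.argmin(D[i, denser])]
        delta[i], nearest[i] = D[i, j], j
    centers = np.argsort(-rho * delta)[:n_centers]
    labels = np.full(n, -1)
    labels[centers] = np.arange(n_centers)
    for i in order:                               # densest first
        if labels[i] == -1:
            labels[i] = labels[nearest[i]]        # follow denser neighbor
    return labels, centers

gen = np.random.default_rng(0)
X = np.vstack([gen.normal(0, 0.2, (30, 2)), gen.normal(2, 0.2, (30, 2))])
labels, centers = density_peaks(X)
print(centers, np.bincount(labels))
```
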
12.
Sci Prog ; 105(1): 368504221075466, 2022.
Article in English | MEDLINE | ID: mdl-35196198

ABSTRACT

This work deals with the language-based opacity verification and enforcement problems in discrete event systems modeled with labeled Petri nets. Opacity is a security property that relates to privacy protection by hiding secret information of a system from an external observer called an "intruder". A secret can be a subset of a system's language. In this case, opacity is referred to as language-based opacity. A system is said to be language-based opaque if an intruder, with a partial observation on the system's behavior, cannot deduce whether the sequences of events corresponding to the generated observations are included in the secret language or not. We propose a novel and efficient approach for language-based opacity verification and enforcement, using the concepts of basis markings and basis partition. First, a sufficient condition is formulated to check language-based opacity for labeled Petri nets by solving an integer-programming problem. A unique graph, called a modified basis reachability graph (MBRG), is then derived to verify different language-based opacity properties. The proposed method relaxes the acyclicity assumption of the unobservable transition subnet thanks to the basis partition notion. A new embedded insertion function technique is also provided to deal with opacity enforcement. This technique ensures that no new observed behavior is created. A verification algorithm is developed to check the enforceability of a system. Finally, once a system is proved to be enforceable, an algorithm is given to construct a new structure, called an insertion automaton, which synthesizes all possible insertion functions that ensure opacity.


Subjects
Algorithms, Language, Computer Simulation
13.
IEEE Trans Cybern ; 52(6): 4126-4135, 2022 Jun.
Article in English | MEDLINE | ID: mdl-33119518

ABSTRACT

Information granulation and degranulation play a fundamental role in granular computing (GrC). Given a collection of information granules (referred to as reference information granules), the essence of the granulation process (encoding) is to represent each data item (either numeric or granular) in terms of these reference information granules. The degranulation process (decoding), which realizes the reconstruction of the original data, is associated with a certain level of reconstruction error. An important issue is how to reduce the reconstruction error so that the data can be reconstructed more accurately. In this study, the granulation process is realized by fuzzy clustering. A novel neural network is leveraged in the subsequent degranulation process, which helps significantly reduce the reconstruction error. We show that the proposed degranulation architecture exhibits improved capabilities in reconstructing the original data in comparison with other methods. A series of experiments using synthetic data and publicly available datasets from the machine-learning repository demonstrates the superiority of the proposed method over existing alternatives.


Subjects
Algorithms, Automated Pattern Recognition, Cluster Analysis, Computer Neural Networks, Automated Pattern Recognition/methods
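
The granulation/degranulation pair discussed above has a standard closed form when fuzzy clustering supplies the reference granules: encoding computes FCM memberships to the prototypes, and the classic decoder aggregates prototypes with those memberships. The sketch shows this baseline, whose reconstruction error the paper's neural decoder is designed to reduce; the prototypes and test point are made up.

```python
import numpy as np

def encode(x, V, m=2.0):
    """Granulation: membership of x in reference prototypes V (FCM formula)."""
    d = np.linalg.norm(V - x, axis=1) + 1e-12
    w = d ** (-2.0 / (m - 1.0))
    return w / w.sum()

def decode(u, V, m=2.0):
    """Classic degranulation: membership-weighted prototype aggregation.
    The cited work replaces this step with a learned neural decoder."""
    um = u ** m
    return (um @ V) / um.sum()

V = np.array([[0.0, 0.0], [2.0, 2.0], [4.0, 0.0]])  # reference granules
x = np.array([1.0, 0.5])
u = encode(x, V)
x_hat = decode(u, V)
print(u.round(3), x_hat.round(3), np.linalg.norm(x - x_hat).round(3))
```
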
14.
IEEE Trans Cybern ; 52(7): 7029-7038, 2022 Jul.
Article in English | MEDLINE | ID: mdl-33151886

ABSTRACT

Rule-based fuzzy models play a dominant role in fuzzy modeling and come with extensive applications in the system modeling area. Due to the presence of system modeling error, it is impossible to construct a model that exactly fits the experimental evidence and, at the same time, exhibits high generalization capabilities. To alleviate these problems, in this study we elaborate on a realization of granular outputs for rule-based fuzzy models with the aim of effectively quantifying the associated modeling errors. By analyzing the characteristics of the modeling errors, an error model is constructed to characterize the deviations between the estimated outputs and the expected ones. The resulting granular model arises as an aggregation of the regression model and the error model. Information granularity plays a central role in the construction of the granular outputs (intervals). The quality of the produced interval estimates is quantified in terms of the coverage and specificity criteria. The optimal allocation of information granularity is determined through a combined index involving these two criteria pertinent to the evaluation of interval outputs. A series of experimental studies is provided to demonstrate the effectiveness of the proposed approach and show its superiority over the traditional statistics-based method.


Subjects
Algorithms, Fuzzy Logic
15.
IEEE Trans Cybern ; 52(4): 2214-2224, 2022 Apr.
Article in English | MEDLINE | ID: mdl-32721903

ABSTRACT

In this article, we are concerned with the formation of type-2 information granules in a two-stage approach. We present a comprehensive algorithmic framework which gives rise to information granules of a higher type (type-2, to be specific) such that the key structure of the local granular data, their topologies, and their diversities become fully reflected and quantified. In contrast to traditional collaborative clustering, where local structures (information granules) are obtained by running algorithms on the local datasets and communicating findings across sites, we propose a way of characterizing granular data by forming a suite of higher-type information granules that reveal an overall structure of a collection of locally available datasets. Information granules built at the lower level on the basis of local data sources are weighted by the number of data they represent, while the information granules formed at the higher level of the hierarchy are more abstract and general, thus facilitating a hierarchical description of data realized at different levels of detail. The construction of information granules is completed by resorting to fuzzy clustering algorithms (more specifically, the well-known Fuzzy C-Means). In the formation of information granules, we follow the fundamental principle of granular computing, viz., the principle of justifiable granularity. Experimental studies concerning selected publicly available machine-learning datasets are reported.


Subjects
Algorithms, Automated Pattern Recognition, Cluster Analysis
16.
IEEE Trans Cybern ; 52(8): 7563-7576, 2022 Aug.
Article in English | MEDLINE | ID: mdl-33417577

ABSTRACT

This article provides a solution to tube-based output feedback robust model predictive control (RMPC) for discrete-time linear parameter varying (LPV) systems with bounded disturbances and noises. The proposed approach synthesizes an offline optimization problem to design a look-up table and an online tube-based output feedback RMPC with tightened constraints and scaled terminal constraint sets. In the offline optimization problem, a sequence of nested robust positively invariant (RPI) sets and robust control invariant (RCI) sets, for estimation errors and control errors respectively, is optimized and stored in the look-up table. In the online optimization problem, real-time control parameters are searched based on the bounds of the time-varying estimation error sets. Considering the characteristics of the uncertain scheduling parameter in LPV systems, the online tube-based output feedback RMPC scheme adopts one-step nominal system prediction with scaled terminal constraint sets. The resulting online optimization problem, with fewer decision variables and constraints, is simple and efficient and carries a lower online computational burden. Recursive feasibility of the optimization problem and robust stability of the controlled LPV system are guaranteed by ensuring that the nominal system converges to the terminal constraint set while uncertain state trajectories are constrained within robust tubes centered on the nominal system. A numerical example is given to verify the approach.

17.
IEEE Trans Cybern ; 52(8): 7612-7623, 2022 Aug.
Article in English | MEDLINE | ID: mdl-34623288

ABSTRACT

In this article, we elaborate on a Kullback-Leibler (KL) divergence-based Fuzzy C-Means (FCM) algorithm by incorporating a tight wavelet frame transform and morphological reconstruction (MR). To make the membership degrees of each image pixel closer to those of its neighbors, a KL divergence term on the partition matrix is introduced as a part of FCM, resulting in KL divergence-based FCM. To make the proposed FCM robust, a filtered term is augmented in its objective function, where MR is used for image filtering. Since tight wavelet frames provide redundant representations of images, the proposed FCM is performed in a feature space constructed by tight wavelet frame decomposition. To further improve its segmentation accuracy (SA), a segmented feature set is reconstructed by minimizing the inverse process of the objective function. Each reconstructed feature is reassigned to the closest prototype, thus correcting abnormal features produced in the reconstruction process. Moreover, a segmented image is reconstructed by tight wavelet frame reconstruction. Finally, supporting experiments on synthetic, medical, and real-world images are reported. The experimental results show that the proposed algorithm works well and achieves better segmentation performance than its peers. Quantitatively, its average SA improvements over its peers are 4.06%, 3.94%, and 4.41% when segmenting synthetic, medical, and real-world images, respectively. Moreover, the proposed algorithm requires less time than most FCM-related algorithms.


Subjects
Fuzzy Logic, Magnetic Resonance Imaging, Algorithms, Cluster Analysis, Computer-Assisted Image Processing/methods, Magnetic Resonance Imaging/methods, Wavelet Analysis
18.
Sensors (Basel) ; 21(21)2021 Nov 04.
Article in English | MEDLINE | ID: mdl-34770637

ABSTRACT

Due to the limitations of data transfer technologies, existing studies on urban traffic control have mainly focused on single-dimension control, such as traffic signal control or vehicle route guidance, to alleviate traffic congestion. In real traffic, however, the distribution of traffic flow is the result of multiple dimensions whose future state is influenced by each dimension's decisions. The development of the Internet of Vehicles now enables an integrated intelligent transportation system. This paper proposes an integrated intelligent transportation model that optimizes predictive traffic signal control and predictive vehicle route guidance simultaneously to alleviate traffic congestion, based on their feedback regulation relationship. The challenges of this model are that the nonlinear feedback relationship between the dimensions is hard to formulate and that designing a solving algorithm that attains Pareto optimality for multi-dimension control is complex. In the integrated model, we introduce two intermediate variables, the predictive traffic flow and the predictive waiting time, to link traffic signal control and vehicle route guidance in both directions. Inspired by game theory, a distributed updating algorithm based on an asymmetric information exchange framework is designed to solve the integrated model. Finally, an experimental study in two typical traffic scenarios shows that more than 73.33% of the considered cases adopting the integrated model achieve Pareto optimality.

19.
Risk Manag Healthc Policy ; 14: 4211-4222, 2021.
Article in English | MEDLINE | ID: mdl-34675715

ABSTRACT

PURPOSE: The aim of this paper was to build a performance evaluation index system for combined medical and old-age care services in Chinese pension institutions. METHODS: A two-stage data envelopment analysis (DEA) was used to evaluate the performance of 30 pension institutions in China. RESULTS: The results show that the two-stage DEA indicated relatively high efficiency of the medical and nursing care services, but resource allocation still needs to be further optimized. Institutions that are not DEA-effective need to reduce the five input factors of operations, management, fixed assets, technology, and services. CONCLUSION: In the output dimension, the service evaluation effect and safety management effect need to be improved. The performance of combined old-age and medical care in old-age institutions can be improved in terms of investment in fixed assets, methods of capital subsidies, supervision and management, and standardized operations.
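
As background on the method, a single-stage, input-oriented CCR DEA score can be obtained from one small linear program per institution; the paper's two-stage network DEA decomposes the process further, but the sketch below (with invented inputs and outputs) shows the basic efficiency computation:

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    """Input-oriented CCR DEA: for each unit o, minimize theta subject to a
    nonnegative combination of all units using at most theta * X[o] inputs
    while producing at least Y[o] outputs. Score 1.0 = on the frontier."""
    n, n_in = X.shape
    n_out = Y.shape[1]
    scores = []
    for o in range(n):
        c = np.zeros(1 + n)                               # vars: [theta, lambda_1..n]
        c[0] = 1.0
        A_in = np.hstack([-X[o].reshape(-1, 1), X.T])     # sum lam*x_j <= theta*x_o
        A_out = np.hstack([np.zeros((n_out, 1)), -Y.T])   # sum lam*y_j >= y_o
        res = linprog(c,
                      A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.concatenate([np.zeros(n_in), -Y[o]]),
                      bounds=[(0, None)] * (1 + n))
        scores.append(res.x[0])
    return np.array(scores)

# Invented data: 4 institutions, inputs = (staff, beds), output = residents served.
X = np.array([[20.0, 30.0], [40.0, 55.0], [30.0, 30.0], [50.0, 80.0]])
Y = np.array([[100.0], [150.0], [160.0], [180.0]])
print(dea_ccr_input(X, Y).round(3))
```
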

20.
Sci Prog ; 104(3): 368504211030833, 2021.
Article in English | MEDLINE | ID: mdl-34292845

ABSTRACT

Model abstraction for finite state automata is helpful for decreasing computational complexity and improving comprehensibility in the verification and control synthesis of discrete-event systems (DES). Supremal quasi-congruence equivalence is an effective method for reducing the state space of DES, and effective algorithms for it based on graph theory have been developed. In this paper, a new method is proposed to convert the supremal quasi-congruence computation into a binary linear programming problem, which can be solved by many powerful integer linear programming and satisfiability (SAT) solvers. Partitioning states into cosets is treated as allocating states to an unknown number of cosets, and the requirement of finding the coarsest quasi-congruence is equivalent to using the least number of cosets. The novelty of this paper is to solve the optimal partitioning problem as an optimal state-to-coset allocation problem, which is solved by optimization methods implemented, respectively, as a mixed integer linear program (MILP) in MATLAB and a binary linear program (BLP) in CPLEX. To reduce the computation time, the translation process is first optimized by introducing fewer decision variables and simplifying the constraints of the programming problem. Second, the translation process formulates several techniques for converting logic constraints on finite automata into binary linear constraints; these techniques will be helpful for other researchers exploiting integer linear programming and SAT solvers for partitioning or grouping problems. Third, the computational efficiency and correctness of the proposed method are verified with two different solvers. The proposed model abstraction approach is applied to simplify the large-scale supervisor model of a manufacturing system with five automated guided vehicles. The proposed method is not only a new solution for the coarsest quasi-congruence computation but also provides a more intuitive understanding of the quasi-congruence relation in supervisory control theory. A future research direction is to apply more computationally efficient solvers to the optimal state-to-coset allocation problem.
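
The state-to-coset allocation idea can be made concrete with a small binary program: binary variables assign each state to a coset, linking constraints track which cosets are used, and the objective minimizes that count. The sketch below (in Python/PuLP rather than the paper's MATLAB MILP or CPLEX BLP) keeps only this skeleton, with invented "conflict" pairs standing in for the actual quasi-congruence conditions on transitions:

```python
import pulp

# Toy instance: allocate 5 states to at most 5 cosets, minimizing the number
# of cosets used; 'conflicts' are state pairs that may not share a coset.
states, cosets = range(5), range(5)
conflicts = [(0, 3), (1, 3), (2, 4)]

prob = pulp.LpProblem("coarsest_partition", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (states, cosets), cat="Binary")  # state -> coset
y = pulp.LpVariable.dicts("y", cosets, cat="Binary")            # coset used?

prob += pulp.lpSum(y[k] for k in cosets)                 # fewest cosets
for s in states:
    prob += pulp.lpSum(x[s][k] for k in cosets) == 1     # exactly one coset
    for k in cosets:
        prob += x[s][k] <= y[k]                          # link usage variable
for s, t in conflicts:
    for k in cosets:
        prob += x[s][k] + x[t][k] <= 1                   # separation

prob.solve(pulp.PULP_CBC_CMD(msg=False))
partition = [[s for s in states if x[s][k].value() > 0.5]
             for k in cosets if y[k].value() > 0.5]
print(partition)   # e.g., two cosets suffice for these conflicts
```
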
