1.
Inf Sci (N Y) ; 623: 20-39, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36532157

ABSTRACT

The automatic segmentation of COVID-19 pneumonia from computerized tomography (CT) scans has become a major interest for scholars developing powerful diagnostic frameworks in the Internet of Medical Things (IoMT). Federated deep learning (FDL) is considered a promising approach for efficient and cooperative training from multi-institutional image data. However, non-independent and identically distributed (Non-IID) healthcare data remain a remarkable challenge that limits the applicability of FDL in the real world. The variability in features incurred by different scanning protocols, scanners, or acquisition parameters produces a learning-drift phenomenon during training, which impairs both the training speed and the segmentation performance of the model. This paper proposes a novel FDL approach for reliable and efficient multi-institutional COVID-19 segmentation, called MIC-Net. MIC-Net consists of three main building modules: the down-sampler, the context enrichment (CE) module, and the up-sampler. The down-sampler was designed to effectively learn both local and global representations from input CT scans by combining the advantages of lightweight convolutional and attention modules. The CE module is introduced to enable the network to capture contextual representations that can later be exploited to enrich the semantic knowledge of the up-sampler through skip connections. To further tackle inter-site heterogeneity within the model, the approach uses adaptive and switchable normalization (ASN) to adaptively choose the best normalization strategy according to the underlying data. A novel federated periodic selection protocol (FED-PCS) is proposed to fairly select training participants according to their resource state, data quality, and local model loss. An experimental evaluation of MIC-Net on three publicly available datasets shows robust performance, with an average Dice score of 88.90% and an average surface Dice of 87.53%.
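
The abstract does not spell out ASN's mechanics; below is a minimal sketch in the spirit of switchable normalization, where a layer learns softmax weights that blend batch, instance, and layer statistics so each site's data can favor the normalizer that fits it. The class name and all parameters are hypothetical, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveSwitchableNorm2d(nn.Module):
    """Blend batch (BN), instance (IN), and layer (LN) statistics with
    learned softmax weights; a sketch of the switchable-normalization idea."""
    def __init__(self, num_features, eps=1e-5):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(num_features))
        self.bias = nn.Parameter(torch.zeros(num_features))
        # One logit per candidate statistic (BN, IN, LN).
        self.mean_logits = nn.Parameter(torch.zeros(3))
        self.var_logits = nn.Parameter(torch.zeros(3))

    def forward(self, x):                      # x: (N, C, H, W)
        mean_in = x.mean((2, 3), keepdim=True)
        var_in = x.var((2, 3), keepdim=True, unbiased=False)
        mean_bn = x.mean((0, 2, 3), keepdim=True)
        var_bn = x.var((0, 2, 3), keepdim=True, unbiased=False)
        mean_ln = x.mean((1, 2, 3), keepdim=True)
        var_ln = x.var((1, 2, 3), keepdim=True, unbiased=False)
        wm = F.softmax(self.mean_logits, 0)
        wv = F.softmax(self.var_logits, 0)
        mean = wm[0] * mean_bn + wm[1] * mean_in + wm[2] * mean_ln
        var = wv[0] * var_bn + wv[1] * var_in + wv[2] * var_ln
        x = (x - mean) / torch.sqrt(var + self.eps)
        return x * self.weight.view(1, -1, 1, 1) + self.bias.view(1, -1, 1, 1)
```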

2.
Appl Intell (Dordr) ; : 1-17, 2023 Jan 26.
Article in English | MEDLINE | ID: mdl-36718382

ABSTRACT

Domain adaptation (DA) is a popular strategy for pattern recognition and classification tasks. It leverages a large amount of data from the source domain to help train the model applied in the target domain. Supervised domain adaptation (SDA) approaches are desirable when only a few labeled samples from the target domain are available, and they can be easily adopted in many real-world applications where data collection is expensive. In this study, we propose a new supervision signal, namely the center transfer loss (CTL), to efficiently align features under the SDA setting in the deep learning (DL) field. Unlike most previous SDA methods that rely on pairing up training samples, the proposed loss is trainable using only one-stream input based on the mini-batch strategy. The CTL exhibits two main functionalities in training that increase the performance of DL models: domain alignment and increasing the discriminative power of features. The hyper-parameter that would balance these two functionalities is waived in CTL, which is a second improvement over previous approaches. Extensive experiments on well-known public datasets show that the proposed method performs better than recent state-of-the-art approaches.
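
The abstract names the two roles of the CTL but not its formula. The sketch below is one hypothetical center-based reading of it: learnable per-domain class centers, a compactness term, and a center-alignment term, with no balancing knob. It is not the authors' exact loss.

```python
import torch
import torch.nn as nn

class CenterTransferLossSketch(nn.Module):
    """Pull each feature toward its class center (discriminative power) and
    pull the two domains' class centers together (domain alignment)."""
    def __init__(self, num_classes, feat_dim):
        super().__init__()
        self.src_centers = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.tgt_centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, feats, labels, is_target):
        # One-stream mini-batch: source and target samples arrive mixed;
        # is_target is a bool mask of shape (batch,).
        centers = torch.where(is_target.unsqueeze(1),
                              self.tgt_centers[labels],
                              self.src_centers[labels])
        compact = (feats - centers).pow(2).sum(1).mean()
        align = (self.src_centers - self.tgt_centers).pow(2).sum(1).mean()
        return compact + align          # no balance hyper-parameter
```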

3.
Expert Syst Appl ; 203: 117514, 2022 Oct 01.
Article in English | MEDLINE | ID: mdl-35607612

ABSTRACT

To prevent outbreaks of Covid-19 infection, many organizations and governments have extensively studied and applied various quarantine and isolation policies and medical treatments, and have organized massive, fast vaccination campaigns for citizens over 18. Several valuable lessons have been learned in different countries during this Covid-19 battle. These studies have demonstrated the usefulness of prompt action in testing and isolating confirmed infectious cases from the community, as well as of planning and optimizing social resources through data-driven anticipation. Recently, many studies have demonstrated the effectiveness of short- and long-term forecasting of new Covid-19 case counts in the form of time-series data. These predictions have directly supported the effective optimization of available healthcare resources and the imposition of suitable policies for slowing the spread of Covid-19, especially in highly populated cities, regions, and nations. Advances in deep neural architectures, such as the recurrent neural network (RNN), have brought significant improvements in analyzing and learning time-series datasets to produce better predictions. However, most recent RNN-based techniques are unable to handle chaotic, non-smooth sequential datasets. The consecutive disturbances and lagged observations in chaotic time series, such as daily confirmed Covid-19 cases, lead to poor temporal feature learning in recent RNN-based models. To meet this challenge, in this paper we propose a novel dual attention-based sequential auto-encoding architecture, called DAttAE. Our proposed model effectively learns from and predicts new Covid-19 cases in the form of chaotic, non-smooth time series. Specifically, the integration of a dual self-attention mechanism into a Bi-LSTM-based auto-encoder lets the model focus directly on a specific time-range sequence to achieve better predictions. We evaluated the performance of the proposed DAttAE model against multiple traditional and state-of-the-art deep learning techniques for time-series prediction on different real-world datasets. Experimental results demonstrate the effectiveness of the proposed attention-based deep neural approach compared with state-of-the-art RNN-based architectures for Covid-19 outbreak prediction from time series.
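
As a rough illustration of attention over a recurrent encoder for such series, here is a single-attention stand-in for the paper's dual mechanism; all layer names and sizes are arbitrary assumptions, not the DAttAE architecture.

```python
import torch
import torch.nn as nn

class AttentiveBiLSTMForecaster(nn.Module):
    """Bi-LSTM encoder whose hidden states are pooled by self-attention
    before forecasting the next value of a univariate series."""
    def __init__(self, hidden=64):
        super().__init__()
        self.encoder = nn.LSTM(1, hidden, batch_first=True, bidirectional=True)
        self.score = nn.Linear(2 * hidden, 1)   # attention energy per step
        self.head = nn.Linear(2 * hidden, 1)    # next-value regressor

    def forward(self, x):                       # x: (batch, time, 1)
        h, _ = self.encoder(x)                  # (batch, time, 2*hidden)
        a = torch.softmax(self.score(h), dim=1) # focus on informative steps
        context = (a * h).sum(dim=1)            # attention-weighted summary
        return self.head(context)               # predicted next case count
```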

4.
Sensors (Basel) ; 20(24)2020 Dec 12.
Article in English | MEDLINE | ID: mdl-33322723

ABSTRACT

Although biometric systems using the electrocardiogram (ECG) have been actively researched, the morphological features of the ECG signal are measured differently depending on the measurement environment. In general, the post-exercise ECG does not match the morphological features of the pre-exercise ECG because of temporary tachycardia, which can degrade user recognition performance. Although normalization studies have been conducted to match the post- and pre-exercise ECG, limitations related to the distortion of the morphological features, the P wave, QRS complex, and T wave, often arise. In this paper, we propose a method for matching pre- and post-exercise ECG cycles based on time- and frequency-fusion normalization that takes morphological features into account, and for classifying users with high performance using an optimized system. One cycle of the post-exercise ECG is expanded by linear interpolation and filtered at an optimized frequency through the fusion normalization method, whose aim is to match one post-exercise ECG cycle to one pre-exercise ECG cycle. The experimental results show that the average similarity between the pre- and post-exercise states improves by 25.6% after normalization, over 30 ECG cycles. Additionally, the normalization algorithm improves the maximum user recognition performance from 96.4% to 98%.
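
The time-normalization half of the method is straightforward to sketch: resample the shorter (tachycardic) cycle onto the resting cycle's length by linear interpolation. The synthetic cycle below illustrates shapes only; it is not real ECG data, and the frequency-filtering half is omitted.

```python
import numpy as np

def expand_cycle(post_cycle, target_len):
    """Linearly interpolate one post-exercise ECG cycle onto the (longer)
    pre-exercise cycle length."""
    old_t = np.linspace(0.0, 1.0, len(post_cycle))
    new_t = np.linspace(0.0, 1.0, target_len)
    return np.interp(new_t, old_t, post_cycle)

post = np.sin(np.linspace(0, 2 * np.pi, 180))   # tachycardic: fewer samples
pre_len = 250                                   # resting cycle length
matched = expand_cycle(post, pre_len)
assert matched.shape == (pre_len,)
```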


Subject(s)
Electrocardiography, Exercise Test, Algorithms, Cardiac Arrhythmias, Biometry, Humans, Computer-Assisted Signal Processing
5.
Sensors (Basel) ; 18(7)2018 Jul 02.
Article in English | MEDLINE | ID: mdl-30004417

ABSTRACT

In settings wherein discussion topics are not statically assigned, such as in microblogs, a need exists for identifying and separating the topics of a given event. We approach the problem by using a novel type of similarity, calculated between the major terms used in posts. The occurrences of such terms are periodically sampled from the post stream. The generated temporal series are processed using marker-based stigmergy, a biologically inspired mechanism performing scalar and temporal information aggregation. More precisely, each sample of the series generates a functional structure, called a mark, associated with some concentration. The concentrations disperse in a scalar space and evaporate over time. Multiple deposits, made when samples are close in terms of time instants and values, aggregate into a trail that then persists longer than an isolated mark. To measure the similarity between time series, the Jaccard similarity coefficient between trails is calculated. Discussion topics are generated from this similarity measure in a clustering process using Self-Organizing Maps, and are represented via a colored term cloud. Structural parameters are properly tuned via an adaptation mechanism based on Differential Evolution. Experiments are completed for a real-world scenario, and the resulting similarity is compared with Dynamic Time Warping (DTW) similarity.
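
A toy rendering of the mark/trail mechanism may help: each sample deposits a small triangular mark on a scalar grid, the whole trail evaporates between deposits, and trail similarity is the generalized Jaccard coefficient. Grid size, mark width, and evaporation rate here are illustrative guesses, not the paper's tuned parameters.

```python
import numpy as np

def trail(samples, space=100, width=3, evap=0.9):
    """Deposit a triangular mark per sample (values scaled to [0, 1]) on a
    scalar grid, with evaporation between deposits; close samples reinforce."""
    grid = np.zeros(space)
    mark = np.concatenate([np.linspace(0, 1, width),
                           np.linspace(1, 0, width)[1:]])
    for s in samples:
        grid *= evap                        # temporal evaporation
        c = int(s * (space - 1))
        lo, hi = max(c - width + 1, 0), min(c + width, space)
        grid[lo:hi] += mark[width - 1 - (c - lo): width - 1 + (hi - c)]
    return grid

def jaccard(t1, t2):
    """Generalized Jaccard similarity between two non-negative trails."""
    return np.minimum(t1, t2).sum() / np.maximum(t1, t2).sum()

t_a = trail([0.2, 0.22, 0.21, 0.8])
t_b = trail([0.2, 0.19, 0.23, 0.5])
print(jaccard(t_a, t_b))   # overlap of the persistent trails
```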


Subject(s)
Blogging, Cluster Analysis, Social Media, Algorithms, Biomimetics
6.
Biomed Eng Online ; 15: 7, 2016 Jan 15.
Article in English | MEDLINE | ID: mdl-26772751

ABSTRACT

BACKGROUND: The heartbeat is a fundamental cardiac activity that can be straightforwardly detected with a variety of measurement techniques for analyzing physiological signals. Unfortunately, unexpected noise or contaminated signals can distort or cut out electrocardiogram (ECG) signals in practice, leading heartbeat detectors to report a false heart rate or, in the worst case, to suspend operation for a considerable length of time. To deal with the problem of unreliable heartbeat detection, PhysioNet/CinC organized a challenge in 2014 for developing robust heartbeat detectors using multimodal signals. METHODS: This article proposes a multimodal data association method that supplements ECG as the primary input signal with blood pressure (BP) and electroencephalogram (EEG) as complementary input signals when input signals are unreliable. If the current signal quality index (SQI) qualifies ECG as a reliable input signal, our method applies QRS detection to the ECG and reports heartbeats. Otherwise, the method selects the best supplementary input signal between BP and EEG after evaluating the current SQI of BP. When BP is chosen as the supplementary input signal, our association model between ECG and BP enables us to compute their regular intervals, detect characteristic BP signals, and estimate the locations of the heartbeats. When neither ECG nor BP qualifies, our fusion method resorts to the association model between ECG and EEG, which allows us to apply an adaptive filter to ECG and EEG, extract the QRS candidates, and report heartbeats. RESULTS: The proposed method achieved an overall score of 86.26% on the test data when the input signals were unreliable, outperforming the traditional method, which achieved 79.28% using the QRS and BP detectors from PhysioNet. Our multimodal signal processing method outperforms the conventional unimodal method of taking ECG signals alone on both the training and test data sets. CONCLUSIONS: To detect the heartbeat robustly, we have proposed a novel multimodal data association method that supplements ECG with a variety of physiological signals and accounts for the patient-specific lag between different pulsatile signals and the ECG. Multimodal signal detectors and data-fusion approaches such as those proposed in this article can reduce false alarms and improve patient monitoring.
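
The selection logic reads as a simple quality-gated cascade. A schematic sketch follows, in which every detector callable and the SQI threshold are hypothetical placeholders, not PhysioNet code.

```python
def detect_heartbeats(ecg, bp, eeg, sqi, qrs_detect, bp_detect, fuse_ecg_eeg,
                      threshold=0.8):
    """SQI-gated cascade over ECG, BP, and EEG. All callables are stubs:
    sqi(signal) -> float in [0, 1]; detectors return beat annotations."""
    if sqi(ecg) >= threshold:        # ECG trustworthy: plain QRS detection
        return qrs_detect(ecg)
    if sqi(bp) >= threshold:         # fall back to BP pulses, shifted by the
        return bp_detect(bp)         # patient-specific lag
    return fuse_ecg_eeg(ecg, eeg)    # last resort: adaptive-filter fusion
```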


Subject(s)
Blood Pressure, Electroencephalography, Heart/physiology, Statistical Models, Computer-Assisted Signal Processing, Adult, Algorithms, Humans, Time Factors
7.
Int J Mol Sci ; 17(5)2016 May 18.
Article in English | MEDLINE | ID: mdl-27213346

ABSTRACT

Microbial fuel cells (MFCs) are envisioned as one of the most promising alternative renewable energy sources because they can generate electric current continuously while treating waste. Terrestrial microbial fuel cells (TMFCs) can be inoculated with and operate on soil, which further extends the application areas of MFCs. Energy supply, a primary factor determining the lifetime of wireless sensor network (WSN) nodes, remains an open challenge in sensor networks. In theory, sensor nodes powered by MFCs have an unlimited lifetime. However, the low power density and high internal resistance of MFCs are two pronounced problems in their operation. We designed and experimented with a single-hop WSN powered by a TMFC. The power generation performance of the proposed TMFC, and its relationships to environment temperature and the water content of the soil by weight, were measured experimentally. Results show that the TMFC can achieve good power generation performance under suitable environmental conditions. Furthermore, experiments on sensor data acquisition and wireless transmission with the TMFC powering the WSN were carried out. The obtained experimental results validate the feasibility of TMFCs powering WSNs.
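
To see why low power density forces duty cycling in such a node, consider a back-of-envelope energy balance. Every number below is an illustrative assumption, not a measurement from the paper.

```python
# Duty-cycle budget for a hypothetical MFC-powered node.
harvest_mw = 0.2      # assumed steady TMFC output into the harvester
sleep_mw = 0.01       # node sleep draw
active_mw = 60.0      # sense + radio transmit burst
burst_s = 0.5         # seconds per transmission

# Energy balance over period T (burst << T):
#   harvest*T >= sleep*T + active*burst  =>  T >= active*burst / (harvest - sleep)
period_s = active_mw * burst_s / (harvest_mw - sleep_mw)
print(f"one transmission every {period_s / 60:.1f} min sustains the node")
# ~2.6 min under these assumed figures
```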


Subject(s)
Bioelectric Energy Sources/microbiology, Environmental Biodegradation, Wireless Technology
8.
ScientificWorldJournal ; 2014: 713490, 2014.
Article in English | MEDLINE | ID: mdl-24587746

ABSTRACT

Although Particle Swarm Optimization (PSO) has demonstrated competitive performance in solving global optimization problems, it exhibits some limitations when dealing with optimization problems of high dimensionality and complex landscapes. In this paper, we integrate problem-oriented knowledge into the design of a PSO variant. The resulting novel PSO algorithm with an inner variable learning strategy (PSO-IVL) is particularly efficient for optimizing functions with symmetric variables, whose symmetric variables must satisfy a certain quantitative relation. Based on this knowledge, the inner variable learning (IVL) strategy helps the particle inspect the relations among its inner variables, determine the exemplar variable for all other variables, and then make each variable learn from the exemplar variable in terms of their quantitative relations. In addition, we design a new trap detection and escape strategy to help particles escape from local optima. The trap detection operation is employed at the level of individual particles, whereas the trap escape strategy is adaptive in nature. Experimental simulations completed for some representative optimization functions demonstrate the excellent performance of PSO-IVL. The effectiveness of PSO-IVL underscores the usefulness of augmenting evolutionary algorithms with problem-oriented domain knowledge.
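
For intuition, here is a sketch of one IVL step in the special case where the quantitative relation among symmetric variables is plain equality; the greedy exemplar choice and the learning rate are illustrative simplifications, not the paper's operator.

```python
import numpy as np

def inner_variable_learning(x, f, rate=0.5):
    """For a fully symmetric objective f, pick as exemplar the coordinate
    value that, copied to all positions, scores best, then move every
    variable toward it (the equality relation at the optimum)."""
    trials = [f(np.full_like(x, v)) for v in x]
    exemplar = x[int(np.argmin(trials))]
    return x + rate * (exemplar - x)

# Example on the (symmetric) sphere function:
sphere = lambda z: float(np.sum(z ** 2))
x = np.array([3.0, -1.0, 0.5])
print(inner_variable_learning(x, sphere))   # coordinates pulled toward 0.5
```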


Subject(s)
Algorithms, Artificial Intelligence, Theoretical Models, Automated Pattern Recognition/methods, Computer Simulation
9.
IEEE Trans Cybern ; 54(1): 519-532, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37030830

ABSTRACT

Information granularity and information granules are fundamental concepts that permeate the entire area of granular computing. In this regard, the principle of justifiable granularity was proposed by Pedrycz, and a general two-phase framework for designing information granules based on Fuzzy C-means clustering was subsequently developed. This design process leads to information granules that are likely to intersect each other in substantially overlapping clusters, which inevitably introduces ambiguity and misperception as well as a loss of the semantic clarity of information granules. This limitation is largely due to the imprecise description of boundary-overlapping data in existing algorithms. To address this issue, rough k-means clustering is introduced in an innovative way into Pedrycz's two-phase information granulation framework, together with a proposed local boundary fuzzy metric. To further strengthen the support and inhibition characteristics of boundary-overlapping data, an augmented parametric version of the principle is refined. On this basis, a local boundary fuzzified rough k-means-based information granulation algorithm is developed. In this manner, the generated granules are unique and representative while ensuring clearer boundaries. The validity and performance of the algorithm are demonstrated through the results of comparative experiments.
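
For readers unfamiliar with rough k-means, a generic sketch in the style of Lingras and West follows (not the paper's boundary-fuzzified variant): points whose two nearest centroids are nearly equidistant populate the boundary of both clusters, and centroids are weighted mixes of lower-approximation and boundary means. The weights and ratio threshold are illustrative.

```python
import numpy as np

def rough_kmeans(X, k, wl=0.7, wb=0.3, ratio=1.3, iters=20, seed=0):
    """Rough k-means sketch: lower approximation vs. boundary membership
    decided by the distance ratio of the two nearest centroids (k >= 2)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)].astype(float)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)   # (n, k)
        order = np.argsort(d, axis=1)
        best, second = order[:, 0], order[:, 1]
        idx = np.arange(len(X))
        in_boundary = d[idx, second] < ratio * d[idx, best]
        for j in range(k):
            lower = X[(best == j) & ~in_boundary]
            bound = X[in_boundary & ((best == j) | (second == j))]
            if len(lower) and len(bound):
                centers[j] = wl * lower.mean(0) + wb * bound.mean(0)
            elif len(lower):
                centers[j] = lower.mean(0)
            elif len(bound):
                centers[j] = bound.mean(0)
    return centers
```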

10.
IEEE Trans Cybern ; 54(1): 533-545, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37018706

ABSTRACT

Thanks to efficient retrieval speed and low storage consumption, learning to hash has been widely used in visual retrieval tasks. However, existing hashing methods assume that the query and retrieval samples lie in a homogeneous feature space within the same domain; as a result, they cannot be directly applied to heterogeneous cross-domain retrieval. In this article, we formulate a generalized image transfer retrieval (GITR) problem, which encounters two crucial bottlenecks: 1) the query and retrieval samples may come from different domains, leading to an inevitable domain distribution gap, and 2) the features of the two domains may be heterogeneous or misaligned, bringing up an additional feature gap. To address the GITR problem, we propose an asymmetric transfer hashing (ATH) framework with unsupervised, semi-supervised, and supervised realizations. Specifically, ATH characterizes the domain distribution gap by the discrepancy between two asymmetric hash functions and minimizes the feature gap with the help of a novel adaptive bipartite graph constructed on cross-domain data. By jointly optimizing the asymmetric hash functions and the bipartite graph, knowledge transfer can be achieved while the information loss caused by feature alignment is avoided. Meanwhile, to alleviate negative transfer, the intrinsic geometrical structure of single-domain data is preserved by involving a domain affinity graph. Extensive experiments on both single-domain and cross-domain benchmarks under different GITR subtasks indicate the superiority of our ATH method in comparison with state-of-the-art hashing methods.

11.
Article in English | MEDLINE | ID: mdl-38896511

ABSTRACT

Unsupervised feature selection (UFS) aims to learn an indicator matrix relying on some characteristics of the high-dimensional data to identify the features to be selected. However, traditional unsupervised methods operate only at the feature level, i.e., they directly select useful features by feature ranking. Such methods do not pay any attention to interaction information from other tasks such as classification, which severely degrades their feature selection performance. In this article, we propose a UFS method that also takes the classification level into account and selects features that perform well in both clustering and classification. To achieve this, we design a bi-level spectral feature selection (BLSFS) method that combines the classification level and the feature level. More concretely, at the classification level, we first apply spectral clustering to generate pseudolabels and then train a linear classifier to obtain the optimal regression matrix. At the feature level, we select useful features by maintaining the intrinsic structure of the data in the embedding space with the regression matrix learned at the classification level, which in turn guides classifier training. We utilize a balancing parameter to seamlessly bridge the classification and feature levels into a unified framework. A series of experiments on 12 benchmark datasets demonstrates the superiority of BLSFS in both clustering and classification performance.
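
A one-pass sketch of the information flow described above: spectral pseudolabels, a linear classifier, then feature ranking by the classifier's weight norms. The paper couples the two levels in a single optimization, which this pipeline deliberately omits, and the classifier choice here is an assumption.

```python
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.linear_model import RidgeClassifier

def blsfs_sketch(X, n_clusters, n_features):
    """Pseudolabels from spectral clustering -> linear fit -> rank features
    by the column norms of the learned weight matrix."""
    pseudo = SpectralClustering(n_clusters=n_clusters,
                                random_state=0).fit_predict(X)
    clf = RidgeClassifier().fit(X, pseudo)
    W = np.atleast_2d(clf.coef_)            # (classes, features)
    scores = np.linalg.norm(W, axis=0)      # importance per feature
    return np.argsort(scores)[::-1][:n_features]
```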

12.
IEEE Trans Cybern ; PP, 2024 Feb 28.
Article in English | MEDLINE | ID: mdl-38416627

ABSTRACT

A novel fuzzy adaptive knowledge-based inference neural network (FAKINN) is proposed in this study. Conventional fuzzy cluster-based neural networks (FCBNNs) suffer from the challenge of directly extracting fuzzy rules that can capture and represent interclass heterogeneity and intraclass homogeneity when the data possess complex structures. Moreover, the capability of the cluster-based rule generator in FCBNNs may decrease as data dimensionality increases. These drawbacks impede the generation of desired fuzzy rules and affect the inference results that depend on them, thereby limiting generalization ability. To address these drawbacks, an adaptive knowledge generator (AKG), consisting of an observation paradigm (OP) and a clustering strategy (CS), is designed to improve the generalization ability of FAKINN. The OP distills characteristic information (CI) from data to highlight the homogeneity and heterogeneity of objects, and the CS, viz., the weighted condition-driven fuzzy clustering method (WCFCM), is proposed to summarize the CI and construct fuzzy rules. Moreover, the feedback between the OP and CS can control the dimensionality of the CI, which endows FAKINN with the potential to tackle high-dimensional data. The main originality of the study lies in the AKG and WCFCM, which are proposed to develop the structural design methodology of FNNs. The performance of FAKINN is evaluated on various benchmarks against 27 comparative methods, and two real-world problems are adopted to validate its effectiveness. Experimental results show that FAKINN outperforms the comparison methods.

13.
IEEE Trans Biomed Eng ; 71(5): 1587-1598, 2024 May.
Article in English | MEDLINE | ID: mdl-38113159

ABSTRACT

OBJECTIVE: The convolutional neural network (CNN), a classical structure in deep learning, has been commonly deployed in motor imagery brain-computer interfaces (MIBCIs). Many methods have been proposed to evaluate the vulnerability of such CNN models, primarily by attacking them using direct temporal perturbations. In this work, we propose a novel attacking approach based on perturbations in the frequency domain instead. METHODS: For a given natural MI trial in the frequency domain, the proposed approach, called the frequency domain channel-wise attack (FDCA), generates perturbations at each channel one after another to fool CNN classifiers. The advantages of this strategy are twofold. First, instead of focusing on the temporal domain, perturbations are generated in the frequency domain, where discriminative patterns can be extracted for motor imagery (MI) classification tasks. Second, the perturbation optimization is performed with a differential evolution algorithm in a black-box scenario where detailed model knowledge is not required. RESULTS: Experimental results demonstrate the effectiveness of the proposed FDCA, which achieves a significantly higher success rate than the baselines and existing methods in attacking three major CNN classifiers on four public MI benchmarks. CONCLUSION: Perturbations generated in the frequency domain yield highly competitive results in attacking CNN-based MIBCIs even in a black-box setting, where the model information is well protected. SIGNIFICANCE: To the best of our knowledge, existing MIBCI attack approaches are all gradient-based methods and require details of the victim model, e.g., its parameters and objective function. We provide a more flexible strategy that does not require model details but still produces an effective attack outcome.
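
A schematic of the channel-wise, black-box idea: perturb a handful of FFT magnitude bins of one channel and let SciPy's differential evolution minimize the victim classifier's confidence in the true label. The `predict_proba` callable, bin count, and bounds are hypothetical; this is not the paper's FDCA implementation.

```python
import numpy as np
from scipy.optimize import differential_evolution

def attack_channel(trial, ch, true_label, predict_proba, n_bins=8, eps=0.5):
    """One channel-wise step: trial is (channels, time); predict_proba(x)
    returns class probabilities and is the only access to the victim model."""
    spec = np.fft.rfft(trial[ch])

    def loss(delta):
        s = spec.copy()
        s[:n_bins] *= 1.0 + delta            # scale low-frequency magnitudes
        x = trial.copy()
        x[ch] = np.fft.irfft(s, n=trial.shape[1])
        return predict_proba(x)[true_label]  # confidence to be driven down

    res = differential_evolution(loss, [(-eps, eps)] * n_bins,
                                 maxiter=20, popsize=10, seed=0)
    adv = trial.copy()
    spec[:n_bins] *= 1.0 + res.x
    adv[ch] = np.fft.irfft(spec, n=trial.shape[1])
    return adv
```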


Subject(s)
Algorithms, Brain-Computer Interfaces, Imagination, Neural Networks (Computer), Humans, Imagination/physiology, Computer Security, Computer-Assisted Signal Processing
14.
ScientificWorldJournal ; 2013: 172193, 2013.
Article in English | MEDLINE | ID: mdl-24250256

ABSTRACT

Discovering and utilizing problem domain knowledge is a promising direction for improving the efficiency of evolutionary algorithms (EAs) when solving optimization problems. We propose a knowledge-based variable reduction strategy (VRS) that can be integrated into EAs to solve unconstrained optimization functions with first-order derivatives more efficiently. VRS originates from the knowledge that, for such a function, the optimal solution is located at an extreme point at which the partial derivative with respect to each variable equals zero. Through this collection of partial derivative equations, quantitative relations among different variables can be obtained, and these relations must be satisfied at the optimal solution. Using such relations, VRS can reduce the number of variables and shrink the solution space when an EA is applied to the optimization function, thus improving optimization speed and quality. To apply VRS to an optimization problem, we only need to modify the calculation of the objective function, so in practice it can be integrated with any EA. In this study, VRS is combined with particle swarm optimization variants and tested on several benchmark optimization functions and a real-world optimization problem. Computational results and a comparative study demonstrate the effectiveness of VRS.
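
A worked micro-example of the reduction, using sympy on a two-variable function (not one of the paper's benchmarks): one stationarity equation ties the variables together, so the EA searches a 1-D space instead of a 2-D one.

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 + 2*y**2 + 2*x*y + x

# At any extreme point, df/dy = 4*y + 2*x = 0  =>  y = -x/2:
# one variable is eliminated through the relation.
relation = sp.solve(sp.diff(f, y), y)[0]
reduced = sp.simplify(f.subs(y, relation))   # objective in x alone
print(relation, reduced)                     # -x/2, x**2/2 + x

# The minimizer of the reduced function (x = -1) recovers y = 1/2
# through the relation, matching the stationary point of f.
```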


Subject(s)
Algorithms, Problem Solving, Artificial Intelligence, Computer Simulation, Software
15.
IEEE Trans Cybern ; 53(9): 5414-5423, 2023 Sep.
Article in English | MEDLINE | ID: mdl-35427227

ABSTRACT

A visible trend in representing knowledge through information granules manifests itself in the development of information granules of higher type and higher order, in particular type-2 fuzzy sets and order-2 fuzzy sets. All these constructs are aimed at formalizing and processing data at a certain level of abstraction. Along the same line, recent years have seen intensive developments in fuzzy clustering, which is not surprising in light of the growing impact of clustering on the fundamentals of fuzzy sets (as supporting ways to elicit membership functions) as well as on algorithms (in which clustering and clusters form an integral functional component of various fuzzy models). In this study, we investigate order-2 information granules (fuzzy sets) by analyzing their formal description and properties to cope with structural and hierarchically organized concepts emerging from data. The design of order-2 information granules on the basis of available experimental evidence is discussed, and a way of expressing the similarity (resemblance) of two order-2 information granules by engaging a semantically oriented distance is presented. In the sequel, the study delivers highly original contributions in the realm of order-2 clustering algorithms. Formally, the clustering problem under discussion is posed as follows: given a finite collection of reference information granules, determine a structure in data defined over the space of such granules. Conceptually, this makes a radical shift in comparison with data defined in the p-dimensional space of real numbers R^p. In this situation, expressing the distance between two data points deserves prudent treatment so that the distance properly captures the semantics and, consequently, the closeness between any two information granules in cluster formation. Following the proposal of the semantically guided distance (and its ensuing design process), we develop an order-2 variant of Fuzzy C-Means (FCM), discuss its detailed algorithmic steps, and deliver an interpretation of the obtained clustering results. Several relevant applied scenarios of order-2 FCM are identified for spatially and temporally distributed data, which deliver interesting motivating arguments and underline the practical relevance of this category of clustering. Experimental studies are provided to further elicit the performance of the clustering method and discuss essential ways of interpreting the results.
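
As a flavor of why distance design matters once data are granules rather than points in R^p, consider two interval granules with the same center but different specificity; a center-only distance (as in plain FCM over R^p) cannot tell them apart, whereas a bounds-aware one can. This tiny comparison is illustrative only, not the paper's semantically guided distance.

```python
# Two granules with the same center but very different granularity.
wide, narrow = (0.0, 10.0), (4.0, 6.0)
center = lambda g: (g[0] + g[1]) / 2

print(abs(center(wide) - center(narrow)))                   # 0.0: width ignored
print(abs(wide[0] - narrow[0]) + abs(wide[1] - narrow[1]))  # 8.0: bounds-aware
```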

16.
IEEE Trans Pattern Anal Mach Intell ; 45(2): 2054-2070, 2023 Feb.
Article in English | MEDLINE | ID: mdl-35420983

ABSTRACT

In artificial intelligence systems, the question of how to express uncertainty in knowledge remains open. The negation scheme provides a new perspective on this issue. In this paper, we study quantum decisions from the negation perspective. Specifically, complex evidence theory (CET) is considered effective for expressing and handling uncertain information in a complex plane. We therefore first express CET in the quantum framework of Hilbert space. On this basis, a generalized negation method is proposed for quantum basic belief assignment (QBBA), called QBBA negation. In addition, a QBBA entropy is revisited to study the QBBA negation process and reveal the variation tendency of the negation iteration. Meanwhile, the properties of the QBBA negation function are analyzed and discussed along with special cases. Then, several multisource quantum information fusion (MSQIF) algorithms are designed to support decision making. Finally, these MSQIF algorithms are applied to pattern classification to demonstrate their effectiveness. This is the first work to design MSQIF algorithms supporting quantum decision making from the new perspective of "negation", which provides promising solutions for knowledge representation, uncertainty measurement, and the fusion of quantum information.

17.
IEEE Trans Cybern ; 53(1): 303-314, 2023 Jan.
Article in English | MEDLINE | ID: mdl-34347619

ABSTRACT

Under the assumption of rational economics, the opinions of decision makers should exhibit some transitivity properties, and how to measure the transitivity of the provided preference relations over a set of alternatives is an important issue. In this study, we report methods for measuring the weak consistency (w-consistency) and weak transitivity (w-transitivity) of pairwise comparison matrices (PCMs) originating from the analytic hierarchy process (AHP). First, some interesting properties of PCMs with w-consistency and w-transitivity are studied. Second, novel methods are proposed to construct quantification indices of the w-consistency and w-transitivity of PCMs, respectively, and comparisons with existing methods illustrate the novelty of the proposed ones. Third, an optimization model is put forward to modify a PCM without any transitivity property into a new one with w-consistency or w-transitivity; the particle swarm optimization (PSO) algorithm is adopted to solve the resulting nonlinear optimization problems. A novel decision-making model is established by considering w-transitivity as the minimum requirement. Numerical examples are carried out to illustrate the developed methods and models. It is observed that the proposed indices can be computed efficiently and reflect the inherent relations of the entries in a PCM with w-consistency and w-transitivity, respectively.
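
A small checker may make the property concrete. It tests one common reading of weak transitivity for a multiplicative PCM: whenever i is preferred to j (a_ij >= 1) and j to k (a_jk >= 1), i must be preferred to k (a_ik >= 1). The paper's quantification indices refine this yes/no test into degrees; the definition used here is an assumption about which variant the paper means.

```python
import numpy as np

def is_weakly_transitive(A, tol=1e-9):
    """Yes/no weak-transitivity test for a reciprocal PCM A."""
    n = A.shape[0]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                if (A[i, j] >= 1 - tol and A[j, k] >= 1 - tol
                        and A[i, k] < 1 - tol):
                    return False
    return True

# 3-alternative example with a preference cycle (not weakly transitive):
A = np.array([[1.0, 2.0, 0.5],
              [0.5, 1.0, 3.0],
              [2.0, 1/3, 1.0]])
print(is_weakly_transitive(A))   # False: 1 beats 2, 2 beats 3, but 3 beats 1
```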

18.
IEEE Trans Cybern ; 53(5): 2899-2913, 2023 May.
Article in English | MEDLINE | ID: mdl-34767519

ABSTRACT

Recently, granular models have been highlighted in system modeling and applied to many fields, since their outcomes are information granules supporting human-centric comprehension and reasoning. In this study, a design method for granular models driven by hyper-box iteration granulation is proposed. The method consists mainly of the partition of the input space, the formation of input hyper-box information granules with confidence levels, and the granulation of the output data corresponding to the input hyper-box information granules. The formation of input hyper-box information granules is realized by running the hyper-box iteration granulation algorithm, governed by information granularity, on the input space, and the granulation of the output data corresponding to the input hyper-box granules is completed by the improved principle of justifiable granularity to produce triangular fuzzy information granules. Compared with existing granular models, the resulting model yields more accurate numeric outcomes and preferable granular outcomes simultaneously. Experiments on synthetic and publicly available datasets demonstrate the superiority of the granular model designed by the proposed method at both the granular and numeric levels. The impact of the parameters involved in the proposed design method on the performance of the ensuing granular model is also explored.
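
At the core of this design is the principle of justifiable granularity: a granule should cover much data (coverage) while staying tight (specificity). A 1-D sketch of that trade-off follows; the exponential specificity function, the exhaustive bound search, and alpha are illustrative choices, not the paper's improved principle over hyper-boxes.

```python
import numpy as np

def justifiable_interval(data, alpha=1.0):
    """Grow an interval around the median so that
    coverage (fraction inside) * specificity (exp(-alpha * length)) is max."""
    med = np.median(data)
    best, best_v = (med, med), -np.inf
    for b in np.unique(data[data >= med]):       # candidate upper bounds
        for a in np.unique(data[data <= med]):   # candidate lower bounds
            cov = np.mean((data >= a) & (data <= b))
            spec = np.exp(-alpha * (b - a))
            if cov * spec > best_v:
                best, best_v = (a, b), cov * spec
    return best

rng = np.random.default_rng(1)
print(justifiable_interval(rng.normal(0, 1, 200)))
```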

19.
IEEE Trans Cybern ; 53(3): 1790-1801, 2023 Mar.
Article in English | MEDLINE | ID: mdl-34936563

ABSTRACT

Designing effective and efficient classifiers is a challenging task given that data may exhibit different geometric structures and complex intrarelationships may exist within the data. As a fundamental component of granular computing, information granules play a key role in human cognition. It is therefore of great interest to develop classifiers based on information granules, so that highly interpretable, human-centric models with higher accuracy can be constructed. In this study, we elaborate a novel design methodology for granular classifiers in which information granules play a fundamental role. First, information granules are formed on the basis of labeled patterns following the principle of justifiable granularity, with the diversity of the samples embraced by each information granule quantified and controlled in terms of an entropy criterion. This design implies that the information granules constructed in this way form sound homogeneous descriptors characterizing the structure and diversity of the available experimental data. Next, granular classifiers are built from the formed information granules: the classification result for any input instance is determined by summing the contents of the related information granules weighted by membership degrees. Experiments on both synthetic data and publicly available datasets demonstrate that the proposed models exhibit better prediction abilities than some commonly encountered classifiers (namely, linear regression, support vector machines, naïve Bayes, decision trees, and neural networks) and come with enhanced interpretability.
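
The prediction rule in the last step admits a compact sketch. Granules are assumed here to be (center, spread, class-histogram) triples with Gaussian membership; the paper's granule construction via justifiable granularity and entropy control is omitted, so this shows the voting rule only.

```python
import numpy as np

def granular_predict(x, granules):
    """Sum each granule's class content weighted by the membership of x in
    that granule; return the winning class index."""
    votes = None
    for center, spread, hist in granules:
        mu = np.exp(-np.sum((x - center) ** 2) / (2 * spread ** 2))
        votes = mu * hist if votes is None else votes + mu * hist
    return int(np.argmax(votes))
```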

20.
IEEE Trans Cybern ; 53(3): 1905-1919, 2023 Mar.
Article in English | MEDLINE | ID: mdl-35486566

ABSTRACT

This article proposes a new multiattribute group decision-making (MAGDM) method with probabilistic linguistic information that considers three aspects: the allocation of ignorance information, the realization of group consensus, and the aggregation of assessments. To allocate ignorance information, an optimization model based on minimizing the distances among experts is developed. To measure the degree of consensus, a consensus index that considers the information granules of linguistic terms (LTs) is defined. On this basis, a suitable optimization model is established to realize group consensus adaptively by optimizing the allocation of the information granules of LTs with the particle swarm optimization (PSO) algorithm. With the objective of reducing information loss during the aggregation phase, the process of generating comprehensive assessments of alternatives with the evidential reasoning (ER) algorithm is presented. A new method is thus developed based on the adaptive consensus reaching (ACR) model and the ER algorithm. Finally, the applicability of the proposed method is demonstrated by solving a selection problem for a financial technology company, and comparative analyses show the advantages of the proposed method.
