ABSTRACT
Acoustic echo cancellers (AECs) play a crucial role in IoT applications such as voice-controlled appliances, hands-free telephony, and intelligent voice-control devices. Because these IoT devices are mostly operated by voice commands, their performance is significantly degraded by echo noise in real acoustic environments. Although conventional adaptive filtering based on gradient optimization algorithms achieves good echo noise reduction, bio-inspired algorithms have recently attracted significant attention in the scientific community, since they exhibit a faster convergence rate than gradient optimization algorithms. To date, several authors have tried to develop high-performance AEC systems that offer high-quality, realistic sound. In this work, we present a new AEC system based on the grey wolf optimization (GWO) and particle swarm optimization (PSO) algorithms that guarantees a higher convergence speed than previously reported solutions. This improvement potentially allows for high tracking capabilities, which is especially relevant in real acoustic environments since it determines the rate at which echo noise is reduced.
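The gradient-free update at the heart of such a system can be sketched as follows: a global-best PSO searches directly for the FIR echo-path weights that minimize the residual echo power. This is a minimal illustration, not the authors' GWO/PSO hybrid; the parameter values (inertia 0.7, acceleration coefficients 1.5) are conventional defaults, not taken from the paper.

```python
import random

def pso_fir(x, d, taps, particles=20, iters=150, seed=0):
    """Estimate FIR echo-path weights by minimizing the mean squared
    residual echo with a basic global-best PSO (illustrative sketch)."""
    rng = random.Random(seed)

    def mse(w):
        # mean squared error between desired signal d and filter output
        err = 0.0
        for i in range(len(d)):
            y = sum(w[k] * x[i - k] for k in range(taps) if i - k >= 0)
            err += (d[i] - y) ** 2
        return err / len(d)

    pos = [[rng.uniform(-1, 1) for _ in range(taps)] for _ in range(particles)]
    vel = [[0.0] * taps for _ in range(particles)]
    pbest = [p[:] for p in pos]
    pcost = [mse(p) for p in pos]
    g_i = pcost.index(min(pcost))
    gbest, gcost = pbest[g_i][:], pcost[g_i]

    for _ in range(iters):
        for i in range(particles):
            for k in range(taps):
                vel[i][k] = (0.7 * vel[i][k]
                             + 1.5 * rng.random() * (pbest[i][k] - pos[i][k])
                             + 1.5 * rng.random() * (gbest[k] - pos[i][k]))
                pos[i][k] += vel[i][k]
            c = mse(pos[i])
            if c < pcost[i]:
                pcost[i], pbest[i] = c, pos[i][:]
                if c < gcost:
                    gcost, gbest = c, pos[i][:]
    return gbest
```

In an actual AEC the desired signal would be the microphone signal and the optimizer would run online; here the swarm simply recovers a known two-tap echo path.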
ABSTRACT
Open- or short-circuit faults, as well as discrete parameter faults, are the most commonly used models in the simulation-before-test methodology. However, since analog circuits exhibit continuous responses to input signals, faults in specific circuit elements may not fully capture all potential component faults. Consequently, diagnosing faults in analog circuits requires three key aspects: identifying faulty components, determining faulty element values, and considering circuit tolerance constraints. To tackle this problem, a methodology for fault diagnosis using swarm intelligence is proposed and implemented. The investigated optimization techniques are Particle Swarm Optimization (PSO) and the Bat Algorithm (BA). In this methodology, the nonlinear equations of the circuit under test are employed to calculate its parameters. The primary objective is to identify the specific circuit component that could potentially exhibit the fault by comparing the responses obtained from the actual circuit with the responses obtained through the optimization process. Two circuits are used as case studies to evaluate the performance of the proposed methodologies: the Tow-Thomas biquad filter (case study 1) and the Butterworth filter (case study 2). The proposed methodologies are able to identify, or at least reduce the number of, possible faulty components. Four main performance metrics are extracted: accuracy, precision, sensitivity, and specificity. The BA technique demonstrates superior performance when utilizing the maximum combination of accessible nodes in the circuit under test, with an average accuracy of 95.5%, while PSO achieved only 93.9%. Additionally, the BA technique outperforms in terms of execution time, with an average reduction of 7.95% for the faultless circuit and 8.12% for the faulty cases.
Compared to a machine-learning-based approach, BA with the proposed methodology achieves similar accuracy rates but requires neither datasets nor time-demanding training to perform circuit diagnosis.
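A minimal sketch of the swarm-intelligence step, assuming a simplified Bat Algorithm variant (velocity pulled toward the best bat, constant loudness and pulse rate) and a hypothetical single-parameter fault: recovering the resistance of a first-order RC low-pass from its measured cutoff frequency. The circuits, tolerances, and node data of the actual case studies are not reproduced here.

```python
import math
import random

def bat_search(cost, lo, hi, bats=15, iters=100, seed=0):
    """Simplified one-dimensional Bat Algorithm (illustrative sketch)."""
    rng = random.Random(seed)
    x = [rng.uniform(lo, hi) for _ in range(bats)]
    v = [0.0] * bats
    best = min(x, key=cost)
    loudness, pulse_rate = 0.9, 0.5            # kept constant for brevity
    for _ in range(iters):
        for i in range(bats):
            f = rng.uniform(0.0, 2.0)          # pulse frequency
            v[i] += (best - x[i]) * f          # pull toward the best bat
            cand = x[i] + v[i]
            if rng.random() > pulse_rate:      # local random walk near best
                cand = best + 0.01 * (hi - lo) * rng.uniform(-1.0, 1.0)
            cand = min(max(cand, lo), hi)
            if cost(cand) < cost(x[i]) and rng.random() < loudness:
                x[i] = cand
            if cost(cand) < cost(best):
                best = cand
    return best

# Hypothetical parametric fault: the measured cutoff of an RC low-pass
# deviates from nominal; recover R given a known capacitance.
C = 100e-9                                     # known capacitance [F]
measured_fc = 1591.55                          # "measured" cutoff [Hz]
cost = lambda R: (1.0 / (2.0 * math.pi * R * C) - measured_fc) ** 2
R_est = bat_search(cost, 100.0, 10000.0)       # expected near 1 kOhm
```

The paper's method compares full circuit responses across accessible nodes rather than a single cutoff frequency; the cost function above is only a stand-in.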
ABSTRACT
This study investigates the application of the Gaussian Radial Basis Function Neural Network (GRNN), Gaussian Process Regression (GPR), and Multilayer Perceptron Optimized by Particle Swarm Optimization (MLP-PSO) models in analyzing the relationship between rainfall and runoff and in predicting runoff discharge. These models utilize autoregressive input vectors based on daily-observed TRMM rainfall and TMR inflow data. The performance evaluation of each model is conducted using statistical measures to compare their effectiveness in capturing the complex relationships between input and output variables. The results consistently demonstrate that the MLP-PSO model outperforms the GRNN and GPR models, achieving the lowest root mean square error (RMSE) across multiple input combinations. Furthermore, the study explores the application of the Empirical Mode Decomposition-Hilbert-Huang Transform (EMD-HHT) in conjunction with the GPR and MLP-PSO models. This combination yields promising results in streamflow prediction, with the MLP-PSO-EMD model exhibiting superior accuracy compared to the GPR-EMD model. The incorporation of different components into the MLP-PSO-EMD model significantly improves its accuracy. Among the presented scenarios, Model M4, which incorporates the simplest components, emerges as the most favorable choice due to its lowest RMSE values. Comparisons with other models reported in the literature further underscore the effectiveness of the MLP-PSO-EMD model in streamflow prediction. This study offers valuable insights into the selection and performance of different models for rainfall-runoff analysis and prediction.
ABSTRACT
Nowadays, high-performance audio communication devices demand superior audio quality. To improve audio quality, several authors have developed acoustic echo cancellers based on the particle swarm optimization (PSO) algorithm. However, their performance is significantly reduced because the PSO algorithm suffers from premature convergence. To overcome this issue, we propose a new variant of the PSO algorithm based on the Markovian switching technique. Furthermore, the proposed algorithm has a mechanism to dynamically adjust the population size over the filtering process, which significantly reduces its computational cost. To implement the proposed algorithm on a Stratix IV GX EP4SGX530 FPGA, we present, for the first time, a parallel metaheuristic processor in which each processing core simulates a different number of particles using the time-multiplexing technique, making the population-size variation effective in hardware. The properties of the proposed algorithm, together with the proposed parallel hardware architecture, potentially allow the development of high-performance acoustic echo canceller (AEC) systems.
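The dynamic population mechanism can be sketched as follows (without the Markovian switching component): the swarm sheds particles while the global best keeps improving, saving evaluations, and recruits fresh ones after a stagnation streak. The population bounds and stall threshold are illustrative, not the paper's values.

```python
import random

def dyn_pso(cost, dim, lo, hi, iters=150, seed=0):
    """Global-best PSO with a simple dynamic population (illustrative)."""
    rng = random.Random(seed)

    def new_particle():
        p = [rng.uniform(lo, hi) for _ in range(dim)]
        return {"x": p[:], "v": [0.0] * dim, "bx": p[:], "bc": cost(p)}

    swarm = [new_particle() for _ in range(20)]
    best = min(swarm, key=lambda s: s["bc"])
    gx, gc = best["bx"][:], best["bc"]
    stall = 0
    for _ in range(iters):
        improved = False
        for s in swarm:
            for k in range(dim):
                s["v"][k] = (0.7 * s["v"][k]
                             + 1.5 * rng.random() * (s["bx"][k] - s["x"][k])
                             + 1.5 * rng.random() * (gx[k] - s["x"][k]))
                s["x"][k] += s["v"][k]
            c = cost(s["x"])
            if c < s["bc"]:
                s["bc"], s["bx"] = c, s["x"][:]
            if c < gc:
                gc, gx, improved = c, s["x"][:], True
        if improved:
            stall = 0
            if len(swarm) > 8:                  # converging: drop worst particle
                swarm.remove(max(swarm, key=lambda s: s["bc"]))
        else:
            stall += 1
            if stall >= 5 and len(swarm) < 40:  # stagnating: add a fresh particle
                swarm.append(new_particle())
                stall = 0
    return gx, gc
```

In the FPGA realization described above, each processing core would time-multiplex a varying share of these particles instead of holding one particle per core.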
ABSTRACT
Scheduling residential loads for financial savings and user comfort may be performed by smart home controllers (SHCs). For this purpose, the electricity utility's tariff variations, the lowest-cost schedules, the user's preferences, and the level of comfort each load may add for the household user are examined. However, the user-comfort models found in the literature do not take the user's comfort perceptions into account; they only use the user-defined preferences for load on-time registered in the SHC. Comfort perceptions are dynamic and fluctuating, whereas comfort preferences are fixed. Therefore, this paper proposes a comfort function that takes the user's perceptions into account using fuzzy logic. The proposed function is integrated into an SHC that uses PSO to schedule residential loads, with economy and user comfort as multiple objectives. The analysis and validation of the proposed function include different scenarios related to economy-comfort trade-offs, load shifting, energy tariffs, user preferences, and user perceptions. The results show that the proposed comfort function is more beneficial only when the user requires the SHC to prioritize comfort at the expense of financial savings; otherwise, it is more beneficial to use a comfort function that considers only the user's comfort preferences and not their perceptions.
ABSTRACT
In the optimization field, the ability to efficiently tackle complex and high-dimensional problems remains a persistent challenge. Metaheuristic algorithms, with a particular emphasis on their autonomous variants, are emerging as promising tools to overcome this challenge. The term "autonomous" refers to these variants' ability to dynamically adjust certain parameters based on their own outcomes, without external intervention. The objective is to leverage the characteristics of an unsupervised machine-learning clustering technique to configure the population parameter with autonomous behavior, and to show how search-space clustering enhances the intensification and diversification of the metaheuristic. This allows dynamic adjustments based on the algorithm's own outcomes, increasing or decreasing the population in response to the need for diversification or intensification of solutions, with the aim of imbuing the metaheuristic with a broader search capability that can yield superior results. This study provides an in-depth examination of autonomous metaheuristic algorithms, including Autonomous Particle Swarm Optimization, the Autonomous Cuckoo Search Algorithm, and the Autonomous Bat Algorithm. We subject these algorithms to a thorough evaluation against their original counterparts using high-dimensional functions from the well-known CEC LSGO benchmark suite. Quantitative results revealed performance enhancements in the autonomous versions, with Autonomous Particle Swarm Optimization consistently outperforming its peers in achieving optimal minimum values. The Autonomous Cuckoo Search Algorithm and the Autonomous Bat Algorithm also demonstrated noteworthy advancements over their traditional counterparts. A salient feature of these algorithms is the continuous adjustment of their population, which significantly bolsters their capability to navigate complex and high-dimensional search spaces.
However, as with all methodologies, there were challenges in ensuring consistent performance across all test scenarios. The intrinsic adaptability and autonomous decision-making embedded within these algorithms herald a new era of optimization tools suited to complex real-world challenges. In sum, this research accentuates the potential of autonomous metaheuristics in the optimization arena, laying the groundwork for their expanded application across diverse challenges and domains. We recommend further exploration and adaptation of these autonomous algorithms to fully harness their potential.
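A sketch of how a clustering signal could drive the autonomous population parameter. Greedy single-linkage counting over one-dimensional best positions stands in for the unsupervised clustering technique the study uses; the thresholds and step sizes are illustrative assumptions.

```python
def cluster_count(points, eps):
    """Greedy single-linkage cluster count (a stand-in for the paper's
    unsupervised clustering step; purely illustrative)."""
    centers = []
    for p in points:
        if all(abs(p - c) > eps for c in centers):
            centers.append(p)
    return len(centers)

def adjust_population(points, pop_size, eps=0.5, min_pop=10, max_pop=60):
    """Grow the population when the swarm has collapsed into few clusters
    (diversification is needed); shrink it when many clusters already
    cover the space (intensification can proceed with fewer agents)."""
    k = cluster_count(points, eps)
    if k <= max(2, len(points) // 10):
        return min(max_pop, pop_size + 5)      # diversify
    if k >= len(points) // 2:
        return max(min_pop, pop_size - 5)      # intensify
    return pop_size
```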
ABSTRACT
Soft sensors based on deep learning approaches are growing in popularity due to their ability to extract high-level features from training data, improving soft sensors' performance. In the training of such a deep model, the set of hyperparameters is critical to achieving generalization and reliability. However, choosing the training hyperparameters is a complex task. Usually, the set of hyperparameters is defined by a random approach, which may be inadequate given the high number of candidate sets and the soft-sensing purpose. This work proposes RB-PSOSAE, a Representation-Based Particle Swarm Optimization with a modified evaluation function to optimize the hyperparameter set of a Stacked AutoEncoder-based soft sensor. The evaluation function considers the validation mean square error (MSE) and the representativeness of the features extracted in the pre-training step, measured through mutual information (MI) analysis. In this way, RB-PSOSAE computes hyperparameters capable of supporting the training process to generate models with improved generalization and relevant hidden features. As a result, the proposed method achieves more than 16.4% improvement in RMSE compared to a standard PSO-based method and, in some cases, more than 50% improvement compared to traditional methods applied to the same real-world nonlinear industrial process. The results thus demonstrate better prediction performance than traditional and state-of-the-art methods.
Subjects
Algorithms; Neural Networks, Computer; Reproducibility of Results
ABSTRACT
This work analyzes two metaheuristic optimization techniques, the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO), with six variations each, and compares them regarding convergence, quality, and dispersion of solutions. The optimization target is the Gaussian Adaptive PID control (GAPID): finding the parameters that achieve enhanced performance and robustness to load variations relative to the traditional PID. The adaptive rule of the GAPID is based on a Gaussian function whose adjustment parameters are its concavity and the lower and upper bounds of the gains. It is a smooth function with smooth derivatives, which helps avoid the problems related to abrupt gain transitions commonly found in other adaptive methods. Because there is no analytical methodology to set these parameters, this work used bio-inspired optimization algorithms. The test plant is a DC motor driving a beam with a variable load. Results obtained from load and gain sweep tests show that the GAPID delivers fast responses with very low overshoot and good robustness to load changes, with minimal variation, which is impossible to achieve with the linear PID.
Subjects
Algorithms
ABSTRACT
An artificial neural network (ANN) hybrid structure was proposed that, unlike standard ANN structure optimization, allows several adsorption curves to be fitted simultaneously by indirectly minimizing the real output error. To model a case study of 3-aminophenol adsorption onto avocado seed activated carbon, the hybrid ANN was applied to fit the parameters of the Langmuir and Sips isotherm models. Network weights and biases were optimized with two different methods, particle swarm optimization (PSO) and the genetic algorithm (GA), due to their good convergence in large-scale problems. In addition, the data were also fitted with the Levenberg-Marquardt feedforward optimization method to compare the performance of a standard ANN model with the proposed hybrid model. Results showed that the ANN-isotherm hybrid models with both PSO and GA accurately fitted the experimental equilibrium adsorption capacity data using the Sips isotherm model, obtaining a Pearson correlation coefficient (R) of the order of 0.9999 and a mean squared error (MSE) around 0.5, very similar to the performance of the standard ANN with Levenberg-Marquardt optimization. In contrast, the results with the Langmuir isotherm model were considerably inferior for the ANN-isotherm hybrid models with both PSO and GA, with R and MSE of around 0.944 and 4.04 × 10², respectively. The proposed ANN-isotherm hybrid structure was successfully applied to estimate the parameters of adsorption isotherms, reducing the computational demand and the exhausting task of estimating the parameters of each adsorption curve individually.
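The isotherm being fitted, together with the objective a PSO or GA individual would be scored on, can be written compactly as follows. The symbols qm, Ks, and n are the usual Sips parameters; the data layout is a hypothetical sketch, not the paper's pipeline.

```python
def sips(C, qm, Ks, n):
    """Sips isotherm q = qm*(Ks*C)**n / (1 + (Ks*C)**n).
    Reduces to the Langmuir isotherm when n == 1."""
    t = (Ks * C) ** n
    return qm * t / (1.0 + t)

def isotherm_mse(params, C_data, q_data):
    """Objective a PSO/GA would minimize when fitting (qm, Ks, n)
    to equilibrium adsorption data."""
    qm, Ks, n = params
    return sum((sips(C, qm, Ks, n) - q) ** 2
               for C, q in zip(C_data, q_data)) / len(C_data)
```

The hybrid structure described above fits all curves at once by letting the network predict the isotherm parameters, with this kind of squared-error objective driving the optimizer.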
Subjects
Algorithms; Charcoal; Adsorption; Charcoal/chemistry; Neural Networks, Computer
ABSTRACT
The image stitching process is based on the alignment and composition of multiple images that represent parts of a 3D scene. The automatic construction of panoramas from multiple digital images is a technique of great importance, with applications in areas such as remote sensing and inspection and maintenance in many work environments. In traditional automatic image stitching, image alignment is generally performed by the numerical Levenberg-Marquardt method. Although these traditional approaches present only minor flaws in the final reconstruction, the result is not appropriate for industrial-grade applications. To improve the final stitching quality, this work uses an RGBD robot capable of precise image positioning. To optimize the final adjustment, this paper proposes the use of bio-inspired algorithms, namely the Bat Algorithm, Grey Wolf Optimizer, Arithmetic Optimization Algorithm, Salp Swarm Algorithm, and Particle Swarm Optimization, in order to verify the efficiency and competitiveness of metaheuristics against the classical Levenberg-Marquardt method. The results obtained show that the metaheuristics found better solutions than the traditional approach.
Subjects
Algorithms; Humans
ABSTRACT
The study of power quality (PQ) has gained relevance over the years due to the increase in non-linear loads connected to the grid. It is therefore important to study the propagation of power quality disturbances (PQDs) to determine their propagation points in the grid and their source of generation. Some papers in the state of the art analyze point measurements of a limited number of PQDs, some of them using high-cost commercial equipment. The proposed method is based on a proprietary system, composed of an FPGA data logger with GPS, that enables synchronized measurements merged with a fully parameterized PQD model, allowing disturbances propagating through the grid to be detected and tracked using the wavelet transform (WT), fast Fourier transform (FFT), Hilbert-Huang transform (HHT), genetic algorithms (GAs), and particle swarm optimization (PSO). Measurements were performed in an industrial installation, detecting the propagation of three PQDs: impulsive transients propagated to two locations in the grid, and voltage fluctuation and harmonic content propagated to all locations. The results obtained show that the low-cost system and the developed methodology allow several PQDs to be detected, and their propagation within a grid to be tracked, with 100% accuracy.
Subjects
Algorithms; Wavelet Analysis; Fourier Analysis
ABSTRACT
The COVID-19 pandemic, which originated in December 2019 in the city of Wuhan, China, continues to have a devastating effect on the health and well-being of the global population. At the time of writing, approximately 8.8 million people had been infected and more than 465,740 had died worldwide. An important step in combating COVID-19 is the screening of infected patients using chest X-ray (CXR) images. However, this task is extremely time-consuming and prone to variability among specialists owing to the heterogeneity of the images. Therefore, the present study aims to assist specialists in identifying COVID-19 patients from their chest radiographs using automated computational techniques. The proposed method has four main steps: (1) acquisition of the dataset from two public databases; (2) standardization of the images through preprocessing; (3) extraction of features using a deep-features-based approach implemented through the VGG19, Inception-v3, and ResNet50 networks; (4) classification of the images into COVID-19 groups using eXtreme Gradient Boosting (XGBoost) optimized by particle swarm optimization (PSO). In the best-case scenario, the proposed method achieved an accuracy of 98.71%, a precision of 98.89%, a recall of 99.63%, and an F1-score of 99.25%. Our study demonstrates that the problem of classifying CXR images of patients under COVID-19 and non-COVID-19 conditions can be solved efficiently by combining a deep-features-based approach with a robust classifier (XGBoost) optimized by an evolutionary algorithm (PSO). The proposed method offers considerable advantages for clinicians seeking to tackle the current COVID-19 pandemic.
ABSTRACT
A hybrid neural model (HNM) combined with particle swarm optimization (PSO) was used to optimize ethanol production by a flocculating yeast grown on cashew apple juice. The HNM was obtained by coupling an artificial neural network (ANN), which predicted specific reaction rates, to mass balance equations for substrate (S), product, and biomass (X) concentrations, providing an alternative method for predicting the behavior of complex systems. ANN training was conducted using an experimental dataset of X and S, temperature, and stirring speed. The HNM was statistically validated against a new dataset and was capable of representing the system behavior. The model was optimized with PSO using a multiobjective function relating efficiency and productivity. The optimal estimated conditions were S0 = 127 g L-1, X0 = 5.8 g L-1, 35 °C, and 111 rpm; under these conditions, an efficiency of 91.5% and a productivity of 8.0 g L-1 h-1 were obtained at approximately 7 h of fermentation.
Subjects
Ethanol/metabolism; Fruit and Vegetable Juices; Malus/chemistry; Models, Biological; Neural Networks, Computer; Saccharomyces cerevisiae/growth & development
ABSTRACT
Studies in air pollution epidemiology are of paramount importance for diagnosing and improving quality of life. Exploring new methods or modifying existing ones is critical to obtaining better results. Most air pollution epidemiology studies use the Generalized Linear Model (GLM), especially the default versions in the R, S-Plus, SAS, and Stata software packages, which use maximum likelihood estimators for parameter optimization. A smooth function of time (usually a spline) is also generally used as a preprocessing step to account for seasonal and long-term trends. This investigation introduces a new approach to the GLM, proposing the estimation of the free coefficients through bio-inspired metaheuristics - Particle Swarm Optimization (PSO), Genetic Algorithms, and Differential Evolution - as well as the replacement of the spline function by a simple normalization procedure. The case studies comprise three important cities of São Paulo state, Brazil, with distinct characteristics: São Paulo, Campinas, and Cubatão. We considered the impact of particulate matter with an aerodynamic diameter of less than 10 µm (PM10), ambient temperature, and relative humidity on the number of hospital admissions for respiratory diseases (ICD-10, J00 to J99). The results show that the new approach (especially PSO) brings performance gains compared to the default versions of statistical software such as R.
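A sketch of the proposed replacement of maximum-likelihood fitting by a metaheuristic, assuming a Poisson GLM with log link (the usual choice for admission counts) and a bare-bones rand/1/bin Differential Evolution, one of the three metaheuristics investigated. Population size, F, and CR are generic defaults, not the paper's settings.

```python
import math
import random

def poisson_nll(beta, X, y):
    """Negative log-likelihood of a Poisson GLM with log link
    (the constant log(y!) term is dropped)."""
    nll = 0.0
    for xi, yi in zip(X, y):
        eta = sum(b * v for b, v in zip(beta, xi))
        nll += math.exp(eta) - yi * eta
    return nll

def de_fit(X, y, dim, pop=30, iters=200, F=0.6, CR=0.9, seed=0):
    """Estimate the GLM free coefficients with a minimal
    rand/1/bin Differential Evolution."""
    rng = random.Random(seed)
    P = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(pop)]
    cost = [poisson_nll(p, X, y) for p in P]
    for _ in range(iters):
        for i in range(pop):
            a, b, c = rng.sample([j for j in range(pop) if j != i], 3)
            trial = [P[a][k] + F * (P[b][k] - P[c][k])
                     if rng.random() < CR else P[i][k]
                     for k in range(dim)]
            tc = poisson_nll(trial, X, y)
            if tc < cost[i]:                   # greedy selection
                P[i], cost[i] = trial, tc
    return P[cost.index(min(cost))]
```

Because the Poisson log-likelihood with log link is convex, the metaheuristic should reach (or beat) the likelihood attained at any reference coefficient vector.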
Subjects
Air Pollutants; Air Pollution; Respiration Disorders; Air Pollutants/analysis; Air Pollution/analysis; Brazil/epidemiology; Humans; Linear Models
ABSTRACT
Automatic and reliable prostate segmentation is an essential prerequisite for assisting diagnosis and treatment, such as guiding biopsy procedures and radiation therapy. Nonetheless, automatic segmentation is challenging due to the lack of clear prostate boundaries, owing to the similar appearance of the prostate and surrounding tissues, and due to the wide variation in size and shape among patients ascribed to pathological changes or different image resolutions. The state of the art includes methods based on probabilistic atlases, active contour models, and deep learning techniques. However, these techniques have limitations that need to be addressed: respectively, they require MRI scans with the same spatial resolution, initialization of the prostate region with well-defined contours, and manual tuning of the deep learning hyperparameters. Therefore, this paper proposes a novel automatic coarse-to-fine segmentation method for prostate 3D MRI scans. The coarse segmentation step combines local texture and spatial information using the Intrinsic Manifold Simple Linear Iterative Clustering algorithm and a probabilistic atlas in a deep convolutional neural network model, jointly with the particle swarm optimization algorithm, to classify prostate and non-prostate tissues. The fine segmentation step then uses the 3D Chan-Vese active contour model to obtain the final prostate surface. The proposed method has been evaluated on the Prostate 3T and PROMISE12 databases, presenting a Dice similarity coefficient of 84.86%, relative volume difference of 14.53%, sensitivity of 90.73%, specificity of 99.46%, and accuracy of 99.11%. Experimental results demonstrate the high performance potential of the proposed method compared to previously published methods.
Subjects
Image Interpretation, Computer-Assisted/statistics & numerical data; Imaging, Three-Dimensional/statistics & numerical data; Magnetic Resonance Imaging/statistics & numerical data; Neural Networks, Computer; Prostatic Neoplasms/diagnostic imaging; Algorithms; Databases, Factual; Deep Learning; Humans; Latent Class Analysis; Male; Models, Statistical
ABSTRACT
In supply chain management, fast and accurate decisions in supplier selection and order quantity allocation have a strong influence on the company's profitability and the total cost of finished products. In this paper, a novel non-linear model is proposed for solving the supplier selection and order quantity allocation problem. The model minimizes the total cost per time unit, considering ordering, purchasing, inventory, and transportation costs with freight rate discounts. Perfect rate and capacity constraints are also considered in the model. Since metaheuristic algorithms have been successfully applied to supplier selection, and due to the non-linearity of the proposed model, particle swarm optimization (PSO), the genetic algorithm (GA), and differential evolution (DE) are implemented as optimizing solvers instead of analytical methods. The model is tested by solving a reference model using PSO, GA, and DE, and its performance is evaluated by comparing the obtained solution against other solutions reported in the literature. Experimental results prove the effectiveness of the proposed model and demonstrate that metaheuristic algorithms can find lower-cost solutions in less time than analytical methods.
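The shape of such a fitness function can be sketched as a penalized cost per period, with demand and capacity violations handled by large penalty terms so that PSO, GA, or DE can search the allocation vector directly. The data layout below (flat freight rate per supplier) is a simplification of the freight-rate-discount structure in the paper.

```python
def total_cost(q, price, cap, demand, order_cost, freight):
    """Penalized cost of an order-quantity allocation q (one entry per
    supplier). Hypothetical data layout; freight is a flat per-supplier
    rate instead of the paper's discount schedule."""
    cost = 0.0
    for i, qi in enumerate(q):
        if qi > 0:
            cost += order_cost[i] + freight[i] + price[i] * qi
    penalty = 0.0
    penalty += 1e6 * abs(sum(q) - demand)          # meet total demand exactly
    for i, qi in enumerate(q):
        penalty += 1e6 * max(0.0, qi - cap[i])     # respect supplier capacity
    return cost + penalty
```

A metaheuristic then minimizes `total_cost` over q; infeasible allocations survive as candidates but are heavily penalized, which is the standard constraint-handling trick for swarm solvers.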
ABSTRACT
Automatic text summarization tools have a great impact on many fields, such as medicine, law, and scientific research in general. As information overload increases, automatic summaries allow handling the growing volume of documents, usually by assigning weights to the extracted phrases based on their significance in the expected summary. Obtaining the main contents of any given document in less time than it would take to do so manually is still an issue of interest. In this article, a new method is presented that automatically generates extractive summaries from documents by adequately weighting sentence-scoring features using Particle Swarm Optimization. The key feature of the proposed method is the identification of those features that are closest to the criterion used by the individual when summarizing. The proposed method combines a binary representation and a continuous one, using an original variation of a technique previously developed by the authors. Our paper shows that using user-labeled information in the training set helps to find better metrics and weights. The empirical results yield improved accuracy compared to previous methods used in this field.
ABSTRACT
The Generalized Renewal Process (GRP) is a probabilistic model for repairable systems that can represent the usual states of a system after a repair: as new, as old, or in a condition between new and old. It is often coupled with the Weibull distribution, widely used in the reliability context. In this paper, we develop novel GRP models based on probability distributions that stem from Tsallis' non-extensive entropy, namely the q-Exponential and the q-Weibull distributions. The q-Exponential and Weibull distributions can model decreasing, constant, or increasing failure intensity functions. However, the power-law behavior of the q-Exponential probability density function for specific parameter values is an advantage over the Weibull distribution when adjusting data containing extreme values. The q-Weibull probability distribution, in turn, can also fit data with bathtub-shaped or unimodal failure intensities in addition to the behaviors already mentioned. Therefore, the q-Exponential-GRP is an alternative to the Weibull-GRP model, and the q-Weibull-GRP generalizes both. The method of maximum likelihood is used to estimate their parameters by means of a particle swarm optimization algorithm, and Monte Carlo simulations are performed for validation. The proposed models and algorithms are applied to examples involving reliability-related data of complex systems, and the obtained results suggest that GRP plus q-distributions are promising techniques for the analysis of repairable systems.
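The Tsallis building blocks can be written down directly. The q-Weibull density below uses one common parameterization, which should be checked against the paper's definition before reuse; it recovers the ordinary exponential density when q → 1 and β = 1.

```python
import math

def q_exp(x, q):
    """Tsallis q-exponential: [1 + (1-q)x]^(1/(1-q)), with the cutoff
    value 0 when the base is non-positive; tends to exp(x) as q -> 1."""
    if abs(q - 1.0) < 1e-9:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    return base ** (1.0 / (1.0 - q)) if base > 0 else 0.0

def q_weibull_pdf(t, q, lam, beta):
    """q-Weibull density in one common parameterization (check against
    the paper's definition before reuse): heavy power-law tails arise
    for q > 1, which is what helps with extreme-valued data."""
    return ((2.0 - q) * (beta / lam) * (t / lam) ** (beta - 1.0)
            * q_exp(-((t / lam) ** beta), q))
```

In a q-Weibull-GRP fit, PSO would search (q, λ, β) together with the rejuvenation parameter to maximize the likelihood built from densities like this one.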
ABSTRACT
Particle Swarm Optimization (PSO) is an artificial intelligence (AI) technique that can be used to find approximate solutions to extremely difficult numerical maximization and minimization problems. In this study, a PSO algorithm was used to compare the displacements of a human cornea sample subjected to an internal pressure of 45 mmHg with the results of numerical simulations, and to identify optimized values for the hyperelastic properties of the cornea (µ and α). From the results of numerical simulations via inverse analysis with the Finite Element Method (FEM), in conjunction with the PSO algorithm, optimized values of µ = 0.047 and α = 106.7 were found. When compared with results optimized using commercial software, errors of approximately 0.15% were found. The results also showed that varying the particle inertia coefficients in the PSO algorithm can slightly improve the results, demonstrating the potential of using PSO together with FEM inverse analysis for the characterization of hyperelastic materials using simplified geometric models.
Subjects
Humans; Biomechanical Phenomena; Cornea/physiology; Algorithms; Computer Simulation; Cornea/anatomy & histology; Finite Element Analysis; Elastic Modulus/physiology; Models, Biological
ABSTRACT
ABSTRACT
The primary challenge in organizing sensor networks is energy efficiency. This requirement arises because sensor node capacities are limited and replacing them is not viable, a restriction that further decreases network lifetime. Node lifetime varies depending on the demands placed on its battery. Hence, a primary element in constructing sensor networks is resilience in dealing with the decreasing lifetime of all sensor nodes. Various network infrastructures, as well as their routing protocols for reducing power utilization and prolonging network lifetime, are studied. After analysis, it is observed that network constructions that depend on clustering are the most effective methods in terms of power utilization. Clustering divides networks into inter-related clusters such that every cluster has several sensor nodes with a Cluster Head (CH) at its head. Sensor-gathered information is transmitted to data-processing centers through the CH hierarchy in clustered environments. The current study utilizes a Multi-Objective Particle Swarm Optimization (MOPSO)-Differential Evolution (DE) (MOPSO-DE) technique to optimize clustering.
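A sketch of the kind of scalarized fitness a MOPSO-DE clustering scheme might evaluate for a candidate set of cluster heads, trading residual head energy against mean member-to-head distance. The weighted sum stands in for a true Pareto (MOPSO) treatment; the weights and the assumption of energies normalized to [0, 1] are illustrative, not the study's formulation.

```python
import math

def ch_fitness(nodes, energy, heads, w_energy=0.5, w_dist=0.5):
    """Scalarized clustering fitness (lower is better): prefer cluster
    heads with high residual energy and low member-to-head distance.
    `nodes` are (x, y) positions; `energy` values are assumed in [0, 1]."""
    # mean residual energy of the chosen heads (to be maximized)
    e = sum(energy[h] for h in heads) / len(heads)
    # mean distance from every node to its nearest head (to be minimized)
    d = sum(min(math.dist(nodes[i], nodes[h]) for h in heads)
            for i in range(len(nodes))) / len(nodes)
    return w_energy * (1.0 - e) + w_dist * d
```

A swarm/DE hybrid would search over candidate head sets, with each particle encoding one CH assignment and this fitness guiding the selection.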