ABSTRACT
This research focuses on assessing defects in injection-moulded plastic products, including sink marks, shrinkage, and warpage. Process parameters, namely pure cooling time, mould temperature, melt temperature, and pressure holding time, are selected for investigation, as they significantly affect the physical and mechanical properties of the final product. A full factorial design of experiments is employed to identify optimal settings. Numerical methods such as finite element (FE) simulation help predict this behaviour for different input parameters. A CAD model of a dashboard component is integrated into an FE simulation to quantify shrinkage, warpage, and sink marks. Four chosen parameters of the injection moulding machine undergo a comprehensive experimental design. Decision tree, multilayer perceptron, long short-term memory, and gated recurrent unit models are explored for modelling the injection moulding process, and the best model is used to estimate the defects. Multi-objective particle swarm optimisation then extracts the optimal process parameters. The proposed method is implemented in MATLAB and provides 18 optimal solutions on the extracted Pareto front.
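The final step, extracting optimal settings from a Pareto front, rests on Pareto dominance. The abstract's MOPSO implementation is not reproduced here; the sketch below, with made-up candidate points, shows only the dominance filter that identifies non-dominated parameter settings when all objectives (for instance shrinkage and warpage) are minimised.

```python
def pareto_front(points):
    """Return the non-dominated subset of `points` (minimisation of all objectives).

    A point p is dominated if some other point q is no worse in every
    objective and strictly better in at least one.
    """
    front = []
    for p in points:
        dominated = any(
            all(qi <= pi for qi, pi in zip(q, p)) and any(qi < pi for qi, pi in zip(q, p))
            for q in points if q is not p
        )
        if not dominated:
            front.append(p)
    return front

# Toy example: minimise (shrinkage %, warpage mm) simultaneously.
candidates = [(0.2, 5.0), (0.3, 4.0), (0.25, 6.0), (0.4, 3.5)]
print(pareto_front(candidates))  # -> [(0.2, 5.0), (0.3, 4.0), (0.4, 3.5)]
```

The dominated point (0.25, 6.0) is discarded because (0.2, 5.0) is better in both objectives; the three survivors form the trade-off front from which a single setting is chosen.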
ABSTRACT
The development of soft computing methods has had a significant influence on autonomous intelligent agriculture. This paper presents a system for autonomous greenhouse navigation that employs a fuzzy control algorithm and a deep learning-based disease classification model for tomato plants, identifying illnesses from photos of tomato leaves. The primary novelty of this study is an upgraded Deep Convolutional Generative Adversarial Network (DCGAN) that creates augmented images of diseased tomato leaves from the original genuine samples, considerably enlarging the training dataset. To find the optimal training model, four deep learning networks (VGG19, Inception-v3, DenseNet-201, and ResNet-152) were carefully compared on a dataset of nine tomato leaf disease classes. These models achieve validation accuracies of 92.32%, 90.83%, 96.61%, and 97.07%, respectively, on the original PlantVillage dataset. Using the enhanced dataset, the ResNet-152 architecture then reaches a high accuracy of 99.69%, compared with 97.07% on the original dataset. This improvement demonstrates the value of the proposed DCGAN in improving the performance of the deep learning model for greenhouse plant monitoring and disease detection. Furthermore, the proposed approach may have broader use in various agricultural scenarios, potentially transforming the field of autonomous intelligent agriculture.
Subjects
Agriculture , Deep Learning , Plant Diseases , Plant Leaves , Solanum lycopersicum , Solanum lycopersicum/growth & development , Agriculture/methods , Robotics/methods , Algorithms , Neural Networks, Computer , Soft Computing
ABSTRACT
It is well known that the roughness of a wall plays a crucial role in determining the passive earth pressure exerted on a rigid wall. While the effects of positive wall roughness have been extensively studied in the past few decades, studies of passive earth pressure with negative wall friction are rarely found in the literature. This study aims to provide a precise solution for negative-friction walls under passive conditions. The research starts by adopting a radial stress field for the cohesionless backfill and employs the concept of stress self-similarity. The problem is then formulated so that a statically admissible stress field can be developed throughout the analyzed domain using a two-step numerical framework. The framework involves numerical integration, which leads to the exploration of the statically admissible stress field in cohesionless backfills under negative wall friction. This, in turn, sheds light on the mechanism of load transfer in such situations, so that reliable design charts and tables can be provided for practical use. The study concludes with a soft computing model that leads to more robust and effective designs for earth-retaining structures under various negative wall frictions and sloping backfills.
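The paper's self-similar stress-field solution is not reproduced here, but the baseline it generalises is worth stating: for a smooth (zero-friction) vertical wall with horizontal cohesionless backfill, classical Rankine theory gives the passive earth pressure coefficient Kp = (1 + sin φ)/(1 − sin φ). A minimal sketch of that baseline case:

```python
import math

def rankine_kp(phi_deg):
    """Classical Rankine passive earth pressure coefficient for a smooth,
    vertical wall and horizontal cohesionless backfill (zero wall friction).
    phi_deg is the soil's internal friction angle in degrees."""
    phi = math.radians(phi_deg)
    return (1 + math.sin(phi)) / (1 - math.sin(phi))

# For a typical sand with phi = 30 degrees:
print(round(rankine_kp(30.0), 3))  # -> 3.0
```

Wall friction (positive or negative) shifts Kp away from this value, which is exactly the regime the study's two-step numerical framework is built to resolve.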
ABSTRACT
Drought is deemed a major natural disaster that can lead to severe economic and social implications. Drought indices are utilized worldwide for drought management and monitoring. However, as a result of the inherent complexity of drought phenomena and hydroclimatic condition differences, no universal drought index is available for effectively monitoring drought across the world. Therefore, this study aimed to develop a new meteorological drought index to describe and forecast drought based on various artificial intelligence (AI) models: decision tree (DT), generalized linear model (GLM), support vector machine, artificial neural network, deep learning, and random forest. A comparative assessment was conducted between the developed AI-based indices and nine conventional drought indices based on their correlations with multiple drought indicators. Historical records of five drought indicators, namely runoff, along with deep, lower, root, and upper soil moisture, were utilized to evaluate the models' performance. Different combinations of climatic datasets from Alice Springs, Australia, were utilized to develop and train the AI models. The results demonstrated that the rainfall anomaly drought index was the best conventional drought index, scoring the highest correlation (0.718) with the upper soil moisture. The highest correlation between the new and conventional indices was found between the DT-based index and the rainfall anomaly index at a value of 0.97, whereas the lowest correlation was 0.57 between the GLM and the Palmer drought severity index. The GLM-based index achieved the best performance according to its high correlations with conventional drought indicators, e.g., a correlation coefficient of 0.78 with the upper soil moisture. Overall, the developed AI-based drought indices outperformed the conventional indices, hence contributing effectively to more accurate drought forecasting and monitoring. 
The findings emphasized that AI can be a promising and reliable prediction approach for achieving better drought assessment and mitigation.
ABSTRACT
Advancements in AI have notably changed cancer research, improving patient care by enhancing detection, survival prediction, and treatment efficacy. This review covers the role of Machine Learning, Soft Computing, and Deep Learning in oncology, explaining key concepts and algorithms (such as SVM, Naïve Bayes, and CNN) in a clear, accessible manner. It aims to make AI advancements understandable to a broad audience, focusing on their application in diagnosing, classifying, and predicting various cancer types, thereby underlining AI's potential to improve patient outcomes. Moreover, we present a tabular summary of the most significant advances from the literature, offering a time-saving resource for readers to grasp each study's main contributions. The remarkable benefits of AI-powered algorithms in cancer care underscore their potential for advancing cancer research and clinical practice. This review is a valuable resource for researchers and clinicians interested in the transformative implications of AI in cancer care.
Subjects
Algorithms , Artificial Intelligence , Neoplasms , Humans , Neoplasms/diagnosis , Neoplasms/therapy , Biomedical Research , Machine Learning
ABSTRACT
Water resources are constantly threatened by pollution from potentially toxic elements (PTEs). In efforts to monitor and mitigate PTE pollution in water resources, machine learning (ML) algorithms have been utilized for prediction. However, review studies have not paid attention to the suitability of the input variables used for PTE prediction. Therefore, the present review analyzed studies that employed three ML algorithms, MLP-NN (multilayer perceptron neural network), RBF-NN (radial basis function neural network), and ANFIS (adaptive neuro-fuzzy inference system), to predict PTEs in water. A total of 139 models were analyzed to ascertain the input variables utilized, their suitability, the trends in ML model applications, and the comparative performance of the algorithms. The review identified seven groups of input variables commonly used to predict PTEs in water. Group 1 comprised physical parameters (P), chemical parameters (C), and metals (M); Group 2 contains only P and C; Group 3 only P and M; Group 4 only C and M; Group 5 only P; Group 6 only C; and Group 7 only M. Studies employing the three algorithms showed that the parameters of Groups 1, 2, 3, 5, and 7 are suitable input variables for forecasting PTEs in water. The parameters of Groups 4 and 6 also proved suitable for the MLP-NN algorithm, but their suitability for the RBF-NN and ANFIS algorithms could not be ascertained. The most commonly predicted PTEs were Fe, Zn, and As for the MLP-NN algorithm; NO3, Zn, and Pb for the RBF-NN algorithm; and NO3, Fe, and Mn for ANFIS. Based on correlation and determination coefficients (R, R2), the overall order of performance of the three ML algorithms was ANFIS > RBF-NN > MLP-NN, even though MLP-NN was the most commonly used algorithm.
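As an illustration of the second algorithm reviewed, the sketch below implements a minimal RBF-NN in its exact-interpolation form: Gaussian kernels centred on the training samples, with output weights obtained from a single linear solve. The data and kernel width are hypothetical, not drawn from any reviewed study.

```python
import numpy as np

def rbf_fit_predict(X_train, y_train, X_query, sigma=1.0):
    """Exact-interpolation RBF network: one Gaussian kernel per training
    point; output weights solved from the kernel Gram matrix."""
    def kernel(A, B):
        # Squared Euclidean distances between every row of A and of B.
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))

    G = kernel(X_train, X_train)          # Gram matrix (SPD for distinct points)
    w = np.linalg.solve(G, y_train)       # output-layer weights
    return kernel(X_query, X_train) @ w

# Hypothetical one-feature data (e.g. a water-quality parameter vs. a PTE level).
X = np.array([[0.0], [1.0], [2.0]])
y = np.array([0.0, 1.0, 4.0])
# At the training points the network reproduces the targets exactly.
print(np.round(rbf_fit_predict(X, y, X), 6))
```

Practical RBF-NNs use fewer centres than samples and a regularised fit, but the kernel-plus-linear-output structure is the same.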
Subjects
Algorithms , Machine Learning , Neural Networks, Computer , Water Pollutants, Chemical , Water Resources , Water Pollutants, Chemical/analysis , Environmental Monitoring/methods , Fuzzy Logic
ABSTRACT
Concrete-filled steel tube columns (CFSTCs) are important elements in the construction sector, and predictive analysis of their behavior is essential. Recent works have revealed the potential of metaheuristic-assisted approximators for this purpose. The main idea of this paper, therefore, is to introduce a novel integrative model for appraising the axial compression capacity (Pu) of CFSTCs. The proposed model is an artificial neural network (ANN) supervised by the satin bowerbird optimizer (SBO); this metaheuristic algorithm trains the ANN optimally to find the best contribution of the input parameters to Pu. Column length and the compressive strength of concrete, as well as the characteristics of the steel tube (i.e., diameter, thickness, yield stress, and ultimate stress), are considered input data. The prediction results are compared to five ANNs supervised by the backtracking search algorithm (BSA), earthworm optimization algorithm (EWA), social spider algorithm (SOSA), salp swarm algorithm (SSA), and wind-driven optimization algorithm (WDA). Evaluation of various accuracy indicators showed that the proposed model surpassed all of them in both learning and reproducing the Pu pattern. The mean absolute percentage error of the SBO-ANN was 2.3082%, versus 4.3821%, 17.4724%, 15.7898%, 4.2317%, and 3.6884% for the BSA-ANN, EWA-ANN, SOSA-ANN, SSA-ANN, and WDA-ANN, respectively. The SBO-ANN also proved more accurate than several hybrid models from the earlier literature. Moreover, principal component analysis of the dataset showed that the yield stress, diameter, and ultimate stress of the steel tube are the three most important factors in Pu prediction. A predictive formula is finally derived from the optimized SBO-ANN by extracting and organizing the weights and biases of the ANN.
Owing to the accurate estimation shown by this model, the derived formula can reliably predict the Pu of concrete-filled steel tube columns.
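The closing step, turning the trained network's weights and biases into a predictive formula, is mechanical for a one-hidden-layer ANN: Pu = W2 · tanh(W1 · x + b1) + b2. The weights below are hypothetical placeholders, not the paper's fitted SBO-ANN parameters; they only illustrate how such a formula is evaluated once extracted.

```python
import numpy as np

def ann_formula(x, W1, b1, W2, b2):
    """Closed-form prediction from an extracted one-hidden-layer ANN:
    output = W2 @ tanh(W1 @ x + b1) + b2 (tanh hidden activation assumed)."""
    return W2 @ np.tanh(W1 @ x + b1) + b2

# Hypothetical weights standing in for trained parameters (2 inputs, 2 hidden units).
W1 = np.array([[0.5, -0.2], [0.1, 0.3]])
b1 = np.array([0.0, 0.1])
W2 = np.array([[1.0, -1.0]])
b2 = np.array([2.0])

x = np.array([1.0, 2.0])   # hypothetical scaled input vector
print(ann_formula(x, W1, b1, W2, b2))
```

In practice the inputs are scaled to the range used during training before the formula is applied, and the output is de-scaled afterwards.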
ABSTRACT
Computer-aided diagnosis (CAD) systems play a vital role in modern research by effectively minimizing both time and costs. They support healthcare professionals such as radiologists in their decision-making by efficiently detecting abnormalities and offering accurate, dependable information. Such systems depend heavily on efficient feature selection to accurately categorize high-dimensional biological data, and the selected features can subsequently assist in the diagnosis of related medical conditions. Identifying patterns in biomedical data can be quite challenging due to the presence of numerous irrelevant or redundant features, so it is crucial to apply a feature selection (FS) process to eliminate them. The primary goal of FS approaches is to improve classification accuracy by eliminating features that are irrelevant or less informative; the FS phase plays a critical role in attaining optimal results in machine learning (ML)-driven CAD systems, and the effectiveness of ML models can be significantly enhanced by incorporating efficient features during training. This empirical study presents a methodology for the classification of biomedical data using the FS technique. The proposed approach incorporates three soft computing-based optimization algorithms, namely Teaching Learning-Based Optimization (TLBO), Elephant Herding Optimization (EHO), and a proposed hybrid of the two. These algorithms have been employed previously; however, their effectiveness in addressing FS issues in predicting human diseases has not been investigated. The evaluation focuses on the categorization of benign and malignant tumours using the publicly available Wisconsin Diagnostic Breast Cancer (WDBC) benchmark dataset. The five-fold cross-validation technique is employed to mitigate the risk of over-fitting.
The proficiency of the proposed approach is evaluated using several metrics, including sensitivity, specificity, precision, accuracy, area under the receiver-operating characteristic curve (AUC), and F1-score. The best accuracy obtained with the suggested approach is 97.96%. The proposed clinical decision support system demonstrates highly favourable classification performance, making it a valuable tool for medical practitioners to use as a second opinion and reducing the burden on expert medical practitioners.
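Five-fold cross-validation, used above to guard against over-fitting, partitions the data so that each sample is validated on exactly once. A minimal index-splitting sketch follows; the 569-sample size matches the public WDBC dataset, while the contiguous split is a simplification (real pipelines shuffle or stratify first).

```python
def k_fold_indices(n_samples, k=5):
    """Partition sample indices 0..n_samples-1 into k contiguous folds;
    each fold serves once as the validation set while the rest train."""
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(list(range(start, start + size)))
        start += size
    # Return (train_indices, val_indices) pairs, one per fold.
    return [(sorted(set(range(n_samples)) - set(f)), f) for f in folds]

# WDBC has 569 samples; five folds of sizes 114, 114, 114, 114, 113.
splits = k_fold_indices(569, k=5)
print([len(val) for _, val in splits])  # -> [114, 114, 114, 114, 113]
```

The reported metrics are then averaged over the five validation folds, which gives a less optimistic estimate than a single train/test split.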
ABSTRACT
Transformer performance and efficiency can be enhanced by effectively addressing the properties of the insulation system. The power transformer insulation system weakens as a result of operational thermal stresses brought on by dynamic loading and shifting environmental patterns. Winding hot spot temperature is a crucial metric that must be kept below the prescribed limit while power transformers are operating in order to maintain power system reliability, because, among other variables, the time-dependent aging of insulation depends on transitions in hot spot temperature. Owing to the non-linear nature of the conventional mathematical models used to determine these temperatures, and the complexity of the thermal phenomena, further investigation is needed to fully understand the variables associated with hot spot temperature computation with minimum error. This paper explores the possibility of enhancing top oil and hot spot temperature estimation accuracy through an adaptive neuro-fuzzy inference system (ANFIS). It presents an adaptive neuro-fuzzy model that approximates the hot spot temperature of a mineral oil-filled power transformer based on loading and the established top oil temperature. Initially, a sub-ANFIS top oil temperature estimation model with loading and ambient temperature as inputs is established. Using a hybrid optimization technique, the ANFIS membership functions were fine-tuned throughout training to reduce the difference between actual and predicted outcomes. The correctness and reliability of the developed model were verified using real-world field data from a 60/90 MVA, 132 kV power transformer under dynamic operating regimes. The ANFIS results were validated against field-measured values and literature-based electrical-thermal analogue models, establishing a precise input-output correlation.
The developed ANFIS model achieves the highest coefficient of determination for both top oil temperature (TOT) and hot spot temperature (HST) (0.98 and 0.96) and the lowest mean square error (7.8 and 10.3) among the compared thermal models. Correct determination of the HST can help asset managers in thermal trend analysis of in-service transformers, supporting proper loading recommendations that safeguard the asset.
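At inference time, an ANFIS of the kind tuned here behaves as a first-order Sugeno fuzzy system: Gaussian membership functions gate linear rule consequents, and the output is the firing-strength-weighted average. The rule parameters below are illustrative placeholders, not the paper's fine-tuned membership functions.

```python
import math

def sugeno_hst(load, tot, rules):
    """First-order Sugeno inference of the kind ANFIS tunes: Gaussian
    memberships weight linear rule consequents; the output is their
    firing-strength-weighted average."""
    def gauss(x, centre, spread):
        return math.exp(-((x - centre) ** 2) / (2 * spread ** 2))

    num = den = 0.0
    for (c_l, s_l, c_t, s_t, p, q, r) in rules:
        w = gauss(load, c_l, s_l) * gauss(tot, c_t, s_t)   # rule firing strength
        num += w * (p * load + q * tot + r)                # linear consequent
        den += w
    return num / den

# Two hypothetical rules (low-load and high-load regimes); centres, spreads
# and consequent coefficients are illustrative only.
rules = [(0.5, 0.3, 40.0, 10.0, 5.0, 1.1, 2.0),
         (1.0, 0.3, 60.0, 10.0, 12.0, 1.2, 5.0)]
print(round(sugeno_hst(0.8, 55.0, rules), 2))
```

Training (the paper's hybrid optimization) adjusts exactly these centres, spreads, and consequent coefficients against measured data.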
ABSTRACT
In the urban context, environmental quality refers to the capacity of an environment to provide for and fulfill the material and spiritual needs of its inhabitants. To improve the quality of urban life and the standard of living of their citizens, planners and managers strive to raise urban environmental quality. The objective of this study is to evaluate the quality of the urban environment through spatial analysis with a multi-criteria decision-making (MCDM) method, CRITIC. The research is conducted in districts 2 and 4 of the Tabriz Metropolis Municipality. To determine urban environmental quality, air pollution, vegetation coverage, land surface temperature, waste production, population density, noise pollution, health care per capita, green space per capita, recreational space per capita, and distance from fault lines are used. After evaluating and producing environmental quality maps for the two districts, the 10 indicators were tested for significance, and a comparative evaluation was conducted to determine which district was in better condition based on a statistical analysis of T-test results. According to the CRITIC method, there are significant differences between the district averages for waste production, population density, noise pollution, distance from fault lines, land surface temperature, and the normalized difference vegetation index. Recreational space, air pollution, health care per capita, and green space per capita do not differ meaningfully on average. The preparation of environmental quality maps reveals the importance of the significant indicators at the neighborhood level in the two urban districts. In both districts, strengthening landscape continuity through the development of ecological corridors and increasing per capita provision can contribute to improving the quality of the urban environment.
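The CRITIC method used above derives objective indicator weights from the data itself: each criterion's weight grows with its contrast (standard deviation after min-max normalisation) and its conflict with the other criteria (one minus pairwise correlation). A sketch on a hypothetical decision matrix:

```python
import numpy as np

def critic_weights(X):
    """CRITIC objective weighting: weight_j is proportional to
    sigma_j * sum_k (1 - r_jk), normalised to sum to one."""
    Xn = (X - X.min(0)) / (X.max(0) - X.min(0))   # min-max normalisation per column
    sigma = Xn.std(axis=0, ddof=1)                # contrast of each criterion
    R = np.corrcoef(Xn, rowvar=False)             # criterion-criterion correlations
    C = sigma * (1.0 - R).sum(axis=1)             # information content
    return C / C.sum()

# Hypothetical decision matrix: rows = neighbourhoods, cols = three indicators.
X = np.array([[0.2, 30.0, 5.0],
              [0.8, 45.0, 2.0],
              [0.5, 38.0, 4.0],
              [0.9, 50.0, 1.0]])
w = critic_weights(X)
print(np.round(w, 3), round(float(w.sum()), 3))
```

A criterion that is both highly variable and weakly correlated with the rest receives the largest weight, which is why CRITIC needs no expert-assigned importances.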
ABSTRACT
This research aims to conduct a comparative analysis of the first crack load, flexural strength, and shear strength in reinforced concrete beams without stirrups. The comparison is made between the conventional model developed according to the current design code (ACI building code) and an unconventional approach using Artificial Neural Networks (ANNs). To accomplish this, a dataset comprising 110 samples of reinforced concrete beams without stirrup reinforcement was collected and utilised to train a Multilayer Backpropagation Neural Network in MATLAB. The primary objective of this work is to establish a knowledge-based structural analysis model capable of accurately predicting the responses of reinforced concrete structures. The coefficient of determination obtained from this comparison yields values of 0.9404 for the first cracking load, 0.9756 for flexural strength, and 0.9787 for shear strength. Through an assessment of the coefficient of determination and linear regression coefficients, it becomes evident that the ANN model produces results that closely align with those obtained from the conventional model. This demonstrates the ANN's potential for precise prediction of the structural behaviour of reinforced concrete beams.
ABSTRACT
Fisheries and shrimp production has doubled every 10 years for the previous five decades, making aquaculture the most rapidly expanding food industry. This growth is due to intensive farming and the conversion of agriculture into aquaculture in many parts of South Asia. Intensive aquaculture generates positive economic growth but, without proper monitoring, leads to environmental degradation. Unfortunately, technical innovation in aquaculture lags behind the agricultural and manufacturing industries. The advent of remote sensing and soft computing has opened various opportunities for utilizing and integrating technological advances in the civil and environmental disciplines. This paper presents the aquaculture scenario in the western Godavari delta region of Andhra Pradesh and proposes several novel assessment tools to monitor the aquaculture environment. An experimental investigation of the physicochemical characteristics of inland aquaculture ponds was carried out to evaluate their water quality. To assess the intensity of inland aquaculture, the work also examines the potential application of remote sensing and soft computing approaches. Geospatial models based on kriging and inverse distance weighting (IDW) show high performance in estimating ammonia levels in intensive aquaculture groundwaters, with coefficient of determination (R2) values of 0.947 and 0.901, respectively. Teaching learning-based optimization (TLBO) and adaptive particle swarm optimization (APSO), two of the five soft computing techniques applied in the study, perform better than the others. The remote sensing-based assessment tools and soft computing prediction models proved trustworthy, accurate, and easy to use, and could assist stakeholders and policymakers in the real-time evaluation of inland aquaculture waters.
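Of the two geospatial models, inverse distance weighting is the simpler: the estimate at an unsampled location is the average of the sampled values weighted by inverse distance raised to a power. A minimal sketch with hypothetical ammonia samples (the kriging variant, which additionally models spatial covariance, is not shown):

```python
def idw(points, values, query, power=2):
    """Inverse distance weighting: estimate at `query` is the
    distance-weighted average of the sampled values."""
    num = den = 0.0
    for (x, y), v in zip(points, values):
        d2 = (x - query[0]) ** 2 + (y - query[1]) ** 2
        if d2 == 0:
            return v          # query coincides with a sample point
        w = 1.0 / d2 ** (power / 2)
        num += w * v
        den += w
    return num / den

# Hypothetical pond ammonia samples (mg/L) at (x, y) locations.
pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
amm = [0.10, 0.30, 0.20]
print(round(idw(pts, amm, (0.5, 0.5)), 4))  # -> 0.2 (equidistant, so plain mean)
```

The power parameter controls how local the estimate is: higher powers make nearby samples dominate, which matters when pond measurements cluster spatially.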
ABSTRACT
Microfluidic devices have gained substantial attention due to their controlled manipulation of fluids for various biomedical applications. These devices can be used to study the behavior of fluids within channels at scales of several micrometers. The major applications are fluid filtration, blood filtration, and biomedical analysis. For the filtration of water and other liquids, micro-filtration-based microfluidic devices are considered potential candidates to fulfill the desired conditions and requirements. The micro-pore membrane can be designed and fabricated so as to maximize the removal of impurities from the fluid. This low-cost micro-filtration method has been reported to provide clean fluid for biomedical applications and other purposes. In this work, anodic-aluminum-oxide-based membranes were fabricated with pore sizes ranging from 70 to 500 nm. A soft computing technique, fuzzy logic, was used to estimate the filtration parameters. Finite-element analysis software was then used to study the fluid flow through the double membrane. Filtration was performed using the dual membrane, and clogging of the membrane was studied after different filtration cycles using scanning electron microscopy. The filtration was carried out to purify contaminated fluid containing impurities such as bacteria and protozoans. The membranes were tested after each cycle to verify the results. The decrease in permeance with increasing fluid velocity and permeate volume per unit clearly depicts the removal of contaminants from the fluid after four and eight cycles of filtration. The results clearly show that the filtration efficiency can be improved by increasing the number of cycles and adding a dual membrane to the microfluidic device.
The results show the potential of dual anodic aluminum oxide membranes for the effective filtration of fluids for biomedical applications, thereby offering a promising solution to address current challenges.
ABSTRACT
Nitrogen pollution in water bodies has become a pressing environmental and public health issue worldwide, demanding the implementation of effective nitrogen removal strategies. This research paper evaluates the performance of hybrid constructed wetlands (HCWs) as a sustainable and innovative approach for nitrogen removal, employing a comprehensive year-long dataset gathered from a practical setup. Data were collected under diverse operating conditions to investigate the effectiveness of HCWs in removing nitrogen. Results revealed that the HCWs achieved nitrogen removal efficiencies ranging from 28% to 65%, influenced by temperature and hydraulic retention time. Optimal removal occurred at an average temperature of 28°C and a 4-day hydraulic retention time. Notably, performance declined during colder periods, with temperatures below 15°C. The study also predicts nitrogen removal with three modeling techniques: artificial neural networks (ANNs), a support vector machine with the Pearson VII kernel function (SVM-PUK), and multiple linear regression (MLR). Prediction was carried out with temperature (TEMP), hydraulic loading rate (HLR), and the initial concentrations of chemical oxygen demand (CODin), total nitrogen (TNin), total phosphorus (TPin), and turbidity (TBin) as input parameters, with the reduction of total nitrogen (RED TN) as the output parameter. The performance of the soft computing techniques was compared in terms of the coefficient of determination (R2), root mean square error (RMSE), and mean absolute error (MAE). The analysis revealed that the performance of the SVM-PUK model (R2: 0.572, RMSE: 0.0359, MAE: 0.0294) for the prediction of TN reduction is superior, followed by MLR (R2: 0.562, RMSE: 0.0365, MAE: 0.0294) and ANN (R2: 0.597, RMSE: 0.0377, MAE: 0.0301).
The present study concludes that the treated effluent from the HCWs, using water hyacinth and water lettuce, is of fair quality and thus has potential application for the treatment of rice mill wastewater in warmer climates. Further, the machine learning approaches employed to estimate total nitrogen reduction by HCW technology have shown promising applicability in such studies. PRACTITIONER POINTS: Hybrid constructed wetlands (HCWs) are effective in removing nitrogen from wastewater. The performance of HCWs in nitrogen removal can vary due to physical, chemical, and biological processes. The performance of the HCWs highly depends on temperature and hydraulic retention time. Artificial neural networks (ANNs) and support vector machines (SVMs) provided better predictions of nitrogen removal with high accuracy and low root mean square error.
Subjects
Wastewater , Wetlands , Denitrification , Nitrogen/analysis , Neural Networks, Computer , Waste Disposal, Fluid/methods
ABSTRACT
The intensity and frequency of diverse hydro-meteorological disasters, viz. extreme droughts, severe floods, and cyclones, show increasing trends due to unsustainable management of land and water resources, coupled with increasing industrialization, urbanization, and climate change. This study focuses on drought forecasting using selected Artificial Neural Network (ANN)-based models to enable decision-makers to improve regional water management and disaster mitigation/reduction plans. Four ANN models were developed: one conventional ANN model and three hybrid ANN models, namely (a) Wavelet-based ANN (WANN), (b) Bootstrap-based ANN (BANN), and (c) Wavelet-Bootstrap-based ANN (WBANN). The Standardized Precipitation Evapotranspiration Index (SPEI), the best drought index identified for the study area, was used as the variable for drought forecasting. Three drought indices, namely SPEI-3, SPEI-6, and SPEI-12, respectively representing short-term, intermediate-term, and long-term drought conditions, were forecast at 1-month to 3-month lead times for six weather stations over the study area. Both statistical and graphical indicators were used to assess the performance of the developed models. For the hybrid wavelet models, performance was evaluated for different vanishing moments of Daubechies wavelets and decomposition levels. The best-performing bootstrap-based model was further used to analyse the uncertainty associated with the different drought forecasts. Among the models developed, the WANN and WBANN models are superior to the simple ANN and BANN models for SPEI-3, SPEI-6, and SPEI-12 up to the 3-month lead time.
The performance of the WANN and WBANN models is best for SPEI-12 (MAE = 0.091-0.347, NSE = 0.873-0.982), followed by SPEI-6 (MAE = 0.258-0.593, NSE = 0.487-0.848) and SPEI-3 (MAE = 0.332-0.787, NSE = 0.196-0.825) for all the stations up to the 3-month lead time. This finding is supported by the WBANN uncertainty analysis, which yields a narrower band width for SPEI-12 (0.240-0.898) than for SPEI-6 (0.402-1.62) and SPEI-3 (0.474-2.304). The WBANN model is therefore recommended for early warning of drought events, as it facilitates uncertainty analysis of the drought forecasting results.
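The bootstrap step behind the BANN/WBANN uncertainty bands can be sketched independently of the networks: resample the model's forecast residuals with replacement, add each draw to the point forecast, and read the empirical quantiles as the band. The residuals below are made up for illustration.

```python
import random

def bootstrap_band(residuals, point_forecast, n_boot=2000, alpha=0.05, seed=42):
    """Bootstrap uncertainty band: resample residuals with replacement and
    take the empirical (alpha/2, 1 - alpha/2) quantiles around the forecast."""
    rng = random.Random(seed)                     # seeded for reproducibility
    sims = sorted(point_forecast + rng.choice(residuals) for _ in range(n_boot))
    lo = sims[int(n_boot * alpha / 2)]
    hi = sims[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi

# Hypothetical SPEI forecast residuals from a trained ANN.
res = [-0.4, -0.2, -0.1, 0.0, 0.1, 0.2, 0.3, 0.5]
lo, hi = bootstrap_band(res, point_forecast=-1.2)
print(round(lo, 2), round(hi, 2))  # 95% band around a drought forecast of -1.2
```

A narrower band, as reported above for SPEI-12, signals a more trustworthy forecast, which is exactly why the band width is used to compare lead times.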
Subjects
Droughts , Environmental Monitoring , India , Weather , Neural Networks, Computer
ABSTRACT
Recently, image thresholding methods based on various entropy functions have gained popularity. Nonetheless, entropy-based methods depend on the spatial distribution of the grey-level values in an image, so their accuracy is limited by the non-uniform distribution of grey values. Further, the analysis of COVID-19 X-ray images has evolved into an important area of research, creating a need for an efficient method for segmenting such images. To address these issues, an efficient non-entropy-based thresholding method is suggested. A novel fitness function in terms of a segmentation score (SS) is introduced and used to reduce the segmentation error. A soft computing approach is adopted: an efficient optimizer based on chance-based birds' intelligence is introduced to maximize the fitness values. The new optimizer is validated on benchmark test functions, and the statistical parameters reveal that it is efficient, showing quite significant improvement over its counterparts based on seagull and cuckoo optimization. Precisely, the paper makes three novel contributions: (i) the fitness function, (ii) chance-based birds' intelligence for optimization, and (iii) multiclass segmentation. The COVID-19 X-ray images used for the experiments are taken from the Kaggle Radiography database. The results are compared with three state-of-the-art entropy-based techniques: Tsallis, Kapur's, and Masi. For statistical analysis, Friedman's mean rank test was conducted, and the proposed method ranked first. Its superiority is claimed in terms of Peak Signal-to-Noise Ratio (PSNR), Feature Similarity Index (FSIM), and Structural Similarity Index (SSIM). On the whole, an improvement of about 11% in PSNR values is achieved using the proposed method. This method would be helpful for medical image analysis.
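Of the three quality metrics reported, PSNR is the simplest to state: PSNR = 10 · log10(MAX² / MSE) in decibels, where MSE is the mean squared difference between the reference and the segmented image. A toy sketch on flat pixel lists:

```python
import math

def psnr(original, segmented, max_val=255.0):
    """Peak signal-to-noise ratio (dB) between two equally sized grey
    images, given here as flat lists of pixel intensities."""
    mse = sum((a - b) ** 2 for a, b in zip(original, segmented)) / len(original)
    if mse == 0:
        return float("inf")   # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

# Four hypothetical pixel values before and after segmentation.
a = [52, 110, 200, 31]
b = [50, 112, 198, 35]
print(round(psnr(a, b), 2))  # -> 39.68
```

Higher is better, so the roughly 11% PSNR improvement claimed above means the proposed segmentations sit measurably closer to the reference images.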
ABSTRACT
In this article, we analyze the dynamics of the non-linear tumor-immune delayed (TID) model illustrating the interaction between tumor cells and the immune system (cytotoxic T lymphocytes and T helper cells), where the delays represent the times required for molecule formation, cell growth, segregation, and transportation, among other factors. We exploit the soft computing paradigm using neural networks with the backpropagation Levenberg-Marquardt approach (NNLMA). The governing delayed system of the non-linear TID model, which comprises the densities of the tumor population, cytotoxic T lymphocytes, and T helper cells, is represented by non-linear delay ordinary differential equations with three classes. The baseline data are generated with the explicit Runge-Kutta method (RKM) by varying the transmutation rate of Tc to Th of the Tc population, the transmutation rate of Tc to Th of the Th population, the eradication of tumor cells through Tc cells, the eradication of tumor cells through Th cells, the natural mortality rates of the Tc and Th cells, and the time delay. The approximate solution of the non-linear TID model is determined by randomly subdividing the formulated data samples into training, testing, and validation sets for network formulation and learning. The strength, reliability, and efficacy of the designed NNLMA for solving the non-linear TID model are endorsed by small or negligible absolute errors, error histogram studies, mean-squared-error-based convergence, and close-to-optimal modeling indices for regression measurements.
ABSTRACT
Intensive aquaculture practices generate highly polluted organic effluents, characterized by biological oxygen demand (BOD), alkalinity, total ammonia, nitrates, calcium, potassium, sodium, iron, and chlorides. In recent years, inland aquaculture ponds in the western delta region of Andhra Pradesh have been expanding intensively, raising concern about negative environmental impacts. This paper presents a water quality analysis of aquaculture waters at 64 random locations in the western delta region of Andhra Pradesh. The average water quality index (WQI) was 126, with WQI values ranging from 21 to 456. Approximately 78% of the water samples were very poor and unsafe for drinking and domestic usage. The mean ammonia content in aquaculture water was 0.15 mg/L, and 78% of the samples were above the acceptable limit of 0.5 mg/L set by the World Health Organization (WHO). The ammonia content of the water ranged from 0.05 to 2.8 mg/L. The results show that ammonia levels exceed the permissible limits and are a significant concern in aquaculture waters due to toxicity. This paper also presents an intelligent soft computing approach to predicting ammonia levels in aquaculture ponds using two novel approaches: the pelican optimization algorithm (POA) and POA coupled with discrete wavelet analysis (DWT-POA). The modified and enhanced DWT-POA converges to higher performance than standard POA, with an average percentage error of 1.964 and a coefficient of determination (R2) of 0.822. Moreover, the prediction models were found to be reliable, accurate, and simple to execute. These prediction models could help stakeholders and policymakers make real-time predictions of ammonia levels in intensive inland aquaculture ponds.
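The DWT-POA coupling decomposes the ammonia series with a discrete wavelet transform before prediction. The optimizer itself is beyond an abstract-level sketch, but the decomposition and the two reported accuracy measures can be illustrated in plain Python (one-level Haar wavelet and hypothetical values; the paper does not state which wavelet it uses):

```python
import math

def haar_dwt(signal):
    """One-level Haar DWT: smooth (approximation) and fluctuation (detail) parts."""
    s = math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def average_percentage_error(obs, pred):
    """Mean absolute percentage error, as reported for the prediction models."""
    return 100.0 * sum(abs(o - p) / abs(o) for o, p in zip(obs, pred)) / len(obs)

def r_squared(obs, pred):
    """Coefficient of determination R^2."""
    mean = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot
```

In a DWT-coupled pipeline, the approximation and detail sub-series are typically predicted separately and recombined, which is what lets the hybrid outperform the plain optimizer.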
Subjects
Ammonia, Ponds, Ammonia/analysis, Wavelet Analysis, Water Quality, Aquaculture/methods
ABSTRACT
In this study, we introduce an artificial intelligence method for addressing the batch effect in transcriptome data. The method has several clear advantages over the alternative methods presently in use. Batch effect refers to the discrepancy between gene expression data series measured under different conditions. While data from the same batch (measurements performed under the same conditions) are compatible, combining various batches into one data set is problematic because the measurements are incompatible. It is therefore necessary to correct the combined data (normalization) before performing biological analysis. Numerous methods attempt to correct a data set for batch effect, relying on various assumptions regarding the distribution of the measurements. Forcing the data elements into a presupposed distribution can severely distort biological signals, leading to incorrect results and conclusions. The wider the discrepancy between the assumed and actual data distributions, the greater the biases introduced by such "correction methods". We introduce a heuristic method to reduce batch effect. The method does not rely on any assumptions regarding the distribution or behavior of data elements. Hence, it does not introduce any new biases in the process of correcting the batch effect, and it strictly maintains the integrity of measurements within the original batches.
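The abstract does not spell out the heuristic, so the sketch below is not the authors' algorithm. It only illustrates the design constraint they state: a correction with minimal distributional machinery that leaves within-batch relative measurements untouched. Shifting each batch so its per-gene median matches the pooled median is one simple baseline with that property; the batch names and values here are hypothetical.

```python
import statistics

def align_one_gene(batches):
    """batches: mapping batch-name -> expression values of ONE gene across samples.
    Shift each batch so its median matches the pooled median; within-batch
    differences between samples are preserved exactly."""
    pooled = [v for vals in batches.values() for v in vals]
    target = statistics.median(pooled)
    return {
        name: [v - statistics.median(vals) + target for v in vals]
        for name, vals in batches.items()
    }
```

A per-gene additive shift never reorders or rescales samples inside a batch, which is the integrity property the abstract emphasizes; distribution-matching methods such as quantile normalization do not share it.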
ABSTRACT
Advances in new technologies are allowing every field of real life to benefit from them. Among them, we can highlight the IoT ecosystem, which makes large amounts of information available; cloud computing, which provides large computational capacities; and Machine Learning techniques together with the Soft Computing framework, which incorporate intelligence. Together they constitute a powerful set of tools for defining Decision Support Systems that improve decisions in a wide range of real-life problems. In this paper, we focus on the agricultural sector and the issue of sustainability. We propose a methodology in which, starting from time series data provided by the IoT ecosystem, the data are preprocessed and modelled using machine learning techniques within the Soft Computing framework. The resulting model can make inferences over a given prediction horizon, enabling the development of Decision Support Systems that can help the farmer. By way of illustration, the proposed methodology is applied to the specific problem of early frost prediction. The benefits of the methodology are illustrated with specific scenarios validated by expert farmers in an agricultural cooperative. The evaluation and validation show the effectiveness of the proposal.
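As a concrete illustration of the preprocessing step, the sketch below (plain Python; the window size, horizon, and frost threshold are hypothetical, not taken from the paper) turns an IoT temperature series into supervised learning samples and derives binary early-frost labels. Any model from the methodology could then be trained on these pairs.

```python
def make_windows(series, window, horizon):
    """Build (features, target) pairs: `window` past readings predict the
    reading `horizon` steps after the window ends."""
    X, y = [], []
    for i in range(len(series) - window - horizon + 1):
        X.append(series[i:i + window])
        y.append(series[i + window + horizon - 1])
    return X, y

def frost_labels(targets, threshold=0.0):
    """Binary label: 1 when the horizon temperature is at or below the threshold."""
    return [1 if t <= threshold else 0 for t in targets]
```

The prediction horizon is the quantity the abstract emphasizes: choosing `horizon` fixes how far ahead the Decision Support System warns the farmer of an incoming frost.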