ABSTRACT
The presence of aluminum (Al³⁺) and fluoride (F⁻) ions in the environment can be harmful to ecosystems and human health, highlighting the need for accurate and efficient monitoring. In this paper, an innovative approach is presented that leverages the power of machine learning to enhance the accuracy and efficiency of fluorescence-based detection for sequential quantitative analysis of aluminum (Al³⁺) and fluoride (F⁻) ions in aqueous solutions. The proposed method involves the synthesis of sulfur-functionalized carbon dots (C-dots) as fluorescence probes, with fluorescence enhancement upon interaction with Al³⁺ ions, achieving a detection limit of 4.2 nmol/L. Subsequently, in the presence of F⁻ ions, fluorescence is quenched, with a detection limit of 47.6 nmol/L. The fingerprints of fluorescence images are extracted using a cross-platform computer vision library in Python, followed by data preprocessing. Subsequently, the fingerprint data are subjected to cluster analysis using the K-means model from machine learning, and the average Silhouette Coefficient indicates excellent model performance. Finally, a regression analysis based on the principal component analysis method is employed to achieve more precise quantitative analysis of aluminum and fluoride ions. The results demonstrate that the developed model excels in terms of accuracy and sensitivity. This groundbreaking model not only showcases exceptional performance but also addresses the urgent need for effective environmental monitoring and risk assessment, making it a valuable tool for safeguarding our ecosystems and public health.
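The abstract above outlines a pipeline of image-fingerprint extraction, K-means clustering scored by the mean silhouette coefficient, and PCA-based regression. The Python sketch below illustrates that general flow with OpenCV and scikit-learn; it is not the authors' code, and the images, Al³⁺ concentrations, and histogram fingerprint are hypothetical placeholders.

```python
# Minimal sketch (not the authors' code): per-channel histogram fingerprints,
# K-means clustering with the mean silhouette coefficient, then
# principal-component regression against ion concentration.
# Images and Al3+ levels below are synthetic placeholders.
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.metrics import silhouette_score

def fingerprint(img, bins=32):
    """Concatenated, normalized per-channel histograms as a simple image fingerprint."""
    hists = [cv2.calcHist([img], [c], None, [bins], [0, 256]).ravel() for c in range(3)]
    h = np.concatenate(hists)
    return h / h.sum()

rng = np.random.default_rng(0)
images = [(rng.random((64, 64, 3)) * 255).astype(np.uint8) for _ in range(60)]  # stand-ins for fluorescence photos
conc = np.linspace(0.0, 500.0, 60)                                              # hypothetical Al3+ levels (nmol/L)
X = np.array([fingerprint(img) for img in images])

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("mean silhouette coefficient:", round(silhouette_score(X, km.labels_), 3))

pcs = PCA(n_components=5).fit_transform(X)           # principal-component regression
reg = LinearRegression().fit(pcs, conc)
print("training R^2:", round(reg.score(pcs, conc), 3))
```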
Subjects
Aluminum, Environmental Monitoring, Fluorides, Machine Learning, Aluminum/analysis, Fluorides/analysis, Environmental Monitoring/methods, Water Pollutants, Chemical/analysis, Fluorescence
ABSTRACT
Reliable traffic flow data is not only crucial for traffic management and planning, but also the foundation for many intelligent applications. However, missing traffic flow data is a common problem, so we propose an imputation model for missing traffic flow data that overcomes the randomness and instability of traffic flow. First, k-means clustering is used to group road segments whose traffic flow belongs to the same pattern, so that the spatial characteristics of the roads are fully utilized. Then, LSTM networks optimized with an attention mechanism are used as base learners to extract the temporal dependence of the traffic flow. Finally, the AdaBoost algorithm integrates all the LSTM-attention networks into a reinforced learner to impute the missing data. To validate the effectiveness of the proposed model, we use the PeMS dataset, impute data with missing rates from 10% to 60% under three missing modes, and compare against multiple baseline models; the results confirm that our proposed model improves the stability and accuracy of imputing missing traffic flow data across different scenarios.
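As a rough illustration of the first phase described above (grouping road segments with similar flow patterns before per-group LSTM-attention/AdaBoost imputation), the sketch below applies k-means to synthetic daily flow profiles with scikit-learn; the segment counts, cluster number, and data are arbitrary assumptions, and the deep-learning stages are only noted in comments.

```python
# Sketch of phase 1 only: group road segments whose daily flow profiles follow
# the same pattern with k-means, so each group can later be imputed by its own
# LSTM-attention base learner (not shown here). Data below are synthetic.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_segments, n_intervals = 120, 288                    # 288 five-minute intervals per day
profiles = rng.random((n_segments, n_intervals)).cumsum(axis=1)  # synthetic daily flow profiles

X = StandardScaler().fit_transform(profiles)          # scale so magnitude differences do not dominate
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)

for k in range(4):
    members = np.where(km.labels_ == k)[0]
    print(f"pattern group {k}: {len(members)} road segments")
    # Each group would then be passed to an attention-LSTM + AdaBoost imputer.
```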
ABSTRACT
Background: Integrating technology with rural development is essential for addressing the unique challenges faced by aging populations in rural areas. China's national rural revitalization strategy emphasizes the importance of developing characteristic towns that focus on health and wellness demands, particularly for older adults. However, there is a gap in the literature regarding the systematic use of technology to support the health and daily living needs of this demographic. Objective: This study aims to bridge this gap by proposing a wireless sensor network (WSN)-based system that integrates health monitoring and smart home assistance, specifically tailored for older adult residents of sports and wellness towns. Methods: The system's design involves collecting and analyzing historical activity data and basic physiological parameters from older adult residents' homes. The data are cleaned, combined, and stored to prepare them for behavior analysis, which is vital for controlling smart home equipment. Preliminary health assessments are conducted using the collected physiological data. A hybrid network leveraging Sub-G and Wi-Fi technologies is optimized for collecting and uploading data, based on a comparative analysis of wireless communication options. Significance: This study significantly contributes to the advancement of sports and wellness towns by promoting healthy aging through cutting-edge technological solutions.
Subjects
Sports, Wireless Technology, Humans, Aged, China, Male, Female, Rural Population, Monitoring, Physiologic/instrumentation, Aged, 80 and over, Middle Aged, Health Promotion/methods, Home Care Services
ABSTRACT
Wireless sensor networks (WSN) have found more and more applications in remote control and monitoring systems. Energy management in the network is crucial because all nodes in the WSN are energy constrained. Therefore, the design and implementation of WSN protocols that reduce energy depletion in the network is still an open scientific problem. In this paper, we propose a new clustering protocol that combines DEC (deterministic energy-efficient clustering) protocol with K-means clustering, called DEC-KM (deterministic energy-efficient clustering protocol with K-means). DEC is a very energy-efficient clustering protocol that outperforms its predecessors, such as LEACH and SEP. K-means ensures more effective clustering and shorter data transmission distances within the network. The shorter distances improve the network's lifetime and stability and reduce power consumption. Additional heuristic rules in DEC-KM ensure improved cluster head selection, taking into account node energy level and position and minimising the risk of premature cluster head exhaustion. The simulation results for the DEC-KM protocol using MATLAB show that cluster heads have shorter distances to nodes in cluster areas than for the original DEC protocol. The proposed protocol ensures reduced energy consumption, outperforms the standard DEC, and extends the stability period and lifetime of the network.
ABSTRACT
The stratum corneum (SC) plays the most important role in the absorption of topical and transdermal drugs. In this study, we developed a multi-layered SC model using coarse-grained molecular dynamics (CGMD) simulations of ceramides, cholesterol, and fatty acids in equimolar proportions, starting from two different initial configurations. In the first approach, all ceramide molecules were initially in the hairpin conformation, and the membrane bilayers were pre-formed. In the second approach, ceramide molecules were introduced in either the hairpin or splayed conformation, with the lipid molecules randomly oriented at the start of the simulation. The aim was to evaluate the effects of lipid chain length on the structural and dynamic properties of the SC. By incorporating ceramides and fatty acids of different chain lengths, we simulated the SC membrane in healthy and diseased states. We calculated key structural properties, including the thickness, normalized lipid area, lipid tail order parameters, and spatial ordering of the lipids, from each system. The results showed that the systems with higher ordering and structural integrity contained an equimolar ratio of ceramides (chain length of 24 carbon atoms), fatty acids with chain lengths of ≥ 20 carbon atoms, and cholesterol. In these systems, strong apolar interactions between the long acyl chains of the ceramides and fatty acids restricted the mobility of the lipid molecules, thereby maintaining a compact lipid headgroup region and high order in the lipid tail region. The simulations also revealed distinct flip-flop mechanisms for cholesterol and fatty acids within the multi-layered membrane. Cholesterol mostly diffused through the tail-tail interface region of the membrane and could flip-flop within the same bilayer. In contrast, fatty acids flip-flopped between adjacent leaflets of two bilayers, with their tails crossing the thinner headgroup region of the membrane. To conclude, our SC model provides mechanistic insights into lipid mobility and is flexible in its design and in the composition of different lipids, enabling studies of varying skin conditions.
ABSTRACT
To address the problems of low entity recognition accuracy, low user satisfaction, and weak interactivity in constructing knowledge graphs for the digital display of museum cultural relics, this article studied the application of supergroup algorithms and knowledge graph construction in museum digital display platforms. The K-means algorithm within the supergroup algorithm was used to survey visitors to Museum A and analyze the behavior of 180 selected visitors, so as to improve the display effect and audience satisfaction. Various knowledge graph technologies were used to construct a knowledge graph of museum cultural relics. Knowledge resources across the museum were associated and integrated, and through the collection and processing of museum cultural relic data, cultural relic ontology construction and relationship extraction were achieved, providing viewers with richer and more in-depth display content. The experiments showed that the visitor satisfaction rate based on the K-means algorithm was above 92.68 %, and the average visitor satisfaction rate over 10 experiments was 94.25 %. The accuracy, recall, and F1 values of the museum cultural relics knowledge graph studied in this article were 90.12 %, 84.69 %, and 82.23 %, respectively, much higher than those of other types of knowledge graphs. Applying these advanced technologies to the digital display platform of museums not only improves the visitor experience but also promotes the digitalization of museums, contributing to cultural dissemination and development.
ABSTRACT
Identifying the grain distribution and grain boundaries of nanoparticles is important for predicting their properties. Experimental methods for identifying the crystallographic distribution, such as precession electron diffraction, are limited by their probe size. In this study, we developed an unsupervised learning method by applying a Gabor filter to HAADF-STEM images at the atomic level for image segmentation and automatic counting of grains in polycrystalline nanoparticles. The methodology comprises a Gabor filter for feature extraction, non-negative matrix factorization for dimension reduction, and K-means clustering. We set the threshold distance and angle between the clusters required for the number of clusters to converge so as to automatically determine the optimal number of grains. This approach can shed new light on the nature of polycrystalline nanoparticles and their structure-property relationships.
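The abstract above describes a three-stage unsupervised pipeline: a Gabor filter bank for per-pixel features, non-negative matrix factorization (NMF) for dimension reduction, and K-means for grain labels. The sketch below shows that general pipeline with scikit-image and scikit-learn on a synthetic image; the filter-bank parameters and cluster count are illustrative assumptions, and the study's convergence criterion for choosing the number of grains is not implemented.

```python
# Illustrative sketch of the segmentation pipeline: Gabor filter bank ->
# NMF dimension reduction -> K-means pixel clustering into grains.
# The image here is synthetic; a real HAADF-STEM image would be loaded instead.
import numpy as np
from skimage.filters import gabor
from sklearn.decomposition import NMF
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
image = rng.random((128, 128))                        # placeholder for a HAADF-STEM image

# Per-pixel Gabor magnitude responses over several orientations and frequencies.
features = []
for theta in np.linspace(0, np.pi, 6, endpoint=False):
    for freq in (0.1, 0.2, 0.3):
        real, imag = gabor(image, frequency=freq, theta=theta)
        features.append(np.hypot(real, imag).ravel())  # magnitude is non-negative
X = np.stack(features, axis=1)                         # shape: (n_pixels, n_filters)

# Non-negative matrix factorization for dimension reduction, then K-means.
W = NMF(n_components=5, init="nndsvda", max_iter=500, random_state=0).fit_transform(X)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(W)
grain_map = labels.reshape(image.shape)                # per-pixel grain assignment
print("pixels per grain label:", np.bincount(labels))
```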
ABSTRACT
Polymorphic transformation is important in the chemical industry, in particular for explosive molecular crystals. However, owing to the challenges that rare-event methods and collective variables pose for simulation, the transformation mechanism of molecular crystals with complex structures remains poorly understood at the molecular level. In this work, with constructed order parameters (OPs) and the K-means clustering algorithm, the potential of mean force (PMF) along the minimum free-energy path connecting β-HMX and δ-HMX was calculated by the finite-temperature string method in collective variables (SMCV); the free-energy profile and nucleation kinetics were obtained by Markovian milestoning with Voronoi tessellations, and the temperature effect on nucleation was also clarified. The transformation barriers were affected by finite-size effects. The configuration with the lower potential barrier in the PMF corresponded to the critical nucleus. The time and free-energy barrier of the polymorphic transformation decreased as the temperature increased, which was explained by the pre-exponential factor and the nucleation rate. Thus, the polymorphic transformation of HMX can be controlled by temperature, consistent with previous experimental results. Finally, the dependence of impact sensitivity on the HMX polymorph was discussed. This work provides an effective way to reveal the polymorphic transformation of molecular crystals with cyclic molecular structures, and further to prepare the desired explosive by controlling the transformation temperature.
ABSTRACT
Purpose: To explore the characteristics of clinical phenotypes of ARDS based on machine learning. Methods: This is a machine learning study. We screened cases of acute respiratory distress syndrome (ARDS) in the eICU database and collected basic case information and clinical data on Day 1, Day 3, and Day 7 after the diagnosis of ARDS. Using the Calinski-Harabasz criterion, the Gap Statistic, and the Silhouette Coefficient, we determined the optimal number of clusters k. We applied K-means cluster analysis to the data collected within the first 24 h to derive clinical phenotypes. We compared survival with that of cases under the Berlin standard classification, and also examined phenotypic conversion within the first 24 h, on Day 3, and on Day 7 after the diagnosis of ARDS. Results: We collected 5054 cases and derived three clinical phenotypes using K-means cluster analysis. Phenotype-I is characterized by fewer abnormal laboratory indicators; higher oxygen partial pressure, oxygenation index, APACHE IV score, and systolic and diastolic blood pressure; and lower respiratory rate and heart rate. Phenotype-II is characterized by elevated white blood cell count, blood glucose, creatinine, temperature, heart rate, and respiratory rate. Phenotype-III is characterized by elevated age, partial pressure of carbon dioxide, bicarbonate, GCS score, and albumin. ICU length of stay and in-hospital mortality differed significantly between the three phenotypes (P < 0.05), with phenotype I having the lowest in-hospital mortality (10 %) and phenotype II the highest (31.8 %). Survival of ARDS patients classified by phenotype was compared with that of patients classified according to the Berlin criteria; the differences in survival between phenotypes were statistically significant (P < 0.05) under the phenotypic classification. Conclusions: The clinical classification of ARDS based on K-means cluster analysis helps to further identify ARDS patients with different characteristics. Compared with the Berlin standard, the new clinical classification of ARDS provides a clearer picture of the survival of different types of patients, which helps to predict patient prognosis.
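A brief sketch of the cluster-number selection step described above: candidate k values are scored with the Calinski-Harabasz criterion and the mean silhouette coefficient using scikit-learn. The feature matrix is synthetic, standing in for the day-1 ARDS variables, and the gap statistic used in the study is omitted because it is not available in scikit-learn.

```python
# Sketch: score candidate k values with Calinski-Harabasz and silhouette,
# then use the chosen k to derive phenotypes with K-means.
# The feature matrix below is a synthetic stand-in for day-1 clinical variables.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import calinski_harabasz_score, silhouette_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = StandardScaler().fit_transform(rng.normal(size=(500, 12)))  # 500 cases x 12 variables

for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(k,
          round(calinski_harabasz_score(X, labels), 1),
          round(silhouette_score(X, labels), 3))
# The k with consistently strong scores would then be used to derive phenotypes.
```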
ABSTRACT
When analyzing data combined from multiple sources (e.g., hospitals, studies), the heterogeneity across different sources must be accounted for. In this paper, we consider high-dimensional linear regression models for integrative data analysis. We propose a new adaptive clustering penalty (ACP) method to simultaneously select variables and cluster source-specific regression coefficients with sub-homogeneity. We show that the estimator based on the ACP method enjoys a strong oracle property under certain regularity conditions. We also develop an efficient algorithm based on the alternating direction method of multipliers (ADMM) for parameter estimation. We conduct simulation studies to compare the performance of the proposed method to three existing methods (a fused LASSO with adjacent fusion, a pairwise fused LASSO, and a multi-directional shrinkage penalty method). Finally, we apply the proposed method to the multi-center Childhood Adenotonsillectomy Trial to identify sub-homogeneity in the treatment effects across different study sites.
ABSTRACT
The number of diabetic patients, who have vulnerable feet that can easily be affected by different adversities, is increasing rapidly. Since there is no footwear sizing system available for diabetic patients, manufacturers produce diabetic footwear of different sizes and fittings based on other available footwear sizing systems, which may result in inappropriate fitting. To get footwear with a proper fit, diabetic patients may opt for customized or bespoke footwear based on their foot conditions, which is very costly. This study explores the foot complications of diabetic patients and categorizes their feet to create a new sizing system, using foot measurements from 102 male diabetic patients based on three dimensions of the human foot: foot length, ball girth, and instep circumference. K-means clustering is used to categorize the data into three broad groups, namely small, medium, and large, for footwear sizing. The developed footwear sizing system uses a sizing interval of 8 mm and a fitting interval of 6 mm. This study suggests a total of 11 sizes along with 24 different fittings for footwear manufacturers producing diabetic footwear. The newly developed footwear sizing system achieves 79.41 % coverage, with 10, 10, and 4 fittings in the small, medium, and large groups, respectively. The proposed footwear sizing system can help footwear manufacturers understand the proper size and fit of diabetic patients' feet so that they can make appropriate footwear for diabetic patients economically.
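To make the grouping step concrete, the sketch below clusters the three foot dimensions named above into three broad size groups with scikit-learn K-means. The measurement values are synthetic placeholders, not the study's anthropometric data, and the size/fitting intervals are only mentioned in a comment.

```python
# Sketch: K-means on foot length, ball girth, and instep circumference,
# split into three broad size groups. Measurements below are synthetic (mm).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
feet = np.column_stack([
    rng.normal(255, 12, 102),   # foot length
    rng.normal(245, 10, 102),   # ball girth
    rng.normal(250, 11, 102),   # instep circumference
])

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(feet)
order = np.argsort(km.cluster_centers_[:, 0])          # order groups by mean foot length
names = {order[0]: "small", order[1]: "medium", order[2]: "large"}
for k in range(3):
    print(names[k], "group:", int(np.sum(km.labels_ == k)),
          "feet, centre", km.cluster_centers_[k].round(1))
# Size (8 mm) and fitting (6 mm) intervals would then be laid over each group.
```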
ABSTRACT
Maintaining the quality and integrity of frozen goods throughout the supply chain necessitates a robust and efficient cold chain logistics network. This research proposes a machine learning-based method for optimizing such networks, resulting in significant cost reduction and improved resource utilization. The method employs a three-phase approach. First, K-means clustering groups sellers based on their geographical proximity, simplifying the problem and enabling more accurate demand prediction. In the second phase, Gaussian Process Regression models predict future sales volume for each seller cluster, leveraging historical sales data. Finally, the Capuchin Search Algorithm simultaneously optimizes distributor location and resource allocation for each cluster, minimizing both transportation and holding costs. This multi-objective approach achieved a 34.76% reduction in costs and a 15.6% reduction in resource wastage compared to the existing system. The novel method offers a valuable tool for frozen goods distribution networks; its advantages over the compared methods include considering multiple optimization goals, a focus on demand prediction, potential for reduced complexity, and attention to managerial insights.
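The first two phases described above can be sketched with scikit-learn: K-means on seller coordinates, then one Gaussian Process Regression demand model per cluster. The coordinates, sales histories, kernel choice, and cluster count below are illustrative assumptions; the Capuchin Search Algorithm optimization stage is not shown.

```python
# Sketch of phases 1-2: cluster sellers by location, then fit a Gaussian
# Process Regression demand model per cluster. Data below are synthetic.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
coords = rng.uniform(0, 100, size=(200, 2))             # seller locations (x, y)
weeks = np.arange(52, dtype=float).reshape(-1, 1)        # weekly time index
sales = 100 + 10 * np.sin(weeks / 8.0) + rng.normal(0, 5, size=(52, 200))  # per-seller weekly sales

clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(coords)

for c in range(5):
    demand = sales[:, clusters == c].sum(axis=1)          # aggregate weekly demand of the cluster
    gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    gpr.fit(weeks, demand)
    next_week = gpr.predict(np.array([[52.0]]))[0]
    print(f"cluster {c}: forecast demand for week 53 = {next_week:.0f}")
# The forecasts would then feed the distributor-location/allocation optimizer.
```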
ABSTRACT
OBJECTIVES: This study assessed the relationship between occupational noise exposure and the incidence of workplace fatal injury (FI) and nonfatal injury (NFI) in the United States from 2006 to 2020. It also examined whether distinct occupational and industrial clusters based on noise exposure characteristics demonstrated varying risks for FI and NFI. METHODS: An ecological study design was utilized, employing data from the U.S. Bureau of Labor Statistics for FI and NFI and demographic data, the U.S. Census Bureau for occupation/industry classification code lists, and the U.S./Canada Occupational Noise Job Exposure Matrix for noise measurements. We examined four noise metrics as predictors of FI and NFI rates: mean Time-Weighted Average (TWA), maximum TWA, standard deviation of TWA, and percentage of work shifts exceeding 85 or 90 dBA for 619 occupation-years and 591 industry-years. K-means clustering was used to identify clusters of noise exposure characteristics. Mixed-effects negative binomial regression examined the relationship between the noise characteristics and FI/NFI rates separately for occupation and industry. RESULTS: Among occupations, we found significant associations between increased FI rates and higher mean TWA (IRR: 1.06, 95% CI: 1.01-1.12) and maximum TWA (IRR: 1.10, 95% CI: 1.07-1.14), as well as TWA exceedance (IRR: 1.04, 95% CI: 1.01-1.07). Increased rates of NFI were found to be significantly associated with maximum TWA (IRR: 1.06, 95% CI: 1.04-1.09) and TWA exceedance (IRR: 1.03, 95% CI: 1.01-1.05). In addition, occupations with both higher exposure variability (IRR with FI rate: 1.49, 95% CI: 1.23-1.80; IRR with NFI rate: 1.40, 95% CI: 1.14-1.73) and higher level of sustained exposure (IRR with FI rate: 1.27, 95% CI: 1.12-1.44; IRR with NFI rate: 1.21, 95% CI: 1.05-1.39) were associated with higher rates of FI and NFI compared to occupations with low noise exposure. Among industries, significant associations between increased NFI rates and higher mean TWA (IRR: 1.05, 95% CI: 1.02-1.08) and maximum TWA (IRR: 1.06, 95% CI: 1.04-1.08) were observed. Unlike the occupation-specific analysis, industries with higher exposure variability and higher sustained exposures did not display significantly higher FI/NFI rates compared to industries with low exposure. CONCLUSIONS: The results suggest that occupational noise exposure may be an independent risk factor for workplace FIs/NFIs, particularly for workplaces with highly variable noise exposures. The study highlights the importance of comprehensive occupational noise assessments.
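A highly simplified sketch of the analytic idea above: k-means on the four occupation-level noise metrics, followed by a negative binomial regression of injury counts on cluster membership with statsmodels. All values are synthetic placeholders (not BLS or JEM data), and the study's mixed-effects structure (random effects) is omitted here.

```python
# Simplified sketch: k-means on noise-exposure metrics, then a negative
# binomial regression of injury counts on cluster assignment (random effects
# from the study's mixed-effects model are omitted). Synthetic data throughout.
import numpy as np
import statsmodels.api as sm
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 619                                                # occupation-years, as in the study design
noise = np.column_stack([rng.normal(82, 5, n),         # mean TWA (dBA)
                         rng.normal(95, 6, n),         # maximum TWA
                         rng.gamma(2.0, 1.5, n),       # SD of TWA
                         rng.uniform(0, 40, n)])       # % shifts exceeding 85/90 dBA

clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(noise))

hours = rng.uniform(1e5, 1e6, n)                       # hypothetical worker-hours (exposure offset)
counts = rng.poisson(3, n)                             # hypothetical injury counts
X = sm.add_constant(np.column_stack([clusters == 1, clusters == 2]).astype(float))
model = sm.GLM(counts, X, family=sm.families.NegativeBinomial(), exposure=hours).fit()
print(np.exp(model.params[1:]))                        # incidence rate ratios vs. the reference cluster
```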
ABSTRACT
PURPOSE: To compare the performance of MRI-based Gaussian mixture model (GMM), K-means clustering, and Otsu unsupervised algorithms in predicting sarcopenia, and to develop a combined model by integrating clinical indicators. METHODS: Retrospective analysis was conducted on clinical and lumbar MRI data from 118 patients diagnosed with sarcopenia and 222 patients without sarcopenia. All patients were randomly divided into training and validation groups in a 7:3 ratio. Regions of interest (ROI), specifically the paravertebral muscles at the L3/4 intervertebral disc level, were delineated on axial T2-weighted images (T2WI). The Gaussian mixture model (GMM), K-means clustering, and Otsu's thresholding algorithms were employed to automatically segment muscle and adipose tissues at both the cohort and case levels. Subsequently, the mean signal intensity, volumes, and percentages of these tissues were calculated and compared. Logistic regression analyses were conducted to construct models and identify independent predictors of sarcopenia. A combined model was developed by combining the optimal magnetic resonance imaging (MRI) model and clinical predictors. The performance of the constructed models was assessed using receiver operating characteristic (ROC) curve analysis. RESULTS: Age, BMI, and serum albumin were identified as independent clinical predictors of sarcopenia. The cohort-level GMM demonstrated the best predictive performance in both the training group (AUC = 0.840) and the validation group (AUC = 0.800), whereas the predictive performance of the other models was lower than that of the clinical model in both the training and validation groups. After combining the cohort-level GMM with the independent clinical predictors, the AUC of the training and validation groups increased to 0.871 and 0.867, respectively. CONCLUSION: The cohort-level GMM shows potential for predicting sarcopenia, and incorporating independent clinical predictors further improves its performance.
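The three unsupervised segmenters compared above can be illustrated on voxel intensities with scikit-learn and scikit-image: a two-component GMM, two-cluster K-means, and Otsu thresholding, each yielding a fat percentage for the ROI. The intensity distributions below are synthetic stand-ins for T2WI voxels, not patient data.

```python
# Sketch: split ROI voxels into muscle vs. adipose tissue by signal intensity
# with GMM, K-means, and Otsu thresholding, then report tissue percentages.
# The intensity values below are synthetic placeholders for T2WI voxels.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.cluster import KMeans
from skimage.filters import threshold_otsu

rng = np.random.default_rng(0)
roi = np.concatenate([rng.normal(300, 40, 4000),       # muscle-like intensities
                      rng.normal(700, 60, 1000)])      # fat-like intensities
x = roi.reshape(-1, 1)

gmm_labels = GaussianMixture(n_components=2, random_state=0).fit_predict(x)
km_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(x)
otsu_fat = roi > threshold_otsu(roi)

def fat_fraction(labels, values):
    """Treat the higher-intensity component as adipose tissue."""
    hi = int(values[labels == 1].mean() > values[labels == 0].mean())
    return np.mean(labels == hi)

print("fat % (GMM):    ", round(100 * fat_fraction(gmm_labels, roi), 1))
print("fat % (K-means):", round(100 * fat_fraction(km_labels, roi), 1))
print("fat % (Otsu):   ", round(100 * otsu_fat.mean(), 1))
# These per-case percentages would then enter the logistic regression models.
```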
ABSTRACT
Over the last several years, the COVID-19 epidemic has spread across the globe. People have become used to the new normal of working from home, chatting online, and keeping themselves clean to stop the spread of COVID-19. Because of this, many public spaces make an effort to ensure that their visitors wear proper face masks and maintain a safe distance from one another. It is impossible for monitoring workers to ensure that everyone is wearing a face mask; automated solutions are a far better option for face mask identification and monitoring, helping to control public conduct and reduce the COVID-19 epidemic. The motivation for developing this technology was the need to identify individuals who uncover their faces. Most previously published research has focused on various methodologies. This study applied K-medoids, K-means, and Fuzzy K-Means (FKM) clustering for image pre-processing to improve face quality and reduce noise in the data. In addition, this study investigates various machine learning models, convolutional neural networks (CNNs) with pre-trained models (DenseNet201, VGG-16, and VGG-19), and a Support Vector Machine (SVM), for the detection of face masks. In the experiments, the proposed method of K-medoids with the pre-trained DenseNet201 model achieved the best results for face mask identification, with 97.7 % accuracy. Our results indicate that image segmentation may improve identification accuracy. More importantly, the face mask identification tool is more beneficial when it can identify a face mask from a side-on view.
ABSTRACT
Introduction: The aim of our work was to comprehensively determine the sensitization profile of patients hypersensitive to fungal allergenic components in the Ukrainian population, identifying features of their co-sensitization to allergens of other groups and establishing potential relationships between causative allergens and their ability to provoke this hypersensitivity. Methods: A set of programs was developed using the Python and R programming languages, implementing the K-means++ clustering method. Bayesian networks were constructed based on the created clusters, allowing assessment of the probabilistic interplay of allergen molecules in the sensitization process of patients. Results and discussion: It was found that patients sensitive to fungi are polysensitized, with 84.77% of them having unique allergological profiles comprising from 2 to several dozen allergens from different groups. The immune response to Alt a 1 may act as the primary trigger for sensitization to other allergens and may contribute to a high probability of developing sensitivity to grasses (primarily to Phl p 2), ragweed extract, and the Amb a 1 pectate lyase, as well as to the pectate lyase Cry j 1 and the cat allergen Fel d 1. Individuals polysensitized to molecular components of fungi were often sensitive to cross-reactive molecules such as the lipocalins Fel d 4 and Can f 6 as well. Sensitivity to Ambrosia extract dominated in the development of sensitization to ragweed pollen, indicating the importance of different allergenic components of this plant's pollen. This hypothesis, along with the assumption that Phl p 2 may be the main trigger for sensitivity to grasses in patients with Alternaria allergy, requires further clinical investigation.
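Since the methods above mention Python and the K-means++ clustering of sensitization profiles, the sketch below clusters a binary patient-by-allergen matrix with scikit-learn's k-means++ initialization. The molecule list is taken from the abstract, but the matrix itself is synthetic, and the Bayesian-network step is only noted in a comment.

```python
# Sketch: K-means++ on a binary patient x allergen sensitization matrix.
# The data are illustrative placeholders; the Bayesian-network stage is not shown.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
molecules = ["Alt a 1", "Phl p 2", "Amb a 1", "Cry j 1", "Fel d 1", "Fel d 4", "Can f 6"]
profiles = (rng.random((300, len(molecules))) > 0.7).astype(int)   # 1 = sensitized

km = KMeans(n_clusters=4, init="k-means++", n_init=10, random_state=0).fit(profiles)
for c in range(4):
    freq = profiles[km.labels_ == c].mean(axis=0)
    top = [molecules[i] for i in np.argsort(freq)[::-1][:3]]
    print(f"cluster {c}: dominant molecules {top}")
# Bayesian networks would then be built within each cluster to estimate
# probabilistic links between the molecules.
```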
ABSTRACT
BACKGROUND: Submaximal muscle strength grading is clinically significant for monitoring the progress of rehabilitation. Grading the strength of core back muscles, in particular, is challenging with conventional manual muscle testing (MMT) methods. These muscles are crucial to recovery from back pain, spinal cord injury, stroke, and other related diseases. The subjective nature of MMT adds further ambiguity when grading fine progressions in submaximal strength involving the 4-, 4 and 4+ grades. The electromyogram (EMG) has been widely used as a quantitative measure to provide insight into the progress of muscle strength. However, several EMG features have been reported in previous studies, and selecting features suited to the problem has remained a challenge. METHOD: Principal Component Analysis (PCA) biplot visualization is employed in this study to select EMG features that highlight fine changes in muscle strength spanning the submaximal range. Features that offer maximum loading in the principal component subspace, as observed in the PCA biplot, are selected for grading submaximal strength. The performance of the proposed feature set is compared with conventional Principal Component (PC) scores. Submaximal muscle strength grades of 4-, 4, 4+ or 5 are assigned using K-means and Gaussian mixture model clustering methods. The clustering performance of the two feature selection methods is compared using the silhouette score metric. RESULTS: The proposed feature set from biplot visualization, involving Root Mean Square (RMS) EMG and Waveform Length, combined with the Gaussian Mixture Model (GMM) clustering method, was observed to offer maximum accuracy. Muscle-wise mean Silhouette Index (SI) scores (p < 0.05) of .81, .74 (Longissimus thoracis left, right) and .73, .77 (Iliocostalis lumborum left, right) were observed. Similarly, grade-wise mean SI scores (p < 0.05) of .80, .76, .73, and .981 were observed for grades 4-, 4, 4+, and 5, respectively. CONCLUSION: The study addresses the problem of selecting the minimum number of features that offer maximum variability for EMG-assisted submaximal muscle strength grading. The proposed method emphasizes using biplot visualization to overcome the difficulty of choosing appropriate EMG features of the core back muscles that significantly distinguish between grades 4-, 4, 4+ and 5.
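The feature-selection idea above can be sketched programmatically: rank EMG features by the magnitude of their loading vectors on the first two principal components (as one would read off a biplot), keep the strongest, then cluster with a GMM and score with the silhouette index. The feature names and data below are synthetic assumptions, not the study's EMG recordings.

```python
# Sketch: biplot-style feature selection (largest PCA loading vectors),
# then GMM clustering of submaximal grades scored by the silhouette index.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.mixture import GaussianMixture
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
feature_names = ["RMS", "WL", "MAV", "ZC", "SSC", "VAR"]       # common EMG features (illustrative)
X = StandardScaler().fit_transform(rng.normal(size=(200, len(feature_names))))

pca = PCA(n_components=2).fit(X)
loadings = pca.components_.T                                    # (n_features, 2), as drawn in a biplot
strength = np.linalg.norm(loadings, axis=1)
keep = np.argsort(strength)[::-1][:2]                           # two features with the largest loading vectors
print("selected features:", [feature_names[i] for i in keep])

labels = GaussianMixture(n_components=4, random_state=0).fit_predict(X[:, keep])
print("silhouette:", round(silhouette_score(X[:, keep], labels), 3))
# Clusters would then be mapped to submaximal grades 4-, 4, 4+ and 5.
```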
ABSTRACT
Objectives: Pupil dilation is controlled by both the sympathetic and parasympathetic branches of the nervous system. We hypothesized that the dynamics of pupil size changes under cognitive load with additional false feedback can predict individual behavior, along with heart rate variability (HRV) patterns and eye movements reflecting specific adaptability to cognitive stress. To test this, we employed an unsupervised machine learning approach to recognize groups of individuals distinguished by pupil dilation dynamics and then compared their autonomic nervous system (ANS) responses along with time, performance, and self-esteem indicators in cognitive tasks. Methods: A cohort of 70 participants was exposed to tasks with increasing cognitive load and deception, with measurements of pupillary dynamics, HRV, eye movements, and cognitive performance and behavioral data. Using the machine learning k-means clustering algorithm, the pupillometry data were segmented into distinct responses to increasing cognitive load and deceit. Further analysis compared the clusters, focusing on how physiological (HRV, eye movement) and cognitive metrics (time, mistakes, self-esteem) varied across two clusters with different pupillary response patterns, investigating the relationship between pupil dynamics and autonomic reactions. Results: Cluster analysis of the pupillometry data identified two distinct groups with statistically significant differences in physiological and behavioral responses. Cluster 0 showed elevated HRV alongside larger initial pupil sizes. Cluster 1 participants presented lower HRV but demonstrated increased and pronounced oculomotor activity. Behavioral differences included more reported errors and lower self-esteem in Cluster 0, and faster response times with more precise reactions to deception in Cluster 1. Lifestyle variations such as smoking habits and differences in Epworth Sleepiness Scale scores were significant between the clusters. Conclusion: The differentiation in pupillary dynamics and related metrics between the clusters underlines the complex interplay between autonomic regulation, cognitive load, and behavioral responses to cognitive load and deceptive feedback. These findings underscore the potential of pupillometry combined with machine learning for identifying individual differences in stress resilience and cognitive performance. Our research on pupillary dynamics and ANS patterns can lead to the development of remote diagnostic tools for real-time cognitive stress monitoring and performance optimization, applicable in clinical, educational, and occupational settings.
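A minimal sketch of the unsupervised step above: k-means on per-participant pupillary-dynamics features, followed by a nonparametric comparison of an autonomic measure (HRV) between the two clusters. The feature definitions and all values below are hypothetical placeholders, not the study's recordings.

```python
# Sketch: k-means on pupillary-dynamics features, then compare HRV between
# the two resulting clusters. All values below are synthetic placeholders.
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical per-participant features: baseline pupil size, mean dilation
# under load, dilation under false feedback.
pupil = np.column_stack([rng.normal(4.0, 0.5, 70),
                         rng.normal(0.6, 0.2, 70),
                         rng.normal(0.8, 0.3, 70)])
hrv_rmssd = rng.normal(40, 12, 70)                      # hypothetical HRV values (ms)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(pupil))

stat, p = mannwhitneyu(hrv_rmssd[labels == 0], hrv_rmssd[labels == 1])
print("cluster sizes:", np.bincount(labels), "HRV difference p =", round(p, 3))
```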
ABSTRACT
In the process of penicillin fermentation, there is a strong nonlinear relationship between the input eigenvector and multiple output vectors, which makes it difficult for the prediction accuracy of existing models to meet the requirements of chemical production. Therefore, a local selective ensemble learning multi-objective soft sensing modeling strategy is proposed in this study. First, a localization method based on transfer entropy and k-means is proposed to reconstruct the sample set. Then, based on the reconstructed local samples, local soft sensing models are established by the multi-objective support vector regression method, and selective ensembling of sub-models and adaptive calculation of prediction weights are realized. At the same time, to reduce the adverse effects of improper selection of model parameters, the sparrow search algorithm is used to tune the model parameters. Finally, the proposed modeling strategy is evaluated in simulation. The results show that, compared with other methods, the proposed local selective ensemble learning multi-objective soft sensing modeling strategy has better prediction performance.
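A much-simplified sketch of the local-modelling idea above: partition the sample set with k-means, fit one multi-output SVR per local region, and predict with the model of the nearest centre. The transfer-entropy step, selective ensembling with adaptive weights, and sparrow-search tuning are omitted; the fermentation inputs/outputs below are synthetic stand-ins.

```python
# Simplified sketch: k-means localization of the sample set, one multi-output
# SVR per local region, prediction via the nearest cluster centre.
# Inputs/outputs below are synthetic stand-ins for fermentation data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.multioutput import MultiOutputRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.random((400, 8))                                  # process input variables
Y = np.column_stack([X[:, 0] + X[:, 1] ** 2,              # two correlated quality outputs
                     np.sin(X[:, 2]) + X[:, 3]]) + rng.normal(0, 0.05, (400, 2))

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
local_models = {c: MultiOutputRegressor(SVR(C=10.0, gamma="scale"))
                     .fit(X[km.labels_ == c], Y[km.labels_ == c])
                for c in range(3)}

x_new = rng.random((1, 8))
c_new = km.predict(x_new)[0]                              # nearest local region
print("prediction:", local_models[c_new].predict(x_new))
# In the paper, several such local sub-models are selectively ensembled with
# adaptive weights, and hyperparameters are tuned by the sparrow search algorithm.
```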
ABSTRACT
Water ecological restoration zoning, which articulates the goals of restoring water ecosystems upwards and guides the spatial layout of restoration projects downwards, is key to achieving systematic restoration of water resource elements. Water ecological restoration zoning faces many challenges, including disparate hierarchical systems, incomplete indicators, and vague boundaries. Taking Guangxi Hechi, a karst ecologically fragile region, as a case, we developed a multidimensional zoning framework based on "watershed natural unit-dominant ecological function-ecological stress risk". The first-level zoning employed river systems and geomorphic types as indicators and delineated the sub-watershed unit as the boundary. The second-level zoning adopted a "top-down" division method to clarify the goals of water ecological restoration based on watershed natural geography and selected three indicators (water conservation, biodiversity, and landscape cultural services) for evaluation. We used the K-means clustering method to identify dominant ecological functions in spatial units, with the sub-watershed unit demarcating second-level zoning boundaries. The third-level zoning was the specific implementation unit for ecological restoration projects. We used three indicators (soil erosion, flooding risk, and human interference) to characterize the risk to water ecosystems from external stress and defined the third-level zoning. We delineated 11 primary water ecological zones, four secondary zones, and three tertiary zones. By synthesizing the tertiary zoning results, accounting for the spatial differentiation of watershed natural geography, dominant ecological functions, and ecological stress risks, and combining sub-watershed and township administrative units to determine zoning boundaries, water ecological restoration zoning was comprehensively classified into five categories and 32 sub-ecological zones. Corresponding ecological restoration strategies were proposed based on the zoning and classification.
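The second-level zoning step above can be sketched as K-means on the three normalized function indicators per sub-watershed unit, with each cluster labelled by the function it scores highest on. The unit count, scaling choice, and indicator values below are illustrative assumptions, not the Hechi dataset.

```python
# Sketch of second-level zoning: K-means on normalized indicators
# (water conservation, biodiversity, landscape cultural services) per
# sub-watershed unit, labelling each cluster by its dominant function.
# The indicator values below are synthetic placeholders.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)
indicators = ["water conservation", "biodiversity", "landscape cultural services"]
units = rng.random((150, 3))                         # 150 sub-watershed units x 3 indicators
X = MinMaxScaler().fit_transform(units)

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
for c in range(3):
    dominant = indicators[int(np.argmax(km.cluster_centers_[c]))]
    print(f"cluster {c}: {int(np.sum(km.labels_ == c))} units, dominant function = {dominant}")
```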