ABSTRACT
Timely detection of epileptic seizures can significantly reduce accidental injuries in epilepsy patients and offer a novel intervention approach to improve their quality of life. Research on seizure detection based on deep learning models has achieved great success. However, challenging issues remain, such as the high computational complexity of the models and overfitting caused by the scarcity of ictal electroencephalogram (EEG) signals available for training. We therefore propose a novel end-to-end automatic seizure detection model named CNN-Informer, which leverages the capability of a Convolutional Neural Network (CNN) to extract local features from multi-channel EEGs and the low computational complexity and memory usage of the Informer to capture long-range dependencies. Given the various artifacts present in long-term EEGs, we filter the raw EEGs using the Discrete Wavelet Transform (DWT) before feeding them into the proposed CNN-Informer model for feature extraction and classification. Post-processing operations are further employed to obtain the final detection results. Our method is extensively evaluated on the CHB-MIT and SH-SDU datasets with both segment-based and event-based criteria. The experimental outcomes demonstrate the superiority of the proposed CNN-Informer model and its strong generalization ability across the two EEG datasets. In addition, the lightweight architecture of CNN-Informer makes it suitable for real-time implementation.
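As a rough illustration of the DWT filtering step described above, the sketch below denoises a synthetic signal with the PyWavelets library. The wavelet choice (`db4`), decomposition level, and universal soft-threshold rule are assumptions for illustration, not the paper's exact settings.

```python
import numpy as np
import pywt

def dwt_denoise(signal, wavelet="db4", level=4):
    """Decompose, soft-threshold the detail coefficients, reconstruct."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Noise level estimated from the finest detail band (median rule).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(len(signal)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft")
                            for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

# Toy stand-in for an EEG channel: a slow oscillation plus broadband noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1024)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.4 * rng.standard_normal(t.size)
denoised = dwt_denoise(noisy)
```

In practice each EEG channel would be filtered this way before being stacked into the multi-channel input of the CNN front end.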
ABSTRACT
Accurate runoff forecasting is of great significance for water resource allocation, flood control, and disaster reduction. However, the inherent strong randomness of runoff sequences makes this task highly challenging. To address this challenge, this study proposes a new runoff forecast model, SMGformer, which integrates Seasonal and Trend decomposition using Loess (STL), the Informer encoder layer, a Bidirectional Gated Recurrent Unit (BiGRU), and Multi-head self-attention (MHSA). First, in response to the nonlinear and non-stationary characteristics of the runoff sequence, STL decomposition is used to extract the sequence's trend, period, and residual terms, and a multi-feature set based on 'sequence-sequence' pairs is constructed as the model input, providing a foundation for subsequent layers to capture the evolution of runoff. The key features of the input set are then captured by the Informer encoder layer. Next, the BiGRU layer learns the temporal information of these features. To further refine the BiGRU output, the MHSA mechanism is introduced to emphasize important information. Finally, accurate runoff forecasting is achieved by passing the MHSA output through a fully connected layer. To verify the effectiveness of the proposed model, monthly runoff data from two hydrological stations in China are selected, and eight models are constructed for comparison. The results show that, compared with the Informer model, the first-step MAE of the SMGformer model decreases by 42.2% and 36.6% at the two stations, respectively; RMSE decreases by 37.9% and 43.6%, respectively; and NSE increases from 0.936 to 0.975 and from 0.487 to 0.837, respectively. In addition, the KGE values of the SMGformer model at the third step are 0.960 and 0.805, both remaining above 0.8. The model can therefore accurately capture key information in monthly runoff sequences and extend the effective forecast period.
ABSTRACT
To address the low prediction accuracy of single machine learning models for daily average ozone concentration, an ozone concentration prediction method based on a fused Stacking-like algorithm (FSOP) was proposed, which combines the statistical method of ordinary least squares (OLS) with machine learning algorithms and improves prediction accuracy by integrating the advantages of different learners. Based on the principle of the Stacking algorithm, observations of the daily maximum 8 h average ozone concentration and meteorological reanalysis data for Hangzhou from January 2017 to December 2022 were used. First, individual ozone concentration prediction models based on the light gradient boosting machine (LightGBM) algorithm, the long short-term memory (LSTM) model, and the Informer model were established. Then, the predictions of these models were used as meta-features, and the OLS algorithm was used to obtain a prediction expression for ozone concentration fitted to the observations. The results showed that the model combining the Stacking-like algorithm improved prediction accuracy and fitted observed ozone concentrations better. Specifically, R2, RMSE, and MAE were 0.84, 19.65 µg·m-3, and 15.50 µg·m-3, respectively, improving prediction accuracy by approximately 8% compared with a single machine learning model.
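The stacking idea above can be sketched in scikit-learn: base learners produce predictions that become meta-features for an OLS combiner. Gradient boosting and k-nearest neighbors stand in for the paper's LightGBM/LSTM/Informer base models, and the synthetic data and in-sample meta-fit (real stacking would use out-of-fold predictions) are simplifying assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(2)
X = rng.standard_normal((400, 5))
y = 3 * X[:, 0] + np.sin(X[:, 1]) + 0.1 * rng.standard_normal(400)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Stand-in base learners (the paper uses LightGBM, LSTM, and Informer).
bases = [GradientBoostingRegressor(random_state=0), KNeighborsRegressor()]
meta_tr = np.column_stack([m.fit(X_tr, y_tr).predict(X_tr) for m in bases])
meta_te = np.column_stack([m.predict(X_te) for m in bases])

# OLS meta-learner: a linear prediction expression over base outputs.
ols = LinearRegression().fit(meta_tr, y_tr)
stacked = ols.predict(meta_te)
```

The fitted OLS coefficients play the role of the "prediction expression" weighting each base model's contribution.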
ABSTRACT
Accurately detecting voltage faults is essential for ensuring the safe and stable operation of energy storage power station systems. To swiftly identify operational faults in energy storage batteries, this study introduces a voltage anomaly prediction method based on a Bayesian-optimized (BO) Informer neural network. First, the temporal characteristics of the actual data collected by the battery management system (BMS) are considered to establish a long-term operational dataset for the energy storage station, and the Pearson correlation coefficient (PCC) is used to quantify the correlations among these data. Second, an Informer neural network with BO-tuned hyperparameters is used to build the voltage prediction model, whose performance is assessed against several state-of-the-art models. With a 1 min sampling interval and one-step prediction, trained on 70% of the available data, the proposed model reduces the root mean square error (RMSE), mean square error (MSE), and mean absolute error (MAE) of the predictions to 9.18 mV, 0.0831 mV, and 6.708 mV, respectively. Furthermore, the influence of different sampling intervals and training set ratios on prediction results is analyzed using actual grid operation data, leading to a dataset that balances efficiency and accuracy. The proposed BO-based method achieves more precise voltage abnormality prediction than existing methods.
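The PCC screening step can be sketched as follows: compute each candidate feature's Pearson correlation with the target voltage and keep only those above a threshold. The variable names, threshold, and synthetic BMS-like data are illustrative assumptions.

```python
import numpy as np

def pearson_screen(X, y, threshold=0.3):
    """Return indices (and r values) of columns with |PCC| >= threshold."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    r = (Xc * yc[:, None]).sum(axis=0) / (
        np.sqrt((Xc ** 2).sum(axis=0)) * np.sqrt((yc ** 2).sum())
    )
    return np.flatnonzero(np.abs(r) >= threshold), r

rng = np.random.default_rng(3)
n = 500
voltage = rng.standard_normal(n)                      # prediction target
current = 0.8 * voltage + 0.2 * rng.standard_normal(n)  # correlated feature
temp = rng.standard_normal(n)                          # unrelated feature
keep, r = pearson_screen(np.column_stack([current, temp]), voltage)
```

Only the retained columns would then enter the Informer's input window.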
ABSTRACT
Forecasting stock movements is a crucial research endeavor in finance, aiding traders in making informed decisions for enhanced profitability. Utilizing actual stock prices and correlating factors from the Wind platform presents a potent yet intricate forecasting approach. While previous methodologies have explored this avenue, they encounter challenges including limited comprehension of interrelations among stock data elements, diminished accuracy in extensive series, and struggles with anomaly points. This paper introduces an advanced hybrid model for stock price prediction, termed PMANet. PMANet is founded on Multi-scale Timing Feature Attention, amalgamating Multi-scale Timing Feature Convolution and Ant Particle Swarm Optimization. The model elevates the understanding of dependencies and interrelations within stock data sequences through Probabilistic Positional Attention. Furthermore, the Encoder incorporates Multi-scale Timing Feature Convolution, augmenting the model's capacity to discern multi-scale and significant features while adeptly managing lengthy input sequences. Additionally, the model's proficiency in addressing anomaly points in stock sequences is enhanced by substituting the optimizer with Ant Particle Swarm Optimization. To ascertain the model's efficacy and applicability, we conducted an empirical study using stocks from four pivotal industries in China. The experimental outcomes demonstrate that PMANet is both feasible and versatile in its predictive capability, yielding forecasts closely aligned with actual values, thereby fulfilling application requirements more effectively.
ABSTRACT
Accurately predicting blood glucose levels is crucial in diabetes management to mitigate patients' risk of complications. However, blood glucose values exhibit instability, and existing prediction methods often struggle to capture their volatile nature, leading to inaccurate trend forecasts. To address these challenges, we propose a novel blood glucose level prediction model based on the Informer architecture: BGformer. Our model introduces a feature enhancement module and a microscale overlapping concerns mechanism. The feature enhancement module integrates periodic and trend feature extractors, enhancing the model's ability to capture relevant information from the data. By extending the feature extraction capacity of time series data, it provides richer feature representations for analysis. Meanwhile, the microscale overlapping concerns mechanism adopts a window-based strategy, computing attention scores only within specific windows. This approach reduces computational complexity while enhancing the model's capacity to capture local temporal dependencies. Furthermore, we introduce a dual attention enhancement module to augment the model's expressive capability. Through prediction experiments on blood glucose values from sixteen diabetic patients, our model outperformed eight benchmark models in terms of both MAE and RMSE metrics for future 60-minute and 90-minute predictions. Our proposed scheme significantly improves the model's dependency-capturing ability, resulting in more accurate blood glucose level predictions.
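The window-based attention idea described above (scores computed only inside a local window) can be sketched in numpy. This is a generic banded-attention illustration under assumed shapes, not BGformer's exact "microscale overlapping concerns" formulation.

```python
import numpy as np

def windowed_attention(q, k, v, window=8):
    """Attention where each position attends only within a local window."""
    n, d = q.shape
    scores = q @ k.T / np.sqrt(d)
    idx = np.arange(n)
    # Mask out pairs farther apart than half the window width.
    mask = np.abs(idx[:, None] - idx[None, :]) <= window // 2
    scores = np.where(mask, scores, -np.inf)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v, w

rng = np.random.default_rng(4)
q = rng.standard_normal((32, 16))
k = rng.standard_normal((32, 16))
v = rng.standard_normal((32, 16))
out, weights = windowed_attention(q, k, v)
```

Restricting scores to the band reduces the work per row from O(n) to O(window), which is the source of the complexity saving the abstract mentions.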
Subject(s)
Blood Glucose, Humans, Blood Glucose/analysis, Algorithms, Diabetes Mellitus/blood
ABSTRACT
Accurate prediction of air quality is crucial for assessing the state of the atmospheric environment, especially considering the nonlinearity, volatility, and abrupt changes in air quality data. This paper introduces an air quality index (AQI) prediction model based on the Dung Beetle Optimizer (DBO) aimed at overcoming limitations in traditional prediction models, such as inadequate access to data features, challenges in parameter setting, and accuracy constraints. The proposed model optimizes the parameters of Variational Mode Decomposition (VMD) and integrates the Informer adaptive sequential prediction model with the Convolutional Neural Network-Long Short Term Memory (CNN-LSTM). Initially, the correlation coefficient method is utilized to identify key impact features from multivariate weather and meteorological data. Subsequently, the penalty factor and the number of variational modes in the VMD are optimized using DBO, and the optimized parameters are used to build a variationally constrained model that decomposes the air quality sequence. The decomposed data are categorized by approximate entropy: high-frequency components are fed into the Informer model, while low-frequency components are fed into the CNN-LSTM model. The predicted values of the subsystems are then combined and reconstructed to obtain the AQI prediction results. Evaluation using actual monitoring data from Beijing demonstrates that the proposed coupled air quality index prediction model is superior to other parameter-optimization models: the Mean Absolute Error (MAE) decreases by 13.59%, the Root-Mean-Square Error (RMSE) decreases by 7.04%, and the R-square (R2) increases by 1.39%. The model surpasses 11 other models with lower error rates and enhanced prediction accuracy. Compared with mainstream swarm intelligence optimization algorithms, DBO demonstrates higher computational efficiency and yields results closer to actual values. The proposed coupled model provides a new method for air quality index prediction.
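The approximate-entropy routing step above relies on ApEn as a complexity measure: regular (low-frequency) components score low and noisy (high-frequency) components score high. Below is a minimal textbook ApEn sketch in numpy; the parameters m=2 and r=0.2·std are conventional defaults, and the test signals are synthetic stand-ins.

```python
import numpy as np

def approx_entropy(x, m=2, r_factor=0.2):
    """Approximate entropy ApEn(m, r) of a 1-D series."""
    x = np.asarray(x, float)
    r = r_factor * x.std()

    def phi(m):
        n = len(x) - m + 1
        emb = np.array([x[i:i + m] for i in range(n)])
        # Chebyshev distance between every pair of templates.
        d = np.abs(emb[:, None, :] - emb[None, :, :]).max(axis=2)
        c = (d <= r).sum(axis=1) / n
        return np.log(c).mean()

    return phi(m) - phi(m + 1)

rng = np.random.default_rng(5)
t = np.arange(600)
regular = np.sin(2 * np.pi * t / 30)   # regular mode: low complexity
noisy = rng.standard_normal(600)       # noise-like mode: high complexity
```

Components with ApEn above a chosen cutoff would be routed to the Informer, the rest to the CNN-LSTM.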
ABSTRACT
Extensive research has been diligently conducted on wind energy technologies in response to pressing global environmental challenges and the growing demand for energy. Accurate wind speed predictions are crucial for the effective integration of large wind power systems. This study presents a novel and hybrid framework called ICEEMDAN-Informer-GWO, which combines three components to enhance the accuracy of wind speed predictions. The improved complete ensemble empirical mode decomposition with adaptive noise (ICEEMDAN) component improves the decomposition of wind speed data, the Informer model provides computationally efficient wind speed predictions, and the grey wolf optimisation (GWO) algorithm optimises the parameters of the Informer model to achieve superior performance. Three different sets of wind speed prediction (WSP) models and wind farm data from Block Island, Gulf Coast, and Garden City are used to thoroughly assess the proposed hybrid framework. This evaluation focusses on WSP for three specific time horizons: 5 minutes, 30 minutes, and 1 hour ahead. The results obtained from the three conducted experiments conclusively demonstrate that the proposed hybrid framework exhibits superior performance, leading to statistically significant improvements across all three time horizons.
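The GWO component above searches hyperparameter space by moving a wolf pack toward its three best members (alpha, beta, delta). The sketch below is a standard GWO loop minimizing a toy sphere function; in the paper's setting `f` would instead evaluate the Informer's validation error for a candidate parameter vector. The pack size, iteration count, and objective are assumptions.

```python
import numpy as np

def gwo(f, dim, bounds, n_wolves=20, iters=100, seed=0):
    """Minimize f over a box [lo, hi]^dim with Grey Wolf Optimisation."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, (n_wolves, dim))
    for it in range(iters):
        fit = np.apply_along_axis(f, 1, X)
        order = fit.argsort()
        alpha, beta, delta = X[order[0]], X[order[1]], X[order[2]]
        a = 2 - 2 * it / iters          # linearly decreasing coefficient
        new = np.zeros_like(X)
        for leader in (alpha, beta, delta):
            r1, r2 = rng.random(X.shape), rng.random(X.shape)
            A = 2 * a * r1 - a
            C = 2 * r2
            D = np.abs(C * leader - X)   # distance to the leader
            new += (leader - A * D) / 3  # average pull of three leaders
        X = np.clip(new, lo, hi)
    fit = np.apply_along_axis(f, 1, X)
    best = X[fit.argmin()]
    return best, f(best)

# Toy objective: sphere function with its minimum at the origin.
best, val = gwo(lambda x: (x ** 2).sum(), dim=4, bounds=(-5, 5))
```

As `a` decays from 2 to 0 the update shifts from exploration to exploitation around the leaders.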
Subject(s)
Algorithms, Theoretical Models, Wind
ABSTRACT
The increase in air pollutants and their adverse effects on human health and the environment have raised significant concerns, implying the necessity of predicting air pollutant levels. Numerous studies have aimed to provide new models for more accurate prediction of air pollutants such as CO2, O3, and PM2.5. Most models in the literature are deep learning models, with Transformers being the best for time series prediction. However, there is still a need to improve the accuracy of Transformer-based air pollution prediction, along with a significant demand for predicting a broader spectrum of air pollutants. To meet this challenge, this paper proposes a new hybrid deep learning model based on the Informer, called "Gelato", for multivariate air pollution prediction. Gelato takes a leap forward by considering several air pollutants simultaneously. Besides introducing changes to the Informer structure as the base model, Gelato utilizes Particle Swarm Optimization for hyperparameter optimization, and XGBoost is used at the final stage to achieve minimal errors. Gelato's performance is assessed on a dataset containing eight important air pollutants: CO2, O3, NO, NO2, SO2, PM10, NH3, and PM2.5. Comparing the results of Gelato with other models shows its superiority, proving it a high-confidence model for multivariate air pollution prediction.
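The PSO hyperparameter search mentioned above can be sketched with a plain numpy particle swarm minimizing a toy quadratic surface; in Gelato's setting the objective would instead be the Informer's validation loss over candidate hyperparameters. The swarm settings and objective here are illustrative assumptions.

```python
import numpy as np

def pso(f, dim, bounds, n=20, iters=80, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f over a box with a standard particle swarm."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n, dim))
    v = np.zeros((n, dim))
    pbest = x.copy()
    pval = np.apply_along_axis(f, 1, x)
    g = pbest[pval.argmin()].copy()     # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        # Inertia + pull toward personal best + pull toward global best.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(f, 1, x)
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()].copy()
    return g, f(g)

# Toy stand-in for a hyperparameter surface (minimum at (3, -1)).
best, val = pso(lambda p: (p[0] - 3) ** 2 + (p[1] + 1) ** 2,
                dim=2, bounds=(-10, 10))
```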
Subject(s)
Air Pollutants, Air Pollution, Deep Learning, Air Pollutants/analysis, Environmental Monitoring/methods, Particulate Matter, Theoretical Models
ABSTRACT
Time series anomaly detection is very important to ensure the security of industrial control systems (ICSs). Many algorithms have performed well in anomaly detection. However, the performance of most of these algorithms decreases sharply with the increase in feature dimension. This paper proposes an anomaly detection scheme based on Graph Attention Network (GAT) and Informer. GAT learns sequential characteristics effectively, and Informer performs excellently in long time series prediction. In addition, long-time forecasting loss and short-time forecasting loss are used to detect multivariate time series anomalies. Short-time forecasting is used to predict the next time value, and long-time forecasting is employed to assist the short-time prediction. We conduct a large number of experiments on industrial control system datasets SWaT and WADI. Compared with most advanced methods, we achieve competitive results, especially on higher-dimensional datasets. Moreover, the proposed method can accurately locate anomalies and realize interpretability.
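The combination of long-time and short-time forecasting losses described above can be sketched as a weighted error score with a percentile threshold. The weighting, threshold, and synthetic forecasts below are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def anomaly_flags(actual, short_pred, long_pred, alpha=0.7, q=99.0):
    """Weighted short/long forecast error; flag points above the q-th percentile."""
    err = alpha * np.abs(actual - short_pred) \
        + (1 - alpha) * np.abs(actual - long_pred)
    thresh = np.percentile(err, q)
    return err > thresh, err

rng = np.random.default_rng(6)
base = np.sin(np.linspace(0, 20, 1000))
actual = base.copy()
actual[500] += 3.0                                   # injected anomaly
short_pred = base + 0.05 * rng.standard_normal(1000)  # accurate 1-step forecast
long_pred = base + 0.10 * rng.standard_normal(1000)   # noisier long-horizon forecast
flags, err = anomaly_flags(actual, short_pred, long_pred)
```

Points the forecaster cannot explain (large residuals under both horizons) are the ones flagged.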
ABSTRACT
In crop cultivation, particularly in controlled-environment agriculture, light quality is one of the most critical factors affecting crop growth and harvest. Many scholars have studied the effects of light quality on strawberry traits, but they have used relatively simple light components and considered only a small number of light qualities and traits in each experiment, so the results were neither complete nor objective. To comprehensively investigate the effects of light qualities from 350 nm to 1000 nm on strawberry traits and better predict the future growth of strawberries under different light qualities, we proposed a new approach: we introduced Spearman's rank correlation coefficient to handle complex light quality variations and multiple traits, preprocessed the cultivation data with the CEEMDAN method, and performed prediction with the Informer network. We took 500 strawberry plants as samples, cultivated them under 72 groups of dynamically changing light qualities, recorded the growth changes, and formed training and testing sets. Finally, we discussed the correlations between light quality and changes in plant traits, which were consistent with current studies, and the proposed prediction model achieved the best performance on the prediction task for nine plant traits compared with the baseline models, demonstrating the validity of the proposed method and model.
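Spearman's rank correlation, used above to relate light-dose variables to trait changes, captures monotone but nonlinear responses that Pearson's r would understate. A minimal sketch with `scipy` on synthetic data (the variable names and the quadratic response are assumptions):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(7)
# Hypothetical light-dose variable over 72 cultivation groups.
red_660nm = rng.uniform(0, 100, 72)
# Monotone but nonlinear trait response plus measurement noise.
leaf_area = red_660nm ** 2 + 5 * rng.standard_normal(72)

rho, p = spearmanr(red_660nm, leaf_area)
```

Because Spearman works on ranks, `rho` stays near 1 despite the squared response.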
ABSTRACT
Deep learning methods exhibit significant advantages in mapping highly nonlinear relationships at acceptable computational speed and have been widely used to predict water quality. However, differing model selection and construction methods result in differences in prediction accuracy and performance. Hence, a unified deep learning framework for water quality prediction was established in this paper, comprising a data processing module, a feature enhancement module, and a data prediction module. In the established model, the data processing module, based on the wavelet transform, decomposes complex nonlinear meteorology, hydrology, and water quality data into multiple frequency-domain signals to extract the data's cyclic and fluctuation characteristics. The feature enhancement module, based on the Informer encoder, enhances the feature encoding of time series data in different frequency domains to discover global time-dependent features of the variables. Finally, the data prediction module, based on a stacked bidirectional long short-term memory network (SBiLSTM), strengthens the local correlation of feature sequences and predicts the water quality. The framework was applied to the Lijiang River in Guilin, China. The maximum relative errors between predicted and observed values for dissolved oxygen (DO) and permanganate index (CODMn) were 12.4% and 20.7%, suggesting satisfactory prediction performance. Validation results showed that the established model was superior to all other models in prediction accuracy, with RMSE values of 0.329 and 0.121, MAE values of 0.217 and 0.057, and SMAPE values of 0.022 and 0.063 for DO and CODMn, respectively. Ablation tests confirmed the necessity and rationality of each module in the framework. The established method provides a unified deep learning framework for water quality prediction.
Subject(s)
Deep Learning, Water Quality, China, Hydrology, Meteorology, Oxygen
ABSTRACT
Accurate prediction of air pollution is essential for public health protection. Air quality, however, is difficult to predict due to its complex dynamics, and accurate forecasting remains a challenge. This study proposes a spatiotemporal Informer model, which uses a new spatiotemporal embedding and spatiotemporal attention to improve AQI forecast accuracy. In the first phase of the proposed forecast mechanism, the input data are transformed by the spatiotemporal embedding. Next, spatiotemporal attention is applied to extract spatiotemporal features from the embedded data, and the final forecast is obtained from the attention tensors. In the proposed forecast model, the input is 3-dimensional data consisting of air quality measurements (AQI, PM2.5, O3, SO2, NO2, CO) and geographic information, and the output is multi-positional, multi-temporal data giving the AQI forecast for all monitoring stations in the study area. The model was evaluated on air quality data from 34 monitoring stations in Beijing, China. Experiments showed that the proposed model provides highly accurate AQI forecasts: the average MAPE for 1 h to 20 h ahead forecasts was 11.61%, much smaller than that of other models. Moreover, the proposed model remained highly accurate and stable even at extreme points. These results demonstrate that the proposed spatiotemporal embedding and attention techniques sufficiently capture the spatiotemporal correlation characteristics of air quality data, and that the proposed spatiotemporal Informer can be successfully applied to air quality forecasting.
ABSTRACT
This paper proposes an Informer-based temperature prediction model to leverage data from an automatic weather station (AWS) and a local data assimilation and prediction system (LDAPS), where the Informer, a variant of the Transformer, was developed to better handle time series data. Recently, deep-learning-based temperature prediction models have been proposed with successful performance, such as convolutional neural network (CNN)-based models, bi-directional long short-term memory (BLSTM)-based models, and a combination of both, CNN-BLSTM. However, these models suffer from the lack of time-data integration during the training phase, which also leads to the persistence of the long-term dependency problem in LSTM models. These limitations culminate in performance deterioration when the prediction horizon is extended. To overcome these issues, the proposed model first incorporates time-periodic information into the learning process by generating time-periodic features and feeding them to the model. Second, it replaces the LSTM with an Informer to mitigate the long-term dependency problem. Third, a series of fusion operations between AWS and LDAPS data are executed to examine the effect of each dataset on temperature prediction performance. The performance of the proposed model is evaluated via objective measures, including the root-mean-square error (RMSE) and mean absolute error (MAE), over timeframes ranging from 6 to 336 h. The experiments showed that the proposed model reduced the average RMSE and MAE by 0.25 °C and 0.203 °C, respectively, compared with the CNN-BLSTM-based model.
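A common way to realize the time-periodic features described above is to encode each timestamp's cyclic components as sin/cos pairs, so midnight is adjacent to 23:00 and December adjacent to January. The exact encodings used in the paper are not specified here; the hour-of-day and day-of-year pairs below are a conventional assumption.

```python
import numpy as np
import pandas as pd

def periodic_features(index):
    """Encode hour-of-day and day-of-year as sin/cos pairs."""
    hour = index.hour.to_numpy()
    doy = index.dayofyear.to_numpy()
    return np.column_stack([
        np.sin(2 * np.pi * hour / 24), np.cos(2 * np.pi * hour / 24),
        np.sin(2 * np.pi * doy / 365), np.cos(2 * np.pi * doy / 365),
    ])

idx = pd.date_range("2024-01-01", periods=48, freq="h")
feats = periodic_features(idx)
```

These four columns would be concatenated with the AWS/LDAPS measurements before entering the Informer.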
ABSTRACT
Accurate equipment operation trend prediction plays an important role in ensuring safe equipment operation and reducing maintenance costs, so monitoring equipment vibration and predicting the time series of the vibration trend is an effective means of preventing failures. To reduce the error of equipment operation trend prediction, this paper proposes a method combining signal decomposition with an Informer prediction model. Because vibration signals contain high noise, making it difficult to obtain intrinsic characteristics from raw data directly, the original signal is first decomposed using the variational mode decomposition (VMD) algorithm optimized by the improved sparrow search algorithm (ISSA) to obtain intrinsic mode functions (IMFs) at different frequencies, and the fuzzy entropy of each is calculated. The improved complete ensemble empirical mode decomposition with adaptive noise (ICEEMDAN) is then used to further decompose the component with the largest fuzzy entropy into a series of intrinsic mode components. All subsequences are input into the Informer model, which exploits its advantages in processing long time series, and the results are reconstructed to obtain the final prediction. The experimental results indicate that the proposed method effectively improves the accuracy of equipment operation trend prediction compared with other models.
Subject(s)
Deep Learning, Vibration, Algorithms, Entropy, Equipment Failure
ABSTRACT
Passenger flow anomaly detection in urban rail transit networks (URTNs) is critical in managing surging demand and informing effective operations planning and controls in the network. Existing studies have primarily focused on identifying the source of anomalies at a single station by analysing the time-series characteristics of passenger flow. However, they ignored the high-dimensional and complex spatial features of passenger flow and the dynamic behaviours of passengers in URTNs during anomaly detection. This article proposes a novel anomaly detection methodology based on a deep learning framework consisting of a graph convolution network (GCN)-informer model and a Gaussian naive Bayes model. The GCN-informer model is used to capture the spatial and temporal features of inbound and outbound passenger flows, and it is trained on normal datasets. The Gaussian naive Bayes model is used to construct a binary classifier for anomaly detection, and its parameters are estimated by feeding the normal and abnormal test data into the trained GCN-informer model. Experiments are conducted on a real-world URTN passenger flow dataset from Beijing. The results show that the proposed framework has superior performance compared to existing anomaly detection algorithms in detecting network-level passenger flow anomalies. This article is part of the theme issue 'Artificial intelligence in failure analysis of transportation infrastructure and materials'.
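The second stage above, a Gaussian naive Bayes binary classifier over features derived from the trained forecaster, can be sketched with scikit-learn. Here the two-dimensional "forecast residual" features and their separations are synthetic assumptions standing in for the GCN-informer's reconstruction errors on normal and abnormal windows.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(8)
# Hypothetical residual features: small errors on normal windows,
# large errors on anomalous ones.
normal = rng.normal(0.0, 1.0, (300, 2))
abnormal = rng.normal(5.0, 1.0, (60, 2))
X = np.vstack([normal, abnormal])
y = np.r_[np.zeros(300), np.ones(60)]

clf = GaussianNB().fit(X, y)     # binary anomaly classifier
acc = clf.score(X, y)
```

At inference time, each new window's residual vector would be scored by `clf.predict` to flag network-level anomalies.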
ABSTRACT
The high-temperature compression characteristics of a Ti-55511 alloy are explored through two-stage high-temperature compression experiments with step-like strain rates. The evolution of dislocation substructures over the hot compression parameters is revealed by transmission electron microscopy (TEM). The experimental results suggest that dislocation annihilation through the rearrangement/interaction of dislocations intensifies with increasing forming temperature. Nevertheless, the generation/interlacing of dislocations is enhanced with increasing strain in the first forming stage, or with increasing strain rate in the first/second stages of the high-temperature compression process. Based on the test data, an Informer deep learning model is proposed for reconstructing the stress-strain behavior of the Ti-55511 alloy. The input series of the model are the compression parameters (temperature, strain, and strain rate), and the output series are the true stresses. The optimal input batch size and sequence length are 64 and 2, respectively. The predictions of the proposed Informer model agree more closely with the measured true stresses than those of the previously established physical mechanism model, demonstrating that the Informer deep learning model has outstanding forecasting capability for precisely reconstructing the high-temperature compression behavior of the Ti-55511 alloy.
ABSTRACT
The precision and reliability of electroencephalogram (EEG) data are essential for the effective functioning of a brain-computer interface (BCI). As the number of BCI acquisition channels increases, more EEG information can be gathered. However, too many channels reduce the practicability of the BCI system, raise the likelihood of poor-quality channels, and lead to information misinterpretation, posing challenges to the advancement of BCI systems. Determining the optimal configuration of BCI acquisition channels can minimize the number of channels utilized, but it is challenging to maintain the original operating system and accommodate individual variations in channel layout. To address these concerns, this study introduces the EEG-completion-informer (EC-informer), based on the Informer architecture known for its effectiveness on time-series problems. Given input from four BCI acquisition channels, the EC-informer can generate several virtual acquisition channels to extract additional EEG information for analysis. This approach allows direct inheritance of the original model, significantly reducing researchers' workload. Moreover, the EC-informer demonstrates strong performance in repairing damaged channels and identifying poor-quality channels. Tailored to BCI requirements, it demands only a small number of training samples, eliminating the need for extensive computing resources to train an efficient, lightweight model while preserving comprehensive information about target channels. The study also confirms that the proposed model can be transferred to other operators with minimal loss, exhibiting robust applicability. The EC-informer's features enable original BCI devices to adapt to a broader range of classification algorithms and relax the operational requirements of BCI devices, which could facilitate their adoption in daily life.
ABSTRACT
Accurate wind power prediction can increase the utilization rate of wind power generation and maintain the stability of the power system. At present, a large number of wind power prediction studies are based on the mean square error (MSE) loss function, which generates many errors when predicting original data with random fluctuation and non-stationarity. Therefore, a hybrid model for wind power prediction named IVMD-FE-Ad-Informer is proposed, based on an Informer with an adaptive loss function and combining improved variational mode decomposition (IVMD) and fuzzy entropy (FE). First, the original data are decomposed by IVMD into K subsequences with distinct frequency-domain characteristics. Second, the subsequences are reconstructed into new elements using FE. The adaptive and robust Ad-Informer model then predicts each new element, and the predicted values are superimposed to obtain the final wind power results. Finally, the model is analyzed and evaluated on two real datasets collected from wind farms in China and Spain. The results demonstrate that the proposed model is superior to other models in performance and accuracy on different datasets and can effectively meet the demands of actual wind power prediction.
ABSTRACT
Anemia is a critical complication in hemodialysis patients, but the response to erythropoiesis-stimulating agent (ESA) treatment varies from patient to patient and is not linear across time points. The aim of this study was to develop deep learning algorithms for individualized anemia management. We retrospectively collected 36,677 data points from 623 hemodialysis patients, including clinical data, laboratory values, hemoglobin levels, and previous ESA doses. To reduce the computational complexity associated with recurrent neural networks (RNNs) in processing time-series data, we developed neural networks based on multi-head self-attention mechanisms to build an efficient and effective hemoglobin prediction model. Our proposed model achieved more accurate hemoglobin prediction than the state-of-the-art RNN model, as shown by a smaller mean absolute error (MAE) of hemoglobin (0.451 vs. 0.593 g/dL, p = 0.014). In ESA (including darbepoetin and epoetin) dose recommendation, simulation with our model revealed a higher rate of achieved hemoglobin targets (physician prescription vs. model: 86.3% vs. 92.7%, p < 0.001), a lower rate of hemoglobin levels below 10 g/dL (13.7% vs. 7.3%, p < 0.001), and smaller changes in hemoglobin levels (0.6 g/dL vs. 0.4 g/dL, p < 0.001) in all patients. Our model holds great potential for individualized anemia management as a computerized clinical decision support system for hemodialysis patients. Further external validation with other datasets and prospective clinical utility studies are warranted.