Results 1 - 20 of 1,310
1.
Article in English | MEDLINE | ID: mdl-39220673

ABSTRACT

Glaucoma is a major cause of blindness and vision impairment worldwide, and visual field (VF) tests are essential for monitoring conversion to glaucoma. While previous studies have primarily focused on using VF data at a single time point for glaucoma prediction, there has been limited exploration of longitudinal trajectories. Additionally, many deep learning techniques treat time-to-glaucoma prediction as a binary classification problem (glaucoma Yes/No), which misclassifies some censored subjects into the nonglaucoma category and reduces statistical power. To tackle these challenges, we propose and implement several deep-learning approaches that naturally incorporate temporal and spatial information from longitudinal VF data to predict time-to-glaucoma. When evaluated on the Ocular Hypertension Treatment Study (OHTS) dataset, our proposed convolutional neural network (CNN)-long short-term memory (LSTM) model emerged as the top performer among all those examined. The implementation code can be found online (https://github.com/rivenzhou/VF_prediction).
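For a concrete picture of the architecture described above, the following is a minimal PyTorch sketch of a CNN-LSTM that encodes each visit's VF map with a small CNN and aggregates visits with an LSTM to output a risk score. It is not the authors' released implementation (see the linked GitHub repository); the VF grid size, layer widths, and single-score head are assumptions.

```python
# Minimal CNN-LSTM sketch for longitudinal visual-field (VF) sequences.
# Shapes, layer sizes, and the single risk-score output are illustrative
# assumptions, not the authors' released implementation.
import torch
import torch.nn as nn

class CNNLSTMRisk(nn.Module):
    def __init__(self, hidden_size=64):
        super().__init__()
        # Per-visit spatial encoder for a 1-channel VF map (assumed 8x9 grid).
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),        # -> (batch, 32)
        )
        # Temporal model over the sequence of visit embeddings.
        self.lstm = nn.LSTM(32, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)             # risk score for time-to-glaucoma

    def forward(self, vf_seq):                            # vf_seq: (batch, visits, 1, 8, 9)
        b, t = vf_seq.shape[:2]
        feats = self.cnn(vf_seq.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])                      # use the last visit's state

# Example: 4 eyes, 5 visits each.
risk = CNNLSTMRisk()(torch.randn(4, 5, 1, 8, 9))
print(risk.shape)  # torch.Size([4, 1])
```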

2.
J Hazard Mater ; 479: 135709, 2024 Aug 30.
Article in English | MEDLINE | ID: mdl-39236536

ABSTRACT

Ultrafiltration (UF) is widely employed for harmful algae rejection, but severe membrane fouling hampers its long-term operation. Herein, calcium peroxide (CaO2) and ferrate (Fe(VI)) were innovatively coupled for low-damage removal of algal contaminants and fouling control in the UF process. As a result, the terminal J/J0 increased from 0.13 to 0.66, with Rr and Rir decreased by 96.74 % and 48.47 %, respectively. Cake layer filtration was significantly postponed, and pore blocking was reduced. The ζ-potential of the algal foulants was weakened from -34.4 mV to -18.7 mV, and 86.15 % of algal cells were removed, with flocs of around 300 µm generated. Cell integrity was better preserved than with the Fe(VI) treatment, and Fe(IV)/Fe(V) was verified to be the dominant reactive species. The membrane fouling alleviation mechanisms could be attributed to the reduction of fouling loads and changes in the interfacial free energies. A membrane fouling prediction model was built based on a long short-term memory deep learning network, which predicted that the filtration volume at J/J0 = 0.2 increased from 288 to 1400 mL. The results provide a new route for controlling algal membrane fouling from the perspective of promoting the generation of Fe(IV)/Fe(V) intermediates.

3.
Article in English | MEDLINE | ID: mdl-39235388

ABSTRACT

Machine learning (ML) has been used to predict lower extremity joint torques from joint angles and surface electromyography (sEMG) signals. This study trained three bidirectional Long Short-Term Memory (LSTM) models, using joint angle, sEMG, and combined modalities as inputs, on a publicly accessible dataset to estimate joint torques during normal walking, and assessed both the performance of each input modality on its own and the accuracy of joint-specific torque prediction. The performance of each model was evaluated using the normalized root mean square error (nRMSE) and the Pearson correlation coefficient (PCC). The median PCC and nRMSE scores of the models were highly consistent, and most mean nRMSE values across joints were below 10%. The ankle joint torque was the most successfully predicted output, with a mean nRMSE of less than 9% for all models. The knee joint torque prediction had the highest mean nRMSE at 11%, followed by the hip joint torque prediction at 10%. The PCC values of each model were high and remarkably comparable for the ankle (∼0.98), knee (∼0.92), and hip (∼0.95) joints. The models achieved closely comparable accuracy with single and combined input modalities, indicating that either input alone may be sufficient for predicting the torque of a particular joint, obviating the need for the other in certain contexts.
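To make the setup above concrete, here is a minimal PyTorch sketch of a bidirectional LSTM that maps windows of joint-angle and sEMG channels to joint torques, together with the nRMSE and PCC metrics; the channel count, window length, and layer sizes are assumptions rather than the study's configuration.

```python
# Sketch of a bidirectional LSTM mapping windows of joint-angle + sEMG channels
# to joint torques, with the nRMSE / PCC metrics used in the abstract.
# Channel counts, window length, and layer sizes are assumptions.
import numpy as np
import torch
import torch.nn as nn

class BiLSTMTorque(nn.Module):
    def __init__(self, n_channels=10, hidden=64, n_joints=3):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, num_layers=2,
                            batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, n_joints)   # hip, knee, ankle torques

    def forward(self, x):                            # x: (batch, time, channels)
        out, _ = self.lstm(x)
        return self.fc(out)                          # torque at every time step

def nrmse(y_true, y_pred):
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return rmse / (y_true.max() - y_true.min())      # normalized by the true range

def pcc(y_true, y_pred):
    return np.corrcoef(y_true, y_pred)[0, 1]

model = BiLSTMTorque()
pred = model(torch.randn(8, 200, 10))                # 8 gait windows, 200 samples each
print(pred.shape)                                     # torch.Size([8, 200, 3])

true = np.random.randn(200)
est = true + 0.1 * np.random.randn(200)
print(nrmse(true, est), pcc(true, est))
```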

4.
Environ Res ; : 119911, 2024 Sep 02.
Article in English | MEDLINE | ID: mdl-39233036

ABSTRACT

Establishing a highly reliable and accurate water quality prediction model is critical for effective water environment management. However, enhancing the performance of these predictive models continues to pose challenges, especially in plain watersheds with complex hydraulic conditions. This study aims to evaluate the efficacy of three traditional machine learning models versus three deep learning models in predicting the water quality of plain river networks and to develop a novel hybrid deep learning model to further improve prediction accuracy. The performance of the proposed model was assessed under various input feature sets and data temporal frequencies. The findings indicated that deep learning models outperformed traditional machine learning models in handling complex time series data. Long Short-Term Memory (LSTM) models improved R2 by approximately 29% and lowered the Root Mean Square Error (RMSE) by about 48.6% on average. The hybrid Bayes-LSTM-GRU (Gated Recurrent Unit) model significantly enhanced prediction accuracy, reducing the average RMSE by 18.1% compared to the single LSTM model. Models trained on feature-selected datasets exhibited superior performance compared to those trained on the original datasets. Higher temporal frequencies of input data generally provide more useful information; however, in datasets with numerous abrupt changes, increasing the temporal interval proves beneficial. Overall, the proposed hybrid deep learning model offers an efficient and cost-effective method for improving water quality prediction, showing significant potential for application in managing water quality in plain watersheds.
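A minimal sketch of the LSTM-to-GRU core of a hybrid model of this kind is shown below; the Bayesian hyperparameter search implied by "Bayes-", the actual input features, and all layer sizes are assumptions rather than the study's configuration.

```python
# Rough sketch of the LSTM->GRU stack at the core of a hybrid model like
# Bayes-LSTM-GRU; input features (6 water-quality variables) and sizes are
# assumptions, and the Bayesian hyperparameter search is not reproduced.
import torch
import torch.nn as nn

class LSTMGRU(nn.Module):
    def __init__(self, n_features=6, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.gru = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)              # next-step water-quality value

    def forward(self, x):                            # x: (batch, window, features)
        h, _ = self.lstm(x)
        h, _ = self.gru(h)
        return self.out(h[:, -1])

pred = LSTMGRU()(torch.randn(16, 24, 6))             # 16 samples, 24-step windows
print(pred.shape)                                    # torch.Size([16, 1])
```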

5.
J Integr Bioinform ; 2024 Sep 02.
Article in English | MEDLINE | ID: mdl-39238451

ABSTRACT

Drug therapy remains the primary approach to treating tumours. Variability among cancer patients, including variations in genomic profiles, often results in divergent therapeutic responses to the same anti-cancer drug within a cohort. Hence, predicting the drug response by analysing the genomic profile characteristics of individual patients holds significant research importance. With the notable progress in machine learning and deep learning, many effective methods have emerged for predicting drug responses utilizing features from both drugs and cell lines. However, these methods are inadequate in capturing a sufficient number of features inherent to drugs. Consequently, we propose a representational approach for drugs that incorporates three distinct types of features: the molecular graph, the SMILES string, and the molecular fingerprint. In this study, a novel deep learning model, named MCMVDRP, is introduced for the prediction of cancer drug responses. In our proposed model, these extracted features are combined and then passed through fully connected layers to predict the drug response in terms of IC50 values. Experimental results demonstrate that the presented model outperforms current state-of-the-art models.
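The fusion step described above can be pictured as follows: embeddings from the three drug views and a cell-line profile are concatenated and passed through fully connected layers to regress IC50. The sketch below is illustrative only; the individual encoders, all dimensions, and the class name are placeholders, not the MCMVDRP implementation.

```python
# Illustrative fusion head: three drug views (graph embedding, SMILES-sequence
# embedding, molecular fingerprint) plus a cell-line expression vector are
# concatenated and passed through fully connected layers to regress IC50.
# The upstream encoders and every dimension here are placeholders.
import torch
import torch.nn as nn

class FusionIC50(nn.Module):
    def __init__(self, d_graph=128, d_smiles=128, d_fp=2048, d_cell=512):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(d_graph + d_smiles + d_fp + d_cell, 512), nn.ReLU(),
            nn.Dropout(0.2),
            nn.Linear(512, 128), nn.ReLU(),
            nn.Linear(128, 1),                       # predicted (log) IC50
        )

    def forward(self, g, s, fp, cell):
        return self.mlp(torch.cat([g, s, fp, cell], dim=-1))

model = FusionIC50()
ic50 = model(torch.randn(4, 128), torch.randn(4, 128),
             torch.randn(4, 2048), torch.randn(4, 512))
print(ic50.shape)                                    # torch.Size([4, 1])
```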

6.
Sci Rep ; 14(1): 20622, 2024 09 04.
Article in English | MEDLINE | ID: mdl-39232053

ABSTRACT

Alzheimer's Disease (AD) causes the gradual death of brain cells through brain shrinkage and is more prevalent in older people. In most cases, the symptoms of AD are mistaken for age-related stress. The most widely utilized method to detect AD is Magnetic Resonance Imaging (MRI), and Artificial Intelligence (AI) techniques have made identifying brain-related diseases easier. However, the similarity of phenotypes makes it challenging to identify the disease from neuro-images. Hence, a deep learning method to detect AD at an early stage is suggested in this work. The newly implemented "Enhanced Residual Attention with Bi-directional Long Short-Term Memory (Bi-LSTM) (ERABi-LNet)" is used in the detection phase to identify AD from MRI images. This model enhances the performance of Alzheimer's detection by 2-5%, minimizes error rates, and improves the balance of the model so that multi-class problems are supported. First, MRI images are given to a "Residual Attention Network (RAN)", which is specially developed with three convolutional layers, namely atrous, dilated and Depth-Wise Separable (DWS), to obtain the relevant attributes. The most appropriate attributes are determined by these layers and subjected to target-based fusion. The fused attributes are then fed into the "Attention-based Bi-LSTM", from which the final outcome is obtained. By tuning the parameters of the ERABi-LNet with Modified Search and Rescue Operations (MCDMR-SRO), the model obtains a median-based detection efficiency of 26.37% and an accuracy of 97.367%. The obtained results are compared with ROA-ERABi-LNet, EOO-ERABi-LNet, GTBO-ERABi-LNet and SRO-ERABi-LNet. The ERABi-LNet thus provides enhanced accuracy and other performance metrics compared to these deep learning models. The proposed method has better sensitivity, specificity, F1-Score and False Positive Rate than all the above-mentioned competing models, with values of 97.49%, 97.84%, 97.74% and 2.616, respectively. This ensures that the model has better learning capabilities and provides fewer false positives with balanced prediction.
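The sketch below illustrates two of the ingredients named above, a dilated/depth-wise separable convolution front end and an attention-weighted Bi-LSTM head, in minimal PyTorch form; the residual attention blocks, target-based fusion, and MCDMR-SRO tuning of the actual ERABi-LNet are not reproduced, and all sizes are assumptions.

```python
# Compact sketch of a dilated + depth-wise separable convolution front end
# feeding an attention-weighted bidirectional LSTM classifier. Layer sizes,
# the input resolution, and the 3-class head are illustrative assumptions.
import torch
import torch.nn as nn

class AttnBiLSTMHead(nn.Module):
    def __init__(self, in_ch=1, feat=32, hidden=64, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, padding=2, dilation=2), nn.ReLU(),   # dilated (atrous) conv
            nn.Conv2d(feat, feat, 3, padding=1, groups=feat), nn.ReLU(),   # depth-wise conv
            nn.Conv2d(feat, feat, 1), nn.ReLU(),                            # point-wise conv
            nn.AdaptiveAvgPool2d((16, 1)),                                  # -> 16 positions
        )
        self.bilstm = nn.LSTM(feat, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)
        self.cls = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                                    # x: (batch, 1, H, W) MRI slice
        f = self.features(x).squeeze(-1).transpose(1, 2)     # (batch, 16, feat)
        h, _ = self.bilstm(f)
        w = torch.softmax(self.attn(h), dim=1)               # attention over positions
        return self.cls((w * h).sum(dim=1))

logits = AttnBiLSTMHead()(torch.randn(2, 1, 128, 128))
print(logits.shape)                                          # torch.Size([2, 3])
```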


Subjects
Alzheimer Disease; Magnetic Resonance Imaging; Humans; Alzheimer Disease/diagnostic imaging; Alzheimer Disease/pathology; Magnetic Resonance Imaging/methods; Deep Learning; Memory, Short-Term/physiology; Brain/diagnostic imaging; Brain/pathology; Neural Networks, Computer; Aged
7.
Cogn Neurodyn ; 18(4): 1445-1465, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39104683

ABSTRACT

Estimating cognitive workload levels is an emerging research topic in the cognitive neuroscience domain, as participants' performance is highly influenced by cognitive overload or underload. Different physiological measures such as Electroencephalography (EEG), Functional Magnetic Resonance Imaging, Functional near-infrared spectroscopy, respiratory activity, and eye activity are used to estimate workload levels with the help of machine learning or deep learning techniques. Some reviews focus only on EEG-based workload estimation using machine learning classifiers or on multimodal fusion of different physiological measures for workload estimation. However, a detailed analysis covering all physiological measures for estimating cognitive workload levels is still lacking. Thus, this survey provides an in-depth analysis of all the physiological measures for assessing cognitive workload. It covers the basics of cognitive workload, open-access datasets, the experimental paradigms of cognitive tasks, and the different measures for estimating workload levels. Lastly, we highlight the significant findings of this review, identify open challenges, and outline future directions for researchers to overcome them.

8.
Sensors (Basel) ; 24(15)2024 Jul 26.
Article in English | MEDLINE | ID: mdl-39123903

ABSTRACT

The manufacturing industry has been operating within a constantly evolving technological environment, underscoring the importance of maintaining the efficiency and reliability of manufacturing processes. Motor-related failures, especially bearing defects, are common and serious issues in manufacturing processes. Bearings provide accurate and smooth movements and play essential roles in mechanical equipment with shafts. Given their importance, bearing failure diagnosis has been extensively studied. However, the imbalance in failure data and the complexity of time series data make diagnosis challenging. Conventional AI models (convolutional neural networks (CNNs), long short-term memory (LSTM), support vector machine (SVM), and extreme gradient boosting (XGBoost)) face limitations in diagnosing such failures. To address this problem, this paper proposes a bearing failure diagnosis model using a graph convolution network (GCN)-based LSTM autoencoder with self-attention. The model was trained on data extracted from the Case Western Reserve University (CWRU) dataset and a fault simulator testbed. The proposed model achieved 97.3% accuracy on the CWRU dataset and 99.9% accuracy on the fault simulator dataset.
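As a simplified illustration of the autoencoder idea above, the sketch below shows an LSTM autoencoder over vibration windows whose reconstruction error can serve as a fault score; the paper's graph convolution layers and self-attention are omitted, and the window length and layer sizes are assumptions.

```python
# Simplified LSTM autoencoder for vibration windows: anomalous (faulty) bearings
# are flagged by high reconstruction error. The paper's GCN layers and
# self-attention are omitted here; window length and sizes are assumptions.
import torch
import torch.nn as nn

class LSTMAutoencoder(nn.Module):
    def __init__(self, n_features=1, hidden=32):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_features)

    def forward(self, x):                          # x: (batch, time, features)
        _, (h, _) = self.encoder(x)
        # Repeat the final latent state as decoder input at every time step.
        z = h[-1].unsqueeze(1).repeat(1, x.size(1), 1)
        dec, _ = self.decoder(z)
        return self.out(dec)

model = LSTMAutoencoder()
window = torch.randn(8, 400, 1)                    # 8 vibration windows
recon = model(window)
score = ((window - recon) ** 2).mean(dim=(1, 2))   # per-window fault score
print(score.shape)                                 # torch.Size([8])
```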

9.
Sensors (Basel) ; 24(15)2024 Aug 05.
Article in English | MEDLINE | ID: mdl-39124102

ABSTRACT

The surface quality of milled blade-root grooves in industrial turbine blades significantly influences their mechanical properties. The surface texture reveals the interaction between the tool and the workpiece during machining, which plays a key role in determining surface quality. In addition, there is a significant correlation between acoustic vibration signals and surface texture features. However, current research on surface quality is still relatively limited, and most of it considers only a single signal. In this paper, 160 sets of industrial field data were collected by multiple sensors to study the surface quality of a blade-root groove. A surface texture feature prediction method based on acoustic vibration signal fusion is proposed to evaluate surface quality. The fast Fourier transform (FFT) is used to process the signals, and clean, smooth features are extracted by combining wavelet denoising with multivariate smoothing. At the same time, based on the gray-level co-occurrence matrix, surface texture image features of the blade-root groove at different angles are extracted to describe the texture. A texture feature prediction model is then established with the fused acoustic vibration signal features as input and the texture features as output. After the texture features are predicted, surface quality is evaluated by setting a threshold value. The threshold is selected based on all sample data, and the final judgment accuracy is 90%.
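The signal side of the pipeline described above can be sketched as wavelet denoising followed by FFT band features; the wavelet family, threshold rule, and band edges below are assumptions, and the GLCM image-texture branch is not shown.

```python
# Sketch of the signal-side feature pipeline: wavelet denoising of an
# acoustic/vibration trace followed by FFT-band features. Wavelet family,
# level, threshold rule, and band edges are assumptions.
import numpy as np
import pywt

def wavelet_denoise(x, wavelet="db4", level=4):
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise estimate
    thr = sigma * np.sqrt(2 * np.log(len(x)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]

def fft_band_energies(x, fs, bands=((0, 1e3), (1e3, 5e3), (5e3, 10e3))):
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return np.array([spec[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands])

fs = 20_000
raw = np.random.randn(fs)                                     # 1 s of synthetic signal
features = fft_band_energies(wavelet_denoise(raw), fs)
print(features.shape)                                         # (3,)
```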

10.
Sci Rep ; 14(1): 18284, 2024 Aug 07.
Article in English | MEDLINE | ID: mdl-39112684

ABSTRACT

Mine flooding accidents have occurred frequently in recent years, and mine water inflow is one of the most crucial flood-warning indicators. However, mine water inflow is characterized by non-linearity and instability, making it difficult to predict. Accordingly, we propose a time series prediction model based on the fusion of the Transformer algorithm, which relies on self-attention, and the LSTM algorithm, which captures long-term dependencies. In this paper, water inflow data from the Baotailong mine in Heilongjiang Province are used as the sample and are divided into training and test sets at different ratios in order to obtain optimal prediction results. We demonstrate that the LSTM-Transformer model exhibits the highest training accuracy when the ratio is 7:3. To improve search efficiency, a combination of random search and Bayesian optimization is used to determine the network model parameters and regularization parameters. Finally, to verify the accuracy of the LSTM-Transformer model, it is compared with LSTM, CNN, Transformer and CNN-LSTM models. The results show that the LSTM-Transformer has the highest prediction accuracy and improves on all the evaluation indicators.
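One plausible minimal form of the fusion described above is an LSTM followed by a Transformer encoder over its outputs, as sketched below in PyTorch; the layer sizes, window length, and the random-search/Bayesian tuning are not taken from the paper.

```python
# Sketch of an LSTM-Transformer hybrid: the LSTM captures long-term
# dependencies and a Transformer encoder applies self-attention over the
# LSTM outputs before a regression head. All sizes are assumptions.
import torch
import torch.nn as nn

class LSTMTransformer(nn.Module):
    def __init__(self, n_features=1, hidden=64, heads=4, layers=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        enc = nn.TransformerEncoderLayer(d_model=hidden, nhead=heads,
                                         dim_feedforward=128, batch_first=True)
        self.transformer = nn.TransformerEncoder(enc, num_layers=layers)
        self.head = nn.Linear(hidden, 1)            # next-step water inflow

    def forward(self, x):                           # x: (batch, window, features)
        h, _ = self.lstm(x)
        h = self.transformer(h)
        return self.head(h[:, -1])

pred = LSTMTransformer()(torch.randn(16, 30, 1))    # 30-step input windows
print(pred.shape)                                   # torch.Size([16, 1])
```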

11.
BMC Bioinformatics ; 25(1): 260, 2024 Aug 08.
Article in English | MEDLINE | ID: mdl-39118043

ABSTRACT

Quantitative measurement of RNA expression levels through RNA-Seq is an ideal replacement for conventional cancer diagnosis via microscope examination. Currently, cancer-related RNA-Seq studies focus on two aspects: classifying the status and tissue of origin of a sample and discovering marker genes. Existing studies typically identify marker genes by statistically comparing healthy and cancer samples. However, this approach overlooks marker genes with small expression differences and may be influenced by experimental conditions. This paper introduces "GENESO," a novel framework for pan-cancer classification and marker gene discovery that uses the occlusion method in conjunction with deep learning. We first trained a baseline deep LSTM neural network capable of distinguishing the origins and statuses of samples from RNA-Seq data. We then propose a novel marker gene discovery method called "Symmetrical Occlusion (SO)". It works with the baseline LSTM network, mimicking the "gain of function" and "loss of function" of genes to quantitatively evaluate their importance in pan-cancer classification. After identifying the most important genes, we isolate them to train new neural networks, resulting in higher-performance LSTM models that use only a reduced set of highly relevant genes. The baseline neural network achieves a validation accuracy of 96.59% in pan-cancer classification. With the help of SO, the accuracy of the second network reaches 98.30% while using 67% fewer genes. Notably, our method excels in identifying marker genes that are not differentially expressed. Moreover, we assessed the feasibility of our method using single-cell RNA-Seq data, employing known marker genes as a validation test.
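A toy version of the symmetrical-occlusion idea is sketched below: each gene's expression is perturbed in both directions ("loss of function" by zeroing, "gain of function" by scaling up), and importance is scored by the change in the classifier's output. The gain factor, scoring rule, and model interface are assumptions, not the GENESO code.

```python
# Toy symmetrical-occlusion importance: perturb each gene in both directions,
# re-run a trained classifier, and score importance by the change in the
# predicted class probability. Gain factor and scoring rule are placeholders.
import torch

@torch.no_grad()
def symmetrical_occlusion_importance(model, x, target_class, gain=2.0):
    """x: (n_samples, n_genes) expression matrix; returns one score per gene."""
    base = torch.softmax(model(x), dim=1)[:, target_class]
    scores = torch.zeros(x.shape[1])
    for g in range(x.shape[1]):
        lo, hi = x.clone(), x.clone()
        lo[:, g] = 0.0                 # "loss of function"
        hi[:, g] = hi[:, g] * gain     # "gain of function"
        p_lo = torch.softmax(model(lo), dim=1)[:, target_class]
        p_hi = torch.softmax(model(hi), dim=1)[:, target_class]
        scores[g] = (base - p_lo).abs().mean() + (p_hi - base).abs().mean()
    return scores

# Usage (with any trained torch classifier over gene-expression vectors):
# scores = symmetrical_occlusion_importance(model, expression_tensor, target_class=3)
# top_genes = scores.topk(100).indices
```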


Subjects
Deep Learning; Neoplasms; Humans; Neoplasms/genetics; Neoplasms/classification; Neural Networks, Computer; Biomarkers, Tumor/genetics; RNA-Seq/methods
12.
Article in English | MEDLINE | ID: mdl-39086252

ABSTRACT

Estimation of mental workload from electroencephalogram (EEG) signals aims to accurately measure the cognitive demands placed on an individual during multitasking mental activities. By analyzing the brain activity of the subject, we can determine the level of mental effort required to perform a task and optimize the workload to prevent cognitive overload or underload. This information can be used to enhance performance and productivity in various fields such as healthcare, education, and aviation. In this paper, we propose a method that uses EEG and deep neural networks to estimate the mental workload of human subjects during multitasking mental activities. Notably, our proposed method employs subject-independent classification. We use the "STEW" dataset, which consists of two tasks, namely "No task" and "simultaneous capacity (SIMKAP)-based multitasking activity". We estimate the workload levels of the two tasks using a composite framework consisting of brain connectivity analysis and deep neural networks. After initial preprocessing of the EEG signals, the relationships among the 14 EEG channels are analyzed to evaluate effective brain connectivity. This assessment illustrates the information flow between brain regions using the direct Directed Transfer Function (dDTF) method. We then propose a deep hybrid model based on pre-trained Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM) for the classification of workload levels. The proposed deep model achieved an accuracy of 83.12% under the subject-independent leave-subject-out (LSO) approach. The pre-trained CNN + LSTM approach to EEG data thus proves to be an accurate method for assessing mental workload.
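The subject-independent protocol described above can be illustrated with a leave-one-subject-out loop; the sketch below uses a plain logistic regression on flattened connectivity features as a stand-in for the pre-trained CNN + LSTM, and the subject/trial counts are assumptions.

```python
# Sketch of subject-independent (leave-one-subject-out) evaluation over
# per-trial feature vectors, e.g. flattened dDTF connectivity matrices.
# A logistic regression stands in for the pre-trained CNN + LSTM classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(45 * 6, 14 * 14))       # 45 subjects x 6 trials, 14x14 connectivity
y = rng.integers(0, 2, size=45 * 6)          # low vs. high workload labels
subjects = np.repeat(np.arange(45), 6)

accs = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=subjects):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    accs.append(accuracy_score(y[test_idx], clf.predict(X[test_idx])))
print(f"LSO accuracy: {np.mean(accs):.3f}")
```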

13.
Heliyon ; 10(15): e35183, 2024 Aug 15.
Article in English | MEDLINE | ID: mdl-39170306

ABSTRACT

The battery's performance heavily influences the safety, dependability, and operational efficiency of electric vehicles (EVs). This paper introduces an innovative hybrid deep learning architecture that dramatically enhances the estimation of the state of charge (SoC) of lithium-ion (Li-ion) batteries, crucial for efficient EV operation. Our model uniquely integrates a convolutional neural network (CNN) with bidirectional long short-term memory (Bi-LSTM), optimized through evolutionary intelligence, enabling an advanced level of precision in SoC estimation. A novel aspect of this work is the application of the Group Learning Algorithm (GLA) to tune the hyperparameters of the CNN-Bi-LSTM network meticulously. This approach not only refines the model's accuracy but also significantly enhances its efficiency by optimizing each parameter to best capture and integrate both spatial and temporal information from the battery data. This is in stark contrast to conventional models that typically focus on either spatial or temporal data, but not both effectively. The model's robustness is further demonstrated through its training across six diverse datasets that represent a range of EV discharge profiles, including the Highway Fuel Economy Test (HWFET), the US06 test, the Beijing Dynamic Stress Test (BJDST), the dynamic stress test (DST), the federal urban driving schedule (FUDS), and the urban development driving schedule (UDDS). These tests are crucial for ensuring that the model can perform under various real-world conditions. Experimentally, our hybrid model not only surpasses the performance of existing LSTM and CNN frameworks in tracking SoC estimation but also achieves an impressively quick convergence to true SoC values, maintaining an average root mean square error (RMSE) of less than 1 %. Furthermore, the experimental outcomes suggest that this new deep learning methodology outstrips conventional approaches in both convergence speed and estimation accuracy, thus promising to significantly enhance battery life and overall EV efficiency.
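A minimal CNN-Bi-LSTM state-of-charge estimator in the spirit of the model above is sketched below: a 1-D convolution extracts local patterns from voltage/current/temperature windows, a bidirectional LSTM models their temporal evolution, and the output is bounded to [0, 1]; the Group Learning Algorithm tuning and all sizes are assumptions.

```python
# Minimal CNN + bidirectional LSTM state-of-charge estimator. The GLA
# hyperparameter tuning and dataset handling are not shown; sizes are assumed.
import torch
import torch.nn as nn

class CNNBiLSTMSoC(nn.Module):
    def __init__(self, n_features=3, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, 32, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.bilstm = nn.LSTM(32, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, x):                            # x: (batch, time, [V, I, T])
        f = self.conv(x.transpose(1, 2)).transpose(1, 2)
        h, _ = self.bilstm(f)
        return torch.sigmoid(self.head(h[:, -1]))    # SoC bounded to [0, 1]

soc = CNNBiLSTMSoC()(torch.randn(8, 100, 3))
print(soc.shape)                                     # torch.Size([8, 1])
```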

14.
Stud Health Technol Inform ; 316: 863-867, 2024 Aug 22.
Article in English | MEDLINE | ID: mdl-39176929

ABSTRACT

In the realm of ophthalmic surgeries, silicone oil is often utilized as a tamponade agent for repairing retinal detachments, but it necessitates subsequent removal. This study harnesses the power of machine learning to analyze the macular and optic disc perfusion changes pre and post-silicone oil removal, using Optical Coherence Tomography Angiography (OCTA) data. Building upon the foundational work of prior research, our investigation employs Gaussian Process Regression (GPR) and Long Short-Term Memory (LSTM) networks to create predictive models based on OCTA scans. We conducted a comparative analysis focusing on the flow in the outer retina and vessel density in the deep capillary plexus (superior-hemi and perifovea) to track perfusion changes across different time points. Our findings indicate that while machine learning models predict the flow in the outer retina with reasonable accuracy, predicting the vessel density in the deep capillary plexus (particularly in the superior-hemi and perifovea regions) remains challenging. These results underscore the potential of machine learning to contribute to personalized patient care in ophthalmology, despite the inherent complexities in predicting ocular perfusion changes.
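The GPR side of the analysis can be sketched with scikit-learn as below, mapping pre-operative OCTA metrics to a post-removal perfusion value with predictive uncertainty; the feature names, kernel choice, and synthetic data are illustrative only.

```python
# Hedged sketch of a Gaussian Process Regression mapping pre-operative OCTA
# metrics to a post-removal perfusion value, with predictive uncertainty.
# Features, kernel, and data are illustrative placeholders.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
X_pre = rng.normal(size=(40, 3))      # e.g. outer-retina flow, vessel density, age
y_post = X_pre @ np.array([0.5, 0.3, -0.2]) + rng.normal(scale=0.1, size=40)

gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(X_pre, y_post)
mean, std = gpr.predict(rng.normal(size=(5, 3)), return_std=True)
print(mean.shape, std.shape)          # (5,) (5,)
```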


Subjects
Machine Learning; Optic Disk; Retinal Detachment; Silicone Oils; Tomography, Optical Coherence; Humans; Retinal Detachment/surgery; Optic Disk/blood supply; Optic Disk/diagnostic imaging; Macula Lutea/diagnostic imaging; Macula Lutea/blood supply
15.
Front Neurosci ; 18: 1436619, 2024.
Article in English | MEDLINE | ID: mdl-39139499

ABSTRACT

Background and objective: Epilepsy, which is associated with neuronal damage and functional decline, typically presents patients with numerous challenges in their daily lives. An early diagnosis plays a crucial role in managing the condition and alleviating the patients' suffering. Electroencephalogram (EEG)-based approaches are commonly employed for diagnosing epilepsy due to their effectiveness and non-invasiveness. In this study, a classification method is proposed that uses fast Fourier Transform (FFT) feature extraction in conjunction with convolutional neural network (CNN) and long short-term memory (LSTM) models. Methods: While most existing methods classify epilepsy within traditional frameworks, we propose a new approach that extracts features from the source data and then feeds them into a network for training and recognition. The source data are preprocessed into training and validation sets, and CNN and LSTM models are then used to classify the data. Results: In analysis of a public test dataset, FFT features were the top-performing of the three feature types examined in the fully CNN-nested LSTM model for epilepsy classification. Notably, all conducted experiments yielded high accuracy rates, with values exceeding 96% for accuracy, 93% for sensitivity, and 96% for specificity. These results are benchmarked against current methodologies, showing consistent and robust performance across all trials. Our approach consistently achieves an accuracy rate surpassing 97.00%, with values ranging from 97.95% to 99.83% in individual experiments. Particularly noteworthy is the superior accuracy of our method in the AB versus (vs.) CDE comparison, registering 99.06%. Conclusion: Our method exhibits precise classification abilities in distinguishing between epileptic and non-epileptic individuals, irrespective of whether the participant's eyes are closed or open. Furthermore, our technique shows remarkable performance in categorizing epilepsy type, distinguishing epileptic ictal and interictal states from non-epileptic conditions. An inherent advantage of our automated classification approach is its capability to disregard whether EEG data were acquired with eyes closed or open. Such innovation holds promise for real-world applications, potentially aiding medical professionals in diagnosing epilepsy more efficiently.
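The FFT feature-extraction step referred to above can be sketched as follows: each EEG segment is converted to its magnitude spectrum, optionally truncated to a clinically relevant band, before being fed to the CNN/LSTM classifier; the segment length, sampling rate, and 40 Hz cut-off below are assumptions.

```python
# Sketch of FFT feature extraction for EEG epochs prior to classification.
# Segment length, sampling rate, and the 0-40 Hz cut-off are assumptions.
import numpy as np

def fft_features(segments, fs=173.61, fmax=40.0):
    """segments: (n_segments, n_samples) EEG epochs -> (n_segments, n_bins)."""
    spectra = np.abs(np.fft.rfft(segments, axis=1))
    freqs = np.fft.rfftfreq(segments.shape[1], d=1.0 / fs)
    return spectra[:, freqs <= fmax]

segments = np.random.randn(100, 4097)          # stand-in for single-channel EEG epochs
X = fft_features(segments)
print(X.shape)                                 # (100, number of bins below 40 Hz)
# X can then be split into training/validation sets and fed to the CNN-LSTM.
```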

16.
PeerJ Comput Sci ; 10: e2192, 2024.
Article in English | MEDLINE | ID: mdl-39145218

ABSTRACT

Background: For space object detection tasks, conventional optical cameras face various application challenges, including backlight and dim-light conditions. As a novel optical sensor, the event camera offers high temporal resolution and high dynamic range owing to its asynchronous output, which provides a new solution to the above challenges. However, this asynchronous output also makes event cameras incompatible with conventional object detection methods designed for frame images. Methods: An asynchronous convolutional memory network (ACMNet) for processing event camera data is proposed to address the detection of space objects under backlight and dim-light conditions. The key idea of ACMNet is to first characterize the asynchronous event streams with an Event Spike Tensor (EST) voxel grid through an exponential kernel function, then extract spatial features using a feed-forward feature extraction network, aggregate temporal features using a proposed convolutional spatiotemporal memory module (ConvLSTM), and finally realize end-to-end object detection on continuous event streams. Results: Comparison experiments between ACMNet and classical object detection methods were carried out on Event_DVS_space7, a large-scale synthetic space event dataset based on event cameras. The results show that ACMNet outperforms the others, improving mAP by 12.7% while maintaining the processing speed. Moreover, event cameras still perform well in backlight and dim-light conditions where conventional optical cameras fail. This research offers a new possibility for detection under intricate lighting and motion conditions, emphasizing the benefits of event cameras for space object detection.
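A toy construction of an event voxel grid in the spirit of the EST representation above is sketched below: each event contributes to the temporal bins with an exponential kernel weight. The grid size, bin count, kernel bandwidth, and loop-based implementation are illustrative only and do not reproduce ACMNet.

```python
# Toy event voxel grid: each event (x, y, t, polarity) contributes to nearby
# temporal bins with an exponential kernel weight. Grid size, number of bins,
# and the kernel bandwidth are illustrative assumptions.
import numpy as np

def event_voxel_grid(events, H=128, W=128, bins=9, tau=0.1):
    """events: (N, 4) array of [x, y, t, p] with t normalized to [0, 1]."""
    grid = np.zeros((bins, H, W), dtype=np.float32)
    bin_centers = np.linspace(0.0, 1.0, bins)
    for x, y, t, p in events:
        w = np.exp(-np.abs(t - bin_centers) / tau)       # exponential kernel
        grid[:, int(y), int(x)] += (1.0 if p > 0 else -1.0) * w
    return grid

events = np.column_stack([
    np.random.randint(0, 128, 1000),        # x coordinates
    np.random.randint(0, 128, 1000),        # y coordinates
    np.sort(np.random.rand(1000)),          # normalized timestamps
    np.random.choice([-1, 1], 1000),        # polarity
])
print(event_voxel_grid(events).shape)       # (9, 128, 128)
```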

17.
PeerJ Comput Sci ; 10: e2124, 2024.
Article in English | MEDLINE | ID: mdl-39145239

ABSTRACT

Pashtu is one of the most widely spoken languages in south-central Asia. Recognition of Pashtu numerics poses challenges due to the script's cursive nature. Despite this, a machine learning-based optical character recognition (OCR) model can be an effective way to tackle the issue. The main aim of the study is to propose an optimized machine learning model that can efficiently identify Pashtu numerics from 0 to 9. The methodology includes organizing the data into directories, each representing a label. The data are then preprocessed: images are resized to 32 × 32, normalized by dividing pixel values by 255, and reshaped for model input. The dataset was split in the ratio of 80:20. Optimized hyperparameters were then selected for the LSTM and CNN models by trial and error. Models were evaluated using accuracy and loss curves, classification reports, and confusion matrices. The results indicate that the proposed LSTM model slightly outperforms the proposed CNN model, with a macro-average precision of 0.9877, recall of 0.9876, and F1 score of 0.9876. Both models demonstrate remarkable performance in accurately recognizing Pashtu numerics, achieving an accuracy of nearly 98%; the LSTM model exhibits a marginal advantage over the CNN model in this regard.
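The preprocessing recipe listed above (resize to 32 × 32, scale by 1/255, reshape, 80:20 split) can be sketched as follows; the directory loading is assumed, and the random arrays below are stand-ins for the actual Pashtu digit images.

```python
# Sketch of the preprocessing steps: resize digit images to 32x32, scale pixel
# values to [0, 1], reshape for model input, and split 80:20. Directory layout
# and image loading are assumed; random arrays stand in for real data.
import numpy as np
from PIL import Image
from sklearn.model_selection import train_test_split

def load_digit(path, size=(32, 32)):
    """Per-image transform: grayscale, resize to 32x32, normalize to [0, 1]."""
    img = Image.open(path).convert("L").resize(size)
    return np.asarray(img, dtype=np.float32) / 255.0

# Stand-in for images/labels collected from per-digit directories (0-9).
X = np.random.rand(1000, 32, 32).astype(np.float32)
y = np.random.randint(0, 10, 1000)

X = X.reshape(-1, 32, 32, 1)                             # reshape for model input
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)    # 80:20 split
print(X_train.shape, X_test.shape)                       # (800, 32, 32, 1) (200, 32, 32, 1)
```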

18.
Ecotoxicol Environ Saf ; 283: 116856, 2024 Aug 15.
Article in English | MEDLINE | ID: mdl-39151373

ABSTRACT

Air pollution in industrial environments, particularly in the chrome plating process, poses significant health risks to workers due to high concentrations of hazardous pollutants. Exposure to substances like hexavalent chromium, volatile organic compounds (VOCs), and particulate matter can lead to severe health issues, including respiratory problems and lung cancer. Continuous monitoring and timely intervention are crucial to mitigate these risks. Traditional air quality monitoring methods often lack real-time data analysis and predictive capabilities, limiting their effectiveness in addressing pollution hazards proactively. This paper introduces a real-time air pollution monitoring and forecasting system specifically designed for the chrome plating industry. The system, supported by Internet of Things (IoT) sensors and AI approaches, detects a wide range of air pollutants, including NH3, CO, NO2, CH4, CO2, SO2, O3, PM2.5, and PM10, and provides real-time data on pollutant concentration levels. Data collected by the sensors are processed using LSTM, Random Forest, and Linear Regression models to predict pollution levels. The LSTM model achieved a coefficient of determination (R²) of 99 % and a mean absolute error (MAE) of 0.33 for temperature and humidity forecasting. For PM2.5, the Random Forest model outperformed the others, achieving an R² of 84 % and an MAE of 10.11. The system activates factory exhaust fans to circulate air when high pollution levels are predicted for the coming hours, allowing proactive measures to improve air quality before issues arise. This approach demonstrates significant advances in industrial environmental monitoring, enabling dynamic responses to pollution and improving air quality in industrial settings.
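As a minimal stand-in for the PM2.5 branch of the system, the sketch below trains a Random Forest on lagged sensor features and reports R² and MAE, the metrics quoted above; the features and data are synthetic placeholders.

```python
# Minimal Random Forest stand-in for the PM2.5 forecasting branch, reporting
# R2 and MAE. Features and data are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score, mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 8))                  # e.g. lagged PM2.5, NO2, CO, temperature, humidity
y = X @ rng.normal(size=8) + rng.normal(scale=0.5, size=2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = rf.predict(X_te)
print(f"R2={r2_score(y_te, pred):.2f}  MAE={mean_absolute_error(y_te, pred):.2f}")
```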

19.
Sci Rep ; 14(1): 17968, 2024 Aug 02.
Article in English | MEDLINE | ID: mdl-39095527

ABSTRACT

As Europe integrates more renewable energy resources, notably offshore wind power, into its super meshed grid, the demand for reliable long-distance High Voltage Direct Current (HVDC) transmission systems has surged. This paper addresses the intricacies of HVDC systems built upon Modular Multi-Level Converters (MMCs), especially the rapid rise of DC fault currents. We propose a novel fault identification and classification scheme for DC transmission lines that employs Long Short-Term Memory (LSTM) networks integrated with the Discrete Wavelet Transform (DWT) for feature extraction. Our LSTM-based algorithm operates effectively under challenging environmental conditions and reliably detects high-resistance faults. A unique three-level relay system with multiple time windows (1 ms, 1.5 ms, and 2 ms) ensures accurate fault detection over large distances. Bayesian optimization is employed for hyperparameter tuning, streamlining the model's training process. The study shows that the proposed framework exhibits 100% resilience against external faults and disturbances, achieving an average recognition accuracy of 99.04% across diverse testing scenarios. Unlike traditional schemes that rely on multiple manual thresholds, our approach uses a single intelligently tuned model to detect faults of up to 480 ohms, enhancing the efficiency and robustness of DC grid protection.
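The DWT-plus-LSTM front end described above can be sketched as follows: per-sub-band energies from a short post-fault current window form the feature vector passed to an LSTM classifier; the wavelet, decomposition level, window length, and class set are assumptions.

```python
# Sketch of a DWT front end for relay-window current samples feeding a small
# LSTM classifier. Wavelet, level, window length, and fault classes are assumed.
import numpy as np
import pywt
import torch
import torch.nn as nn

def dwt_energy_features(window, wavelet="db4", level=4):
    coeffs = pywt.wavedec(window, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])       # one energy per sub-band

class FaultLSTM(nn.Module):
    def __init__(self, n_feat=5, hidden=32, n_classes=3):    # e.g. pole-pole, pole-ground, external
        super().__init__()
        self.lstm = nn.LSTM(n_feat, hidden, batch_first=True)
        self.cls = nn.Linear(hidden, n_classes)

    def forward(self, x):                                     # x: (batch, steps, n_feat)
        h, _ = self.lstm(x)
        return self.cls(h[:, -1])

window = np.random.randn(200)                                 # samples within the relay window
feats = dwt_energy_features(window)
logits = FaultLSTM()(torch.tensor(feats, dtype=torch.float32).view(1, 1, -1))
print(feats.shape, logits.shape)                              # (5,) torch.Size([1, 3])
```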

20.
Sci Rep ; 14(1): 17841, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39090177

ABSTRACT

Precise forecasting of air quality is of great significance as an integral component of early warning systems. This remains a formidable challenge owing to limited information on emission sources and the considerable uncertainties inherent in dynamic processes. To improve the accuracy of air quality forecasting, this work proposes a new spatiotemporal hybrid deep learning model based on variational mode decomposition (VMD), graph attention networks (GAT) and bi-directional long short-term memory (BiLSTM), referred to as VMD-GAT-BiLSTM. The proposed model first employs VMD to decompose the original PM2.5 data into a series of relatively stable sub-sequences, thus reducing the influence of unknown factors on the model's prediction capability. For each sub-sequence, a GAT is then designed to explore deep spatial relationships among different monitoring stations. Next, a BiLSTM is utilized to learn the temporal features of each decomposed sub-sequence. Finally, the forecasts of the decomposed sub-sequences are summed to produce the final air quality prediction. Experimental results on the collected Beijing air quality dataset show that the proposed model outperforms the other methods considered on both short-term and long-term air quality forecasting tasks.
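The decompose-predict-sum pattern described above is sketched below: each VMD sub-sequence is forecast separately (here by a small Bi-LSTM standing in for the GAT-BiLSTM branch) and the mode forecasts are summed into the final prediction; the VMD output is a placeholder array and the spatial GAT over stations is omitted.

```python
# Skeleton of the decompose-predict-sum pattern: forecast each VMD mode with
# its own recurrent model and sum the mode forecasts. The VMD call itself is
# replaced by a placeholder array; the spatial GAT over stations is omitted.
import numpy as np
import torch
import torch.nn as nn

class ModeBiLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(1, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, 1)

    def forward(self, x):                      # x: (batch, window, 1)
        h, _ = self.lstm(x)
        return self.out(h[:, -1])

def forecast(modes, window=48):
    """modes: (K, T) array of VMD sub-sequences; returns the summed one-step forecast."""
    total = 0.0
    for mode in modes:
        x = torch.tensor(mode[-window:], dtype=torch.float32).view(1, window, 1)
        total = total + ModeBiLSTM()(x)        # in practice, one trained model per mode
    return total

modes = np.random.randn(5, 500)                # placeholder for VMD output (K=5 modes)
print(forecast(modes).shape)                   # torch.Size([1, 1])
```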
