ABSTRACT
Mass spectrometry coupled to liquid chromatography is an indispensable tool in the field of proteomics. In recent decades, more and more complex and diverse biochemical and biomedical questions have arisen. Problems to be solved involve protein identification, quantitative analysis, screening of low-abundance modifications, handling matrix effects, and concentrations differing by orders of magnitude. This has led to the development of more tailored protocols and problem-centered proteomics workflows, including an advanced choice of experimental parameters. In the most widespread bottom-up approach, the choice of collision energy in tandem mass spectrometric experiments plays an outstanding role. This review presents the collision energy optimization strategies in the field of proteomics, which can help fully exploit the potential of MS-based proteomics techniques. A systematic collection of use case studies is then presented to serve as a starting point for related further scientific work. Finally, this article discusses the issue of comparing results from different studies or obtained on different instruments, and it gives some hints on methodology transfer between laboratories based on the measurement of reference species.
Subjects
Proteomics, Tandem Mass Spectrometry, Proteomics/methods, Tandem Mass Spectrometry/methods, Liquid Chromatography
ABSTRACT
The cyclical variations in environmental temperature generated by natural rhythms constantly impact the wastewater treatment process through the aeration system. Engineering data show that fluctuations in environmental temperature cause the reactor temperature to drop at night, resulting in an increased dissolved oxygen concentration and improved effluent wastewater quality. However, the impact of natural temperature variation on wastewater treatment systems and its energy-saving potential has yet to be fully recognized. Here, we conducted a comprehensive study, using a full-scale oxic-hydrolytic and denitrification-oxic (OHO) coking wastewater treatment process as a case, and developed a dynamic aeration model integrating thermodynamics and kinetics to elucidate the energy-saving mechanisms of wastewater treatment systems in response to diurnal temperature variations. Our case study results indicate that natural diurnal temperature variations can cut energy consumption by 660,980 kWh annually (up to 30%) for the aeration unit in the OHO system. Wastewater treatment facilities located in regions with significant environmental temperature variation stand to benefit more from this energy-saving mechanism. Methods such as dynamic flow control, load shifting, and process unit editing can be incorporated into new or retrofitted wastewater treatment engineering.
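The energy-saving mechanism rests on a basic thermodynamic fact: dissolved-oxygen saturation rises as water cools, so night-time aeration transfers oxygen more efficiently. A minimal sketch of this relationship, using a classic cubic freshwater saturation fit whose coefficients are an assumption here (they are not taken from the OHO study's own model):

```python
def do_saturation(temp_c: float) -> float:
    """Approximate dissolved-oxygen saturation (mg/L) in fresh water.

    Classic empirical cubic fit; coefficients are assumed for
    illustration, not drawn from the dynamic aeration model itself.
    """
    t = temp_c
    return 14.652 - 0.41022 * t + 0.0079910 * t**2 - 0.000077774 * t**3

# Cooler night-time reactor water holds more oxygen at saturation,
# so the same air flow transfers oxygen more effectively.
day_sat = do_saturation(30.0)    # warm afternoon reactor, deg C
night_sat = do_saturation(22.0)  # cooler night-time reactor, deg C
uplift = (night_sat - day_sat) / day_sat
```

Under these assumed coefficients, an 8 °C night-time drop raises the saturation concentration by roughly 15%, which is the lever the dynamic aeration controller exploits.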
Subjects
Temperature, Liquid Waste Disposal, Wastewater, Wastewater/chemistry, Coke, Water Purification
ABSTRACT
BACKGROUND AND OBJECTIVES: Femtosecond laser trabeculotomy (FLT) creates aqueous humor outflow channels through the trabecular meshwork (TM) and is an emerging noninvasive treatment for open-angle glaucoma. The purpose of this study is to investigate the effect of pulse energy on outflow channel creation during FLT. MATERIALS AND METHODS: An FLT laser (ViaLase Inc.) was used to create outflow channels through the TM (500 µm wide by 200 µm high) in human cadaver eyes using pulse energies of 10, 15, and 20 µJ. Following treatment, tissues were fixed in 4% paraformaldehyde. The channels were imaged using optical coherence tomography (OCT) and assessed as full thickness, partial thickness, or not observable. RESULTS: Pulse energies of 15 and 20 µJ had a 100% success rate in creating full-thickness FLT channels as imaged by OCT. A pulse energy of 10 µJ resulted in no channels (n = 6), a partial-thickness channel (n = 2), and a full-thickness FLT channel (n = 2). There was a statistically significant difference in cutting widths between the 10 and 15 µJ groups (p < 0.0001), as well as between the 10 and 20 µJ groups (p < 0.0001). However, there was no statistically significant difference between the 15 and 20 µJ groups (p = 0.416). CONCLUSIONS: Fifteen microjoules is an adequate pulse energy to reliably create aqueous humor outflow channels during FLT in human cadaver eyes. OCT is a valuable tool when evaluating FLT.
Subjects
Open-Angle Glaucoma, Trabeculectomy, Humans, Trabeculectomy/methods, Open-Angle Glaucoma/surgery, Intraocular Pressure, Lasers, Cadaver
ABSTRACT
Intrusion detection systems have proliferated with varying capabilities for data generation and learning towards detecting abnormal behavior. The goal of green intrusion detection systems is to design intrusion detection systems for energy efficiency, taking into account the resource constraints of embedded devices and analyzing energy-performance-security trade-offs. Towards this goal, we provide a comprehensive survey of existing green intrusion detection systems and analyze their effectiveness in terms of performance, overhead, and energy consumption for a wide variety of low-power embedded systems such as the Internet of Things (IoT) and cyber physical systems. Finally, we provide future directions that can be leveraged by existing systems towards building a secure and greener environment.
ABSTRACT
Wireless sensor networks (WSNs) are structured for monitoring an area with distributed sensors and built-in batteries. However, most of their battery energy is consumed during the data transmission process. In recent years, several methodologies, like routing optimization, topology control, and sleep scheduling algorithms, have been introduced to improve the energy efficiency of WSNs. This study introduces a novel method based on a deep learning approach that utilizes variational autoencoders (VAEs) to improve the energy efficiency of WSNs by compressing transmission data. The VAE approach is customized in this work for compressing WSN data while retaining its important features. This is achieved by analyzing the statistical structure of the sensor data rather than providing a fixed-size latent representation. The performance of the proposed model is verified using a MATLAB simulation platform, integrating a pre-trained variational autoencoder model with openly available wireless sensor data. The proposed model performs satisfactorily in comparison to traditional methods, like the compressed sensing technique, lightweight temporal compression, and the autoencoder, achieving an average compression rate of 1.5572. The WSN simulation also indicates that the VAE-incorporated architecture attains a maximum network lifetime of 1491 s and suggests that VAE could be used for compression-based transmission in WSNs, as its reconstruction rate is 0.9902, better than that of all the other techniques.
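The VAE at the heart of this scheme is trained on the usual negative-ELBO objective: a reconstruction term plus a KL regularizer on the Gaussian latent code. A minimal NumPy sketch of that objective (the paper's actual network architecture and training setup are not reproduced here):

```python
import numpy as np

def gaussian_kl(mu, log_var):
    """KL divergence between N(mu, exp(log_var)) and the standard normal,
    summed over latent dimensions -- the regularizer in the VAE objective."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

def vae_loss(x, x_recon, mu, log_var):
    """Negative ELBO: squared reconstruction error plus the latent KL term."""
    recon = np.sum((x - x_recon) ** 2)
    return recon + gaussian_kl(mu, log_var)

# Toy sensor window: a perfect reconstruction with a standard-normal
# latent code incurs zero loss.
x = np.array([0.1, 0.2, 0.3, 0.4])
mu = np.zeros(2)
log_var = np.zeros(2)
loss = vae_loss(x, x.copy(), mu, log_var)
```

Minimizing this loss is what lets the encoder's latent code act as the compressed payload transmitted over the network, with the decoder reconstructing the signal at the sink.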
ABSTRACT
Energy efficiency and data reliability are important indicators of network performance in wireless sensor networks. Existing routing protocol research often ignores the impact of node coverage on the network and does not take into account that multiple sensor nodes may sense the same spatial point, which wastes network resources, especially in large-scale networks. In addition, the blindness of geographic routing in data transmission has long troubled researchers: nodes are unable to determine whether a data transmission is valid. To solve these problems, this paper innovatively combines a routing protocol with a coverage control technique and proposes a node collaborative scheduling algorithm, which fully exploits the correlation between sensor nodes to reduce the number of active working nodes and the number of packets generated, thereby further reducing energy consumption and network delay and improving the packet delivery rate. To address the unreliability of geographic routing, a highly reliable link detection and repair scheme is proposed to check the communication link status and repair invalid links, which greatly improves the packet delivery rate and throughput of the network and exhibits good robustness. Extensive experiments demonstrate the effectiveness and superiority of the proposed scheme and algorithm.
ABSTRACT
Detection of abnormal situations in mobile systems not only provides predictions about risky situations but also has the potential to increase energy efficiency. In this study, unsupervised hybrid anomaly detection approaches were developed using two real-world drives of a battery electric vehicle. The anomaly detection performance of hybrid models created by combining a Long Short-Term Memory (LSTM) Autoencoder, the Local Outlier Factor (LOF), and the Mahalanobis distance was evaluated with the silhouette score, Davies-Bouldin index, and Calinski-Harabasz index, and the potential energy recovery rates were also determined. The two driving datasets were evaluated in terms of their chaotic aspects using the Lyapunov exponent, Kolmogorov-Sinai entropy, and fractal dimension metrics. The developed hybrid models are superior to their sub-methods in anomaly detection. Hybrid Model-2 produced 2.92% more successful results in anomaly detection than Hybrid Model-1. In terms of potential energy saving, Hybrid Model-1 provided a 31.26% advantage, while Hybrid Model-2 provided 31.48%. It was also observed that there is a close relationship between anomaly and chaoticity. In a literature dominated by cyber security and visual sources for anomaly detection, a strategy was developed that provides energy efficiency-based anomaly detection and chaotic analysis from data obtained without additional sensors.
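One of the sub-methods combined in the hybrid models, the Mahalanobis distance, can be sketched on its own: fit a Gaussian to the driving samples and score each sample by its standardized distance from the mean, flagging the largest scores as anomalies. The data below are synthetic stand-ins, not the study's drive logs:

```python
import numpy as np

def mahalanobis_scores(X, eps=1e-9):
    """Distance of each sample from the Gaussian fitted to the data.

    Thresholding these scores is one simple anomaly detector; the study
    combines it with an LSTM-Autoencoder and LOF in hybrid models.
    """
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False) + eps * np.eye(X.shape[1])
    inv_cov = np.linalg.inv(cov)
    diff = X - mu
    return np.sqrt(np.einsum("ij,jk,ik->i", diff, inv_cov, diff))

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(500, 3))   # nominal driving samples
outlier = np.array([[8.0, 8.0, 8.0]])          # injected anomalous sample
X = np.vstack([normal, outlier])
scores = mahalanobis_scores(X)
```

The injected outlier receives by far the largest score, which is the behavior the hybrid models exploit when isolating risky or energy-wasteful driving segments.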
ABSTRACT
Unmanned aerial vehicles (UAVs) have been widely considered to enhance the communication coverage, as well as the wireless power transfer (WPT) of energy-constrained communication networks to prolong their lifetime. However, the trajectory design of a UAV in such a system remains a key problem, especially considering the three-dimensional (3D) feature of the UAV. To address this issue, a UAV-assisted dual-user WPT system was investigated in this paper, where a UAV-mounted energy transmitter (ET) flies in the air to broadcast wireless energy to charge the energy receivers (ERs) on the ground. By optimizing the UAV's 3D trajectory toward a balanced tradeoff between energy consumption and WPT performance, the energy harvested by all ERs during a given mission period was maximized. The above goal was achieved through the following detailed designs. On the one hand, on the basis of previous research results, there is a one-to-one correspondence between the UAV's abscissa and height, so only the relationship between the height and time was focused on in this work to obtain the UAV's optimal 3D trajectory. On the other hand, the idea of calculus was employed to calculate the total harvested energy, leading to the proposed high-efficiency trajectory design. Finally, the simulation results demonstrated that this contribution is capable of enhancing the energy supply by carefully designing the 3D trajectory of the UAV, compared to its conventional counterpart. In general, the above-mentioned contribution could be a promising way for UAV-aided WPT in the future Internet of Things (IoT) and wireless sensor networks (WSNs).
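The "idea of calculus" mentioned above amounts to integrating the harvested power along the UAV's trajectory over the mission period. A minimal sketch with a linear energy-harvesting model under a free-space channel; all parameter values (channel gain, transmit power, harvesting efficiency) are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def harvested_power(height, beta0=1e-3, p_tx=40.0, eta=0.5):
    """Linear EH model under a free-space channel: received power falls
    off with the squared UAV-to-ER distance. Values are assumptions."""
    return eta * p_tx * beta0 / np.asarray(height) ** 2

def harvested_energy(heights, times):
    """Trapezoidal integration of harvested power over the mission,
    mirroring the calculus-based total-energy computation."""
    p = harvested_power(heights)
    t = np.asarray(times)
    return float(np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(t)))

t = np.linspace(0.0, 60.0, 601)              # 60 s mission
descending = np.linspace(20.0, 5.0, 601)     # UAV descends toward the ER
hovering_high = np.full(601, 20.0)           # stays at 20 m the whole time
e_descend = harvested_energy(descending, t)
e_high = harvested_energy(hovering_high, t)
```

Comparing candidate height profiles this way is how a balanced 3D trajectory can be selected: flying lower boosts WPT performance, at the cost of the propulsion energy the full model also accounts for.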
ABSTRACT
Home appliances account for a large portion of smart homes' energy consumption, owing to the abundant use of IoT devices. Various home appliances, such as heaters, dishwashers, and vacuum cleaners, are used every day, and proper control of these appliances could significantly reduce energy use. For this purpose, optimization techniques focusing mainly on energy reduction are used. Current optimization techniques somewhat reduce energy use but overlook user convenience, which was the main goal of introducing home appliances. Therefore, there is a need for an optimization method that effectively addresses the trade-off between energy saving and user convenience. Optimization techniques should also include weather metrics beyond temperature and humidity to effectively optimize the energy cost of maintaining the user's desired indoor setting in a smart home. This research work presents an optimization technique that addresses the trade-off between energy saving and user convenience while incorporating air pressure, dew point, and wind speed. To test the optimization, a hybrid approach utilizing GWO and PSO was modeled. The work also enables proactive energy optimization through appliance energy prediction, for which an LSTM model was designed. Through predictions and optimized control, smart home appliances can be proactively and effectively controlled. First, we evaluated the RMSE score of the predictive model and found that the proposed model yields low RMSE values. Second, we conducted several simulations and found that the proposed optimization provides energy cost savings when controlling appliances to maintain the desired indoor setting of the smart home. The energy cost reduction achieved by the optimization strategies was evaluated on seasonal and monthly data patterns for verification. Hence, the proposed work is considered a strong candidate solution for proactively optimizing the energy of smart homes.
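The energy-versus-convenience trade-off can be captured in a single weighted objective that any metaheuristic (GWO, PSO, or their hybrid) can minimize. The sketch below uses a toy quadratic thermal model and a plain grid search standing in for the GWO-PSO hybrid; every parameter value and the thermal model itself are illustrative assumptions:

```python
def fitness(setpoint, outdoor_temp, user_pref, price_per_kwh=0.2,
            w_energy=0.6, w_comfort=0.4, k=0.5):
    """Weighted objective balancing energy cost and user discomfort.

    k converts the squared indoor/outdoor temperature gap into kWh --
    a toy thermal model assumed purely for illustration.
    """
    energy_kwh = k * (setpoint - outdoor_temp) ** 2
    cost = energy_kwh * price_per_kwh
    discomfort = abs(setpoint - user_pref)
    return w_energy * cost + w_comfort * discomfort

# Grid search stands in for the GWO-PSO hybrid: the best setpoint sits
# between the cheapest choice (the outdoor temperature) and the most
# comfortable one (the user's preference).
candidates = [c / 2 for c in range(20, 61)]   # 10.0 .. 30.0 degrees C
best = min(candidates,
           key=lambda s: fitness(s, outdoor_temp=10.0, user_pref=22.0))
```

The optimum lands strictly between 10 °C and 22 °C, illustrating why a pure energy-minimizing controller (which would pick the outdoor temperature) and a pure comfort controller (which would pick the preference) are both outperformed by the weighted trade-off.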
ABSTRACT
With the construction and development of modern smart cities, people's lives are becoming more intelligent and diversified. Surveillance systems increasingly play an active role in target tracking, vehicle identification, traffic management, etc. In a 6G network environment, ordinary processing platforms struggle to meet the computing demands of the massive, large-scale data in such monitoring systems. This paper provides a data governance solution based on a 6G environment. The shortcomings of key wireless sensor network technologies, namely the shortage of energy supply and high energy consumption in practical applications, are addressed through ZigBee energy optimization. At the same time, this improved routing algorithm is combined with embedded cloud computing to optimize the monitoring system and achieve efficient data processing. Research and experiments show that the ZigBee-optimized wireless sensor network consumes less energy in practice and also increases the service life of the network. This optimized data monitoring system ensures data security and reliability.
Subjects
Cloud Computing, Wireless Technology, Humans, Reproducibility of Results, Algorithms, Physical Phenomena
ABSTRACT
In this paper, a data preprocessing methodology, EDA (Exploratory Data Analysis), is used to explore the data captured from the sensors of a fluid bed dryer in order to reduce the energy consumption during the preheating phase. The objective of this drying process is the extraction of liquids such as water through the injection of dry and hot air. The time taken to dry a pharmaceutical product is typically uniform, independent of the product weight (kg) or the type of product. However, the time it takes to heat up the equipment before drying can vary depending on different factors, such as the skill level of the person operating the machine. EDA is a method of evaluating or comprehending sensor data to derive insights and key characteristics, and it is a critical component of any data science or machine learning process. The exploration and analysis of the sensor data from experimental trials has facilitated the identification of an optimal configuration, with an average reduction in preheating time of one hour. For each processed batch of 150 kg in the fluid bed dryer, this translates into an energy saving of around 18.5 kWh, giving an annual energy saving of over 3,700 kWh.
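The reported figures are internally consistent and worth a quick back-of-the-envelope check, assuming the annual total is built from identical 150 kg batches (the batch count itself is not stated in the abstract):

```python
# Back-of-the-envelope check of the reported savings.
saving_per_batch_kwh = 18.5    # reported saving per 150 kg batch
annual_saving_kwh = 3_700.0    # reported annual lower bound

# Implied number of batches per year under the equal-batch assumption.
batches_per_year = annual_saving_kwh / saving_per_batch_kwh

# One hour less preheating per batch implies an average preheater
# power draw of roughly 18.5 kWh / 1 h = 18.5 kW.
implied_preheater_kw = saving_per_batch_kwh / 1.0
```

So the annual figure corresponds to about 200 batches per year, and the one-hour reduction implies a preheater drawing on the order of 18.5 kW on average, both plausible for equipment of this scale.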
ABSTRACT
Clean energy is urgently needed to realize the sustainable development (SD) of mining projects. This study discusses the clean energy development path for mining projects and the related SD issues for the ecological environment, driven by big data. It adopts a comprehensive research approach, including a literature review, case analysis, and model construction. First, an in-depth literature review of the development status of clean energy is carried out, exploring existing research results and technology applications. Second, some typical mining projects are selected as cases to discuss the practice and effect of their clean energy applications. Finally, the corresponding clean energy development path and an SD analysis model of the ecological environment are constructed based on big data technology to evaluate the feasibility and potential benefits of promoting and applying clean energy in mining projects. (1) It is observed that under different Gross Domestic Product (GDP) growth rates, the new and cumulative installed capacities of wind energy show an increasing trend. In 2022, under the low GDP growth rate, the cumulative installed capacity of global wind energy was 370.60 gigawatts (GW), and the new installed capacity was 45 GW. Under the high GDP growth rate, the cumulative and new installed capacities were 367.83 GW and 46 GW. As the economy grows, new wind energy capacity is expected to increase significantly by 2030. In 2046, 2047, and 2050, carbon dioxide (CO2) emissions reductions are projected to be 8183.35, 8539.22, and 9842.73 million tons (Mt) in the low scenario; 8750.68, 9087.16, and 10,468.75 Mt in the medium scenario; and 9083.03, 9458.86, and 10,879.58 Mt in the high scenario. By 2060, CO2 emissions reduction is expected to continue to increase. (2) The proposed clean energy development path model achieves good results. Through this study, we hope to provide empirical support and a decision-making reference for the development of clean energy in mining projects and to promote the SD of the mining industry, thus achieving a win-win situation of economic and ecological benefits. This is of great significance for protecting the ecological environment and realizing the sustainable utilization of resources.
Subjects
Carbon Dioxide, Sustainable Development, Big Data, Mining, Economic Development, Renewable Energy
ABSTRACT
Identification and characterization of N-glycopeptides from complex samples are usually based on tandem mass spectrometric measurements. Experimental settings, especially the collision energy selection method, fundamentally influence the obtained fragmentation pattern and hence the confidence of the database search results ("score"). Using standards of naturally occurring glycoproteins, we mapped the Byonic and pGlyco search engine scores of almost 200 individual N-glycopeptides as a function of collision energy settings on a quadrupole time of flight instrument. The resulting unprecedented amount of peptide-level information on such a large and diverse set of N-glycopeptides revealed that the peptide sequence heavily influences the energy for the highest score on top of an expected general linear trend with m/z. Search engine dependence may also be noteworthy. Based on the trends, we designed an experimental method and tested it on HeLa, blood plasma, and monoclonal antibody samples. As compared to the literature, these notably lower collision energies in our workflow led to 10-50% more identified N-glycopeptides, with higher scores. We recommend a simple approach based on a small set of reference N-glycopeptides easily accessible from glycoprotein standards to ease the precise determination of optimal methods on other instruments. Data sets can be accessed via the MassIVE repository (MSV000089657 and MSV000090218).
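The general linear trend of optimal collision energy with m/z means a small set of reference N-glycopeptides suffices to calibrate CE(m/z) on a new instrument via least squares. A sketch with made-up (m/z, optimal CE) pairs — the values below are illustrative, not the MassIVE data:

```python
import numpy as np

# Hypothetical (precursor m/z, score-maximizing collision energy) pairs
# measured on reference N-glycopeptides from glycoprotein standards.
mz = np.array([800.0, 1000.0, 1200.0, 1400.0, 1600.0])
best_ce = np.array([18.0, 22.5, 26.0, 30.5, 34.0])   # eV, illustrative

# The observed trend is roughly linear in m/z, so a least-squares line
# yields a transferable CE(m/z) calibration for a given instrument.
slope, intercept = np.polyfit(mz, best_ce, 1)

def collision_energy(precursor_mz):
    """Instrument-specific collision energy predicted from precursor m/z."""
    return slope * precursor_mz + intercept
```

Because the abstract notes the peptide sequence shifts the optimum on top of this trend, such a calibration is a starting point per instrument, not a universal rule; re-measuring the reference set is the transfer step the authors recommend.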
Subjects
Glycopeptides, Proteomics, Glycopeptides/analysis, Proteomics/methods, Glycosylation, Tandem Mass Spectrometry/methods, Glycoproteins/chemistry, Peptides
ABSTRACT
Mobile applications are progressively becoming more sophisticated and complex, increasing their computational requirements. Traditional offloading approaches that use exclusively the Cloud infrastructure are now deemed unsuitable due to the inherent associated delay. Edge Computing can address most of the Cloud limitations at the cost of limited available resources. This bottleneck necessitates an efficient allocation of offloaded tasks from the mobile devices to the Edge. In this paper, we consider a task offloading setting with applications of different characteristics and requirements, and propose an optimal resource allocation framework leveraging the amalgamation of the edge resources. To balance the trade-off between retaining low total energy consumption, respecting end-to-end delay requirements and load balancing at the Edge, we additionally introduce a Markov Random Field based mechanism for the distribution of the excess workload. The proposed approach investigates a realistic scenario, including different categories of mobile applications, edge devices with different computational capabilities, and dynamic wireless conditions modeled by the dynamic behavior and mobility of the users. The framework is complemented with a prediction mechanism that facilitates the orchestration of the physical resources. The efficiency of the proposed scheme is evaluated via modeling and simulation and is shown to outperform a well-known task offloading solution, as well as a more recent one.
ABSTRACT
In Wireless Body Area Networks (WBANs), energy consumption, energy harvesting, and data communication are the three most important issues. In this paper, we develop an optimal allocation algorithm (OAA) for sensor devices that are carried by or implanted in the human body, harvest energy from their surroundings, and are powered by batteries. Based on the optimal allocation algorithm, which uses a two-timescale Lyapunov optimization approach, we design a framework for the joint optimization of network service cost and network utility to study energy, communication, and allocation management at the network edge. We then formulate the utility maximization problem of network service cost management based on this framework. Specifically, we use OAA, which does not require prior knowledge of energy harvesting, to decompose the problem into three subproblems: battery management, data collection amount control, and transmission energy consumption control. We solve these through OAA to achieve three main goals: (1) balancing the cost of energy consumption and the cost of data transmission while minimizing the service cost of the devices; (2) keeping energy consumption and energy collection balanced under the condition of stable queues; and (3) maximizing the network utility of the devices. The simulation results show that the proposed algorithm effectively optimizes network performance.
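The "stable queue" condition central to Lyapunov optimization uses the standard queue dynamics Q(t+1) = max(Q(t) − b(t), 0) + a(t): as long as the average service rate exceeds the average arrival rate, the backlog stays bounded. A minimal sketch of that dynamic (the rates are illustrative, not the paper's model parameters):

```python
def queue_update(q, arrivals, service):
    """Standard Lyapunov queue dynamics: Q(t+1) = max(Q(t) - b(t), 0) + a(t).

    Keeping every such queue bounded is the stability condition under
    which the two-timescale framework trades service cost for utility."""
    return max(q - service, 0.0) + arrivals

# Simulate a data queue whose average service rate (0.6) exceeds the
# average arrival rate (0.5): the backlog stays bounded, i.e. stable.
q = 0.0
history = []
for t in range(1000):
    a = 1.0 if t % 2 == 0 else 0.0   # bursty arrivals, mean rate 0.5
    b = 0.6                          # constant service rate
    q = queue_update(q, a, b)
    history.append(q)
```

With these rates the backlog oscillates between 0.4 and 1.0 and never grows, which is exactly the bounded-queue behavior the drift-plus-penalty analysis guarantees.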
Subjects
Physiological Phenomena, Humans, Physical Phenomena, Algorithms, Communication, Computer Simulation
ABSTRACT
In this paper, we investigate an unmanned aerial vehicle (UAV)-assisted wireless power transfer (WPT) system, in which a set of UAV-mounted mobile energy transmitters (ETs) are dispatched to broadcast wireless energy to an energy receiver (ER) on the ground. In particular, we aim to maximize the amount of energy transferred to the ER during a finite UAV flight period, subject to the UAVs' maximum speed and collision avoidance constraints. First, the basic one- and two-UAV scenarios are studied in detail, showing that the UAVs should hover at fixed locations during the whole charging period. The Lagrange multiplier method is employed to solve the proposed optimization problem in the two-UAV case. The general conclusions drawn from the theoretical analysis of the one- and two-UAV scenarios then contribute to deducing the trajectory design as the number of UAVs increases from three to seven. The obtained trajectory solution implies that the UAVs should be evenly distributed on the circumference of the circle centered at point (0,0,H) with the UAVs' safe distance as the radius. Finally, numerical results are provided to validate the trajectory design algorithm for the multiple-UAV-enabled single-user WPT system.
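The derived hover solution — UAVs evenly spaced on a circle of radius equal to the safe distance, centered above the ER at height H — is easy to construct explicitly:

```python
import math

def hover_positions(n_uavs, safe_distance, height):
    """Hover points evenly spaced on the circle centered at (0, 0, height)
    with the UAVs' safe distance as the radius, per the derived solution."""
    return [(safe_distance * math.cos(2 * math.pi * k / n_uavs),
             safe_distance * math.sin(2 * math.pi * k / n_uavs),
             height)
            for k in range(n_uavs)]

pts = hover_positions(n_uavs=5, safe_distance=2.0, height=10.0)
radii = [math.hypot(x, y) for x, y, _ in pts]
```

Every hover point sits at the same horizontal distance from the point directly above the ER, so all UAVs see an identical channel to the ER while the angular spacing keeps neighbors apart.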
ABSTRACT
Clustering is a promising technique for optimizing energy consumption in sensor-enabled Internet of Things (IoT) networks. An uneven distribution of cluster heads (CHs) across the network, repeatedly choosing the same IoT nodes as CHs, and selecting cluster heads within the communication range of other CHs are the major problems leading to higher energy consumption in IoT networks. In this paper, using fuzzy logic, bio-inspired chicken swarm optimization (CSO), and a genetic algorithm, an optimal cluster formation is presented as a Hybrid Intelligent Optimization Algorithm (HIOA) to minimize overall energy consumption in an IoT network. In HIOA, the key idea for the formation of IoT nodes into clusters is to find chromosomes with a minimum-value fitness function over the relevant network parameters. The fitness function minimizes the inter- and intra-cluster distance, reducing interference, and the energy consumed in communication per round. The hierarchical order classification of CSO utilizes the crossover and mutation operations of the genetic approach to increase population diversity, which ultimately resolves the uneven distribution of CHs and yields a balanced network load. The proposed HIOA algorithm is simulated in MATLAB 2019a and its performance over the CSO parameters is analyzed; the best fitness value is obtained by setting the parameters popsize=60, number of roosters Nr=0.3, number of hens Nh=0.6, and swarm updating frequency θ=10. Further, comparative results show that HIOA is more effective than traditional bio-inspired algorithms in terms of node death percentage, average residual energy, and network lifetime by 12%, 19%, and 23%, respectively.
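A chromosome's quality in such a scheme can be scored by a fitness that penalizes long member-to-CH distances, rewards well-separated CHs, and charges a transmission-energy term. The sketch below is a toy version of that idea — the weights, the 1e-3 energy coefficient, and the two-cluster layout are illustrative assumptions, not the HIOA formulation itself:

```python
import math

def cluster_fitness(nodes, chs, assignment,
                    w_intra=0.5, w_inter=0.3, w_energy=0.2):
    """Toy cluster-formation fitness (lower is better): intra-cluster
    distance and per-round energy are penalized, CH separation is
    rewarded (subtracted). Weights are illustrative assumptions."""
    intra = sum(math.dist(nodes[i], chs[assignment[i]])
                for i in range(len(nodes)))
    inter = sum(math.dist(a, b)
                for i, a in enumerate(chs) for b in chs[i + 1:])
    energy = sum(1e-3 * math.dist(nodes[i], chs[assignment[i]]) ** 2
                 for i in range(len(nodes)))
    return w_intra * intra - w_inter * inter + w_energy * energy

nodes = [(0.0, 0.0), (1.0, 0.0), (10.0, 10.0), (11.0, 10.0)]
good_chs = [(0.5, 0.0), (10.5, 10.0)]   # one CH per natural cluster
bad_chs = [(0.5, 0.0), (1.5, 0.0)]      # both CHs crowded together
good = cluster_fitness(nodes, good_chs, [0, 0, 1, 1])
bad = cluster_fitness(nodes, bad_chs, [0, 0, 1, 1])
```

Spreading the CHs across the two natural node groups scores strictly better than crowding them together, which is the imbalance the CSO-plus-genetic-operator search is designed to escape.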
Subjects
Internet of Things, Animals, Chickens, Cluster Analysis, Communication, Computer Communication Networks, Female, Male
ABSTRACT
In this paper, an intelligent data analysis method for modeling and optimizing energy efficiency in smart buildings through Data Analytics (DA) is proposed. The objective is to provide a Decision Support System (DSS) that helps experts quantify and optimize energy efficiency in smart buildings, and that reveals insights supporting the detection of anomalous behaviors at early stages. First, historical data and Energy Efficiency Indicators (EEIs) of the building are analyzed to extract knowledge from the behavioral patterns in the building's historical data. Then, using this knowledge, a classification method is proposed to compare days with different features, seasons, and other characteristics. The resulting clusters are further analyzed, inferring key features to predict and quantify energy efficiency on days with similar features but potentially different behaviors. Finally, the results reveal insights that highlight inefficiencies and correlate anomalous behaviors with energy efficiency in the smart building. The proposed approach was tested on the BlueNet building and also integrated with Eugene, a commercial energy efficiency tool for optimizing energy consumption in smart buildings.
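The day-classification step can be sketched with a tiny k-means over daily consumption profiles — days with similar load shapes land in the same cluster, whose centroid then serves as the expected-behavior baseline. This is a generic stand-in for the paper's method, with synthetic profiles and deterministic initialization:

```python
import numpy as np

def kmeans(profiles, k, iters=20):
    """Tiny k-means for grouping days with similar consumption profiles.
    Centers start from evenly spaced samples to keep the sketch deterministic."""
    idx = np.linspace(0, len(profiles) - 1, k).astype(int)
    centers = profiles[idx].astype(float).copy()
    labels = np.zeros(len(profiles), dtype=int)
    for _ in range(iters):
        dists = ((profiles[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = profiles[labels == j].mean(axis=0)
    return labels, centers

# Two obvious day types: flat low-load weekends vs peaky weekdays
# (4 coarse time-of-day bins per profile, values in arbitrary kWh).
weekend = np.tile([1.0, 1.0, 1.2, 1.0], (5, 1))
weekday = np.tile([2.0, 8.0, 9.0, 3.0], (5, 1))
profiles = np.vstack([weekend, weekday])
labels, centers = kmeans(profiles, k=2)
```

A day whose profile sits far from its own cluster centroid is then a natural candidate for the anomalous-behavior flagging the DSS performs.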
Subjects
Energy Conservation, Data Science, Physical Phenomena
ABSTRACT
This paper presents a quantized Kalman filter implemented using unreliable memories. We consider that both the quantization and the unreliable memories introduce errors in the computations, and we develop an error propagation model that takes into account these two sources of errors. In addition to providing updated Kalman filter equations, the proposed error model accurately predicts the covariance of the estimation error and gives a relation between the performance of the filter and its energy consumption, depending on the noise level in the memories. Then, since memories are responsible for a large part of the energy consumption of embedded systems, optimization methods are introduced to minimize the memory energy consumption under the desired estimation performance of the filter. The first method computes the optimal energy levels allocated to each memory bank individually, and the second one optimizes the energy allocation per groups of memory banks. Simulations show a close match between the theoretical analysis and experimental results. Furthermore, they demonstrate an important reduction in energy consumption of more than 50%.
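The qualitative effect of unreliable memories can be sketched with a scalar Kalman filter in which memory-induced errors add an extra variance term to each prediction. The paper's error-propagation model is richer; this sketch only reproduces the direction of the covariance/energy trade-off, and all numeric values are illustrative:

```python
def kalman_steady_p(q, r, mem_var, steps=500):
    """Steady-state error covariance of a scalar Kalman filter tracking a
    random walk (process noise q, measurement noise r), with mem_var
    modeling extra variance injected by noisy, low-energy memories."""
    p = 1.0
    for _ in range(steps):
        p_pred = p + q + mem_var          # predict, plus memory-induced error
        k = p_pred / (p_pred + r)         # Kalman gain
        p = (1.0 - k) * p_pred            # covariance update
    return p

reliable = kalman_steady_p(q=0.01, r=1.0, mem_var=0.0)
low_energy = kalman_steady_p(q=0.01, r=1.0, mem_var=0.05)  # cheaper, noisier memory
```

Lowering the memory supply voltage saves energy but raises `mem_var`, and the steady-state covariance rises with it; the paper's optimization methods pick per-bank energy levels so this degradation stays within the desired estimation performance.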
ABSTRACT
As an efficient way to integrate multiple distributed energy resources (DERs) with the user side, a microgrid mainly faces the problems of the small-scale volatility, uncertainty, and intermittency of DERs, together with demand-side uncertainty. The traditional microgrid has a single form and cannot meet the need for flexible energy dispatch between the complex demand side and the microgrid. In response to this problem, an overall environment comprising wind power, thermostatically controlled loads (TCLs), energy storage systems (ESSs), price-responsive loads, and the main grid is proposed. Centralized control of microgrid operation is convenient for controlling the reactive power and voltage of the distributed power supply and adjusting the grid frequency. However, flexible loads aggregate and generate peaks during the electricity price valley. Existing research takes into account the power constraints of the microgrid but fails to ensure a sufficient supply of electric energy for a single flexible load. This paper considers the response priority of each unit component of the TCLs and ESSs on the basis of the overall operating environment of the microgrid, so as to ensure the power supply of the flexible loads of the microgrid and to minimize the power input cost. Finally, the simulation optimization of the environment is expressed as a Markov decision process (MDP), and training combines offline and online operation stages. Because multithreaded training without learning from historical data leads to low learning efficiency, an asynchronous advantage actor-critic with an experience replay memory pool (Memory A3C, M-A3C) is added to solve the data correlation and non-stationary distribution problems during training. The multithreaded operation of M-A3C can efficiently learn the resource priority allocation on the demand side of the microgrid and improve the flexible scheduling of the demand side, which greatly reduces the input cost. Comparison of the cost optimization results with those obtained with the proximal policy optimization (PPO) algorithm reveals that the proposed algorithm has better performance in terms of convergence and optimization economics.