Results 1 - 20 of 1,270
1.
J Appl Lab Med ; 8(1): 180-193, 2023 Jan 04.
Article in English | MEDLINE | ID: mdl-36610429

ABSTRACT

BACKGROUND: Clinical and anatomical pathology services are increasingly utilizing cloud information technology (IT) solutions to meet growing requirements for storage, computation, and other IT services. Cloud IT solutions are often considered on the promise of low cost of entry, durability and reliability, scalability, and features that are typically out of reach for small- or mid-sized IT organizations. However, use of cloud-based IT infrastructure also brings additional security and privacy risks to organizations, as unfamiliarity, public networks, and complex feature sets contribute to an increased attack surface. CONTENT: In this best-practices guide, we aim to help both managers and IT professionals in healthcare environments understand the requirements and risks of using cloud-based IT infrastructure within the laboratory environment. We describe technical, operational, and organizational best practices that can help mitigate security, privacy, and other risks associated with the use of cloud infrastructure; furthermore, we identify how these best practices fit into healthcare regulatory frameworks. Among organizational best practices, we identify the need for specific hiring requirements, relationships with parent IT groups, mechanisms for reviewing and auditing security practices, and sound practices for onboarding and offboarding employees. Then, we highlight selected operational security, account security, and auditing/logging best practices. Finally, we describe how individual cloud technologies have specific resource-level security features. SUMMARY: We emphasize that laboratory directors, managers, and IT professionals must ensure that the fundamental organizational and process-based requirements are addressed first, to establish the groundwork for technical security solutions and the successful implementation of cloud infrastructure.
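As an illustration of the kind of resource-level operational controls the guide refers to, the sketch below hardens a cloud storage bucket with default encryption, a public-access block, and access logging. AWS and its boto3 SDK are our choice of example, not something the article prescribes, and the bucket names are hypothetical.

```python
# Illustrative sketch (not the article's method): hardening a cloud storage
# bucket with default encryption and access logging via AWS's boto3 SDK.
import boto3

s3 = boto3.client("s3")
BUCKET = "example-lab-data-bucket"  # hypothetical bucket name

# Enforce server-side encryption by default for all new objects.
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]
    },
)

# Block all forms of public access to the bucket.
s3.put_public_access_block(
    Bucket=BUCKET,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# Send access logs to a separate audit bucket for periodic review.
s3.put_bucket_logging(
    Bucket=BUCKET,
    BucketLoggingStatus={
        "LoggingEnabled": {"TargetBucket": "example-audit-logs",
                           "TargetPrefix": f"{BUCKET}/"}
    },
)
```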


Subjects
Cloud Computing, Privacy, Humans, Reproducibility of Results, Delivery of Health Care
2.
Sensors (Basel) ; 23(2)2023 Jan 05.
Article in English | MEDLINE | ID: mdl-36679436

ABSTRACT

IoT environments are becoming increasingly heterogeneous in terms of their distribution and the entities involved, collaboratively drawing not only on the data centers known from cloud computing but also on different types of third-party entities that can provide computing resources. To provide such resources transparently and facilitate trust between the involved entities, it is necessary to develop and implement smart contracts. However, when developing smart contracts, developers face many challenges and concerns, such as security, contract correctness, and a lack of documentation and/or design patterns. To address this problem, we propose a new recommender system to facilitate the development and implementation of low-cost EVM-enabled smart contracts. The recommender system's algorithm provides the smart contract developer with smart contract templates that match their requirements and that are relevant to the typology of the fog architecture. It mainly relies on OpenZeppelin, a modular, reusable, and secure smart contract library that we use when classifying the smart contracts. The evaluation results indicate that using our solution reduces overall smart contract development time. Moreover, the resulting smart contracts are sustainable for fog-computing IoT environments and applications on low-cost EVM-based ledgers. The recommender system has been successfully implemented in the ONTOCHAIN ecosystem, demonstrating its applicability.
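A minimal sketch of the template-matching idea follows. The template names, feature tags, and similarity measure are invented for illustration; the paper's actual classification over the OpenZeppelin library may differ substantially.

```python
# Toy recommender: rank smart contract templates by how well their feature
# tags match the developer's stated requirements (Jaccard similarity).
TEMPLATES = {
    "ERC20Basic":       {"token", "fungible", "transfer"},
    "ERC721Market":     {"token", "nft", "marketplace"},
    "AccessControlled": {"access-control", "roles", "security"},
    "PaymentSplitter":  {"payments", "escrow", "security"},
}

def jaccard(a: set, b: set) -> float:
    """Overlap between requirement tags and template feature tags."""
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(requirements: set, k: int = 3):
    """Return the top-k templates ranked by tag similarity."""
    scored = sorted(TEMPLATES.items(),
                    key=lambda kv: jaccard(requirements, kv[1]),
                    reverse=True)
    return scored[:k]

print(recommend({"token", "security", "transfer"}))
```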


Subjects
Algorithms, Ecosystem, Cloud Computing, Documentation, Trust
3.
Sensors (Basel) ; 23(2)2023 Jan 15.
Article in English | MEDLINE | ID: mdl-36679792

ABSTRACT

The well-known cloud computing paradigm is being extended by the idea of fog computing, with computing nodes placed closer to end users to allow task processing under tighter latency requirements. However, the offloading of tasks (from end devices to either the cloud or to fog nodes) should be designed with energy consumption for both transmission and computation taken into account. The task allocation procedure can be challenging considering the high number of arriving tasks with various computational, communication, and delay requirements, and the high number of computing nodes with various communication and computing capabilities. In this paper, we propose an optimal task allocation procedure that minimizes consumed energy for a set of users connected wirelessly to a network composed of fog nodes (FNs) located at access points (APs) and cloud nodes (CNs). We optimize the assignment of APs and computing nodes to offloaded tasks, as well as the operating frequencies of the FNs. The considered problem is formulated as a Mixed-Integer Nonlinear Programming (MINLP) problem. The utilized energy consumption and delay models, as well as their parameters, related to both computation and communication costs, reflect the characteristics of real devices. The obtained results show that it is profitable to split the processing of tasks between multiple FNs and the cloud, often choosing different nodes for transmission and computation. The proposed algorithm manages to find the optimal allocations and outperforms all the considered alternative allocation strategies, resulting in the lowest energy consumption and task rejection rate. Moreover, a heuristic algorithm that decouples the optimization of wireless transmission from the implemented computations and wired transmission is proposed. It finds optimal or close-to-optimal solutions for all of the studied scenarios.
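The core trade-off can be shown with a toy exhaustive search over task-to-node assignments. All energy coefficients and task sizes below are invented; the paper's MINLP formulation additionally models delays, AP assignment, and FN operating frequencies.

```python
# Toy energy-aware task offloading: assign each task to the node minimizing
# transmission + computation energy, by brute force over all allocations.
from itertools import product

# node -> (energy per bit transmitted [J/bit], energy per CPU cycle [J/cycle])
NODES = {
    "fog1":  (2e-9, 1e-10),
    "fog2":  (3e-9, 0.8e-10),
    "cloud": (5e-9, 0.3e-10),
}
# task -> (size in bits, required CPU cycles)
TASKS = {"t1": (1e6, 5e8), "t2": (4e6, 1e9), "t3": (2e5, 2e9)}

def energy(node, task):
    e_tx, e_cpu = NODES[node]
    bits, cycles = TASKS[task]
    return e_tx * bits + e_cpu * cycles

best = min(
    product(NODES, repeat=len(TASKS)),   # every possible allocation
    key=lambda alloc: sum(energy(n, t) for n, t in zip(alloc, TASKS)),
)
for task, node in zip(TASKS, best):
    print(f"{task} -> {node} ({energy(node, task):.4f} J)")
```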


Subjects
Algorithms, Communication, Cloud Computing, Health Status, Heuristics
4.
Sensors (Basel) ; 23(2)2023 Jan 15.
Article in English | MEDLINE | ID: mdl-36679795

ABSTRACT

In industry, the hand-scraping method is a key technology for achieving high precision in machine tools, and the quality of scraped workpieces directly affects the accuracy and service life of the machine tool. However, most quality evaluation of scraped workpieces is carried out through the scraping worker's subjective judgment, which results in inconsistent quality and is time-consuming. Hence, in this research, an edge-cloud computing system was developed to obtain the relevant parameters, the percentage of points (POP) and the peak points per square inch (PPI), for evaluating the quality of scraped workpieces. On the cloud computing server side, a novel network called cascaded segmentation U-Net is proposed to segment the height of points (HOP) (around 40 µm in height) with high quality, favoring training on small datasets; a post-processing algorithm then automatically calculates POP and PPI. This research emphasizes the architecture of the network itself. The design of our network's components is based on the basic idea of the identity function, which not only solves the problem of misjudging the oil ditch and residual pigment but also allows the network to be trained end-to-end effectively. At the head of the network, a cascaded multi-stage pixel-wise classification is designed to obtain more accurate HOP borders. Furthermore, a "Cross-dimension Compression" stage is used to fuse high-dimensional semantic feature maps across the depth of the feature maps into low-dimensional feature maps, producing decipherable content for the final pixel-wise classification. Our system achieves error rates of 3.7% for POP and 0.9 points for PPI. The novel network achieves an Intersection over Union (IoU) of 90.2%.
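The post-processing step is easy to picture: given a binary mask of segmented contact points, POP is the covered fraction of the surface and PPI is a connected-component count per square inch. The sketch below is our reconstruction under those assumptions, not the authors' code; `pixels_per_inch` and the random mask are stand-ins.

```python
# Hedged reconstruction of the POP/PPI post-processing from a binary mask.
import numpy as np
from scipy import ndimage

def pop_and_ppi(mask: np.ndarray, pixels_per_inch: float) -> tuple:
    """mask: 2D boolean array from the segmentation network."""
    pop = 100.0 * mask.mean()                    # % of surface covered
    labeled, n_points = ndimage.label(mask)      # count connected components
    h, w = mask.shape
    area_in2 = (h / pixels_per_inch) * (w / pixels_per_inch)
    return pop, n_points / area_in2

mask = np.random.rand(600, 600) > 0.7            # stand-in for real output
print(pop_and_ppi(mask, pixels_per_inch=200))
```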


Subjects
Algorithms, Data Compression, Cloud Computing, Industries, Judgment, Computer-Assisted Image Processing
5.
Sensors (Basel) ; 23(2)2023 Jan 15.
Article in English | MEDLINE | ID: mdl-36679800

ABSTRACT

This article investigates and discusses challenges in the telecommunications field from multiple perspectives, catering to both academia and industry. It surveys the main points of the technological transformation toward the edge-cloud continuum from the view of a telco operator to show the complete picture, including the evolution of cloud-native computing, Software-Defined Networking (SDN), and network automation platforms. The cultural shift in software development and management brought by DevOps enabled the development of significant technologies in the telecommunications world, including network equipment, application development, and system orchestration. The effect of this cultural shift on the application area, especially from the IoT point of view, is investigated, as is the enormous change in service diversity and the capability to deliver services to massive numbers of devices. During the last two decades, desktop and server virtualization has played an active role in the Information Technology (IT) world; with the use of OpenFlow, SDN, and Network Functions Virtualization (NFV), the network revolution got underway. The shift from monolithic application development and deployment to microservices changed the whole picture. Meanwhile, data centers have evolved over several generations in which the control plane cannot cope with all the networks without an intelligent decision-making process that benefits from AI/ML techniques. AI also enables operators to forecast demand more accurately, anticipate network load, and adjust capacity and throughput automatically. Going one step further, zero-touch networking and service management (ZSM) is proposed to turn high-level human intents into low-level configurations for network elements with validated results, minimizing the ratio of faults caused by human intervention. Harmonizing all of this progress in different communication technologies has enabled the successful use of edge computing. Low-powered (from both energy and processing perspectives) IoT networks have disrupted customer and end-point demands within the sector and thus paved the path toward the edge computing concept, which completes the picture of the edge-cloud continuum.


Subjects
Cloud Computing, Technology, Humans, Automation, Industries, Information Technology
6.
Sensors (Basel) ; 23(2)2023 Jan 16.
Article in English | MEDLINE | ID: mdl-36679810

ABSTRACT

With the construction and development of modern smart cities, people's lives are becoming more intelligent and diversified. Surveillance systems increasingly play an active role in target tracking, vehicle identification, traffic management, etc. In the 6G network environment, ordinary processing platforms struggle to meet the computing demand posed by the massive, large-scale data in such monitoring systems. This paper provides a data governance solution based on a 6G environment. The shortcomings of critical technologies in wireless sensor networks, namely the limited energy supply and high energy consumption seen in practical deployments, are addressed through ZigBee energy optimization. This improved routing algorithm is then combined with embedded cloud computing to optimize the monitoring system and achieve efficient data processing. Experiments show that the ZigBee-optimized wireless sensor network consumes less energy in practice and extends the service life of the network. The optimized data monitoring system also ensures data security and reliability.


Subjects
Cloud Computing, Wireless Technology, Humans, Reproducibility of Results, Algorithms, Physical Phenomena
7.
Ann Fam Med ; 21(1): 85-87, 2023.
Article in English | MEDLINE | ID: mdl-36690477

ABSTRACT

On October 31, 2021, I learned that the electronic health record in my independent, solo practice had been attacked by a Russian syndicate that was holding our data and our practice management system for "ransom." An encryption key would be given to our cloud provider once $5,100,000 was delivered in bitcoin to the hacking entity. After 3 long months of negotiations, with us going back to a completely paper-based system in the interim, our cloud provider paid the Russian syndicate and access was restored. There were many lessons to be learned from our experience. We were fortunate, and through the help of many of our business associates we were able to survive and live to see another day.


Subjects
Cloud Computing, Computer Security, Humans, Electronic Health Records
8.
Sensors (Basel) ; 23(1)2023 Jan 02.
Article in English | MEDLINE | ID: mdl-36617078

ABSTRACT

This article deals with a unique new powertrain diagnostics platform operating at the level of a large number of EU25 inspection stations. The implemented method uses emission measurement data and additional data from a significant sample of vehicles. An original machine learning technique is applied that uses 9 static testing points (defined by constant engine load and constant engine speed), the volume of the engine combustion chamber, the EURO emission standard category, an engine condition state coefficient, and the actual mileage. An example of dysfunction detection using exhaust emission analysis is described in detail. The test setup is also described, along with the procedure for data collection using the MindSphere cloud data processing platform. MindSphere is the core of the new Platform as a Service (PaaS) for processing data from multiple testing facilities. A fleet-level evaluation using the quantile regression method is implemented. In this phase of the research, real data was used, as well as data defined on the basis of knowledge of how internal combustion engine defects manifest. As a result of applying the platform and the evaluation method, it is possible to classify combustion engine dysfunctions, i.e., defects that cannot be detected by self-diagnostic procedures for cars up to the EURO 6 level.
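To make the fleet-level quantile regression concrete, here is a hedged sketch on synthetic data: vehicles whose emissions sit far above a high conditional quantile are flagged as dysfunction candidates. The features and coefficients only mimic the paper's inputs (mileage, engine volume, etc.) and are not its actual model.

```python
# Illustrative fleet-level quantile regression on synthetic emission data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
mileage = rng.uniform(1e4, 3e5, n)
engine_volume = rng.uniform(1.0, 3.0, n)
# Synthetic CO emission rising with mileage, noisy across the fleet.
co = 0.2 + 1.5e-6 * mileage + 0.05 * engine_volume + rng.gumbel(0, 0.05, n)

X = sm.add_constant(np.column_stack([mileage, engine_volume]))
# Fit the 90th-percentile regression; vehicles above this conditional line
# are candidates for combustion-engine dysfunction.
model = sm.QuantReg(co, X).fit(q=0.9)
flagged = co > model.predict(X)
print(f"flagged {flagged.sum()} of {n} vehicles above the 90th percentile")
```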


Subjects
Machine Learning, Vehicle Emissions, Vehicle Emissions/analysis, Cloud Computing, Gasoline/analysis
9.
Sensors (Basel) ; 23(1)2023 Jan 02.
Article in English | MEDLINE | ID: mdl-36617079

ABSTRACT

This paper describes the main results of the JUNO project, a proof of concept developed in the Region of Murcia in Spain, where a smart assistant robot with capabilities for smart navigation and natural human interaction has been developed and deployed; it is being validated in an elderly care institution with real elderly users. The robot is focused on helping people carry out cognitive stimulation exercises and other entertainment activities, since it can detect and recognize people, safely navigate through the residence, and acquire information about users' attention while they are doing the mentioned exercises. All the information can be shared through the Cloud if needed, and health professionals, caregivers, and relatives can access it under the highest standards of privacy required in these environments. Several tests have been performed to validate the system, which combines classic techniques and new Deep Learning-based methods to carry out the requested tasks, including semantic navigation, face detection and recognition, speech-to-text and text-to-speech translation, and natural language processing, working in both local and Cloud-based environments and resulting in an economically affordable system. The paper also discusses the limitations of the platform and proposes several solutions to the detected drawbacks in this kind of complex environment, where the fragility of users should also be considered.


Subjects
Robotic Surgical Procedures, Robotics, Humans, Aged, Robotics/methods, Cloud Computing, Natural Language Processing, Exercise
10.
Health Informatics J ; 29(1): 14604582231152792, 2023.
Article in English | MEDLINE | ID: mdl-36645733

ABSTRACT

OBJECTIVES: Telehealth monitoring applications are latency-sensitive. Current fog-based telehealth monitoring models mainly focus on the role of fog computing in improving response time and latency. In this paper, we introduce a new service called a "priority queue" in the fog layer, which is programmed to prioritize the events sent by different sources in different environments to assist the cloud layer in reducing response time and latency. MATERIALS AND METHODS: We analyzed the performance of the proposed model in a fog-enabled cloud environment with the iFogSim toolkit. To compare cloud and fog computing environments, three parameters were used: response time, latency, and network usage. We used the Pima Indian diabetes dataset to evaluate the model. RESULTS: The fog layer proved to be very effective in improving response time while handling emergencies using priority queues. The proposed model reduces response time by 25.8%, latency by 36.18%, bandwidth by 28.17%, and network usage time by 41.4% as compared to the cloud. CONCLUSION: By combining priority queues and fog computing in this study, network usage, latency, bandwidth, and response time were significantly reduced as compared to cloud computing.
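The "priority queue" service itself is a small idea worth showing. A minimal sketch, assuming invented priority levels and event fields (the paper's event schema is not given): emergency events are dequeued before routine telemetry, which is what cuts response time for critical data.

```python
# Minimal fog-layer priority queue: lower priority value is served first;
# a counter keeps FIFO order among events of equal priority.
import heapq
import itertools

EMERGENCY, URGENT, ROUTINE = 0, 1, 2
counter = itertools.count()
queue = []

def push(priority: int, event: dict):
    heapq.heappush(queue, (priority, next(counter), event))

def pop() -> dict:
    return heapq.heappop(queue)[2]

push(ROUTINE,   {"patient": 17, "glucose": 105})
push(EMERGENCY, {"patient": 42, "glucose": 310})   # hyperglycemia alert
push(URGENT,    {"patient": 23, "glucose": 180})

print(pop())   # -> patient 42, forwarded to the cloud first
```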


Subjects
Clinical Decision Support Systems, Telemedicine, Humans, Cloud Computing
11.
Sci Rep ; 13(1): 491, 2023 Jan 10.
Article in English | MEDLINE | ID: mdl-36627353

ABSTRACT

The massive upsurge in cloud resource demand and inefficient load management undermine the sustainability of Cloud Data Centres (CDCs), resulting in high energy consumption, resource contention, excessive carbon emission, and security threats. In this context, a novel Sustainable and Secure Load Management (SaS-LM) model is proposed to enhance security for users along with sustainability for CDCs. The model estimates and reserves the required resources, viz. compute, network, and storage, and dynamically adjusts the load subject to maximum security and sustainability. An evolutionary optimization algorithm named Dual-Phase Black Hole Optimization (DPBHO) is proposed for optimizing a multi-layered feed-forward neural network, allowing the model to estimate resource usage and detect probable congestion. Further, DPBHO is extended to a multi-objective DPBHO algorithm for secure and sustainable VM allocation and management, minimizing the number of active server machines, carbon emission, and resource wastage for greener CDCs. SaS-LM is implemented and evaluated using benchmark real-world Google Cluster VM traces. The proposed model is compared with state-of-the-art approaches, revealing its efficacy: carbon emission and energy consumption are reduced by up to 46.9% and 43.9%, respectively, and resource utilization improves by up to 16.5%.
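For readers unfamiliar with the metaheuristic, here is a sketch of a basic, single-phase black hole optimization step; the paper's dual-phase, multi-objective DPBHO is more involved, and the sphere objective below is only a stand-in for a network-training loss.

```python
# Basic black hole optimization: stars drift toward the best solution (the
# "black hole"); stars crossing the event horizon are re-spawned randomly.
import numpy as np

def black_hole_optimize(f, dim=5, n_stars=20, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    stars = rng.uniform(-5, 5, (n_stars, dim))
    for _ in range(iters):
        fitness = np.array([f(s) for s in stars])
        bh = stars[fitness.argmin()].copy()       # best star becomes black hole
        # Pull every star toward the black hole.
        stars += rng.random((n_stars, 1)) * (bh - stars)
        # Event horizon radius: ratio of black hole fitness to total fitness.
        radius = f(bh) / (np.array([f(s) for s in stars]).sum() + 1e-12)
        absorbed = np.linalg.norm(stars - bh, axis=1) < radius
        stars[absorbed] = rng.uniform(-5, 5, (absorbed.sum(), dim))
    return bh

# Example: minimize the sphere function as a toy objective.
print(black_hole_optimize(lambda x: np.sum(x**2)))
```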


Subjects
Algorithms, Neural Networks (Computer), Cloud Computing
12.
Pac Symp Biocomput ; 28: 536-540, 2023.
Article in English | MEDLINE | ID: mdl-36541007

ABSTRACT

As biomedical research data grow, researchers need reliable and scalable solutions for storage and compute. There is also a need to build systems that encourage and support collaboration and data sharing, resulting in greater reproducibility. This has led many researchers and organizations to use cloud computing [1]. The cloud not only enables scalable, on-demand resources for storage and compute, but also supports collaboration and continuity during virtual work, and can provide superior security and compliance features. Moving to or adding cloud resources, however, is not trivial or without cost, and may not be the best choice in every scenario. The goal of this workshop is to explore the benefits of using the cloud in biomedical and computational research, and considerations (pros and cons) for a range of scenarios including individual researchers, collaborative research teams, consortia research programs, and large biomedical research agencies and organizations.


Subjects
Biomedical Research, Computational Biology, Humans, Cloud Computing, Reproducibility of Results, Information Dissemination
13.
PLoS One ; 17(12): e0278609, 2022.
Article in English | MEDLINE | ID: mdl-36459531

ABSTRACT

Genetic information provides insights into the exome, genome, epigenetics, and structural organisation of the organism. Given the enormous amount of genetic information, scientists are able to perform mammoth tasks to improve the standard of health care, such as determining genetic influences on the outcome of allogeneic transplantation. Cloud-based computing has increasingly become a key choice for many scientists, engineers, and institutions, as it offers on-demand network access and lets users conveniently rent, rather than buy, all required computing resources. Given the advancements in cloud computing and nanopore sequencing data output, we were motivated to develop an automated and scalable analysis pipeline utilizing cloud infrastructure in Microsoft Azure to accelerate the HLA genotyping service and improve the efficiency of the workflow at lower cost. In this study, we describe (i) the selection process for suitable virtual machine sizes for computing resources, balancing best performance against cost effectiveness; (ii) the building of Docker containers to include all tools in the cloud computational environment; (iii) the comparison of HLA genotype concordance between the in-house manual method and the automated cloud-based pipeline to assess data accuracy. In conclusion, the Microsoft Azure cloud-based data analysis pipeline was shown to meet all the key imperatives of performance, cost, usability, simplicity, and accuracy. Importantly, the pipeline allows for ongoing maintenance and the testing of version changes before implementation. This pipeline is suitable for data analysis from the MinION sequencing platform and could be adopted for other data analysis applications.
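Step (i), the VM-size selection, reduces to a small cost-versus-runtime comparison. The sketch below illustrates that logic only; the VM names, hourly prices, and runtimes are hypothetical, not actual Azure figures or the study's measurements.

```python
# Choosing a VM size by total run cost, optionally under a runtime cap.
VM_OPTIONS = [
    # (name, hourly price in USD, measured pipeline runtime in hours)
    ("Standard_D4",  0.19, 6.0),
    ("Standard_D8",  0.38, 3.2),
    ("Standard_D16", 0.77, 1.9),
]

def best_vm(options, max_hours=None):
    """Pick the cheapest total-cost VM among those meeting the runtime cap."""
    feasible = [o for o in options if max_hours is None or o[2] <= max_hours]
    return min(feasible, key=lambda o: o[1] * o[2])

name, price, hours = best_vm(VM_OPTIONS, max_hours=4)
print(f"{name}: {hours} h at ${price}/h = ${price * hours:.2f} per run")
```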


Subjects
Mammoths, Nanopore Sequencing, Animals, Cloud Computing, Data Analysis, Data Accuracy
14.
Sensors (Basel) ; 22(23)2022 Dec 05.
Article in English | MEDLINE | ID: mdl-36502189

ABSTRACT

To address the problems of the fixed convolutional kernel of a standard convolutional neural network and the isotropy of features, which make feature learning on 3D point cloud data ineffective, this paper proposes a point cloud processing method based on a graph convolution multilayer perceptron, named GC-MLP. Unlike traditional local aggregation operations, the algorithm generates an adaptive kernel through the dynamic learning of point features so that it can dynamically adapt to the structure of the object; i.e., the algorithm first adaptively assigns different weights to adjacent points according to the different relationships captured between points. Local information interaction is then performed with the convolutional layers through a weight-sharing multilayer perceptron. Experimental results show that, on different task benchmark datasets (including the ModelNet40, ShapeNet Part, and S3DIS datasets), our proposed algorithm achieves state-of-the-art performance for both point cloud classification and segmentation tasks.
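Our simplified reading of the adaptive-kernel idea is sketched below: each point aggregates its k nearest neighbors with per-edge weights derived from relative geometry, so the aggregation is anisotropic rather than uniform. The random projection standing in for the learned MLP, and all sizes, are illustrative, not the trained GC-MLP.

```python
# Toy adaptive neighbor weighting for a point cloud (numpy only).
import numpy as np

rng = np.random.default_rng(1)
points = rng.random((128, 3))                      # toy point cloud
k = 8

# k-nearest-neighbor indices from pairwise distances.
d = np.linalg.norm(points[:, None] - points[None], axis=-1)
knn = np.argsort(d, axis=1)[:, 1 : k + 1]

# Edge features: relative offsets to each neighbor, shape (N, k, 3).
edges = points[knn] - points[:, None]

# A random projection stands in for the learned MLP producing edge weights.
W = rng.normal(size=(3, 1))
weights = np.exp(edges @ W)                        # (N, k, 1), positive
weights /= weights.sum(axis=1, keepdims=True)      # normalize over neighbors

# Weighted aggregation: structure-adaptive, direction-aware features.
features = (weights * edges).sum(axis=1)           # (N, 3)
print(features.shape)
```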


Subjects
Algorithms, Neural Networks (Computer), Benchmarking, Cloud Computing, Learning
15.
Bull Environ Contam Toxicol ; 110(1): 7, 2022 Dec 13.
Article in English | MEDLINE | ID: mdl-36512073

ABSTRACT

The presence of suspended particulate matter (SPM) in a waterbody or river can be caused by multiple factors, such as pollutants discharged from poorly maintained sewage systems, siltation, sedimentation, floods, and even bacteria. In this study, remote sensing techniques were used to understand the effects of the pandemic-induced lockdown on the SPM concentration in the lower Tapi (Ukai) reservoir. The estimation was done using Landsat-8 OLI (Operational Land Imager) imagery, which has 12-bit radiometric resolution and a spatial resolution of 30 m. The Google Earth Engine (GEE) cloud computing platform was used to generate the products. GEE is a semi-automated workflow system offering a robust approach designed for the scientific analysis and visualization of geospatial datasets. An algorithm was deployed, and a time-series (2013-2020) analysis was performed for the study area. It was found that the mean SPM value in the Tapi River during 2020 was the lowest of the last seven years for the same period.
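A hedged Earth Engine sketch of such a time-series workflow follows. The study's actual SPM retrieval algorithm is not given in the abstract, so a simple red-band reflectance mean stands in as an SPM proxy, and the point coordinates only approximate the Ukai reservoir.

```python
# Yearly mean red-band reflectance over a region, via the GEE Python API.
import ee

ee.Initialize()

roi = ee.Geometry.Point(73.59, 21.25).buffer(5000)  # approximate Ukai reservoir

def yearly_mean(year):
    img = (ee.ImageCollection("LANDSAT/LC08/C02/T1_TOA")
           .filterBounds(roi)
           .filterDate(f"{year}-01-01", f"{year}-12-31")
           .select("B4")                             # red band, SPM-sensitive
           .mean())
    stat = img.reduceRegion(ee.Reducer.mean(), roi, scale=30)
    return stat.get("B4")

for year in range(2013, 2021):
    print(year, yearly_mean(year).getInfo())
```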


Subjects
COVID-19, Particulate Matter, Humans, Particulate Matter/analysis, Cloud Computing, Search Engine, Communicable Disease Control
16.
Int J Mol Sci ; 23(24)2022 Dec 13.
Article in English | MEDLINE | ID: mdl-36555493

ABSTRACT

Long-read sequencing (LRS) has been adopted to meet a wide variety of research needs, ranging from the construction of novel transcriptome annotations to the rapid identification of emerging virus variants. Amongst other advantages, LRS preserves more information about RNA at the transcript level than conventional high-throughput sequencing, including far more accurate and quantitative records of splicing patterns. New studies with LRS datasets are being published at an exponential rate, generating a vast reservoir of information that can be leveraged to address a host of different research questions. However, mining such publicly available data in a tailored fashion is currently not easy, as the available software tools typically require familiarity with the command-line interface, which constitutes a significant obstacle to many researchers. Additionally, different research groups utilize different software packages to perform LRS analysis, which often prevents a direct comparison of published results across different studies. To address these challenges, we have developed the Long-Read Analysis Pipeline for Transcriptomics (L-RAPiT), a user-friendly, free pipeline requiring no dedicated computational resources or bioinformatics expertise. L-RAPiT can be implemented directly through Google Colaboratory, a system based on the open-source Jupyter notebook environment, and allows for the direct analysis of transcriptomic reads from Oxford Nanopore and PacBio LRS machines. This new pipeline enables the rapid, convenient, and standardized analysis of publicly available or newly generated LRS datasets.


Subjects
Cloud Computing, RNA, RNA/genetics, Gene Expression Profiling/methods, Computational Biology/methods, Software, RNA Sequence Analysis, High-Throughput Nucleotide Sequencing/methods
17.
Sensors (Basel) ; 22(24)2022 Dec 16.
Article in English | MEDLINE | ID: mdl-36560272

ABSTRACT

Billions of Internet of Things (IoT) devices and sensors are expected to be supported by fifth-generation (5G) wireless cellular networks. This highly connected structure is predicted to attract different and previously unseen types of attacks on devices, sensors, and networks, requiring advanced mitigation strategies and active monitoring of system components. Therefore, a paradigm shift is needed, from traditional prevention and detection approaches toward resilience. This study proposes a trust-based defense framework to ensure resilient IoT services on 5G multi-access edge computing (MEC) systems. The framework is based on the trustability metric, which extends the concept of reliability and measures how much a system can be trusted to keep a given level of performance under a specific successful attack vector. Furthermore, trustability is traded off against system cost to measure the net utility of the system. Systems using multiple sensors with different levels of redundancy were tested, and the framework was shown to measure the trustability of the entire system. Different types of attacks were simulated on an edge cloud with multiple nodes, and trustability was compared against the capability to dynamically add nodes for redundancy and remove untrusted nodes. Finally, the defense framework measured the net utility of the service, comparing two types of edge clouds, with and without the node deactivation capability. Overall, the proposed defense framework based on trustability ensures a satisfactory level of resilience for IoT on 5G MEC systems as a trade-off with an accepted cost of redundant resources under various attacks.
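The trustability/net-utility trade-off can be sketched with reliability-style formulas. These are our reconstruction from the abstract's wording (redundant nodes, trust under an attack vector, utility minus resource cost), not the paper's exact definitions.

```python
# Sketch: trustability of a redundant node set, and net utility vs. cost.
from math import prod

def trustability(node_trust: list[float]) -> float:
    """Redundant nodes: the service holds if at least one node survives."""
    return 1.0 - prod(1.0 - t for t in node_trust)

def net_utility(node_trust, value: float, cost_per_node: float) -> float:
    """Trade trustability gained against the cost of redundant resources."""
    return value * trustability(node_trust) - cost_per_node * len(node_trust)

nodes = [0.90, 0.90, 0.90]      # per-node trust under a given attack vector
print(trustability(nodes))      # ~0.999
print(net_utility(nodes, value=100.0, cost_per_node=5.0))
# A 4th node raises trustability only marginally but lowers net utility.
print(net_utility(nodes + [0.90], value=100.0, cost_per_node=5.0))
```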


Subjects
Internet of Things, Cloud Computing, Reproducibility of Results, Internet, Trust
18.
PLoS One ; 17(12): e0279305, 2022.
Article in English | MEDLINE | ID: mdl-36574391

ABSTRACT

Real-time data collection and pre-processing have enabled the recognition, realization, and prediction of diseases by extracting and analysing the important features of physiological data. In this research, an intelligent end-to-end system for the anomaly detection and classification of raw, one-dimensional (1D) electrocardiogram (ECG) signals is given to assess cardiovascular activity automatically. The acquired raw ECG data is pre-processed carefully before being stored in the cloud and then deeply analyzed for anomaly detection. A deep learning-based auto-encoder (AE) algorithm is applied for the anomaly detection of 1D ECG time-series signals. As a next step, the implemented system performs identification via a multi-label classification algorithm. To improve the classification accuracy and model robustness, improved feature-engineered parameters of the large and diverse datasets have been incorporated. Training was done using Amazon Web Services (AWS) machine learning services and cloud-based storage for a unified solution. Multi-class classification of raw ECG signals is challenging due to the large number of possible label combinations and noise susceptibility. To overcome this problem, a performance comparison of a large set of machine learning algorithms in terms of classification accuracy is presented on an improved feature-engineered dataset. The proposed system reduces the raw signal size by up to 95% using wavelet time scattering features to make it less compute-intensive. The results show that, among several state-of-the-art techniques, the long short-term memory (LSTM) method achieved 100% classification accuracy and F1 score on the three-class test dataset. The ECG signal anomaly detection algorithm shows 98% accuracy using deep LSTM auto-encoders with a reconstruction error threshold of 0.02 in terms of absolute error loss. Our approach provides performance and predictive improvement, with an average mean absolute error loss of 0.0072 for normal signals and 0.078 for anomalous signals.
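A hedged Keras sketch of the LSTM auto-encoder idea follows. The 0.02 reconstruction-error threshold comes from the abstract; the window length, layer sizes, and random training data are our assumptions, not the authors' configuration.

```python
# LSTM auto-encoder for 1D time-series anomaly detection (toy data).
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

TIMESTEPS, FEATURES = 140, 1        # e.g., one ECG beat per window

model = keras.Sequential([
    layers.Input(shape=(TIMESTEPS, FEATURES)),
    layers.LSTM(64),                            # encode to a latent vector
    layers.RepeatVector(TIMESTEPS),             # expand back over time
    layers.LSTM(64, return_sequences=True),     # decode
    layers.TimeDistributed(layers.Dense(FEATURES)),
])
model.compile(optimizer="adam", loss="mae")

# Train on normal beats only; anomalies then reconstruct poorly.
x_normal = np.random.rand(256, TIMESTEPS, FEATURES).astype("float32")
model.fit(x_normal, x_normal, epochs=2, batch_size=32, verbose=0)

recon = model.predict(x_normal, verbose=0)
errors = np.mean(np.abs(recon - x_normal), axis=(1, 2))
anomalous = errors > 0.02                       # threshold from the abstract
print(f"{anomalous.sum()} windows flagged as anomalous")
```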


Subjects
Cardiac Arrhythmias, Computer-Assisted Signal Processing, Humans, Cardiac Arrhythmias/diagnosis, Cloud Computing, Electrocardiography/methods, Algorithms, Delivery of Health Care
19.
PLoS One ; 17(12): e0279649, 2022.
Article in English | MEDLINE | ID: mdl-36584089

ABSTRACT

Cloud Data Computing (CDC) is conducive to precise energy-saving management of user data centers based on the real-time energy consumption monitoring of Information Technology equipment. This work aims to obtain the most suitable energy-saving strategies to achieve safe, intelligent, and visualized energy management. First, the theory of the Convolutional Neural Network (CNN) is discussed, and an intelligent energy-saving model based on a CNN is designed to handle the variable energy consumption, load, and power consumption of the CDC data center. Then, the core idea of the policy gradient (PG) algorithm is introduced, and a CDC task scheduling model is designed based on the PG algorithm, targeting the uncertainty and volatility of CDC scheduling tasks. Finally, the performance of different neural network models during training is analyzed from the perspective of the total energy consumption and load optimization of the CDC center, and the PG-based CDC task scheduling model is simulated to analyze the task scheduling demand. The results demonstrate that the energy consumption of the CNN algorithm in the CDC energy-saving model is better than that of the Elman and ecoCloud algorithms, and the CNN algorithm reduces the number of virtual machine migrations in the CDC energy-saving model by 9.30% compared with the Elman algorithm. The Deep Deterministic Policy Gradient (DDPG) algorithm performs best in cloud data center task scheduling, with an average response time of 141, whereas the Deep Q Network algorithm performs poorly. This paper shows that Deep Reinforcement Learning (DRL) and neural networks can reduce the energy consumption of CDC and improve the completion time of CDC tasks, offering a research reference for CDC resource scheduling.
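To ground the policy gradient idea in a scheduling setting, here is a minimal REINFORCE sketch with a toy reward (negative load imbalance across servers). It is far simpler than the paper's DDPG setup: the policy is state-independent and all sizes are invented.

```python
# Minimal REINFORCE for toy task scheduling: learn a softmax distribution
# over servers that balances load (reward = negative std of server loads).
import numpy as np

rng = np.random.default_rng(0)
N_SERVERS, N_TASKS, LR = 4, 10, 0.1
theta = np.zeros(N_SERVERS)                  # softmax policy parameters

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

for episode in range(500):
    loads = np.zeros(N_SERVERS)
    actions = []
    for _ in range(N_TASKS):
        p = softmax(theta)
        a = rng.choice(N_SERVERS, p=p)       # pick a server for the task
        loads[a] += 1
        actions.append(a)
    reward = -loads.std()                    # prefer balanced placement
    p = softmax(theta)
    for a in actions:                        # grad log pi(a) = onehot(a) - p
        grad = -p
        grad[a] += 1.0
        theta += LR * reward * grad

print("learned placement distribution:", softmax(theta).round(3))
```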


Subjects
Algorithms, Neural Networks (Computer), Computer Simulation, Cloud Computing, Physical Phenomena
20.
Sensors (Basel) ; 22(23)2022 Nov 22.
Article in English | MEDLINE | ID: mdl-36501737

ABSTRACT

With the advances of the IoT era, the number of wireless sensor devices has been growing rapidly. This increasing number gives rise to more complex networks, where more complex tasks can be executed by utilizing more computational resources from public clouds. Cloud service providers use various pricing models for their offered services. Some models are appropriate for the cloud service user's short-term requirements, whereas others suit long-term requirements. Reservation-based pricing models are suitable for the long-term requirements of cloud service users. We used pricing schemes with spot and reserved instances. Reserved instances support a hybrid cost model with fixed reservation costs that vary with contract duration and an hourly usage charge that is lower than that of spot instances. Optimizing the resources to be reserved requires considerable research effort; recent algorithms proposed for this problem are generally based on integer programming, so they do not have polynomial time complexity. In this work, heuristic-based polynomial-time policies are proposed for this problem. We show that the cost for a cloud service user employing our approach is comparable to the optimal solution, i.e., it is near-optimal.
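The underlying cost structure admits a simple break-even view: reserving pays off once expected usage hours exceed the point where the fixed fee plus the reserved hourly rate meets the spot rate. The sketch below illustrates that arithmetic with invented prices; it is not the paper's heuristic policy.

```python
# Reserve-or-spot break-even decision (all prices hypothetical).
SPOT_RATE = 0.10          # $/hour
RESERVED_RATE = 0.04      # $/hour
RESERVATION_FEE = 300.0   # fixed fee for the contract duration

def break_even_hours():
    return RESERVATION_FEE / (SPOT_RATE - RESERVED_RATE)

def plan(expected_hours: float) -> str:
    spot_cost = SPOT_RATE * expected_hours
    reserved_cost = RESERVATION_FEE + RESERVED_RATE * expected_hours
    return "reserve" if reserved_cost < spot_cost else "spot"

print(f"break-even at {break_even_hours():.0f} hours")
for h in (2000, 5000, 8000):
    print(h, "->", plan(h))
```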


Subjects
Algorithms, Cloud Computing, Policies