Results 1 - 5 of 5
1.
Comput Intell Neurosci; 2022: 2613075, 2022.
Article in English | MEDLINE | ID: mdl-36105637

ABSTRACT

An adaptive fuzzy sliding mode control (AFSMC) approach is adopted in this paper to address the problem of angular position control and vibration suppression in rotary flexible joint systems. The AFSMC consists of a fuzzy-based singleton control action and a switching control law. By adjusting its fuzzy parameters through self-learning to compensate for system nonlinearities and uncertainties, the singleton control, which is based on fuzzy approximation theory, can estimate the ideal feedback-linearization control law. To enhance robustness, an additional switching control law is incorporated to reduce the approximation error between the derived singleton control action and the ideal feedback-linearization control law. Closed-loop stability of the AFSMC is demonstrated via a sliding surface and a Lyapunov analysis of the error function. Simulations are carried out in MATLAB and Simulink to demonstrate the tracking performance of the AFSMC and its ability to handle model uncertainties and external perturbations. Based on these results, it can be concluded that the AFSMC achieves satisfactory tracking performance.


Subjects
Feedback, Uncertainty
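As a rough illustration of the controller structure described in this abstract, the Python sketch below implements a generic adaptive fuzzy sliding mode controller for a second-order plant. The sliding surface, Gaussian membership functions, adaptation gain, and smoothed switching term are illustrative assumptions, not the formulation or parameters used in the paper.

```python
import numpy as np

# Generic AFSMC sketch for a plant x_ddot = f(x, x_dot) + u with unknown f.
# A fuzzy singleton system theta @ phi(s) adapts online to approximate the
# ideal feedback-linearization (equivalent) control; a smoothed switching
# term covers the residual approximation error. All gains are illustrative.

def fuzzy_basis(s, centers, width=0.5):
    """Normalized Gaussian memberships over the sliding variable (singleton consequents)."""
    phi = np.exp(-((s - centers) / width) ** 2)
    return phi / (phi.sum() + 1e-9)

def afsmc_step(x, x_dot, ref, ref_dot, theta, centers,
               lam=5.0, k_sw=1.5, gamma=50.0, dt=1e-3):
    """One control update: returns the control input u and the adapted fuzzy weights."""
    e, e_dot = x - ref, x_dot - ref_dot
    s = e_dot + lam * e                      # sliding surface s = de/dt + lam*e
    phi = fuzzy_basis(s, centers)
    u_fuzzy = theta @ phi                    # adaptive fuzzy singleton control action
    u_switch = -k_sw * np.tanh(s / 0.05)     # smoothed switching law for robustness
    theta = theta - gamma * s * phi * dt     # adaptation law from the Lyapunov analysis
    return u_fuzzy + u_switch, theta

# Illustrative use: track a sine reference with a toy nonlinear plant.
centers = np.linspace(-2.0, 2.0, 7)
theta = np.zeros_like(centers)
x, x_dot, dt = 0.0, 0.0, 1e-3
for step in range(5000):
    t = step * dt
    u, theta = afsmc_step(x, x_dot, np.sin(t), np.cos(t), theta, centers, dt=dt)
    x_ddot = -0.5 * x_dot - np.sin(x) + u    # plant dynamics unknown to the controller
    x_dot += x_ddot * dt
    x += x_dot * dt
print(f"tracking error after 5 s: {x - np.sin(5000 * dt):+.4f}")
```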
2.
Sensors (Basel); 22(4), 2022 Feb 14.
Article in English | MEDLINE | ID: mdl-35214384

ABSTRACT

Fault tolerance, performance, and throughput have been major areas of research and development since the evolution of large-scale networks. Internet-based applications are growing rapidly, including large-scale computations, search engines, high-definition video streaming, e-commerce, and video on demand. In recent years, energy efficiency and fault tolerance have gained significant importance in data center networks, and various studies have directed attention towards green computing. Data centers consume huge amounts of energy, and various architectures and techniques have been proposed to improve their energy efficiency. However, there is a tradeoff between energy efficiency and fault tolerance. The objective of this study is to achieve a better balance between the two extremes: (a) high energy efficiency and (b) high availability through fault tolerance and redundancy. The main objective of the proposed Energy-Aware Fault-Tolerant (EAFT) approach is to keep one level of redundancy for fault tolerance while scheduling resources for energy efficiency. The resulting energy-efficient data center network provides availability as well as fault tolerance at a reduced operating cost. The main contributions of this article are: (a) we propose an Energy-Aware Fault-Tolerant (EAFT) data center network scheduler; (b) we compare EAFT with energy-efficient resource scheduling techniques to analyze parameters such as workload distribution, average tasks per server, and energy consumption; and (c) we highlight the effects of energy efficiency techniques on the network performance of the data center.
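To make the energy/fault-tolerance tradeoff concrete, the following Python sketch consolidates tasks onto as few servers as possible while reserving one powered-on standby. The server model, capacities, and first-fit-decreasing heuristic are hypothetical illustrations of the idea, not the EAFT scheduler itself.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: consolidate tasks onto few active servers (energy saving)
# while keeping one powered-on standby so a failed server's load can be absorbed.

@dataclass
class Server:
    name: str
    capacity: float          # normalized CPU capacity
    load: float = 0.0
    tasks: list = field(default_factory=list)

def schedule_with_redundancy(tasks, servers):
    """Greedy first-fit-decreasing consolidation that reserves one idle standby server."""
    servers = sorted(servers, key=lambda s: s.capacity, reverse=True)
    standby, active = servers[-1], servers[:-1]           # keep one server as hot standby
    for name, demand in sorted(tasks, key=lambda t: -t[1]):
        target = next((s for s in active if s.load + demand <= s.capacity), None)
        if target is None and standby is not None:        # overflow: wake the standby
            active.append(standby)
            standby = None
            target = active[-1]
        if target is None:
            raise RuntimeError(f"no capacity left for task {name}")
        target.load += demand
        target.tasks.append(name)
    # Servers left powered on: those hosting tasks plus the redundant standby.
    return [s for s in active if s.tasks] + ([standby] if standby else [])

servers = [Server(f"srv{i}", capacity=1.0) for i in range(6)]
tasks = [("vm-a", 0.6), ("vm-b", 0.5), ("vm-c", 0.3), ("vm-d", 0.2)]
for s in schedule_with_redundancy(tasks, servers):
    print(s.name, round(s.load, 2), s.tasks)
```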

3.
Sensors (Basel); 22(3), 2022 Jan 31.
Article in English | MEDLINE | ID: mdl-35161839

ABSTRACT

The COVID-19 pandemic has affected the world socially and economically, changing behaviors towards medical facilities, public gatherings, workplaces, and education. Educational institutes have been shut down sporadically across the globe, forcing teachers and students to adopt distance learning techniques. Due to the closure of educational institutes, work-from-home and learn-from-home practices have burdened network resources and considerably decreased viewers' Quality of Experience (QoE). The situation calls for innovative techniques to handle the surging load of video traffic on cellular networks. In the distance learning scenario, there is ample opportunity to use multicast delivery instead of conventional unicast. However, the existing 5G architecture does not support service-less multicast. In this article, we advance the case for a Virtual Network Function (VNF) based service-less architecture for video multicast. Multicasting a video session for distance learning significantly lowers the burden on the core and Radio Access Network (RAN), as demonstrated by evaluation over a real-world dataset. We discuss the role of Edge Intelligence (EI) in enabling multicast and edge caching for distance learning to complement the performance of the proposed VNF architecture. EI enables determining which users are part of a multicast session based on location, session, and cell information. Moreover, user preferences and the network's contextual information can differentiate between live and cached access patterns, optimizing edge caching decisions. While exploring the opportunities of EI-enabled distance learning, we demonstrate a significant reduction in network operator resource utilization and an increase in user QoE for VNF-based multicast transmission.


Subjects
COVID-19, Distance Education, Humans, Intelligence, Pandemics, SARS-CoV-2
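A back-of-the-envelope Python sketch of the bandwidth argument for multicast over unicast in a distance-learning session is given below; the bitrate, cell counts, and student numbers are hypothetical, and real RAN/core accounting is considerably more involved.

```python
# Hypothetical comparison: unicast carries one copy of the lecture stream per viewer
# across both the core and the RAN, while multicast needs roughly one copy per serving
# cell on the RAN and a single copy through the core.

def session_load_mbps(students_per_cell, bitrate_mbps=3.0):
    cells = len(students_per_cell)
    students = sum(students_per_cell)
    unicast = {"core": students * bitrate_mbps, "ran": students * bitrate_mbps}
    multicast = {"core": bitrate_mbps, "ran": cells * bitrate_mbps}
    return unicast, multicast

# Example: 4 cells serving a 200-student online lecture at 3 Mbps.
uni, multi = session_load_mbps([80, 60, 40, 20])
print("unicast  :", uni)    # 600 Mbps on both core and RAN
print("multicast:", multi)  # 3 Mbps core, 12 Mbps RAN
```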
4.
Appl Intell (Dordr); 51(3): 1296-1325, 2021.
Article in English | MEDLINE | ID: mdl-34764552

ABSTRACT

In December 2019, a novel virus named COVID-19 emerged in the city of Wuhan, China. In early 2020, COVID-19 spread to all continents of the world except Antarctica, causing widespread infections and deaths due to its contagious characteristics and the absence of a medically proven treatment. The COVID-19 pandemic has been termed the most consequential global crisis since the World Wars. The first line of defense against the spread of COVID-19 is non-pharmaceutical measures such as social distancing and personal hygiene. The pandemic, affecting billions of lives economically and socially, has motivated the scientific community to come up with solutions based on computer-aided digital technologies for the diagnosis, prevention, and estimation of COVID-19. Some of these efforts focus on statistical and Artificial Intelligence-based analysis of the available COVID-19 data. All of these scientific efforts necessitate that the data brought to service for the analysis be open source, to promote the extension, validation, and collaboration of the work in the fight against the global pandemic. Our survey is motivated by open source efforts that can be mainly categorized as (a) COVID-19 diagnosis from CT scans, X-ray images, and cough sounds; (b) COVID-19 case reporting, transmission estimation, and prognosis from epidemiological, demographic, and mobility data; (c) COVID-19 emotional and sentiment analysis from social media; and (d) knowledge-based discovery and semantic analysis from the collection of scholarly articles covering COVID-19. We survey and compare research works in these directions that are accompanied by open source data and code. Future directions for data-driven COVID-19 research are also discussed. We hope that this article will provide the scientific community with an initiative to start open source, extensible, and transparent research in the collective fight against the COVID-19 pandemic.

5.
Sensors (Basel); 21(19), 2021 Oct 07.
Article in English | MEDLINE | ID: mdl-34640986

ABSTRACT

COVID-19 tracing applications have been launched in several countries to track and control the spread of the virus. Such applications utilize Bluetooth Low Energy (BLE) transmissions, which are short range and can be used to identify susceptible persons near an infected person. COVID-19 risk estimation depends on an epidemic model of the virus behavior and a Machine Learning (ML) model that classifies risk based on the time-series distance of nodes that may be infected. BLE-enabled smartphones continuously transmit beacons, and the distance is inferred from the received signal strength indicator (RSSI). Educational activities have shifted to online teaching modes due to the contagious nature of COVID-19, and government policy makers decide on the education mode (online, hybrid, or physical) with little technological insight into actual risk estimates. In this study, we analyze BLE technology to assess COVID-19 risks in university building and indoor classroom environments. We utilize a sigmoid-based epidemic model with varying distance thresholds to label contact data as high risk or low risk based on features such as contact duration. Further, we train multiple ML classifiers to classify a person as high risk or low risk based on labeled RSSI and distance data. We analyze the accuracy of the ML classifiers in terms of F-score, receiver operating characteristic (ROC) curve, and confusion matrix. Lastly, we discuss future research directions and the limitations of this study. We complement the study with open source code so that it can be validated and further investigated.


Subjects
COVID-19, Humans, SARS-CoV-2, Smartphone, Software, Wireless Technology
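The following Python sketch illustrates the kind of pipeline this abstract describes: converting BLE RSSI to distance with a log-distance path-loss model, labeling contacts with a sigmoid-style risk score, and training a classifier. The path-loss parameters, risk thresholds, synthetic data, and the choice of a random forest are assumptions for illustration, not the paper's exact models.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)

def rssi_to_distance(rssi, rssi_at_1m=-60.0, path_loss_exp=2.2):
    """Log-distance path-loss model: distance in metres from a BLE RSSI sample."""
    return 10 ** ((rssi_at_1m - rssi) / (10 * path_loss_exp))

def sigmoid_risk(distance_m, duration_min, d_thresh=2.0, t_thresh=15.0):
    """Sigmoid-shaped risk score: close, long contacts approach 1."""
    closeness = 1 / (1 + np.exp(distance_m - d_thresh))
    exposure = 1 / (1 + np.exp(-(duration_min - t_thresh) / 5))
    return closeness * exposure

# Synthetic contact log: mean RSSI (dBm) and contact duration (minutes).
rssi = rng.uniform(-90, -50, 2000)
duration = rng.uniform(1, 60, 2000)
distance = rssi_to_distance(rssi)
labels = (sigmoid_risk(distance, duration) > 0.5).astype(int)   # 1 = high risk

X = np.column_stack([rssi, distance, duration])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("F-score:", round(f1_score(y_te, clf.predict(X_te)), 3))
```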