Results 1 - 11 of 11

1.
Sensors (Basel) ; 24(8)2024 Apr 09.
Article in English | MEDLINE | ID: mdl-38675998

ABSTRACT

IoT-based smart transportation monitors vehicle, cargo, and driver status to ensure safe movement. Because the sensors have limited computational capabilities, IoT devices rely on powerful remote servers to execute their tasks, a process called task offloading. Researchers have developed efficient task offloading and scheduling mechanisms for IoT devices to reduce energy consumption and response time. However, most research has not considered fault-tolerance-based job allocation for IoT logistics trucks, task- and data-aware scheduling, priority-based task offloading, or multi-parameter-based fog node selection. To overcome these limitations, we propose a Multi-Objective Task-Aware Offloading and Scheduling Framework for IoT Logistics (MT-OSF). The proposed model classifies tasks as delay-sensitive or computation-intensive using a priority-based offloader and forwards the two lists to the Task-Aware Scheduler (TAS) for further processing on fog and cloud nodes. The TAS uses a multi-criteria decision-making process, the analytical hierarchy process (AHP), to calculate each fog node's priority for task allocation and scheduling; the AHP derives this priority from node energy, bandwidth, RAM, and MIPS power. The TAS also calculates the shortest distance between the IoT-enabled vehicle and the fog node to which its tasks are assigned. Delay-sensitive tasks are scheduled on nearby fog nodes, while computation-intensive tasks are allocated to cloud data centers using the FCFS algorithm. A fault-tolerance manager checks for task failures: if a task fails, the system re-executes it, and if a fog node fails, the system reallocates its tasks to another fog node, reducing the task failure ratio. The proposed model, simulated in iFogSim2, demonstrates a 7% reduction in response time, a 16% reduction in energy consumption, and a 22% reduction in task failure ratio compared to Ant Colony Optimization and Round Robin.
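As a concrete illustration of the AHP step, the following Python sketch ranks hypothetical fog nodes over the four criteria the abstract names (energy, bandwidth, RAM, MIPS). The pairwise comparison matrix and node values are illustrative assumptions, not taken from the paper.

```python
# AHP-style fog node ranking sketch (illustrative values, not the paper's).
import numpy as np

def ahp_weights(pairwise: np.ndarray) -> np.ndarray:
    """Approximate AHP criterion weights via column-normalized row means."""
    return (pairwise / pairwise.sum(axis=0)).mean(axis=1)

# Hypothetical pairwise judgments among (energy, bandwidth, RAM, MIPS).
pairwise = np.array([
    [1.0, 3.0, 5.0, 3.0],
    [1/3, 1.0, 3.0, 1.0],
    [1/5, 1/3, 1.0, 1/3],
    [1/3, 1.0, 3.0, 1.0],
])
weights = ahp_weights(pairwise)

# Hypothetical fog nodes: rows = nodes, columns = the same four criteria,
# each min-max normalized so larger is better.
nodes = np.array([
    [0.9, 0.4, 0.7, 0.8],   # node A
    [0.5, 0.9, 0.6, 0.4],   # node B
    [0.7, 0.6, 0.9, 0.6],   # node C
])
scores = nodes @ weights           # weighted sum = node priority
ranking = np.argsort(-scores)      # highest-priority node first
print(weights.round(3), scores.round(3), ranking)
```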

2.
PeerJ Comput Sci ; 10: e1984, 2024.
Article in English | MEDLINE | ID: mdl-38660189

ABSTRACT

Social background profiling of speakers is widely used in areas such as speech forensics and in tuning speech recognition systems for improved accuracy. This article surveys recent research on speaker background profiling in terms of accent classification, analyzing the datasets, speech features, and classification models used for the task. The aim is to provide a comprehensive overview of recent research and a comparative analysis of the reported performance measures, offering insights into the strengths and weaknesses of the different methods for accent classification. Research gaps are then identified, serving as a useful resource for researchers looking to advance the field.

3.
PeerJ Comput Sci ; 10: e1908, 2024.
Article in English | MEDLINE | ID: mdl-38435610

ABSTRACT

The oil and gas industries (OGI) are the primary global energy source, and pipelines are vital components of OGI transportation. However, pipeline leaks pose significant risks, including fires, injuries, environmental harm, and property damage, so an effective pipeline maintenance system is critical for a safe and sustainable energy supply. The Internet of Things (IoT) has emerged as a cutting-edge technology for efficient OGI pipeline leak detection. However, deploying IoT for OGI monitoring faces significant challenges due to hazardous environments and limited communication infrastructure. Energy efficiency and fault tolerance, typical IoT concerns, gain heightened importance in the OGI context: IoT devices are deployed linearly along the pipeline with no alternative communication mechanism, so the loss of a communication route can disrupt crucial data transmission. Ensuring energy-efficient and fault-tolerant communication of OGI data is therefore paramount. Critical data must reach the control center on time so that action can be taken quickly to avoid losses, making low-latency communication for critical data another challenge of the OGI monitoring environment. Moreover, IoT devices gather a plethora of OGI parameter data, including redundant values that are irrelevant for transmission to the control center, so optimizing data transmission is essential to conserve energy. This article presents the Priority-Based, Energy-Efficient, and Optimal Data Routing Protocol (PO-IMRP) to tackle these challenges. Its energy model and congestion control mechanism optimize data packets for an energy-efficient and congestion-free network. In PO-IMRP, nodes are aware of their energy status and report depletion in time to preserve network robustness. Priority-based routing selects low-latency routes for critical data to avoid OGI losses. Comparative analysis against linear LEACH shows that PO-IMRP transmits more packets while completing fewer rounds, attributed to the packet optimization technique implemented at each hop, which also helps mitigate network congestion. MATLAB simulations affirm the protocol's effectiveness in terms of energy efficiency, fault tolerance, and low-latency communication.
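To make the routing idea concrete, here is a minimal Python sketch, under assumed data structures, of PO-IMRP-style next-hop selection in a linear deployment: critical packets take the lowest-latency live neighbor, while routine packets take the neighbor with the most residual energy. Field names are illustrative, not from the paper.

```python
# Priority-based next-hop selection sketch for a linear pipeline deployment.
from dataclasses import dataclass

@dataclass
class Neighbor:
    node_id: int
    latency_ms: float   # measured link latency toward the control center
    energy_j: float     # last reported residual energy
    alive: bool         # False once the node reports depletion or failure

def next_hop(neighbors: list[Neighbor], critical: bool) -> Neighbor | None:
    live = [n for n in neighbors if n.alive and n.energy_j > 0]
    if not live:
        return None  # no route left: the disruption case the abstract flags
    if critical:
        return min(live, key=lambda n: n.latency_ms)  # low-latency route
    return max(live, key=lambda n: n.energy_j)        # energy-aware route

hops = [Neighbor(1, 12.0, 40.0, True), Neighbor(2, 8.0, 9.0, True)]
print(next_hop(hops, critical=True).node_id)    # -> 2 (fastest link)
print(next_hop(hops, critical=False).node_id)   # -> 1 (most energy)
```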

4.
Neural Comput Appl ; 35(8): 6115-6124, 2023.
Article in English | MEDLINE | ID: mdl-36408287

ABSTRACT

Online medical consultation can significantly improve the efficiency of primary health care. Recently, many online medical question-answer services have been developed that connect patients with relevant medical consultants based on their questions. Given the linguistic variety of these questions, identifying a patient's social background can improve the referral system by selecting a medical consultant with a similar social origin for more effective communication. This paper proposes a novel fine-tuning strategy for pre-trained transformers to identify the social origin of text authors. Fused with an existing adapter model, the proposed method achieves an overall accuracy of 53.96% on the Arabic dialect identification task of the Nuanced Arabic Dialect Identification (NADI) dataset, 0.54 percentage points higher than the previous best for the same dataset, establishing the utility of custom fine-tuning strategies for pre-trained transformer models.
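For orientation, the sketch below shows a plain sequence-classification fine-tune on toy data using the Hugging Face transformers API. The checkpoint choice, the 21-label setup (assuming NADI's country-level task), and the toy examples are assumptions; the paper's custom fine-tuning strategy and adapter fusion are not reproduced here.

```python
# Baseline transformer fine-tuning sketch for dialect identification.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

checkpoint = "aubmindlab/bert-base-arabertv02"   # one reasonable Arabic encoder
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(
    checkpoint, num_labels=21)   # assumption: NADI country-level label count

# Toy stand-in for a NADI training split (text, dialect label).
train_ds = Dataset.from_dict({
    "text": ["شلونك؟", "إزيك؟"],   # illustrative dialectal snippets
    "label": [0, 1],
}).map(lambda b: tokenizer(b["text"], truncation=True,
                           padding="max_length", max_length=64), batched=True)

args = TrainingArguments(output_dir="nadi-ft", num_train_epochs=1,
                         per_device_train_batch_size=2, learning_rate=2e-5)
Trainer(model=model, args=args, train_dataset=train_ds).train()
```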

5.
Front Psychol ; 13: 915596, 2022.
Article in English | MEDLINE | ID: mdl-35874420

ABSTRACT

The Internet of Things (IoT) is an emerging paradigm for educational applications and innovative technology. While its capabilities are increasing day by day, there are still many limitations and challenges to utilizing these technologies for E-Learning in higher education institutes (HEIs). IoT is well established in the United States of America (USA), the United Kingdom (UK), Japan, and China, but not in developing countries such as Saudi Arabia, Malaysia, Pakistan, and Bangladesh, and few studies have investigated the adoption of IoT for E-Learning in developing countries. This research therefore examines the factors influencing IoT adoption for E-Learning in HEIs, proposes an adoption model for IoT-based E-Learning in developing-country contexts, and provides recommendations for enhancing adoption. The model categorizes the influencing factors into four groups: individual, organizational, environmental, and technological. The factors are compared, with detailed descriptions, to determine which should be prioritized for efficient IoT-based E-Learning in HEIs. We identify privacy (27%), infrastructure readiness (24%), financial constraints (24%), ease of use (20%), support of faculty (18%), interaction (15%), attitude (14%), and network and data security (14%) as the significant factors influencing IoT adoption for E-Learning in HEIs. These findings indicate that national culture plays a significant role in individual, organizational, technological, and environmental behavior toward using new technology in developing countries.

6.
Sensors (Basel) ; 22(11)2022 May 26.
Article in English | MEDLINE | ID: mdl-35684640

ABSTRACT

In recent decades, Vehicular Ad Hoc Networks (VANETs) have emerged as a promising field providing real-time communication between vehicles for comfortable driving and human safety. However, the Internet of Vehicles (IoV) platform faces serious problems in deploying robust authentication mechanisms in resource-constrained environments, which directly affects the efficiency of existing VANET schemes. Moreover, information security becomes a critical issue over an open wireless access medium. This paper proposes an efficient and secure lightweight anonymous mutual authentication and key establishment scheme (SELWAK) for IoT-based VANETs. The scheme provides two types of mutual authentication, V2V and V2R, and maintains secret keys for secure communication between Roadside Units (RSUs). The performance evaluation affirms that SELWAK is lightweight in computational cost and communication overhead because it uses only bitwise exclusive-OR operations and one-way hash functions. Formal and informal security analyses, including a formal analysis in the Real-or-Random (RoR) model, show that SELWAK is robust against man-in-the-middle, replay, stolen-verifier, stolen-OBU, and impersonation attacks while preserving anonymity and untraceability.
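To illustrate why XOR-plus-hash primitives keep such schemes lightweight, here is a minimal Python sketch of a generic nonce-based mutual authentication and key agreement between an OBU and an RSU. This is not the SELWAK protocol; the message layout is an illustrative assumption, and no public-key operation is needed.

```python
# Generic XOR/hash mutual authentication sketch (not the SELWAK messages).
import hashlib, hmac, os

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

shared = os.urandom(32)            # secret pre-loaded in both OBU and RSU

# OBU -> RSU: fresh nonce masked with the hashed shared secret, plus a proof.
n_v = os.urandom(32)
m1 = (xor(n_v, h(shared)), h(shared, n_v))

# RSU: unmask the nonce, verify the proof, reply with its own nonce + proof.
n_v_rsu = xor(m1[0], h(shared))
assert hmac.compare_digest(m1[1], h(shared, n_v_rsu))     # OBU authenticated
n_r = os.urandom(32)
m2 = (xor(n_r, h(shared, n_v_rsu)), h(shared, n_v_rsu, n_r))

# OBU: recover the RSU nonce, verify; both sides now share a session key.
n_r_v = xor(m2[0], h(shared, n_v))
assert hmac.compare_digest(m2[1], h(shared, n_v, n_r_v))  # RSU authenticated
session_key = h(shared, n_v, n_r_v)   # equals h(shared, n_v_rsu, n_r) at RSU
print(session_key.hex())
```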


Subjects
Computer Communication Networks , Computer Security , Communication , Humans , Internet
7.
Sensors (Basel) ; 22(4)2022 Feb 14.
Article in English | MEDLINE | ID: mdl-35214384

ABSTRACT

Fault tolerance, performance, and throughput have been major areas of research and development since the evolution of large-scale networks. Internet-based applications are growing rapidly, including large-scale computations, search engines, high-definition video streaming, e-commerce, and video on demand. In recent years, energy efficiency and fault tolerance have gained significant importance in data center networks, and various studies have directed attention toward green computing. Data centers consume a huge amount of energy, and various architectures and techniques have been proposed to improve their energy efficiency. However, there is a tradeoff between energy efficiency and fault tolerance. The objective of this study is to highlight a better tradeoff between the two extremes: (a) high energy efficiency and (b) high availability through fault tolerance and redundancy. The proposed Energy-Aware Fault-Tolerant (EAFT) approach keeps one level of redundancy for fault tolerance while scheduling resources for energy efficiency. The resulting energy-efficient data center network provides both availability and fault tolerance at reduced operating cost. The main contributions of this article are: (a) an Energy-Aware Fault-Tolerant (EAFT) data center network scheduler; (b) a comparison of EAFT with energy-efficient resource scheduling techniques, analyzing parameters such as workload distribution, average tasks per server, and energy consumption; and (c) an analysis of the effects of energy efficiency techniques on the network performance of the data center.
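A minimal Python sketch of the tradeoff, under assumed capacities and loads: pack tasks onto as few servers as possible for energy savings while keeping one powered spare for fault tolerance. This illustrates the idea of one level of redundancy, not the EAFT scheduler itself.

```python
# Energy-aware packing with one hot spare (illustrative, not EAFT itself).
def schedule(tasks: list[int], capacity: int) -> list[list[int]]:
    """First-fit-decreasing packing plus one empty standby server."""
    servers: list[list[int]] = []
    for t in sorted(tasks, reverse=True):
        for s in servers:
            if sum(s) + t <= capacity:
                s.append(t)
                break
        else:
            servers.append([t])      # power on a new server only when needed
    servers.append([])               # one level of redundancy: hot spare
    return servers

plan = schedule(tasks=[30, 10, 60, 20, 40], capacity=100)
active, spare = plan[:-1], plan[-1]
print(f"{len(active)} active servers + 1 spare:", active)
# On a server failure, its tasks migrate to the spare instead of waiting
# for a cold boot, trading a little idle energy for availability.
```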

8.
Sensors (Basel) ; 22(3)2022 Jan 31.
Article in English | MEDLINE | ID: mdl-35161839

ABSTRACT

The COVID-19 pandemic has affected the world socially and economically, changing behavior toward medical facilities, public gatherings, workplaces, and education. Educational institutes have been shut down sporadically across the globe, forcing teachers and students to adopt distance learning techniques. With institutes closed, work-from-home and learn-from-home routines have burdened network resources and considerably decreased viewers' Quality of Experience (QoE). The situation calls for innovative techniques to handle the surging load of video traffic on cellular networks. In distance learning scenarios, there is ample opportunity to realize multicast delivery instead of conventional unicast; however, the existing 5G architecture does not support service-less multicast. In this article, we advance the case for a Virtual Network Function (VNF) based service-less architecture for video multicast. Multicasting a distance learning video session significantly lowers the burden on the core and Radio Access Networks (RAN), as demonstrated by evaluation over a real-world dataset. We discuss the role of Edge Intelligence (EI) in enabling multicast and edge caching for distance learning to complement the performance of the proposed VNF architecture. EI determines which users are part of a multicast session based on location, session, and cell information; moreover, user preferences and the network's contextual information can differentiate between live and cached access patterns, optimizing edge caching decisions. Exploring the opportunities of EI-enabled distance learning, we demonstrate a significant reduction in network operator resource utilization and an increase in user QoE for VNF-based multicast transmission.
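A back-of-the-envelope Python sketch, with illustrative numbers, of why multicast lowers RAN load here: unicast load grows with the number of viewers per cell, whereas multicast costs one stream per cell that has at least one viewer. This is not the paper's evaluation.

```python
# Unicast vs. multicast RAN load for one lecture session (toy numbers).
def ran_load_mbps(viewers_per_cell: list[int], bitrate_mbps: float,
                  multicast: bool) -> float:
    if multicast:
        # one stream per cell that has at least one enrolled viewer
        return bitrate_mbps * sum(1 for v in viewers_per_cell if v > 0)
    return bitrate_mbps * sum(viewers_per_cell)   # one stream per viewer

cells = [35, 0, 12, 50, 3]        # students per cell during the lecture
for mode in (False, True):
    print("multicast" if mode else "unicast  ",
          ran_load_mbps(cells, bitrate_mbps=2.5, multicast=mode), "Mbps")
```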


Subjects
COVID-19 , Distance Education , Humans , Intelligence , Pandemics , SARS-CoV-2
9.
Appl Intell (Dordr) ; 51(3): 1296-1325, 2021.
Article in English | MEDLINE | ID: mdl-34764552

ABSTRACT

In December 2019, a novel coronavirus, SARS-CoV-2, causing the disease COVID-19, emerged in the city of Wuhan, China. In early 2020, the virus spread to all continents except Antarctica, causing widespread infections and deaths due to its contagious characteristics and the absence of a medically proven treatment. The COVID-19 pandemic has been termed the most consequential global crisis since the World Wars. The first line of defense against the spread of COVID-19 is non-pharmaceutical measures such as social distancing and personal hygiene. The pandemic, affecting billions of lives economically and socially, has motivated the scientific community to develop solutions based on computer-aided digital technologies for the diagnosis, prevention, and estimation of COVID-19. Some of these efforts focus on statistical and Artificial Intelligence-based analysis of the available COVID-19 data. All of these scientific efforts require that the data used for analysis be open source, to promote extension, validation, and collaboration in the fight against the global pandemic. Our survey is motivated by open source efforts that can be mainly categorized as (a) COVID-19 diagnosis from CT scans, X-ray images, and cough sounds; (b) COVID-19 case reporting, transmission estimation, and prognosis from epidemiological, demographic, and mobility data; (c) COVID-19 emotional and sentiment analysis from social media; and (d) knowledge-based discovery and semantic analysis from collections of scholarly articles covering COVID-19. We survey and compare research works in these directions that are accompanied by open source data and code, and we discuss future directions for data-driven COVID-19 research. We hope the article provides the scientific community with an initiative toward open source, extensible, and transparent research in the collective fight against the COVID-19 pandemic.

10.
Sensors (Basel) ; 21(19)2021 Oct 07.
Article in English | MEDLINE | ID: mdl-34640986

ABSTRACT

COVID-19 tracing applications have been launched in several countries to track and control the spread of the virus. Such applications utilize Bluetooth Low Energy (BLE) transmissions, which are short range and can be used to detect susceptible persons near an infected person. COVID-19 risk estimation depends on an epidemic model of the virus's behavior and a Machine Learning (ML) model that classifies risk based on the time-series distance of nodes that may be infected. BLE-enabled smartphones continuously transmit beacons, and distance is inferred from the received signal strength indicators (RSSI). Educational activities have shifted to online teaching modes due to the contagious nature of COVID-19, and government policymakers decide on the education mode (online, hybrid, or physical) with little technological insight into actual risk estimates. In this study, we analyze BLE technology to assess COVID-19 risks in university building and indoor classroom environments. We utilize a sigmoid-based epidemic model with varying distance thresholds to label contact data as high or low risk based on features such as contact duration. Further, we train multiple ML classifiers to classify a person as high or low risk based on labeled RSSI and distance data. We analyze the accuracy of the ML classifiers in terms of F-score, receiver operating characteristic (ROC) curve, and confusion matrix. Lastly, we discuss future research directions and the limitations of this study. We complement the study with open source code so that it can be validated and further investigated.
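For intuition, here is a minimal Python sketch of the two ingredients with assumed constants: a log-distance path-loss model inverting BLE RSSI to distance, and a sigmoid risk score over distance and contact duration used to label a contact. The constants and thresholds are illustrative, not the study's calibrated values.

```python
# RSSI-to-distance inversion plus sigmoid risk labeling (assumed constants).
import math

def rssi_to_distance(rssi_dbm: float, tx_power_dbm: float = -59.0,
                     path_loss_exp: float = 2.0) -> float:
    """Invert RSSI = tx_power - 10*n*log10(d); constants are assumptions."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

def risk_score(distance_m: float, duration_min: float,
               d_thresh_m: float = 2.0, t_thresh_min: float = 15.0) -> float:
    """Sigmoid rising as contacts get closer and longer than the thresholds."""
    z = (d_thresh_m - distance_m) + (duration_min - t_thresh_min) / t_thresh_min
    return 1 / (1 + math.exp(-z))

contact = {"rssi": -72.0, "minutes": 25.0}
d = rssi_to_distance(contact["rssi"])
score = risk_score(d, contact["minutes"])
print(f"distance≈{d:.1f} m, risk={score:.2f} ->",
      "high" if score >= 0.5 else "low")
```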


Subjects
COVID-19 , Humans , SARS-CoV-2 , Smartphone , Software , Wireless Technology
11.
Sensors (Basel) ; 15(2): 4430-69, 2015 Feb 13.
Article in English | MEDLINE | ID: mdl-25688592

ABSTRACT

The staggering growth in smartphone and wearable device use has led to massive-scale generation of personal (user-specific) data. To explore, analyze, and extract useful information and knowledge from this deluge of personal data, one has to leverage these devices as data-mining platforms in ubiquitous, pervasive, and big data environments. This study presents a personal ecosystem in which all computational resources, communication facilities, and storage and knowledge-management systems are available in the user's proximity. An extensive review of recent literature has been conducted and a detailed taxonomy is presented. Performance evaluation metrics and their empirical evidence are also examined. Finally, we highlight future research directions and potentially emerging application areas for personal data mining using smartphones and wearable devices.
