1 - 8 of 8
1.
PLoS One ; 19(2): e0296392, 2024.
Article En | MEDLINE | ID: mdl-38408070

The quest for energy efficiency (EE) in multi-tier Heterogeneous Networks (HetNets) takes place against a backdrop of surging high-speed data demands and the rapid proliferation of wireless devices. The analysis of existing literature underscores the need for more comprehensive strategies to realize genuinely energy-efficient HetNets. This research work contributes by employing a systematic methodology built on a Poisson Point Process (PPP) model. This model facilitates the assessment of network performance by considering the spatial distribution of network elements. The stochastic nature of the PPP allows for a realistic representation of the random spatial deployment of base stations and users in multi-tier HetNets. Additionally, an analytical framework for Quality of Service (QoS) provision based on D-DOSS simplifies the understanding of user-base station relationships and offers essential performance metrics. Moreover, an optimization problem formulation, considering coverage, energy maximization, and delay minimization constraints, aims to strike a balance between key network attributes. This research not only addresses crucial challenges in creating EE HetNets but also lays a foundation for future advancements in wireless network design, operation, and management, ultimately benefiting network operators and end-users alike amidst the growing demand for high-speed data and the increasing prevalence of wireless devices. The proposed D-DOSS approach not only offers insights for the systematic design and analysis of EE HetNets but also outperforms other state-of-the-art techniques. The improvement in energy efficiency ranges from 67% (minimum) to 98% (maximum), demonstrating the effectiveness of the proposed strategy in achieving higher energy efficiency than existing strategies. This work establishes a strong foundation for the evolution of energy-efficient HetNets, and the methodology employed ensures a comprehensive understanding of the complex interplay of network dynamics and user requirements in a multi-tiered environment.
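The abstract models base-station and user locations with a Poisson Point Process (PPP). As a hedged illustration only, the following Python sketch shows how a homogeneous PPP deployment of a two-tier HetNet is commonly sampled; the intensities, region size, and function name are hypothetical and do not reproduce the paper's D-DOSS framework.

```python
import numpy as np

def sample_ppp_basestations(intensity, width, height, rng=None):
    """Sample base-station locations from a homogeneous Poisson Point Process.

    intensity: expected number of base stations per unit area (assumed value).
    width, height: dimensions of the rectangular deployment region.
    """
    rng = np.random.default_rng() if rng is None else rng
    # The number of points is Poisson-distributed with mean intensity * area.
    n = rng.poisson(intensity * width * height)
    # Conditioned on n, the points are uniformly distributed over the region.
    xs = rng.uniform(0.0, width, size=n)
    ys = rng.uniform(0.0, height, size=n)
    return np.column_stack((xs, ys))

# Illustrative two-tier HetNet: sparse macro cells and denser small cells.
macro = sample_ppp_basestations(intensity=1e-6, width=5000.0, height=5000.0)
small = sample_ppp_basestations(intensity=1e-5, width=5000.0, height=5000.0)
print(len(macro), "macro-cell and", len(small), "small-cell base stations sampled")
```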


Computer Communication Networks , Wireless Technology , Computer Simulation , Conservation of Energy Resources , Physical Phenomena
2.
Sensors (Basel) ; 23(4)2023 Feb 09.
Article En | MEDLINE | ID: mdl-36850563

Cloud computing (CC), with its benefits and opportunities, is among the fastest growing technologies in the computer industry. Cloud computing's challenges include resource allocation, security, quality of service, availability, privacy, data management, performance compatibility, and fault tolerance. Fault tolerance (FT) refers to a system's ability to continue performing its intended task in the presence of defects. Fault-tolerance challenges include heterogeneity and a lack of standards, the need for automation, cloud downtime reliability, and consideration of recovery point objectives, recovery time objectives, and cloud workload. The proposed research includes machine learning (ML) algorithms such as naïve Bayes (NB), library support vector machine (LibSVM), multinomial logistic regression (MLR), sequential minimal optimization (SMO), K-nearest neighbor (KNN), and random forest (RF), as well as a fault-tolerance method known as delta-checkpointing, to achieve higher accuracy, lower fault-prediction error, and greater reliability. Furthermore, the secondary data were collected from an experimental high-performance computing (HPC) system at the Swiss Federal Institute of Technology (ETH) Zurich, and the primary data were generated using virtual machines (VMs) to select the best machine learning classifier. In this article, the secondary and primary data were divided into two split ratios of 80/20 and 70/30, respectively, and 5-fold cross-validation was used to identify higher accuracy and lower fault-prediction error in terms of true, false, repair, and failure states of virtual machines. Secondary data results show that naïve Bayes performed exceptionally well on CPU-Mem mono and multi blocks, and sequential minimal optimization performed very well on HDD mono and multi blocks, in terms of accuracy and fault prediction. Primary data results revealed that random forest performed very well in terms of accuracy and fault prediction but with poor time complexity, whereas sequential minimal optimization has good time complexity with only minor differences from random forest in accuracy and fault prediction. We therefore decided to modify sequential minimal optimization. Finally, the modified sequential minimal optimization (MSMO) algorithm, combined with the fault-tolerance delta-checkpointing (D-CP) method, is proposed to improve accuracy, fault-prediction error, and reliability in cloud computing.
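The abstract compares several ML classifiers under 80/20 and 70/30 splits with 5-fold cross-validation. Below is a minimal sketch of that style of comparison, assuming scikit-learn and a synthetic placeholder dataset standing in for the VM fault logs; the paper's delta-checkpointing and modified SMO are not reproduced here.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

# Placeholder data standing in for the VM fault records described in the abstract.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# 80/20 split as in the study; a 70/30 split would use test_size=0.3.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

classifiers = {
    "naive Bayes": GaussianNB(),
    "SVM (SMO-style solver)": SVC(kernel="rbf"),
    "random forest": RandomForestClassifier(n_estimators=100, random_state=0),
}

for name, clf in classifiers.items():
    # 5-fold cross-validation on the training portion, as in the abstract.
    scores = cross_val_score(clf, X_train, y_train, cv=5)
    print(f"{name}: mean CV accuracy = {scores.mean():.3f}")
```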

3.
Sensors (Basel) ; 22(21)2022 Nov 07.
Article En | MEDLINE | ID: mdl-36366269

Rice is one of the vital foods consumed in most countries throughout the world. To estimate the yield, crop counting is used to indicate improper growth, identify loam land, and control weeds. As demand for food supplies increases, it is becoming necessary to grow crops healthily, precisely, and proficiently. Traditional counting methods have numerous disadvantages: they involve long delays, are highly sensitive, and are easily disturbed by noise. In this research, rice plants are detected and counted using an unmanned aerial vehicle (UAV) and aerial images combined with a geographic information system (GIS). The technique is applied to a forty-acre rice field in Tando Adam, Sindh, Pakistan. To validate the performance of the proposed system, the obtained results are compared with standard plant-count techniques and verified by an agronomist after soil testing and monitoring of the rice crop count in each acre. The results show that the proposed system is precise, detects rice plants accurately, differentiates them from other objects, and estimates soil health based on plant-count data; however, in the case of clusters, counting is performed in semi-automated mode.
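The paper's GIS-based UAV pipeline is not detailed in this abstract, so the following is only a rough sketch, assuming OpenCV, of how plants can be counted in an aerial image by thresholding vegetation and counting connected components; the colour thresholds and minimum blob area are hypothetical, and touching plants would merge, which is why clusters need semi-automated handling.

```python
import cv2
import numpy as np

def count_plants(image_path, green_lower=(30, 40, 40), green_upper=(90, 255, 255), min_area=25):
    """Rough plant count from a UAV image: threshold vegetation, then count blobs."""
    img = cv2.imread(image_path)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    # Keep green (vegetation) pixels; thresholds are illustrative and scene dependent.
    mask = cv2.inRange(hsv,
                       np.array(green_lower, dtype=np.uint8),
                       np.array(green_upper, dtype=np.uint8))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    # Each connected component above min_area is treated as one plant.
    n_labels, _, stats, _ = cv2.connectedComponentsWithStats(mask)
    return sum(1 for i in range(1, n_labels) if stats[i, cv2.CC_STAT_AREA] >= min_area)
```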


Oryza , Soil , Geographic Information Systems , Crops, Agricultural , Plant Weeds
4.
Comput Intell Neurosci ; 2022: 4936748, 2022.
Article En | MEDLINE | ID: mdl-35707203

In today's competitive world, software organizations are moving towards global software development (GSD). This became even more significant during the COVID-19 pandemic, when team members residing in different geographical locations and from different cultures had to work from home, as travel was restricted, to carry out their tasks and responsibilities. These teams are distributed in nature and work on the same set of goals and objectives. Some of the key challenges software practitioners face in a GSD environment are cultural differences, communication issues, the use of different software models, temporal and spatial distance, and risk factors. Risk can be considered the biggest of these challenges, yet few researchers have addressed risks related to time, cost, and resources. In this research paper, a comprehensive analysis of software project risk factors in the GSD environment has been performed. Based on the literature review, 54 risk factors were identified in the context of software development. These were further classified by practitioners into three dimensions, i.e., time, cost, and resource. A Pareto analysis was performed to discover the most important risk factors, which could have an adverse impact on software projects. Furthermore, a modified firefly algorithm was designed and implemented to evaluate and prioritize the pertinent risk factors obtained after the Pareto analysis. All important risks have been prioritized according to their individual fitness values. The top three risks are "failure to provide resources," "cultural differences of participants," and "inadequately trained development team members."
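As a hedged illustration of the Pareto step described above (the modified firefly algorithm is not reproduced), the following sketch selects the risk factors that account for roughly 80% of total impact; the impact scores and factor subset are hypothetical.

```python
def pareto_select(risk_scores, threshold=0.8):
    """Return the risk factors that account for the first `threshold` share of total impact."""
    total = sum(risk_scores.values())
    selected, cumulative = [], 0.0
    # Walk the factors from highest to lowest impact, accumulating their share.
    for name, score in sorted(risk_scores.items(), key=lambda kv: kv[1], reverse=True):
        selected.append(name)
        cumulative += score / total
        if cumulative >= threshold:
            break
    return selected

# Hypothetical impact scores for a few of the 54 identified risk factors.
risks = {
    "failure to provide resources": 9.1,
    "cultural differences of participants": 8.4,
    "inadequately trained development team members": 7.9,
    "communication issues": 4.2,
    "temporal distance": 2.1,
}
print(pareto_select(risks))
```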


COVID-19 , Pandemics , Algorithms , Humans , Risk Factors , Software
5.
Front Public Health ; 10: 862497, 2022.
Article En | MEDLINE | ID: mdl-35493354

Background and Objective: Viral hepatitis is a major public health concern on a global scale. It predominantly affects the world's least developed countries, and the most endemic regions are resource constrained, with a low human development index. Chronic hepatitis can lead to cirrhosis, liver failure, cancer, and eventually death. Early diagnosis and treatment of hepatitis infection can help to reduce disease burden and transmission to those at risk of infection or reinfection, and screening is critical for meeting the WHO's 2030 targets. Consequently, automated systems are needed for the reliable prediction of hepatitis illness. When applied to the prediction of hepatitis using imbalanced test datasets, machine learning (ML) classifiers and known methodologies for encoding categorical data have demonstrated a wide range of unexpected results. Early research also made use of an artificial neural network to identify features without first gaining a thorough understanding of the sequence data. Methods: To support accurate binary classification of diagnosis (survival or mortality) in patients with severe hepatitis, this paper proposes a deep learning-based decision support system (DSS) that makes use of bidirectional long short-term memory (BiLSTM). Balanced data were used to predict hepatitis with the BiLSTM model. Results: In contrast to previous investigations, the trial results of the proposed model were encouraging: 95.08% accuracy, 94% precision, 93% recall, and a 93% F1-score. Conclusions: In the field of hepatitis detection, the use of a BiLSTM model for classification outperforms current methods by a significant margin in terms of accuracy.
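A minimal sketch, assuming TensorFlow/Keras, of a BiLSTM binary classifier of the kind the abstract describes; the input shape, layer sizes, and training settings are placeholders rather than the paper's exact DSS configuration.

```python
import tensorflow as tf

# Placeholder dimensions: a short sequence of clinical attributes per patient;
# the paper's exact preprocessing and feature set are not reproduced here.
n_timesteps, n_features = 19, 1

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_timesteps, n_features)),
    # Bidirectional LSTM reads the feature sequence in both directions.
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
    tf.keras.layers.Dropout(0.3),
    # Single sigmoid unit for the binary outcome (survival vs. mortality).
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.Precision(), tf.keras.metrics.Recall()])
model.summary()
```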


Algorithms , Hepatitis , Humans , Machine Learning , Neural Networks, Computer , Public Health
6.
Comput Math Methods Med ; 2022: 8691646, 2022.
Article En | MEDLINE | ID: mdl-35126641

Task scheduling in parallel multiple sequence alignment (MSA) through improved dynamic programming optimization speeds up alignment processing. The increased importance of matching multiple sequences also calls for the use of parallel processor systems. This dynamic programming algorithm provides improved task scheduling for parallel MSA. Specifically, the alignment of several tertiary-structured proteins is more computationally complex than simple word-based MSA, and parallel task processing is computationally more efficient for protein structure-based superposition. The basic condition for the application of dynamic programming is also fulfilled, because the task scheduling problem has multiple possible solutions or options. Search-space reduction for faster processing of this algorithm is carried out through a greedy strategy. Better results are ensured through recursive and iterative greedy approaches, despite their computational expense. Optimal scheduling schemes show better performance on heterogeneous resources using CPU or GPU.
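As a hedged illustration of greedy task scheduling on heterogeneous resources (not the paper's exact dynamic-programming formulation), the following sketch assigns alignment tasks to processors largest-first, always onto the earliest-free processor; the task costs and processor speeds are hypothetical.

```python
import heapq

def greedy_schedule(task_costs, processor_speeds):
    """Greedy (longest-task-first) assignment of alignment tasks to heterogeneous processors.

    task_costs: estimated work per alignment task.
    processor_speeds: relative speed of each processor (e.g. CPU cores, GPU).
    Returns the task-to-processor assignment and the resulting makespan.
    """
    # Min-heap of (current finish time, processor index).
    heap = [(0.0, p) for p in range(len(processor_speeds))]
    heapq.heapify(heap)
    assignment = {}
    # Schedule the largest tasks first, always onto the earliest-free processor.
    for task, cost in sorted(enumerate(task_costs), key=lambda kv: kv[1], reverse=True):
        finish, proc = heapq.heappop(heap)
        finish += cost / processor_speeds[proc]
        assignment[task] = proc
        heapq.heappush(heap, (finish, proc))
    makespan = max(t for t, _ in heap)
    return assignment, makespan

# Hypothetical alignment-task costs; two equal CPUs plus one faster GPU-like resource.
print(greedy_schedule([5, 3, 8, 2, 7, 4], processor_speeds=[1.0, 1.0, 4.0]))
```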


Algorithms , Computational Biology/methods , Sequence Alignment/methods , Computational Biology/statistics & numerical data , Humans , Sequence Alignment/statistics & numerical data , Software
7.
Sensors (Basel) ; 21(19)2021 Oct 02.
Article En | MEDLINE | ID: mdl-34640908

Fifth-generation (5G) communication technology is intended to offer higher data rates, outstanding user coverage, lower power consumption, and extremely short latency. Such cellular networks will implement a diverse multi-layer model comprising device-to-device networks, macro-cells, and different categories of small cells to serve customers with the desired quality of service (QoS). This multi-layer model has motivated several studies that address interference management and resource allocation in 5G networks. With the growing need for cellular service and the limited resources to provide it, capably handling network traffic and operation has become a problem of resource distribution. One of the most serious problems is alleviating congestion in the network to support better QoS. However, although a limited number of review papers have been written on resource distribution, no review papers have been written specifically on 5G resource allocation. Hence, this article analyzes the issue of resource allocation by classifying the various resource allocation schemes in 5G reported in the literature and assessing their ability to enhance service quality. This survey bases its discussion on the metrics used to evaluate network performance. After consideration of the current evidence on resource allocation methods in 5G, the review aims to empower scholars by suggesting future research areas on which to focus.


Resource Allocation , Wireless Technology
8.
Comput Intell Neurosci ; 2021: 2922728, 2021.
Article En | MEDLINE | ID: mdl-35198017

The demand for global software development is growing. The unavailability of software experts in a single place or country is one reason for the increased scope of global software development. Software developers located in different parts of the world, with the diverse skills necessary for successful completion of a project, play a critical role in the field of software development. Using the skills and expertise of software developers around the world, one could get any component developed or any IT-related issue resolved. The best software skills and tools are dispersed across the globe, but integrating these skills and tools and making them work to solve real-world problems is a challenging task. The discipline of risk management offers alternative strategies to manage the risks that software experts face in today's competitive world. This research is an effort to predict the risks related to time, cost, and resources faced by distributed teams in a global software development environment. To examine the relative effect of these factors, neural network approaches such as Levenberg-Marquardt, Bayesian Regularization, and Scaled Conjugate Gradient have been implemented to predict the responses of risks related to project time, cost, and resources in global software development. A comparative analysis of these three algorithms is also performed to determine which achieves the highest accuracy. The findings of this study show that Bayesian Regularization performed very well in terms of the validation mean squared error (MSE) criterion compared with the Levenberg-Marquardt and Scaled Conjugate Gradient approaches.
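A minimal sketch of how a validation-MSE comparison of the kind described above could be set up. Note that scikit-learn does not ship Levenberg-Marquardt, Bayesian Regularization, or Scaled Conjugate Gradient trainers (those correspond to MATLAB's trainlm/trainbr/trainscg), so generic solvers stand in purely to illustrate the comparison; the data are synthetic placeholders, not the study's risk dataset.

```python
import numpy as np
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Placeholder data: features describing a GSD project, target = risk response score.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = X @ rng.normal(size=10) + rng.normal(scale=0.1, size=500)

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

# Generic solvers stand in for the MATLAB training algorithms named in the abstract.
for solver in ("lbfgs", "adam", "sgd"):
    model = MLPRegressor(hidden_layer_sizes=(20,), solver=solver,
                         max_iter=2000, random_state=0)
    model.fit(X_train, y_train)
    # Compare models on the held-out validation set using MSE, as the study does.
    print(solver, "validation MSE:", mean_squared_error(y_val, model.predict(X_val)))
```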


Neural Networks, Computer , Software , Algorithms , Bayes Theorem
...