Results 1 - 14 of 14
1.
PeerJ Comput Sci ; 10: e1853, 2024.
Article in English | MEDLINE | ID: mdl-38855208

ABSTRACT

Background: Concrete, a fundamental construction material, is a significant consumer of virgin resources, including sand, gravel, crushed stone, and fresh water, and its production consumes approximately 1.6 billion metric tons of Portland and modified Portland cement annually. Moreover, addressing extreme conditions with highly nonlinear behavior requires laborious calibration procedures in structural analysis and design methodologies, and these methods are difficult to execute in practice. Machine learning (ML) may be a viable option for reducing this time and effort. Materials and Methods: A set of keywords was designed to search the PubMed search engine, with filters excluding studies published before 2015. Following the PRISMA guidelines, studies were selected, and after screening, a total of 42 studies were summarized. The PRISMA guidelines provide a structured framework to ensure transparency, accuracy, and completeness in reporting the methods and results of systematic reviews and meta-analyses. Review research often lacks the ability to methodically and accurately connect disparate parts of the literature; knowledge mapping, co-citation, and co-occurrence analyses are among the more challenging aspects of such work. Using these data, we determined which regions were most active in researching machine learning applications for concrete, which authors were most influential in terms of both output and citations, and which articles garnered the most citations overall. Conclusion: ML has become a viable prediction method for a wide variety of applications in the structural engineering industry, and it may therefore serve as a potential successor to the empirical models routinely used in the design of concrete structures. The non-ML structural engineering community can use this overview of ML methods, fundamental principles, access codes, ML libraries, and gathered datasets to construct its own ML models for practical applications. Structural engineering practitioners and researchers may benefit from this article's compilation of concrete ML studies and structural engineering datasets. The construction industry stands to benefit from machine learning in terms of cost, time, and labor savings. The statistical and graphical representation of contributing authors and participants in this work may facilitate future collaborations and the sharing of novel ideas and approaches among researchers and industry professionals. A limitation of this systematic review is that it covers only studies indexed in the PubMed database.
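
As a rough illustration of the kind of data-driven surrogate the review points to, the following Python sketch fits a regression model to a synthetic concrete-mix dataset; the feature names, value ranges, and target relation are placeholders, not data from any of the reviewed studies.

```python
# Hypothetical sketch: a data-driven surrogate for an empirical concrete-strength model.
# The synthetic mix-design features and target are placeholders, not the reviewed datasets.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
# Placeholder features: cement, water, fine aggregate, coarse aggregate, age (days)
X = rng.uniform([200, 140, 600, 900, 3], [500, 220, 900, 1200, 90], size=(500, 5))
# Placeholder target: compressive strength with a crude water/cement dependence plus noise
y = 120 * (X[:, 0] / X[:, 1]) ** 0.5 - 0.01 * X[:, 3] + 0.2 * X[:, 4] + rng.normal(0, 3, 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out mixes:", round(r2_score(y_te, model.predict(X_te)), 3))
```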

2.
PLoS One ; 19(2): e0296392, 2024.
Article in English | MEDLINE | ID: mdl-38408070

ABSTRACT

The quest for energy efficiency (EE) in multi-tier Heterogeneous Networks (HetNets) is examined within the context of surging high-speed data demands and the rapid proliferation of wireless devices. The analysis of existing literature underscores the need for more comprehensive strategies to realize genuinely energy-efficient HetNets. This work contributes by employing a systematic methodology built on a Poisson point process (PPP)-based spatial model. This model facilitates the assessment of network performance by considering the spatial distribution of network elements, and the stochastic nature of the PPP allows for a realistic representation of the random spatial deployment of base stations and users in multi-tier HetNets. Additionally, an analytical framework for Quality of Service (QoS) provision based on D-DOSS simplifies the understanding of user-base station relationships and offers essential performance metrics. Moreover, an optimization problem is formulated with coverage, energy maximization, and delay minimization constraints, aiming to strike a balance between key network attributes. This research not only addresses crucial challenges in creating EE HetNets but also lays a foundation for future advances in wireless network design, operation, and management, ultimately benefiting network operators and end-users alike amid the growing demand for high-speed data and the increasing prevalence of wireless devices. The proposed D-DOSS approach offers insights for the systematic design and analysis of EE HetNets and outperforms other state-of-the-art techniques: the improvement in energy efficiency ranges from 67% to 98%, demonstrating the effectiveness of the proposed strategy in achieving higher energy efficiency than existing strategies. This work thus establishes a strong foundation for the evolution of energy-efficient HetNets, and the methodology employed ensures a comprehensive understanding of the complex interplay of network dynamics and user requirements in a multi-tiered environment.
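
As an illustration of the spatial model the abstract relies on, the sketch below simulates a two-tier base-station layout as homogeneous Poisson point processes and measures each user's distance to its nearest base station; the densities, region size, and user count are assumed values, not the paper's parameters.

```python
# Illustrative sketch: two-tier base-station deployment drawn from homogeneous PPPs.
# All densities and the simulation region are assumptions, not the paper's settings.
import numpy as np

rng = np.random.default_rng(1)
area_km2, macro_density, small_density = 100.0, 0.5, 5.0   # base stations per km^2 (assumed)
side = np.sqrt(area_km2)

def draw_ppp(density):
    n = rng.poisson(density * area_km2)        # Poisson-distributed number of points
    return rng.uniform(0, side, size=(n, 2))   # uniform positions given the count

macro_bs, small_bs = draw_ppp(macro_density), draw_ppp(small_density)
users = rng.uniform(0, side, size=(200, 2))

# Distance from each user to its nearest base station in either tier
all_bs = np.vstack([macro_bs, small_bs])
d_nearest = np.min(np.linalg.norm(users[:, None, :] - all_bs[None, :, :], axis=2), axis=1)
print(f"{len(macro_bs)} macro + {len(small_bs)} small cells; "
      f"mean nearest-BS distance = {d_nearest.mean():.2f} km")
```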


Subject(s)
Computer Communication Networks , Wireless Technology , Computer Simulation , Conservation of Energy Resources , Physical Phenomena
3.
Humanit Soc Sci Commun ; 10(1): 311, 2023.
Article in English | MEDLINE | ID: mdl-37325188

ABSTRACT

This study examines the impact of artificial intelligence (AI) on loss of decision-making, laziness, and privacy concerns among university students in Pakistan and China. Like other sectors, education has adopted AI technologies to address modern-day challenges; AI investment is expected to grow to USD 253.82 million from 2021 to 2025. Worryingly, however, researchers and institutions across the globe praise the positive role of AI while ignoring its concerns. This study is based on a qualitative methodology using SmartPLS for the data analysis. Primary data were collected from 285 students from different universities in Pakistan and China, with a purposive sampling technique used to draw the sample from the population. The data analysis findings show that AI significantly contributes to the loss of human decision-making and makes humans lazy; it also affects security and privacy. The findings show that 68.9% of laziness in humans, 68.6% of personal privacy and security issues, and 27.7% of the loss of decision-making are due to the impact of artificial intelligence in Pakistani and Chinese society. From this, it was observed that human laziness is the area most affected by AI. This study therefore argues that significant preventive measures are necessary before implementing AI technology in education; accepting AI without addressing the major human concerns would be like summoning the devil. Concentrating on the justified design, deployment, and use of AI in education is recommended to address these issues.

4.
PeerJ Comput Sci ; 9: e1167, 2023.
Article in English | MEDLINE | ID: mdl-37346729

ABSTRACT

Background: Agriculture plays a vital role in a country's economy and in human society. Rice production is mainly focused on financial gains, as rice is in demand worldwide. Protecting rice fields from pests during the seedling stage and after production is a challenging research problem. Identifying pests at the right time is crucial so that preventive measures appropriate to the crop's stage can be taken. In this article, a new deep learning-based pest detection model is proposed. The proposed system can detect two types of rice pests (stem borer and hispa) using an unmanned aerial vehicle (UAV). Methodology: The image is captured in real time by a camera mounted on the UAV and then processed by filtering, labeling, and a segmentation-based color-thresholding technique that converts the image to grayscale for extracting the region of interest. This article also provides a rice pest dataset and a comparative analysis of existing pre-trained models. The proposed YO-CNN approach builds on the results of previous models, on the premise that a smaller network can perform better than a larger one; using additional layers has the advantage of preventing memorization, and it provides more precise results than existing techniques. Results: The main contribution of the research is the implementation of a new modified deep learning model named the YOLO convolutional neural network (YO-CNN), which achieves an accuracy of up to 0.980. It can be used to reduce rice wastage during production by monitoring pests regularly. The technique can further be used for targeted spraying, which saves inputs (fertilizer, water, and pesticide) and reduces the adverse effects of their improper use on the environment and human beings.
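
A rough sketch of the pre-processing step described above (filtering, color thresholding, and grayscale conversion of the region of interest) is shown below; the HSV range and file name are illustrative placeholders, not the paper's calibrated values.

```python
# Rough sketch of the described pre-processing: filter, color-threshold, extract ROI,
# convert to grayscale. The input file and HSV band are hypothetical placeholders.
import cv2
import numpy as np

img = cv2.imread("uav_frame.jpg")                      # hypothetical UAV frame
img = cv2.GaussianBlur(img, (5, 5), 0)                 # noise filtering

hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
lower, upper = np.array([10, 60, 60]), np.array([35, 255, 255])  # assumed pest hue band
mask = cv2.inRange(hsv, lower, upper)                  # color thresholding

roi = cv2.bitwise_and(img, img, mask=mask)             # keep only candidate regions
gray_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)       # grayscale ROI for the detector
cv2.imwrite("roi_gray.png", gray_roi)
```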

5.
PLoS One ; 18(4): e0284209, 2023.
Article in English | MEDLINE | ID: mdl-37053173

ABSTRACT

Cloud computing, with the benefits and opportunities it offers, is among the fastest-growing technologies in the computer industry; however, it also presents difficulties and issues that must be addressed for more users to accept and adopt it. The proposed research compares machine learning (ML) algorithms, namely Naïve Bayes (NB), Library Support Vector Machine (LibSVM), Multinomial Logistic Regression (MLR), Sequential Minimal Optimization (SMO), K-Nearest Neighbor (KNN), and Random Forest (RF), to determine which classifier gives better accuracy and lower fault-prediction error. In this research, the secondary data results (CPU-Mem Mono) give the highest accuracy and lowest fault-prediction error with the NB classifier in terms of 80/20 (77.01%), 70/30 (76.05%), and 5-fold cross-validation (74.88%), and (CPU-Mem Multi) in terms of 80/20 (89.72%), 70/30 (90.28%), and 5-fold cross-validation (92.83%). Furthermore, on (HDD Mono) the SMO classifier gives the highest accuracy and lowest fault-prediction error in terms of 80/20 (87.72%), 70/30 (89.41%), and 5-fold cross-validation (88.38%), and (HDD Multi) in terms of 80/20 (93.64%), 70/30 (90.91%), and 5-fold cross-validation (88.20%). The primary data results show that the RF classifier gives the highest accuracy and lowest fault-prediction error in terms of 80/20 (97.14%), 70/30 (96.19%), and 5-fold cross-validation (95.85%), but its algorithm complexity (0.17 seconds) is not good. With 80/20 (95.71%), 70/30 (95.71%), and 5-fold cross-validation (95.71%), SMO has the second-highest accuracy and lowest fault-prediction error, and its algorithm complexity is good (0.3 seconds). The difference in accuracy and fault-prediction error between RF and SMO is only 0.13%, and the difference in time complexity is 14 seconds. We therefore decided to modify SMO. Finally, the Modified Sequential Minimal Optimization (MSMO) algorithm is proposed to achieve the highest accuracy and lowest fault-prediction error in terms of 80/20 (96.42%), 70/30 (96.42%), and 5-fold cross-validation (96.50%).
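
The evaluation protocol described above (hold-out splits plus 5-fold cross-validation across several classifiers) can be sketched as follows; the synthetic data stands in for the CPU, memory, and HDD traces, which are not reproduced here.

```python
# Minimal sketch of the evaluation protocol: hold-out split plus 5-fold cross-validation
# for several classifiers. Synthetic data replaces the CPU/memory/HDD traces.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=12, random_state=0)
models = {"NB": GaussianNB(), "SVM": SVC(), "KNN": KNeighborsClassifier(),
          "RF": RandomForestClassifier(random_state=0)}

for name, clf in models.items():
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)  # 80/20
    holdout = clf.fit(X_tr, y_tr).score(X_te, y_te)
    cv = cross_val_score(clf, X, y, cv=5).mean()                                     # 5-fold CV
    print(f"{name}: 80/20 accuracy = {holdout:.3f}, 5-fold accuracy = {cv:.3f}")
```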


Subject(s)
Algorithms , Machine Learning , Bayes Theorem , Random Forest , Support Vector Machine
6.
Sensors (Basel) ; 23(4)2023 Feb 09.
Article in English | MEDLINE | ID: mdl-36850563

ABSTRACT

Cloud computing (CC), with its benefits and opportunities, is among the fastest-growing technologies in the computer industry. Cloud computing's challenges include resource allocation, security, quality of service, availability, privacy, data management, performance compatibility, and fault tolerance. Fault tolerance (FT) refers to a system's ability to continue performing its intended task in the presence of defects. Fault-tolerance challenges include heterogeneity and a lack of standards, the need for automation, cloud downtime reliability, and consideration of recovery point objectives, recovery time objectives, and cloud workload. The proposed research includes machine learning (ML) algorithms such as naïve Bayes (NB), library support vector machine (LibSVM), multinomial logistic regression (MLR), sequential minimal optimization (SMO), K-nearest neighbor (KNN), and random forest (RF), as well as a fault-tolerance method known as delta-checkpointing, to achieve higher accuracy, lower fault-prediction error, and better reliability. Furthermore, the secondary data were collected from the homonymous, experimental high-performance computing (HPC) system at the Swiss Federal Institute of Technology (ETH), Zurich, and the primary data were generated using virtual machines (VMs) to select the best machine learning classifier. In this article, the secondary and primary data were divided into two split ratios of 80/20 and 70/30, respectively, and 5-fold cross-validation was used to identify higher accuracy and lower fault-prediction error in terms of the true, false, repair, and failure states of virtual machines. Secondary data results show that naïve Bayes performed exceptionally well on CPU-Mem mono and multi blocks, and sequential minimal optimization performed very well on HDD mono and multi blocks, in terms of accuracy and fault prediction. Primary data results revealed that random forest performed very well in terms of accuracy and fault prediction but had poor time complexity, whereas sequential minimal optimization had good time complexity with only minor differences from random forest in accuracy and fault prediction. We therefore decided to modify sequential minimal optimization. Finally, the modified sequential minimal optimization (MSMO) algorithm combined with the delta-checkpointing (D-CP) fault-tolerance method is proposed to improve accuracy, fault-prediction error, and reliability in cloud computing.
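
The delta-checkpointing idea named above can be illustrated with a minimal sketch in which, after a full checkpoint, only the entries that changed are persisted; this is a conceptual illustration, not the authors' D-CP implementation.

```python
# Conceptual sketch of delta-checkpointing: store a full checkpoint once, then persist
# only the values that changed since. Illustrative only, not the paper's implementation.
import copy

def delta_checkpoint(previous_state, current_state):
    """Return only the entries that changed since the last checkpoint."""
    return {k: v for k, v in current_state.items() if previous_state.get(k) != v}

def restore(full_checkpoint, deltas):
    """Rebuild the latest state from the full checkpoint plus the delta chain."""
    state = copy.deepcopy(full_checkpoint)
    for d in deltas:
        state.update(d)
    return state

vm_state_t0 = {"cpu": 0.42, "mem": 0.61, "disk": 0.30}     # full checkpoint
vm_state_t1 = {"cpu": 0.55, "mem": 0.61, "disk": 0.30}     # only cpu changed
delta = delta_checkpoint(vm_state_t0, vm_state_t1)
print(delta)                                               # {'cpu': 0.55}
print(restore(vm_state_t0, [delta]))                       # latest state recovered
```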

7.
Comput Intell Neurosci ; 2023: 5183062, 2023.
Article in English | MEDLINE | ID: mdl-36654727

ABSTRACT

LoRa is an ISM-band-based low-power wide-area network (LPWAN) communication protocol. With a network range of approximately 20 kilometers or more at transmit powers below 14 dBm, it has been extensively documented and adopted in academia and industry. LoRa connectivity defines a public platform and enables users to create independent low-power wireless connections without relying on external infrastructure, and it has therefore gained considerable interest from scholars and the market. The two fundamental components of this platform are LoRaWAN and LoRa PHY. The open LoRaWAN component of the technology describes the network model, connectivity procedures, the frequency ranges in which it can operate, and the types of interconnected devices. In contrast, the LoRa PHY component is proprietary and specifies the modulation strategy being utilized and its attributes. Several LoRa platforms are now available, but several technical difficulties must still be overcome to create usable LoRa systems, such as connection management, resource allocation, reliable communication, and security. This study presents a thorough overview of LoRa networking, covering the technological difficulties in setting up LoRa infrastructures and the current solutions. Several outstanding challenges of LoRa communication are presented based on our thorough review of the available solutions. The report aims to stimulate additional research toward enhancing LoRa network capacity and enabling more realistic deployments.
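
As a back-of-the-envelope illustration of the LoRa PHY attributes mentioned above, the sketch below computes symbol time and raw bit rate from the standard chirp-spread-spectrum relations; the spreading factors, bandwidth, and coding rate are example values, not figures from the survey.

```python
# Back-of-the-envelope LoRa PHY figures: symbol duration and raw bit rate from the
# standard chirp-spread-spectrum relations. Parameter choices are example values.
def lora_rates(sf, bw_hz, coding_rate=4 / 5):
    t_sym = (2 ** sf) / bw_hz                  # symbol duration in seconds
    bit_rate = sf * (bw_hz / 2 ** sf) * coding_rate
    return t_sym, bit_rate

for sf in (7, 9, 12):
    t_sym, rb = lora_rates(sf, bw_hz=125_000)
    print(f"SF{sf}: symbol = {t_sym * 1e3:.2f} ms, raw rate = {rb:.0f} bit/s")
```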


Subject(s)
Communication , Industry , Technology
8.
Sensors (Basel) ; 22(21)2022 Nov 07.
Article in English | MEDLINE | ID: mdl-36366269

ABSTRACT

Rice is one of the vital foods consumed in most countries throughout the world. To estimate yield, crop counting is used; it also helps indicate improper growth, identify loam land, and control weeds. As the demand for food supplies increases, it is becoming necessary to grow crops in a healthy, precise, and efficient manner. Traditional counting methods have numerous disadvantages, such as long delay times and high sensitivity, and they are easily disturbed by noise. In this research, rice plants are detected and counted using an unmanned aerial vehicle (UAV) and aerial images combined with a geographic information system (GIS). The technique is applied to forty acres of rice crop in Tando Adam, Sindh, Pakistan. To validate the performance of the proposed system, the obtained results are compared with standard plant-count techniques and verified by an agronomist after testing the soil and monitoring the rice crop count in each acre. The results show that the proposed system is precise: it detects rice crops accurately, differentiates them from other objects, and estimates soil health based on plant-counting data; however, in the case of clustered plants, counting is performed in semi-automated mode.
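
A simplified sketch of the counting step is shown below: threshold a vegetation index and count connected components as individual plants. The index threshold, minimum area, and input file are assumptions, and clustered plants would still require the semi-automated pass mentioned above.

```python
# Simplified counting sketch: threshold an excess-green index, clean the mask, and
# count connected components as plants. Thresholds and the input tile are assumed.
import cv2
import numpy as np

img = cv2.imread("paddy_tile.jpg").astype(np.float32)          # hypothetical UAV tile
b, g, r = cv2.split(img)
exg = 2 * g - r - b                                            # excess-green vegetation index
mask = (exg > 40).astype(np.uint8) * 255                       # assumed threshold

mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
n_labels, _, stats, _ = cv2.connectedComponentsWithStats(mask)
plants = sum(1 for i in range(1, n_labels) if stats[i, cv2.CC_STAT_AREA] > 25)
print("estimated plant count:", plants)
```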


Subject(s)
Oryza , Soil , Geographic Information Systems , Crops, Agricultural , Plant Weeds
9.
Diagnostics (Basel) ; 12(11)2022 Oct 26.
Article in English | MEDLINE | ID: mdl-36359438

ABSTRACT

Cardiovascular disease includes coronary artery disease (CAD), which encompasses angina and myocardial infarction (commonly known as a heart attack), and coronary heart disease (CHD), which is marked by the buildup of a waxy material called plaque inside the coronary arteries. Heart attacks remain the main cause of death worldwide, and if not treated promptly they have the potential to cause major health problems, such as diabetes. If ignored, diabetes can in turn result in a variety of health problems, including heart disease, stroke, blindness, and kidney failure. Machine learning methods can be used to identify and diagnose diabetes and other illnesses, and both diabetes and cardiovascular disease can be diagnosed using several classifier types. Naive Bayes, K-nearest neighbor (KNN), linear regression, decision trees (DT), and support vector machines (SVM) are among the classifiers previously employed, although all of these models had poor accuracy. Therefore, given the limited prior effort and poor accuracy, new research is required to diagnose diabetes and cardiovascular disease. This study developed an ensemble approach called a "stacking classifier" to improve the performance of the integrated individual classifiers and decrease the likelihood of misclassifying a single instance. Naive Bayes, KNN, Linear Discriminant Analysis (LDA), and Decision Tree (DT) are among the base classifiers used in this study, with Random Forest and SVM used as meta-classifiers. The suggested stacking classifier obtains a superior accuracy of 0.9735 when compared to current models for diagnosing diabetes, such as Naive Bayes, KNN, DT, and LDA, which achieve 0.7646, 0.7460, 0.7857, and 0.7735, respectively. Furthermore, for cardiovascular disease, when compared to current models such as KNN, NB, DT, LDA, and SVM, which achieve 0.8377, 0.8256, 0.8426, 0.8523, and 0.8472, respectively, the suggested stacking classifier performed better and obtained a higher accuracy of 0.8871.
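
The stacking arrangement described above can be sketched with scikit-learn as follows; the synthetic data is a stand-in for the study's diabetes and cardiovascular datasets, and the exact base-learner and meta-learner settings are assumptions.

```python
# Sketch of a stacking ensemble in the spirit of the abstract: NB, KNN, LDA, and a
# decision tree as base learners with a random-forest meta-learner. Synthetic data
# stands in for the study's diabetes and cardiovascular datasets.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=768, n_features=8, random_state=0)
base = [("nb", GaussianNB()), ("knn", KNeighborsClassifier()),
        ("lda", LinearDiscriminantAnalysis()), ("dt", DecisionTreeClassifier(random_state=0))]
stack = StackingClassifier(estimators=base,
                           final_estimator=RandomForestClassifier(random_state=0), cv=5)
print("stacked 5-fold accuracy:", cross_val_score(stack, X, y, cv=5).mean().round(4))
```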

10.
Front Comput Neurosci ; 16: 1001803, 2022.
Article in English | MEDLINE | ID: mdl-36405784

ABSTRACT

Cancer is one of the most prevalent diseases worldwide. Breast cancer, in which aberrant cells develop out of control, is the most prevalent cancer in women, and its detection and classification are exceedingly difficult tasks. As a result, several computational techniques, including k-nearest neighbor (KNN), support vector machine (SVM), multilayer perceptron (MLP), decision tree (DT), and genetic algorithms, have been applied to the diagnosis and classification of breast cancer. However, each method has limitations in the accuracy it can achieve. This study proposes a novel convolutional neural network (CNN) model based on the Visual Geometry Group network (VGGNet). The 16 weight layers of the existing VGGNet-16 model lead to overfitting on the breast cancer classification dataset, so this research reduces the number of layers to address the overfitting problem. Following the naming of other VGGNet variants, such as VGGNet-13 and VGGNet-19, the proposed model is called VGGNet-12. Its performance is evaluated on the breast cancer dataset and compared with CNN and LeNet models, and the simulation results show that the proposed VGGNet-12 model improves on these baselines. Overall, the experimental findings indicate that the suggested VGGNet-12 model performs well in classifying breast cancer across several metrics.
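
A hedged Keras sketch of a VGG-style network with twelve weight layers, in the spirit of the VGGNet-12 described above, is given below; the exact block layout, input size, and class count are assumptions rather than the authors' architecture.

```python
# Illustrative VGG-style network with 12 weight layers (9 convolutional + 3 dense).
# Block layout, input size, and class count are assumptions, not the paper's design.
import tensorflow as tf
from tensorflow.keras import layers, models

def conv_block(x, filters, convs):
    for _ in range(convs):
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.MaxPooling2D()(x)

inputs = tf.keras.Input(shape=(224, 224, 3))
x = conv_block(inputs, 64, 2)      # conv layers 1-2
x = conv_block(x, 128, 2)          # conv layers 3-4
x = conv_block(x, 256, 2)          # conv layers 5-6
x = conv_block(x, 512, 3)          # conv layers 7-9
x = layers.Flatten()(x)
x = layers.Dense(1024, activation="relu")(x)
x = layers.Dense(1024, activation="relu")(x)
outputs = layers.Dense(2, activation="softmax")(x)   # benign vs. malignant (assumed)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```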

11.
Front Public Health ; 10: 862497, 2022.
Article in English | MEDLINE | ID: mdl-35493354

ABSTRACT

Background and Objective: Viral hepatitis is a major public health concern on a global scale, predominantly affecting the world's least developed countries. The most endemic regions are resource-constrained, with a low human development index. Chronic hepatitis can lead to cirrhosis, liver failure, cancer, and eventually death. Early diagnosis and treatment of hepatitis infection can help to reduce the disease burden and transmission to those at risk of infection or reinfection, and screening is critical for meeting the WHO's 2030 targets. Consequently, automated systems for the reliable prediction of hepatitis illness are needed. When applied to the prediction of hepatitis using imbalanced test datasets, machine learning (ML) classifiers and known methodologies for encoding categorical data have produced a wide range of unexpected results. Early research also made use of artificial neural networks to identify features without first gaining a thorough understanding of the sequence data. Methods: To support accurate binary classification of diagnosis (survival or mortality) in patients with severe hepatitis, this paper proposes a deep learning-based decision support system (DSS) that makes use of bidirectional long short-term memory (BiLSTM). Balanced data were utilized to predict hepatitis using the BiLSTM model. Results: In contrast to previous investigations, the trial results of the suggested model were encouraging: 95.08% accuracy, 94% precision, 93% recall, and a 93% F1-score. Conclusions: In the field of hepatitis detection, the BiLSTM classification model outperforms current methods by a significant margin in terms of accuracy.
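
A minimal Keras sketch of a BiLSTM classifier of the kind described above is shown below, treating the per-patient attribute vector as a short sequence; the feature count, layer sizes, and binary target are assumptions for illustration.

```python
# Minimal BiLSTM classifier sketch: per-patient attributes treated as a short sequence,
# with a sigmoid output for survival vs. mortality. Sizes are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

n_features = 19                                    # assumed number of clinical attributes
model = models.Sequential([
    layers.Input(shape=(n_features, 1)),           # each attribute as one sequence step
    layers.Bidirectional(layers.LSTM(64, return_sequences=True)),
    layers.Bidirectional(layers.LSTM(32)),
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),         # survival vs. mortality
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.Precision(), tf.keras.metrics.Recall()])
model.summary()
```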


Subject(s)
Algorithms , Hepatitis , Humans , Machine Learning , Neural Networks, Computer , Public Health
12.
Front Public Health ; 10: 855254, 2022.
Article in English | MEDLINE | ID: mdl-35321193

ABSTRACT

Deep neural networks have made tremendous strides in the categorization of facial photos in the last several years. Due to the complexity of features, the enormous size of the picture/frame, and the severe inhomogeneity of image data, efficient face image classification using deep convolutional neural networks remains a challenge. Therefore, as data volumes continue to grow, the effective categorization of face photos in a mobile context using advanced deep learning techniques is becoming increasingly important. In the recent past, several deep learning (DL) approaches for identifying face images have been designed, many of them using convolutional neural networks (CNNs). To address the problem of face mask recognition in facial images, we propose a Depthwise Separable Convolution Neural Network based on MobileNet (DWS-based MobileNet). The proposed network utilizes depthwise separable convolution layers instead of standard 2D convolution layers. With limited datasets, the DWS-based MobileNet performs exceptionally well: it decreases the number of trainable parameters while enhancing learning performance by adopting a lightweight network. Our technique outperformed the existing state of the art when tested on benchmark datasets. Compared with the full-convolution MobileNet and baseline methods, the results of this study show that adopting the Depthwise Separable Convolution-based MobileNet significantly improves performance (accuracy = 93.14, precision = 92, recall = 92, F-score = 92).
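
The building block named above, a depthwise separable convolution (a depthwise 3x3 followed by a pointwise 1x1) replacing a full 2D convolution, can be sketched in Keras as follows; the filter counts and input shape are illustrative, not the paper's configuration.

```python
# Depthwise separable convolution block (depthwise 3x3 then pointwise 1x1), as used in
# MobileNet-style networks. Filter counts and input shape are illustrative only.
import tensorflow as tf
from tensorflow.keras import layers

def dws_block(x, filters, stride=1):
    x = layers.DepthwiseConv2D(3, strides=stride, padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    x = layers.Conv2D(filters, 1, padding="same", use_bias=False)(x)  # pointwise 1x1
    x = layers.BatchNormalization()(x)
    return layers.ReLU()(x)

inputs = tf.keras.Input(shape=(128, 128, 3))
x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(inputs)
x = dws_block(x, 64)
x = dws_block(x, 128, stride=2)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(2, activation="softmax")(x)   # mask / no-mask classes (assumed)
model = tf.keras.Model(inputs, outputs)
print("trainable parameters:", model.count_params())
```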


Subject(s)
COVID-19 , Humans , Neural Networks, Computer , Pandemics
13.
Comput Math Methods Med ; 2022: 8691646, 2022.
Article in English | MEDLINE | ID: mdl-35126641

ABSTRACT

Task scheduling in parallel multiple sequence alignment (MSA) through improved dynamic programming optimization speeds up alignment processing. The increasing importance of matching multiple sequences also calls for the use of parallel processor systems. The proposed dynamic programming algorithm improves task scheduling for parallel MSA. Specifically, the alignment of several tertiary-structured proteins is computationally more complex than simple word-based MSA, and parallel task processing is computationally more efficient for protein structure-based superposition. The basic condition for the application of dynamic programming is also fulfilled, because the task scheduling problem has multiple possible solutions or options. Search-space reduction for speedy processing of this algorithm is carried out through a greedy strategy, while performance in terms of better results is ensured through computationally expensive recursive and iterative greedy approaches. The resulting optimal scheduling schemes show better performance on heterogeneous resources using CPUs or GPUs.
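
The greedy element of the scheduling strategy can be illustrated with a toy sketch that assigns alignment tasks, longest first, to whichever processor is least loaded; the task costs and processor count are made-up values, not the paper's workload.

```python
# Toy greedy scheduling sketch: longest-processing-time-first assignment of alignment
# tasks to the least-loaded processor. Task costs and processor count are made up.
import heapq

def greedy_schedule(task_costs, n_procs):
    """Assign tasks (longest first) to the least-loaded processor; return plan and makespan."""
    loads = [(0.0, p) for p in range(n_procs)]      # (current load, processor id)
    heapq.heapify(loads)
    assignment = {p: [] for p in range(n_procs)}
    for tid, cost in sorted(enumerate(task_costs), key=lambda t: -t[1]):
        load, p = heapq.heappop(loads)              # least-loaded processor
        assignment[p].append(tid)
        heapq.heappush(loads, (load + cost, p))
    return assignment, max(l for l, _ in loads)     # makespan = heaviest processor load

costs = [9.0, 7.5, 6.0, 4.2, 3.3, 2.1, 1.0]          # hypothetical pairwise-alignment costs
plan, makespan = greedy_schedule(costs, n_procs=3)
print(plan, "makespan =", makespan)
```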


Subject(s)
Algorithms , Computational Biology/methods , Sequence Alignment/methods , Computational Biology/statistics & numerical data , Humans , Sequence Alignment/statistics & numerical data , Software
14.
Sensors (Basel) ; 21(19)2021 Oct 02.
Article in English | MEDLINE | ID: mdl-34640908

ABSTRACT

Fifth-generation (5G) communication technology is intended to offer higher data rates, an outstanding user experience, lower power consumption, and extremely short latency. Such cellular networks will implement a diverse multi-layer model comprising device-to-device networks, macro-cells, and different categories of small cells to provide customers with the desired quality of service (QoS). This multi-layer model has motivated several studies confronting interference management and resource allocation in 5G networks. With the growing need for cellular service and the limited resources to provide it, capably handling network traffic and operation has become a problem of resource distribution, and one of the most serious problems is alleviating congestion in the network in support of better QoS. However, although a limited number of review papers have been written on resource distribution, no review papers have been written specifically on 5G resource allocation. Hence, this article analyzes the issue of resource allocation by classifying the various 5G resource allocation schemes reported in the literature and assessing their ability to enhance service quality. This survey bases its discussion on the metrics used to evaluate network performance. After considering the current evidence on resource allocation methods in 5G, the review aims to empower scholars by suggesting future research areas on which to focus.


Subject(s)
Resource Allocation , Wireless Technology