Results 1 - 20 of 29
1.
Cancers (Basel) ; 16(17)2024 Aug 23.
Article in English | MEDLINE | ID: mdl-39272805

ABSTRACT

Accurate skin diagnosis through end-user applications is important for early detection and cure of severe skin diseases. However, the low quality of dermoscopic images hampers this mission, especially when hair is present in these images. This paper introduces DM-AHR, a novel, self-supervised conditional diffusion model designed specifically for the automatic generation of hairless dermoscopic images to improve the quality of skin diagnosis applications. The current research contributes in three significant ways to the field of dermatologic imaging. First, we develop a customized diffusion model that adeptly differentiates between hair and skin features. Second, we pioneer a novel self-supervised learning strategy that is specifically tailored to optimize performance for hairless imaging. Third, we introduce a new dataset, named DERMAHAIR (DERMatologic Automatic HAIR Removal Dataset), that is designed to advance and benchmark research in this specialized domain. These contributions significantly enhance the clarity of dermoscopic images, improving the accuracy of skin diagnosis procedures. We elaborate on the architecture of DM-AHR and demonstrate its effective performance in removing hair while preserving critical details of skin lesions. Our results show an enhancement in the accuracy of skin lesion analysis when compared to existing techniques. Given its robust performance, DM-AHR holds considerable promise for broader application in medical image enhancement.
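To make the generative step concrete, the following is a minimal sketch of how a conditional denoising-diffusion model of this kind is typically trained: the network learns to predict the noise added to a clean (hairless) target while being conditioned on the hairy input image. The tiny denoiser, the noise schedule, and the omission of the timestep embedding are illustrative assumptions, not DM-AHR's actual architecture.

```python
# Hedged sketch of conditional diffusion training: predict the noise added to a
# hairless target image, conditioned on the hairy input. Illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_cum = torch.cumprod(1.0 - betas, dim=0)

class TinyDenoiser(nn.Module):
    def __init__(self, channels=3):
        super().__init__()
        # input: noisy hairless image concatenated with the hairy condition image
        self.net = nn.Sequential(
            nn.Conv2d(2 * channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, x_noisy, condition, t):
        # a full denoiser would also embed the timestep t; omitted here for brevity
        return self.net(torch.cat([x_noisy, condition], dim=1))

def diffusion_loss(model, x0_hairless, condition):
    t = torch.randint(0, T, (x0_hairless.size(0),))
    a = alphas_cum[t].view(-1, 1, 1, 1)
    noise = torch.randn_like(x0_hairless)
    x_noisy = a.sqrt() * x0_hairless + (1 - a).sqrt() * noise   # forward noising
    return F.mse_loss(model(x_noisy, condition, t), noise)       # predict the noise

model = TinyDenoiser()
loss = diffusion_loss(model, torch.randn(2, 3, 64, 64), torch.randn(2, 3, 64, 64))
print(float(loss))
```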

2.
PLoS One ; 19(6): e0299666, 2024.
Article in English | MEDLINE | ID: mdl-38905163

ABSTRACT

Computer networks are vulnerable to numerous attacks, which pose significant threats to data security and the freedom of communication. This paper introduces a novel intrusion detection technique that diverges from traditional methods by leveraging Recurrent Neural Networks (RNNs) for both data preprocessing and feature extraction. The proposed process is based on the following steps: (1) training RNNs on the data, (2) extracting features from their hidden layers, and (3) applying various classification algorithms. This methodology offers significant advantages and differs markedly from existing intrusion detection practices. The effectiveness of our method is demonstrated through trials on the Network Security Laboratory (NSL) and Canadian Institute for Cybersecurity (CIC) 2017 datasets, where the application of RNNs for intrusion detection shows substantial practical implications. Specifically, we achieved accuracy scores of 99.6% with Decision Tree, Random Forest, and CatBoost classifiers on the NSL dataset, and accuracy scores of 99.8% and 99.9% on the CIC 2017 dataset. By first training RNNs on the data and extracting features from their hidden layers before applying classification algorithms, our approach marks a major shift in intrusion detection methodologies. This modification of the pipeline underscores the benefits of utilizing RNNs for feature extraction and data preprocessing, meeting the critical need to safeguard data security and communication freedom against ever-evolving network threats.
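A minimal sketch of the three-step pipeline described above, assuming a GRU-based RNN and scikit-learn classifiers; the hyperparameters, the treatment of each flow record as a length-1 sequence, and the variable names are illustrative assumptions rather than the paper's exact configuration.

```python
# Sketch: train an RNN on flow records, reuse its hidden layer as features,
# then fit a conventional classifier on those features.
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

class FlowRNN(nn.Module):
    def __init__(self, n_features, hidden=64, n_classes=2):
        super().__init__()
        self.rnn = nn.GRU(input_size=n_features, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x, return_features=False):
        # x: (batch, seq_len, n_features); each flow is treated as a length-1 sequence
        _, h = self.rnn(x)
        feats = h[-1]                       # hidden state used as learned features
        return feats if return_features else self.head(feats)

def extract_features(model, X):
    model.eval()
    with torch.no_grad():
        return model(torch.tensor(X, dtype=torch.float32).unsqueeze(1),
                     return_features=True).numpy()

# Usage sketch (X_train, y_train, X_test, y_test assumed preprocessed):
# 1) train FlowRNN end-to-end on the intrusion labels (training loop omitted)
# 2) extract hidden-layer features, 3) fit a classical classifier on them
# model = FlowRNN(n_features=X_train.shape[1])
# clf = RandomForestClassifier().fit(extract_features(model, X_train), y_train)
# print(accuracy_score(y_test, clf.predict(extract_features(model, X_test))))
```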


Subject(s)
Algorithms, Computer Security, Neural Networks (Computer), Humans, Computer Communication Networks
3.
Sci Rep ; 14(1): 10898, 2024 May 13.
Article in English | MEDLINE | ID: mdl-38740843

ABSTRACT

Distributed denial-of-service (DDoS) attacks persistently proliferate, impacting individuals and Internet Service Providers (ISPs). Deep learning (DL) models are paving the way to address these challenges and the dynamic nature of potential threats. Traditional detection systems, relying on signature-based techniques, are susceptible to next-generation malware. Integrating DL approaches in cloud-edge/federated servers enhances the resilience of these systems. In the Internet of Things (IoT) and autonomous networks, DL, particularly federated learning, has gained prominence for attack detection. Unlike conventional models (centralized and localized DL), federated learning does not require access to users' private data for attack detection. This approach is gaining much interest in academia and industry due to its deployment on local and global cloud-edge models. Recent advancements in DL enable training a quality cloud-edge model across various users (collaborators) without exchanging personal information. Federated learning, emphasizing privacy preservation at the cloud-edge terminal, holds significant potential for facilitating privacy-aware learning among collaborators. This paper addresses: (1) The deployment of an optimized deep neural network for network traffic classification. (2) The coordination of federated server model parameters with training across devices in IoT domains. A federated flowchart is proposed for training and aggregating local model updates. (3) The generation of a global model at the cloud-edge terminal after multiple rounds between domains and servers. (4) Experimental validation on the BoT-IoT dataset demonstrates that the federated learning model can reliably detect attacks with efficient classification, privacy, and confidentiality. Additionally, it requires minimal memory space for storing training data, resulting in minimal network delay. Consequently, the proposed framework outperforms both centralized and localized DL models, achieving superior performance.
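As a concrete illustration of the aggregation step at the cloud-edge server, the following is a minimal FedAvg-style sketch: local model parameters from several IoT domains are averaged, weighted by local sample counts. The weighting scheme and parameter layout are assumptions for illustration, not necessarily the paper's exact aggregation rule.

```python
# Minimal FedAvg-style aggregation sketch for a cloud-edge federated setup.
import numpy as np

def federated_average(client_params, client_sizes):
    """Aggregate per-client model parameters into a global model.

    client_params: list of parameter lists, one list of numpy arrays per client
    client_sizes:  number of local training samples per client
    """
    total = float(sum(client_sizes))
    global_params = []
    for layer_idx in range(len(client_params[0])):
        weighted = sum((n / total) * params[layer_idx]
                       for params, n in zip(client_params, client_sizes))
        global_params.append(weighted)
    return global_params

# Example: three IoT domains contributing two-layer models of the same shape.
clients = [[np.random.randn(4, 3), np.random.randn(3)] for _ in range(3)]
sizes = [1200, 800, 500]
global_model = federated_average(clients, sizes)
print([p.shape for p in global_model])  # [(4, 3), (3,)]
```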

4.
Heliyon ; 10(8): e29396, 2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38665569

ABSTRACT

Semantic segmentation of Remote Sensing (RS) images involves the classification of each pixel in a satellite image into distinct and non-overlapping regions or segments. This task is crucial in various domains, including land cover classification, autonomous driving, and scene understanding. While deep learning has shown promising results, there is limited research that specifically addresses the challenge of processing fine details in RS images while also considering the high computational demands. To tackle this issue, we propose a novel approach that combines convolutional and transformer architectures. Our design incorporates convolutional layers with a low receptive field to generate fine-grained feature maps for small objects in very high-resolution images. On the other hand, transformer blocks are utilized to capture contextual information from the input. By leveraging convolution and self-attention in this manner, we reduce the need for extensive downsampling and enable the network to work with full-resolution features, which is particularly beneficial for handling small objects. Additionally, our approach eliminates the requirement for vast datasets, which is often necessary for purely transformer-based networks. In our experimental results, we demonstrate the effectiveness of our method in generating local and contextual features using convolutional and transformer layers, respectively. Our approach achieves a mean Dice score of 80.41%, outperforming other well-known techniques such as UNet, the Fully Convolutional Network (FCN), the Pyramid Scene Parsing Network (PSPNet), and the recent Convolutional vision Transformer (CvT) model, which achieved mean Dice scores of 78.57%, 74.57%, 73.45%, and 62.97% respectively, under the same training conditions and using the same training dataset.
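A hedged PyTorch sketch of the design idea described above: small-kernel convolutions produce fine-grained, full-resolution features, and a self-attention block adds global context. Channel width, head count, and the single-block layout are illustrative assumptions, not the paper's architecture.

```python
# Conv + self-attention block: local detail from 3x3 convs, context from attention.
import torch
import torch.nn as nn

class ConvTransformerBlock(nn.Module):
    def __init__(self, channels=64, heads=4):
        super().__init__()
        # low receptive field: 3x3 convolutions, no downsampling
        self.local = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )
        self.attn = nn.MultiheadAttention(embed_dim=channels, num_heads=heads,
                                          batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x):
        # x: (B, C, H, W)
        local = self.local(x)
        b, c, h, w = local.shape
        tokens = local.flatten(2).transpose(1, 2)      # (B, H*W, C)
        ctx, _ = self.attn(tokens, tokens, tokens)     # global context over all pixels
        tokens = self.norm(tokens + ctx)
        return tokens.transpose(1, 2).reshape(b, c, h, w)

# Example on a small full-resolution tile of a remote-sensing image
x = torch.randn(1, 64, 32, 32)
print(ConvTransformerBlock()(x).shape)  # torch.Size([1, 64, 32, 32])
```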

5.
Heliyon ; 9(11): e21624, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37954270

ABSTRACT

Since the release of ChatGPT, numerous studies have highlighted the remarkable performance of ChatGPT, which often rivals or even surpasses human capabilities in various tasks and domains. However, this paper presents a contrasting perspective by demonstrating an instance where human performance excels in typical tasks suited for ChatGPT, specifically in the domain of computer programming. We utilize the IEEExtreme Challenge competition as a benchmark: a prestigious annual international programming contest encompassing a wide range of problems of varying complexity. To conduct a thorough evaluation, we selected and executed a diverse set of 102 challenges, drawn from five distinct IEEExtreme editions, using three major programming languages: Python, Java, and C++. Our empirical analysis provides evidence that, contrary to popular belief, human programmers maintain a competitive edge over ChatGPT in certain aspects of problem-solving within the programming context. In fact, we found that the average score obtained by ChatGPT on the set of IEEExtreme programming problems is 3.9 to 5.8 times lower than the average human score, depending on the programming language. This paper elaborates on these findings, offering critical insights into the limitations and potential areas of improvement for AI-based language models like ChatGPT.

6.
Sensors (Basel) ; 23(21)2023 Oct 30.
Article in English | MEDLINE | ID: mdl-37960536

ABSTRACT

Wireless Sensor Networks (WSNs) and the Internet of Things (IoT) have emerged as transformative technologies with the potential to revolutionize a wide range of industries such as environmental monitoring, agriculture, manufacturing, smart health, home automation, wildlife monitoring, and surveillance. Population expansion, climate change, and resource constraints all pose problems for modern IoT applications. To address these issues, the integration of WSNs and the IoT has emerged as a game-changing solution. For example, in agricultural environments, IoT-based WSNs have been utilized to monitor yield conditions and enable precision agriculture through different sensors. These sensors are used to boost productivity through intelligent agricultural decisions and to collect data on crop health, soil moisture, temperature, and irrigation. However, sensors have finite, non-rechargeable batteries and limited memory, which can negatively impact network performance. When a network is distributed over a vast area, the performance of WSN-assisted IoT suffers. As a result, building a stable and energy-efficient routing infrastructure that extends network lifetime is quite challenging. To address energy-related issues in scalable WSN-IoT environments for future IoT applications, this research proposes EEDC, an Energy Efficient Data Communication scheme that utilizes Region-based Hierarchical Clustering for Efficient Routing (RHCER), a multi-tier clustering framework for energy-aware routing decisions. The sensors deployed for IoT application data collection acquire important data and select cluster heads based on a multi-criteria decision function. Further, to ensure efficient long-distance communication along with an even load distribution across all network nodes, a subdivision technique is employed in each tier of the proposed framework. The proposed routing protocol aims to balance the network load and convert long-distance communication into shorter multi-hop communications, hence enhancing network lifetime. The performance of EEDC is compared to that of some existing energy-efficient protocols for various parameters. The simulation results show that the suggested methodology reduces energy usage in sensor nodes by almost 31% and improves the packet drop ratio by almost 38%.
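As an illustration of a multi-criteria decision function for cluster-head election, the following sketch scores candidate nodes from residual energy, distance to the base station, and neighbour count. The chosen criteria and weights are assumptions for illustration; the paper's exact decision function may differ.

```python
# Hedged sketch of multi-criteria cluster-head selection for a sensor region.
import numpy as np

def cluster_head_scores(energy, dist_to_bs, neighbours, weights=(0.5, 0.3, 0.2)):
    # normalise each criterion to [0, 1]; higher score => better cluster-head candidate
    e = energy / energy.max()
    d = 1.0 - dist_to_bs / dist_to_bs.max()   # closer to the base station is better
    n = neighbours / neighbours.max()
    w_e, w_d, w_n = weights
    return w_e * e + w_d * d + w_n * n

# Example region with five sensor nodes
energy = np.array([0.9, 0.4, 0.7, 0.95, 0.2])      # residual battery, normalised
dist = np.array([40.0, 10.0, 25.0, 60.0, 15.0])    # metres to the base station
neigh = np.array([6, 3, 8, 2, 5])                  # one-hop neighbours
scores = cluster_head_scores(energy, dist, neigh)
print("elected cluster head: node", int(np.argmax(scores)))
```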

7.
Sensors (Basel) ; 23(20)2023 Oct 10.
Article in English | MEDLINE | ID: mdl-37896456

ABSTRACT

Intrusion detection systems, also known as IDSs, are widely regarded as one of the most essential components of an organization's network security. This is because IDSs serve as the organization's first line of defense against several cyberattacks and are accountable for accurately detecting any possible network intrusions. Several implementations of IDSs accomplish the detection of potential threats through flow-based network traffic analysis. Traditional IDSs frequently struggle to provide accurate real-time intrusion detection while keeping up with the changing landscape of threats. Innovative methods used to improve IDSs' performance in network traffic analysis are urgently needed to overcome these drawbacks. In this study, we introduce a model called a deep neural decision forest (DNDF), which allows the enhancement of classification trees with the power of deep networks to learn data representations. We essentially utilized the CICIDS 2017 dataset for network traffic analysis and extended our experiments to evaluate the DNDF model's performance on two additional datasets: CICIDS 2018 and a custom network traffic dataset. Our findings showed that DNDF, a combination of deep neural networks and decision forests, outperformed reference approaches with a remarkable precision of 99.96% on the CICIDS 2017 dataset while creating latent representations in deep layers. This success can be attributed to improved feature representation, model optimization, and resilience to noisy and unbalanced input data, emphasizing DNDF's capabilities in intrusion detection and network security solutions.
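A simplified sketch of the deep neural decision forest idea: a neural network produces soft routing probabilities for the inner nodes of a decision tree, and predictions are mixtures over leaf class distributions. Using a single shallow tree and a small router network are simplifying assumptions made for brevity.

```python
# Soft decision tree routed by a neural network (single-tree DNDF sketch).
import torch
import torch.nn as nn

class SoftDecisionTree(nn.Module):
    def __init__(self, n_features, depth=3, n_classes=2):
        super().__init__()
        self.depth = depth
        n_inner, n_leaf = 2 ** depth - 1, 2 ** depth
        # deep network providing one routing logit per inner node
        self.router = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(), nn.Linear(64, n_inner)
        )
        self.leaf_logits = nn.Parameter(torch.zeros(n_leaf, n_classes))

    def forward(self, x):
        d = torch.sigmoid(self.router(x))                # P(go right) per inner node
        mu = torch.ones(x.size(0), 1, device=x.device)   # path probabilities
        begin = 0
        for level in range(self.depth):
            n_nodes = 2 ** level
            probs = d[:, begin:begin + n_nodes]
            # split every current path probability into left/right children
            mu = torch.stack([mu * (1 - probs), mu * probs], dim=2).flatten(1)
            begin += n_nodes
        leaves = torch.softmax(self.leaf_logits, dim=1)  # class distribution per leaf
        return mu @ leaves                               # mixture over the leaves

x = torch.randn(4, 20)                                   # 4 flows, 20 features
print(SoftDecisionTree(20)(x).sum(dim=1))                # each row sums to 1
```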

8.
Article in English | MEDLINE | ID: mdl-37792659

ABSTRACT

In the Internet of Medical Things (IoMT), de novo peptide sequencing prediction is one of the most important techniques for the fields of disease prediction, diagnosis, and treatment. Recently, deep-learning-based peptide sequencing prediction has become a new trend. However, most popular deep learning models for peptide sequencing prediction suffer from poor interpretability and a poor ability to capture long-range dependencies. To solve these issues, we propose a model named SeqNovo, which has the encoding-decoding structure of sequence-to-sequence (Seq2Seq) models, the highly nonlinear properties of the multilayer perceptron (MLP), and the ability of the attention mechanism to capture long-range dependencies. SeqNovo uses the MLP to improve feature extraction and utilizes the attention mechanism to discover key information. A series of experiments has been conducted to show that SeqNovo is superior to the Seq2Seq benchmark model, DeepNovo. SeqNovo improves both the accuracy and interpretability of the predictions, which is expected to support further related research.
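A minimal sketch of scaled dot-product attention, the mechanism SeqNovo relies on to capture long-range dependencies. The NumPy implementation and the shapes are illustrative; the paper's full encoder-decoder wiring and MLP blocks are not reproduced here.

```python
# Scaled dot-product attention: queries attend over keys, returning weighted values.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    scores = Q @ K.T / np.sqrt(K.shape[-1])                 # query/key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)          # softmax over key positions
    return weights @ V, weights

# 5 decoder positions attending over 8 encoder positions, 16-dim representations
Q, K, V = np.random.randn(5, 16), np.random.randn(8, 16), np.random.randn(8, 16)
context, attn = scaled_dot_product_attention(Q, K, V)
print(context.shape, attn.sum(axis=-1))   # (5, 16), each attention row sums to 1
```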

9.
Sensors (Basel) ; 23(19)2023 Oct 09.
Article in English | MEDLINE | ID: mdl-37837162

ABSTRACT

A comparison of low-rank-based learning models for multi-label categorization of attacks in intrusion detection datasets is presented in this work. In particular, we investigate the performance of three low-rank-based models for classifying intrusion detection data: a machine learning model (LR-SVM) and two deep learning models (LR-CNN and LR-CNN-MLP), built on Low Rank Representation (LRR) and Non-negative Low Rank Representation (NLR). We also examine how these models' performance is affected by hyperparameter tuning using Gaussian Bayesian optimization. The tests were run on a merger of two publicly available intrusion detection datasets, BoT-IoT and UNSW-NB15, and assess the models' performance in terms of key evaluation criteria, including precision, recall, F1 score, and accuracy. Notably, all three models perform noticeably better after hyperparameter tuning. The selection of low-rank-based learning models and the significance of hyperparameter tuning for multi-label classification of intrusion detection data are discussed in this work. A hybrid security dataset is used with low-rank factorization in addition to SVM, CNN, and CNN-MLP. The desired multi-label results were obtained by considering binary as well as multi-class attack classification. Low-rank CNN-MLP achieved suitable results in multi-label classification of attacks. In addition, a Gaussian-based Bayesian optimization algorithm is used with CNN-MLP for hyperparameter tuning, and the desired results were achieved using C and γ for SVM and α and β for CNN and CNN-MLP on the hybrid dataset. The results show that the UDP label is shared among the analysis, DoS, and shellcode classes. The accuracy of classifying UDP among these three classes is 98.54%.
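A hedged sketch of the low-rank-plus-SVM pipeline: truncated SVD is used here as a simple stand-in for Low Rank Representation, and a plain grid search over C and gamma stands in for Gaussian-based Bayesian optimisation; both substitutions, and the parameter ranges, are assumptions made for brevity.

```python
# Low-rank projection of flow features followed by an RBF SVM with tuned C and gamma.
from sklearn.decomposition import TruncatedSVD
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("lowrank", TruncatedSVD(n_components=20)),   # low-rank projection of the features
    ("svm", SVC(kernel="rbf")),
])
param_grid = {"svm__C": [0.1, 1, 10], "svm__gamma": ["scale", 0.01, 0.001]}
search = GridSearchCV(pipeline, param_grid, cv=3, scoring="f1_macro")

# Usage sketch on a merged BoT-IoT / UNSW-NB15 feature matrix (X, y assumed):
# search.fit(X_train, y_train)
# print(search.best_params_, search.score(X_test, y_test))
```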

10.
Sensors (Basel) ; 23(10)2023 May 11.
Article in English | MEDLINE | ID: mdl-37430585

ABSTRACT

Having access to safe water and using it properly is crucial for human well-being, sustainable development, and environmental conservation. Nonetheless, the increasing disparity between human demands and natural freshwater resources is causing water scarcity, negatively impacting agricultural and industrial efficiency, and giving rise to numerous social and economic issues. Understanding and managing the causes of water scarcity and water quality degradation are essential steps toward more sustainable water management and use. In this context, continuous Internet of Things (IoT)-based water measurements are becoming increasingly crucial in environmental monitoring. However, these measurements are plagued by uncertainty issues that, if not handled correctly, can introduce bias and inaccuracy into our analysis, decision-making processes, and results. To cope with uncertainty issues related to sensed water data, we propose combining network representation learning with uncertainty handling methods to ensure rigorous and efficient modeling and management of water resources. The proposed approach involves accounting for uncertainties in the water information system by leveraging probabilistic techniques and network representation learning. It creates a probabilistic embedding of the network, enabling the classification of uncertain representations of water information entities, and applies evidence theory to enable decision making that is aware of uncertainties, ultimately choosing appropriate management strategies for affected water areas.
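As an illustration of the evidence-theory step mentioned above, the following sketch applies Dempster's rule of combination to fuse two uncertain sources of evidence about a water area. The frame of discernment and the mass values are hypothetical examples, not values from the paper.

```python
# Dempster's rule of combination over a two-element frame {"safe", "degraded"}.
def dempster_combine(m1, m2):
    combined, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if not inter:
                conflict += wa * wb            # conflicting evidence is discarded
            else:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
    # normalise the remaining mass by the non-conflicting total
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

frame = frozenset({"safe", "degraded"})
sensor = {frozenset({"safe"}): 0.6, frame: 0.4}         # evidence from sensed data
history = {frozenset({"degraded"}): 0.3, frame: 0.7}    # evidence from historical records
print(dempster_combine(sensor, history))
```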

11.
Sensors (Basel) ; 23(5)2023 Feb 21.
Article in English | MEDLINE | ID: mdl-36904589

ABSTRACT

The Vision Transformer (ViT) architecture has been remarkably successful in image restoration. For a while, Convolutional Neural Networks (CNNs) predominated in most computer vision tasks. Now, both CNNs and ViT are efficient approaches that demonstrate powerful capabilities to restore a better version of an image given in a low-quality format. In this study, the efficiency of ViT in image restoration is studied extensively. The ViT architectures are classified for every task of image restoration. Seven image restoration tasks are considered: Image Super-Resolution, Image Denoising, General Image Enhancement, JPEG Compression Artifact Reduction, Image Deblurring, Removing Adverse Weather Conditions, and Image Dehazing. The outcomes, advantages, limitations, and possible areas for future research are detailed. Overall, it is noted that incorporating ViT into new architectures for image restoration is becoming the rule. This is due to several advantages over CNNs, such as better efficiency, especially when more data are fed to the network, robustness in feature extraction, and a feature learning approach that better captures the variances and characteristics of the input. Nevertheless, some drawbacks exist, such as the need for more data to show the benefits of ViT over CNNs, the increased computational cost due to the complexity of the self-attention block, a more challenging training process, and the lack of interpretability. These drawbacks represent future research directions that should be targeted to increase the efficiency of ViT in the image restoration domain.

12.
Sensors (Basel) ; 23(5)2023 Feb 22.
Article in English | MEDLINE | ID: mdl-36904630

ABSTRACT

In Internet of Things (IoT) applications, where many devices are connected for a specific purpose, data is continuously collected, communicated, processed, and stored between the nodes. However, all connected nodes operate under strict constraints, such as battery usage, communication throughput, processing power, processing load, and storage limitations. The high number of constraints and nodes renders standard methods of managing them ineffective, so using machine learning approaches to manage them better is attractive. In this study, a new framework for the data management of IoT applications is designed and implemented. The framework, called MLADCF (Machine Learning Analytics-based Data Classification Framework), is a two-stage framework that combines a regression model and a Hybrid Resource Constrained KNN (HRCKNN). It learns from the analytics of real scenarios of the IoT application. The framework's parameters, the training procedure, and its application in real scenarios are described in detail. MLADCF has shown its efficiency in tests on four different datasets, in comparison with existing approaches. Moreover, it reduced the global energy consumption of the network, leading to an extended battery life of the connected nodes.

13.
Sensors (Basel) ; 23(4)2023 Feb 13.
Article in English | MEDLINE | ID: mdl-36850714

ABSTRACT

Video-streaming-based real-time vehicle identification and license plate recognition systems are challenging to design and deploy in terms of real-time processing on the edge, dealing with low image resolution, high noise, and identification accuracy. This paper addresses these issues by introducing a novel multi-stage, real-time, deep learning-based vehicle identification and license plate recognition system. The system is based on a set of algorithms that efficiently integrate two object detectors, an image classifier, and a multi-object tracker to recognize car models and license plates. The information redundancy of the Arabic and English characters on Saudi license plates is leveraged to boost license plate recognition accuracy while satisfying real-time inference performance. The system achieves real-time performance on edge GPU devices and maximizes the models' accuracy by taking advantage of the temporally redundant information in the video stream's frames. The edge device sends a notification of the detected vehicle and its license plate to the cloud only once, after completing the processing. The system was experimentally evaluated on vehicles and license plates in real-world unconstrained environments at several parking entrance gates. It achieves 17.1 FPS on a Jetson Xavier AGX edge device with no delay. A comparison between the accuracy on the videos and on static images extracted from them shows that processing video streams with the proposed system improves the relative accuracy of car model and license plate recognition by 13% and 40%, respectively. This research work won two awards in 2021 and 2022.

14.
Complex Intell Systems ; 9(1): 1027-1058, 2023.
Article in English | MEDLINE | ID: mdl-35668731

ABSTRACT

Extensive research has been conducted on healthcare technology and service advancements during the last decade. The Internet of Medical Things (IoMT) has demonstrated the ability to connect various medical apparatus, sensors, and healthcare specialists to ensure the best medical treatment in a distant location. Patient safety has improved, healthcare costs have decreased dramatically, healthcare services have become more accessible, and the operational efficiency of the healthcare industry has increased. This paper offers a recent review of current and future healthcare applications, security, market trends, and IoMT-based technology implementation. It analyses the advancement of IoMT implementation in addressing various healthcare concerns from the perspectives of enabling technologies, healthcare applications, and services. The potential obstacles and issues of the IoMT system are also discussed. Finally, the survey includes a comprehensive overview of the different disciplines of IoMT to empower future researchers, who are eager to work on and make advances in the field, to obtain a better understanding of the domain.

16.
J Healthc Eng ; 2022: 7745132, 2022.
Article in English | MEDLINE | ID: mdl-36397885

ABSTRACT

With the advancement of camera and wireless technologies, surveillance-camera-based occupancy monitoring has received ample attention from the research community. However, camera-based occupancy monitoring and wireless channels, especially Wi-Fi hotspots, pose serious privacy concerns and cybersecurity threats. Eavesdroppers can easily access confidential multimedia information, and the privacy of individuals can be compromised. As a solution, cryptographers have proposed novel encryption techniques for concealing multimedia data. Due to bandwidth limitations and computational complexity, traditional encryption methods are not applicable to multimedia data. In traditional encryption methods such as the Advanced Encryption Standard (AES) and the Data Encryption Standard (DES), once multimedia data are compressed during encryption, correct decryption is a challenging task. In order to utilize the available bandwidth efficiently, a novel secure video occupancy monitoring method combining encryption and compression has been developed and is reported in this paper. The interesting properties of the Chebyshev map, the intertwining map, the logistic map, and orthogonal matrices are exploited during the block permutation, substitution, and diffusion processes. Real-time simulation and performance results show that the proposed scheme is highly sensitive to the initial seed parameters. In comparison with other traditional schemes, the proposed encryption system is secure, efficient, and robust for data encryption. Security parameters such as the correlation coefficient, entropy, contrast, energy, and a larger key space demonstrate the robustness and efficiency of the proposed solution.
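A toy sketch of the chaotic-map idea behind such schemes: a logistic map generates a key-dependent sequence that both permutes pixel positions and provides an XOR keystream for diffusion. The parameters and single-map design are simplified assumptions; the paper's full Chebyshev/intertwining construction is not reproduced.

```python
# Logistic-map-driven permutation + XOR diffusion on a toy 8x8 video block.
import numpy as np

def logistic_sequence(x0, r, n):
    seq, x = np.empty(n), x0
    for i in range(n):
        x = r * x * (1 - x)                  # logistic map, chaotic for r near 4
        seq[i] = x
    return seq

def chaos_keys(x0, r, n):
    chaos = logistic_sequence(x0, r, n)
    perm = np.argsort(chaos)                 # key-dependent pixel permutation
    keystream = (chaos * 255).astype(np.uint8)
    return perm, keystream

def encrypt(frame, x0=0.3141, r=3.99):
    perm, keystream = chaos_keys(x0, r, frame.size)
    return frame.flatten()[perm] ^ keystream        # permute, then diffuse with XOR

def decrypt(cipher, shape, x0=0.3141, r=3.99):
    perm, keystream = chaos_keys(x0, r, cipher.size)
    flat = np.empty_like(cipher)
    flat[perm] = cipher ^ keystream                 # undo diffusion, then permutation
    return flat.reshape(shape)

frame = np.random.randint(0, 256, size=(8, 8), dtype=np.uint8)
assert np.array_equal(decrypt(encrypt(frame), frame.shape), frame)
print("round-trip OK")                              # wrong seed would fail to decrypt
```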


Subject(s)
Algorithms, Data Compression, Humans, Data Compression/methods, Computer Security, Confidentiality, Wireless Technology
17.
Sensors (Basel) ; 22(19)2022 Oct 02.
Article in English | MEDLINE | ID: mdl-36236573

ABSTRACT

Academics and the health community are paying much attention to developing smart remote patient monitoring, sensors, and healthcare technology. For the analysis of medical scans, various studies integrate sophisticated deep learning strategies. A smart monitoring system is needed as a proactive diagnostic solution that may be employed in an epidemiological scenario such as COVID-19. Consequently, this work offers an intelligent medicare system: an IoT-empowered, deep learning-based decision support system (DSS) for the automated detection and categorization of infectious diseases (COVID-19 and pneumothorax). The proposed DSS was evaluated using three independent standard chest X-ray datasets. The suggested DSS predictor has been used to identify and classify areas on whole X-ray scans with abnormalities thought to be attributable to COVID-19, reaching an identification and classification accuracy of 89.58% for normal images and 89.13% for COVID-19 and pneumothorax. With the suggested DSS, a judgment on an individual chest X-ray scan can be made in approximately 0.01 s. As a result, the DSS described in this study can make predictions at a rate of roughly 95 frames per second (FPS) for both models, which is close to real time.


Subject(s)
COVID-19, Pneumothorax, Aged, COVID-19/diagnostic imaging, COVID-19 Testing, Humans, Lung, Medicare, United States, X-Rays
18.
Entropy (Basel) ; 24(6)2022 Jun 08.
Article in English | MEDLINE | ID: mdl-35741521

ABSTRACT

A brain tumour is one of the major causes of death in humans, and it is the tenth most common type of tumour, affecting people of all ages. However, if detected early, it is one of the most treatable types of tumours. Brain tumours are classified using biopsy, which is not usually performed before definitive brain surgery. An image classification technique for tumour diseases is important for accelerating the treatment process and avoiding surgery and errors from manual diagnosis by radiologists. Advances in technology and machine learning (ML) can assist radiologists in tumour diagnostics using magnetic resonance imaging (MRI) images without invasive procedures. This work introduces a new hybrid CNN-based architecture to classify three brain tumour types from MRI images. The method suggested in this paper uses hybrid deep learning classification based on CNN with two methods. The first method combines a pre-trained Google-Net model of the CNN algorithm for feature extraction with an SVM for pattern classification. The second method integrates a finely tuned Google-Net with a softmax classifier. The proposed approach was evaluated on MRI brain images comprising a total of 1426 glioma images, 708 meningioma images, 930 pituitary tumour images, and 396 normal brain images. The reported results show that an accuracy of 93.1% was achieved with the finely tuned Google-Net model, while the synergy of Google-Net as a feature extractor with an SVM classifier improved recognition accuracy to 98.1%.
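A hedged sketch of the first hybrid described above: a pre-trained GoogLeNet backbone acts as a fixed feature extractor and an SVM performs the classification. The torchvision weight handle and the assumed 224x224 preprocessing are illustrative; the paper's exact fine-tuning settings are not reproduced.

```python
# GoogLeNet features (final fc removed) feeding an SVM classifier.
import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import SVC

# ImageNet-pretrained GoogLeNet; downloading the weights requires network access
backbone = models.googlenet(weights=models.GoogLeNet_Weights.DEFAULT)
backbone.fc = nn.Identity()          # expose the 1024-d feature vector
backbone.eval()

def extract_features(batch):
    # batch: (N, 3, 224, 224) tensor of preprocessed MRI slices
    with torch.no_grad():
        return backbone(batch).numpy()

# Usage sketch (train_images, train_labels, test_images assumed preprocessed):
# svm = SVC(kernel="rbf").fit(extract_features(train_images), train_labels)
# preds = svm.predict(extract_features(test_images))
```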

19.
Entropy (Basel) ; 24(4)2022 Apr 10.
Article in English | MEDLINE | ID: mdl-35455196

ABSTRACT

Pristine and trustworthy data are required for efficient computer modelling for medical decision-making, yet data in medical care are frequently missing. As a result, missing values may occur not just in training data but also in testing data, which might contain a single undiagnosed episode or participant. This study evaluates different imputation and regression procedures, identified on the basis of regressor performance and computational expense, to address missing values in both training and testing datasets. In the context of healthcare, several procedures have been introduced for dealing with missing values; however, there is still debate about which imputation strategies are better in specific cases. This research proposes an ensemble imputation model that is trained to use a combination of simple mean imputation, k-nearest neighbour imputation, and iterative imputation methods, and then selects the ideal imputation strategy among them based on the attribute correlations of the features with missing values. We introduce a unique Ensemble Strategy for Missing Values to analyse healthcare data with considerable missing values and to identify unbiased and accurate statistical prediction models. The performance metrics have been generated using the eXtreme gradient boosting regressor, random forest regressor, and support vector regressor. The study uses real-world healthcare data to conduct experiments and simulations on data with varying feature-wise missing frequencies, indicating that the proposed technique surpasses standard missing-value imputation approaches, as well as the approach of dropping records that contain missing values, in terms of accuracy.
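A hedged sketch of the per-feature imputer selection described above: features whose values correlate strongly with other attributes receive the iterative (model-based) imputer, moderately correlated features receive KNN imputation, and weakly correlated ones fall back to the mean. The two correlation thresholds and the toy data are illustrative assumptions, not the paper's tuned values.

```python
# Choose mean / KNN / iterative imputation per feature from attribute correlations.
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import SimpleImputer, KNNImputer, IterativeImputer

def ensemble_impute(df, hi=0.6, lo=0.3):
    corr = df.corr().abs()
    mask = ~np.eye(len(corr), dtype=bool)
    strength = corr.where(mask).max()        # strongest off-diagonal correlation per feature
    imputers = {
        "iterative": IterativeImputer(random_state=0),
        "knn": KNNImputer(n_neighbors=2),
        "mean": SimpleImputer(strategy="mean"),
    }
    filled = {name: pd.DataFrame(imp.fit_transform(df), columns=df.columns, index=df.index)
              for name, imp in imputers.items()}
    out = df.copy()
    for col in df.columns:
        choice = "iterative" if strength[col] >= hi else "knn" if strength[col] >= lo else "mean"
        out[col] = out[col].fillna(filled[choice][col])
    return out

# Toy patient records with artificially missing vitals
df = pd.DataFrame({"age": [63, 71, np.nan, 54],
                   "sbp": [140, np.nan, 150, 120],
                   "hr": [80, 88, 85, np.nan]})
print(ensemble_impute(df))
```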

20.
Front Public Health ; 10: 860536, 2022.
Article in English | MEDLINE | ID: mdl-35372217

ABSTRACT

The Internet of Things (IoT) involves a set of devices that aid in achieving a smart environment. IoT-oriented healthcare systems provide monitoring services for patients' data and help take immediate steps in an emergency. Currently, machine learning-based techniques are adopted to ensure security and other non-functional requirements in smart healthcare systems. However, no attention has been given to classifying non-functional requirements from requirement documents. The manual process of classifying non-functional requirements from documents is erroneous and laborious. Missing non-functional requirements in the Requirements Engineering (RE) phase results in IoT-oriented healthcare systems with compromised security and performance. In this research, an experiment is performed in which non-functional requirements are classified from an IoT-oriented healthcare system's requirement document. The machine learning algorithms considered for classification are Logistic Regression (LR), Support Vector Machine (SVM), Multinomial Naive Bayes (MNB), K-Nearest Neighbors (KNN), ensemble, Random Forest (RF), and a hybrid KNN rule-based machine learning (ML) algorithm. The results show that our novel hybrid KNN rule-based machine learning algorithm outperforms the others, with an average classification accuracy of 75.9% in classifying non-functional requirements from IoT-oriented healthcare requirement documents. This research is novel not only in its concept of using a machine learning approach for the classification of non-functional requirements from IoT-oriented healthcare system requirement documents, but also in proposing a novel hybrid KNN rule-based machine learning algorithm for classification with better accuracy. A new dataset is also created for classification purposes, comprising requirements related to IoT-oriented healthcare systems. However, since this dataset is small and consists of only 104 requirements, this might affect the generalizability of the results of this research.
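A minimal sketch of classifying requirement sentences into non-functional categories with TF-IDF features and a KNN classifier, the base learner of the hybrid approach above. The example requirements and labels are invented for illustration, and the rule-based post-processing step is omitted.

```python
# TF-IDF text features + KNN for non-functional requirement classification.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

requirements = [
    "Patient vitals shall be encrypted before transmission to the cloud.",
    "The dashboard shall display heart-rate alerts within two seconds.",
    "Only authenticated clinicians may access monitoring records.",
    "The system shall support one thousand concurrent sensor streams.",
]
labels = ["security", "performance", "security", "performance"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    KNeighborsClassifier(n_neighbors=3))
clf.fit(requirements, labels)
print(clf.predict(["All health data at rest shall be stored in encrypted form."]))
```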


Subject(s)
Documentation/standards, Internet of Things, Bayes Theorem, Delivery of Health Care, Humans, Machine Learning