ABSTRACT
Software reliability is prioritised as the most critical quality attribute. Reliability prediction models contribute to the prevention of software failures, which can have disastrous consequences in safety-critical applications or even in business. Predicting reliability during design allows software developers to avoid potential design problems that, if discovered at later stages of the software development life-cycle, can otherwise require reconstructing an entire system. Several reliability models have been built to predict reliability during software development, yet several issues remain. Current models suffer from a scalability problem when modeling large systems, and existing scalability solutions usually come at a high computational cost. Secondly, the nature of concurrent applications is rarely considered in reliability prediction. We propose a reliability prediction model that enhances scalability by introducing a system-level scenario synthesis mechanism that mitigates complexity. Additionally, the proposed model supports modeling the nature of concurrent applications by adapting formal statistical distributions for scenario combination. The proposed model was evaluated using sensor-based case studies. The experimental results show the effectiveness of the proposed model in terms of computational cost reduction compared to similar models; this reduction is the main driver of the scalability enhancement. In addition, the presented work enables system developers to determine the load up to which their system remains reliable by observing the reliability value across several running scenarios.
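To make the scenario-combination idea concrete, the following minimal Python sketch combines per-scenario reliability estimates into a system-level value by weighting each scenario with its probability of occurrence under a given load profile. The function name, the weighting scheme, and the numbers are illustrative assumptions, not the authors' exact formulation.

    # Illustrative sketch (not the paper's exact model): combine per-scenario
    # reliability estimates into a system-level value, weighting each scenario
    # by its probability of occurrence under a given load profile.
    def system_reliability(scenarios):
        """scenarios: list of (occurrence_probability, scenario_reliability)."""
        total_p = sum(p for p, _ in scenarios)
        # Expected reliability over the scenario mix (probabilities normalised).
        return sum((p / total_p) * r for p, r in scenarios)

    # Example: three concurrent usage scenarios observed under increasing load.
    profile = [(0.6, 0.999), (0.3, 0.995), (0.1, 0.980)]
    print(round(system_reliability(profile), 4))  # -> 0.9959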
Subjects
Software, Case-Control Studies, Reproducibility of Results
ABSTRACT
The Internet of Things (IoT) is defined as interconnected digital and mechanical devices with intelligent and interactive data-transmission features over a defined network. The ability of the IoT to collect, analyze, and mine data into information and knowledge motivates its integration with grid and cloud computing. New job-scheduling techniques are crucial for the effective integration and management of IoT with grid computing, as they provide optimal computational solutions. The computational grid is a modern technology that enables distributed computing to take advantage of an organization's resources in order to handle complex computational problems. However, the scheduling process is considered an NP-hard problem due to the heterogeneity of resources and management systems in the IoT grid. This paper proposes a Greedy Firefly Algorithm (GFA) for job scheduling in the grid environment. In the proposed algorithm, a greedy method is utilized as a local search mechanism to enhance the rate of convergence and the efficiency of schedules produced by the standard firefly algorithm. Several experiments were conducted using the GridSim toolkit to evaluate the proposed algorithm's performance. The study measured several sizes of real grid-computing workload traces: lightweight traces with only 500 jobs, typical traces with 3,000 to 7,000 jobs, and heavy-load traces containing 8,000 to 10,000 jobs. The experimental results revealed that the greedy firefly algorithm significantly reduces the makespan and execution times of the IoT grid scheduling process compared to the other evaluated scheduling methods. Furthermore, the proposed greedy firefly algorithm converges faster on large search spaces, making it suitable for large-scale IoT grid environments.
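The following minimal Python sketch illustrates the kind of greedy local-search refinement described above: each job is greedily moved to whichever machine lowers the schedule's makespan the most. The schedule representation, job lengths, and machine speeds are illustrative assumptions, not GFA's exact implementation or the GridSim workload traces.

    # Illustrative greedy local-search step for grid job scheduling (an
    # assumption about how GFA's greedy refinement might look, not the
    # authors' exact algorithm). A schedule maps each job to a machine index.
    def makespan(schedule, job_lengths, machine_speeds):
        """Completion time of the busiest machine."""
        loads = [0.0] * len(machine_speeds)
        for job, machine in enumerate(schedule):
            loads[machine] += job_lengths[job] / machine_speeds[machine]
        return max(loads)

    def greedy_refine(schedule, job_lengths, machine_speeds):
        """Greedily move each job to the machine that lowers the makespan most."""
        best = list(schedule)
        for job in range(len(best)):
            for machine in range(len(machine_speeds)):
                candidate = list(best)
                candidate[job] = machine
                if makespan(candidate, job_lengths, machine_speeds) < \
                        makespan(best, job_lengths, machine_speeds):
                    best = candidate
        return best

    jobs = [400, 250, 700, 120]      # job lengths (e.g. millions of instructions)
    machines = [1000, 500]           # machine speeds (e.g. MIPS)
    initial = [0, 0, 0, 0]           # all jobs on machine 0 (a poor schedule)
    print(makespan(greedy_refine(initial, jobs, machines), jobs, machines))

In the full algorithm, such a greedy pass would refine the candidate schedules produced by each firefly movement step rather than a single fixed schedule.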
ABSTRACT
High end-to-end delay is a significant challenge in the data collection process in the underwater environment. Autonomous Underwater Vehicles (AUVs) are a considerably reliable means of data collection, provided their trajectory movement is well designed. Therefore, in this paper, a new routing algorithm known as Elliptical Shaped Efficient Data Gathering (ESEDG) is introduced for AUV movement. ESEDG is divided into two phases: in the first phase, an elliptical trajectory is designed for the horizontal movement of the AUV; in the second phase, the AUV gathers data from Gateway Nodes (GNs), which are associated with Member Nodes (MNs). For their association, an end-to-end delay model is also presented in ESEDG. The data-collection hierarchy is as follows: MNs send data to GNs, and the AUV receives data from GNs and forwards it to the sink node. Furthermore, ESEDG was evaluated in the NS-3 network simulator (version 3.35), and the results were compared to the existing data collection routing protocols DSG-DGA, AEEDCO, AEEDCO-A, ALP, SEDG, and AEDG. In terms of network throughput, end-to-end delay, lifetime, path loss, and energy consumption, the results showed that ESEDG outperformed the baseline routing protocols.
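As a small illustration of the elliptical-trajectory idea, the Python sketch below samples waypoints on an ellipse for the AUV's horizontal sweep. The centre, axis lengths, and number of waypoints are illustrative assumptions, not ESEDG's actual trajectory parameters.

    # Illustrative sketch (parameters are assumptions, not ESEDG's exact design):
    # sample waypoints on an elliptical path for the AUV's horizontal movement.
    import math

    def elliptical_waypoints(cx, cy, a, b, n_points):
        """Return n_points (x, y) waypoints on an ellipse centred at (cx, cy)
        with semi-major axis a and semi-minor axis b."""
        return [(cx + a * math.cos(2 * math.pi * k / n_points),
                 cy + b * math.sin(2 * math.pi * k / n_points))
                for k in range(n_points)]

    # Example: a 600 m x 300 m ellipse sampled at 12 gateway-node visiting points.
    for x, y in elliptical_waypoints(0.0, 0.0, 300.0, 150.0, 12):
        print(f"{x:8.1f} {y:8.1f}")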
ABSTRACT
Privacy-preserving techniques allow private information to be used without compromising privacy. Most encryption algorithms, such as the Advanced Encryption Standard (AES) algorithm, cannot perform computational operations on encrypted data without first applying the decryption process. Homomorphic encryption algorithms provide innovative solutions to support computations on encrypted data while preserving the content of private information. However, these algorithms have some limitations, such as computational cost and the need for modifications for each case study. In this paper, we present a comprehensive overview of various homomorphic encryption tools for Big Data analysis and their applications. We also discuss a security framework for Big Data analysis while preserving privacy using homomorphic encryption algorithms. We highlight the fundamental features and tradeoffs that should be considered when choosing the right approach for Big Data applications in practice. We then present a comparison of popular current homomorphic encryption tools with respect to these identified characteristics. We examine the implementation results of various homomorphic encryption toolkits and compare their performances. Finally, we highlight some important issues and research opportunities. We aim to anticipate how homomorphic encryption technology will be useful for secure Big Data processing, especially to improve the utility and performance of privacy-preserving machine learning.
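As a minimal illustration of computing on encrypted data, the sketch below uses the open-source python-paillier ("phe") library, whose Paillier scheme is additively homomorphic: sums and plaintext-scalar products are computed directly on ciphertexts and only the final result is decrypted. This is one small example of the kind of tool surveyed, not a full Big Data pipeline, and the values are placeholders.

    # Minimal additive-homomorphic example using the python-paillier ('phe')
    # library: sums are computed on ciphertexts; only the result is decrypted.
    from phe import paillier

    public_key, private_key = paillier.generate_paillier_keypair()

    salaries = [5200, 4800, 6100]                      # private values
    encrypted = [public_key.encrypt(s) for s in salaries]

    # Aggregation happens entirely on encrypted data (e.g. on an untrusted server).
    encrypted_total = sum(encrypted[1:], encrypted[0])
    encrypted_scaled = encrypted_total * 12            # ciphertext * plaintext scalar

    print(private_key.decrypt(encrypted_total))        # 16100
    print(private_key.decrypt(encrypted_scaled))       # 193200

Fully homomorphic schemes (e.g. those based on CKKS or BFV) additionally support multiplication between ciphertexts, at a higher computational cost.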
ABSTRACT
Abnormal heart conduction, known as arrhythmia, can contribute to cardiac diseases that carry the risk of fatal consequences. Healthcare professionals typically use electrocardiogram (ECG) signals and certain preliminary tests to identify abnormal patterns in a patient's cardiac activity. To assess the overall cardiac health condition, cardiac specialists monitor these activities separately. This procedure can be arduous and time-intensive, potentially impacting the patient's well-being. This study introduces a novel automated solution for predicting cardiac health conditions, specifically identifying cardiac morbidity and arrhythmia in patients, using invasive and non-invasive measurements. The experimental analyses conducted in medical studies involve extremely sensitive data, and any partial or biased diagnoses in this field are unacceptable. Therefore, this research introduces a new concept for determining the uncertainty level of machine learning algorithms using information entropy. Information entropy can serve as a distinctive performance evaluator of a machine learning algorithm, one that has not previously been adopted in studies within the realm of bio-computational research. The experiments were conducted on arrhythmia and heart disease datasets collected from the Massachusetts Institute of Technology-Beth Israel Hospital arrhythmia database (DB-1) and the Cleveland Heart Disease database (DB-2), respectively. Our framework consists of four significant steps: 1) data acquisition, 2) feature preprocessing, 3) implementation of learning algorithms, and 4) information entropy. The results demonstrate the average accuracy achieved by the classification algorithms: Neural Network (NN) 99.74%, K-Nearest Neighbor (KNN) 98.98%, Support Vector Machine (SVM) 99.37%, Random Forest (RF) 99.76%, and Naïve Bayes (NB) 98.66%. We believe this study paves the way for further research, offering a framework for identifying cardiac health conditions through machine learning techniques.
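The uncertainty idea can be illustrated with a short Python sketch: the Shannon entropy of a classifier's predicted class probabilities is computed per sample, with low entropy indicating a confident prediction and high entropy indicating uncertainty. The model and data below are placeholders, not the DB-1/DB-2 pipeline of the study.

    # Illustrative sketch: Shannon entropy of predicted class probabilities as an
    # uncertainty score for a classifier (placeholder data, not DB-1/DB-2).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 8))                 # stand-in feature matrix
    y = (X[:, 0] + X[:, 1] > 0).astype(int)       # stand-in labels

    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    proba = model.predict_proba(X)                # per-sample class probabilities

    def shannon_entropy(p, eps=1e-12):
        """Entropy in bits for each row of class probabilities."""
        return -np.sum(p * np.log2(p + eps), axis=1)

    uncertainty = shannon_entropy(proba)
    print("mean prediction entropy (bits):", uncertainty.mean())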
Subjects
Cardiac Arrhythmias, Electrocardiography, Machine Learning, Humans, Electrocardiography/methods, Cardiac Arrhythmias/diagnosis, Algorithms, Physiological Monitoring/methods, Heart Diseases/diagnosis
ABSTRACT
Brain tumors have become one of the leading causes of death worldwide in recent years, affecting many individuals annually. Brain tumors are characterized by the abnormal or irregular growth of brain tissue that can spread to nearby tissues and eventually throughout the brain. Although several traditional machine learning and deep learning techniques have been developed for detecting and classifying brain tumors, they do not always provide an accurate and timely diagnosis. This study proposes a conditional generative adversarial network (CGAN) that leverages the fine-tuning of a convolutional neural network (CNN) to achieve more precise detection of brain tumors. The CGAN comprises two parts, a generator and a discriminator, whose outputs are used as inputs for fine-tuning the CNN model. Experiments were conducted on two publicly available brain tumor MRI datasets from Kaggle (Dataset 1 and Dataset 2). Precision, specificity, sensitivity, F1-score, and accuracy were used to evaluate the results. Compared to existing techniques, the proposed CGAN model achieved an accuracy of 0.93 for Dataset 1 and 0.97 for Dataset 2.
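To sketch the generator/discriminator pairing of a conditional GAN, the compact PyTorch example below conditions both networks on a class label (tumor / no-tumor) and runs one forward pass. The layer sizes, image resolution, and class count are illustrative assumptions, not the paper's architecture; in practice the class-conditioned synthetic images would feed the fine-tuning of the downstream CNN classifier.

    # Minimal conditional-GAN sketch in PyTorch (an illustrative assumption of
    # the generator/discriminator pairing, not the paper's exact networks).
    import torch
    import torch.nn as nn

    NOISE_DIM, N_CLASSES, IMG_SIZE = 64, 2, 32    # 2 classes: tumor / no-tumor

    class Generator(nn.Module):
        def __init__(self):
            super().__init__()
            self.label_emb = nn.Embedding(N_CLASSES, N_CLASSES)
            self.net = nn.Sequential(
                nn.Linear(NOISE_DIM + N_CLASSES, 256), nn.ReLU(),
                nn.Linear(256, IMG_SIZE * IMG_SIZE), nn.Tanh())

        def forward(self, noise, labels):
            x = torch.cat([noise, self.label_emb(labels)], dim=1)
            return self.net(x).view(-1, 1, IMG_SIZE, IMG_SIZE)

    class Discriminator(nn.Module):
        def __init__(self):
            super().__init__()
            self.label_emb = nn.Embedding(N_CLASSES, N_CLASSES)
            self.net = nn.Sequential(
                nn.Linear(IMG_SIZE * IMG_SIZE + N_CLASSES, 256), nn.LeakyReLU(0.2),
                nn.Linear(256, 1), nn.Sigmoid())

        def forward(self, images, labels):
            x = torch.cat([images.view(images.size(0), -1),
                           self.label_emb(labels)], dim=1)
            return self.net(x)

    # One illustrative forward pass producing class-conditioned synthetic slices.
    G, D = Generator(), Discriminator()
    noise = torch.randn(8, NOISE_DIM)
    labels = torch.randint(0, N_CLASSES, (8,))
    fake = G(noise, labels)
    print(fake.shape, D(fake, labels).shape)      # (8, 1, 32, 32), (8, 1)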
ABSTRACT
Nowadays, brain tumors have become a leading cause of mortality worldwide. The brain cells in a tumor grow abnormally and adversely affect the surrounding brain cells. These cells may be either cancerous or non-cancerous, and their symptoms can vary depending on the tumor's location, size, and type. Due to its complex and varying structure, accurately detecting and classifying a brain tumor at the initial stages, in order to minimize loss of life, is challenging. This research proposes an improved fine-tuned model based on a CNN with ResNet50 and U-Net to address this problem. The model works on the publicly available TCGA-LGG dataset from The Cancer Imaging Archive (TCIA), which covers 120 patients. The proposed CNN and fine-tuned ResNet50 models are used to classify images as tumor or non-tumor. Furthermore, the U-Net model is integrated to segment the tumor regions correctly. The performance evaluation metrics are accuracy, intersection over union (IoU), Dice similarity coefficient (DSC), and similarity index (SI). The fine-tuned ResNet50 model achieves IoU 0.91, DSC 0.95, and SI 0.95, while U-Net with a ResNet50 backbone outperforms all other models and correctly classifies and segments the tumor region.
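The segmentation metrics reported above can be computed on binary masks as in the short sketch below. The masks are placeholder arrays, not the TCGA-LGG data, and the similarity index is omitted since its exact definition is not given in the abstract.

    # Illustrative computation of IoU and Dice similarity coefficient on binary
    # segmentation masks (placeholder arrays, not the TCGA-LGG data).
    import numpy as np

    def iou(pred, truth, eps=1e-7):
        inter = np.logical_and(pred, truth).sum()
        union = np.logical_or(pred, truth).sum()
        return (inter + eps) / (union + eps)

    def dice(pred, truth, eps=1e-7):
        inter = np.logical_and(pred, truth).sum()
        return (2 * inter + eps) / (pred.sum() + truth.sum() + eps)

    pred = np.zeros((128, 128), dtype=bool); pred[30:80, 30:80] = True
    truth = np.zeros((128, 128), dtype=bool); truth[35:85, 35:85] = True
    print(f"IoU={iou(pred, truth):.3f}  DSC={dice(pred, truth):.3f}")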
ABSTRACT
A brain tumor is a significant health concern that directly or indirectly affects thousands of people worldwide. Early and accurate detection of brain tumors is vital to successful treatment and to the improved quality of life of the patient. Several imaging techniques are used for brain tumor detection, the most common being MRI and CT scans. To overcome the limitations associated with these traditional techniques, computer-aided analysis of brain images has gained attention in recent years as a promising approach for accurate and reliable brain tumor detection. In this study, we propose a fine-tuned vision transformer model that uses advanced image processing and deep learning techniques to accurately identify the presence of brain tumors in the input images. The proposed FT-ViT model involves several stages, including data processing, patch processing, concatenation, feature selection and learning, and fine-tuning. Upon training on the CE-MRI dataset containing 5712 brain tumor images, the model could accurately identify the tumors, achieving an accuracy of 98.13%. The proposed method offers high accuracy and can significantly reduce the workload of radiologists, making it a practical approach in medical science. However, further research can be conducted to diagnose more complex and rare types of tumors with greater accuracy and reliability.
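The patch-processing stage mentioned above can be illustrated with a few lines of PyTorch: an image is split into fixed-size, non-overlapping patches that are flattened and linearly embedded before entering the transformer encoder. The image size, patch size, and embedding dimension are illustrative assumptions, not FT-ViT's actual configuration.

    # Illustrative sketch of ViT-style patch processing (dimensions are
    # assumptions, not FT-ViT's configuration): split an image into patches
    # and embed each patch as a token.
    import torch
    import torch.nn as nn

    img = torch.randn(1, 1, 224, 224)            # one grayscale MRI slice
    patch_size, embed_dim = 16, 256

    # Unfold into non-overlapping 16x16 patches -> (1, 196, 256 pixel values).
    patches = img.unfold(2, patch_size, patch_size).unfold(3, patch_size, patch_size)
    patches = patches.contiguous().view(1, -1, patch_size * patch_size)

    embed = nn.Linear(patch_size * patch_size, embed_dim)   # linear patch embedding
    tokens = embed(patches)                                  # (1, 196, 256)
    print(tokens.shape)                                      # torch.Size([1, 196, 256])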
ABSTRACT
The mental and physical well-being of healthcare workers has been affected by the global COVID-19 pandemic, which has impacted the mental health of medical staff in numerous ways. Most studies have examined sleep disorders, depression, anxiety, and post-traumatic problems in healthcare workers during and after the outbreak. The objective of this study is to evaluate the psychological effects of COVID-19 on healthcare professionals in Saudi Arabia. Healthcare professionals from tertiary teaching hospitals were invited to participate in the survey. Approximately 610 people participated, of whom 74.3% were female and 25.7% were male; both Saudi and non-Saudi participants were included. The study utilized multiple machine learning algorithms and techniques, such as Decision Tree (DT), Random Forest (RF), K-Nearest Neighbor (KNN), Gradient Boosting (GB), Extreme Gradient Boosting (XGBoost), and Light Gradient Boosting Machine (LightGBM). The machine learning models achieved 99% accuracy on the attributes included in the dataset. The dataset covers several aspects of medical workers, such as profession, working area, years of experience, nationality, and sleeping patterns. The study concluded that most of the participants from the medical department faced varying degrees of anxiety and depression. The results reveal considerable rates of anxiety and depression in Saudi frontline workers.
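For readers unfamiliar with how such classifiers are compared on tabular survey features, the short scikit-learn sketch below cross-validates a subset of the listed models (DT, RF, KNN, GB; XGBoost and LightGBM require separate packages). The features and target are synthetic placeholders, not the actual survey responses or depression/anxiety labels.

    # Illustrative comparison of several of the listed classifiers on tabular
    # survey-style features (synthetic placeholder data, not the real survey).
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=610, n_features=12, random_state=0)

    models = {
        "DT": DecisionTreeClassifier(random_state=0),
        "RF": RandomForestClassifier(random_state=0),
        "KNN": KNeighborsClassifier(),
        "GB": GradientBoostingClassifier(random_state=0),
    }
    for name, model in models.items():
        acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
        print(f"{name}: {acc:.3f}")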