ABSTRACT
Healthcare systems are a prime target for cyber-attacks because of the sensitive nature of the information they hold combined with the essential need for continuity of care. Medical laboratories are particularly vulnerable for a number of reasons, including their high level of information technology (IT), computerization and digitization. Based on reliable and widespread evidence that medical laboratories may be inadequately prepared for cyber-terrorism, a panel of experts of the Task Force Preparation of Labs for Emergencies (TF-PLE) of the European Federation of Clinical Chemistry and Laboratory Medicine (EFLM) has recognized the need to provide general guidance that could help medical laboratories become less vulnerable and better prepared for the dramatic circumstance of a disruptive cyber-attack, and has issued a number of consensus recommendations, which are summarized and described in this opinion paper.
ABSTRACT
An Adaptive Activation Function Deep Kronecker Neural Network optimized with the Bear Smell Search Algorithm (ADKNN-BSSA-CSMANET) is proposed for preventing cyber-security attacks in mobile ad hoc networks (MANETs). Mobile users are enrolled with a trusted authority using a cryptographic hash signature (SHA-256). Every mobile user uploads their finger-vein biometric, user ID, latitude and longitude for confirmation. A packet analyser checks whether any attack patterns are identified; it is implemented using adaptive density-based spatial clustering (ADSC), which derives information from the packet header. Geodesic filtering (GF) is used as a pre-processing method to remove unsolicited content and retain pertinent data. Group Teaching Algorithm (GTA)-based feature selection is utilized to choose an optimal set of features, and the Adaptive Activation Function Deep Kronecker Neural Network (ADKNN) is used to categorize packets as normal or attack (DoS, Probe, U2R, and R2L). The BSSA is then utilized to optimize the weight parameters of the ADKNN classifier for optimal classification. The proposed technique is implemented in Python, and its efficiency is evaluated with several performance metrics, such as accuracy, attack detection rate, detection delay, packet delivery ratio, throughput, and energy consumption. On the NSL-KDD dataset, the proposed technique provides 36.64%, 33.06%, and 33.98% lower detection delay compared with existing methods.
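A minimal sketch of the SHA-256 hash-signature enrollment step described above, assuming the enrollment record is simply a concatenation of the user ID, a finger-vein template, and the reported coordinates; the field layout and encoding are illustrative assumptions, not taken from the paper.

```python
import hashlib

def enrollment_signature(user_id: str, vein_template: bytes,
                         latitude: float, longitude: float) -> str:
    """Return a SHA-256 hash signature over the enrollment record.

    The record layout (ID | biometric template | coordinates) is an
    assumption made for illustration; the paper does not specify it.
    """
    record = b"|".join([
        user_id.encode("utf-8"),
        vein_template,
        f"{latitude:.6f}".encode("utf-8"),
        f"{longitude:.6f}".encode("utf-8"),
    ])
    return hashlib.sha256(record).hexdigest()

# The trusted authority stores this digest and later recomputes it to
# confirm the user during authentication.
sig = enrollment_signature("user-042", b"\x01\x02\x03\x04", 48.8566, 2.3522)
print(sig)
```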
ABSTRACT
As cyber-attacks increase in unencrypted communication environments such as the traditional Internet, protected communication channels based on cryptographic protocols, such as transport layer security (TLS), have been introduced to the Internet. Accordingly, attackers have been carrying out cyber-attacks by hiding themselves within protected communication channels. However, the nature of channels protected by cryptographic protocols makes it difficult to distinguish between normal and malicious network traffic behaviors. This means that traditional anomaly detection models relying on features extracted from packets by deep packet inspection (DPI) have been neutralized. Recently, anomaly detection approaches using artificial intelligence (AI) and statistical characteristics of traffic have been proposed as an alternative. In this article, we provide a systematic review of AI-based anomaly detection techniques over encrypted traffic. We set several research questions on the review topic and collected research according to eligibility criteria. Through the screening process and quality assessment, 30 research articles with high suitability were selected from the collected literature for inclusion in the review. We reviewed the selected research in terms of dataset, feature extraction, feature selection, preprocessing, anomaly detection algorithm, and performance indicators. The review confirmed that a wide variety of techniques are used for AI-based anomaly detection over encrypted traffic: some are similar to those used for unencrypted traffic, while others differ from them.
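As an illustration of the statistics-based approach this review covers, the sketch below derives simple flow-level features (packet-size and inter-arrival statistics) that remain observable under TLS and feeds them to an unsupervised detector. The feature set, the synthetic flows, and the choice of Isolation Forest are illustrative assumptions and are not drawn from any specific reviewed paper.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

def flow_features(pkt_sizes, inter_arrivals):
    """Statistical features computable without decrypting payloads."""
    return [
        np.mean(pkt_sizes), np.std(pkt_sizes), np.max(pkt_sizes),
        np.mean(inter_arrivals), np.std(inter_arrivals),
        len(pkt_sizes),                      # packets per flow
        float(np.sum(pkt_sizes)),            # bytes per flow
    ]

rng = np.random.default_rng(0)
# Synthetic "flows": per-flow packet sizes and inter-arrival times.
flows = [(rng.normal(800, 120, 50), rng.exponential(0.02, 50)) for _ in range(200)]
X = np.array([flow_features(s, t) for s, t in flows])

detector = IsolationForest(contamination=0.05, random_state=0).fit(X)
scores = detector.decision_function(X)   # lower score = more anomalous
print("most anomalous flow index:", int(np.argmin(scores)))
```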
ABSTRACT
The Internet's default inter-domain routing system, the Border Gateway Protocol (BGP), remains insecure. Detection techniques are dominated by approaches that involve large numbers of features, parameters, domain-specific tuning, and training, often contributing to an unacceptable computational cost. Efforts to detect anomalous activity in the BGP have been almost exclusively focused on single observable monitoring points and Autonomous Systems (ASs). BGP attacks can exploit and evade these limitations. In this paper, we review and evaluate categories of BGP attacks based on their complexity. Previously identified next-generation BGP detection techniques remain incapable of detecting advanced attacks that exploit single observable detection approaches and those designed to evade public routing monitor infrastructures. Advanced BGP attack detection requires lightweight, rapid capabilities with the capacity to quantify group-level multi-viewpoint interactions, dynamics, and information. We term this approach advanced BGP anomaly detection. This survey evaluates 178 anomaly detection techniques and identifies which are candidates for advanced attack anomaly detection. Preliminary findings from an exploratory investigation of advanced BGP attack candidates are also reported.
ABSTRACT
Cyber-security challenges are growing globally and are specifically targeting critical infrastructure. Conventional countermeasure practices are insufficient to provide proactive threat hunting. In this study, random forest (RF), support vector machine (SVM), multi-layer perceptron (MLP), AdaBoost, and hybrid models were applied to proactive threat hunting. By automating detection, the hybrid machine-learning-based method improves threat hunting and frees up time to concentrate on high-risk warnings. These models are deployed on approach devices, access servers, and principal servers. The efficacy of the individual models and the hybrid approaches is assessed. The findings show that the AdaBoost model provides the highest efficiency, with a 0.98 ROC area and 95.7% accuracy, detecting 146 threats with 29 false positives. Similarly, the random forest model achieved a 0.98 area under the ROC curve and 95% overall accuracy, correctly identifying 132 threats and reducing false positives to 31. The hybrid model exhibited promise, with a 0.89 ROC area and 94.9% accuracy, though it requires further refinement to lower its false positive rate. This research emphasizes the role of machine learning in improving cyber-security, particularly for critical infrastructure. Advanced ML techniques enhance threat detection and response times, and their continuous learning ability ensures adaptability to new threats.
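A minimal sketch of the kind of model comparison described above, assuming labelled benign/threat telemetry; synthetic data stands in for the study's dataset, and the hyperparameters are illustrative, not those used in the paper.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, accuracy_score

# Synthetic stand-in for labelled security telemetry (benign vs. threat).
X, y = make_classification(n_samples=4000, n_features=30, weights=[0.85, 0.15],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y,
                                          random_state=0)

models = {
    "RandomForest": RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM": SVC(probability=True, random_state=0),
    "MLP": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0),
    "AdaBoost": AdaBoostClassifier(n_estimators=200, random_state=0),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    proba = model.predict_proba(X_te)[:, 1]
    pred = model.predict(X_te)
    print(f"{name}: ROC AUC={roc_auc_score(y_te, proba):.3f}, "
          f"accuracy={accuracy_score(y_te, pred):.3f}")
```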
ABSTRACT
Early detection of ransomware attacks is critical for minimizing the potential damage caused by these malicious attacks. Feature selection plays a significant role in the development of an efficient and accurate ransomware early detection model. In this paper, we propose an enhanced Mutual Information Feature Selection (eMIFS) technique that incorporates a normalized hyperbolic function for ransomware early detection models. The normalized hyperbolic function is utilized to address the challenge of perceiving common characteristics among features, particularly when the dataset contains insufficient attack patterns. Term Frequency-Inverse Document Frequency (TF-IDF) was used to represent the features in numerical form, making them ready for feature selection and modeling. By integrating the normalized hyperbolic function, we improve the estimation of redundancy coefficients and effectively adapt the MIFS technique for early ransomware detection, i.e., before encryption takes place. Our proposed method, eMIFS, evaluates candidate features individually using the hyperbolic tangent function (tanh), which provides a suitable representation of each feature's relevance and redundancy. The approach enhances existing MIFS techniques by considering the individual characteristics of features rather than relying solely on their collective properties. The experimental evaluation of eMIFS demonstrates its efficacy in detecting ransomware attacks at an early stage, providing a more robust and accurate detection model than traditional MIFS techniques. Moreover, our results indicate that integrating the normalized hyperbolic function significantly improves the feature selection process and ultimately enhances early ransomware detection performance.
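For orientation, the sketch below shows the general shape of a Battiti-style MIFS greedy loop with a tanh-squashed redundancy term. It is only one plausible reading of the idea; the exact eMIFS normalization, the TF-IDF features, and the ransomware dataset are not reproduced here.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

def mifs_tanh(X, y, k=10, beta=0.5):
    """Greedy MIFS-style selection with a tanh-bounded redundancy penalty.

    Relevance: I(f; y).  Redundancy: mean MI between the candidate and the
    already-selected features, passed through tanh to keep it bounded.
    The specific eMIFS formulation differs; this is an illustrative variant.
    """
    relevance = mutual_info_classif(X, y, random_state=0)
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < k and remaining:
        scores = []
        for j in remaining:
            if selected:
                red = mutual_info_regression(X[:, selected], X[:, j], random_state=0)
                penalty = np.tanh(red.mean())
            else:
                penalty = 0.0
            scores.append(relevance[j] - beta * penalty)
        best = remaining[int(np.argmax(scores))]
        selected.append(best)
        remaining.remove(best)
    return selected

X, y = make_classification(n_samples=600, n_features=40, n_informative=8, random_state=0)
print("selected features:", mifs_tanh(X, y, k=8))
```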
ABSTRACT
In Internet of Things-based smart grids, smart meters record and report a massive amount of power consumption data at certain intervals to the utility's data center for load monitoring and energy management. Energy theft is a major problem for smart meters and causes non-technical losses. Energy theft attacks can be launched by malicious consumers who compromise their smart meters to report manipulated consumption data for lower billing. It is a global issue causing technical and financial damage to governments and operators. Deep learning-based techniques can effectively identify consumers involved in energy theft through their power consumption data. In this study, a hybrid convolutional neural network (CNN)-based energy-theft-detection system is proposed to detect data-tampering cyber-attack vectors. CNNs are commonly employed to automate feature extraction and classification; here, we employed a CNN for feature extraction and traditional machine learning algorithms for classification. Honest data were obtained from a real dataset, and six attack vectors causing data tampering were used to synthetically generate tampered data. Six separate datasets were created, one per attack vector, to design a specialized detector tailored to each specific attack, and an additional dataset containing all attack vectors was generated for designing a general detector. Furthermore, the imbalanced-dataset problem was addressed through the generative adversarial network (GAN) method. GAN was chosen for its ability to generate new data closely resembling real data, and its application in this field has not been extensively explored. The data generated with the GAN ensured better training of the hybrid CNN-based detector on honest and malicious consumption patterns. Finally, the results indicate that the proposed general detector can classify both honest and malicious users with satisfactory accuracy.
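The hybrid "CNN for features, traditional classifier for the decision" pattern can be sketched as below. The weekly 168-sample windows, the crude under-reporting tampering, the network architecture, and the Random Forest back end are all assumptions for illustration; the paper's real data, attack vectors, and GAN balancing step are not reproduced.

```python
import numpy as np
import tensorflow as tf
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in: weekly consumption profiles (168 hourly readings),
# label 1 = tampered, 0 = honest.
rng = np.random.default_rng(0)
X = rng.normal(1.0, 0.3, size=(2000, 168, 1)).astype("float32")
y = rng.integers(0, 2, size=2000)
X[y == 1] *= 0.5                      # crude "under-reporting" tampering

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# 1-D CNN trained briefly, then used only as a feature extractor.
cnn = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(168, 1)),
    tf.keras.layers.Conv1D(32, 5, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(64, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(name="features"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
cnn.compile(optimizer="adam", loss="binary_crossentropy")
cnn.fit(X_tr, y_tr, epochs=3, batch_size=64, verbose=0)

extractor = tf.keras.Model(cnn.input, cnn.get_layer("features").output)
F_tr = extractor.predict(X_tr, verbose=0)
F_te = extractor.predict(X_te, verbose=0)

# Traditional classifier on the learned features.
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(F_tr, y_tr)
print("hybrid CNN+RF accuracy:", accuracy_score(y_te, clf.predict(F_te)))
```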
ABSTRACT
The number of connected devices, or Internet of Things (IoT) devices, has rapidly increased. According to the latest available statistics, there were approximately 17.2 billion connected IoT devices in 2023; this is expected to reach 25.4 billion by 2030 and to grow year over year for the foreseeable future. IoT devices share, collect, and exchange data with one another via the internet, wireless networks, or other networks. IoT interconnection technology improves and facilitates people's lives but, at the same time, poses a real threat to their security. Denial-of-Service (DoS) and Distributed Denial-of-Service (DDoS) attacks are among the most common and threatening attacks against IoT device security; they represent an increasing trend, and reducing the associated risk will be a major challenge in the future. In this context, this paper presents an improved framework (SDN-ML-IoT) that works as an Intrusion Detection and Prevention System (IDPS) and can detect DDoS attacks more efficiently and mitigate them in real time. SDN-ML-IoT uses a Machine Learning (ML) method in a Software-Defined Networking (SDN) environment in order to protect smart home IoT devices from DDoS attacks. We employed ML methods based on Random Forest (RF), Logistic Regression (LR), k-Nearest Neighbors (kNN), and Naive Bayes (NB) with a One-versus-Rest (OvR) strategy and then compared our work to related works. Based on performance metrics such as the confusion matrix, training time, prediction time, accuracy, and Area Under the Receiver Operating Characteristic curve (AUC-ROC), it was established that SDN-ML-IoT, when applied with RF, outperforms the other ML algorithms as well as similar approaches in related work, with an accuracy of 99.99% and the ability to mitigate DDoS attacks in less than 3 s. A comparative analysis of the models and algorithms used in related works indicated that our proposed approach outperforms the others, showcasing its effectiveness in both detecting and mitigating DDoS attacks within SDNs. Based on these promising results, we deployed SDN-ML-IoT within the SDN, ensuring the safeguarding of IoT devices in smart homes against DDoS attacks in the network traffic.
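A minimal sketch of the One-versus-Rest comparison of the four base learners named above. Synthetic multiclass data stands in for the SDN flow statistics, and the class layout (one benign class plus three DDoS variants) is an assumption for illustration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.multiclass import OneVsRestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score

# Synthetic stand-in for SDN flow features: 1 benign class + 3 DDoS classes.
X, y = make_classification(n_samples=5000, n_features=20, n_classes=4,
                           n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y,
                                          random_state=0)

base_models = {
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
    "LR": LogisticRegression(max_iter=1000),
    "kNN": KNeighborsClassifier(n_neighbors=5),
    "NB": GaussianNB(),
}

for name, base in base_models.items():
    ovr = OneVsRestClassifier(base).fit(X_tr, y_tr)
    proba = ovr.predict_proba(X_te)
    print(f"{name}+OvR: accuracy={accuracy_score(y_te, ovr.predict(X_te)):.4f}, "
          f"AUC-ROC={roc_auc_score(y_te, proba, multi_class='ovr'):.4f}")
```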
ABSTRACT
Cybercriminals have become a formidable threat because they target the most valuable resource on earth: data. Organizations prepare against cyber attacks by creating Cyber Security Incident Response Teams (CSIRTs) that use various technologies to monitor and detect threats and to help perform forensics on machines and networks. Testing the limits of defense technologies and the skill of a CSIRT can be done through adversary emulation performed by so-called "red teams". The red team's work is primarily manual and requires a high level of skill. We propose SpecRep, a system that eases testing of the detection capabilities of defenses in complex, heterogeneous infrastructures. SpecRep uses previously known attack specifications to construct attack scenarios based on attacker objectives rather than traditional attack graphs or lists of actions. We create a metalanguage to describe the objectives to be achieved in an attack, together with a compiler that can build multiple attack scenarios that achieve those objectives. We use text processing tools aided by large language models to extract information from freely available white papers and convert it into plausible attack specifications that can then be emulated by SpecRep. We show how our system can emulate attacks against a smart home, a large enterprise, and an industrial control system.
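The snippet below is a purely hypothetical illustration of compiling attacker objectives into concrete action sequences; the objective names, action names, and expansion rules are invented for this sketch and are not SpecRep's metalanguage or compiler.

```python
# Hypothetical illustration only: none of these identifiers come from SpecRep.
OBJECTIVE_RULES = {
    "exfiltrate_database": [
        ["phish_credentials", "lateral_move_to_db", "dump_tables", "exfil_https"],
        ["exploit_vpn", "lateral_move_to_db", "dump_tables", "exfil_dns_tunnel"],
    ],
    "disrupt_plc": [
        ["compromise_engineering_ws", "push_malicious_logic"],
    ],
}

def compile_scenarios(objectives):
    """Expand high-level objectives into alternative concrete attack scenarios."""
    scenarios = [[]]
    for obj in objectives:
        expansions = OBJECTIVE_RULES.get(obj)
        if not expansions:
            raise ValueError(f"no rule for objective: {obj}")
        scenarios = [s + steps for s in scenarios for steps in expansions]
    return scenarios

for i, scenario in enumerate(compile_scenarios(["exfiltrate_database", "disrupt_plc"]), 1):
    print(f"scenario {i}: " + " -> ".join(scenario))
```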
ABSTRACT
Increased levels of digitalisation present major opportunities for efficiency in the oil and gas industry but can also contribute to new risks and vulnerabilities. Based on developments in the industry, the Norwegian Ocean Industry Authority (HAVTIL) has in recent years pursued targeted knowledge development and follow-up of companies' digitalisation initiatives. This paper explores data collected through HAVTIL's audits of the development and use of automated systems within well operations. The analysis of the data resulted in the identification of five main topics related to the implementation of digital technologies: organisational complexity; follow-up and implementation of technology; analysis and documentation; user interface and alarms; and competence and training. Overall, the results support research findings within human factors and technology development, pointing out that there is a lack of focus on human factors both in development projects and in operations. In addition, this paper provides insight into how digitalisation initiatives are followed up and discusses the results of the analysis in light of current developments in the industry.
To investigate automated operations and human performance, three audits performed by the Norwegian Ocean Industry Authority (HAVTIL) were used as case studies and form the basis for this paper.
Subjects
Automation, Oil and Gas Industry, Humans, Norway, Ergonomics, Digital Technology
ABSTRACT
Increasing numbers of healthcare data breaches highlight the need for structured organisational responses to protect patients, trainees and psychiatrists against identity theft and blackmail. Evidence-based guidance informed by the COVID-19 pandemic response includes: timely and reliable information tailored to users' safety, encouragement to take protective action, and access to practical and psychological support. For healthcare organisations that have suffered a data breach, insurance can substantially improve access to funded cyber-security responses, risk communication and public relations. Patients, trainees and psychiatrists need specific advice on protective measures. Healthcare data security legislative reform is urgently needed.
Subjects
COVID-19, Computer Security, Health Personnel, Mental Health Services, Humans, COVID-19/prevention & control, Computer Security/standards, Mental Health Services/standards, Mental Health Services/organization & administration, Communication, Confidentiality/standards, SARS-CoV-2
ABSTRACT
Integrating IoT devices in SCADA systems has provided efficient and improved data collection and transmission technologies. This enhancement comes with significant security challenges, exposing traditionally isolated systems to the public internet. Effective and highly reliable security devices, such as intrusion detection systems (IDSs) and intrusion prevention systems (IPSs), are therefore critical. Countless studies have used deep learning algorithms to design an efficient IDS; however, the fundamental issue of imbalanced datasets has not been fully addressed. In our research, we examined the impact of data imbalance on developing an effective SCADA-based IDS. To investigate the impact of various data balancing techniques, including random sampling, one-sided selection (OSS), near-miss, SMOTE, and ADASYN, we chose two imbalanced datasets: the Morris power dataset and the CICIDS2017 dataset. For binary classification, convolutional neural networks were coupled with long short-term memory (CNN-LSTM). The system's effectiveness was determined using the confusion matrix and the derived evaluation metrics of accuracy, precision, detection rate, and F1-score. Four experiments on the two datasets demonstrate the impact of data imbalance. This research aims to help security researchers understand imbalanced datasets and their impact on deep learning-based SCADA IDSs.
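A minimal sketch of applying the balancing techniques named above with the imbalanced-learn package, assuming a synthetic imbalanced dataset in place of the Morris power and CICIDS2017 data; the CNN-LSTM classifier that would consume the rebalanced data is not included here.

```python
from collections import Counter
from imblearn.over_sampling import SMOTE, ADASYN, RandomOverSampler
from imblearn.under_sampling import NearMiss, OneSidedSelection
from sklearn.datasets import make_classification

# Synthetic imbalanced stand-in for a SCADA intrusion dataset (~5% attacks).
X, y = make_classification(n_samples=6000, n_features=30, weights=[0.95, 0.05],
                           random_state=0)

balancers = {
    "RandomOverSampler": RandomOverSampler(random_state=0),
    "SMOTE": SMOTE(random_state=0),
    "ADASYN": ADASYN(random_state=0),
    "NearMiss": NearMiss(version=1),
    "OneSidedSelection": OneSidedSelection(random_state=0),
}

print("original:", Counter(y))
for name, sampler in balancers.items():
    X_res, y_res = sampler.fit_resample(X, y)
    print(f"{name:>18}: {Counter(y_res)}")
# The rebalanced (X_res, y_res) would then be windowed and fed to the
# CNN-LSTM binary classifier used in the study.
```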
Subjects
Algorithms, Benchmarking, Data Collection, Internet, Long-Term Memory
ABSTRACT
Future engineering systems with new capabilities that far exceed today's levels of autonomy, functionality, usability, dependability, and cyber security are predicted to be designed and developed using cyber-physical systems (CPSs). In this paper, the security of CPSs is investigated through a case study of a smart grid, using a reinforcement learning (RL)-augmented attack graph to effectively highlight the subsystems' weaknesses. In particular, the State-Action-Reward-State-Action (SARSA) RL technique is used, in which the attacker is taken to be the agent and an attack graph created for the system serves as the environment. SARSA uses rewards and penalties to identify the worst-case attack scenario: with the highest cumulative reward, an attacker can inflict the most harm on the system with the fewest available actions. The results successfully identified the worst-case attack scenario, with a total reward of 26.9, and the most severely damaged subsystems.
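A minimal sketch of SARSA run over a toy attack graph, assuming invented states, exploit actions, and rewards; the paper's smart-grid graph and reward values are not reproduced.

```python
import random

# Toy attack graph: states are footholds, actions are exploit edges leading
# to (next_state, reward).  All names and rewards are illustrative.
GRAPH = {
    "internet":     {"phish": ("workstation", 1.0), "scan": ("dmz", 0.5)},
    "dmz":          {"exploit_web": ("app_server", 2.0)},
    "workstation":  {"escalate": ("domain_admin", 3.0)},
    "app_server":   {"pivot": ("scada_hmi", 5.0)},
    "domain_admin": {},            # terminal states
    "scada_hmi":    {},
}

def sarsa(episodes=2000, alpha=0.1, gamma=0.9, eps=0.1):
    Q = {s: {a: 0.0 for a in acts} for s, acts in GRAPH.items()}

    def policy(s):
        acts = list(GRAPH[s])
        if not acts:
            return None
        if random.random() < eps:
            return random.choice(acts)
        return max(acts, key=lambda a: Q[s][a])

    for _ in range(episodes):
        s, a = "internet", policy("internet")
        while a is not None:
            s2, r = GRAPH[s][a]
            a2 = policy(s2)
            target = r + gamma * (Q[s2][a2] if a2 else 0.0)
            Q[s][a] += alpha * (target - Q[s][a])   # SARSA update
            s, a = s2, a2
    return Q

random.seed(0)
Q = sarsa()
# Greedy action per state approximates the worst-case attack path.
print({s: max(acts, key=acts.get) for s, acts in Q.items() if acts})
```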
ABSTRACT
A large volume of security events, generally collected by distributed monitoring sensors, overwhelms human analysts at security operations centers and raises an alert fatigue problem. Machine learning is expected to mitigate this problem by automatically distinguishing between true alerts, or attacks, and falsely reported ones. Machine learning models should first be trained on datasets having correct labels, but the labeling process itself requires considerable human resources. In this paper, we present a new selective sampling scheme for efficient data labeling via unsupervised clustering. The new scheme transforms the byte sequence of an event into a fixed-size vector through content-defined chunking and feature hashing. Then, a clustering algorithm is applied to the vectors, and only a few samples from each cluster are selected for manual labeling. The experimental results demonstrate that the new scheme can select only 2% of the data for labeling without degrading the F1-score of the machine learning model. Two datasets, a private dataset from a real security operations center and a public dataset from the Internet for experimental reproducibility, are used.
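A minimal sketch of the selective-sampling idea: hash each event's byte sequence into a fixed-size vector, cluster, and pick a few representatives per cluster for manual labeling. Fixed-size chunking replaces the paper's content-defined chunking only to keep the sketch short, and the chunk size, vector width, and cluster count are illustrative assumptions.

```python
import numpy as np
from sklearn.feature_extraction import FeatureHasher
from sklearn.cluster import MiniBatchKMeans

def chunk_tokens(payload: bytes, size: int = 8):
    """Split an event's byte sequence into chunk tokens.

    The paper uses content-defined chunking; fixed-size chunks are used here
    only for brevity.
    """
    return [payload[i:i + size].hex() for i in range(0, len(payload), size)]

rng = np.random.default_rng(0)
events = [rng.integers(0, 256, rng.integers(40, 200), dtype=np.uint8).tobytes()
          for _ in range(1000)]

# Hash variable-length token lists into fixed-size vectors.
hasher = FeatureHasher(n_features=256, input_type="string")
X = hasher.transform(chunk_tokens(e) for e in events)

# Cluster, then select a few samples per cluster for manual labeling.
km = MiniBatchKMeans(n_clusters=20, random_state=0, n_init=10).fit(X)
to_label = []
for c in range(km.n_clusters):
    members = np.flatnonzero(km.labels_ == c)
    to_label.extend(members[:2])          # e.g., 2 samples per cluster
print(f"selected {len(to_label)} of {len(events)} events for labeling")
```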
Subjects
Algorithms, Internet, Humans, Reproducibility of Results, Cluster Analysis, Machine Learning
ABSTRACT
This work is concerned with the vulnerability of a networked industrial control system to cyber-attacks, which is a critical issue nowadays, because an attack on a controlled process can damage or destroy it. The attacks considered here use long short-term memory (LSTM) neural networks, which model dynamical processes. This means that the attacker need not know the physical nature of the process; an LSTM network is sufficient to mislead the process operator. Our experimental studies were conducted in an industrial control network containing a magnetic levitation process. The model training, evaluation, and structure selection are described. The chosen LSTM network mimicked the considered process very well. Finally, based on the obtained results, we formulate possible protection methods against the considered types of cyber-attack.
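A minimal sketch of training an LSTM to mimic a process signal from sliding windows, so that a spoofed measurement stream can be generated; the simulated sinusoidal signal, window length, and network size are illustrative stand-ins, not the paper's magnetic levitation data or model.

```python
import numpy as np
import tensorflow as tf

# Simulated process output (stand-in for the magnetic levitation signal).
t = np.arange(0, 60, 0.01, dtype="float32")
signal = np.sin(2 * np.pi * 0.5 * t) + \
    0.05 * np.random.default_rng(0).normal(size=t.size).astype("float32")

def windows(x, n=50):
    """Sliding windows of length n and the next sample as the target."""
    X = np.stack([x[i:i + n] for i in range(len(x) - n)])
    y = x[n:]
    return X[..., None], y

X, y = windows(signal)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(50, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, batch_size=128, verbose=0)

# A spoofed measurement stream generated by such a model could be substituted
# for real sensor readings to mislead the process operator.
spoofed_next = model.predict(X[-1:], verbose=0)[0, 0]
print("spoofed next sample:", float(spoofed_next))
```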
RESUMO
The prediction of the cyber-security situation plays an important role in early warning against cyber-security attacks. The first-order accumulative grey model has achieved remarkable results in many prediction scenarios. Since recent events have a greater impact on future decisions, new information should be given more weight; the disadvantage of the first-order accumulative method is that it gives equal weight to all of the original data. In this paper, a fractional-order accumulative grey model (FAGM) is used to establish the prediction model, and an intelligent optimization algorithm combining particle swarm optimization (PSO) with a genetic algorithm (GA) is used to determine the optimal order. The model is applied to the prediction of Internet cyber-security situations. A comparison with the traditional grey model GM(1,1), the grey model GM(1,n), and the fractional discrete grey seasonal model FDGSM(1,1) shows that our model is suitable for cases with insufficient data and irregular sample sizes, and that its prediction accuracy and stability are better than those of the other three models.
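For reference, the sketch below computes the fractional-order accumulation that FAGM-type models are built on, following the standard grey-modelling definition (Gamma-function binomial weights); the paper's own PSO-GA search for the optimal order and the subsequent grey-model fitting are not reproduced.

```python
from math import gamma

def fractional_accumulation(x, r):
    """r-order accumulative generation operator used by FAGM-type grey models.

    The weight Gamma(k-i+r) / (Gamma(k-i+1) * Gamma(r)) generalizes the
    first-order cumulative sum (r = 1); with r < 1, older observations
    receive smaller weights.  This mirrors the standard grey-model
    definition; the paper's PSO-GA procedure would tune r itself.
    """
    n = len(x)
    return [sum(gamma(k - i + r) / (gamma(k - i + 1) * gamma(r)) * x[i]
                for i in range(k + 1))
            for k in range(n)]

data = [10.0, 12.0, 11.5, 13.2, 14.8]           # illustrative series
print(fractional_accumulation(data, r=1.0))      # reduces to the cumulative sum
print(fractional_accumulation(data, r=0.6))      # fractional-order accumulation
```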
Subjects
Algorithms, Forecasting
ABSTRACT
During the COVID-19 pandemic, most organizations were forced to implement a work-from-home policy, and in many cases employees have not been expected to return to the office on a full-time basis. This sudden shift in work culture was accompanied by an increase in the number of information-security threats for which organizations were unprepared. The ability to address these threats effectively relies on a comprehensive threat analysis and risk assessment, and on the creation of relevant asset and threat taxonomies for the new work-from-home culture. In response to this need, we built the required taxonomies and performed a thorough analysis of the threats associated with this new work culture. In this paper, we present our taxonomies and the results of our analysis. We also examine the impact of each threat, indicate when it is expected to occur, describe the various prevention methods available commercially or proposed in academic research, and present specific use cases.
Subjects
COVID-19, Pandemics, Humans, Pandemics/prevention & control, Computer Security, Risk Assessment
ABSTRACT
Intrusion detection systems (IDSs) play a crucial role in securing networks and identifying malicious activity, which is a critical problem in cyber security. In recent years, metaheuristic optimization algorithms and deep learning techniques have been applied to IDSs to improve their accuracy and efficiency. In general, optimization algorithms can be used to boost the performance of IDS models, while deep learning methods, such as convolutional neural networks (CNNs), can improve the ability of an IDS to detect and classify intrusions. In this paper, we propose a new IDS model based on the combination of deep learning and optimization methods. First, a feature extraction method based on CNNs is developed. Then, a new feature selection method is applied, based on a modified version of the Growth Optimizer (GO), called MGO, in which the Whale Optimization Algorithm (WOA) is used to boost the GO's search process. Extensive evaluations and comparisons were conducted to assess the quality of the suggested method using public datasets from cloud and Internet of Things (IoT) environments. The applied techniques showed promising results in identifying previously unknown attacks with high accuracy, and MGO performed better than several previous methods in all experimental comparisons.
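The sketch below shows only the wrapper fitness function that a metaheuristic such as MGO/WOA would drive: a binary feature mask scored by cross-validated accuracy traded off against the number of selected features. The Growth Optimizer and WOA update equations are not reproduced; a random search stands in for them, and the kNN evaluator and weighting are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def fitness(mask, X, y, alpha=0.99):
    """Wrapper fitness a metaheuristic (e.g., MGO/WOA) would maximize:
    classification accuracy balanced against feature-subset size."""
    if not mask.any():
        return 0.0
    acc = cross_val_score(KNeighborsClassifier(), X[:, mask], y, cv=3).mean()
    return alpha * acc + (1 - alpha) * (1 - mask.sum() / mask.size)

X, y = make_classification(n_samples=500, n_features=30, n_informative=6,
                           random_state=0)
rng = np.random.default_rng(0)

# Stand-in search loop: random candidate masks instead of MGO/WOA updates.
best_mask, best_fit = None, -1.0
for _ in range(20):
    mask = rng.random(X.shape[1]) < 0.4
    f = fitness(mask, X, y)
    if f > best_fit:
        best_mask, best_fit = mask, f
print("selected features:", np.flatnonzero(best_mask), "fitness:", round(best_fit, 4))
```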
ABSTRACT
The small-drone technology domain is the outcome of a breakthrough in technological advancement for drones. Drones use the Internet of Things (IoT) to provide inter-location services for navigation. However, due to issues related to their architecture and design, drones are not immune to security and privacy threats. Establishing a secure and reliable network is essential to obtaining optimal performance from drones. While small drones offer promising avenues for growth in the civil and defense industries, they are prone to attacks on safety, security, and privacy. The current architecture of small drones necessitates modifications to their data transformation and privacy mechanisms to align with domain requirements. This paper investigates the latest trends in safety, security, and privacy related to drones and the Internet of Drones (IoD), highlighting the importance of secure drone networks that are impervious to interception and intrusion. To mitigate cyber-security threats, the proposed framework incorporates intelligent machine learning models into the design and structure of IoT-aided drones, yielding adaptable and secure technology. Furthermore, a new merged dataset is constructed, comprising a drone dataset and two benchmark datasets. The proposed strategy outperforms previous algorithms, achieving 99.89% accuracy on the drone dataset and 91.64% on the merged dataset. Overall, this intelligent framework offers a potential approach to improving the security and resilience of cyber-physical satellite systems and IoT-aided aerial vehicle systems, addressing the rising security challenges of an interconnected world.
ABSTRACT
Industrial automation systems are undergoing a revolutionary change driven by Internet-connected operating equipment and the adoption of cutting-edge technologies such as AI, IoT, cloud computing, and deep learning within business organizations. These innovative solutions are facilitating Industry 4.0. However, the emergence of these technological advances and the quality solutions they enable will also introduce unique security challenges whose consequences need to be identified. This research presents a hybrid intrusion detection model (HIDM) that uses an OCNN-LSTM and transfer learning (TL) for Industry 4.0. The proposed model utilizes a CNN optimized via the grey wolf optimizer (GWO), which fine-tunes the CNN parameters and helps to improve the model's prediction accuracy. Transfer learning enhances the training process by carrying the knowledge acquired by the OCNN-LSTM model into each subsequent cycle, which helps to improve detection accuracy. To measure the performance of the proposed model, we conducted a multi-class classification analysis on various online industrial IDS datasets, i.e., ToN-IoT and UNSW-NB15. We conducted two experiments for these two datasets, and various performance metrics, i.e., precision, F-measure, recall, accuracy, and detection rate, were calculated for the OCNN-LSTM model with and without TL, as well as for the CNN and LSTM models. For the ToN-IoT dataset, the OCNN-LSTM with TL model achieved a precision of 92.7%; for the UNSW-NB15 dataset, the precision was 94.25%, which is higher than that of the OCNN-LSTM without TL.
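A minimal sketch of a CNN-LSTM intrusion classifier with a simple weight-transfer step between a source and a target dataset. The architecture, hyperparameters, and synthetic data are illustrative assumptions; the paper's GWO-based hyperparameter optimization and the actual ToN-IoT / UNSW-NB15 pipelines are not reproduced.

```python
import numpy as np
import tensorflow as tf

def build_cnn_lstm(n_features, n_classes, filters=64, lstm_units=64):
    """Illustrative CNN-LSTM classifier; the paper tunes such hyperparameters
    with the grey wolf optimizer, which is not reproduced here."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_features, 1)),
        tf.keras.layers.Conv1D(filters, 3, activation="relu"),
        tf.keras.layers.MaxPooling1D(2),
        tf.keras.layers.LSTM(lstm_units),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])

# Synthetic stand-ins for two IDS datasets with the same feature width.
rng = np.random.default_rng(0)
X_src, y_src = rng.normal(size=(3000, 40, 1)).astype("float32"), rng.integers(0, 5, 3000)
X_tgt, y_tgt = rng.normal(size=(800, 40, 1)).astype("float32"), rng.integers(0, 5, 800)

# Train on the source dataset, then transfer the learned weights and
# fine-tune on the (smaller) target dataset.
source = build_cnn_lstm(40, 5)
source.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
               metrics=["accuracy"])
source.fit(X_src, y_src, epochs=2, batch_size=64, verbose=0)

target = build_cnn_lstm(40, 5)
target.set_weights(source.get_weights())          # transfer learning step
target.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
               metrics=["accuracy"])
target.fit(X_tgt, y_tgt, epochs=2, batch_size=64, verbose=0)
print("fine-tuned on target dataset")
```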