ABSTRACT
In the Internet of Things (IoT) era, the surge in Machine-Type Devices (MTDs) has introduced Massive IoT (MIoT), opening new horizons in the world of connected devices. However, such proliferation presents challenges, especially in storing and analyzing massive, heterogeneous data streams in real time. To manage Massive IoT data streams, we utilize analytical database software, Apache Druid version 28.0.0, which excels in real-time data processing. Our approach relies on a publish/subscribe mechanism, where device-generated data are relayed to a dedicated broker, effectively functioning as a separate server. This broker enables any application to subscribe to the dataset, promoting a dynamic and responsive data ecosystem. At the core of our data transmission infrastructure lies Apache Kafka version 3.6.1, renowned for its exceptional data flow management performance. Kafka efficiently bridges the gap between MIoT sensors and brokers, enabling parallel broker clusters for greater scalability. In pursuit of uninterrupted connectivity, we incorporate a fail-safe mechanism with two Software-Defined Radios (SDRs), Nutaq PicoLTE Release 1.5, within our model. This strategic redundancy enhances data transmission availability, safeguarding against connectivity disruptions. Furthermore, to enhance data repository security, we utilize blockchain technology, specifically Hyperledger Fabric, known for its high-performance attributes, ensuring data integrity, immutability, and security. Our latency results demonstrate that our platform reduces latency to less than 25 milliseconds for 100,000 devices, a scale that qualifies as MIoT. Furthermore, our findings on blockchain performance underscore our model as a secure platform, achieving over 800 Transactions Per Second on a dataset comprising 14,000 transactions, demonstrating its high efficiency.
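The broker-mediated relay described above can be sketched in a few lines of Python. This is a minimal in-process illustration of the publish/subscribe pattern, not the paper's Kafka/Druid deployment; the topic name and message fields are invented for the example.

```python
from collections import defaultdict

class Broker:
    """Minimal in-process publish/subscribe broker: devices publish
    readings to a topic; any application can subscribe to that topic."""
    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Relay the device-generated message to every subscriber of the topic.
        for callback in self._subscribers[topic]:
            callback(message)

broker = Broker()
received = []
broker.subscribe("miot/sensors", received.append)   # illustrative topic name
broker.publish("miot/sensors", {"device_id": 42, "temp_c": 21.5})
```

In the paper's architecture this decoupling is what lets brokers run as separate servers and be replicated into parallel clusters.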
ABSTRACT
Message Queuing Telemetry Transport (MQTT) is a lightweight publish/subscribe protocol and currently one of the most popular application protocols in the Internet of Things (IoT), thanks to its simplicity of use and its scalability. The secured version, MQTTS, which combines MQTT with the Transport Layer Security (TLS) protocol, has several shortcomings: it offers only one-to-one security, supports a limited number of security features, and has high computation and communication costs. In this paper, we propose a flexible and lightweight security solution to be integrated into MQTT, addressing many-to-many communication, which reduces the communication overhead by 80% and the computational overhead by 40% for the setup of a secure connection on the client side.
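The many-to-many protection argued for above can be contrasted with TLS's one-to-one model using a simple sketch: a group key shared by all publishers and subscribers lets any subscriber verify any publisher's message end to end, with the broker unable to forge payloads. This is an illustrative stdlib HMAC sketch (authenticity only), not the paper's proposed scheme; the key and payload are invented.

```python
import hmac
import hashlib

# Hypothetical pre-shared group key; in practice this would come from
# a key-distribution step, which this sketch does not model.
GROUP_KEY = b"group-key-shared-by-all-parties"

def protect(payload: bytes) -> bytes:
    """Prepend a 32-byte HMAC-SHA256 tag so any group member can
    verify the message end to end, regardless of which broker relayed it."""
    tag = hmac.new(GROUP_KEY, payload, hashlib.sha256).digest()
    return tag + payload

def verify(message: bytes) -> bytes:
    """Check the tag and return the payload, or raise on tampering."""
    tag, payload = message[:32], message[32:]
    expected = hmac.new(GROUP_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed")
    return payload
```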
Subjects
Communication, Telemetry, Humans
ABSTRACT
In the Industry 4.0 era, with the continuous integration of industrial field systems and upper-layer facilities, interconnection between industrial wireless sensor networks (IWSNs) and industrial Internet networks is becoming increasingly pivotal. However, when deployed in real industrial scenarios, IWSNs are often connected to legacy control systems through wired industrial network protocols via gateways. Complex protocol translation is required in these gateways, and semantic interoperability is lacking between IWSNs and the industrial Internet. To fill this gap, our study focuses on realizing the interconnection and interoperability between an IWSN and the industrial Internet. The Open Platform Communications Unified Architecture (OPC UA) and joint publish/subscribe (pub/sub) communication between the two networks are used to achieve efficient transmission. Taking Wireless Networks for Industrial Automation - Process Automation (WIA-PA), a typical IWSN technology, as an example, we develop a communication architecture that adopts OPC UA as a communication bridge to integrate the WIA-PA network into the industrial Internet. A WIA-PA virtualization method for OPC UA pub/sub data sources is designed to solve the data mapping problem between WIA-PA and OPC UA. Then, the WIA-PA/OPC UA joint pub/sub transmission mechanism and the corresponding configuration mechanism are designed. Finally, a laboratory-level verification system is implemented to validate the proposed architecture, and the experimental results demonstrate its feasibility and capability.
ABSTRACT
Today, data is being actively generated by a variety of devices, services, and applications. Such data is important not only for the information that it contains, but also for its relationships to other data and to interested users. Most existing Big Data systems focus on passively answering queries from users, rather than actively collecting data, processing it, and serving it to users. To satisfy both passive and active requests at scale, application developers need either to heavily customize an existing passive Big Data system or to glue one together with systems like Streaming Engines and Pub-sub services. Either choice requires significant effort and incurs additional overhead. In this paper, we present the BAD (Big Active Data) system as an end-to-end, out-of-the-box solution for this challenge. It is designed to preserve the merits of passive Big Data systems and introduces new features for actively serving Big Data to users at scale. We show the design and implementation of the BAD system, demonstrate how BAD facilitates providing both passive and active data services, investigate the BAD system's performance at scale, and illustrate the complexities that would result from instead providing BAD-like services with a "glued" system.
ABSTRACT
IEC 61850 is one of the most prominent communication standards adopted by the smart grid community due to its high scalability, multi-vendor interoperability, and support for several input/output devices. Generic Object-Oriented Substation Events (GOOSE), a widely used communication protocol defined in IEC 61850, provides reliable and fast transmission of events for the electrical substation system. This paper investigates the security vulnerabilities of this protocol and analyzes their potential impact on the smart grid, rigorously assessing the security of the GOOSE protocol through an automated process and identifying vulnerabilities in the context of smart grid communication. The vulnerabilities are tested using real-time simulation and industry-standard hardware-in-the-loop emulation. An in-depth experimental analysis is performed to demonstrate and verify the security weakness of the GOOSE publish-subscribe protocol with respect to substation protection within the smart grid setup. It is observed that an adversary familiar with the substation network architecture can create falsified attack scenarios that affect the physical operation of the power system. Extensive experiments using the real-time testbed validate the theoretical analysis, and the obtained experimental results prove that the GOOSE-based IEC 61850 compliant substation system is vulnerable to attacks from malicious intruders.
ABSTRACT
Large-scale IoT applications with tens of thousands of geo-distributed IoT devices creating enormous volumes of data pose a big challenge for designing communication systems that provide data delivery with low latency and high scalability. In this paper, we investigate a hierarchical Edge-Cloud publish/subscribe broker model using an efficient two-tier routing scheme to alleviate these issues when transmitting event notifications in wide-scale IoT systems. In this model, IoT devices take advantage of proximate edge brokers strategically deployed in edge networks for data delivery services in order to reduce latency. To deliver data more efficiently, we propose a proactive mechanism that applies collaborative filtering techniques to cluster geographically proximate edge brokers that publish and/or subscribe to similar topics. This allows brokers in the same cluster to exchange data directly with each other to further reduce data delivery latency. In addition, we devise a coordinative scheme to help brokers discover and bridge similar topic channels across the whole system, informing other brokers for efficient data delivery. Extensive simulation results show that our model can adeptly support event notifications with low latency, small amounts of relay traffic, and high scalability for large-scale, delay-sensitive IoT applications. Specifically, in comparison with similar Edge-Cloud approaches, our proposal achieves the lowest relay traffic among brokers, about 7.77% on average, and its average delivery latency is approximately 66% of that of a PubSubCoord-like approach.
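One way to read the broker-clustering idea is grouping brokers whose topic sets overlap. The sketch below uses plain Jaccard similarity with a greedy pass rather than the paper's collaborative filtering technique; the threshold and broker names are illustrative.

```python
def jaccard(a, b):
    """Topic-set similarity between two brokers (0.0 to 1.0)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster_brokers(broker_topics, threshold=0.5):
    """Greedily group brokers whose published/subscribed topic sets are
    similar enough that they could exchange data directly in one cluster."""
    clusters = []
    for broker, topics in broker_topics.items():
        for cluster in clusters:
            representative = cluster[0]
            if jaccard(topics, broker_topics[representative]) >= threshold:
                cluster.append(broker)
                break
        else:
            clusters.append([broker])
    return clusters

# Illustrative edge brokers and their topic interests.
topics = {"e1": {"traffic", "air"},
          "e2": {"traffic", "air", "noise"},
          "e3": {"parking"}}
clusters = cluster_brokers(topics)
```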
ABSTRACT
Continuing the evolution towards Industry 4.0, industrial communication protocols represent a significant topic of interest, as real-time data exchange between multiple devices constitutes the pillar of Industrial Internet of Things (IIoT) scenarios. Although legacy protocols still persist in the industry, the transition was initiated by the key Industry 4.0 facilitating protocol, the Open Platform Communication Unified Architecture (OPC UA). For OPC UA to reach its envisioned applicability, it must coexist with other emerging real-time-oriented protocols in the production lines. The Data Distribution Service (DDS) will certainly be present in future architectures in areas such as robots, cobots, and compact units. The current paper proposes a solution to evaluate the real-time coexistence of the OPC UA and DDS protocols, functioning in parallel and in a gateway context. The purpose is to confirm the compatibility and feasibility of the two protocols alongside a general definition of criteria and expectations from an architectural point of view, pointing out advantages and disadvantages in a neutral manner and shaping a comprehensive view of the possibilities. The researched architecture is meant to support both performance comparison scenarios and interaction scenarios over a gateway application. Considering industrial tendencies, the developed solution is applied on non-ideal infrastructures to provide more feasible and faster applicability in the production lines.
Subjects
Internet of Things, Communication, Industries, Records
ABSTRACT
The Open Platform Communication Unified Architecture (OPC UA) protocol is a key enabler of Industry 4.0 and the Industrial Internet of Things (IIoT). OPC UA is already accepted by the industry, and its presence is expected to reach more and more fields, applications, and hierarchical levels. Advances within the latest specifications provide the opportunity to extend the capabilities and applicability of the protocol, targeting better performance in terms of data volumes, speed, availability, footprint, and security. Continuing previous research focusing on the publish-subscribe (pub/sub) mechanism and real-time constraints, the current study aims to consider higher data volumes, approach multi-channel User Datagram Protocol (UDP)-based communication, and analyze the robustness of the developed mechanism in the context of long-term data transmission. Consequently, the research proposes to extend the applicability of OPC UA to image transmission. Although highly needed, image transmission after processing is currently beyond the reach of OPC UA and other legacy industrial protocols, and is treated as a separate concern in the industrial environment. The concept and developments are applied considering both the end-of-line industrial manufacturing process in the automotive sector and car-to-infrastructure communication. Without special hardware constraints, the obtained results are appreciable, opening various future perspectives for image transmission using OPC UA.
ABSTRACT
With the recent advances in the area of OPC UA interfacing and the continuously growing requirements of the industrial automation world, combined with the increasingly complex configurations of ECUs inside vehicles and the services associated with car-to-infrastructure and even car-to-car communications, the gap between the two domains must be analyzed and filled. This gap occurred mainly because of the rigidness and lack of transparency of the software-hardware part of the automotive sector and the new demands for car-to-infrastructure communications. The issues are related to protocols as well as to conceptual views regarding requirements and already adopted individual directions. The industrial world is in the Industry 4.0 era, and in the Industrial Internet of Things context, its key interfacing enabler is OPC UA. Mainly to accommodate requirements related, among others, to high volumes, transfer rates, larger numbers of nodes, and improved coordination and services, OPC UA enhances within its specifications the Publish-Subscribe mechanism and TSN technology. In the OPC UA context, together with the VSOME/IP Notify-Subscribe mechanism, the current work steps toward a better understanding of the relation between the needs of the industry and the suitable technologies. It provides an in-depth analysis of the most recent paradigms developed for data transmission, taking into consideration the real-time capabilities and use cases of high concern in the automation and automotive domains, and works toward a VSOME/IP-OPC UA Gateway that includes the necessary characteristics and services to fill the protocol-related gap between the above-mentioned fields. The developed case study results prove the efficiency of the concept and provide a better understanding of the impact between ongoing solutions and future requirements.
ABSTRACT
Communication protocols are evolving continuously as the interfacing and interoperability requirements are the foundation of Industry 4.0 and the Industrial Internet of Things (IIoT), and the Open Platform Communication Unified Architecture (OPC UA) protocol is a major enabling technology. OPC UA has been adopted by the industry, and research is continuously carried out to extend and improve its capabilities, to fulfil the growing requirements of specific industries and hierarchical levels. Consistent issues that have to be approached are related to the latest specifications and the real-time context, which could extend the applicability of the protocol and bring significant benefits in terms of speed, data volumes, footprint, and security. The real-time context is essential in the automotive sector, where it is highly developed within some specific protocols. The current work first presents a conceptual analysis to improve OPC UA interfacing using the Publish-Subscribe mechanism, focusing on real-time constraints and role distribution between entities, and considering some well-founded interfacing strategies from the automotive sector. The conceptual analysis is materialized into a solution that takes the OPC UA Publish-Subscribe over User Datagram Protocol (UDP) mechanism to the next level by developing a synchronization algorithm and a multithreading broker application. The goal is real-time responsiveness and increased efficiency: lowering the publisher and subscriber footprint and computational effort, reducing the difficulty of sending larger volumes of data to various subscribers, and reducing the load on the network and services in terms of polling and filtering. The proof of concept is evaluated, and the results prove the efficiency of the approach and the solution.
ABSTRACT
The publish/subscribe model has gained prominence in the Internet of Things (IoT), and both Message Queue Telemetry Transport (MQTT) and the Constrained Application Protocol (CoAP) support it. However, existing coverage-based fuzzers may miss some paths when fuzzing such publish/subscribe protocols, because they implicitly assume that there are only two parties in a protocol, which does not hold here: there are three parties, i.e., the publisher, the subscriber, and the broker. In this paper, we propose MultiFuzz, a new coverage-based multiparty-protocol fuzzer. First, it embeds multiple-connection information in a single input. Second, it uses a message mutation algorithm to stimulate protocol state transitions, without the need for protocol specifications. Third, it uses a new desockmulti module to feed the network messages into the program under test. desockmulti is similar to desock (Preeny), a tool widely used by the community, but it is specially designed for fuzzing and is 10x faster. We implement MultiFuzz based on AFL and use it to fuzz two popular projects, Eclipse Mosquitto and libCoAP, and we reported the discovered problems to the projects. In addition, we compare MultiFuzz with AFL and two state-of-the-art fuzzers, MOPT and AFLNET, and find that it discovers more paths and crashes.
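A specification-free message mutation step, as described for MultiFuzz's second component, can be sketched as random bit flips, insertions, and deletions over the raw message bytes. The actual MultiFuzz algorithm is more involved; this only conveys the general idea, and the seed message is a placeholder rather than a real MQTT packet.

```python
import random

def mutate(message: bytes, rng: random.Random) -> bytes:
    """Specification-free mutation: flip a bit, insert a byte, or delete
    a byte, so the mutated message may drive the broker into a new state.
    Assumes a non-empty input message."""
    data = bytearray(message)
    op = rng.choice(("flip", "insert", "delete"))
    pos = rng.randrange(len(data))
    if op == "flip":
        data[pos] ^= 1 << rng.randrange(8)
    elif op == "insert":
        data.insert(pos, rng.randrange(256))
    elif len(data) > 1:          # keep at least one byte
        del data[pos]
    return bytes(data)

seed = b"CONNECT"                # placeholder seed, not a real packet
mutated = mutate(seed, random.Random(1))
```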
ABSTRACT
At present, most publish/subscribe middlewares assume equal Quality of Service (QoS) requirements for all users. However, in many real-world Internet of Things (IoT) service scenarios, different users may have different delay requirements, and providing reliable differentiated services has become an urgent problem. The rise of Software-Defined Networking (SDN) opens many possibilities for improving the QoS of publish/subscribe middlewares thanks to its greater programmability: event topics and priorities can be encoded directly into flow entries of SDN switches to meet customized requirements. In this paper, we first propose an SDN-like publish/subscribe middleware architecture and describe how to use this architecture and the priority queues supported by OpenFlow switches to realize differentiated services. Then we present a machine learning method using the eXtreme Gradient Boosting (XGBoost) model to solve the difficult issue of accurately estimating the queuing delay of switches. Finally, we propose a reliable differentiated-services guarantee mechanism based on the queuing delay and the programmability of SDN, namely a two-layer queue management mechanism. Experimental evaluations show that the delay predicted by the XGBoost method is closer to the real value, and that our mechanism can reduce end-to-end delay, reduce packet loss rate, and allocate bandwidth more reasonably.
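The effect of per-topic priorities on dispatch order can be illustrated with a small in-process sketch. In the paper, differentiated service is realized in OpenFlow switch queues; the class below is only an analogy, with invented topic names and priority values.

```python
import heapq
import itertools

class PriorityDispatcher:
    """Sketch of priority-based event dispatch: each topic carries a
    priority (smaller = more urgent), and events are drained in priority
    order with FIFO ordering within the same priority."""
    def __init__(self, topic_priority):
        self._priority = topic_priority          # topic -> priority
        self._heap = []
        self._seq = itertools.count()            # FIFO tie-breaker

    def publish(self, topic, event):
        entry = (self._priority[topic], next(self._seq), event)
        heapq.heappush(self._heap, entry)

    def drain(self):
        while self._heap:
            yield heapq.heappop(self._heap)[2]

d = PriorityDispatcher({"alarm": 0, "telemetry": 1})  # illustrative topics
d.publish("telemetry", "t1")
d.publish("alarm", "a1")
d.publish("telemetry", "t2")
```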
ABSTRACT
IoT sensors use the publish/subscribe model for communication to benefit from its decoupled nature with respect to space, time, and synchronization. Because of the heterogeneity of communicating parties, semantic decoupling is added as a fourth dimension. The added semantic decoupling complicates the matching process and reduces its efficiency. Our proposed algorithm clusters subscriptions and events according to topic and performs the matching process within these clusters, which increases the throughput by reducing the matching time from the range of 16-18 ms to 2-4 ms. Moreover, the accuracy of matching is improved when subscriptions must be fully approximated, as demonstrated by an over 40% increase in F-score results. This work shows the benefit of clustering, as well as the improvement in the matching accuracy and efficiency achieved using this approach.
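Clustering subscriptions by topic so that an incoming event is matched only within its own cluster can be sketched as follows. The predicate form and names are illustrative, and the paper's approximate matching of semantically decoupled subscriptions is not reproduced here.

```python
from collections import defaultdict

class ClusteredMatcher:
    """Subscriptions are bucketed by topic, so matching an event only
    scans its topic's cluster instead of the full subscription table."""
    def __init__(self):
        self._clusters = defaultdict(list)   # topic -> [(subscriber, predicate)]

    def subscribe(self, subscriber, topic, predicate=None):
        self._clusters[topic].append((subscriber, predicate or (lambda e: True)))

    def match(self, topic, event):
        """Return subscribers in the event's cluster whose predicate holds."""
        return [s for s, pred in self._clusters[topic] if pred(event)]

m = ClusteredMatcher()
m.subscribe("a", "temp", lambda reading: reading > 30)  # content filter
m.subscribe("b", "temp")                                # topic-only
m.subscribe("c", "humidity")
```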
ABSTRACT
In this paper, we report an algorithm that is designed to leverage the cloud as infrastructure to support Internet of Things (IoT) by elastically scaling in/out so that IoT-based service users never stop receiving sensors' data. This algorithm is able to provide an uninterrupted service to end users even during the scaling operation since its internal state repartitioning is transparent for publishers or subscribers; its scaling operation is time-bounded and depends only on the dimension of the state partitions to be transmitted to the different nodes. We describe its implementation in E-SilboPS, an elastic content-based publish/subscribe (CBPS) system specifically designed to support context-aware sensing and communication in IoT-based services. E-SilboPS is a key internal asset of the FIWARE IoT services enablement platform, which offers an architecture of components specifically designed to capture data from, or act upon, IoT devices as easily as reading/changing the value of attributes linked to context entities. In addition, we discuss the quantitative measurements used to evaluate the scale-out process, as well as the results of this evaluation. This new feature rounds out the context-aware content-based features of E-SilboPS by providing, for example, the necessary middleware for constructing dashboards and monitoring panels that are capable of dynamically changing queries and continuously handling data in IoT-based services.
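The claim that the scaling operation's cost depends only on the state partitions that must move can be illustrated with simple hash partitioning. E-SilboPS's actual repartitioning scheme is not described in the abstract, so the modulo-hash assignment below is purely an assumption for illustration.

```python
import hashlib

def partition(key: str, nodes: int) -> int:
    """Deterministically assign a subscription key to one of `nodes`
    state partitions."""
    digest = hashlib.sha256(key.encode()).digest()
    return int.from_bytes(digest[:8], "big") % nodes

def keys_to_move(keys, old_nodes, new_nodes):
    """Subscriptions whose state must migrate when the cluster scales
    from old_nodes to new_nodes; only these bound the scaling time."""
    return [k for k in keys if partition(k, old_nodes) != partition(k, new_nodes)]

keys = [f"sub-{i}" for i in range(200)]  # illustrative subscription keys
moved = keys_to_move(keys, 3, 4)         # scale out from 3 to 4 nodes
```

With plain modulo hashing most keys move on a scale-out; a consistent-hashing scheme would shrink `moved` further, which is one reason real elastic systems avoid naive modulo assignment.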
ABSTRACT
Monitoring data sources for possible changes is an important consumption requirement for applications interacting with the Web of Data. In this article, MonARCh, an architecture for monitoring result changes of registered SPARQL queries in the Linked Data environment, is proposed. MonARCh can be understood as a publish/subscribe system in the general sense; however, it differs in how communication with the data sources is realized, since data sources in the Linked Data environment do not publish changes in the data. MonARCh provides the necessary communication infrastructure between data sources and consumers for the notification of changes. Users subscribe SPARQL queries to the system, which are then converted to federated queries. MonARCh periodically checks for updates by re-executing SERVICE clauses and notifies users in case of any result change. In addition, for scalability MonARCh takes advantage of the concurrent computation of the actor model, and the parallel join algorithm it utilizes speeds up query execution and result generation. The design science methodology is used during the design, implementation, and evaluation of the architecture. Compared to the literature, MonARCh meets the requirements of Linked Data monitoring and the state of the art while offering many distinctive features from both points of view. The evaluation results show that, even under a limited two-node cluster setting, MonARCh can monitor between 300 and 25,000 queries, depending on the query selectivities executed within our test bench.
ABSTRACT
Real-time data processing and distributed messaging are problems that have been worked on for a long time. As the amount of spatial data being produced has increased, coupled with increasingly complex software solutions being developed, there is a need for platforms that address these needs. In this paper, we present a distributed and light streaming system for combating pandemics and give a case study on spatial analysis of the COVID-19 geo-tagged Twitter dataset. Three of the major components in this system are the translation of tweets matching user-defined bounding boxes, named entity recognition in tweets, and skyline queries; Apache Pulsar addresses all of these components. With the proposed system, end users can get COVID-19-related information within foreign regions, filter/search location-, organization-, person-, and miscellaneous-based tweets, and perform skyline-based queries. The evaluation of the proposed system is done based on certain characteristics and performance metrics. The study differs greatly from other studies in its use of distributed computing and big data technologies on spatial data to combat COVID-19. It is concluded that Pulsar is designed to handle large amounts of data with long-term on-disk persistence.
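A skyline query returns exactly the points not dominated by any other point. A minimal sketch of the dominance test and a naive skyline follows (smaller-is-better in every dimension; Pulsar itself is not involved, and the sample points are invented).

```python
def dominates(a, b):
    """a dominates b if a is no worse in every dimension and strictly
    better in at least one (here, smaller values are better)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def skyline(points):
    """Naive O(n^2) skyline: keep every point not dominated by another.
    Real systems use divide-and-conquer or index-based algorithms."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# e.g. (distance_km, case_count): (4, 4) is dominated by (2, 2).
result = skyline([(1, 5), (2, 2), (5, 1), (4, 4)])
```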
ABSTRACT
Middlewares are standard tools for modern software development in many areas, especially in robotics. Although they have become common for high-level applications, there is little support for real-time systems and low-level control. Therefore, µRT provides a lightweight solution for resource-constrained embedded systems, such as microcontrollers. It features publish-subscribe communication and remote procedure calls (RPCs) and can validate timing constraints at runtime. In contrast to other middlewares, µRT does not rely on specific transports for communication but can be used with any technology. Empirical results demonstrate its small memory footprint, consistent temporal behavior, and predominantly linear scaling. In a user study, the usability of µRT was found to be competitive with state-of-the-art solutions.
ABSTRACT
Putting real-time medical data processing applications into practice comes with challenges such as scalability and performance. Processing medical images from different collaborators is an example of such applications, in which chest X-ray data are processed to extract knowledge. It is not easy to process data and get the required information in real time using central processing techniques when data get very large. In this paper, real-time data are filtered and forwarded to the right processing node by the proposed topic-based hierarchical publish/subscribe messaging middleware in a distributed, scalable network of collaborating computation nodes, instead of the classical approach of centralized computation. This enables processing streaming medical data in near real time and makes a warning system possible. End users have filtering/searching capabilities; the returned search results can be images (COVID-19 or non-COVID-19) together with their metadata (gender and age). Here, COVID-19 is detected from chest X-ray images using a novel capsule network-based model. This middleware allows for a smaller search space as well as shorter times for obtaining search results.
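Topic-based hierarchical filtering of the kind described can be illustrated with MQTT-style topic filters, where `+` matches exactly one level and `#` matches all remaining levels. The topic layout below (e.g. `xray/<clinic>/<label>`) is invented for the example and is not the paper's actual topic scheme.

```python
def topic_matches(filter_: str, topic: str) -> bool:
    """MQTT-style hierarchical topic matching:
    '+' matches one level, '#' matches the rest of the hierarchy."""
    f_parts, t_parts = filter_.split("/"), topic.split("/")
    for i, f in enumerate(f_parts):
        if f == "#":
            return True                      # multi-level wildcard
        if i >= len(t_parts):
            return False                     # topic ran out of levels
        if f != "+" and f != t_parts[i]:
            return False                     # literal level mismatch
    return len(f_parts) == len(t_parts)      # no trailing topic levels

# A processing node subscribed to all COVID-19 results from any clinic:
assert topic_matches("xray/+/covid", "xray/clinic1/covid")
```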
ABSTRACT
BACKGROUND: The Internet of Things (IoT) enables the development of innovative applications in various domains such as healthcare, transportation, and Industry 4.0. Publish-subscribe systems enable IoT devices to communicate with the cloud platform. However, IoT applications need context-aware messages to translate the data into contextual information, allowing the applications to act cognitively. In addition, end-to-end security of publish-subscribe messages on both ends (devices and cloud) is essential. However, achieving security on constrained IoT devices with memory, payload, and energy restrictions is a challenge. CONTRIBUTION: Messages in IoT need to achieve both energy efficiency and secure delivery. Thus, the main contribution of this paper is a performance evaluation of a message structure that standardizes the publish-subscribe topic and payload used by the cloud platform and the IoT devices. We also propose a standardization of the topic and payload for publish-subscribe systems. CONCLUSION: The messages promote energy efficiency, enabling ultra-low-power and high-capacity devices and reducing the bytes transmitted in the IoT domain. The performance evaluation demonstrates that publish-subscribe systems (namely, AMQP, DDS, and MQTT) can use our proposed energy-efficient message structure on IoT. Additionally, the message system provides end-to-end confidentiality, integrity, and authenticity between IoT devices and the cloud platform.
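A standardized, byte-frugal payload of the kind evaluated can be sketched with a fixed binary layout. The field set (device id, sensor type, reading) and widths below are hypothetical, chosen only to show how a fixed layout reduces transmitted bytes versus a verbose JSON document; they are not the paper's actual message structure.

```python
import struct

# Hypothetical compact payload layout (big-endian):
#   device id  -> uint16 (2 bytes)
#   sensor type-> uint8  (1 byte)
#   reading    -> float32(4 bytes)
# Total: 7 bytes, versus tens of bytes for an equivalent JSON object.
FORMAT = ">HBf"

def encode(device_id: int, sensor_type: int, value: float) -> bytes:
    return struct.pack(FORMAT, device_id, sensor_type, value)

def decode(payload: bytes) -> tuple:
    return struct.unpack(FORMAT, payload)

blob = encode(42, 1, 21.5)
```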
ABSTRACT
BACKGROUND AND OBJECTIVE: Multiple medical specialties rely on image data, typically following the Digital Imaging and Communications in Medicine (DICOM) ISO 12052 standard, to support diagnosis through telemedicine. Remote analysis by different physicians requires the same image to be transmitted simultaneously to different destinations in real time. This scenario demands a large number of resources to store and transmit DICOM images in real time, which has been explored using some cloud-based solutions. However, these solutions lack strategies to improve performance through the cloud elasticity feature. In this context, this article proposes a cloud-based publish/subscribe (PubSub) model, called PS2DICOM, which employs multilevel resource elasticity to improve the performance of DICOM data transmissions. METHODS: A prototype is implemented to evaluate PS2DICOM. A PubSub communication model is adopted, considering the coexistence of two classes of users: (i) image data producers (publishers); and (ii) image data consumers (subscribers). PS2DICOM employs a cloud infrastructure to guarantee service availability and performance through resource elasticity at two levels of the cloud: (i) brokers and (ii) data storage. In addition, images are compressed prior to transmission to reduce the demand for network resources, using one of three algorithms: (i) DEFLATE, (ii) LZMA, or (iii) BZIP2. PS2DICOM employs dynamic data compression levels at the client side to improve network performance according to the currently available network throughput. RESULTS: Results indicate that PS2DICOM can improve transmission quality, storage capabilities, and the querying and retrieving of DICOM images. The general efficiency gain is approximately 35% in data sending and receiving operations. This gain results from the two levels of elasticity, allowing resources to be scaled up or down automatically and transparently.
CONCLUSIONS: The contributions of PS2DICOM are twofold: (i) multilevel cloud elasticity to adapt the computing resources on demand; and (ii) adaptive data compression to match network quality and optimize data transmission. Results suggest that compressing medical image data with PS2DICOM can improve transmission efficiency, allowing a team of specialists to communicate in real time even when geographically distant.
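Client-side adaptive selection among DEFLATE, LZMA, and BZIP2 according to current throughput can be sketched with the Python stdlib codecs. The thresholds and codec-to-throughput mapping below are assumptions for illustration, not PS2DICOM's actual policy.

```python
import zlib
import bz2
import lzma

def compress_for_throughput(data: bytes, mbps: float):
    """Pick a codec by current network throughput: cheap, fast DEFLATE
    when the link is good; heavier BZIP2/LZMA (better ratio, more CPU)
    as bandwidth drops. Thresholds are illustrative only."""
    if mbps >= 50:
        return "DEFLATE", zlib.compress(data, 6)
    if mbps >= 10:
        return "BZIP2", bz2.compress(data, 9)
    return "LZMA", lzma.compress(data, preset=6)

sample = b"DICM" + b"\x00" * 5000   # placeholder bytes, not a real DICOM file
codec, blob = compress_for_throughput(sample, 100.0)
```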