ABSTRACT
The sense of touch enables us to safely interact with and control our contact with our surroundings. Many technical systems and applications could profit from a similar sense. Yet, despite the emergence of e-skin systems covering more extensive areas, large-area realizations of e-skin that effectively boost applications are still rare. Recent advancements have improved the deployability and robustness of e-skin systems, laying the basis for their scalability. However, upscaling e-skin systems introduces yet another challenge: handling a large amount of heterogeneous tactile information with complex spatial relations between sensing points. We targeted this challenge and proposed an event-driven approach for large-area skin systems. While our previous works focused on the implementation and experimental validation of this approach, the present work provides the consolidated foundations for realizing, designing, and understanding large-area event-driven e-skin systems for effective applications. It homogenizes the different perspectives on event-driven systems and assesses the applicability of existing event-driven implementations to large-area skin systems. Additionally, we provide novel guidelines for tuning the novelty threshold of event generators. Overall, this work develops a systematic approach to realizing a flexible event-driven information handling system on standard computer systems for large-scale e-skin, with detailed descriptions of the effective design of event generators and decoders. All designs and guidelines are validated by outlining their impact on our implementations and by consolidating various experimental results. The resulting e-skin system design is scalable, efficient, flexible, and capable of handling large amounts of information without customized hardware, making complex large-area tactile applications feasible, for instance in robotics.
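To make the core idea concrete, the sketch below shows one common form of event generator, a send-on-delta rule that emits an event only when a sensing point's signal departs from its last transmitted value by more than a novelty threshold. This is a minimal illustrative sketch, not the authors' implementation; the function name, threshold value, and synthetic signal are all assumptions.

```python
import numpy as np

def generate_events(samples, threshold):
    """Send-on-delta event generator (illustrative sketch):
    emit an event only when the signal deviates from the last
    transmitted value by at least the novelty threshold."""
    events = [(0, samples[0])]        # the initial state is always sent
    last_sent = samples[0]
    for i, x in enumerate(samples[1:], start=1):
        if abs(x - last_sent) >= threshold:
            events.append((i, x))     # (sample index, new value)
            last_sent = x
    return events

# Example: a synthetic pressure signal from a single sensing point
rng = np.random.default_rng(0)
signal = np.cumsum(rng.normal(0.0, 0.1, 200))
events = generate_events(signal, threshold=0.5)
print(f"{len(events)} events generated from {len(signal)} samples")
```

A larger threshold suppresses more traffic at the cost of reconstruction accuracy at the decoder, which is precisely the trade-off the tuning guidelines in the abstract address.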
Subjects
Robotics/standards, Touch/physiology, Wearable Electronic Devices/standards, Computers, Humans, Robotics/trends, Wearable Electronic Devices/trends
ABSTRACT
This paper deals with a recent design of event-driven, observer-based smart sensors for output feedback control of linear systems. We redesign the triggering mechanism proposed in a previously reported system by implementing self-sampling smart sensors and, as a result, improve its performance. Our approach is theoretically supported by Lyapunov theory and numerically demonstrated by controlling an inverted pendulum on a cart.
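As a concrete illustration of the kind of triggering mechanism discussed here, the sketch below wraps a standard relative-threshold event rule around a discrete-time Luenberger observer. All matrices and gains (A, B, C, L, K, sigma) are placeholder assumptions for a generic unstable second-order plant; in the paper they would come from the Lyapunov-based design, which is not reproduced here.

```python
import numpy as np

# Placeholder discrete-time model and gains (assumed, not the paper's design)
A = np.array([[1.0, 0.01], [0.1, 1.0]])   # plant dynamics (open-loop unstable)
B = np.array([[0.0], [0.01]])
C = np.array([[1.0, 0.0]])
L = np.array([[0.5], [1.0]])              # observer gain (assumed)
K = np.array([[-120.0, -25.0]])           # state-feedback gain (assumed)
sigma = 0.05                              # relative triggering threshold

x = np.array([[0.1], [0.0]])              # true plant state
xhat = np.zeros((2, 1))                   # observer estimate
x_sent = xhat.copy()                      # last estimate sent to the controller
transmissions = 0

for k in range(500):
    # Event rule: transmit only when the estimate drifts far enough
    # from the last transmitted value, relative to its own norm.
    if np.linalg.norm(xhat - x_sent) >= sigma * np.linalg.norm(xhat):
        x_sent = xhat.copy()
        transmissions += 1
    u = K @ x_sent                        # control uses the last *sent* estimate
    y = C @ x                             # measured output
    x = A @ x + B @ u                     # plant update
    xhat = A @ xhat + B @ u + L @ (y - C @ xhat)  # observer update

print(f"{transmissions} transmissions over 500 steps")
```

Between events the controller holds the last transmitted estimate, so communication happens only when the estimate carries enough new information, which is the resource saving that event-driven sampling buys.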
ABSTRACT
Increasing complexity and data-generation rates in cyber-physical systems and the industrial Internet of things are calling for a corresponding increase in AI capabilities at the resource-constrained edges of the Internet. Meanwhile, the resource requirements of digital computing and deep learning are growing exponentially, in an unsustainable manner. One possible way to bridge this gap is the adoption of resource-efficient brain-inspired "neuromorphic" processing and sensing devices, which use event-driven, asynchronous, dynamic neurosynaptic elements with colocated memory for distributed processing and machine learning. However, since neuromorphic systems are fundamentally different from conventional von Neumann computers and clock-driven sensor systems, several challenges are posed to large-scale adoption and integration of neuromorphic devices into the existing distributed digital-computational infrastructure. Here, we describe the current landscape of neuromorphic computing, focusing on characteristics that pose integration challenges. Based on this analysis, we propose a microservice-based conceptual framework for neuromorphic systems integration, consisting of a neuromorphic-system proxy, which would provide virtualization and communication capabilities required in distributed systems of systems, in combination with a declarative programming approach offering engineering-process abstraction. We also present concepts that could serve as a basis for the realization of this framework, and identify directions for further research required to enable large-scale system integration of neuromorphic devices.
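To ground the proxy concept, the toy sketch below buffers asynchronous spike events from a simulated event-driven device and republishes them to a conventional clock-driven bus as fixed-rate, rate-coded digests. All names, the message format, and the use of `print` as the publisher are hypothetical stand-ins; a real proxy would front actual neuromorphic hardware and a real message broker.

```python
import json
import queue
import threading
import time

spike_queue = queue.Queue()

def neuromorphic_device():
    """Stand-in for an event-driven sensor emitting (timestamp, neuron_id)."""
    for i in range(100):
        spike_queue.put((time.time(), i % 8))
        time.sleep(0.01)

def proxy(publish, period=0.1):
    """Drain asynchronous spikes; publish periodic rate-coded summaries."""
    while True:
        time.sleep(period)
        counts = {}
        while not spike_queue.empty():
            _, neuron = spike_queue.get_nowait()
            counts[neuron] = counts.get(neuron, 0) + 1
        if not counts:
            break                     # toy termination: device has gone quiet
        publish(json.dumps({"window_s": period, "spike_counts": counts}))

threading.Thread(target=neuromorphic_device, daemon=True).start()
proxy(publish=print)
```

The proxy thus hides the asynchronous, event-driven side behind the clock-driven message interface that existing distributed infrastructure expects, which is the virtualization role the framework assigns to the neuromorphic-system proxy.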
ABSTRACT
Many data processing systems are naturally modeled as pipelines, where data flows through a network of computational procedures. This representation is particularly suitable for computer vision algorithms, which in most cases possess complex logic and a large number of parameters to tune. In addition, online vision systems, such as those in the industrial automation context, have to communicate with other distributed nodes. When developing a vision system, one normally proceeds from ad hoc experimentation and prototyping to highly structured system integration. The early stages of this continuum are characterized by the challenges of developing a feasible algorithm, while the later stages deal with composing the vision function with other components in a networked environment. In between, one strives to manage the complexity of the developed system and to preserve existing knowledge. To tackle these challenges, this paper presents EPypes, an architecture and Python-based software framework for developing vision algorithms in the form of computational graphs and integrating them with distributed systems based on publish-subscribe communication. EPypes facilitates flexible algorithm prototyping and provides a structured approach to managing algorithm logic and exposing the developed pipelines as part of online systems.
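The computational-graph idea can be sketched in a few lines of plain Python. The sketch below is a generic stand-in, not the EPypes API: stages are pure functions composed into a pipeline whose per-stage parameters can be retuned between runs, and intermediate results are kept for inspection, which is what makes prototyping and parameter tuning tractable. All stage names and parameters are hypothetical.

```python
def build_pipeline(stages):
    """Compose a list of (name, callable) stages into one pipeline."""
    def run(data, params):
        trace = {}
        for name, f in stages:
            data = f(data, **params.get(name, {}))
            trace[name] = data        # keep intermediate results for tuning
        return data, trace
    return run

# Hypothetical vision stages; strings stand in for image data
def blur(img, ksize=3):      return f"blur({img}, k={ksize})"
def threshold(img, t=128):   return f"thresh({img}, t={t})"
def find_contours(img):      return f"contours({img})"

pipeline = build_pipeline([("blur", blur),
                           ("threshold", threshold),
                           ("contours", find_contours)])

result, trace = pipeline("frame0", {"blur": {"ksize": 5},
                                    "threshold": {"t": 100}})
print(result)
```

In a publish-subscribe setting, such a pipeline would be wrapped in a subscriber that runs it on each incoming frame and publishes the result, mirroring how EPypes exposes developed pipelines to online distributed systems.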