Results 1 - 20 of 58
1.
Methods ; 212: 12-20, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36858137

ABSTRACT

Gut microbiota plays a crucial role in modulating pig development and health, and gut microbiota characteristics are associated with differences in feed efficiency. To answer open questions in feed efficiency analysis, biologists seek to retrieve information across multiple heterogeneous data sources. However, this is error-prone and time-consuming work, since the queries can involve a sequence of multiple sub-queries over several databases. We present an implementation of an ontology-based Swine Gut Microbiota Federated Query Platform (SGMFQP) that provides a convenient, automated, and efficient query service for swine feeding and gut microbiota data. The system is built on a domain-specific Swine Gut Microbiota Ontology (SGMO), which allows queries to be constructed independently of how the data are actually organized in the individual sources. This process is supported by a template-based query interface. A Datalog+-based federated query engine transforms the queries into sub-queries tailored to each individual data source, and an automated workflow orchestration mechanism executes the sub-queries in each source database and consolidates the results. The efficiency of the system is demonstrated on several swine feeding scenarios.
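
No implementation details are given in the abstract; as a rough, hypothetical illustration of ontology-mediated federated querying of the kind described, the following minimal Python sketch (all source names, table names, and mock rows invented) maps an ontology term to per-source sub-queries and consolidates the partial results.

    # Hypothetical sketch of ontology-mediated federated querying: an ontology
    # term is mapped to source-specific sub-queries, each sub-query is run
    # against its (mocked) database, and the partial results are consolidated.
    ONTOLOGY_MAPPINGS = {
        "FeedEfficiency": {
            "phenotype_db": "SELECT pig_id, fcr FROM feed_records WHERE fcr IS NOT NULL",
            "microbiome_db": "SELECT pig_id, genus, abundance FROM gut_taxa",
        },
    }

    def run_subquery(source, sql):
        """Stand-in for a real database call; returns mocked rows keyed by pig_id."""
        mock = {
            "phenotype_db": [{"pig_id": 1, "fcr": 2.4}, {"pig_id": 2, "fcr": 2.9}],
            "microbiome_db": [{"pig_id": 1, "genus": "Prevotella", "abundance": 0.31},
                              {"pig_id": 2, "genus": "Lactobacillus", "abundance": 0.12}],
        }
        return mock[source]

    def federated_query(term):
        """Decompose an ontology-level query into sub-queries and join on pig_id."""
        merged = {}
        for source, sql in ONTOLOGY_MAPPINGS[term].items():
            for row in run_subquery(source, sql):
                merged.setdefault(row["pig_id"], {}).update(row)
        return list(merged.values())

    if __name__ == "__main__":
        for record in federated_query("FeedEfficiency"):
            print(record)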


Subjects
Gastrointestinal Microbiome, User-Computer Interface, Animals, Swine, Databases, Factual, Information Sources, Semantics
2.
Sensors (Basel) ; 24(16)2024 Aug 15.
Article in English | MEDLINE | ID: mdl-39204979

ABSTRACT

In the era of ubiquitous computing, the challenges imposed by the increasing demand for real-time data processing, security, and energy efficiency call for innovative solutions. The emergence of fog computing has provided a promising paradigm to address these challenges by bringing computational resources closer to data sources. Despite its advantages, the characteristics of fog computing pose challenges in heterogeneous environments in terms of resource allocation and management, provisioning, security, and connectivity, among others. This paper introduces COGNIFOG, a novel cognitive fog framework currently under development, designed to leverage intelligent, decentralized decision-making processes, machine learning algorithms, and distributed computing principles to enable autonomous operation, adaptability, and scalability across the IoT-edge-cloud continuum. By integrating cognitive capabilities, COGNIFOG is expected to increase the efficiency and reliability of next-generation computing environments, potentially providing a seamless bridge between the physical and digital worlds. Preliminary experimental results with a limited set of connectivity-related COGNIFOG building blocks show promising improvements in network resource utilization in a real-world-based IoT scenario. Overall, this work paves the way for further development of the framework, aimed at making it more intelligent, resilient, and aligned with the ever-evolving demands of next-generation computing environments.

3.
Angew Chem Int Ed Engl ; : e202412566, 2024 Aug 28.
Article in English | MEDLINE | ID: mdl-39198218

ABSTRACT

Advanced oxygen reduction reaction (ORR) catalysts that integrate well-dispersed single-atom (SA) and atomic-cluster (AC) sites show potential for bolstering catalytic activity. However, the precise structural modulation of such catalysts and the in-depth investigation of their catalytic mechanisms pose ongoing challenges. Herein, a proactive cluster-lockdown strategy is introduced, relying on the confinement of trinuclear clusters with metal-atom exchange in covalent organic polymers, enabling the targeted synthesis of a series of multicomponent ensembles featuring FeCo (Fe or Co) dual-single-atom (DSA) and atomic-cluster configurations (FeCo-DSA/AC) via thermal pyrolysis. The designed FeCo-DSA/AC surpasses its Fe- and Co-derived counterparts by 18 mV and 49 mV in ORR half-wave potential, while exhibiting exemplary performance in Zn-air batteries. Comprehensive analysis and theoretical simulation elucidate that the enhanced activity stems from adeptly orchestrated dz2-dxz and O 2p orbital hybridization proximate to the Fermi level, which fine-tunes the antibonding states to expedite OH* desorption and OOH* formation, thereby augmenting catalytic activity. This work elucidates the synergistic potentiation of active sites in hybrid electrocatalysts, pioneering targeted design strategies for single-atom-cluster electrocatalysts.

4.
Sensors (Basel) ; 23(8)2023 Apr 15.
Article in English | MEDLINE | ID: mdl-37112349

ABSTRACT

Edge computing is a viable approach to improve service delivery and performance parameters by extending the cloud with resources placed closer to a given service environment. Numerous research papers in the literature have already identified the key benefits of this architectural approach. However, most results are based on simulations performed in closed network environments. This paper aims to analyze the existing implementations of processing environments containing edge resources, taking into account the targeted quality of service (QoS) parameters and the utilized orchestration platforms. Based on this analysis, the most popular edge orchestration platforms are evaluated in terms of their workflow for including remote devices in the processing environment and their ability to adapt the logic of the scheduling algorithms to improve the targeted QoS attributes. The experimental results compare the performance of the platforms and show the current state of their readiness for edge computing in real network and execution environments. These findings suggest that Kubernetes and its distributions have the potential to provide effective scheduling across the resources on the network's edge. However, some challenges still have to be addressed to fully adapt these tools to the dynamic and distributed execution environment that edge computing implies.
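
As a hedged illustration of the kind of edge-aware scheduling logic the evaluated platforms are expected to support (not the platforms' actual code), the following Python sketch filters nodes on requested resources and then prefers the feasible node with the lowest latency; all node and pod values are invented.

    # Hypothetical sketch of edge-aware scheduling: filter nodes by available
    # resources, then score the feasible ones by a latency-oriented QoS
    # attribute before binding a pod.
    from dataclasses import dataclass

    @dataclass
    class Node:
        name: str
        free_cpu_m: int        # free CPU in millicores
        free_mem_mi: int       # free memory in MiB
        rtt_ms: float          # measured round-trip time to the data source

    @dataclass
    class Pod:
        name: str
        req_cpu_m: int
        req_mem_mi: int

    def schedule(pod, nodes):
        """Return the feasible node with the lowest latency, or None."""
        feasible = [n for n in nodes
                    if n.free_cpu_m >= pod.req_cpu_m and n.free_mem_mi >= pod.req_mem_mi]
        return min(feasible, key=lambda n: n.rtt_ms, default=None)

    nodes = [Node("cloud-1", 4000, 16384, 42.0),
             Node("edge-1", 1000, 2048, 3.5),
             Node("edge-2", 500, 1024, 4.1)]
    print(schedule(Pod("sensor-analytics", 750, 1536), nodes))  # -> edge-1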

5.
Sensors (Basel) ; 23(3)2023 Feb 02.
Article in English | MEDLINE | ID: mdl-36772695

ABSTRACT

Small and medium enterprises (SMEs/MEs) are significantly hampered by cyber-threats, as they have inherently limited skills and financial capacity to anticipate, prevent, and handle security incidents. The EU-funded PALANTIR project aims to facilitate the outsourcing of security supervision to external providers to relieve SMEs/MEs of this burden. However, good practices for operating SME/ME assets involve avoiding their exposure to external parties, which requires a tightly defined and timely enforced security policy when resources span the cloud continuum and need to interact. This paper proposes an innovative architecture that extends Network Function Virtualisation to externalise and automate threat mitigation and remediation in cloud, edge, and on-premises environments. Our contributions include an ontology for the decision-making process, a Fault-and-Breach-Management-based remediation policy model, a framework conducting remediation actions, and a set of deployment models adapted to the constraints of cloud, edge, and on-premises environments. Finally, we also detail an implementation prototype of the framework that serves as evaluation material.
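
The abstract does not spell out the remediation policy model; the following minimal Python sketch is only a hypothetical illustration of a threat-to-action policy lookup in that spirit, with all threat classes, environments, and action names invented.

    # Hypothetical sketch of a threat-to-remediation policy lookup: the most
    # specific matching entry (threat, environment) wins, with "*" as a
    # wildcard environment and a default escalation action.
    REMEDIATION_POLICY = {
        ("malware", "edge"):        ["isolate_function", "redeploy_clean_image"],
        ("malware", "cloud"):       ["snapshot_forensics", "redeploy_clean_image"],
        ("data_exfiltration", "*"): ["block_egress", "rotate_credentials", "notify_sme"],
    }

    def remediate(threat, environment):
        """Pick the most specific matching policy entry and return its action list."""
        for key in ((threat, environment), (threat, "*")):
            if key in REMEDIATION_POLICY:
                return REMEDIATION_POLICY[key]
        return ["escalate_to_analyst"]

    print(remediate("malware", "edge"))
    print(remediate("data_exfiltration", "on_premises"))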

6.
Sensors (Basel) ; 23(24)2023 Dec 06.
Article in English | MEDLINE | ID: mdl-38139501

ABSTRACT

The Internet of Things (IoT) has brought about significant transformations in multiple sectors, including healthcare and navigation systems, by offering essential functionalities crucial for their operations. Nevertheless, there is ongoing debate surrounding the unexplored possibilities of the IoT within the energy industry. The requirement to improve the performance of distributed energy systems necessitates transitioning from traditional mission-critical electric smart grid systems to digital twin-based IoT frameworks. Energy storage systems (ESSs) used within nano-grids have the potential to enhance energy utilization, fortify resilience, and promote sustainable practices by effectively storing surplus energy. The present study introduces a conceptual framework consisting of two fundamental modules: (1) power optimization of ESSs in peer-to-peer (P2P) energy trading, and (2) task orchestration in IoT-enabled environments using digital twin technology. The ESS power optimization aims to effectively manage surplus ESS energy by employing particle swarm optimization (PSO) techniques. This approach is designed to fulfill the energy needs of the ESS itself as well as the specific requirements of participating nano-grids. The primary objective of the digital twin-based IoT task orchestration system is to enhance P2P nano-grid energy trading. This is achieved by integrating virtual control mechanisms through orchestration technology that combines task generation, device virtualization, task mapping, task scheduling, and task allocation and deployment. The nano-grid energy trading system's architecture utilizes IoT sensors and Raspberry Pi-based edge technology to enable virtual operation. The proposed study is evaluated through the examination of a simulated dataset derived from nano-grid dwellings. This research analyzes the efficacy of the optimization approach in reducing energy trading costs and optimizing power utilization in ESSs. The coordination of IoT devices is crucial to improving the system's overall efficiency.
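
As an illustration of the particle swarm optimization (PSO) technique mentioned for the ESS module (not the paper's actual model), the following self-contained Python sketch searches for a surplus-power dispatch across three hypothetical peer nano-grids; prices, penalties, and PSO hyperparameters are invented.

    # Minimal PSO sketch: each particle is a candidate surplus-power dispatch
    # (kW) to three peer nano-grids; the cost function is purely illustrative.
    import random

    PRICES = [0.10, 0.14, 0.12]   # hypothetical trading price per kWh for each peer
    SURPLUS = 10.0                # surplus power available in the ESS (kW)

    def cost(x):
        # Reward revenue, penalize dispatching more than the available surplus.
        revenue = sum(p * xi for p, xi in zip(PRICES, x))
        over = max(0.0, sum(x) - SURPLUS)
        return 10.0 * over - revenue

    def pso(dim=3, particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
        pos = [[random.uniform(0, SURPLUS) for _ in range(dim)] for _ in range(particles)]
        vel = [[0.0] * dim for _ in range(particles)]
        pbest = [p[:] for p in pos]
        gbest = min(pbest, key=cost)
        for _ in range(iters):
            for i in range(particles):
                for d in range(dim):
                    r1, r2 = random.random(), random.random()
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * r1 * (pbest[i][d] - pos[i][d])
                                 + c2 * r2 * (gbest[d] - pos[i][d]))
                    pos[i][d] = min(SURPLUS, max(0.0, pos[i][d] + vel[i][d]))
                if cost(pos[i]) < cost(pbest[i]):
                    pbest[i] = pos[i][:]
            gbest = min(pbest, key=cost)
        return gbest

    print("dispatch plan (kW):", [round(x, 2) for x in pso()])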

7.
J Bus Res ; 164: 114025, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37215460

ABSTRACT

This study investigates the effects of supply chain resilience (SCRE) and robustness (SCRO) on COVID-19 super-disruption impacts and firms' financial performance, mobilizing resource orchestration theory (ROT) as the main theoretical framework. We adopt structural equation modeling to analyze data collected from 289 French companies. The findings reveal the significantly positive influence of resource orchestration on SCRE and SCRO and the role of the latter in mitigating the pandemic's disruption impacts. Notwithstanding, depending on whether the measures are objective or subjective, the effects of SCRE and SCRO on financial performance vary. Overall, this paper presents empirical evidence of the influence of both SCRE and SCRO on pandemic disruption impacts and financial performance. Furthermore, this research provides insights to guide practitioners and decision makers regarding resource orchestration and the deployment of SCRE and SCRO.

8.
J Bus Res ; 158: 113662, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36644446

ABSTRACT

This paper aims to identify the revised international marketing communication strategies adopted during the COVID-19 pandemic by utilizing the firm's resources and capabilities. We conducted in-depth interviews and a questionnaire survey with key stakeholders of retail organizations that changed their digital marketing strategies during COVID-19. Data were collected from 587 respondents from different parts of the world and examined through the lens of resource orchestration theory. The qualitative findings support a high degree of association between the firm's resources and capabilities and the leveraging processes underlying the revised international marketing strategies during the COVID-19 pandemic. Based on these findings, we developed a conceptual model with six variables: the leveraging process of the firm's capabilities; information technology-related resources; information technology-related capabilities; dynamic capabilities; environmental uncertainty; and the leveraging process of the firm's resources. However, environmental uncertainty and the leveraging of the firm's resources were not influential in forming digital marketing strategies during COVID-19. This study proposes a new process for international marketing managers in business organizations to restructure the resources within their organizations by creating new capabilities and leveraging them.

9.
Sensors (Basel) ; 22(23)2022 Nov 22.
Article in English | MEDLINE | ID: mdl-36501743

ABSTRACT

Dynamic service orchestration is becoming increasingly necessary as IoT and edge computing technologies continue to advance, owing to the flexibility and diversity of services. With the surge in the number of edge devices and the increase in data volume in IoT scenarios, there are higher requirements for the secure transmission of private information from each edge device and for the processing efficiency of service function chain (SFC) orchestration. This paper proposes a dynamic SFC orchestration security algorithm for EC-IoT scenarios based on the federated learning framework, combined with a block coordinate descent approach and the quadratic penalty algorithm, to achieve communication efficiency and data privacy protection. A deep reinforcement learning algorithm is used simultaneously to adapt the SFC orchestration method so as to dynamically track environmental changes and decrease end-to-end delay. The experimental results show that, compared with existing dynamic SFC orchestration algorithms, the proposed algorithm achieves better convergence and latency performance under privacy-protection conditions; the overall latency is reduced by about 33%, and the overall convergence speed is improved by about 9%. It thus not only secures the data privacy of edge computing nodes but also meets the requirements of dynamic SFC orchestration.
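
The algorithm itself is not reproduced in the abstract; as a hypothetical illustration of the quadratic-penalty idea in a federated setting, the following Python sketch lets each node minimize its local loss plus a penalty that keeps its weights close to the global model, with the coordinator averaging the results (scalar model and data values invented).

    # Hypothetical quadratic-penalty federated sketch: each edge node minimizes
    # local_loss(w) + (rho/2) * (w - w_global)^2 by gradient descent, and the
    # coordinator averages the updated weights each round.
    def local_update(w_global, data, rho=1.0, lr=0.05, steps=50):
        w = w_global
        for _ in range(steps):
            # Local loss: mean squared error of a scalar "model" w against data.
            grad_loss = sum(2 * (w - d) for d in data) / len(data)
            grad_penalty = rho * (w - w_global)     # keeps w close to the global model
            w -= lr * (grad_loss + grad_penalty)
        return w

    def federated_round(w_global, node_datasets, rho=1.0):
        locals_ = [local_update(w_global, data, rho) for data in node_datasets]
        return sum(locals_) / len(locals_)          # simple averaging at the coordinator

    w = 0.0
    node_datasets = [[1.0, 1.2, 0.9], [2.1, 1.9], [0.5, 0.7, 0.6, 0.4]]
    for r in range(10):
        w = federated_round(w, node_datasets)
    print("global model after 10 rounds:", round(w, 3))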


Subjects
Algorithms, Privacy, Communication, Records, Technology
10.
Sensors (Basel) ; 22(17)2022 Sep 01.
Article in English | MEDLINE | ID: mdl-36081079

ABSTRACT

Network slicing (NS) is one of the most prominent next-generation wireless cellular technology use cases, promising to unlock the core benefits of the 5G network architecture by allowing communication service providers (CSPs) and operators to construct scalable and customized logical networks. This, in turn, enables telcos to reach the full potential of their infrastructure by offering customers tailored networking solutions that meet their specific needs, which is critical in an era where no two businesses have the same requirements. This article presents a commercial overview of NS, as well as the need for a slicing automation and orchestration framework. Furthermore, it addresses current NS project objectives along with the complex functional execution of the NS code flow. A summary of activities in important standards development groups and industrial forums relevant to artificial intelligence (AI) and machine learning (ML) is also provided. Finally, we identify various open research problems and potential solutions to provide guidance for future work.


Subjects
Artificial Intelligence, Machine Learning, Automation, Communication
11.
Sensors (Basel) ; 22(19)2022 Sep 24.
Article in English | MEDLINE | ID: mdl-36236343

ABSTRACT

Federated Learning (FL) enables multiple clients to train a shared model collaboratively without sharing any personal data. However, selecting a model and adapting it quickly to meet user expectations in a large-scale FL application with heterogeneous devices is challenging. In this paper, we propose a model selection and adaptation system for Federated Learning (FedMSA), which includes a hardware-aware model selection algorithm that trades off model training efficiency against model performance based on FL developers' expectations. Moreover, since the expected model should be achieved by dynamic model adaptation, FedMSA supports fully automated building and deployment of FL tasks to different hardware at scale. Experiments on benchmark and real-world datasets demonstrate the effectiveness of FedMSA's model selection algorithm on real devices (e.g., Raspberry Pi and Jetson Nano).
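
As a hedged sketch of what a hardware-aware selection rule could look like (not FedMSA's actual algorithm), the following Python snippet discards candidate models that exceed a device's memory and then trades off accuracy against per-round training time; all model names, numbers, and the weighting parameter are invented.

    # Hypothetical hardware-aware model selection: drop candidates that do not
    # fit the device's memory, then score the rest by accuracy vs. training cost.
    CANDIDATES = [
        {"name": "cnn-small",  "acc": 0.86, "train_s": 12.0, "mem_mb": 180},
        {"name": "cnn-medium", "acc": 0.90, "train_s": 41.0, "mem_mb": 560},
        {"name": "cnn-large",  "acc": 0.92, "train_s": 97.0, "mem_mb": 1400},
    ]

    def select_model(device_mem_mb, alpha=0.9, max_train_s=100.0):
        """alpha weighs accuracy against (normalized) training time."""
        feasible = [m for m in CANDIDATES if m["mem_mb"] <= device_mem_mb]
        if not feasible:
            return None
        score = lambda m: alpha * m["acc"] - (1 - alpha) * (m["train_s"] / max_train_s)
        return max(feasible, key=score)

    print(select_model(device_mem_mb=512))    # constrained device -> cnn-small
    print(select_model(device_mem_mb=4096))   # larger device -> cnn-medium here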


Subjects
Algorithms, Learning, Acclimatization, Benchmarking, Humans
12.
Sensors (Basel) ; 22(5)2022 Feb 23.
Article in English | MEDLINE | ID: mdl-35270901

ABSTRACT

The fast growth in the number of connected devices with computing capabilities in recent years has enabled the emergence of a new computing layer at the Edge. Although these devices are resource-constrained compared with cloud servers, they offer lower latencies than those achievable by Cloud computing. The combination of the Cloud and Edge computing paradigms can provide a suitable infrastructure for complex applications' quality of service requirements that cannot easily be met with either of these paradigms alone. These requirements can be very different for each application, from achieving time sensitivity or assuring data privacy to storing and processing large amounts of data. Therefore, orchestrating these applications in the Cloud-Edge continuum raises new challenges that need to be solved in order to fully take advantage of this layered infrastructure. This paper proposes an architecture that enables the dynamic orchestration of applications in the Cloud-Edge continuum. It focuses on the application's quality of service by providing the scheduler with input that is commonly used by modern scheduling algorithms. The architecture uses a distributed scheduling approach that can be customized on a per-application basis, which ensures that it can scale properly even in setups with a high number of nodes and complex scheduling algorithms. This architecture has been implemented on top of Kubernetes and evaluated in order to assess its viability for enabling more complex scheduling algorithms that take into account the quality of service of applications.
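
As a hypothetical illustration of per-application scheduling input of the kind the architecture provides to the scheduler (not its actual implementation), the following Python sketch ranks the same node pool differently depending on each application's QoS weights; all metrics and weights are invented.

    # Hypothetical per-application scheduling input: each application supplies
    # its own weights over normalized QoS metrics, so the same node pool is
    # ranked differently for a latency-critical and a privacy-focused app.
    NODES = [
        {"name": "cloud-a", "latency": 0.9, "capacity": 0.1, "privacy": 0.4},
        {"name": "edge-a",  "latency": 0.1, "capacity": 0.7, "privacy": 0.2},
        {"name": "edge-b",  "latency": 0.2, "capacity": 0.8, "privacy": 0.1},
    ]   # all metrics normalized to [0, 1], lower is better

    def rank_nodes(weights):
        penalty = lambda n: sum(weights[k] * n[k] for k in weights)
        return sorted(NODES, key=penalty)

    latency_critical = {"latency": 0.7, "capacity": 0.2, "privacy": 0.1}
    privacy_focused  = {"latency": 0.1, "capacity": 0.2, "privacy": 0.7}
    print([n["name"] for n in rank_nodes(latency_critical)])
    print([n["name"] for n in rank_nodes(privacy_focused)])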


Subjects
Algorithms, Cloud Computing
13.
Sensors (Basel) ; 22(5)2022 Mar 07.
Article in English | MEDLINE | ID: mdl-35271207

ABSTRACT

Nowadays, various frameworks are emerging to support distributed tracing techniques over microservices-based distributed applications. The objective is to improve the observability and management of operational problems of distributed applications, considering bottlenecks in the form of high latencies in the interactions among the deployed microservices. However, such frameworks provide information that is disjoint from the management information that is usually collected by cloud computing orchestration platforms. There is a need to improve observability by combining such information to easily produce insights related to performance issues and to perform root cause analyses to tackle them. In this paper, we provide a modern observability approach and pilot implementation for tackling data fusion aspects in edge and cloud computing orchestration platforms. We consider the integration of signals made available by various open-source monitoring and observability frameworks, including metrics, logs, and distributed tracing mechanisms. The approach is validated in an experimental orchestration environment based on the deployment and stress testing of a proof-of-concept microservices-based application. The results help identify the main causes of latency in the various application parts and provide a better understanding of the behavior of the application under different stress conditions.
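
As a rough illustration of the data-fusion step (not the pilot implementation itself), the following Python sketch joins slow trace spans with CPU metrics from the same service and time window to suggest a likely cause of latency; all spans, metrics, and thresholds are invented.

    # Hypothetical fusion of tracing and monitoring signals: correlate slow
    # spans with CPU samples from the same service within a time window.
    SPANS = [   # invented distributed-tracing records (durations in ms)
        {"service": "cart",    "start": 100.0, "duration_ms": 35},
        {"service": "payment", "start": 140.0, "duration_ms": 420},
    ]
    METRICS = [  # invented per-service CPU samples (timestamp, utilisation)
        {"service": "payment", "ts": 150.0, "cpu": 0.97},
        {"service": "cart",    "ts": 110.0, "cpu": 0.35},
    ]

    def correlate(slow_threshold_ms=200, window=60.0):
        findings = []
        for span in SPANS:
            if span["duration_ms"] < slow_threshold_ms:
                continue
            for m in METRICS:
                if m["service"] == span["service"] and abs(m["ts"] - span["start"]) <= window:
                    findings.append((span["service"], span["duration_ms"], m["cpu"]))
        return findings

    for service, latency, cpu in correlate():
        print(f"{service}: {latency} ms span coincides with {cpu:.0%} CPU")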


Subjects
Cloud Computing
14.
Sensors (Basel) ; 22(8)2022 Apr 15.
Article in English | MEDLINE | ID: mdl-35459015

ABSTRACT

Network Slicing and Deep Reinforcement Learning (DRL) are vital enablers for achieving 5G and 6G networks. A 5G/6G network can comprise various network slices from unique or multiple tenants. Network providers need to perform intelligent and efficient resource management to offer slices that meet the quality of service and quality of experience requirements of 5G/6G use cases. Resource management is far from being a straightforward task. This task demands complex and dynamic mechanisms to control admission and allocate, schedule, and orchestrate resources. Intelligent and effective resource management needs to predict the services' demand coming from tenants (each tenant with multiple network slice requests) and achieve autonomous behavior of slices. This paper identifies the relevant phases for resource management in network slicing and analyzes approaches using reinforcement learning (RL) and DRL algorithms for realizing each phase autonomously. We analyze the approaches according to the optimization objective, the network focus (core, radio access, edge, and end-to-end network), the space of states, the space of actions, the algorithms, the structure of deep neural networks, the exploration-exploitation method, and the use cases (or vertical applications). We also provide research directions related to RL/DRL-based network slice resource management.
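
As a hypothetical illustration of one phase the survey covers, admission control, cast as tabular Q-learning (DRL approaches would replace the table with a deep neural network), the following Python sketch uses an invented toy environment and rewards.

    # Toy Q-learning sketch for slice admission control: the state is the number
    # of occupied slice slots, the action is accept/reject, and the reward
    # favours accepting only while free capacity remains (all values invented).
    import random

    CAPACITY = 4
    ACTIONS = [0, 1]                       # 0 = reject, 1 = accept
    Q = {(s, a): 0.0 for s in range(CAPACITY + 1) for a in ACTIONS}

    def step(state, action):
        if action == 1 and state < CAPACITY:
            return state + 1, 1.0          # revenue for an admitted slice
        if action == 1:
            return state, -2.0             # SLA penalty for over-admission
        return max(0, state - 1), 0.0      # rejected request; one slice departs

    alpha, gamma, eps = 0.1, 0.9, 0.2
    state = 0
    for _ in range(5000):
        action = random.choice(ACTIONS) if random.random() < eps else \
                 max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward = step(state, action)
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = nxt

    print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(CAPACITY + 1)})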


Subjects
Algorithms, Neural Networks, Computer, Learning, Research Design
15.
Sensors (Basel) ; 22(15)2022 Aug 07.
Article in English | MEDLINE | ID: mdl-35957450

ABSTRACT

Fog computing is an extension of cloud computing that provides computing services closer to user end-devices at the network edge. One of the challenging topics in fog networks is the placement of tasks on fog nodes to obtain the best performance and resource usage. The process of mapping tasks onto resource-constrained devices is known as the service or fog application placement problem (SPP, FAPP). Highly dynamic fog infrastructures with mobile user end-devices and constantly changing fog node resources (e.g., battery life, security level) require distributed/decentralized service placement (orchestration) algorithms to ensure better resilience, scalability, and optimal real-time performance. However, recently proposed service placement algorithms rarely support user end-device mobility, constantly changing resource availability of fog nodes, and the ability to recover from fog node failures at the same time. In this article, we propose a distributed agent-based orchestrator model capable of flexible service provisioning in a dynamic fog computing environment by considering constraints on the central processing unit (CPU), memory, battery level, and security level of fog nodes. Distributing the decision-making to multiple orchestrator fog nodes, instead of relying on the mapping of a single central entity, helps to spread the load and increase scalability and, most importantly, resilience. A prototype system based on the proposed orchestrator model was implemented and tested with real hardware. The results show that the proposed model is efficient in terms of response latency and computational overhead, which are minimal compared to the placement algorithm itself. The research confirms that the proposed orchestrator approach is suitable for various fog network applications when scalability, mobility, and fault tolerance must be guaranteed.
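
As a hedged illustration of the orchestrator's node-selection step (not the prototype's actual code), the following Python sketch filters fog nodes on CPU, memory, battery, and security-level constraints and then prefers the node with the most battery headroom; all values are invented.

    # Hypothetical constraint-based placement: filter fog nodes on CPU, memory,
    # battery, and security level, then pick the node with the most battery left.
    FOG_NODES = [
        {"name": "fog-1", "cpu": 2.0, "mem_mb": 1024, "battery": 0.80, "sec_level": 2},
        {"name": "fog-2", "cpu": 1.0, "mem_mb": 512,  "battery": 0.35, "sec_level": 3},
        {"name": "fog-3", "cpu": 4.0, "mem_mb": 2048, "battery": 0.15, "sec_level": 1},
    ]

    def place(service):
        feasible = [n for n in FOG_NODES
                    if n["cpu"] >= service["cpu"]
                    and n["mem_mb"] >= service["mem_mb"]
                    and n["battery"] >= service["min_battery"]
                    and n["sec_level"] >= service["min_sec_level"]]
        return max(feasible, key=lambda n: n["battery"], default=None)

    service = {"cpu": 1.5, "mem_mb": 768, "min_battery": 0.3, "min_sec_level": 2}
    print(place(service))   # -> fog-1 under these invented values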


Subjects
Algorithms, Cloud Computing, Delivery of Health Care
16.
Sensors (Basel) ; 22(2)2022 Jan 08.
Article in English | MEDLINE | ID: mdl-35062426

ABSTRACT

Fog computing emerged as a concept that responds to the requirements of upcoming solutions requiring optimizations primarily in the context of the following QoS parameters: latency, throughput, reliability, security, and network traffic reduction. The rapid development of local computing devices and container-based virtualization has enabled the application of fog computing within the IoT environment. However, it is necessary to utilize algorithm-based service scheduling that considers the targeted QoS parameters to optimize service performance and reach the potential of the fog computing concept. In this paper, we first describe our categorization of IoT services, which affects the execution of our scheduling algorithm. Secondly, we propose a scheduling algorithm that considers the context of processing devices, the user context, and the service context to determine the optimal schedule for the execution of service components across the distributed fog-to-cloud environment. The conducted simulations confirmed the performance of the proposed algorithm and showcased its major contribution: dynamic scheduling, i.e., responsiveness to volatile QoS parameters due to changeable network conditions. Thus, we successfully demonstrated that our dynamic scheduling algorithm enhances the efficiency of service performance based on the targeted QoS criteria of the specific service scenario.


Subjects
Internet of Things, Algorithms, Cloud Computing, Reproducibility of Results
17.
Int J Prod Econ ; 245: 108396, 2022 Mar.
Article in English | MEDLINE | ID: mdl-34931109

ABSTRACT

Although many firms are actively deploying various digital technology (DT) assets across their supply chains to mitigate the negative impact of the COVID-19 pandemic on operations, whether these DT assets are truly helpful remains unclear. To disentangle this puzzle, we investigate whether firms with higher levels of DT asset deployment achieve better supply chain performance in the COVID-19 crisis than firms with lower levels. From an asset orchestration perspective, we focus on two dimensions of DT asset deployment: breadth and depth, which reflect the scope and scale of DT assets, respectively. The empirical results from 175 Chinese firms that have deployed DT assets to varying degrees reveal that both the breadth and the depth of DT asset deployment show positive relationships with supply chain visibility. In contrast, the depth, but not the breadth, of DT asset deployment shows a positive relationship with supply chain agility. Most importantly, high levels of supply chain visibility and supply chain agility were prerequisites for excellent supply chain performance in the COVID-19 crisis. We contribute to the digital supply chain management literature by uncovering the mechanism through which DT asset deployment affects supply chain performance from an asset orchestration perspective. Our study also assists firms in improving their digital transformation strategies to combat the COVID-19 pandemic.

18.
Sensors (Basel) ; 21(24)2021 Dec 08.
Article in English | MEDLINE | ID: mdl-34960302

ABSTRACT

The emergence of the edge computing paradigm has shifted data processing from centralised infrastructures to heterogeneous and geographically distributed infrastructures. Therefore, data processing solutions must consider data locality to reduce the performance penalties from data transfers among remote data centres. Existing big data processing solutions provide limited support for handling data locality and are inefficient in processing small and frequent events specific to the edge environments. This article proposes a novel architecture and a proof-of-concept implementation for software container-centric big data workflow orchestration that puts data locality at the forefront. The proposed solution considers the available data locality information, leverages long-lived containers to execute workflow steps, and handles the interaction with different data sources through containers. We compare the proposed solution with Argo workflows and demonstrate a significant performance improvement in the execution speed for processing the same data units. Finally, we carry out experiments with the proposed solution under different configurations and analyze individual aspects affecting the performance of the overall solution.
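
As a hypothetical illustration of data-locality-aware placement with long-lived containers (not the proof-of-concept implementation), the following Python sketch routes each workflow step to a worker container in the site that already holds its input dataset; all site, worker, and dataset names are invented.

    # Hypothetical data-locality-aware step placement: route each workflow step
    # to a long-lived worker container co-located with its input dataset,
    # falling back to any available worker if no local one exists.
    WORKERS = {          # long-lived containers, keyed by the site they run in
        "site-eu": ["worker-eu-1", "worker-eu-2"],
        "site-us": ["worker-us-1"],
    }
    DATASET_LOCATION = {"sensor-batch-42": "site-eu", "model-params": "site-us"}

    def assign(step):
        site = DATASET_LOCATION.get(step["input"])
        if site and WORKERS.get(site):
            return site, WORKERS[site][0]          # reuse a local container
        any_site = next(iter(WORKERS))
        return any_site, WORKERS[any_site][0]      # fallback: remote transfer needed

    workflow = [{"name": "clean", "input": "sensor-batch-42"},
                {"name": "train", "input": "model-params"}]
    for step in workflow:
        print(step["name"], "->", assign(step))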


Subjects
Big Data, Computational Biology, Information Storage and Retrieval, Software, Workflow
19.
Sensors (Basel) ; 21(23)2021 Dec 03.
Article in English | MEDLINE | ID: mdl-34884098

ABSTRACT

Network slicing is a powerful paradigm for network operators to support use cases with widely diverse requirements atop a common infrastructure. As 5G standards are completed, and commercial solutions mature, operators need to start thinking about how to integrate network slicing capabilities in their assets, so that customer-facing solutions can be made available in their portfolio. This integration is, however, not an easy task, due to the heterogeneity of assets that typically exist in carrier networks. In this regard, 5G commercial networks may consist of a number of domains, each with a different technological pace, and built out of products from multiple vendors, including legacy network devices and functions. These multi-technology, multi-vendor and brownfield features constitute a challenge for the operator, which is required to deploy and operate slices across all these domains in order to satisfy the end-to-end nature of the services hosted by these slices. In this context, the only realistic option for operators is to introduce slicing capabilities progressively, following a phased approach in their roll-out. The purpose of this paper is precisely to help design this kind of plan, by means of a technology radar. The radar identifies a set of solutions enabling network slicing on the individual domains, and classifies these solutions into four rings, each corresponding to a different timeline: (i) as-is ring, covering today's slicing solutions; (ii) deploy ring, corresponding to solutions available in the short term; (iii) test ring, considering medium-term solutions; and (iv) explore ring, with solutions expected in the long run. This classification is based on the technical availability of the solutions, together with the foreseen market demands. The value of this radar lies in its ability to provide a complete view of the slicing landscape with one single snapshot, by linking solutions to information that operators may use for decision making in their individual go-to-market strategies.


Subjects
Radar, Technology, Social Networking
20.
Sensors (Basel) ; 21(4)2021 Feb 13.
Article in English | MEDLINE | ID: mdl-33668672

ABSTRACT

5G communications have become an enabler for the creation of new and more complex networking scenarios, bringing together different vertical ecosystems. Such behavior has been fostered by the network function virtualization (NFV) concept, whose orchestration and virtualization capabilities make it possible to supply network resources dynamically, according to the needs of each scenario. Nevertheless, the integration and performance of heterogeneous network environments, each one supported by a different provider and with specific characteristics and requirements, in a single NFV framework is not straightforward. In this work we propose an NFV-based framework capable of supporting the flexible, cost-effective deployment of vertical services through the integration of two distinct mobile environments and their networks: small-sized unmanned aerial vehicles (SUAVs), supporting a flying ad hoc network (FANET), and vehicles, forming a vehicular ad hoc network (VANET). In this context, a use case involving the public safety vertical is used as an illustrative example to showcase the potential of this framework. This work also includes the technical implementation details of the proposed framework, allowing us to analyse and discuss the delays in the network service deployment process. The results show that the deployment times can be significantly reduced through a distributed VNF configuration function based on the publish-subscribe model.
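
As a hedged illustration of the publish-subscribe configuration idea (not the framework's actual VNF configuration function), the following Python sketch uses a minimal in-memory broker that fans a configuration message out to all subscribed VNF agents; class, topic, and VNF names are invented.

    # Hypothetical publish-subscribe configuration sketch: a broker fans a
    # configuration message out to every VNF agent subscribed to a topic, so
    # newly instantiated VNFs can be configured in parallel rather than one by one.
    class ConfigBroker:
        def __init__(self):
            self.subscribers = {}          # topic -> list of callbacks

        def subscribe(self, topic, callback):
            self.subscribers.setdefault(topic, []).append(callback)

        def publish(self, topic, config):
            for callback in self.subscribers.get(topic, []):
                callback(config)

    def make_vnf_agent(name):
        def apply_config(config):
            print(f"{name}: applying {config}")
        return apply_config

    broker = ConfigBroker()
    for vnf in ("suav-router-1", "suav-router-2", "vanet-gw-1"):
        broker.subscribe("fanet-vanet/config", make_vnf_agent(vnf))

    broker.publish("fanet-vanet/config", {"routing": "olsr", "mtu": 1400})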
