Results 1 - 20 of 60
1.
Entropy (Basel) ; 26(8)2024 Jul 30.
Article in English | MEDLINE | ID: mdl-39202125

ABSTRACT

In this paper, a network of buffered wireless devices transmitting deadline-constrained data packets over a slotted-ALOHA random-access channel is studied. Although communication protocols facilitating retransmissions increase reliability, a packet awaiting transmission in the queue experiences delays. Thus, packets with time constraints might be dropped before being successfully transmitted, while at the same time causing the queue size of the buffer to increase. To understand the trade-off between reliability and the delays that lead to packet drops under deadline-constrained bursty traffic with retransmissions, the main focus is to reveal the trade-off between the number of retransmissions and the packet deadline as a function of the arrival rate. Towards this end, the system is analyzed by means of discrete-time Markov chains. Two scenarios are studied: (i) the collision channel model (in which a receiver can decode only when a single packet is transmitted), and (ii) the case in which receivers have multi-packet reception capabilities. A performance evaluation for a user with different transmit probabilities and numbers of retransmissions is conducted. We are able to determine numerically the optimal transmit probability and number of retransmissions, given the packet arrival rate and the packet deadline. Furthermore, we highlight the impact of the transmit probability and the number of retransmissions on the average drop rate and throughput.
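The collision-channel trade-off described above can be sketched in a few lines, assuming independent transmission attempts rather than the paper's full Markov-chain analysis (the function names and this simplification are ours):

```python
def system_throughput(n, p):
    """Probability that exactly one of n users transmits in a slot
    (only then can the collision-channel receiver decode)."""
    return n * p * (1 - p) ** (n - 1)

def tagged_success(n, p):
    """Per-slot success probability for one tagged user: it transmits
    while the other n - 1 users stay silent."""
    return p * (1 - p) ** (n - 1)

def delivery_prob(n, p, deadline):
    """A deadline-constrained packet is delivered iff at least one of
    its attempts within `deadline` slots succeeds; attempts are
    treated as independent, a simplification of the discrete-time
    Markov-chain model."""
    s = tagged_success(n, p)
    return 1.0 - (1.0 - s) ** deadline
```

Sweeping p for fixed n reproduces the familiar throughput optimum near p = 1/n, mirroring the numerical optimization of the transmit probability in the abstract.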

2.
Heliyon ; 10(12): e32660, 2024 Jun 30.
Article in English | MEDLINE | ID: mdl-38994112

ABSTRACT

The article explores the potential of 5G-enabled Unmanned Aerial Vehicles (UAVs) in establishing opportunistic networks to improve network resource management, reduce energy use, and boost operational efficiency. The proposed framework utilizes 5G-enabled drones and edge command-and-control software to provide energy-efficient network topologies, with UAVs performing edge computing for efficient data collection and processing. The design enhances network performance using modern Artificial Intelligence (AI) algorithms to improve UAV networking capabilities while conserving energy. An empirical investigation shows a significant improvement in network performance measures when using 5G technology compared to older 2.4 GHz systems. The communication failure rate decreased by 50%, from 12% to 6%. The round-trip time was lowered by 58.3%, from 120 ms to 50 ms. The payload efficiency metric improved by 13.3%, with the corresponding figure dropping from 15% to 13%. The data transmission rate increased from 1 Gbps to 5 Gbps, a 400% boost. These numerical findings highlight the significant impact that 5G technology may have on UAV operations. Testing on a 5G-enabled UAV confirms the effectiveness of the technique in several domains, including precision agriculture, disaster response, and environmental monitoring. The solution substantially improves UAV network performance by reducing energy consumption and using edge command-and-control software. The results emphasize the versatile networking capacities of 5G-enabled drones, which open new opportunities for UAV applications.

3.
Sci Rep ; 14(1): 10181, 2024 May 03.
Article in English | MEDLINE | ID: mdl-38702395

ABSTRACT

Image recognition is a pervasive task in many information-processing environments. We present a solution to a difficult pattern recognition problem that lies at the heart of experimental particle physics. Future experiments with very high-intensity beams will produce a spray of thousands of particles in each beam-target or beam-beam collision. Recognizing the trajectories of these particles as they traverse layers of electronic sensors is a massive image recognition task that has never been accomplished in real time. We present a real-time processing solution that is implemented in a commercial field-programmable gate array using high-level synthesis. It is an unsupervised learning algorithm that uses techniques of graph computing. A prime application is the low-latency analysis of dark-matter signatures involving metastable charged particles that manifest as disappearing tracks.

4.
Sensors (Basel) ; 24(9)2024 Apr 28.
Article in English | MEDLINE | ID: mdl-38732918

ABSTRACT

In this paper, we consider a low-latency Mobile Edge Computing (MEC) network in which multiple User Equipment (UE) devices wirelessly report to a decision-making edge server, with transmissions using Finite Blocklength (FBL) codes to achieve low latency. We introduce the metric of Age upon Decision (AuD), which captures the timeliness of the information available at the moments when decisions are made. For the case of dynamic task generation and random fading channels, we provide a task AuD minimization design that jointly selects UEs and allocates blocklength. In particular, to solve the task AuD minimization problem, we transform it into a Markov Decision Process and propose an Error-Probability-Controlled Action-Masked Proximal Policy Optimization (EMPPO) algorithm. Via simulation, we show that the proposed design achieves a lower AuD than baseline methods across various network conditions, especially in scenarios with significant channel Signal-to-Noise Ratio (SNR) differences and low average SNR, which demonstrates the robustness of EMPPO and its potential for real-time applications.
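The action-masking step at the heart of masked-PPO variants such as EMPPO can be illustrated compactly: infeasible actions (e.g. UE/blocklength pairs whose predicted error probability violates the constraint) receive zero sampling probability. The function name and mask semantics below are our own illustration, not the paper's implementation:

```python
import numpy as np

def masked_policy(logits, feasible):
    """Drive the logits of infeasible actions to -inf before the
    softmax, so the policy can never sample them."""
    masked = np.where(feasible, logits, -np.inf)
    z = masked - masked.max()          # numerical stability
    p = np.exp(z)                      # exp(-inf) = 0 for masked actions
    return p / p.sum()
```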

5.
Magn Reson Med ; 92(3): 1162-1176, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38576131

ABSTRACT

PURPOSE: To develop a true real-time implementation of MR signature matching (MRSIGMA) for free-breathing 3D MRI with sub-200 ms latency on the Elekta Unity 1.5T MR-Linac. METHODS: MRSIGMA was implemented on an external computer with a network connection to the MR-Linac. Stack-of-stars acquisition with partial kz sampling was used to accelerate data acquisition, and ReconSocket was employed for simultaneous data transmission. The Movienet network computed the 4D MRI motion dictionary, and correlation analysis was used for signature matching. A programmable 4D MRI phantom was utilized to evaluate MRSIGMA against a ground-truth translational motion reference. In vivo validation was performed on patients with pancreatic cancer: 15 patients were employed to train Movienet and 7 patients to test the real-time implementation of MRSIGMA. Dice coefficients between real-time MRSIGMA and a retrospectively computed 4D reference were used to evaluate motion tracking performance. RESULTS: The motion dictionary was computed in under 5 s. Signature acquisition and matching presented 173 ms latency on the phantom and 193 ms on patients. MRSIGMA presented a mean error of 1.3-1.6 mm across all phantom experiments, below the 2 mm acquisition resolution along the motion direction. The Dice coefficient over time between MRSIGMA and reference contours was 0.88 ± 0.02 (GTV), 0.87 ± 0.02 (duodenum-stomach), and 0.78 ± 0.02 (small bowel), demonstrating high motion tracking performance for both tumor and organs at risk. CONCLUSION: The real-time implementation of MRSIGMA enabled true real-time free-breathing 3D MRI with sub-200 ms imaging latency on a clinical MR-Linac system, which can be used for treatment monitoring, adaptive radiotherapy, and dose accumulation mapping in tumors affected by respiratory motion.
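Signature matching by correlation analysis, as described above, amounts to picking the dictionary entry most correlated with the acquired signature. A toy sketch (the 1-D signals and function name are illustrative assumptions; actual MRSIGMA signatures are MRI acquisition data):

```python
import numpy as np

def match_signature(signature, dictionary):
    """Return the index of the motion-state entry whose stored
    signature has the highest normalized correlation with the
    newly acquired signature."""
    s = (signature - signature.mean()) / signature.std()
    scores = [float(np.dot(s, (e - e.mean()) / e.std()) / s.size)
              for e in dictionary]
    return int(np.argmax(scores))
```

Because only one correlation per dictionary entry is needed per acquired signature, the matching step stays cheap enough for the sub-200 ms budget reported in the abstract.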


Subject(s)
Algorithms; Imaging, Three-Dimensional; Magnetic Resonance Imaging; Pancreatic Neoplasms; Phantoms, Imaging; Respiration; Humans; Magnetic Resonance Imaging/methods; Pancreatic Neoplasms/diagnostic imaging; Motion; Image Processing, Computer-Assisted/methods; Retrospective Studies; Image Interpretation, Computer-Assisted/methods
6.
Sensors (Basel) ; 24(7)2024 Mar 26.
Article in English | MEDLINE | ID: mdl-38610325

ABSTRACT

The timely delivery of critical messages in real-time environments is an increasing requirement for industrial Internet of Things (IIoT) networks. Similar to wired time-sensitive networking (TSN) techniques, which bifurcate traffic flows based on priority, the proposed wireless method aims to ensure that critical traffic arrives rapidly across multiple hops to enable numerous IIoT use cases. IIoT architectures are migrating toward wirelessly connected edges, creating a desire to extend TSN-like functionality to a wireless format. Existing protocols face inherent challenges in achieving this prioritized low-latency communication, including rigidly scheduled time-division transmissions, the scalability and jitter limitations of carrier-sense multiple access (CSMA) protocols, and encryption-induced latency. This paper presents a hardware-validated low-latency technique built upon receiver-assigned code division multiple access (RA-CDMA) to implement a secure wireless TSN-like extension suitable for the IIoT. Results from our hardware prototype, constructed on the Intel FPGA Arria 10 platform, show that sub-millisecond single-hop latencies can be achieved for each of the available message types, ranging from 12 bits up to 224 bits of payload. By achieving one-way transmission in under 1 ms, a reliable wireless TSN extension with timeliness comparable to 802.1Q and/or 5G is achievable and proven in concept through our hardware prototype.

7.
Entropy (Basel) ; 26(2)2024 Jan 30.
Article in English | MEDLINE | ID: mdl-38392377

ABSTRACT

Remote control over communication networks with bandwidth-constrained channels has attracted considerable recent attention because it holds the promise of enabling a large number of real-time applications, such as autonomous driving, smart grids, and the industrial internet of things (IIoT). However, due to the limited bandwidth, sub-packets or even individual bits have to be transmitted successively, incurring non-negligible latency and inducing serious performance loss in remote control. To overcome this, we introduce an incremental coding method in which the actuator acts in real time based on a partially received packet instead of waiting until the entire packet is decoded. On this basis, we apply incremental coding to a linear control system to obtain a remote-control scheme, and present both its stability conditions and its average linear-quadratic-Gaussian (LQG) cost. We then further investigate a multi-user remote-control method, with a particular focus on applications in the demand response of smart grids over bandwidth-constrained communication networks. The utility loss due to the bandwidth constraint and communication latency is minimized by jointly optimizing the source coding and real-time demand response. The numerical results show that the incremental-coding-aided remote control performs well in both single-user and multi-user scenarios and significantly outperforms the conventional zero-order-hold control scheme under the LQG metric.
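The incremental-coding idea, acting on a partially received packet rather than waiting for full decoding, can be illustrated with a plain binary expansion of a bounded control value (a toy encoding of ours, not the paper's actual source-coding design):

```python
def encode_bits(x, n_bits):
    """Binary expansion of x in [0, 1): most significant bits are
    transmitted first."""
    bits = []
    for _ in range(n_bits):
        x *= 2
        b = int(x)
        bits.append(b)
        x -= b
    return bits

def decode_prefix(bits, k):
    """Reconstruct from only the first k received bits, taking the
    midpoint of the remaining uncertainty interval; the actuator can
    act on this coarse estimate immediately and refine it as more
    bits arrive."""
    value = sum(b / 2.0 ** (i + 1) for i, b in enumerate(bits[:k]))
    return value + 1.0 / 2.0 ** (k + 1)
```

Each additional received bit halves the worst-case reconstruction error, which is exactly why acting on a prefix trades a bounded control error for lower latency.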

8.
Adv Sci (Weinh) ; 11(2): e2304355, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37939304

ABSTRACT

Despite increasing interest in developing ultrasensitive widefield diamond magnetometry for various applications, achieving high temporal resolution and sensitivity simultaneously remains a key challenge. This is largely due to the transfer and processing of massive amounts of data from the frame-based sensor used to capture the widefield fluorescence intensity of spin defects in diamond. In this study, a neuromorphic vision sensor is adopted to encode changes of fluorescence intensity into spikes in optically detected magnetic resonance (ODMR) measurements, closely resembling the operation of the human visual system. This leads to highly compressed data volume and reduced latency, as well as a vast dynamic range, high temporal resolution, and an exceptional signal-to-background ratio. After a thorough theoretical evaluation, an experiment with an off-the-shelf event camera demonstrated a 13× improvement in temporal resolution, with precision in detecting ODMR resonance frequencies comparable to the state-of-the-art highly specialized frame-based approach. The technology is successfully deployed to monitor dynamically modulated laser heating of gold nanoparticles coated on a diamond surface, a task that is recognizably difficult with existing approaches. This development provides new insights for high-precision and low-latency widefield quantum sensing, with possibilities for integration with emerging memory devices to realize more intelligent quantum sensors.

9.
Trends Ecol Evol ; 39(2): 128-130, 2024 02.
Article in English | MEDLINE | ID: mdl-38142163

ABSTRACT

Modern sensor technologies increasingly enrich studies in wildlife behavior and ecology. However, constraints on weight, connectivity, energy and memory availability limit their implementation. With the advent of edge computing, there is increasing potential to mitigate these constraints, and drive major advancements in wildlife studies.


Subject(s)
Animals, Wild; Cloud Computing; Animals; Ecology
10.
J Neural Eng ; 20(5)2023 09 18.
Article in English | MEDLINE | ID: mdl-37683653

ABSTRACT

Objective. Neurofeedback and brain-computer interfacing technology open the exciting opportunity of establishing interactive closed-loop real-time communication with the human brain. This requires interpreting the brain's rhythmic activity and generating timely feedback to the brain. A lower delay between neuronal events and the appropriate feedback increases the efficacy of such interaction. Novel, more efficient approaches capable of tracking a brain rhythm's phase and envelope are needed for scenarios that entail instantaneous interaction with brain circuits. Approach. Isolating narrow-band signals incurs fundamental delays. To some extent these can be compensated using forecasting models. Given the high quality of modern time-series forecasting neural networks, we explored their utility for low-latency extraction of brain rhythm parameters. We tested five neural networks with conceptually distinct architectures in forecasting synthetic EEG rhythms. The strongest architecture was then trained to simultaneously filter and forecast EEG data. We compared it against state-of-the-art techniques using synthetic and real data from 25 subjects. Main results. The temporal convolutional network (TCN) remained the strongest forecasting model, achieving in the majority of testing scenarios >90% envelope correlation with <10 ms effective delay and <20° circular standard deviation of phase estimates. It also remained sufficiently stable under noise-level perturbations. Trained to filter and predict, the TCN outperformed the cFIR and the Kalman-filter-based state-space estimation technique and remained on par with the larger Conv-TasNet architecture. Significance. Here we have for the first time demonstrated the utility of the neural network approach for low-latency narrow-band filtering of brain activity signals. Our proposed approach, coupled with an efficient implementation, enhances the effectiveness of brain-state-dependent paradigms across various applications.
Moreover, our framework for forecasting EEG signals holds promise for investigating the predictability of brain activity, providing valuable insights into the fundamental questions surrounding the functional organization and hierarchical information processing properties of the brain.
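The core point, that forecasting can offset the delay incurred by narrow-band processing, can be seen with a toy linear predictor: for a pure sinusoid of known frequency, x[n+1] = 2*cos(omega)*x[n] - x[n-1] holds exactly. The paper's TCN learns far richer dynamics; this sketch of ours only illustrates the delay-compensation principle:

```python
import numpy as np

def ar2_forecast(x, omega, steps):
    """Forecast a sinusoidal rhythm `steps` samples ahead using the
    exact two-tap recursion x[n+1] = 2*cos(omega)*x[n] - x[n-1]."""
    a = 2.0 * np.cos(omega)
    prev, cur = x[-2], x[-1]
    for _ in range(steps):
        prev, cur = cur, a * cur - prev
    return cur
```

Predicting a few samples ahead like this is how a forecaster can cancel the effective group delay of a narrow-band filter in a closed-loop setting.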


Subject(s)
Brain-Computer Interfaces; Neurofeedback; Humans; Brain; Cognition; Neural Networks, Computer
11.
Sensors (Basel) ; 23(18)2023 Sep 12.
Article in English | MEDLINE | ID: mdl-37765899

ABSTRACT

The emergence of Industry 4.0 has revolutionized the industrial sector, enabling the development of compact, precise, and interconnected assets. This transformation has not only generated vast amounts of data but also facilitated the migration of learning and optimization processes to edge devices. Consequently, modern industries can effectively leverage this paradigm through distributed learning to define product quality and implement predictive maintenance (PM) strategies. While computing speeds continue to advance rapidly, communication latency has emerged as a bottleneck for fast edge learning, particularly in time-sensitive applications such as PM. To address this issue, we explore Federated Learning (FL), a privacy-preserving framework. FL entails updating a global AI model on a parameter server (PS) through aggregation of locally trained models from edge devices. We propose an innovative approach: analog over-the-air aggregation of updates transmitted concurrently over wireless channels. This leverages the waveform-superposition property of multi-access channels, significantly reducing communication latency compared to conventional methods. However, it is vulnerable to performance degradation due to channel properties such as noise and fading. In this study, we introduce a method to mitigate the impact of channel noise in FL over-the-air communication and computation (FLOACC). We integrate a novel tracking-based stochastic approximation scheme into a standard federated stochastic variance reduced gradient (FSVRG) algorithm. This effectively averages out the influence of channel noise, ensuring robust FLOACC performance without increasing the transmission power gain. Numerical results confirm the approach's superior communication efficiency and scalability in various FL scenarios, especially when dealing with noisy channels. Simulation experiments also highlight significant enhancements in prediction accuracy and loss-function reduction for analog aggregation in over-the-air FL scenarios.
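The waveform-superposition idea is compact enough to sketch: all devices transmit simultaneously, the channel sums the signals, and the server recovers a noisy average in one shot instead of K sequential uploads. A minimal model of ours, ignoring fading and power control:

```python
import numpy as np

def ota_aggregate(updates, noise_std, rng):
    """Analog over-the-air aggregation: the multi-access channel adds
    the concurrently transmitted updates; the server scales the
    received sum (plus channel noise) by the device count."""
    received = np.sum(updates, axis=0) + rng.normal(
        0.0, noise_std, size=updates[0].shape)
    return received / len(updates)
```

With nonzero `noise_std`, the returned average is perturbed by channel noise scaled down by the device count, which is the error source the paper's tracking-based scheme is designed to average out.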

12.
Entropy (Basel) ; 25(9)2023 Sep 12.
Article in English | MEDLINE | ID: mdl-37761624

ABSTRACT

This paper develops and optimizes a non-orthogonal and noncoherent multi-user massive single-input multiple-output (SIMO) framework, with the objective of enabling scalable ultra-reliable low-latency communications (sURLLC) in Beyond-5G (B5G)/6G wireless communication systems. In this framework, the huge diversity gain associated with the large-scale antenna array in the massive SIMO system is leveraged to ensure ultra-high reliability. To reduce the overhead and latency induced by the channel estimation process, we advocate for the noncoherent communication technique, which does not need the knowledge of instantaneous channel state information (CSI) but only relies on large-scale fading coefficients for message decoding. To boost the scalability of noncoherent massive SIMO systems, we enable the non-orthogonal channel access of multiple users by devising a new differential modulation scheme to ensure that each transmitted signal matrix can be uniquely determined in the noise-free case and be reliably estimated in noisy cases when the antenna array size is scaled up. The key idea is to make the transmitted signals from multiple geographically separated users be superimposed properly over the air, such that when the sum signal is correctly detected, the signal sent by each individual user can be uniquely determined. To further enhance the average error performance when the array antenna number is large, we propose a max-min Kullback-Leibler (KL) divergence-based design by jointly optimizing the transmitted powers of all users and the sub-constellation assignments among them. The simulation results show that the proposed design significantly outperforms the existing max-min Euclidean distance-based counterpart in terms of error performance. Moreover, our proposed approach also has a better error performance compared to the conventional coherent zero-forcing (ZF) receiver with orthogonal channel training, particularly for cell-edge users.

13.
Article in English | MEDLINE | ID: mdl-37746522

ABSTRACT

Processing latency is a critical issue for active noise control (ANC) due to the causality constraint of ANC systems. This paper addresses low-latency ANC in the context of deep learning (i.e., deep ANC). A time-domain method using an attentive recurrent network (ARN) is employed to perform deep ANC with smaller frame sizes, thus reducing the algorithmic latency of deep ANC. In addition, we introduce delay-compensated training, which performs ANC using noise predicted several milliseconds ahead. Moreover, a revised overlap-add method is utilized during signal resynthesis to avoid the latency introduced by overlaps between neighboring time frames. Experimental results show the effectiveness of the proposed strategies for achieving low-latency deep ANC. Combining the proposed strategies can yield zero, or even negative, algorithmic latency without much effect on ANC performance, thus alleviating the causality constraint in ANC design.

14.
Front Neurosci ; 17: 1224457, 2023.
Article in English | MEDLINE | ID: mdl-37638316

ABSTRACT

In recent years, Deep Convolutional Neural Networks (DCNNs) have surpassed the performance of classical algorithms for image restoration tasks. However, most of these methods are not designed for computational efficiency. In this work, we investigate Spiking Neural Networks (SNNs) for the specific and largely unexplored case of image denoising, with the goal of reaching the performance of conventional DCNNs while reducing the computational cost. This task is challenging for two reasons. First, as denoising is a regression task, the network has to predict a continuous value (i.e., the noise amplitude) for each pixel of the image, with high precision. Moreover, state-of-the-art results have been obtained with deep networks that are notably difficult to train in the spiking domain. To overcome these issues, we propose a formal analysis of the information conversion processing carried out by Integrate-and-Fire (IF) spiking neurons, and we formalize the trade-off between conversion error and activation sparsity in SNNs. We then propose, for the first time, an image denoising solution based on SNNs. The networks are trained directly in the spike domain using surrogate gradient learning and backpropagation through time. Experimental results show that the proposed SNN provides a level of performance close to that of state-of-the-art CNN-based solutions. Specifically, our SNN achieves 30.18 dB of signal-to-noise ratio on the Set12 dataset, only 0.25 dB below the performance of the equivalent DCNN. Moreover, we show that this performance can be achieved with low latency, i.e., using few timesteps, and with a significant level of sparsity. Finally, we analyze the energy consumption for different network latencies and network sizes. We show that the energy consumption of SNNs increases with longer latencies, making them more energy efficient than CNNs only for very small inference latencies. However, we also show that by increasing the network size, SNNs can provide competitive denoising performance while reducing energy consumption by 20%.
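The conversion trade-off analyzed above, rate-coding a continuous value with Integrate-and-Fire neurons, is easy to reproduce: the spike rate approximates the input, and the conversion error shrinks as more timesteps are allowed. This is a toy neuron of ours, not the paper's trained network:

```python
def if_rate_code(value, timesteps, threshold=1.0):
    """Integrate a constant input each timestep; emit a spike and
    reset-by-subtraction on threshold crossing. The returned spike
    rate approximates `value` with error on the order of
    1/timesteps."""
    v, spikes = 0.0, 0
    for _ in range(timesteps):
        v += value
        if v >= threshold:
            spikes += 1
            v -= threshold
    return spikes / timesteps
```

The error/timesteps relationship shown here is exactly the latency trade-off the abstract describes: fewer timesteps mean lower inference latency but coarser value representation.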

15.
J Bus Ethics ; 186(2): 369-383, 2023.
Article in English | MEDLINE | ID: mdl-37533566

ABSTRACT

This essay examines three potential arguments against high-frequency trading and offers a qualified critique of the practice. In concrete terms, it examines a variant of high-frequency trading that is all about speed, namely low-latency trading, in light of moral issues surrounding arbitrage, information asymmetries, and systemic risk. The essay focuses on low-latency trading and the role of speed because it also aims to show that the commonly made assumption that speed in financial markets is morally neutral is wrong. For instance, speed is a necessary condition for low-latency trading's potential to cause harm in "flash crashes." On the other hand, speed also plays a crucial role in a Lockean defense, developed in this essay, against the claim that low-latency trading is wasteful. Finally, the essay discusses the implications of these findings for related high-frequency trading techniques such as futures arbitrage and latency arbitrage, as well as for an argument as to why quote stuffing is wrong. Overall, the qualifications offered in this essay act as a counterbalance to overblown claims about trading at high speeds being wrong.

16.
Sensors (Basel) ; 23(10)2023 May 12.
Article in English | MEDLINE | ID: mdl-37430603

ABSTRACT

The TCP protocol is a connection-oriented, reliable transport-layer communication protocol widely used in network communication. With the rapid development and widespread deployment of data center networks, high-throughput, low-latency, multi-session network data processing has become an immediate need for network devices. If only a traditional software protocol stack is used for processing, it occupies a large amount of CPU resources and degrades network performance. To address these issues, this paper proposes a double-queue storage structure for a 10G TCP/IP hardware offload engine (TOE) based on an FPGA. Furthermore, a theoretical analysis model of the TOE's reception and transmission delay when interacting with the application layer is proposed, so that the TOE can dynamically select the transmission channel based on the interaction results. After board-level verification, the TOE supports 1024 TCP sessions with a reception rate of 9.5 Gbps and a minimum transmission latency of 600 ns. When the TCP packet payload length is 1024 bytes, the latency of the TOE's double-queue storage structure improves by at least 55.3% compared to other hardware implementation approaches, and the TOE's latency is only 3.2% of that of software implementation approaches.

17.
Glob Chang Biol ; 29(13): 3634-3651, 2023 07.
Article in English | MEDLINE | ID: mdl-37070967

ABSTRACT

The increasing frequency and intensity of climate extremes and complex ecosystem responses motivate the need for integrated observational studies at low latency to determine biosphere responses and carbon-climate feedbacks. Here, we develop a satellite-based rapid attribution workflow and demonstrate its use at a 1-2-month latency to attribute drivers of the carbon cycle feedbacks during the 2020-2021 Western US drought and heatwave. In the first half of 2021, concurrent negative photosynthesis anomalies and large positive column CO2 anomalies were detected with satellites. Using a simple atmospheric mass balance approach, we estimate a surface carbon efflux anomaly of 132 TgC in June 2021, a magnitude corroborated independently with a dynamic global vegetation model. Integrated satellite observations of hydrologic processes, representing the soil-plant-atmosphere continuum (SPAC), show that these surface carbon flux anomalies are largely due to substantial reductions in photosynthesis because of a spatially widespread moisture-deficit propagation through the SPAC between 2020 and 2021. A causal model indicates deep soil moisture stores partially drove photosynthesis, maintaining its values in 2020 and driving its declines throughout 2021. The causal model also suggests legacy effects may have amplified photosynthesis deficits in 2021 beyond the direct effects of environmental forcing. The integrated observation framework presented here provides a valuable first assessment of a biosphere extreme response and an independent testbed for improving drought propagation and mechanisms in models. The rapid identification of extreme carbon anomalies and hotspots can also aid mitigation and adaptation decisions.


Subject(s)
Droughts; Ecosystem; Atmosphere; Carbon Cycle; Soil; Plants; Carbon; Climate Change
18.
Sensors (Basel) ; 23(8)2023 Apr 11.
Article in English | MEDLINE | ID: mdl-37112226

ABSTRACT

With the rapid development of the 5G power Internet of Things (IoT), new power systems have higher requirements for data transmission rates, latency, reliability, and energy efficiency. In particular, the hybrid service of enhanced mobile broadband (eMBB) and ultra-reliable low-latency communication (URLLC) has brought new challenges to differentiated service in the 5G power IoT. To solve these problems, this paper first constructs a NOMA-based power IoT model for the mixed service of URLLC and eMBB. Considering the under-utilization of resources in eMBB and URLLC hybrid power service scenarios, the problem of maximizing system throughput through joint channel selection and power allocation is formulated. A matching-based channel selection algorithm and a water-filling-based power allocation algorithm are developed to tackle the problem. Both theoretical analysis and experimental simulation verify that the method has superior performance in system throughput and spectrum efficiency.
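The water-filling power allocation mentioned above admits a compact reference implementation: each active channel is filled up to a common water level mu, receiving power max(0, mu - 1/g). A sketch under the usual gain-to-noise-ratio formulation (the function name is ours; the paper's joint optimization adds channel selection on top):

```python
import numpy as np

def water_filling(gains, total_power):
    """Allocate `total_power` across channels with gain-to-noise
    ratios `gains`: p_i = max(0, mu - 1/g_i), with the water level
    mu chosen so the active powers sum to the budget."""
    inv = np.sort(1.0 / np.asarray(gains, dtype=float))
    mu = 0.0
    for k in range(len(inv), 0, -1):
        mu = (total_power + inv[:k].sum()) / k
        if mu > inv[k - 1]:          # all k strongest channels stay active
            break
    return np.maximum(0.0, mu - 1.0 / np.asarray(gains, dtype=float))
```

Weak channels whose inverse gain sits above the water level receive zero power, which is what concentrates the budget on the channels that contribute most to throughput.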

19.
Sensors (Basel) ; 23(7)2023 Apr 02.
Article in English | MEDLINE | ID: mdl-37050742

ABSTRACT

Augmented reality (AR) and virtual reality (VR) technologies are witnessing an evolutionary change in the 5G and Beyond (5GB) network era due to their promising ability to enable an immersive and interactive environment by coupling the virtual world with the real one. However, low-latency connectivity, where latency is defined as the end-to-end delay between an action and its reaction, is crucial to leveraging these technologies for a high-quality immersive experience. This paper provides a comprehensive survey and detailed insight into various advantageous approaches, from both hardware and software perspectives, as well as the integration of 5G technology toward 5GB, for enabling a low-latency environment for AR and VR applications. The contribution of 5GB systems as an outcome of several cutting-edge technologies, such as massive multiple-input multiple-output (mMIMO) and millimeter wave (mmWave), along with the utilization of artificial intelligence (AI) and machine learning (ML) techniques toward an ultra-low-latency communication system, is also discussed. Finally, the paper discusses the potential of using a visible-light-communications (VLC)-guided beam through a learning algorithm for a futuristic, evolved immersive experience of augmented and virtual reality, with ultra-low-latency transmission of multi-sensory tracking information under an optimal scheduling policy.

20.
EURASIP J Wirel Commun Netw ; 2023(1): 31, 2023.
Article in English | MEDLINE | ID: mdl-36969751

ABSTRACT

We propose an early-detection scheme to reduce communication latency based on sequential tests in the finite-blocklength regime for fixed-rate transmission without any feedback channel. The proposed scheme processes observations sequentially to decide in favor of one of the candidate symbols; the process stops as soon as a decision rule is satisfied, or waits for more samples to reach a given accuracy. We first provide the optimal achievable latency in additive white Gaussian noise channels for every channel code given a probability of block error. For example, for a rate R = 0.5 and a blocklength of 500 symbols, we show that only 63% of the symbol time is needed to reach an error rate of 10⁻⁵. Then, we prove that if short messages can be transmitted in parallel Gaussian channels via a multi-carrier modulation, there exists an optimal low-latency strategy for every code. Next, we show how early detection can be effective with band-limited orthogonal frequency-division multiplexing signals while maintaining a given spectral efficiency by random coding or pre-coding with random matrices. Finally, we show how the proposed early-detection scheme is effective in multi-hop systems.
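The sequential test underlying such early-detection schemes can be sketched as a Wald-style test between two candidate symbols in AWGN: accumulate the log-likelihood ratio sample by sample and stop as soon as it crosses a threshold, instead of waiting out the whole blocklength. This is a binary toy version of ours; the paper handles full channel codes:

```python
def sprt_detect(samples, mean0, mean1, sigma, threshold):
    """Return (decision, n): the chosen symbol hypothesis and the
    number of samples consumed before the accumulated
    log-likelihood ratio crossed +/- threshold (or the block
    ended and a forced decision was made)."""
    llr = 0.0
    for n, y in enumerate(samples, start=1):
        llr += ((y - mean0) ** 2 - (y - mean1) ** 2) / (2.0 * sigma ** 2)
        if llr >= threshold:
            return 1, n
        if llr <= -threshold:
            return 0, n
    return (1 if llr > 0 else 0), len(samples)   # forced decision
```

With clean antipodal samples the decision fires well before the block ends, mirroring the abstract's observation that only a fraction of the symbol time is needed at a given error rate.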
