Results 1 - 7 of 7
1.
Entropy (Basel) ; 25(3)2023 Feb 24.
Article in English | MEDLINE | ID: mdl-36981302

ABSTRACT

This paper considers a downlink resource-allocation problem in distributed-interference orthogonal frequency-division multiple access (OFDMA) systems under maximal power constraints. As the upcoming fifth-generation (5G) wireless networks grow increasingly complex and heterogeneous, it is challenging for resource-allocation tasks to optimize system performance metrics while simultaneously guaranteeing user service requests. Because the underlying optimization problems are non-convex, finding the optimal resource allocation with existing approaches is computationally expensive. Recently, model-free reinforcement learning (RL) techniques have emerged as alternative approaches for solving non-convex and NP-hard optimization problems in wireless networks. In this paper, we study a deep Q-learning (DQL)-based approach to optimizing transmit power control for users in multi-cell interference networks. In particular, we apply a DQL algorithm for resource allocation to maximize the overall system throughput subject to maximum power and signal-to-interference-plus-noise ratio (SINR) constraints in a flat frequency channel. We first formulate the optimization problem as a non-cooperative game model, in which multiple base stations (BSs) compete for spectral efficiency by improving their achievable utility functions while ensuring the quality-of-service (QoS) requirements of the corresponding receivers. Then, we develop a deep-reinforcement-learning-based resource allocation model to maximize the system throughput while satisfying the power and spectral efficiency requirements. In this setting, we define the state-action spaces and the reward function used to explore the possible actions and learning outcomes. Numerical simulations demonstrate that the proposed DQL-based scheme outperforms a traditional model-based solution.
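The abstract names the key ingredients: a discretized power-control action space, an SINR-driven reward, and Q-learning updates per base station. As a rough illustration only, the sketch below replaces the paper's deep Q-network with a tabular stand-in; the channel gains, power levels, noise, and QoS threshold are invented for the example, not values from the paper.

```python
# Hypothetical sketch: Q-learning for downlink power control in a small
# interference network. A Q-table stands in for the paper's deep Q-network.
import numpy as np

rng = np.random.default_rng(0)
N = 3                                  # number of base-station/user pairs
P = np.array([0.1, 0.5, 1.0])          # assumed discrete transmit powers (W)
G = rng.uniform(0.1, 1.0, (N, N))      # G[i, j]: gain from BS j to user i
np.fill_diagonal(G, 1.0)               # direct links are strongest
noise, sinr_min = 1e-2, 1.0            # assumed noise power and SINR (QoS) floor

def sinr(p_idx):
    p = P[p_idx]
    signal = np.diag(G) * p
    interference = G @ p - signal      # sum of cross-link powers at each user
    return signal / (interference + noise)

def reward(p_idx):
    s = sinr(p_idx)
    rate = np.log2(1.0 + s).sum()        # system throughput (bits/s/Hz)
    penalty = 5.0 * np.sum(s < sinr_min)  # penalize QoS violations
    return rate - penalty

Q = np.zeros((N, len(P), len(P)))      # per-BS Q over (own power, next power)
alpha, gamma, eps = 0.1, 0.9, 0.2
state = rng.integers(len(P), size=N)   # random initial power levels

for step in range(5000):
    # epsilon-greedy action per BS (non-cooperative players, shared reward)
    action = np.array([
        rng.integers(len(P)) if rng.random() < eps else Q[i, state[i]].argmax()
        for i in range(N)
    ])
    r = reward(action)
    for i in range(N):
        td = r + gamma * Q[i, action[i]].max() - Q[i, state[i], action[i]]
        Q[i, state[i], action[i]] += alpha * td
    state = action

final = np.array([Q[i, state[i]].argmax() for i in range(N)])
print("learned power levels:", P[final])
print("resulting SINRs:", sinr(final))
```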

2.
Sensors (Basel) ; 22(9)2022 May 09.
Article in English | MEDLINE | ID: mdl-35591279

ABSTRACT

With the proliferation of 5G mobile networks in next-generation wireless communication, the design and optimization of 5G networks are progressing toward an improved physical layer security (PLS) paradigm. This is because traditional methods for optimizing PLS in networks fail to adapt to the new features, technologies, and resource-management demands of diversified applications. To move beyond these methods, future 5G and beyond-5G (B5G) networks will need to rely on new enabling technologies. Accordingly, approaches to PLS design and optimization based on artificial intelligence (AI) and machine learning (ML) have been shown to outperform traditional security technologies, allowing future 5G networks to be more intelligent and robust and to significantly improve system-design performance over traditional security methods. With the objective of advancing future PLS research, this review presents an elaborate discussion of design and optimization approaches for wireless PLS techniques. In particular, we focus on both signal-processing and information-theoretic security approaches to investigate the optimization techniques and system designs of PLS strategies. The review begins with the fundamental concepts associated with PLS, including a discussion of conventional cryptographic techniques and wiretap channel models. We then discuss the performance metrics and basic optimization schemes typically adopted in PLS design strategies. Research directions for secure system designs and optimization problems are then reviewed in terms of signal processing, resource allocation, and node/antenna selection. Thereafter, the applications of AI and ML technologies to the optimization and design of PLS systems are discussed. In this context, ML- and AI-based solutions for end-to-end physical-layer joint optimization, secure resource allocation, and signal-processing methods are presented. We conclude with a discussion of future trends and technical challenges related to PLS system design and the benefits of AI technologies.
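For readers new to the wiretap-channel metrics such reviews build on, the standard secrecy-capacity computation for a degraded Gaussian wiretap channel is sketched below; the SNR values are illustrative assumptions, not figures from the review.

```python
# Minimal sketch of the classic PLS metric: secrecy capacity is the gap
# between the legitimate channel rate and the eavesdropper's rate.
import numpy as np

def secrecy_capacity(snr_main_db, snr_eve_db):
    """C_s = [log2(1 + SNR_main) - log2(1 + SNR_eve)]^+  (bits/s/Hz)."""
    snr_main = 10 ** (snr_main_db / 10)
    snr_eve = 10 ** (snr_eve_db / 10)
    return max(0.0, np.log2(1 + snr_main) - np.log2(1 + snr_eve))

# Made-up SNR pairs: secrecy vanishes once the eavesdropper's channel wins.
for main_db, eve_db in [(20, 5), (15, 10), (10, 15)]:
    cs = secrecy_capacity(main_db, eve_db)
    print(f"main {main_db} dB, eve {eve_db} dB -> C_s = {cs:.2f} bits/s/Hz")
```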


Subjects
Artificial Intelligence; Technology; Communication; Machine Learning; Wireless Technology
3.
Sensors (Basel) ; 21(19)2021 Sep 28.
Article in English | MEDLINE | ID: mdl-34640811

ABSTRACT

Extracting features from sensing data on edge devices is a challenging application for which deep neural networks (DNNs) have shown promising results. Unfortunately, the general microcontroller-class processors widely used in sensing systems fail to achieve real-time inference. Accelerating compute-intensive DNN inference is therefore of utmost importance. Given the physical limitations of sensing devices, the processor design must balance several performance metrics, including low power consumption, low latency, and flexible configuration. In this paper, we propose a lightweight, pipeline-integrated deep learning architecture compatible with the open-source RISC-V instruction set. The DNN dataflow is organized by a very long instruction word (VLIW) pipeline, combined with the proposed intelligent enhanced instructions and a single instruction, multiple data (SIMD) parallel processing unit. Experimental results show a total power consumption of about 411 mW and a power efficiency of about 320.7 GOPS/W.
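As a quick consistency check on the two reported figures, the implied peak throughput follows directly from power times power efficiency:

```python
# Sanity check on the abstract's numbers: GOPS/W * W = implied GOPS.
power_w = 0.411      # reported total power: ~411 mW
efficiency = 320.7   # reported power efficiency: ~320.7 GOPS/W

throughput_gops = efficiency * power_w
print(f"implied throughput: {throughput_gops:.1f} GOPS")  # ~131.8 GOPS
```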


Subjects
Neural Networks, Computer
4.
Sensors (Basel) ; 21(2)2021 Jan 10.
Article in English | MEDLINE | ID: mdl-33435143

ABSTRACT

With the development of deep learning and edge computing, their combination can make artificial intelligence ubiquitous. Because of the constrained computational resources of edge devices, research on on-device deep learning focuses not only on model accuracy but also on model efficiency, for example, inference latency. Many attempts have been made to optimize existing deep learning models so that they can be deployed on edge devices and meet specific application requirements while maintaining high accuracy. Such work not only requires professional knowledge but also involves extensive experimentation, which limits the customization of neural networks for varied devices and application scenarios. To reduce human intervention in designing and optimizing neural network structures, multi-objective neural architecture search methods have been proposed that automatically search for neural networks with high accuracy that also satisfy certain hardware performance requirements. However, current methods commonly set accuracy and inference latency as performance indicators during the search and sample numerous network structures to obtain the required neural network. Without the search objectives regulating the search direction, a large number of useless networks are generated during the search, which greatly reduces search efficiency. In this paper, we therefore propose an efficient resource-aware search method. First, a network-inference consumption-profiling model is established for any specific device; it directly yields the resource consumption of each operation in a network structure and the inference latency of the entire sampled network. Next, building on Bayesian search, a resource-aware Pareto Bayesian search is proposed, with accuracy and inference latency set as constraints that regulate the search direction. With a clearer search direction, overall search efficiency improves. Furthermore, a cell-based structure and lightweight operations are applied to optimize the search space, further enhancing search efficiency. Experimental results demonstrate that, with our method, the inference latency of the searched network structure is reduced by 94.71% without sacrificing accuracy, while search efficiency increases by 18.18%.
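The core selection step in any such multi-objective search is a Pareto filter over sampled networks. The sketch below is a generic illustration of that filter under invented candidate names and numbers, not the paper's implementation:

```python
# Hypothetical Pareto filter: keep sampled networks that are not dominated
# under two objectives, accuracy (maximize) and inference latency (minimize).
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    accuracy: float    # validation accuracy
    latency_ms: float  # profiled on-device inference latency

def pareto_front(cands):
    """A candidate survives if no other is at least as good on both
    objectives and strictly better on one."""
    front = []
    for c in cands:
        dominated = any(
            o.accuracy >= c.accuracy and o.latency_ms <= c.latency_ms
            and (o.accuracy > c.accuracy or o.latency_ms < c.latency_ms)
            for o in cands
        )
        if not dominated:
            front.append(c)
    return front

samples = [
    Candidate("net-a", 0.92, 38.0),
    Candidate("net-b", 0.90, 21.0),
    Candidate("net-c", 0.88, 45.0),  # dominated by both net-a and net-b
    Candidate("net-d", 0.94, 60.0),
]
for c in pareto_front(samples):
    print(f"{c.name}: acc={c.accuracy:.2f}, latency={c.latency_ms} ms")
```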

5.
Sensors (Basel) ; 19(15)2019 Aug 03.
Article in English | MEDLINE | ID: mdl-31382640

ABSTRACT

The expansion and improvement of synthetic aperture radar (SAR) technology have greatly enhanced its practicality. SAR imaging requires real-time processing of large input images under limited power consumption. Designing a specific heterogeneous array processor is an effective approach to meeting the power constraints and real-time processing requirements of such an application system. In this paper, taking the chirp scaling algorithm (CSA), a commonly used SAR imaging algorithm, as an example, the characteristics of each calculation stage in the SAR imaging process are analyzed and the dataflow model of SAR imaging is extracted. A heterogeneous array architecture for SAR imaging that efficiently supports fast Fourier transform / inverse fast Fourier transform (FFT/IFFT) and phase-compensation operations is proposed. First, a heterogeneous array consisting of fixed-point processing elements (PEs) for FFT/IFFT and floating-point processing elements (FPEs) for phase compensation increases energy efficiency by 50% compared with an architecture using only floating-point units. Second, data cross-placement and simultaneous-access strategies are proposed to support intra-block parallel processing of SAR block imaging, achieving up to 115.2 GOPS throughput. Third, a resource-management strategy for the heterogeneous computing array is designed that supports pipelined processing of FFT/IFFT and phase-compensation operations, improving PE utilization by a factor of 1.82 and energy efficiency by a factor of 1.5. Implemented in 65 nm technology, the processor achieves an energy efficiency of up to 254 GOPS/W according to the experimental results. The imaging fidelity and accuracy of the proposed processor were verified by evaluating the image quality of real scenes.
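To make the dataflow concrete, the sketch below shows the canonical CSA stage structure (FFT/IFFT stages interleaved with element-wise phase multiplies) that motivates the fixed-point/floating-point split; the phase arrays are placeholders, not a radiometrically correct SAR focusing kernel.

```python
# Schematic CSA dataflow: every stage is either an FFT/IFFT along one axis
# or an element-wise phase multiply (the "phase compensation" operations).
import numpy as np

def csa_skeleton(raw, phi1, phi2, phi3):
    """raw: range x azimuth complex echo block; phi1..phi3: precomputed
    phase-compensation arrays of the same shape (placeholders here)."""
    s = np.fft.fft(raw, axis=1)    # azimuth FFT -> range-Doppler domain
    s = s * np.exp(1j * phi1)      # chirp scaling phase multiply
    s = np.fft.fft(s, axis=0)      # range FFT -> 2-D frequency domain
    s = s * np.exp(1j * phi2)      # range compensation / bulk RCMC phase
    s = np.fft.ifft(s, axis=0)     # range IFFT
    s = s * np.exp(1j * phi3)      # azimuth compensation phase
    return np.fft.ifft(s, axis=1)  # azimuth IFFT -> focused image block

rng = np.random.default_rng(1)
block = rng.standard_normal((256, 256)) + 1j * rng.standard_normal((256, 256))
zeros = np.zeros((256, 256))
image = csa_skeleton(block, zeros, zeros, zeros)  # identity phases
print(np.allclose(image, block))                  # True: transforms round-trip
```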

6.
Zhonghua Yan Ke Za Zhi ; 50(5): 349-54, 2014 May.
Article in Chinese | MEDLINE | ID: mdl-25052804

ABSTRACT

OBJECTIVE: To investigate the prevalence and characteristics of primary glaucoma in the population of the Huamu community, Shanghai. METHODS: This was a population-based cross-sectional study. Using a random cluster sampling method, 3 neighborhood committees were selected from the Huamu community, and the survey was carried out through community screening combined with diagnosis in a tertiary hospital from March to September 2011. Residents aged 50 years or older were included. Information was collected on the participants' presenting visual acuity with habitual correction and best-corrected visual acuity, intraocular pressure (IOP) assessed with a non-contact tonometer, anterior segment findings from slit-lamp anterior segment photography, and optic disc findings from fundus photography. All glaucoma suspects underwent IOP measurement, gonioscopy, visual field testing, and retinal nerve fiber layer thickness measurement at the Shanghai Eye Disease Prevention and Treatment Center. Glaucoma was diagnosed according to the criteria of the International Society for Geographic and Epidemiological Ophthalmology. The distributions of the different types of primary glaucoma across gender and age groups were described, and prevalence rates between groups were compared using the chi-square test. RESULTS: A total of 2528 cases were examined, with a response rate of 80.36%. The prevalence of primary glaucoma was 3.09%, of which primary open-angle glaucoma (POAG) accounted for 2.85% and primary angle-closure glaucoma (PACG) for 0.24%. The prevalence of POAG trended upward with age. The rate of blindness in one or both eyes was 12.5% for POAG and 3/6 for PACG; the blindness rate of POAG was lower than that of PACG. In this investigation, 88.89% of POAG cases had not been previously diagnosed, whereas 100% of PACG cases had been previously diagnosed and treated. CONCLUSIONS: The prevalence of primary glaucoma in the Huamu community is relatively high, and the previous diagnosis and treatment rates of POAG are relatively low. Early screening and health education for primary glaucoma will be important in future blindness-prevention work.
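The between-group comparison described in the methods can be reproduced with a standard chi-square test, as in the sketch below; the per-group counts are invented for illustration, and only the overall totals (2528 examined, 3.09% prevalence) come from the abstract.

```python
# Illustrative chi-square test of glaucoma prevalence between two groups.
from scipy.stats import chi2_contingency

# rows: [cases, non-cases]; columns: two hypothetical age groups
table = [[25, 53],      # invented case counts (sum to ~78 = 3.09% of 2528)
         [1475, 975]]   # invented non-case counts

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")

# overall prevalence from the abstract, for reference
print(f"overall prevalence: {78 / 2528:.2%}")  # ~3.09%
```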


Subjects
Glaucoma, Angle-Closure/epidemiology; Glaucoma, Open-Angle/epidemiology; Aged; Blindness; China/epidemiology; China/ethnology; Cross-Sectional Studies; Gonioscopy; Humans; Intraocular Pressure; Middle Aged; Optic Disk; Prevalence; Tonometry, Ocular; Visual Acuity; Visual Field Tests
7.
Micromachines (Basel) ; 15(3)2024 Feb 23.
Article in English | MEDLINE | ID: mdl-38542554

ABSTRACT

Real-time heterogeneous parallel embedded digital signal processor (DSP) systems process multiple data streams in parallel within stringent time intervals. This type of system on chip (SoC) requires the network on chip (NoC) to establish multiple symbiotic parallel data-transmission paths with ultra-low transmission latency in real time. Our earlier NoC research, PCCNOC, meets this need: PCCNOC uses packet routing to establish and lock a transmission circuit, making it well suited to ultra-low-latency, high-bandwidth transmission of long data packets. However, a parallel multi-data-stream DSP system also needs to transmit roughly the same number of short data packets for job configuration and job-execution status reports, and for short packets the link-establishment routing delay becomes relatively significant. Our further research therefore introduced PaCHNOC, a hybrid NoC in which long data packets are transmitted over a circuit established and locked by routing, while short data packets are attached to the routing packet itself and complete their transmission during the routing process, avoiding the PCCNOC setup delay. Simulation shows that PaCHNOC supports real-time heterogeneous parallel embedded DSP systems well, achieving a 65% overall latency reduction compared with related work. Finally, we used PaCHNOC in the baseband subsystem of a real 5G base station, where it reduced comprehensive latency by 31% in comparison with related work, demonstrating its suitability as a NoC for the baseband subsystems of 5G base stations.
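A toy latency model makes the trade-off the hybrid design addresses visible: a long packet amortizes the circuit-setup delay while a short one does not. All cycle counts and the short-packet threshold below are assumptions for illustration, not values from the paper.

```python
# Toy model: circuit-switched transfer vs. piggybacking short packets on the
# routing/setup packet, as in the hybrid scheme described above.
HOP_ROUTE = 4   # assumed cycles per hop for the routing/setup packet
HOP_DATA = 1    # assumed cycles per hop on a locked circuit

def circuit_latency(flits, hops):
    """Circuit-switched (PCCNOC-style): route and lock first, then stream."""
    return hops * HOP_ROUTE + hops * HOP_DATA + flits

def hybrid_latency(flits, hops, short_threshold=4):
    """Hybrid (PaCHNOC-style): short packets ride inside the routing packet."""
    if flits <= short_threshold:
        return hops * HOP_ROUTE  # delivered during the routing process itself
    return circuit_latency(flits, hops)

for flits in (2, 256):
    a, b = circuit_latency(flits, hops=6), hybrid_latency(flits, hops=6)
    print(f"{flits:>3}-flit packet: circuit={a} cycles, hybrid={b} cycles")
```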
