Results 1 - 10 of 10
1.
Entropy (Basel) ; 26(6)2024 May 28.
Article in English | MEDLINE | ID: mdl-38920471

ABSTRACT

In digital baseband processing, the forward error correction (FEC) unit is among the most demanding components in terms of computational complexity and power consumption. Hence, efficient implementation of FEC decoders is crucial for next-generation mobile broadband standards and an ongoing research topic. Quantization has a significant impact on the decoder area, power consumption and throughput. Thus, lower bit widths are preferred for efficient implementations but degrade the error correction capability. To address this issue, a non-uniform quantization based on the Information Bottleneck (IB) method is proposed that enables a low bit width while maintaining the essential information. Many investigations on the use of the IB method for Low-Density Parity-Check (LDPC) code decoders exist and have shown its advantages from an implementation perspective. However, for polar code decoder implementations, there exists only one publication, which is not based on the state-of-the-art Fast Simplified Successive-Cancellation (Fast-SSC) decoding algorithm and reports only synthesis results without energy estimation. In contrast, our paper presents several optimized Fast-SSC polar code decoder implementations using IB-based quantization, with placement and routing results in an advanced 12 nm FinFET technology. Gains of up to 16% in area and 13% in energy efficiency are achieved with IB-based quantization at a Frame Error Rate (FER) of 10^-7 for a polar code with N=1024 and R=0.5, compared to state-of-the-art decoders.
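
The core idea behind the IB-based quantization described above can be illustrated with a small sketch. The snippet below, which is not part of the paper, brute-forces the threshold set of a coarse LLR quantizer that maximizes the mutual information I(X; Z) between the transmitted bit and the quantizer output; the BPSK/AWGN LLR model, the candidate threshold grid, and the function names are illustrative assumptions, and the paper's actual IB design for Fast-SSC node operations is considerably more involved.

```python
# Minimal sketch, assuming a BPSK/AWGN channel and brute-force threshold search;
# not the paper's IB design procedure.
import itertools
import numpy as np

def mutual_information(p_xz):
    """I(X;Z) in bits for a joint probability table p_xz[x, z]."""
    px = p_xz.sum(axis=1, keepdims=True)
    pz = p_xz.sum(axis=0, keepdims=True)
    mask = p_xz > 0
    return float(np.sum(p_xz[mask] * np.log2(p_xz[mask] / (px @ pz)[mask])))

def design_ib_quantizer(snr_db=1.0, n_bits=2, candidates=np.linspace(-6, 6, 17)):
    """Search for the 2^n_bits-level LLR quantizer maximizing I(X;Z)."""
    sigma2 = 10 ** (-snr_db / 10)
    llr = np.linspace(-25, 25, 2001)                 # fine-grained LLR axis
    best_mi, best_thr = -1.0, None
    for thr in itertools.combinations(candidates, 2 ** n_bits - 1):
        p_xz = np.zeros((2, 2 ** n_bits))
        for i, x in enumerate([+1, -1]):             # LLR ~ N(2x/sigma2, 4/sigma2)
            pdf = np.exp(-(llr - 2 * x / sigma2) ** 2 / (8 / sigma2))
            pdf /= 2 * pdf.sum()                     # p(x) = 1/2
            edges = [llr[0] - 1, *thr, llr[-1] + 1]
            p_xz[i] = np.histogram(llr, bins=edges, weights=pdf)[0]
        mi = mutual_information(p_xz)
        if mi > best_mi:
            best_mi, best_thr = mi, thr
    return best_thr, best_mi
```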

2.
Sensors (Basel) ; 23(15)2023 Aug 04.
Article in English | MEDLINE | ID: mdl-37571726

ABSTRACT

Wheat stripe rust disease (WRD) is extremely detrimental to wheat crop health, and it severely affects the crop yield, increasing the risk of food insecurity. Manual inspection by trained personnel is carried out to assess the disease spread and the extent of damage to wheat fields. However, this is quite inefficient, time-consuming, and laborious, owing to the large area of wheat plantations. Artificial intelligence (AI) and deep learning (DL) offer efficient and accurate solutions to such real-world problems. By analyzing large amounts of data, AI algorithms can identify patterns that are difficult for humans to detect, enabling early disease detection and prevention. However, deep learning models are data-driven, and the scarcity of data related to specific crop diseases is one major hindrance to developing such models. To overcome this limitation, in this work, we introduce an annotated real-world semantic segmentation dataset named the NUST Wheat Rust Disease (NWRD) dataset. Multileaf images from wheat fields under various illumination conditions with complex backgrounds were collected, preprocessed, and manually annotated to construct a segmentation dataset specific to wheat stripe rust disease. Classification of WRD into different types and categories is a task that has been solved in the literature; however, semantic segmentation of wheat crops to identify the specific areas of plants and leaves affected by the disease remains a challenge. For this reason, in this work, we target semantic segmentation of WRD to estimate the extent of disease spread in wheat fields. Sections of fields where the disease is prevalent need to be segmented to ensure that the sick plants are quarantined and remedial actions are taken. This will consequently limit the use of harmful fungicides to the targeted disease areas instead of the majority of the wheat field, promoting environmentally friendly and sustainable farming solutions. Owing to the complexity of the proposed NWRD segmentation dataset, in our experiments, promising results were obtained using the UNet semantic segmentation model and the proposed adaptive patching with feedback (APF) technique, which produced a precision of 0.506, recall of 0.624, and F1 score of 0.557 for the rust class.
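
As a rough illustration of the reported rust-class metrics and the patch-based processing of large field images, the sketch below (assuming simple binary masks and fixed-size patches; the paper's adaptive patching with feedback scheme is more elaborate) computes pixel-wise precision, recall and F1 and tiles an image into UNet-sized patches.

```python
# Illustrative sketch only; patch size, stride, and class id are assumptions.
import numpy as np

def rust_metrics(pred_mask: np.ndarray, gt_mask: np.ndarray, rust_id: int = 1):
    """Pixel-wise precision, recall, F1 for a single 'rust' class."""
    pred = pred_mask == rust_id
    gt = gt_mask == rust_id
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

def extract_patches(image: np.ndarray, patch: int = 256, stride: int = 256):
    """Tile an H x W x C field image into fixed-size patches for the UNet."""
    h, w = image.shape[:2]
    return [image[y:y + patch, x:x + patch]
            for y in range(0, h - patch + 1, stride)
            for x in range(0, w - patch + 1, stride)]
```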


Subject(s)
Basidiomycota , Triticum , Humans , Artificial Intelligence , Plant Diseases , Agricultural Crops
3.
Sensors (Basel) ; 22(4)2022 Feb 13.
Article in English | MEDLINE | ID: mdl-35214331

ABSTRACT

The need for oil spill monitoring systems has long been of concern in an attempt to contain damage with a rapid response time. When it comes to oil thickness estimation, few reliable methods capable of accurately measuring the thickness of thick oil slicks (in mm) on the sea surface have been proposed. In this article, we provide accurate estimates of oil slick thicknesses using nadir-looking wide-band radar sensors incorporating both the C- and X-frequency bands, operating over a calm ocean when the weather conditions are suitable for cleaning operations and the wind speed is very low (<3 m/s). We develop Maximum-Likelihood dual- and multi-frequency statistical signal processing algorithms to estimate the thicknesses of spilled oil. The estimators solve a Minimum-Euclidean-Distance classification problem over pre-defined multidimensional constellation sets of radar reflectivity values. Furthermore, to make the algorithms usable in oil-spill scenarios, we devise a practical iterative procedure for applying the proposed 2D and 3D estimators under noisy conditions and assess its accuracy. Results on simulated and in-lab experimental data show that M-Scan 4D estimators outperform lower-order estimators even when the iterative procedure is applied. This work demonstrates that, using radar measurements taken from nadir-looking systems, thick oil slick thicknesses of up to 10 mm can be accurately estimated. To the best of our knowledge, active radar sensors have not previously been used to estimate oil slick thickness.
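
A minimal sketch of the Minimum-Euclidean-Distance classification step is given below. The constellation values are placeholders, since in the paper they come from a physical reflectivity model of a thin oil layer over sea water; only the nearest-point decision rule is illustrated.

```python
# Sketch with placeholder constellation values; not the paper's reflectivity model.
import numpy as np

def estimate_thickness(measured, constellation, thicknesses_mm):
    """measured: (2,) reflectivities [C-band, X-band];
    constellation: (N, 2) model reflectivities, one row per candidate thickness."""
    d = np.linalg.norm(constellation - np.asarray(measured), axis=1)
    return thicknesses_mm[int(np.argmin(d))]

# Hypothetical usage with stand-in constellation values:
thicknesses_mm = np.arange(1, 11)                              # 1 mm ... 10 mm
constellation = np.column_stack([np.cos(0.5 * thicknesses_mm),
                                 np.cos(0.8 * thicknesses_mm)])
print(estimate_thickness([0.1, -0.4], constellation, thicknesses_mm))
```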

4.
Sensors (Basel) ; 22(7)2022 Mar 24.
Article in English | MEDLINE | ID: mdl-35408099

ABSTRACT

Recent progress in quantum computers severely endangers the security of widely used public-key cryptosystems and of all communication that relies on them. Thus, the US NIST is currently exploring new post-quantum cryptographic algorithms that are robust against quantum computers. Security is seen as one of the most critical issues of low-power IoT devices, even with pre-quantum public-key cryptography, since IoT devices have tight energy constraints, limited computational power and strict memory limitations. In this paper, we present, to the best of our knowledge, the first in-depth investigation of the application of potential post-quantum key encapsulation mechanisms (KEMs) and digital signature algorithms (DSAs) proposed in the related US NIST process to a state-of-the-art, TLS-based, low-power IoT infrastructure. We implemented these new KEMs and DSAs in such a representative infrastructure and measured their impact on energy consumption, latency and memory requirements during TLS handshakes on an IoT edge device. Based on our investigations, we gained the following new insights. First, we show that the main contributor to high TLS handshake latency is the higher bandwidth requirement of post-quantum primitives rather than the cryptographic computation itself. Second, we demonstrate that a smart combination of multiple DSAs yields the most energy-, latency- and memory-efficient public key infrastructures, in contrast to NIST's goal of standardizing only one algorithm. Third, we show that code-based, isogeny-based and lattice-based algorithms can be implemented on a low-power IoT edge device based on an off-the-shelf Cortex-M4 microcontroller while maintaining viable battery runtimes. This is contrary to much research that claims dedicated hardware accelerators are mandatory.
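
The first insight, that transmission size rather than computation dominates handshake latency on a narrow-band link, can be made concrete with a back-of-the-envelope model. The sketch below uses approximate public sizes for Kyber-512 and Dilithium2 and illustrative link and CPU figures; it is not a reproduction of the paper's measurements.

```python
# Back-of-the-envelope model; sizes are approximate public figures, link and
# cycle counts are illustrative assumptions for a Cortex-M4 class device.
def handshake_latency_ms(bytes_on_air, crypto_cycles, link_kbps=250, cpu_mhz=168):
    tx_ms = bytes_on_air * 8 / link_kbps          # time spent transmitting
    cpu_ms = crypto_cycles / (cpu_mhz * 1e3)      # time spent computing
    return tx_ms, cpu_ms

# Approximate sizes (bytes): Kyber-512 public key + ciphertext,
# Dilithium2 public key + signature; crypto_cycles is a placeholder.
tx, cpu = handshake_latency_ms(bytes_on_air=800 + 768 + 1312 + 2420,
                               crypto_cycles=5e6)
print(f"transmit ~ {tx:.0f} ms, compute ~ {cpu:.0f} ms")
```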

5.
Entropy (Basel) ; 24(10)2022 Oct 12.
Article in English | MEDLINE | ID: mdl-37420472

ABSTRACT

In Message Passing (MP) decoding of Low-Density Parity-Check (LDPC) codes, extrinsic information is exchanged between Check Nodes (CNs) and Variable Nodes (VNs). In a practical implementation, this information exchange is limited by quantization using only a small number of bits. In recent investigations, a novel class of Finite Alphabet Message Passing (FA-MP) decoders has been designed to maximize the Mutual Information (MI) using only a small number of bits per message (e.g., 3 or 4 bits), with a communication performance close to high-precision Belief Propagation (BP) decoding. In contrast to the conventional BP decoder, operations are given as discrete-input discrete-output mappings which can be described by multidimensional LUTs (mLUTs). A common approach to avoid the exponential growth of mLUT size with the node degree is the sequential LUT (sLUT) design approach, i.e., using a sequence of two-dimensional Lookup-Tables (LUTs) for the design, which leads to a slight performance degradation. Recently, approaches such as Reconstruction-Computation-Quantization (RCQ) and Mutual Information-Maximizing Quantized Belief Propagation (MIM-QBP) have been proposed to avoid the complexity drawback of mLUTs by using pre-designed functions that require calculations over a computational domain. It has been shown that these calculations can represent the mLUT mapping exactly when executed with infinite precision over real numbers. Based on the framework of MIM-QBP and RCQ, the Minimum-Integer Computation (MIC) decoder design generates low-bit integer computations, derived from the Log-Likelihood Ratio (LLR) separation property of the information-maximizing quantizer, that replace the mLUT mappings either exactly or approximately. We derive a novel criterion for the bit resolution required to represent the mLUT mappings exactly. Furthermore, we show that our MIC decoder matches the communication performance of the corresponding mLUT decoder exactly, but with much lower implementation complexity. We also perform an objective comparison between state-of-the-art Min-Sum (MS) and FA-MP decoder implementations for throughput towards 1 Tb/s in a state-of-the-art 28 nm Fully-Depleted Silicon-on-Insulator (FD-SOI) technology. Furthermore, we demonstrate that our new MIC decoder implementation outperforms previous FA-MP decoders and MS decoders in terms of reduced routing complexity, area efficiency and energy efficiency.
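
The general shape of such computational-domain node updates can be sketched as follows: incoming low-bit messages are mapped to integer reconstruction values, summed with the channel value, and re-quantized by threshold comparison. The reconstruction levels and thresholds below are placeholders rather than the paper's MIC design, where they are derived from an information-maximizing quantizer.

```python
# Illustrative variable-node update in the computational domain; the level and
# threshold values are placeholders, not an information-optimal design.
import numpy as np

RECON = np.array([-7, -5, -3, -1, 1, 3, 5, 7])        # 3-bit message -> integer value
THRESH = np.array([-24, -16, -8, 0, 8, 16, 24])       # re-quantization thresholds

def vn_update(channel_val: int, in_msgs: np.ndarray) -> np.ndarray:
    """Return one 3-bit extrinsic output message per incoming edge."""
    recon = RECON[in_msgs]                             # integer reconstruction
    total = channel_val + recon.sum()
    extrinsic = total - recon                          # leave-one-out sums
    return np.searchsorted(THRESH, extrinsic)          # back to 3-bit indices

print(vn_update(channel_val=5, in_msgs=np.array([6, 4, 1])))
```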

6.
Sensors (Basel) ; 22(1)2021 Dec 30.
Article in English | MEDLINE | ID: mdl-35009805

ABSTRACT

With the recent increase in the use of augmented reality (AR) in educational laboratory settings, there is a need for new intelligent sensor systems capturing all aspects of the real environment. We present a smart sensor system meeting these requirements for STEM (science, technology, engineering, and mathematics) experiments with electrical circuits. The system consists of custom experiment boxes and cables combined with an application for the Microsoft HoloLens 2, which creates an AR experiment environment. The boxes combine sensors for measuring the electrical voltage and current at the integrated electrical components with a reconstruction of the currently constructed electrical circuit and of the position of each sensor box on the table. Combining these data, the AR application visualizes the measurement data spatially and temporally coherently with the real experiment boxes, thus fulfilling demands derived from traditional multimedia learning theory. Following an evaluation of the accuracy and precision of the presented sensors, the usability of the system was evaluated with n=20 pupils at a German high school, where it was rated with a System Usability Scale score of 94 out of 100.
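
The reported score follows the standard System Usability Scale (SUS) scoring rule, sketched below with an illustrative response vector that is not the study's raw data.

```python
# Standard SUS scoring; the example responses are illustrative only.
def sus_score(responses):
    """responses: 10 answers on a 1..5 scale; items 1,3,5,7,9 are positively
    worded (score - 1), items 2,4,6,8,10 negatively worded (5 - score)."""
    assert len(responses) == 10
    total = sum((r - 1) if i % 2 == 0 else (5 - r)     # 0-based index: even = odd item
                for i, r in enumerate(responses))
    return total * 2.5                                  # scale to 0..100

print(sus_score([5, 1, 5, 1, 5, 1, 5, 2, 5, 1]))        # example respondent
```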


Subject(s)
Augmented Reality , Schools
7.
Sensors (Basel) ; 20(10)2020 May 16.
Article in English | MEDLINE | ID: mdl-32429341

ABSTRACT

The estimation of human hand pose has become the basis for many vital applications where the user depends mainly on the hand pose as a system input. Virtual reality (VR) headsets, the Shadow Dexterous Hand and in-air signature verification are a few examples of applications that require tracking hand movements in real time. The state-of-the-art 3D hand pose estimation methods are based on Convolutional Neural Networks (CNNs). These methods are implemented on Graphics Processing Units (GPUs), mainly due to their extensive computational requirements. However, GPUs are not suitable for practical application scenarios where low power consumption is crucial. Furthermore, the difficulty of embedding a bulky GPU into a small device prevents such applications from being ported to mobile devices. The goal of this work is to provide an energy-efficient solution for an existing depth-camera-based hand pose estimation algorithm. First, we compress the deep neural network model by applying dynamic quantization techniques to different layers to achieve maximum compression without compromising accuracy. Afterwards, we design a custom hardware architecture. We selected an FPGA as the target platform because FPGAs provide high energy efficiency and can be integrated into portable devices. Our solution, implemented on a Xilinx UltraScale+ MPSoC FPGA, is 4.2× faster and 577.3× more energy efficient than the original implementation of the hand pose estimation algorithm on an NVIDIA GeForce GTX 1070.
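
A minimal sketch of per-layer weight quantization, the basic idea behind the model compression step, is shown below. The symmetric uniform scheme, the layer shapes and the per-layer bit-width assignment are illustrative assumptions, not the paper's exact dynamic quantization procedure.

```python
# Illustrative per-layer quantization sketch; shapes and bit widths are assumptions.
import numpy as np

def quantize_layer(weights: np.ndarray, n_bits: int):
    """Return integer weights and the per-layer scale for dequantization."""
    qmax = 2 ** (n_bits - 1) - 1
    scale = np.abs(weights).max() / qmax if weights.size else 1.0
    q = np.clip(np.round(weights / scale), -qmax - 1, qmax).astype(np.int32)
    return q, scale

# Hypothetical mixed-precision assignment: more bits for sensitive layers.
layers = {"conv1": (np.random.randn(64, 3, 3, 3), 8),
          "conv2": (np.random.randn(128, 64, 3, 3), 6),
          "fc":    (np.random.randn(1024, 63), 4)}
compressed = {name: quantize_layer(w, b) for name, (w, b) in layers.items()}
```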


Subject(s)
Algorithms , Hand , Neural Networks, Computer , Humans , Movement , Physical Phenomena
8.
Nat Commun ; 14(1): 6348, 2023 Oct 10.
Article in English | MEDLINE | ID: mdl-37816751

ABSTRACT

Advancements in AI have led to the emergence of in-memory-computing architectures as a promising solution for the associated computing and memory challenges. This study introduces a novel in-memory-computing (IMC) crossbar macro utilizing a multi-level ferroelectric field-effect transistor (FeFET) cell for multi-bit multiply and accumulate (MAC) operations. The proposed 1FeFET-1R cell design stores multi-bit information while minimizing the effect of device variability on accuracy. Experimental validation was performed using FeFET devices in 28 nm HKMG technology. Unlike traditional resistive-memory-based analog computing, our approach leverages the electrical characteristics of the stored data within the memory cell to derive MAC operation results encoded in activation time and accumulated current. Remarkably, our design achieves 96.6% accuracy for handwriting recognition and 91.5% accuracy for image classification without extra training. Furthermore, it demonstrates exceptional performance, achieving 885.4 TOPS/W, nearly double that of existing designs. This study represents the first successful implementation of an in-memory macro using a multi-state FeFET cell for complete MAC operations, preserving crossbar density without additional structural overhead.
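
A behavioral sketch of the time/current MAC encoding is given below: the stored multi-level weight sets the cell current, the input activation sets the word-line pulse width, and the accumulated bit-line charge approximates the multiply-accumulate result. Device-level variability and nonlinearity, which the paper addresses, are omitted, and the unit values are arbitrary.

```python
# Behavioral model only; i_unit and t_unit are arbitrary illustrative constants.
import numpy as np

def imc_mac(weights, activations, i_unit=1e-6, t_unit=1e-9):
    """weights, activations: small non-negative integers (multi-bit levels)."""
    currents = np.asarray(weights) * i_unit            # cell current per weight level
    times = np.asarray(activations) * t_unit           # word-line activation pulse widths
    charge = np.sum(currents * times)                  # accumulated charge on the bit line
    return charge / (i_unit * t_unit)                  # digital readback of the MAC result

print(imc_mac([3, 1, 2, 0], [1, 3, 2, 3]))             # 3*1 + 1*3 + 2*2 + 0*3 = 10
```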

9.
J Imaging ; 7(9)2021 Sep 03.
Article in English | MEDLINE | ID: mdl-34564101

ABSTRACT

In recent years, there has been an increasing demand to digitize and electronically access historical records. Optical character recognition (OCR) is typically applied to scanned historical archives to transcribe them from document images into machine-readable texts. Many libraries offer special stationary equipment for scanning historical documents. However, to digitize these records without removing them from where they are archived, portable devices that combine scanning and OCR capabilities are required. An existing end-to-end OCR software suite called anyOCR achieves high recognition accuracy for historical documents. However, it is unsuitable for portable devices, as its high computational complexity results in long runtimes and high power consumption. Therefore, we have designed and implemented a configurable hardware-software programmable SoC called iDocChip that makes use of anyOCR techniques to achieve high accuracy. As a low-power and energy-efficient system with real-time capabilities, the iDocChip delivers the required portability. In this paper, we present the hybrid CPU-FPGA architecture of iDocChip along with the optimized software implementations of anyOCR. We demonstrate our results on multiple platforms with respect to runtime and power consumption. The iDocChip system outperforms the existing anyOCR implementation by 44× while achieving 2201× higher energy efficiency and a 3.8% increase in recognition accuracy.

10.
Article in English | MEDLINE | ID: mdl-19162648

ABSTRACT

A new approach for ECG data compression is proposed in this paper. The approach employs a template-model-fitting algorithm driven by a nonlinear least-squares optimization procedure. Only 12 parameters are required to fully represent the ECG signal without loss of diagnostic information. The effectiveness of our ECG compression technique is described in terms of high compression ratios, relatively low distortion values of less than 9%, and low computational cost, demonstrating its suitability for ECG data storage and online transmission. Comparisons with other recent compression methods in the literature show that our method performs better.
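
A minimal sketch of compressing one beat into 12 fitted parameters via nonlinear least squares is shown below. The sum-of-four-Gaussians template (4 x amplitude, center, width = 12 parameters) and the sampling rate are illustrative assumptions, not necessarily the paper's template model.

```python
# Illustrative 12-parameter template fit; the Gaussian-kernel template is an assumption.
import numpy as np
from scipy.optimize import least_squares

def template(params, t):
    """Sum of four Gaussian kernels; params = [a1, m1, s1, ..., a4, m4, s4]."""
    rows = params.reshape(4, 3)
    return sum(a * np.exp(-((t - m) ** 2) / (2 * s ** 2)) for a, m, s in rows)

def compress_beat(beat, fs=360.0):
    """Fit one beat and return the 12 parameters that represent it."""
    t = np.arange(len(beat)) / fs
    centers = np.quantile(t, [0.3, 0.45, 0.5, 0.7])        # crude initial guess
    x0 = np.column_stack([np.full(4, beat.max()),
                          centers,
                          np.full(4, 0.05)]).ravel()
    res = least_squares(lambda p: template(p, t) - beat, x0)
    return res.x

def reconstruct_beat(params, n_samples, fs=360.0):
    return template(params, np.arange(n_samples) / fs)
```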


Subject(s)
Algorithms , Data Compression/methods , Diagnosis, Computer-Assisted/methods , Electrocardiography/methods , Signal Processing, Computer-Assisted , Humans