Results 1 - 20 of 338
1.
Proc Natl Acad Sci U S A ; 121(17): e2319625121, 2024 Apr 23.
Article in English | MEDLINE | ID: mdl-38640343

ABSTRACT

Distributed nonconvex optimization underpins key functionalities of numerous distributed systems, ranging from power systems, smart buildings, cooperative robots, and vehicle networks to sensor networks. Recently, it has also emerged as a promising solution to handle the enormous growth in data and model sizes in deep learning. A fundamental problem in distributed nonconvex optimization is avoiding convergence to saddle points, which significantly degrade optimization accuracy. We find that the process of quantization, which is necessary for all digital communications, can be exploited to enable saddle-point avoidance. More specifically, we propose a stochastic quantization scheme and prove that it can effectively escape saddle points and ensure convergence to a second-order stationary point in distributed nonconvex optimization. With an easily adjustable quantization granularity, the approach allows a user to control the number of bits sent per iteration and, hence, to aggressively reduce the communication overhead. Numerical experimental results using distributed optimization and learning problems on benchmark datasets confirm the effectiveness of the approach.
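The stochastic quantization idea at the heart of this abstract can be illustrated with a minimal dithered-rounding sketch. This is a generic unbiased stochastic quantizer, not the authors' exact scheme; the grid step `delta` and the test signal are assumptions for illustration:

```python
import numpy as np

def stochastic_quantize(x, delta=0.1, rng=None):
    """Stochastically round each entry of x to the grid {k * delta}.

    Rounds up with probability equal to the fractional position within the
    quantization cell, so the quantizer is unbiased: E[Q(x)] = x.
    """
    rng = np.random.default_rng() if rng is None else rng
    scaled = x / delta
    floor = np.floor(scaled)
    frac = scaled - floor                 # fractional position in [0, 1)
    up = rng.random(x.shape) < frac       # round up with probability frac
    return (floor + up) * delta

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)
q = stochastic_quantize(x, delta=0.25, rng=rng)
bias = abs((q - x).mean())                # close to 0 by unbiasedness
```

The injected randomness is what lets quantization double as a perturbation mechanism: the quantized iterates never settle exactly on a saddle point.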

2.
Small ; : e2311491, 2024 Apr 29.
Article in English | MEDLINE | ID: mdl-38682729

ABSTRACT

Conductance quantization of 2D materials is significant for understanding charge transport at the atomic scale, which provides a platform to manipulate quantum states, showing promising applications for nanoelectronics and memristors. However, the conventional methods for investigating conductance quantization are only applicable to materials consisting of one element, such as metals and graphene. The experimental observation of conductance quantization in transition metal dichalcogenides (TMDCs) with complex compositions and structures remains a challenge. To address this issue, an approach is proposed to characterize charge transport across a single atom in TMDCs by integrating in situ synthesized 1T'-WTe2 electrodes with the scanning tunneling microscope break junction (STM-BJ) technique. The quantized conductance of 1T'-WTe2 is measured for the first time, and the quantum states can be modulated by stretching speed and solvent. Combined with theoretical calculations, the evolution of quantized conductance and the corresponding configurations during the break junction process is demonstrated. This work provides a facile and reliable avenue to characterize and modulate conductance quantization of 2D materials, substantially expanding the research scope of quantum effects in diverse materials.

3.
Sensors (Basel) ; 24(4)2024 Feb 09.
Article in English | MEDLINE | ID: mdl-38400306

ABSTRACT

Deep-learning models play a significant role in modern software solutions, with capabilities for handling complex tasks, improving accuracy, automating processes, and adapting to diverse domains, ultimately contributing to advancements in various industries. This study provides a comparison of deep-learning techniques that can be deployed on resource-constrained edge devices. As a novel contribution, we analyze the performance of seven Convolutional Neural Network models in the context of data augmentation, feature extraction, and model compression using acoustic data. The results show that the best performers can achieve an optimal trade-off between model accuracy and size when compressed with weight and filter pruning followed by 8-bit quantization. In adherence to the study workflow utilizing the forest sound dataset, MobileNet-v3-small and ACDNet achieved accuracies of 87.95% and 85.64%, respectively, while maintaining compact sizes of 243 KB and 484 KB. Hence, this study concludes that CNNs can be optimized and compressed to be deployed on resource-constrained edge devices for classifying forest environment sounds.
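The compression pipeline described here (magnitude pruning followed by 8-bit quantization) can be sketched on a single weight tensor. This is a generic post-training scheme for illustration; the paper's filter pruning and training details are not reproduced, and the tensor shape and sparsity level are arbitrary assumptions:

```python
import numpy as np

def prune_weights(w, sparsity=0.5):
    """Magnitude pruning: zero out the smallest-magnitude fraction of weights."""
    k = int(w.size * sparsity)
    if k == 0:
        return w.copy()
    thresh = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    return np.where(np.abs(w) <= thresh, 0.0, w)

def quantize_int8(w):
    """Symmetric per-tensor 8-bit quantization: w ~ scale * q, q in [-127, 127]."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))
w_pruned = prune_weights(w, sparsity=0.5)
q, scale = quantize_int8(w_pruned)
w_restored = q.astype(np.float32) * scale
max_err = np.abs(w_restored - w_pruned).max()   # bounded by scale / 2
```

Pruning shrinks the stored model (zeros compress well), and the int8 representation then cuts each remaining weight from 4 bytes to 1, which is how sub-megabyte model sizes like those above become feasible.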

4.
Sensors (Basel) ; 24(8)2024 Apr 19.
Article in English | MEDLINE | ID: mdl-38676223

ABSTRACT

Vector Quantization (VQ) is a technique with a wide range of applications. For example, it can be used for image compression. The codebook design for VQ has great significance in the quality of the quantized signals and can benefit from the use of swarm intelligence. Initialization of the Linde-Buzo-Gray (LBG) algorithm, which is the most popular VQ codebook design algorithm, is a step that directly influences VQ performance, as the convergence speed and codebook quality depend on the initial codebook. A widely used initialization alternative is random initialization, in which the initial set of codevectors is drawn randomly from the training set. Other initialization methods can lead to a better quality of the designed codebooks. The present work evaluates the impacts of initialization strategies on swarm intelligence algorithms for codebook design in terms of the quality of the designed codebooks, assessed by the quality of the reconstructed images, and in terms of the convergence speed, evaluated by the number of iterations. Initialization strategies consist of a combination of codebooks obtained by initialization algorithms from the literature with codebooks composed of vectors randomly selected from the training set. The possibility of combining different initialization techniques provides new perspectives in the search for the quality of the VQ codebooks. Nine initialization strategies are presented, which are compared with random initialization. Initialization strategies are evaluated on the following algorithms for codebook design based on swarm clustering: modified firefly algorithm-Linde-Buzo-Gray (M-FA-LBG), modified particle swarm optimization-Linde-Buzo-Gray (M-PSO-LBG), modified fish school search-Linde-Buzo-Gray (M-FSS-LBG) and their accelerated versions (M-FA-LBGa, M-PSO-LBGa and M-FSS-LBGa) which are obtained by replacing the LBG with the accelerated LBG algorithm. 
The simulation results point to the benefits of the proposed initialization strategies. The results show gains of up to 4.43 dB in terms of PSNR for image Clock with M-PSO-LBG codebooks of size 512, and codebook design time savings of up to 67.05% for image Clock with M-FA-LBGa codebooks of size 512, by using the initialization strategies in place of random initialization.
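For readers unfamiliar with LBG, a minimal NumPy sketch of the basic algorithm with random initialization follows. The swarm-based variants and the paper's initialization strategies are not reproduced; the dataset, dimensions, and codebook size are arbitrary choices for illustration:

```python
import numpy as np

def distortion(training, codebook):
    """Mean squared distance from each training vector to its nearest codevector."""
    d = ((training[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d.min(axis=1).mean()

def lbg(training, codebook_size, iters=20, rng=None):
    """Basic LBG (generalized Lloyd) codebook design."""
    rng = np.random.default_rng() if rng is None else rng
    # Random initialization: draw initial codevectors from the training set.
    idx = rng.choice(len(training), size=codebook_size, replace=False)
    codebook = training[idx].copy()
    for _ in range(iters):
        # Nearest-neighbor partition of the training vectors.
        d = ((training[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(axis=1)
        # Centroid update; empty cells are left unchanged.
        for j in range(codebook_size):
            members = training[assign == j]
            if len(members):
                codebook[j] = members.mean(axis=0)
    return codebook

rng = np.random.default_rng(0)
train = rng.normal(size=(500, 4))
cb = lbg(train, codebook_size=16, rng=rng)
```

Because each Lloyd iteration only refines the partition implied by the current codebook, the final quality and the iteration count both depend heavily on the initial codevectors, which is exactly the sensitivity the initialization strategies above target.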

5.
Sensors (Basel) ; 24(13)2024 Jun 28.
Article in English | MEDLINE | ID: mdl-39000982

ABSTRACT

Accurate 3D image recognition, critical for autonomous driving safety, is shifting from the LIDAR-based point cloud to camera-based depth estimation technologies driven by cost considerations and the point cloud's limitations in detecting distant small objects. This research aims to enhance MDE (Monocular Depth Estimation) using a single camera, offering extreme cost-effectiveness in acquiring 3D environmental data. In particular, this paper focuses on novel data augmentation methods designed to enhance the accuracy of MDE. Our research addresses the challenge of limited MDE data quantities by proposing the use of synthetic-based augmentation techniques: Mask, Mask-Scale, and CutFlip. The implementation of these synthetic-based data augmentation strategies has demonstrably enhanced the accuracy of MDE models by 4.0% compared to the original dataset. Furthermore, this study introduces the RMS (Real-time Monocular Depth Estimation configuration considering Resolution, Efficiency, and Latency) algorithm, designed for the optimization of neural networks to augment the performance of contemporary monocular depth estimation technologies through a three-step process. Initially, it selects a model based on minimum latency and REL criteria, followed by refining the model's accuracy using various data augmentation techniques and loss functions. Finally, the refined model is compressed using quantization and pruning techniques to minimize its size for efficient on-device real-time applications. Experimental results from implementing the RMS algorithm indicated that, within the required latency and size constraints, the IEBins model exhibited the most accurate REL (absolute RELative error) performance, achieving a 0.0480 REL. Furthermore, the data augmentation combination of the original dataset with Flip, Mask, and CutFlip, alongside the SigLoss loss function, displayed the best REL performance, with a score of 0.0461. 
The network compression technique using FP16 was analyzed as the most effective, reducing the model size by 83.4% compared to the original while maintaining the least impact on REL performance and latency. Finally, the performance of the RMS algorithm was validated on the on-device autonomous driving platform, NVIDIA Jetson AGX Orin, through which optimal deployment strategies were derived for various applications and scenarios requiring autonomous driving technologies.
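The FP16 step reported above amounts, at its core, to a half-precision cast of the network weights. A minimal sketch (the array shape and weight scale are arbitrary assumptions; the paper's 83.4% figure reflects its full compression setup, not this cast alone):

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(scale=0.05, size=(1000, 256)).astype(np.float32)

w16 = weights.astype(np.float16)            # half-precision storage
restored = w16.astype(np.float32)

size_ratio = w16.nbytes / weights.nbytes    # 0.5: half the bytes of FP32
max_abs_err = np.abs(restored - weights).max()
```

FP16 keeps about 11 bits of significand, so for well-scaled weights the round-trip error is tiny relative to the weight magnitudes, which is why accuracy and latency are barely affected.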

6.
Sensors (Basel) ; 24(14)2024 Jul 20.
Article in English | MEDLINE | ID: mdl-39066116

ABSTRACT

Owing to its low-complexity implementation, direction-of-arrival (DOA) estimation based on one-bit quantized data is of interest, but signal processing struggles to attain the demanded estimation accuracy. In this study, we injected a number of noise components into the received data before a uniform linear array (ULA) composed of one-bit quantizers. Then, based on this designed noise-boosted quantizer unit (NBQU), we propose an efficient one-bit multiple signal classification (MUSIC) method for estimating the DOA. Benefiting from the injected noise, the numerical results show that the proposed NBQU-based MUSIC method outperforms existing one-bit MUSIC methods in terms of estimation accuracy and resolution. Furthermore, with the optimal root mean square (RMS) amplitude of the injected noise, the estimation accuracy of the proposed method can approach that of the MUSIC method based on the complete analog data.
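The noise-boosting effect behind the NBQU can be illustrated with a one-bit dithered quantizer: averaging many sign decisions taken with injected Gaussian noise recovers amplitude information that a bare sign() destroys. This is a generic dithering sketch, not the paper's array processing; the signal level and noise RMS are illustrative assumptions:

```python
import numpy as np

def one_bit_nbqu(x, noise_rms, rng):
    """One-bit quantizer with injected Gaussian noise: sign(x + n)."""
    n = rng.normal(scale=noise_rms, size=x.shape)
    return np.sign(x + n)

rng = np.random.default_rng(0)
x = np.full(20_000, 0.3)                  # constant signal below the noise RMS
y = one_bit_nbqu(x, noise_rms=1.0, rng=rng).mean()
# E[sign(x + n)] = 2 * Phi(x / sigma) - 1, about 0.236 for x = 0.3, sigma = 1,
# so the average of the one-bit outputs tracks the underlying amplitude.
```

Without the injected noise every sample of this constant signal would quantize to +1 and the amplitude would be unrecoverable; with it, the mean of the one-bit outputs is a smooth, invertible function of the input, which is the mechanism the NBQU exploits.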

7.
Sensors (Basel) ; 24(7)2024 Apr 02.
Article in English | MEDLINE | ID: mdl-38610476

ABSTRACT

The advancement of unmanned aerial vehicles (UAVs) enables early detection of numerous disasters. Efforts have been made to automate the monitoring of data from UAVs, with machine learning methods recently attracting significant interest. These solutions often face challenges with high computational costs and energy usage. Conventionally, data from UAVs are processed using cloud computing, where they are sent to the cloud for analysis. However, this method might not meet the real-time needs of disaster relief scenarios. In contrast, edge computing provides real-time processing at the site but still struggles with computational and energy efficiency issues. To overcome these obstacles and enhance resource utilization, this paper presents a convolutional neural network (CNN) model with an early exit mechanism designed for fire detection in UAVs. This model is implemented using TSMC 40 nm CMOS technology, which aids in hardware acceleration. Notably, the neural network has a modest parameter count of 11.2 k. In the hardware computation part, the CNN circuit completes fire detection in approximately 230,000 cycles. Power-gating techniques are also used to turn off inactive memory, contributing to reduced power consumption. The experimental results show that this neural network reaches a maximum accuracy of 81.49% in the hardware implementation stage. After automatic layout and routing, the CNN hardware accelerator can operate at 300 MHz, consuming 117 mW of power.

8.
Nano Lett ; 23(1): 17-24, 2023 Jan 11.
Article in English | MEDLINE | ID: mdl-36573935

ABSTRACT

The development of devices that exhibit both superconducting and semiconducting properties is an important endeavor for emerging quantum technologies. We investigate superconducting nanowires fabricated on a silicon-on-insulator (SOI) platform. Aluminum from deposited contact electrodes is found to interdiffuse with Si along the entire length of the nanowire, over micrometer length scales and at temperatures well below the Al-Si eutectic. The phase-transformed material is conformal with the predefined device patterns. The superconducting properties of a transformed mesoscopic ring formed on a SOI platform are investigated. Low-temperature magnetoresistance oscillations, quantized in units of the fluxoid, h/2e, are observed.

9.
Nano Lett ; 23(23): 11137-11144, 2023 Dec 13.
Article in English | MEDLINE | ID: mdl-37948302

ABSTRACT

Disorder is the primary obstacle in the current Majorana nanowire experiments. Reducing disorder or achieving ballistic transport is thus of paramount importance. In clean and ballistic nanowire devices, quantized conductance is expected, with plateau quality serving as a benchmark for disorder assessment. Here, we introduce ballistic PbTe nanowire devices grown by using the selective-area-growth (SAG) technique. Quantized conductance plateaus in units of 2e2/h are observed at zero magnetic field. This observation represents an advancement in diminishing disorder within SAG nanowires as most of the previously studied SAG nanowires (InSb or InAs) have not exhibited zero-field ballistic transport. Notably, the plateau values indicate that the ubiquitous valley degeneracy in PbTe is lifted in nanowire devices. This degeneracy lifting addresses an additional concern in the pursuit of Majorana realization. Moreover, these ballistic PbTe nanowires may enable the search for clean signatures of the spin-orbit helical gap in future devices.

10.
Nano Lett ; 23(8): 3274-3281, 2023 Apr 26.
Article in English | MEDLINE | ID: mdl-37014819

ABSTRACT

Landau quantization associated with the quantized cyclotron motion of electrons under a magnetic field provides an effective way to investigate topologically protected quantum states with entangled degrees of freedom and multiple quantum numbers. Here we report the cascade of Landau quantization in a strained type-II Dirac semimetal NiTe2 with spectroscopic-imaging scanning tunneling microscopy. The uniform-height surfaces exhibit single-sequence Landau levels (LLs) under a magnetic field, originating from the quantization of the topological surface state (TSS) across the Fermi level. Strikingly, we reveal multiple sequences of LLs in the strained surface regions where the rotation symmetry is broken. First-principles calculations demonstrate that the multiple LLs attest to the remarkable lifting of the valley degeneracy of the TSS by in-plane uniaxial or shear strains. Our findings pave a pathway to tune multiple degrees of freedom and quantum numbers of TMDs via strain engineering for practical applications such as high-frequency rectifiers, Josephson diodes, and valleytronics.

11.
Entropy (Basel) ; 26(6)2024 May 29.
Article in English | MEDLINE | ID: mdl-38920476

ABSTRACT

Block compressed sensing (BCS) is a promising method for resource-constrained image/video coding applications. However, the quantization of BCS measurements has posed a challenge, leading to significant quantization errors and encoding redundancy. In this paper, we propose a quantization method for BCS measurements using convolutional neural networks (CNNs). The quantization process maps measurements to quantized data that follow a uniform distribution based on the measurements' distribution, which aims to maximize the amount of information carried by the quantized data. The dequantization process restores the quantized data to data that conform to the measurements' distribution. The restored data are then modified by the correlation information of the measurements drawn from the quantized data, with the goal of minimizing the quantization errors. The proposed method uses CNNs to construct the quantization and dequantization processes, and the networks are trained jointly. The distribution parameters of each block are used as side information, which is quantized with 1 bit by the same method. Extensive experiments on four public datasets showed that, compared with uniform quantization and entropy coding, the proposed method can improve the PSNR by an average of 0.48 dB without using entropy coding when the compression bit rate is 0.1 bpp.
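The uniform-quantization baseline that the learned approach is compared against can be sketched as a scalar quantizer with midpoint reconstruction. The bit depth and input range below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def uniform_quantize(x, bits, lo, hi):
    """Uniform scalar quantizer: map x in [lo, hi] to 2**bits integer levels."""
    levels = 2 ** bits
    step = (hi - lo) / levels
    return np.clip(np.floor((x - lo) / step), 0, levels - 1).astype(np.int64)

def uniform_dequantize(q, bits, lo, hi):
    """Reconstruct each level at its cell midpoint."""
    step = (hi - lo) / 2 ** bits
    return lo + (q + 0.5) * step

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=10_000)
q = uniform_quantize(x, bits=4, lo=-1.0, hi=1.0)
xr = uniform_dequantize(q, bits=4, lo=-1.0, hi=1.0)
max_err = np.abs(xr - x).max()            # at most step / 2 = 0.0625
```

A uniform quantizer is only information-efficient when the input is itself uniform; the CNN approach above learns a mapping that reshapes the measurement distribution toward uniform before quantizing, which is where its gain over this baseline comes from.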

12.
Entropy (Basel) ; 26(5)2024 Apr 28.
Article in English | MEDLINE | ID: mdl-38785620

ABSTRACT

Quantum physics is intrinsically probabilistic, where the Born rule yields the probabilities associated with a state that deterministically evolves. The entropy of a quantum state quantifies the amount of randomness (or information loss) of such a state. The degrees of freedom of a quantum state are position and spin. We focus on the spin degree of freedom and elucidate the spin-entropy. Then, we present some of its properties and show how entanglement increases spin-entropy. A dynamic model for the time evolution of spin-entropy concludes the paper.

13.
Entropy (Basel) ; 26(1)2024 Jan 17.
Article in English | MEDLINE | ID: mdl-38248201

ABSTRACT

We are looking at an aggregation of matter into granules. Diffusion plays a pivotal role here. When going down to the nanometer scale (the so-called nanoscale quantum-size effect limit), quantum mechanics, and the Heisenberg uncertainty relation, may take over the role of classical diffusion, as viewed typically in the mesoscopic/stochastic limit. A d-dimensional entropy-production aggregation of the granules-involving matter in the granule-size space is considered in terms of a (sub)diffusive realization. It turns out that when taking a full d-dimensional pathway of the aggregation toward the nanoscale, one is capable of disclosing a Heisenberg-type (diffusional) relation, setting up an upper uncertainty bound for the (sub)diffusive, very slow granules-including environment that, within the granule-size analogy invoked, matches the quantum limit of h/2πµ (µ is the average mass of a granule; h is Planck's constant) for the diffusion coefficient of the aggregation, first proposed by Fürth in 1933 and qualitatively foreseen by Schrödinger some years before, both in the context of a diffusing particle. The classical-quantum passage uncovered here, also termed insightfully the quantum-size effect (as borrowed from the quantum dots' parlance), works properly for the three-dimensional (d = 3) case, making use of the substantial physical fact that the (nano)granules interact readily via their surfaces with the also-granular surroundings in which they are immersed. This natural observation is embodied in the basic averaging construction of the diffusion coefficient of the entropy-productive (nano)aggregation of interest.

14.
J Med Internet Res ; 25: e42637, 2023 06 09.
Article in English | MEDLINE | ID: mdl-37294606

ABSTRACT

BACKGROUND: Computer-aided detection, used in the screening and diagnosing of cognitive impairment, provides an objective, valid, and convenient assessment. Particularly, digital sensor technology is a promising detection method. OBJECTIVE: This study aimed to develop and validate a novel Trail Making Test (TMT) using a combination of paper and electronic devices. METHODS: This study included community-dwelling older adult individuals (n=297), who were classified into (1) cognitively healthy controls (HC; n=100 participants), (2) participants diagnosed with mild cognitive impairment (MCI; n=98 participants), and (3) participants with Alzheimer disease (AD; n=99 participants). An electromagnetic tablet was used to record each participant's hand-drawn stroke. A sheet of A4 paper was placed on top of the tablet to maintain the traditional interaction style for participants who were not familiar or comfortable with electronic devices (such as touchscreens). In this way, all participants were instructed to perform the TMT-square and circle. Furthermore, we developed an efficient and interpretable cognitive impairment-screening model to automatically analyze cognitive impairment levels that were dependent on demographic characteristics and time-, pressure-, jerk-, and template-related features. Among these features, novel template-based features were based on a vector quantization algorithm. First, the model identified a candidate trajectory as the standard answer (template) from the HC group. The distance between the recorded trajectories and reference was computed as an important evaluation index. To verify the effectiveness of our method, we compared the performance of a well-trained machine learning model using the extracted evaluation index with conventional demographic characteristics and time-related features. The well-trained model was validated using follow-up data (HC group: n=38; MCI group: n=32; and AD group: n=22). 
RESULTS: We compared 5 candidate machine learning methods and selected random forest as the ideal model with the best performance (accuracy: 0.726 for HC vs MCI, 0.929 for HC vs AD, and 0.815 for AD vs MCI). Meanwhile, the well-trained classifier achieved better performance than the conventional assessment method, with high stability and accuracy of the follow-up data. CONCLUSIONS: The study demonstrated that a model combining both paper and electronic TMTs increases the accuracy of evaluating participants' cognitive impairment compared to conventional paper-based feature assessment.


Subjects
Alzheimer Disease, Cognitive Dysfunction, Humans, Aged, Trail Making Test, Magnetic Resonance Imaging/methods, Cognitive Dysfunction/diagnosis, Cognitive Dysfunction/psychology, Alzheimer Disease/diagnosis, Electronics
15.
Sensors (Basel) ; 23(2)2023 Jan 16.
Article in English | MEDLINE | ID: mdl-36679813

ABSTRACT

In this paper, a complex-valued Zadoff-Chu measurement matrix is proposed and used in an image-based quantized compressive sensing (CS) scheme. The results of theoretical analysis and simulations show that the reconstruction performance generated by the proposed Zadoff-Chu measurement matrix is better than that obtained by commonly used real-valued measurement matrices. We also applied block compressive sensing (BCS) to reduce the computational complexity of CS and analyzed the effect of block size on the reconstruction performance of the method. The results of simulations revealed that an appropriate choice of block size can not only reduce the computational complexity but also improve the accuracy of reconstruction. Moreover, we studied the effect of quantization on the reconstruction performance of image-based BCS through simulations, and the results showed that analog-to-digital converters with medium resolutions are sufficient to implement quantization and achieve comparable reconstruction performance to that obtained at high resolutions, based on which an image-based BCS framework with low power consumption can thus be developed.


Subjects
Algorithms, Data Compression
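The Zadoff-Chu sequence underlying the measurement matrix in entry 15 can be generated from the standard formula for odd lengths. The root and length below are arbitrary choices (any root coprime to the length works); the paper's construction of the measurement matrix from the sequence is not reproduced:

```python
import numpy as np

def zadoff_chu(u, N):
    """Zadoff-Chu sequence of odd length N with root u, gcd(u, N) = 1."""
    n = np.arange(N)
    return np.exp(-1j * np.pi * u * n * (n + 1) / N)

zc = zadoff_chu(u=7, N=353)    # 353 is prime, so any root 1..352 is valid

# CAZAC property: constant amplitude, ideal periodic autocorrelation.
amp = np.abs(zc)
acorr = np.array([np.vdot(zc, np.roll(zc, s)) for s in range(353)])
```

The constant amplitude and near-zero autocorrelation at all nonzero shifts are what make rows built from cyclic shifts of the sequence well-conditioned for compressive sensing.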
16.
Sensors (Basel) ; 23(10)2023 May 11.
Article in English | MEDLINE | ID: mdl-37430583

ABSTRACT

Over the past few years, several applications have been extensively exploiting the advantages of deep learning, in particular when using convolutional neural networks (CNNs). The intrinsic flexibility of such models makes them widely adopted in a variety of practical applications, from medical to industrial. In this latter scenario, however, consumer Personal Computer (PC) hardware is not always suitable for the potentially harsh conditions of the working environment and the strict timing that industrial applications typically have. Therefore, the design of custom FPGA (Field Programmable Gate Array) solutions for network inference is gaining massive attention from researchers and companies alike. In this paper, we propose a family of network architectures composed of three kinds of custom layers working with integer arithmetic at customizable precision (down to just two bits). Such layers are designed to be effectively trained on classical GPUs (Graphics Processing Units) and then synthesized to FPGA hardware for real-time inference. The idea is to provide a trainable quantization layer, called Requantizer, acting both as a non-linear activation for neurons and a value rescaler to match the desired bit precision. This way, the training is not only quantization-aware, but also capable of estimating the optimal scaling coefficients to accommodate both the non-linear nature of the activations and the constraints imposed by the limited precision. In the experimental section, we test the performance of this kind of model while working both on classical PC hardware and a case-study implementation of a signal peak detection device running on a real FPGA. We employ TensorFlow Lite for training and comparison, and use Xilinx FPGAs and Vivado for synthesis and implementation.
The results show an accuracy of the quantized networks close to the floating point version, without the need for representative data for calibration as in other approaches, and performance that is better than dedicated peak detection algorithms. The FPGA implementation is able to run in real time at a rate of four gigapixels per second with moderate hardware resources, while achieving a sustained efficiency of 0.5 TOPS/W (tera operations per second per watt), in line with custom integrated hardware accelerators.
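The requantizer idea (rescale, round to a narrow integer grid, clip, rescale back, so downstream layers see exactly the values the FPGA will compute with) can be sketched as a forward-pass "fake quantization" step. This is a generic sketch, not the paper's trainable layer; in practice the scale would be a learned parameter and gradients would flow via a straight-through estimator:

```python
import numpy as np

def requantize(x, scale, bits=2, signed=True):
    """Fake quantization: project x onto a (2**bits)-level grid of pitch `scale`."""
    if signed:
        qmin, qmax = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    else:
        qmin, qmax = 0, 2 ** bits - 1
    q = np.clip(np.round(x / scale), qmin, qmax)   # integer code the FPGA sees
    return q * scale                               # back to the float domain

x = np.linspace(-1.5, 1.5, 7)
y = requantize(x, scale=0.5, bits=2)   # 2-bit signed grid: {-1.0, -0.5, 0.0, 0.5}
```

Training through this projection is what makes the network "quantization-aware": the weights adapt to the exact grid the deployed integer hardware will use.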

17.
Sensors (Basel) ; 23(5)2023 Feb 21.
Article in English | MEDLINE | ID: mdl-36904605

ABSTRACT

Processing-in-Memory (PIM) based on Resistive Random Access Memory (RRAM) is an emerging acceleration architecture for artificial neural networks. This paper proposes an RRAM PIM accelerator architecture that does not use Analog-to-Digital Converters (ADCs) and Digital-to-Analog Converters (DACs). Additionally, no additional memory usage is required to avoid the need for a large amount of data transportation in convolution computation. Partial quantization is introduced to reduce the accuracy loss. The proposed architecture can substantially reduce the overall power consumption and accelerate computation. The simulation results show that the image recognition rate for the Convolutional Neural Network (CNN) algorithm can reach 284 frames per second at 50 MHz using this architecture. The accuracy of the partial quantization remains almost unchanged compared to the algorithm without quantization.

18.
Sensors (Basel) ; 23(5)2023 Mar 01.
Article in English | MEDLINE | ID: mdl-36904905

ABSTRACT

Atrial Fibrillation (AF) is one of the most common heart arrhythmias. It is known to cause up to 15% of all strokes. Modern detection systems for arrhythmias, such as single-use patch electrocardiogram (ECG) devices, have to be energy efficient, small, and affordable. In this work, specialized hardware accelerators were developed. First, an artificial neural network (NN) for the detection of AF was optimized. Special attention was paid to the minimum requirements for inference on a RISC-V-based microcontroller. Hence, a 32-bit floating-point-based NN was analyzed. To reduce the silicon area needed, the NN was quantized to an 8-bit fixed-point datatype (Q7). Based on this datatype, specialized accelerators were developed. Those accelerators included single-instruction multiple-data (SIMD) hardware as well as accelerators for activation functions such as the sigmoid and hyperbolic tangent. To accelerate activation functions that require the e-function as part of their computation (e.g., softmax), an e-function accelerator was implemented in hardware. To compensate for the losses of quantization, the network was expanded and optimized for run-time and memory requirements. The resulting NN has a 7.5% lower run-time in clock cycles (cc) without the accelerators and 2.2 percentage points (pp) lower accuracy compared to a floating-point-based net, while requiring 65% less memory. With the specialized accelerators, the inference run-time was lowered by 87.2% while the F1-score decreased by 6.1 pp. Implementing the Q7 accelerators instead of the floating-point unit (FPU), the silicon area needed for the microcontroller in 180 nm technology is below 1 mm2.


Subjects
Atrial Fibrillation, Humans, Silicon, Electrocardiography, Computers, Neural Networks, Computer
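The Q7 fixed-point datatype used in entry 18 stores a value in [-1, 1) as an int8 with 7 fractional bits. A minimal round-trip sketch (the sample values are arbitrary; the clipping at +127 mirrors how Q7 cannot represent +1.0 exactly):

```python
import numpy as np

def float_to_q7(x):
    """Quantize floats in [-1, 1) to Q7 fixed point (int8, 7 fractional bits)."""
    return np.clip(np.round(x * 128), -128, 127).astype(np.int8)

def q7_to_float(q):
    """Reconstruct the float value represented by a Q7 integer."""
    return q.astype(np.float32) / 128.0

x = np.array([-1.0, -0.5, 0.0, 0.25, 0.999], dtype=np.float32)
q = float_to_q7(x)
xr = q7_to_float(q)
max_err = np.abs(xr - x).max()   # 1/256 rounding error plus clipping near +1
```

Because every Q7 multiply-accumulate is plain integer arithmetic, an FPU becomes unnecessary, which is the source of the silicon-area savings the abstract reports.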
19.
Sensors (Basel) ; 23(20)2023 Oct 22.
Article in English | MEDLINE | ID: mdl-37896729

ABSTRACT

Heart rate variability (HRV) serves as a significant physiological measure that mirrors the regulatory capacity of the cardiac autonomic nervous system. It not only indicates the extent of the autonomic nervous system's influence on heart function but also unveils the connection between emotions and psychological disorders. Currently, in the field of emotion recognition using HRV, most methods focus on feature extraction through the comprehensive analysis of signal characteristics; however, these methods lack in-depth analysis of the local features in the HRV signal and cannot fully utilize the information of the HRV signal. Therefore, we propose the HRV Emotion Recognition (HER) method, utilizing the amplitude level quantization (ALQ) technique for feature extraction. First, we employ the emotion quantification analysis (EQA) technique to impartially assess the semantic resemblance of emotions within the domain of emotional arousal. Then, we use the ALQ method to extract rich local information features by analyzing the local information in each frequency range of the HRV signal. Finally, the extracted features are classified using a logistic regression (LR) classification algorithm, which can achieve efficient and accurate emotion recognition. According to the experimental findings, the approach surpasses existing techniques in emotion recognition accuracy, achieving an average accuracy rate of 84.3%. Therefore, the HER method proposed in this paper can effectively utilize the local features in HRV signals to achieve efficient and accurate emotion recognition. This will provide strong support for emotion research in psychology, medicine, and other fields.


Subjects
Emotions, Mental Disorders, Humans, Heart Rate/physiology, Emotions/physiology, Algorithms, Electrocardiography
20.
Sensors (Basel) ; 23(18)2023 Sep 07.
Article in English | MEDLINE | ID: mdl-37765778

ABSTRACT

Machine learning deployment on edge devices has faced challenges such as computational costs and privacy issues. Membership inference attack (MIA) refers to an attack where the adversary aims to infer whether a data sample belongs to the training set. In other words, user data privacy might be compromised by MIA from a well-trained model. Therefore, it is vital to have defense mechanisms in place to protect training data, especially in privacy-sensitive applications such as healthcare. This paper examines the implications of quantization for privacy leakage and proposes a novel quantization method that enhances the resistance of a neural network against MIA. Recent studies have shown that model quantization leads to resistance against membership inference attacks. Unlike conventional quantization methods, whose primary objectives are compression or increased speed, our proposed quantization framework has the main objective of boosting the resistance against MIA. We evaluate the effectiveness of our methods on various popular benchmark datasets and model architectures. All popular evaluation metrics, including precision, recall, and F1-score, show improvement when compared to the full bitwidth model. For example, for ResNet on Cifar10, our experimental results show that our algorithm can reduce the attack accuracy of MIA by 14%, the true positive rate by 37%, and the F1-score of members by 39% compared to the full bitwidth network. Here, a reduction in the true positive rate means the attacker will not be able to identify the training dataset members, which is the main goal of the MIA.
