A memristive all-inclusive hypernetwork for parallel analog deployment of full search space architectures.
Lyu, Bo; Yang, Yin; Cao, Yuting; Shi, Tuo; Chen, Yiran; Huang, Tingwen; Wen, Shiping.
Affiliation
  • Lyu B; Zhejiang Lab, Hangzhou, Zhejiang, China. Electronic address: bo.lyu@zhejianglab.com.
  • Yang Y; College of Science and Engineering, Hamad Bin Khalifa University, Doha 5855, Qatar. Electronic address: yyang@hbku.edu.qa.
  • Cao Y; College of Science and Engineering, Hamad Bin Khalifa University, Doha 5855, Qatar. Electronic address: ycao@hbku.edu.qa.
  • Shi T; Zhejiang Lab, Hangzhou, Zhejiang, China. Electronic address: shituo@zhejianglab.com.
  • Chen Y; Department of Electrical and Computer Engineering, Duke University, Durham, NC, USA. Electronic address: yiran.chen@duke.edu.
  • Huang T; Science Program, Texas A & M University at Qatar, Doha 23874, Qatar. Electronic address: tingwen.huang@qatar.tamu.edu.
  • Wen S; Australian AI Institute, Faculty of Engineering and Information Technology, University of Technology Sydney, NSW 2007, Australia. Electronic address: shiping.wen@uts.edu.au.
Neural Netw; 175: 106312, 2024 Jul.
Article in En | MEDLINE | ID: mdl-38642415
ABSTRACT
In recent years, there has been significant advancement in memristor-based neural networks, positioning them as a pivotal processing-in-memory deployment architecture for a wide array of deep learning applications. Within this line of progress, emerging parallel analog memristive platforms are prominent for their ability to generate multiple feature maps in a single processing cycle. However, a notable limitation is that they are tailored to neural networks with fixed structures. As an orthogonal direction, recent research shows that neural architectures should be specialized for both the task and the deployment platform. Building on this, neural architecture search (NAS) methods effectively explore promising architectures in a large design space. However, these NAS-derived architectures are generally heterogeneous and diverse, making them challenging to deploy on current single-prototype, customized, parallel analog memristive hardware circuits. Investigating a memristive analog deployment that covers the full search space is therefore a promising yet challenging problem. Motivated by this, and starting from the DARTS search space, we study the memristive hardware design of the primitive operations and propose a memristive all-inclusive hypernetwork that covers 2 × 10^25 network architectures. Computational simulations on three representative architectures (DARTS-V1, DARTS-V2, PDARTS) show that the memristive all-inclusive hypernetwork achieves promising results on the CIFAR-10 dataset (89.2% for PDARTS with 8-bit quantization precision) and is compatible with all architectures in the full DARTS space. Hardware performance simulations indicate that the memristive all-inclusive hypernetwork incurs only modestly higher resource consumption than individual deployment (nearly the same power, a 22%-25% increase in latency, and 1.5× the area), which is a reasonable and likely tolerable trade-off for industrial deployment scenarios.
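To make the idea of an "all-inclusive" hypernetwork concrete, the sketch below shows, in PyTorch-style Python, a DARTS-like edge that keeps every candidate primitive operation in parallel, so that any architecture in the search space corresponds to a one-hot choice of mixing weights; this mirrors the parallel analog platforms' ability to produce several feature maps in one cycle. The candidate operation set and the 8-bit quantization helper (a stand-in for mapping weights to discrete memristor conductance levels) are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

def quantize_8bit(w: torch.Tensor) -> torch.Tensor:
    """Uniform 8-bit weight quantization; an assumed stand-in for mapping
    weights onto discrete memristor conductance levels."""
    scale = w.abs().max() / 127.0 + 1e-12
    return torch.round(w / scale).clamp(-128, 127) * scale

def make_ops(channels: int) -> nn.ModuleList:
    """A typical DARTS-like candidate operation set (assumed for illustration)."""
    return nn.ModuleList([
        nn.Identity(),                                            # skip connection
        nn.Conv2d(channels, channels, 3, padding=1, bias=False),  # 3x3 convolution
        nn.Conv2d(channels, channels, 5, padding=2, bias=False),  # 5x5 convolution
        nn.AvgPool2d(3, stride=1, padding=1),                     # average pooling
        nn.MaxPool2d(3, stride=1, padding=1),                     # max pooling
    ])

class MixedEdge(nn.Module):
    """One edge of the hypernetwork: all candidate ops are kept in parallel,
    analogous to parallel analog crossbars generating multiple feature maps
    in a single processing cycle."""
    def __init__(self, channels: int):
        super().__init__()
        self.ops = make_ops(channels)
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))  # architecture weights

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = F.softmax(self.alpha, dim=0)
        return sum(wi * op(x) for wi, op in zip(w, self.ops))

if __name__ == "__main__":
    edge = MixedEdge(channels=16)
    # Quantize convolution weights to 8 bits before "deployment".
    for op in edge.ops:
        if isinstance(op, nn.Conv2d):
            op.weight.data = quantize_8bit(op.weight.data)
    x = torch.randn(1, 16, 32, 32)
    y = edge(x)           # weighted sum over all primitive operations
    print(y.shape)        # torch.Size([1, 16, 32, 32])
    # Selecting a single architecture from the full space amounts to a
    # one-hot alpha, e.g. picking the 3x3 convolution on this edge:
    with torch.no_grad():
        edge.alpha.copy_(torch.tensor([-10.0, 10.0, -10.0, -10.0, -10.0]))

Because every primitive operation remains physically present on each edge, any architecture found by DARTS-style search can be deployed on the same hypernetwork by fixing the mixing weights, at the cost of the modest power, latency, and area overheads reported in the abstract.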
Full text: 1 Collections: 01-international Database: MEDLINE Main subject: Neural Networks, Computer Language: En Journal: Neural Netw / Neural netw / Neural networks Journal subject: NEUROLOGY Year of publication: 2024 Document type: Article Country of publication: United States