ABSTRACT
The ventral visual pathway (VVP) of the human brain efficiently implements target recognition by employing a deep hierarchical structure to build complex visual concepts from simple features. Artificial neural networks (ANNs) based on spintronic devices are capable of target recognition, but their poor interpretability and limited network depth hinder them from mimicking the VVP. Hardware implementation of the VVP requires a biorealistic spintronic device as well as a corresponding interpretable and deep network structure, neither of which has been reported so far. Here, we report a ferrimagnetic neuron with a continuously differentiable exponential linear unit (CeLu) activation function, which is closer to biological neurons and mitigates the issue of limited network depth. We also demonstrate that a ferrimagnet can form artificial synapses with high linearity and symmetry, meeting the requirements of weight-update algorithms. Based on these neurons and synapses, we propose an all-spin convolutional neural network (CNN) with high interpretability and a deep network structure to mimic the VVP. Using experimentally derived device parameters, the CNN with bionic function achieves high recognition accuracies of over 91% and 98% on the CIFAR-10 and MNIST datasets, respectively, improvements of 1.13% and 1.76% over the state-of-the-art spintronics-based neuromorphic computing model. Our work provides a promising method to improve the bionic performance of spintronic device-based neural networks.
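The abstract does not give the functional form of the device's CeLu activation. Assuming it follows the standard continuously differentiable ELU (CELU), celu(x) = max(0, x) + min(0, α·(exp(x/α) − 1)), a minimal NumPy sketch is:

```python
import numpy as np

def celu(x, alpha=1.0):
    """Continuously differentiable ELU: x for x >= 0 and
    alpha*(exp(x/alpha) - 1) for x < 0. The left derivative at 0
    is exp(0) = 1, matching the right derivative, so the function
    is C^1 for any alpha > 0 (unlike the plain ELU when alpha != 1)."""
    x = np.asarray(x, dtype=float)
    return np.maximum(0.0, x) + np.minimum(
        0.0, alpha * (np.exp(np.minimum(x, 0.0) / alpha) - 1.0))
```

The smooth derivative is what matters for deep-network training: gradient-based weight updates stay well behaved through many stacked layers.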
Subjects
Neural Networks, Computer; Humans; Visual Pathways/physiology; Neurons/physiology; Algorithms; Magnets/chemistry
ABSTRACT
This study investigates a brain-computer interface (BCI) system based on an augmented reality (AR) environment and steady-state visual evoked potentials (SSVEP). The system is designed to facilitate the selection of real-world objects through visual gaze in real-life scenarios. By integrating object detection and AR technology, the system augments real objects with visual enhancements, providing users with visual stimuli that induce corresponding brain signals. SSVEP technology is then utilized to interpret these brain signals and identify the objects that users focus on. Additionally, an adaptive dynamic time-window-based filter bank canonical correlation analysis is employed to rapidly parse the subjects' brain signals. Experimental results indicated that the system could effectively recognize SSVEP signals, achieving an average accuracy of 90.6% in visual target identification. This system extends the application of SSVEP signals to real-life scenarios, demonstrating feasibility and efficacy in assisting individuals with mobility impairments and physical disabilities in object selection tasks.
Subjects
Augmented Reality; Brain-Computer Interfaces; Electroencephalography; Evoked Potentials, Visual; Humans; Evoked Potentials, Visual/physiology; Photic Stimulation; User-Computer Interface; Algorithms
ABSTRACT
The extraction of characteristic features of underwater target signals and excellent recognition algorithms are the keys to achieving underwater acoustic target recognition of divers. This paper proposes a feature extraction method for diver signals, frequency-domain multi-sub-band energy (FMSE), aiming to achieve accurate recognition of diver underwater acoustic targets by passive sonar. The impact of the presence or absence of targets, different numbers of targets, different signal-to-noise ratios, and different detection distances on this method was studied based on experimental data collected under different conditions, such as water pools and lakes. The FMSE method showed the best robustness and performance compared with two other feature extraction methods: mel-frequency cepstral coefficients (MFCC) and gammatone-frequency cepstral coefficients (GFCC). Combined with the commonly used support vector machine recognition algorithm, the FMSE method achieves a comprehensive recognition accuracy of over 94% for frogman underwater acoustic targets, indicating that it is well suited for underwater acoustic recognition of diver targets.
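The paper's exact FMSE band layout is not given in the abstract. As an illustrative sketch under that caveat, one can split a frame's one-sided power spectrum into equal-width sub-bands and take the normalized energy per band (the band count and normalization here are assumptions):

```python
import numpy as np

def fmse_features(frame, n_bands=8):
    """Illustrative FMSE-style feature: split the one-sided power
    spectrum of a signal frame into n_bands equal-width sub-bands
    and return the normalized energy of each band."""
    spectrum = np.abs(np.fft.rfft(frame)) ** 2          # one-sided power spectrum
    edges = np.linspace(0, len(spectrum), n_bands + 1).astype(int)
    energies = np.array([spectrum[a:b].sum()
                         for a, b in zip(edges[:-1], edges[1:])])
    total = energies.sum()
    return energies / total if total > 0 else energies
```

A narrowband source such as a diver's breathing apparatus concentrates energy in a few bands, which is what makes a short energy vector like this separable by an SVM.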
ABSTRACT
Target detection in satellite images is an essential topic in remote sensing and computer vision. Despite extensive research efforts, accurate and efficient target detection in remote sensing images remains unsolved because of the large span of target scales, dense target distribution, overhead imaging, and complex backgrounds, which result in high feature similarity between targets and serious occlusion. To address these issues comprehensively, we first propose a Centralised Visual Processing Center (CVPC), a parallel visual processing center for a Transformer encoder and a CNN that employs a lightweight encoder to capture broad, long-range interdependencies. A Pixel-level Learning Center (PLC) module establishes pixel-level correlations and improves the depiction of detailed features. The CVPC effectively improves the detection of remote sensing targets with high feature similarity and severe occlusion. Secondly, we propose a centralised cross-layer feature fusion pyramid structure that fuses the results with the CVPC in a top-down manner to enhance the detailed feature representation at each layer. Finally, we present a Context-Enhanced Adaptive Sparse Convolutional Network (CEASC), which improves accuracy while preserving detection efficiency. Based on these modules, we designed and conducted a series of experiments on three challenging public datasets, DOTA-v1.0, DIOR, and RSOD, showing that our proposed 3CNet achieves advanced detection accuracy while balancing detection speed (78.62% mAP on DOTA-v1.0, 79.12% mAP on DIOR, and 95.50% mAP on RSOD).
ABSTRACT
Glycinamide ribonucleotide formyltransferase (GARFT) is an important enzyme in the folate metabolism pathway, and chemical drugs targeting GARFT have been used in tumor treatment over the past few decades. The development of novel antimetabolic drugs that target GARFT with improved performance and superior activity remains an attractive strategy. Herein, we propose a targeted double-template molecularly imprinted polymer (MIP) for enhancing macrophage phagocytosis and synergistic antimetabolic therapy. The double-template MIP was prepared by imprinting the exposed peptide segment of the extracellular domain of CD47 and the active center of GARFT. Owing to the imprinted cavities on its surface, the MIP can actively target cancer cells and mask the "do not eat me" signal upon binding to CD47, thereby blocking the CD47-SIRPα pathway and ultimately enhancing phagocytosis by macrophages. In addition, the MIP can specifically bind to the active center of GARFT upon entry into cells, thereby inhibiting its catalytic activity and ultimately interfering with normal DNA synthesis. A series of cell experiments demonstrated that the MIP can effectively target CD47-overexpressing 4T1 cancer cells and inhibit their growth. The enhanced phagocytic ability of RAW264.7 macrophages was also clearly observed in confocal imaging experiments. In vivo experiments showed that the MIP exhibited a satisfactory tumor inhibition effect. This study therefore provides a new approach for applying molecular imprinting technology to antimetabolic therapy in conjunction with macrophage-mediated immunotherapy.
Subjects
CD47 Antigen; Macrophages; Molecularly Imprinted Polymers; Phagocytosis; CD47 Antigen/metabolism; CD47 Antigen/chemistry; Phagocytosis/drug effects; Animals; Mice; Macrophages/drug effects; Macrophages/metabolism; RAW 264.7 Cells; Molecularly Imprinted Polymers/chemistry; Cell Line, Tumor; Female; Mice, Inbred BALB C; Humans; Antineoplastic Agents/chemistry; Antineoplastic Agents/pharmacology
ABSTRACT
The cultivation of the Chinese mitten crab (Eriocheir sinensis) is an important component of China's aquaculture industry and a field of worldwide concern. Cultivation depends on the selection of high-quality, disease-free juvenile crabs. However, an early-maturity rate above 18.2% and a mortality rate above 60% make it difficult to select suitable juveniles for adult culture. The juveniles exhibit subtle distinguishing features, and the methods for differentiating between sexes vary significantly; without training from professional breeders, it is challenging for laypersons to identify and select appropriate juveniles. Therefore, we propose a task-aligned detection algorithm for identifying one-year-old precocious Chinese mitten crabs, named R-TNET. Initially, the required images were obtained by capturing key frames and were then annotated and preprocessed by professionals to build a training dataset. Subsequently, the ResNeXt network was selected as the backbone feature extraction network, with Convolutional Block Attention Modules (CBAMs) and a Deformable Convolution Network (DCN) embedded in its residual blocks to enhance its capability to extract complex features. Adaptive spatial feature fusion (ASFF) was then integrated into the feature fusion network to preserve the detailed features of small targets such as one-year-old precocious Chinese mitten crab juveniles. Finally, based on the detection head proposed in task-aligned one-stage object detection, the parameters of its anchor alignment metric were adjusted to detect, locate, and classify the crab juveniles. The experimental results showed that this method achieves a mean average precision (mAP) of 88.78% and an F1-score of 97.89%, exceeding the best-performing mainstream object detection algorithm, YOLOv7, by 4.17% in mAP and 1.77% in F1-score.
Ultimately, in practical application scenarios, the algorithm effectively identified one-year-old precocious Chinese mitten crabs, providing technical support for the automated selection of high-quality crab juveniles in the cultivation process, thereby promoting the rapid development of aquaculture and agricultural intelligence in China.
ABSTRACT
Synthetic Aperture Radar (SAR) is renowned for its all-weather, all-time imaging capability, making it invaluable for ship target recognition. Despite advances in deep learning models, the efficiency of Convolutional Neural Networks (CNNs) in the frequency domain is often constrained by memory limitations and the stringent real-time requirements of embedded systems. To surmount these obstacles, we introduce the Split_Composite method, an innovative convolution acceleration technique grounded in the Fast Fourier Transform (FFT). This method employs input block decomposition and a composite zero-padding approach to streamline memory bandwidth and computational complexity via optimized frequency-domain convolution and image reconstruction. By capitalizing on the FFT's inherent periodicity to augment frequency resolution, Split_Composite facilitates weight sharing, curtailing both memory access and computational demands. Our experiments, conducted on the OpenSARShip-4 dataset, confirm that the Split_Composite method upholds high recognition precision while markedly enhancing inference speed, especially in large-scale data processing, thereby exhibiting exceptional scalability and efficiency. Compared with state-of-the-art convolution optimization technologies such as Winograd and TensorRT, Split_Composite demonstrates a significant lead in inference speed without compromising recognition precision.
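The precise Split_Composite scheme is not detailed in the abstract, but its ingredients — frequency-domain convolution with zero-padding and input block decomposition — correspond to classic FFT convolution and overlap-add, sketched below (the block size is an assumption for illustration):

```python
import numpy as np

def fft_conv(x, h):
    """Linear convolution via FFT: zero-pad both sequences to
    len(x) + len(h) - 1 so that circular convolution in the
    frequency domain equals linear convolution."""
    n = len(x) + len(h) - 1
    return np.fft.irfft(np.fft.rfft(x, n) * np.fft.rfft(h, n), n)

def fft_conv_blocks(x, h, block=64):
    """Overlap-add block decomposition: convolve x block by block
    with a precomputed filter spectrum, summing the overlapping
    tails -- the classic way to bound memory for long inputs."""
    n = block + len(h) - 1                  # FFT size per block
    H = np.fft.rfft(h, n)                   # filter spectrum, reused for every block
    n_blocks = (len(x) + block - 1) // block
    y = np.zeros(n_blocks * block + len(h) - 1)
    for start in range(0, len(x), block):
        seg = x[start:start + block]
        y[start:start + n] += np.fft.irfft(np.fft.rfft(seg, n) * H, n)
    return y[:len(x) + len(h) - 1]
```

Precomputing `H` once is the weight-sharing idea in miniature: the filter spectrum is reused across all input blocks, trading repeated computation for a single memory access pattern.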
ABSTRACT
Non-visual auditory camouflage plays a major role in the art of underwater deception. In this work, a hybrid active/semi-active omnidirectional cloaking shell structure composed of alternating complementary piezoelectric and smart viscoelastic (PZT/SVE) actuator layers is proposed that can effectively conceal a three-dimensional underwater macroscopic object from broadband incident sound waves. The smart hybrid structure incorporates a finite sequence of fully active parallel-connected multimorph PZT constraining layers inter-stacked with semi-active SVE core layers, both of which operate collaboratively within a Particle Swarm Optimized (PSO) multiple-input multiple-output active damping control (MIMO-ADC) scheme. The elasto-acoustic modeling of the problem is conducted by coupling the spatial state space methodology, based on the classical three-dimensional exact piezoelasticity theory, with the wave equations for the inner and outer acoustic domains. The acoustic cloaking performance of the proposed configuration is evaluated for four distinct classes of highly functional SVE interlayer materials with tunable (field-dependent) rheological properties, namely, magnetorheological elastomer (MRE), shape memory polymer (SMP), electrorheological fluid (ERF), and magnetorheological shear thickening polishing fluid (MRSTPF). Extensive numerical results reveal significant broadband reductions of the far-field backscattering amplitude f_∞(θ = π, k_ex R_ex) as well as the percentage error of the external cloaked field (%Err) when a sufficient number of smart multimorph PZT/SVE material layers is incorporated. Furthermore, it is concluded that comparable low-frequency acoustic cloaking effects are possible without the expenditure of any external energy, simply by employing the entirely inactive MRSTPF-based cloak as an alternative to the semi-active or fully active multimorph PZT/SVE cloaks.
The outcome of the proposed study can advantageously serve as a first step towards the practical development and experimental implementation of future high-performance smart acoustic cloaking devices with expanded broadband, near-perfect omnidirectional invisibility for three-dimensional objects of diverse geometries.
ABSTRACT
Train wheels are crucial components for ensuring train safety. Accurate and fast identification of wheel tread defects is necessary for timely wheel maintenance, which is a prerequisite for condition-based repair. Image-based methods are commonly used for detecting tread defects, but they still suffer from the misdetection of water stains and the missing of small defects. In this paper, we address the challenges of wheel tread defect detection by proposing improvements to the YOLOv8 model. Firstly, the impact of water stains on tread defect detection is avoided by optimising the structure of the detection layer. Secondly, an improved SPPCSPC module is introduced to enhance the detection of small targets. Finally, the SIoU loss function is used to accelerate the convergence of the network, ensuring defect recognition accuracy with high operational efficiency. Validation was performed on the constructed tread defect dataset. The results demonstrate that the enhanced YOLOv8 model outperforms the original network and significantly improves the tread defect detection indexes, with average precision, accuracy, and recall reaching 96.95%, 96.30%, and 95.31%, respectively.
ABSTRACT
Introduction: Accurate classification of single-trial electroencephalogram (EEG) signals is crucial for EEG-based target image recognition in rapid serial visual presentation (RSVP) tasks. The P300 is an important component of a single-trial EEG in RSVP tasks. However, single-trial EEG signals are usually characterized by low signal-to-noise ratios and limited sample sizes. Methods: Given these challenges, it is necessary to optimize existing convolutional neural networks (CNNs) to improve the performance of P300 classification. The proposed CNN model, PSAEEGNet, integrates standard convolutional layers, pyramid squeeze attention (PSA) modules, and deep convolutional layers. This approach refines the extraction of the temporal and spatial features of the P300 to a finer level of granularity. Results: Compared with several existing single-trial EEG classification methods for RSVP tasks, the proposed model shows significantly improved performance. The mean true positive rate for PSAEEGNet is 0.7949, and the mean area under the receiver operating characteristic curve (AUC) is 0.9341 (p < 0.05). Discussion: These results suggest that the proposed model effectively extracts features from both the temporal and spatial dimensions of the P300, leading to more accurate classification of single-trial EEG during RSVP tasks. This model therefore has the potential to significantly enhance the performance of EEG-based target recognition systems, contributing to the advancement and practical implementation of target recognition in this field.
ABSTRACT
Peptides acquire target affinity based on the combination of residues in their sequences and the conformation formed by their flexible folding, an ability that makes them very attractive biomaterials in therapeutic, diagnostic, and assay fields. With the development of computer technology, computer-aided design and screening of affinity peptides has become a more efficient and faster method. This review summarizes successful cases of computer-aided design and screening of affinity peptide ligands in recent years and lists the computer programs and online servers used in the process. In particular, the characteristics of different design and screening methods are summarized and categorized to help researchers choose between different methods. In addition, experimentally validated sequences are listed, and their applications are described, providing directions for the future development and application of computational peptide screening and design.
Subjects
Computer Simulation; Peptides; Ligands; Peptides/chemistry; Drug Design; Computer-Aided Design; Humans
ABSTRACT
In recent years, the application of deep learning models to underwater target recognition has become a popular trend. Most such models are pure 1D models for processing time-domain signals or pure 2D models for processing time-frequency spectra. In this paper, a recent temporal 2D modeling method is introduced into the construction of ship radiation noise classification models, combining 1D and 2D representations. Based on the periodic characteristics of time-domain signals, this method folds them into 2D signals and discovers long-term correlations between sampling points through 2D convolution, compensating for the limitations of 1D convolution. Integrating this method with a current state-of-the-art model structure and using samples from the Deepship database for network training and testing, we found that it could further improve accuracy (by 0.9%) and reduce the parameter count (by 30%), providing a new option for model construction and optimization. We also compared the effectiveness of training models on time-domain signals versus time-frequency representations, finding that the time-domain model is more sensitive and has a smaller storage footprint (reduced to 30% of the original), whereas the time-frequency model can achieve higher accuracy (by 1-2%).
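The temporal 2D modeling step described above — folding a 1D signal into a 2D array according to its dominant period so that 2D convolutions can relate samples one period apart — can be sketched as follows; estimating the period from the FFT peak is an assumption about the method, not a detail given in the abstract:

```python
import numpy as np

def dominant_period(x):
    """Estimate the dominant period from the peak of the amplitude
    spectrum, skipping the DC bin."""
    spec = np.abs(np.fft.rfft(x))
    k = 1 + np.argmax(spec[1:])          # strongest non-DC frequency bin
    return max(1, round(len(x) / k))

def to_2d(x):
    """Fold a 1D signal into shape (n_periods, period): columns then
    hold samples exactly one period apart, so a 2D kernel sees both
    short-range (within-row) and long-range (across-row) structure."""
    p = dominant_period(x)
    n = (len(x) // p) * p                # drop the incomplete last period
    return x[:n].reshape(-1, p)
```

For a strongly periodic signal, consecutive rows of the folded array are nearly identical, which is exactly the redundancy a 2D convolution can exploit.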
ABSTRACT
In this paper, an intelligent blind guide system based on 2D LiDAR and RGB-D camera sensing is proposed, and the system is mounted on a smart cane. The intelligent guide system relies on 2D LiDAR, an RGB-D camera, an IMU, GPS, a Jetson Nano B01, an STM32, and other hardware. The main advantage of the proposed system is that the distance between the smart cane and obstacles can be measured by the 2D LiDAR based on the Cartographer algorithm, thus achieving simultaneous localization and mapping (SLAM). At the same time, through an improved YOLOv5 algorithm, pedestrians, vehicles, pedestrian crosswalks, traffic lights, warning posts, stone piers, tactile paving, and other objects in front of the visually impaired can be quickly and effectively identified. Laser SLAM and improved YOLOv5 obstacle identification tests were carried out inside a teaching building on the campus of Hainan Normal University and on a pedestrian crossing on Longkun South Road in Haikou City, Hainan Province. The results show that the system can drive the omnidirectional wheels at the bottom of the smart cane and provide a self-leading blind guide function, like a "guide dog", effectively guiding the visually impaired around obstacles to their predetermined destination and quickly and effectively identifying obstacles along the way. The mapping and positioning accuracy of the system's laser SLAM is 1 m ± 7 cm, and its laser SLAM speed is 25-31 FPS, enabling short-distance obstacle avoidance and navigation in both indoor and outdoor environments. The improved YOLOv5 identifies 86 types of objects. The recognition rates for pedestrian crosswalks and vehicles are 84.6% and 71.8%, respectively; the overall recognition rate for the 86 object types is 61.2%, and the obstacle recognition speed of the system is 25-26 FPS.
ABSTRACT
In recent years, target recognition technology for synthetic aperture radar (SAR) images has witnessed significant advancements, particularly with the development of convolutional neural networks (CNNs). However, acquiring SAR images requires significant resources in both time and cost. Moreover, due to the inherent properties of radar sensors, SAR images are often marred by speckle, a form of high-frequency noise. To address this issue, we introduce a Generative Adversarial Network (GAN) with a dual discriminator and a high-frequency pass filter, named DH-GAN, specifically designed for generating simulated images. DH-GAN produces images that emulate the high-frequency characteristics of real SAR images. Through power spectral density (PSD) analysis and experiments, we demonstrate the validity of the DH-GAN approach. The experimental results show that not only do the SAR images generated using DH-GAN closely resemble the high-frequency components of real SAR images, but the proficiency of CNNs in target recognition, when trained with these simulated images, is also notably enhanced.
ABSTRACT
Rapid, sensitive, and selective biosensing is highly important for analyzing biological targets and dynamic physiological processes in cells and living organisms. As an emerging tool, the clustered regularly interspaced short palindromic repeats (CRISPR) system features excellent complementary-dependent cleavage and efficient trans-cleavage activity. These merits enable the CRISPR system to improve the specificity, sensitivity, and speed of molecular detection. Herein, the structures and functions of several CRISPR proteins used for biosensing are summarized in depth. Moreover, strategies for target recognition, signal conversion, and signal amplification in CRISPR-based biosensing are highlighted from the perspective of biosensor design principles. State-of-the-art applications and recent advances of the CRISPR system are then outlined, with emphasis on fluorescent, electrochemical, and colorimetric readouts and on applications in point-of-care testing (POCT) technology. Finally, the current challenges and future prospects of this frontier research area are discussed.
Subjects
Biosensing Techniques; Colorimetry; Coloring Agents; CRISPR-Cas Systems/genetics
ABSTRACT
At present, the micro-Doppler effect of underwater targets is a challenging new research problem. This paper studies the micro-Doppler effect of underwater targets, analyzes the motion characteristics of underwater micro-motion components, establishes echo models for harmonic vibration points and for plane and rotating propellers, and reveals the complex modulation laws of the micro-Doppler effect. In addition, since an echo is a multi-component signal superposed from multiple modulated signals, this paper provides a sparse reconstruction method combined with time-frequency distributions and realizes signal separation and time-frequency analysis. A MicroDopplerlet time-frequency atomic dictionary, matched to the complex modulated form of the echoes, is designed, which effectively realizes a concise representation of the echoes and a micro-Doppler effect analysis. Meanwhile, the micro-motion parameter information needed for underwater signal detection and recognition is extracted.
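As an illustration of the harmonic-vibration echo model described above: a point scatterer vibrating sinusoidally phase-modulates the carrier, and its micro-Doppler shift is proportional to the radial velocity. All numerical values below are illustrative assumptions, not parameters from the paper:

```python
import numpy as np

fc = 40e3      # carrier frequency (Hz) -- illustrative active-sonar value
c = 1500.0     # speed of sound in water (m/s)
A = 0.005      # vibration amplitude (m) -- illustrative
fv = 5.0       # vibration frequency (Hz) -- illustrative
fs = 200e3     # sampling rate (Hz)
t = np.arange(0, 0.5, 1 / fs)

# Radial displacement of a harmonically vibrating point scatterer
r = A * np.sin(2 * np.pi * fv * t)

# Baseband echo: the two-way path 2*r(t) phase-modulates the carrier
echo = np.exp(1j * 4 * np.pi * fc * r / c)

# Instantaneous micro-Doppler shift = (2*fc/c) * dr/dt: a sinusoid at
# the vibration frequency -- the oscillating signature that a
# time-frequency distribution of the echo reveals
f_mD = (2 * fc / c) * (2 * np.pi * fv * A) * np.cos(2 * np.pi * fv * t)
```

Because a real echo superposes many such modulated components, a dictionary of atoms matched to this sinusoidal frequency law (as in the paper's MicroDopplerlet dictionary) yields a much sparser representation than generic time-frequency atoms.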
ABSTRACT
Owing to the disparity between computing-power demands and hardware development in electronic neural networks, optical diffraction networks have emerged as crucial technologies for various applications, including target recognition, because of their high speed, low power consumption, and large bandwidth. However, traditional optical diffraction networks and electronic neural networks are limited by long training durations and hardware requirements for complex applications. To overcome these constraints, this paper proposes an opto-electronic hybrid system that combines optical diffraction networks with electronic neural networks. By using scattering layers to replace the diffraction layers of traditional optical diffraction networks, the hybrid system circumvents the challenging training process associated with diffraction layers. The spectral outputs of the optical diffraction network are processed by a simple backpropagation neural network, forming an opto-electronic hybrid network that exhibits excellent performance with minimal data. For three-class target recognition, the network attains a classification accuracy of 93.3% within a substantially short training time of 9.2 s using only 100 data samples (70 for training, 30 for testing). Furthermore, it is insensitive to position errors in the scattering elements, enhancing its robustness. The proposed opto-electronic hybrid network therefore has substantial application prospects in machine vision, face recognition, and remote sensing.
ABSTRACT
Radar high-resolution range profile (HRRP) provides geometric and structural information about a target, which is important for radar automatic target recognition (RATR). However, due to the limited information dimension of HRRP, achieving accurate target recognition is challenging in applications. In recent years, with the rapid development of radar components and signal processing technology, acquiring and using a target's multi-frequency and polarization scattering information has become a significant way to improve target recognition performance. Meanwhile, deep learning inspired by the human brain has shown great promise in pattern recognition applications. In this paper, a Multi-channel Fusion Feature Extraction Network (MFFE-Net) inspired by the human brain is proposed for dual-band polarimetric HRRP, aiming to address the challenges of HRRP target recognition. In the proposed network, inspired by the human brain's multi-dimensional information interaction, the similarity and difference features of dual-frequency HRRP are first extracted to realize the interactive fusion of frequency features. Then, inspired by the human brain's selective attention mechanism, interactive weights are obtained for multi-polarization features and multi-scale representations, enabling feature aggregation and multi-scale fusion. Finally, inspired by the human brain's hierarchical learning mechanism, layer-by-layer feature extraction and fusion with residual connections are designed to enhance the separability of features. Experiments on simulated and measured datasets verify the accurate recognition capability of MFFE-Net, and ablation studies confirm the effectiveness of the network's components for recognition.
ABSTRACT
Long-range target detection in thermal infrared imagery is a challenging research problem due to the low resolution and limited detail captured by thermal sensors. The limited size and variability of thermal image datasets for small target detection are also a major constraint on the development of accurate and robust detection algorithms. To address both the sensor and data constraints, we propose a novel convolutional neural network (CNN) feature extraction architecture designed for small object detection in data-limited settings. More specifically, we focus on long-range ground-based thermal vehicle detection, but also show the effectiveness of the proposed algorithm on drone and satellite aerial imagery. The design of the proposed architecture is inspired by an analysis of popular object detectors as well as custom-designed networks. We find that restricted receptive fields (rather than more globalized features, as is the trend), along with less downsampling of feature maps and attenuated processing of fine-grained features, lead to greatly improved detection rates while mitigating the model's capacity to overfit on small or poorly varied datasets. Our approach achieves state-of-the-art results on the Defense Systems Information Analysis Center (DSIAC) automated target recognition (ATR) dataset and the Tiny Object Detection in Aerial Images (AI-TOD) dataset.
ABSTRACT
In recent years, remote sensing has witnessed a remarkable surge in deep learning research, specifically for target recognition in synthetic aperture radar (SAR) images. However, prevailing deep learning models have often emphasized network depth and width while disregarding the requirement for a balance between accuracy and speed. To address this concern, this paper presents FCCD-SAR, a SAR target recognition algorithm based on the lightweight FasterNet network. First, a lightweight, SAR-specific feature extraction backbone is crafted to better align with SAR image data. Next, the agile upsampling operator CARAFE is introduced, augmenting the extraction of scattering information and improving target recognition precision. Moreover, a fast, lightweight module, C3-Faster, is included to raise both recognition accuracy and computational efficiency. Finally, in view of the diverse scales and large variations exhibited by SAR targets, a detection head employing DyHead's attention mechanism is implemented to capture feature information across multiple scales, elevating recognition performance on SAR targets. Exhaustive experimentation on the MSTAR dataset demonstrates the effectiveness of our FCCD-SAR algorithm, which requires only 2.72 M parameters and 6.11 G FLOPs while achieving a 99.5% mean Average Precision (mAP).