Results 1 - 16 of 16
1.
Sensors (Basel) ; 24(9)2024 Apr 26.
Article in English | MEDLINE | ID: mdl-38732865

ABSTRACT

Cracks provide the earliest and most immediate visual indication of structural deterioration in asphalt pavements. Most current crack detection methods rely on visible-light sensors and convolutional neural networks, which limits detection to daytime and good lighting conditions. Therefore, this paper proposes a crack detection technique based on cross-modal feature alignment of YOLOv5 using visible and infrared images. The infrared spectral characteristics of silicate concrete serve as an important supplement to visible imagery. An adaptive illumination-aware weight generation module is introduced to compute the illumination probability, which guides the training of the fusion network. To alleviate weak alignment of the multi-scale feature maps, the FA-BIFPN feature pyramid module is proposed. The parallel structure of a dual-backbone network takes 40% less time to train than a single-backbone network. Validation on the FLIR, LLVIP, and VEDAI bimodal datasets shows that the fused images deliver more stable performance than visible images alone. In addition, the proposed detector surpasses the advanced YOLOv5 unimodal detector and the CFT cross-modal fusion module. On the publicly available bimodal road crack dataset, our method detects cracks as small as 5 pixels with 98.3% accuracy under weak illumination.
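As an illustration of the adaptive illumination-aware weighting idea described above, here is a minimal, hedged sketch in PyTorch: a small network estimates an illumination probability from the visible image and uses it to weight the visible and infrared feature maps before fusion. The module layout, layer sizes, and names are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn

class IlluminationAwareFusion(nn.Module):
    """Weights visible vs. infrared features by a predicted illumination probability."""
    def __init__(self):
        super().__init__()
        self.illum_net = nn.Sequential(               # estimates P(well-lit) from the RGB input
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, 1), nn.Sigmoid())

    def forward(self, rgb_image, feat_visible, feat_infrared):
        w = self.illum_net(rgb_image).view(-1, 1, 1, 1)   # per-image weight in [0, 1]
        # Bright scenes lean on visible features; dark scenes lean on infrared.
        return w * feat_visible + (1 - w) * feat_infrared
```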

2.
BMC Bioinformatics ; 24(1): 431, 2023 Nov 14.
Article in English | MEDLINE | ID: mdl-37964228

ABSTRACT

BACKGROUND: Liquid chromatography-mass spectrometry (LC-MS) is widely used in untargeted metabolomics for composition profiling. In multi-run analysis scenarios, the features of each run are aligned into consensus features by feature alignment algorithms so that intensity variations across runs can be observed. However, most existing feature alignment methods focus on accurate retention time correction while underestimating the importance of feature matching, and none of them can comprehensively consider feature correspondences among all runs and achieve optimal matching. RESULTS: To comprehensively analyze feature correspondences among runs, we propose G-Aligner, a graph-based feature alignment method for untargeted LC-MS data. In the feature matching stage, G-Aligner treats features and potential correspondences as nodes and edges in a multipartite graph, formulates the multi-run feature matching problem as an unbalanced multidimensional assignment problem, and provides three combinatorial optimization algorithms to find optimal matching solutions. In comparison with the feature alignment methods in OpenMS, MZmine2, and XCMS on three public metabolomics benchmark datasets, G-Aligner achieved the best feature alignment performance on all three datasets, with up to 9.8% and 26.6% increases in accurately aligned features and analytes, respectively, and helped all comparison software obtain more accurate results on their self-extracted features when G-Aligner was integrated into their analysis workflows. G-Aligner is open source and freely available at https://github.com/CSi-Studio/G-Aligner under a permissive license. Benchmark datasets, manual annotation results, evaluation methods, and results are available at https://doi.org/10.5281/zenodo.8313034. CONCLUSIONS: In this study, we proposed G-Aligner to improve feature matching accuracy for untargeted metabolomics LC-MS data. G-Aligner comprehensively considers potential feature correspondences between all runs, converting the feature matching problem into a multidimensional assignment problem (MAP). In evaluations on three public metabolomics benchmark datasets, G-Aligner achieved the highest alignment accuracy on manually annotated features and on features extracted by popular software, demonstrating the effectiveness and robustness of the algorithm.
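To make the assignment-problem formulation concrete, the following is a simplified sketch of the two-run special case, where feature matching reduces to an unbalanced linear assignment problem; the multi-run, multidimensional problem solved by G-Aligner is strictly more general, and the cost definition below (normalized m/z and retention-time gaps) is an illustrative assumption.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_two_runs(feats_a, feats_b, mz_tol=0.01, rt_tol=30.0):
    """Match features between two runs; feats_*: (n, 2) arrays of (m/z, retention time)."""
    dmz = np.abs(feats_a[:, None, 0] - feats_b[None, :, 0])   # pairwise m/z gaps
    drt = np.abs(feats_a[:, None, 1] - feats_b[None, :, 1])   # pairwise RT gaps
    cost = dmz / mz_tol + drt / rt_tol                        # normalized matching cost
    cost[(dmz > mz_tol) | (drt > rt_tol)] = 1e6               # forbid out-of-tolerance pairs
    rows, cols = linear_sum_assignment(cost)                  # optimal one-to-one matching
    keep = cost[rows, cols] < 1e6
    return list(zip(rows[keep], cols[keep]))
```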


Subject(s)
Software , Tandem Mass Spectrometry , Liquid Chromatography/methods , Tandem Mass Spectrometry/methods , Algorithms , Metabolomics/methods
3.
BMC Med Imaging ; 23(1): 146, 2023 10 02.
Article in English | MEDLINE | ID: mdl-37784025

ABSTRACT

COVID-19, the global pandemic of the twenty-first century, has caused major challenges and setbacks for researchers and medical infrastructure worldwide. COVID-19 affects the patient's respiratory system, causing flooding of the airways in the lungs. Multiple techniques have been proposed since the outbreak, each of which depends on particular features and large training datasets, and consolidating large datasets for accurate and reliable decision support remains challenging. This research article proposes a chest X-ray image classification approach based on feature thresholding for categorizing COVID-19 samples. The proposed approach uses the threshold value-based Feature Extraction (TVFx) technique and has been validated on 661 COVID-19 X-ray images to provide decision support for medical experts. The model has three layers of training datasets to attain a sequential pattern based on various learning features. The aligned feature set of the proposed technique successfully categorized active COVID-19 samples into mild, serious, and extreme categories according to medical standards. The proposed technique achieved an accuracy of 97.42% in categorizing and classifying the given sample sets.


Subject(s)
COVID-19 , Humans , COVID-19/diagnostic imaging , X-Rays , Neural Networks, Computer , Pandemics , Thorax
4.
Sensors (Basel) ; 23(8)2023 Apr 18.
Article in English | MEDLINE | ID: mdl-37112418

ABSTRACT

Face anti-spoofing is critical for enhancing the robustness of face recognition systems against presentation attacks. Existing methods predominantly rely on binary classification tasks. Recently, methods based on domain generalization have yielded promising results; however, because of distribution discrepancies between various domains, domain-related differences in the feature space considerably hinder generalization to features from unfamiliar domains. In this work, we propose a multi-domain feature alignment framework (MADG) that addresses the poor generalization that arises when multiple source domains are scattered in the feature space. Specifically, an adversarial learning process is designed to narrow the differences between domains and align the features of the multiple source domains, resulting in multi-domain alignment. Moreover, to further improve the effectiveness of the proposed framework, we incorporate a multi-directional triplet loss to achieve a higher degree of separation between fake and real faces in the feature space. To evaluate the performance of our method, we conducted extensive experiments on several public datasets. The results demonstrate that our proposed approach outperforms current state-of-the-art methods, validating its effectiveness in face anti-spoofing.
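One standard way to implement the adversarial alignment step described above is a gradient reversal layer feeding a domain discriminator; the sketch below is a generic illustration of that mechanism (layer sizes and names are assumptions, not the MADG implementation). The multi-directional triplet objective could be approximated with PyTorch's built-in `nn.TripletMarginLoss` on (anchor, positive, negative) feature triplets.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reverses (and scales) gradients in the backward pass."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lamb * grad_out, None

class DomainDiscriminator(nn.Module):
    def __init__(self, feat_dim=512, n_domains=3):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(),
                                 nn.Linear(128, n_domains))

    def forward(self, feats, lamb=1.0):
        # Reversed gradients push the feature extractor toward domain-invariant features.
        return self.net(GradReverse.apply(feats, lamb))
```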

5.
Sensors (Basel) ; 23(1)2022 Dec 25.
Article in English | MEDLINE | ID: mdl-36616808

ABSTRACT

Arbitrarily oriented object detection in aerial images is a highly challenging task in computer vision. Mainstream methods are based on the feature pyramid, but for remote-sensing targets the misalignment of multi-scale features remains a thorny problem. In this article, we address the feature misalignment problem of oriented object detection along three dimensions: spatial, axial, and semantic. First, for spatial misalignment, we design an intra-level alignment network based on leading features that synchronizes the location information of different pyramid features through sparse sampling. For multi-oriented aerial targets, we propose an axially aware convolution to resolve the mismatch between the traditional sampling method and the orientation of instances. With the proposed collaborative optimization strategy based on shared weights, these two modules achieve coarse-to-fine feature alignment in the spatial and axial dimensions. Finally, we propose a hierarchy-wise semantic alignment network to address the semantic gap between pyramid features; it copes with remote-sensing targets at varying scales by endowing the feature map with global semantic perception across pyramid levels. Extensive experiments on several challenging aerial benchmarks show state-of-the-art accuracy and appreciable inference speed. Specifically, we achieve a mean Average Precision (mAP) of 78.11% on DOTA, 90.10% on HRSC2016, and 90.29% on UCAS-AOD.

6.
Sensors (Basel) ; 22(20)2022 Oct 17.
Article in English | MEDLINE | ID: mdl-36298241

ABSTRACT

Motion blur recovery is a common task in remote sensing image processing that can effectively improve the accuracy of detection and recognition. Among existing motion blur recovery methods, deep learning algorithms do not rely on prior knowledge and therefore generalize better. However, existing deep learning algorithms usually suffer from feature misalignment, which leads to a high probability of missing details or errors in the recovered images. This paper proposes an end-to-end generative adversarial network (SDD-GAN) for single-image motion deblurring to address this problem and to optimize the recovery of blurred remote sensing images. First, a feature alignment module (FAFM) is applied in the generator to learn the offsets between feature maps, adjusting the position of each sample in the convolution kernel and aligning the feature maps according to context; second, a feature importance selection module is introduced in the generator to adaptively filter the feature maps in the spatial and channel domains, preserving reliable details and improving the performance of the algorithm. In addition, this paper constructs a self-built remote sensing dataset (RSDATA) based on the mechanism of image blurring caused by the high-speed orbital motion of satellites. Comparative experiments are conducted on the self-built remote sensing dataset, on public datasets, and on real blurred remote sensing images taken by an in-orbit satellite (CX-6(02)). The results show that the proposed algorithm outperforms the comparison algorithms in terms of both quantitative evaluation and visual quality.
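The offset-learning behaviour attributed to the FAFM resembles deformable convolution, so a hedged approximation can be sketched with `torchvision.ops.DeformConv2d`: a small convolution predicts per-position sampling offsets and a deformable convolution resamples the feature map accordingly. Channel sizes, structure, and naming are illustrative assumptions, not the SDD-GAN code.

```python
import torch.nn as nn
from torchvision.ops import DeformConv2d

class OffsetAlign(nn.Module):
    """Predicts sampling offsets and realigns a feature map with a deformable convolution."""
    def __init__(self, channels=64, k=3):
        super().__init__()
        self.offset_pred = nn.Conv2d(channels, 2 * k * k, 3, padding=1)  # (dx, dy) per kernel tap
        self.align = DeformConv2d(channels, channels, k, padding=k // 2)

    def forward(self, feat):
        offsets = self.offset_pred(feat)   # learn where to sample for each output position
        return self.align(feat, offsets)   # context-aware realignment of the feature map
```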


Asunto(s)
Algoritmos , Procesamiento de Imagen Asistido por Computador , Procesamiento de Imagen Asistido por Computador/métodos , Movimiento (Física)
7.
Int J Neural Syst ; 34(10): 2450055, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39136190

ABSTRACT

Automatic seizure detection from electroencephalography (EEG) is of great importance in aiding the diagnosis and treatment of epilepsy because of its convenience and economy. Existing seizure detection methods are usually patient-specific: training and testing are carried out on the same patient, which limits scalability to other patients. To address this issue, we propose a cross-subject seizure detection method based on unsupervised domain adaptation. The proposed method aims to obtain seizure-specific information through shallow and deep feature alignment. For shallow feature alignment, we use a convolutional neural network (CNN) to extract seizure-related features, and the distribution gap of the shallow features between patients is minimized by multi-kernel maximum mean discrepancy (MK-MMD). For deep feature alignment, adversarial learning is utilized: the feature extractor learns feature representations that confuse the domain classifier, making the extracted deep features more generalizable to new patients. The performance of our method is evaluated on the CHB-MIT and Siena databases in epoch-based experiments, and event-based experiments are also conducted on the CHB-MIT dataset. The results validate the feasibility of our method in diminishing the domain disparities among different patients.
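MK-MMD is a well-defined quantity, so a minimal sketch of the shallow-alignment loss can be written directly: the squared maximum mean discrepancy between source and target feature batches under a sum of Gaussian kernels. The bandwidth set below is an assumption; the paper's exact kernel choices may differ.

```python
import torch

def mk_mmd(source, target, bandwidths=(1.0, 2.0, 4.0, 8.0)):
    """Biased MMD^2 estimate between (n, d) source and (m, d) target feature batches."""
    def kernel(x, y):
        d2 = torch.cdist(x, y) ** 2                          # pairwise squared distances
        return sum(torch.exp(-d2 / (2 * b ** 2)) for b in bandwidths)
    k_ss = kernel(source, source).mean()
    k_tt = kernel(target, target).mean()
    k_st = kernel(source, target).mean()
    return k_ss + k_tt - 2 * k_st                            # small value => aligned distributions
```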


Subject(s)
Electroencephalography , Neural Networks, Computer , Seizures , Unsupervised Machine Learning , Humans , Electroencephalography/methods , Seizures/diagnosis , Seizures/physiopathology , Deep Learning , Signal Processing, Computer-Assisted
8.
Phys Med Biol ; 69(11)2024 May 23.
Article in English | MEDLINE | ID: mdl-38657628

ABSTRACT

Although the U-shaped architecture, represented by UNet, has become a major network model for brain tumor segmentation, its repeated convolution and sampling operations can easily lead to the loss of crucial information. Additionally, directly fusing features from different levels without distinction can easily result in feature misalignment, affecting segmentation accuracy. Furthermore, the traditional convolutional blocks used for feature extraction cannot capture the abundant multi-scale information present in brain tumor images. This paper proposes a multi-scale feature-aligned segmentation model called GMAlignNet that fully utilizes Ghost convolution to solve these problems. A Ghost hierarchical decoupled fusion unit and a Ghost hierarchical decoupled unit are used instead of standard convolutions in the encoding and decoding paths. This replaces the holistic learning of volume structures by traditional convolutional blocks with multi-level learning on specific views, facilitating the acquisition of abundant multi-scale contextual information through low-cost operations. In addition, a feature alignment unit is proposed that utilizes the semantic information flow to guide the recovery of upsampled features, performing pixel-level semantic correction on features misaligned by feature fusion. The proposed method is also employed to optimize three classic networks, namely DMFNet, HDCNet, and 3D UNet, demonstrating its effectiveness in automatic brain tumor segmentation. On the BraTS 2018 dataset, the proposed GMAlignNet achieved Dice coefficients of 81.65%, 90.07%, and 85.16% for enhancing tumor, whole tumor, and tumor core segmentation, respectively. Moreover, with only 0.29 M parameters and 26.88 G FLOPs, it demonstrates better computational efficiency and has the advantage of being lightweight. Extensive experiments on the BraTS 2018, BraTS 2019, and BraTS 2020 datasets suggest that the proposed model also exhibits better potential in handling edge details and contour recognition.
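For readers unfamiliar with Ghost convolution, the sketch below shows the generic 2D GhostNet-style block that the abstract's units build on: a primary convolution produces part of the output channels and a cheap depthwise operation generates the remaining "ghost" maps. It illustrates only the low-cost principle; GMAlignNet's hierarchical, 3D decoupled units are more elaborate.

```python
import torch
import torch.nn as nn

class GhostConv2d(nn.Module):
    """Primary conv + cheap depthwise op, concatenated to form the full output."""
    def __init__(self, in_ch, out_ch, ratio=2, k=1, dw_k=3):
        super().__init__()
        primary_ch = out_ch // ratio   # assumes out_ch is divisible by ratio
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, primary_ch, k, padding=k // 2, bias=False),
            nn.BatchNorm2d(primary_ch), nn.ReLU(inplace=True))
        self.cheap = nn.Sequential(    # depthwise op that generates the "ghost" maps
            nn.Conv2d(primary_ch, out_ch - primary_ch, dw_k, padding=dw_k // 2,
                      groups=primary_ch, bias=False),
            nn.BatchNorm2d(out_ch - primary_ch), nn.ReLU(inplace=True))

    def forward(self, x):
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)
```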


Subject(s)
Brain Neoplasms , Image Processing, Computer-Assisted , Semantics , Brain Neoplasms/diagnostic imaging , Humans , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging
9.
Diagnostics (Basel) ; 14(16)2024 Aug 12.
Article in English | MEDLINE | ID: mdl-39202240

ABSTRACT

Given the diversity of medical images, traditional image segmentation models face the issue of domain shift. Unsupervised domain adaptation (UDA) methods have emerged as a pivotal strategy for cross-modality analysis. These methods typically utilize generative adversarial networks (GANs) for both image-level and feature-level domain adaptation through the transformation and reconstruction of images, assuming the features between domains are well aligned. However, this assumption falters when there are significant gaps between medical image modalities, such as MRI and CT; these gaps hinder the effective training of segmentation networks with cross-modality images and can lead to misleading training guidance and instability. To address these challenges, this paper introduces a novel approach comprising a cross-modality feature alignment sub-network and a cross-pseudo-supervised dual-stream segmentation sub-network, which work together to bridge domain discrepancies more effectively and ensure a stable training environment. The feature alignment sub-network performs bidirectional alignment of features between the source and target domains and incorporates a self-attention module to aid in learning structurally consistent and relevant information. The segmentation sub-network leverages an enhanced cross-pseudo-supervised loss to harmonize the outputs of the two segmentation networks, assessing pseudo-distances between domains to improve pseudo-label quality and thus the overall learning efficiency of the framework. The method's success is demonstrated by notable advancements in segmentation precision across target domains on abdominal and brain tasks.
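The cross-pseudo-supervision mechanism can be illustrated with a plain CPS-style loss, in which each segmentation stream is trained on the hard pseudo-labels of the other; this generic sketch omits the paper's pseudo-distance weighting and other enhancements.

```python
import torch
import torch.nn.functional as F

def cps_loss(logits_a, logits_b):
    """logits_*: (N, C, H, W) outputs of the two segmentation streams on the same batch."""
    pseudo_a = logits_a.argmax(dim=1).detach()    # hard pseudo-labels from stream A
    pseudo_b = logits_b.argmax(dim=1).detach()    # hard pseudo-labels from stream B
    loss_a = F.cross_entropy(logits_a, pseudo_b)  # A is supervised by B's labels
    loss_b = F.cross_entropy(logits_b, pseudo_a)  # B is supervised by A's labels
    return loss_a + loss_b
```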

10.
Med Biol Eng Comput ; 62(7): 1991-2004, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38429443

ABSTRACT

Detection of suspicious pulmonary nodules from lung CT scans is a crucial task in computer-aided diagnosis (CAD) systems. In recent years, various deep learning-based approaches have been proposed and have demonstrated significant potential for this task. However, existing deep convolutional neural networks exhibit limited long-range dependency modeling and neglect crucial contextual information, resulting in reduced performance in detecting small nodules in CT scans. In this work, we propose a novel end-to-end framework called LGDNet for the detection of suspicious pulmonary nodules in lung CT scans by fusing local features and global representations. To overcome the limited long-range dependency capabilities inherent in convolutional operations, a dual-branch module is designed to integrate a convolutional neural network (CNN) branch that extracts local features with a transformer branch that captures global representations. To further address the misalignment between local features and global representations, an attention gate module is proposed in the up-sampling stage to selectively combine misaligned semantic information from both branches, resulting in more accurate detection of small nodules. Experiments on the large-scale LIDC dataset demonstrate that the proposed LGDNet, with the dual-branch module and attention gate module, significantly improves nodule detection sensitivity, achieving a final competition performance metric (CPM) score of 89.49% and outperforming state-of-the-art nodule detection methods, which indicates its potential for clinical application in the early diagnosis of lung diseases.
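The attention gate can be pictured with an Attention U-Net-style block, in which a gating signal from one branch re-weights the features of the other before fusion; channel sizes, the 2D layout, and the naming are assumptions rather than LGDNet's actual module.

```python
import torch.nn as nn

class AttentionGate(nn.Module):
    """Re-weights local CNN features using a gating signal from the transformer branch."""
    def __init__(self, ch_local, ch_global, ch_inter):
        super().__init__()
        self.w_local = nn.Conv2d(ch_local, ch_inter, 1)
        self.w_global = nn.Conv2d(ch_global, ch_inter, 1)
        self.psi = nn.Sequential(nn.ReLU(inplace=True),
                                 nn.Conv2d(ch_inter, 1, 1), nn.Sigmoid())

    def forward(self, local_feat, global_feat):
        # Assumes both inputs share spatial size; resample beforehand otherwise.
        attn = self.psi(self.w_local(local_feat) + self.w_global(global_feat))
        return local_feat * attn   # suppress misaligned or irrelevant responses
```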


Subject(s)
Lung Neoplasms , Neural Networks, Computer , Tomography, X-Ray Computed , Humans , Tomography, X-Ray Computed/methods , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/diagnosis , Deep Learning , Diagnosis, Computer-Assisted/methods , Solitary Pulmonary Nodule/diagnostic imaging , Algorithms , Multiple Pulmonary Nodules/diagnostic imaging , Lung/diagnostic imaging , Radiographic Image Interpretation, Computer-Assisted/methods
11.
Comput Biol Med ; 171: 108104, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38335821

ABSTRACT

Drug-food interactions (DFIs) crucially impact patient safety and drug efficacy by modifying absorption, distribution, metabolism, and excretion. The application of deep learning to DFI prediction is promising, yet the development of computational models remains in its early stages, mainly because of the complexity of food compounds: dataset developers struggle to acquire comprehensive ingredient data, which often results in incomplete or vague food component descriptions. DFI-MS tackles this issue by employing an accurate feature representation method alongside a refined computational model. It achieves a more precise characterization of food features, a previously daunting task in DFI research, through modules designed for perturbation interactions, feature alignment and domain separation, and inference feedback. These modules extract essential information from features, using a perturbation module and a feature interaction encoder to establish robust representations. The feature alignment and domain separation modules are particularly effective in managing data with diverse frequencies and characteristics. DFI-MS is the first in its field to combine data augmentation, feature alignment, domain separation, and contrastive learning. The flexibility of the inference feedback module allows its application to various downstream tasks. Demonstrating exceptional performance across multiple datasets, DFI-MS represents a significant advancement in food representation technology. Our code and data are available at https://github.com/kkkayle/DFI-MS.
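The contrastive-learning component mentioned above is commonly realized with an InfoNCE-style objective over paired representations; the sketch below shows that generic form, with the temperature and pairing scheme as assumptions rather than the DFI-MS specifics.

```python
import torch
import torch.nn.functional as F

def info_nce(z_a, z_b, temperature=0.1):
    """z_a, z_b: (N, d) embeddings of two views of the same N items (matched by row)."""
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature                     # (N, N) similarity matrix
    labels = torch.arange(z_a.size(0), device=z_a.device)    # positives lie on the diagonal
    return F.cross_entropy(logits, labels)
```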


Subject(s)
Food-Drug Interactions , Food , Humans , Supervised Machine Learning
12.
Int J Neural Syst ; : 2450064, 2024 Sep 23.
Article in English | MEDLINE | ID: mdl-39310980

ABSTRACT

Referring image segmentation aims to accurately align image pixels and text features for object segmentation based on natural language descriptions. This paper proposes NSNPRIS (convolutional nonlinear spiking neural P systems for referring image segmentation), a novel model based on convolutional nonlinear spiking neural P systems. NSNPRIS features NSNPFusion and Language Gate modules to enhance feature interaction during encoding, along with an NSNPDecoder for feature alignment and decoding. Experimental results on the RefCOCO, RefCOCO+, and G-Ref datasets demonstrate that NSNPRIS performs better than mainstream methods. Our contributions include advances in the alignment of pixel and textual features and improved segmentation accuracy.

13.
Phys Med Biol ; 68(17)2023 08 18.
Article in English | MEDLINE | ID: mdl-37541224

ABSTRACT

Objective: This study aims to address the significant challenges posed by pneumothorax segmentation in computed tomography images due to the resemblance between pneumothorax regions and gas-containing structures such as the trachea and bronchus. Approach: We introduce a novel dynamic adaptive windowing transformer (DAWTran) network incorporating implicit feature alignment for precise pneumothorax segmentation. The DAWTran network consists of an encoder module, which employs a DAWTran, and a decoder module. We propose a unique dynamic adaptive windowing strategy that enables multi-head self-attention to effectively capture multi-scale information. The decoder module incorporates an implicit feature alignment function to minimize information deviation. Moreover, we utilize a hybrid loss function to address the imbalance between positive and negative samples. Main results: Our experimental results demonstrate that the DAWTran network significantly improves segmentation performance. Specifically, it achieves a higher Dice similarity coefficient (DSC) of 91.35% (a larger DSC implies better performance), an increase of 2.21% compared to the TransUNet method, and it reduces the Hausdorff distance (HD) to 8.06 mm (a smaller HD implies better performance), a reduction of 29.92% compared to TransUNet. Incorporating the dynamic adaptive windowing (DAW) mechanism enhances DAWTran's performance, leading to a 4.53% increase in DSC and a 15.85% reduction in HD compared to SwinUnet. Applying the implicit feature alignment (IFA) further improves segmentation accuracy, increasing the DSC by an additional 0.11% and reducing the HD by another 10.01% compared to the model employing only DAW. Significance: These results highlight the potential of the DAWTran network for accurate pneumothorax segmentation in clinical applications, suggesting that it could be a valuable tool for improving the precision and effectiveness of diagnosis and treatment in related healthcare scenarios. The improved segmentation performance with the inclusion of DAW and IFA validates the effectiveness of the proposed model and its components.
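As an example of the kind of hybrid loss referred to above, a Dice plus binary cross-entropy combination is a common choice for the positive/negative imbalance of pneumothorax masks; the exact composition and weighting used by DAWTran may differ, so treat this as an illustrative assumption.

```python
import torch
import torch.nn.functional as F

def hybrid_loss(logits, target, smooth=1.0, alpha=0.5):
    """logits: (N, 1, H, W) raw outputs; target: (N, 1, H, W) binary mask."""
    bce = F.binary_cross_entropy_with_logits(logits, target.float())
    probs = torch.sigmoid(logits)
    inter = (probs * target).sum(dim=(1, 2, 3))
    union = probs.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    dice = 1.0 - ((2 * inter + smooth) / (union + smooth)).mean()   # soft Dice loss
    return alpha * bce + (1 - alpha) * dice
```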


Subject(s)
Pneumothorax , Humans , Pneumothorax/diagnostic imaging , Bronchi , Tomography, X-Ray Computed , Trachea , Image Processing, Computer-Assisted
14.
Front Neurorobot ; 17: 1119231, 2023.
Article in English | MEDLINE | ID: mdl-36845064

ABSTRACT

Lightweight semantic segmentation promotes the application of semantic segmentation on tiny devices. Existing lightweight semantic segmentation networks suffer from low precision and large parameter counts. In response to these problems, we designed LSNet, a fully 1D convolutional lightweight semantic segmentation network. The success of this network is attributed to three modules: the 1D multi-layer space module (1D-MS), the 1D multi-layer channel module (1D-MC), and the flow alignment (FA) module. The 1D-MS and 1D-MC add global feature extraction operations based on the multi-layer perceptron (MLP) idea but use 1D convolutional coding, which is more flexible than an MLP; this increases the global information operations and improves the features' coding ability. The FA module fuses high-level and low-level semantic information, solving the precision loss caused by feature misalignment. We designed a 1D-mixer encoder based on the transformer structure, which performs fusion encoding of the spatial information extracted by the 1D-MS module and the channel information extracted by the 1D-MC module. The 1D-mixer obtains high-quality encoded features with very few parameters, which is key to the network's success. The attention pyramid with FA (AP-FA) uses an attention pyramid to decode features and adds an FA module to resolve feature misalignment. Our network requires no pre-training and needs only a 1080Ti GPU for training. It achieved 72.6 mIoU at 95.6 FPS on the Cityscapes dataset and 70.5 mIoU at 122 FPS on the CamVid dataset. We ported the network trained on the ADE20K dataset to mobile devices, where a latency of 224 ms demonstrates its practical value. The results on the three datasets show that the designed network generalizes well. Compared to state-of-the-art lightweight semantic segmentation algorithms, our network achieves the best balance between segmentation accuracy and parameter count. LSNet has only 0.62 M parameters, making it currently the most accurate segmentation network within 1 M parameters.
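Flow-alignment modules of this kind typically predict a two-channel flow field and warp the upsampled semantic features onto the fine-resolution grid before fusion; the sketch below follows that pattern, with the layer sizes, normalized-coordinate convention, and naming being assumptions rather than the LSNet code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FlowAlign(nn.Module):
    """Warps coarse semantic features onto the fine-resolution grid using a learned flow."""
    def __init__(self, ch_high, ch_low):
        super().__init__()
        self.flow = nn.Conv2d(ch_high + ch_low, 2, 3, padding=1)   # per-pixel (dx, dy)

    def forward(self, feat_high, feat_low):
        h, w = feat_high.shape[-2:]
        up = F.interpolate(feat_low, size=(h, w), mode='bilinear', align_corners=False)
        flow = self.flow(torch.cat([feat_high, up], dim=1))        # offsets in normalized grid units
        ys, xs = torch.meshgrid(torch.linspace(-1, 1, h), torch.linspace(-1, 1, w),
                                indexing='ij')
        grid = torch.stack((xs, ys), dim=-1).to(feat_high)         # base sampling grid in [-1, 1]
        grid = grid.unsqueeze(0) + flow.permute(0, 2, 3, 1)        # shift grid by predicted flow
        return F.grid_sample(up, grid, align_corners=False)        # realigned semantic features
```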

15.
Comput Biol Med ; 154: 106570, 2023 03.
Article in English | MEDLINE | ID: mdl-36739819

ABSTRACT

Alzheimer's disease (AD) is the most common form of dementia, and there is currently no effective treatment. Using artificial intelligence to assist diagnosis and intervention as early as possible is of great significance for delaying the progression of AD. Structural Magnetic Resonance Imaging (sMRI) has shown great practical value in computer-aided AD diagnosis. Because data come from different sources or acquisition domains in realistic scenarios, MRI data often suffer from the domain shift problem. In this paper, we propose a deep Prototype-Guided Multi-Scale Domain Adaptation (PMDA) framework to handle MRI data with domain shift and realize automatic auxiliary diagnosis among AD, Mild Cognitive Impairment (MCI), and Cognitively Normal (CN) subjects. PMDA is composed of three modules: (1) the MRI multi-scale feature extraction module combines the advantages of 3D convolution and self-attention to effectively extract multi-scale features in high-dimensional space; (2) the Prototype Maximum Density Divergence (Pro-MDD) module adopts prototype learning to constrain feature outlier samples in a mini-batch when MDD is used to align the source and target domains; and (3) the Adversarial Domain Adaptation module achieves global feature alignment of the source and target domains and co-trains two distinct discriminators to mitigate over-fitting. Experiments have been performed on 3T and 1.5T sMRI with domain shift in the ADNI dataset. The results demonstrate that the proposed PMDA framework outperforms supervised learning methods and several state-of-the-art domain adaptation methods, achieving superior accuracies of 92.11%, 76.01%, and 82.37% on the AD vs. CN, AD vs. MCI, and MCI vs. CN tasks, respectively.
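The prototype-guided constraint builds on class prototypes, i.e. per-class mean feature vectors, against which outlier samples in a mini-batch can be measured; the helper below sketches just that prototype computation and is illustrative rather than the Pro-MDD module itself.

```python
import torch

def class_prototypes(features, labels, n_classes):
    """features: (N, d) batch features; labels: (N,) integer class ids. Returns (n_classes, d)."""
    protos = torch.zeros(n_classes, features.size(1), device=features.device)
    for c in range(n_classes):
        mask = labels == c
        if mask.any():
            protos[c] = features[mask].mean(dim=0)   # mean feature of class c in this batch
    return protos
```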


Subject(s)
Alzheimer Disease , Cognitive Dysfunction , Humans , Alzheimer Disease/diagnostic imaging , Image Interpretation, Computer-Assisted/methods , Artificial Intelligence , Brain/pathology , Magnetic Resonance Imaging/methods , Cognitive Dysfunction/diagnostic imaging
16.
Front Neurorobot ; 16: 823484, 2022.
Article in English | MEDLINE | ID: mdl-35756158

ABSTRACT

The task of sketch face recognition refers to matching cross-modality facial images from sketch to photo, which is widely applied in criminal investigation. Existing works aim to bridge the cross-modality gap with inter-modality feature alignment approaches; however, the small sample problem has received much less attention, resulting in limited performance. In this paper, an effective Cross Task Modality Alignment Network (CTMAN) is proposed for sketch face recognition. To address the small sample problem, a meta-learning episode training strategy is first introduced to mimic few-shot tasks. Based on this episode strategy, a two-stream network termed modality alignment embedding learning is used to capture more modality-specific and modality-sharable features; meanwhile, two cross-task memory mechanisms are proposed to collect sufficient negative features to further improve feature learning. Finally, a cross-task modality alignment loss is proposed to capture modality-related information of cross-task features for more effective training. Extensive experiments validate the superiority of the CTMAN, which significantly outperforms state-of-the-art methods on the UoM-SGFSv2 set A, UoM-SGFSv2 set B, CUFSF, and PRIP-VSGC datasets.
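The episode strategy can be pictured as repeatedly sampling small cross-modality tasks: a few identities are drawn, sketches form the support set and photos form the query set (or vice versa). The sampler below is a loose illustration of that idea only; the way CTMAN actually builds episodes, and its memory mechanisms, are not reproduced here.

```python
import random

def sample_episode(identities, sketches_by_id, photos_by_id, n_way=5, k_shot=1):
    """Returns (support, query) lists of (image_path, identity) pairs for one episode."""
    chosen = random.sample(identities, n_way)          # pick a few identities per episode
    support, query = [], []
    for ident in chosen:
        support += [(p, ident) for p in random.sample(sketches_by_id[ident], k_shot)]
        query += [(p, ident) for p in random.sample(photos_by_id[ident], k_shot)]
    return support, query
```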
