Results 1 - 20 of 71
1.
Nutrients; 16(18), 2024 Sep 19.
Article in English | MEDLINE | ID: mdl-39339760

ABSTRACT

Endothelial dysfunction precedes atherosclerosis and is an independent predictor of cardiovascular diseases (CVDs). Diabetes mellitus impairs endothelial function by triggering oxidative stress and inflammation in vascular tissues. Isoliquiritigenin (ISL), one of the major bioactive ingredients extracted from licorice, has been reported to inhibit inflammation and oxidative stress. However, the therapeutic effects of ISL on ameliorating type 2 diabetes (T2D)-associated endothelial dysfunction remain unknown. In our animal study, db/db male mice were used as a model of T2D-associated endothelial dysfunction, while their heterozygous counterparts, db/m+ male mice, served as controls. Mouse brain microvascular endothelial cells (mBMECs) were used for in vitro experiments, with interleukin-1ß (IL-1ß) used to induce endothelial cell dysfunction. ISL significantly reversed the impairment of endothelium-dependent relaxations (EDRs) in db/db mouse aortas. ISL treatment decreased reactive oxygen species (ROS) levels in db/db mouse aortic sections and IL-1ß-treated endothelial cells. Encouragingly, ISL attenuated the overexpression of the pro-inflammatory factors MCP-1, TNF-α, and IL-6 in db/db mouse aortas and IL-1ß-impaired endothelial cells. NOX2 (NADPH oxidase 2) overexpression was likewise inhibited by ISL treatment. Notably, ISL treatment restored the expression levels of IL-10, SOD1, Nrf2, and HO-1 in db/db mouse aortas and IL-1ß-impaired endothelial cells. This study illustrates, for the first time, that ISL attenuates endothelial dysfunction in T2D mice, offering new insights into the pharmacological effects of ISL. Our findings demonstrate the potential of ISL as a promising therapeutic agent for the treatment of vascular diseases, paving the way for further exploration of novel vascular therapies.


Subject(s)
Chalcones; Diabetes Mellitus, Type 2; Endothelial Cells; Endothelium, Vascular; Glycyrrhiza; Oxidative Stress; Plant Extracts; Animals; Chalcones/pharmacology; Diabetes Mellitus, Type 2/drug therapy; Diabetes Mellitus, Type 2/metabolism; Glycyrrhiza/chemistry; Male; Mice; Endothelium, Vascular/drug effects; Endothelium, Vascular/metabolism; Plant Extracts/pharmacology; Endothelial Cells/drug effects; Endothelial Cells/metabolism; Oxidative Stress/drug effects; Reactive Oxygen Species/metabolism; Aorta/drug effects; Diabetes Mellitus, Experimental/drug therapy; Mice, Inbred C57BL; Interleukin-1beta/metabolism
2.
Article in English | MEDLINE | ID: mdl-39316491

ABSTRACT

Hashing technology has exhibited great cross-modal retrieval potential due to its appealing retrieval efficiency and storage effectiveness. Most current supervised cross-modal retrieval methods heavily rely on accurate semantic supervision, which is intractable for annotations with ever-growing sample sizes. By comparison, the existing unsupervised methods rely on accurate sample similarity preservation strategies with intensive computational costs to compensate for the lack of semantic guidance, which causes these methods to lose the power to bridge the semantic gap. Furthermore, both kinds of approaches need to search for the nearest samples among all samples in a large search space, which is laborious. To address these issues, this paper proposes an unsupervised dual deep hashing (UDDH) method with semantic-index and content-code for cross-modal retrieval. Deep hashing networks are utilized to extract deep features and jointly encode the dual hashing codes in a collaborative manner, with a common semantic index and modality content codes, to simultaneously bridge the semantic and heterogeneous gaps for cross-modal retrieval. The dual deep hashing architecture, comprising a head code on the semantic index and tail codes on the modality content, enhances the efficiency of cross-modal retrieval: a query sample only needs to search among the retrieved samples with the same semantic index, thus greatly shrinking the search space and achieving superior retrieval efficiency. UDDH integrates the learning processes of deep feature extraction, binary optimization, common semantic index, and modality content code within a unified model, allowing for collaborative optimization to enhance the overall performance. Extensive experiments are conducted to demonstrate the retrieval superiority of the proposed approach over the state-of-the-art baselines.
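The head/tail code idea above can be sketched with a toy retrieval index: a short "head" semantic index selects a bucket, and Hamming distance on the "tail" content code ranks only the items inside that bucket. The bucket structure, code lengths, and item names below are illustrative assumptions, not the paper's actual implementation:

```python
# Toy dual-hash retrieval: semantic index -> bucket, content code -> ranking.
from collections import defaultdict

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

class DualHashIndex:
    def __init__(self):
        # semantic index -> list of (item_id, content_code)
        self.buckets = defaultdict(list)

    def add(self, item_id, semantic_index, content_code):
        self.buckets[semantic_index].append((item_id, content_code))

    def search(self, semantic_index, content_code, k=3):
        # Only the bucket sharing the query's semantic index is scanned,
        # shrinking the search space versus a full linear scan.
        bucket = self.buckets.get(semantic_index, [])
        ranked = sorted(bucket, key=lambda it: hamming(it[1], content_code))
        return [item_id for item_id, _ in ranked[:k]]

index = DualHashIndex()
index.add("img_0", semantic_index=2, content_code=0b1010)
index.add("img_1", semantic_index=2, content_code=0b1000)
index.add("txt_7", semantic_index=5, content_code=0b1010)
print(index.search(semantic_index=2, content_code=0b1011))  # → ['img_0', 'img_1']
```

Because the head code is shared across modalities, an image query and a text query with the same semantics land in the same bucket, which is how the sketch mirrors the heterogeneity-bridging role of the common semantic index.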

3.
Article in English | MEDLINE | ID: mdl-39316487

ABSTRACT

Convolutional neural networks (CNNs) have achieved significant performance on various real-life tasks. However, the large number of parameters in convolutional layers requires huge storage and computation resources, making it challenging to deploy CNNs on memory-constrained embedded devices. In this article, we propose a novel compression method that generates the convolution filters in each layer using a set of learnable low-dimensional quantized filter bases. The proposed method reconstructs the convolution filters by stacking the linear combinations of these filter bases. By using quantized values in weights, the compact filters can be represented using fewer bits so that the network can be highly compressed. Furthermore, we explore the sparsity of coefficients through L1-ball projection when conducting linear combination to further reduce the storage consumption and prevent overfitting. We also provide a detailed analysis of the compression performance of the proposed method. Evaluations of image classification and object detection tasks using various network structures demonstrate that the proposed method achieves a higher compression ratio with comparable accuracy compared with the existing state-of-the-art filter decomposition and network quantization methods.
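The reconstruction step can be sketched as follows; the filter-bank shapes, the sign quantization of the bases, and the storage accounting are illustrative assumptions rather than the paper's exact scheme:

```python
# Sketch: convolution filters as linear combinations of quantized filter bases.
import numpy as np

rng = np.random.default_rng(0)
num_filters, num_bases, k = 64, 8, 3  # 64 filters built from 8 bases of size 3x3
# Bases quantized to {-1, +1}: representable with 1 bit per weight.
bases = np.sign(rng.standard_normal((num_bases, k, k)))
# Full-precision mixing coefficients (these are what L1-ball projection would sparsify).
coeffs = rng.standard_normal((num_filters, num_bases))

# Stack linear combinations of the bases to reconstruct the filter bank.
filters = np.einsum("fb,bij->fij", coeffs, bases)
assert filters.shape == (num_filters, k, k)

# Rough storage comparison in bits, ignoring metadata:
full = num_filters * k * k * 32                              # 32-bit filters
compressed = num_bases * k * k * 1 + num_filters * num_bases * 32
print(f"full: {full} bits, compressed: {compressed} bits")
```

The savings in this toy case are modest because the coefficient matrix dominates; they grow when the same small basis set is shared across many larger filters, and further when the coefficients themselves are sparsified.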

4.
Article in English | MEDLINE | ID: mdl-39178069

ABSTRACT

Mild cognitive impairment (MCI) represents an early stage of Alzheimer's disease (AD), characterized by subtle clinical symptoms that pose challenges for accurate diagnosis. The quest for the identification of MCI individuals has highlighted the importance of comprehending the underlying mechanisms of disease causation. Integrated analysis of brain imaging and genomics offers a promising avenue for predicting MCI risk before clinical symptom onset. However, most existing methods face challenges in: 1) mining the brain network-specific topological structure and addressing the single nucleotide polymorphism (SNP)-related noise contamination and 2) extracting the discriminative properties of brain imaging genomics, resulting in limited accuracy for MCI diagnosis. To this end, a modality-aware discriminative fusion network (MA-DFN) is proposed to integrate the complementary information from brain imaging genomics to diagnose MCI. Specifically, we first design two modality-specific feature extraction modules: the graph convolutional network with edge-augmented self-attention module (GCN-EASA) and the deep adversarial denoising autoencoder module (DAD-AE), to capture the topological structure of brain networks and the intrinsic distribution of SNPs. Subsequently, a discriminative-enhanced fusion network with correlation regularization module (DFN-CorrReg) is employed to enhance inter-modal consistency and between-class discrimination in brain imaging and genomics. Compared to other state-of-the-art approaches, MA-DFN not only exhibits superior performance in stratifying cognitively normal (CN) and MCI individuals but also identifies disease-related brain regions and risk SNP loci, which hold potential as putative biomarkers for MCI diagnosis.

5.
Article in English | MEDLINE | ID: mdl-39141461

ABSTRACT

Traditional clustering methods rely on pairwise affinity to divide samples into different subgroups. However, high-dimension, low-sample-size (HDLSS) data are affected by concentration effects, rendering traditional pairwise metrics unable to accurately describe relationships between samples and leading to suboptimal clustering results. This article proposes employing high-order affinities to characterize multiple-sample relationships as a strategic means to circumvent the concentration effects. We establish a nexus between affinities of different orders by constructing specialized decomposable high-order affinities, thereby formulating a uniform mathematical framework. Building upon this insight, a novel clustering method named uniform tensor clustering (UTC) is proposed, which learns a consensus low-dimensional embedding for clustering by the synergistic exploitation of multiple-order affinities. Extensive experiments on synthetic and real-world datasets demonstrate two findings: 1) high-order affinities are better suited for characterizing sample relationships in complex data and 2) reasonable use of affinities of different orders can enhance clustering effectiveness, especially in handling high-dimensional data.
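A minimal sketch of a decomposable high-order affinity follows; the Gaussian pairwise kernel and the product rule linking the two orders are illustrative assumptions, not necessarily the paper's construction:

```python
# Building a decomposable third-order affinity from pairwise similarities,
# so that affinities of different orders are explicitly linked.
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((5, 4))  # 5 samples, 4 features

# Second-order (pairwise) affinity via a Gaussian kernel.
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
A2 = np.exp(-d2 / d2.mean())

# Decomposable third-order affinity: triple (i, j, k) is as similar as the
# product of its pairwise affinities.
A3 = np.einsum("ij,jk,ki->ijk", A2, A2, A2)
assert A3.shape == (5, 5, 5)

# Decomposability check on one triple:
i, j, k = 0, 1, 2
assert np.isclose(A3[i, j, k], A2[i, j] * A2[j, k] * A2[k, i])
```

Because each tensor entry factors into pairwise terms, the high-order structure can be analyzed (and, in a full method, spectrally decomposed) without treating the tensor as an arbitrary object.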

6.
Article in English | MEDLINE | ID: mdl-38980782

ABSTRACT

Tensor spectral clustering (TSC) is a recently proposed approach to robustly group data into underlying clusters. Unlike traditional spectral clustering (SC), which merely uses pairwise similarities of data in an affinity matrix, TSC explores their multiwise similarities in an affinity tensor to achieve better performance. However, the performance of TSC highly relies on the design of the multiwise similarities, which remains unclear, especially for high-dimension, low-sample-size (HDLSS) data. To this end, this article proposes a discriminating TSC (DTSC) for HDLSS data. Specifically, DTSC uses the proposed discriminating affinity tensor that encodes pair-to-pair similarities, which are particularly constructed by the anchor-based distance. HDLSS asymptotic analysis shows that the proposed affinity tensor can explicitly differentiate samples from different clusters when the feature dimension is large. This theoretical property allows DTSC to improve the clustering performance on HDLSS data. Experimental results on synthetic and benchmark datasets demonstrate the effectiveness and robustness of the proposed method in comparison to several baseline methods.

7.
Article in English | MEDLINE | ID: mdl-38949943

ABSTRACT

The broad learning system (BLS), featuring lightweight computation, incremental extension, and strong generalization, has been successful in many applications. Despite these advantages, BLS struggles in multitask learning (MTL) scenarios: existing BLS models cannot adequately capture and leverage the essential information shared across tasks, limiting their ability to unravel multiple complex tasks simultaneously and decreasing their effectiveness and efficiency in MTL. To address these limitations, we propose an innovative MTL framework explicitly designed for BLS, named group sparse regularization for the broad multitask learning system using related task-wise learning (BMtLS-RG). This framework combines a task-related BLS learning mechanism with a group sparse optimization strategy, significantly boosting BLS's ability to generalize in MTL environments. The task-related learning component harnesses task correlations to enable shared learning and optimize parameters efficiently, while the group sparse optimization approach minimizes the effects of irrelevant or noisy data, enhancing the robustness and stability of BLS in complex learning scenarios. To address the varied requirements of MTL challenges, we present two additional variants: BMtLS-RGf, which shares the parameters of the feature-mapped nodes through a shared feature mapping layer, and BMtLS-RGfe, which further adds an enhanced node layer atop the shared feature mapping structure. These adaptations provide customized solutions tailored to the diverse landscape of MTL problems. We compared BMtLS-RG with state-of-the-art (SOTA) MTL and BLS algorithms through comprehensive experiments on multiple practical MTL and UCI datasets. BMtLS-RG outperformed SOTA methods in 97.81% of classification tasks and achieved optimal performance in 96.00% of regression tasks, demonstrating superior accuracy and robustness. Furthermore, BMtLS-RG exhibited satisfactory training efficiency, running 8.04-42.85 times faster than existing MTL algorithms.

8.
Article in English | MEDLINE | ID: mdl-38691434

ABSTRACT

This article studies an emerging practical problem called heterogeneous prototype learning (HPL). Unlike the conventional heterogeneous face synthesis (HFS) problem that focuses on precisely translating a face image from a source domain to another target one without removing facial variations, HPL aims at learning the variation-free prototype of an image in the target domain while preserving the identity characteristics. HPL is a compounded problem involving two cross-coupled subproblems, that is, domain transfer and prototype learning (PL), thus making most of the existing HFS methods that simply transfer the domain style of images unsuitable for HPL. To tackle HPL, we advocate disentangling the prototype and domain factors in their respective latent feature spaces and then replacing the source domain with the target one for generating a new heterogeneous prototype. In doing so, the two subproblems in HPL can be solved jointly in a unified manner. Based on this, we propose a disentangled HPL framework, dubbed DisHPL, which is composed of one encoder-decoder generator and two discriminators. The generator and discriminators play adversarial games such that the generator embeds contaminated images into a prototype feature space only capturing identity information and a domain-specific feature space, while generating realistic-looking heterogeneous prototypes. Experiments on various heterogeneous datasets with diverse variations validate the superiority of DisHPL.

9.
Article in English | MEDLINE | ID: mdl-38652619

ABSTRACT

Cross-modal hashing (CMH) has attracted considerable attention in recent years. Almost all existing CMH methods primarily focus on reducing the modality gap and semantic gap, i.e., aligning multi-modal features and their semantics in Hamming space, without taking into account the space gap, i.e., the difference between the real number space and the Hamming space. In fact, the space gap can affect the performance of CMH methods. In this paper, we analyze and demonstrate how the space gap affects the existing CMH methods, raising two problems: solution space compression and loss function oscillation. These two problems eventually cause the retrieval performance to deteriorate. Based on these findings, we propose a novel algorithm, namely Semantic Channel Hashing (SCH). First, we classify sample pairs into fully semantic-similar, partially semantic-similar, and semantic-negative ones based on their similarity and impose different constraints on them, respectively, to ensure that the entire Hamming space is utilized. Then, we introduce a semantic channel to alleviate the issue of loss function oscillation. Experimental results on three public datasets demonstrate that SCH outperforms the state-of-the-art methods. Furthermore, experimental validations are provided to substantiate the conjectures regarding solution space compression and loss function oscillation, offering visual evidence of their impact on CMH methods. Code is available at https://github.com/hutt94/SCH.

10.
IEEE Trans Pattern Anal Mach Intell; 46(10): 6795-6808, 2024 Oct.
Article in English | MEDLINE | ID: mdl-38593012

ABSTRACT

Graph-based multi-view clustering encodes multi-view data into sample affinities to find a consensus representation, effectively overcoming heterogeneity across different views. However, traditional affinity measures tend to collapse as the feature dimension expands, posing challenges in estimating a unified alignment that reveals both cross-view and inner relationships. To tackle this challenge, we propose to achieve multi-view uniform clustering via consensus representation co-regularization. First, the sample affinities are encoded by both the popular dyadic affinity and recent high-order affinities to comprehensively characterize the spatial distributions of high-dimension, low-sample-size (HDLSS) data. Second, a fused consensus representation is learned by aligning the multi-view low-dimensional representations through co-regularization. The learning of the fused representation is modeled by a high-order eigenvalue problem within manifold space to preserve the intrinsic connections and complementary correlations of the original data. A numerical scheme via manifold minimization is designed to solve the high-order eigenvalue problem efficaciously. Experiments on eight HDLSS datasets demonstrate the effectiveness of our proposed method in comparison with thirteen recent benchmark methods.

11.
IEEE Trans Pattern Anal Mach Intell; 46(7): 5080-5091, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38315604

ABSTRACT

Tensor spectral clustering (TSC) is an emerging approach that explores multi-wise similarities to boost learning. However, two key challenges have yet to be well addressed in the existing TSC methods: (1) the construction and storage of high-order affinity tensors to encode the multi-wise similarities are memory-intensive and hamper their applicability, and (2) they mostly employ a two-stage approach that integrates multiple affinity tensors of different orders to learn a consensus tensor spectral embedding, often leading to a suboptimal clustering result. To this end, this paper proposes a tensor spectral clustering network (TSC-Net) to achieve one-stage learning of a consensus tensor spectral embedding while reducing the memory cost. TSC-Net employs a deep neural network that learns to map the input samples to the consensus tensor spectral embedding, guided by a TSC objective with multiple affinity tensors. It uses stochastic optimization to compute only a small part of the affinity tensors, thereby avoiding loading the whole affinity tensors for computation and thus significantly reducing the memory cost. By ensembling multiple affinity tensors, TSC-Net can dramatically improve clustering performance. Empirical studies on benchmark datasets demonstrate that TSC-Net outperforms the recent baseline methods.
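The memory-saving idea, computing only a sampled minibatch of affinity-tensor entries instead of materializing the full tensor, can be sketched as follows; the kernel and the uniform triple sampling are assumptions for illustration:

```python
# Sketch: entries of a third-order affinity tensor computed on the fly for a
# sampled minibatch of triples, so the O(N^3) tensor is never stored.
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((100, 8))  # 100 samples, 8 features

def triple_affinity(i, j, k):
    # Entry (i, j, k) of the (never materialized) affinity tensor.
    d = (np.linalg.norm(X[i] - X[j]) ** 2
         + np.linalg.norm(X[j] - X[k]) ** 2
         + np.linalg.norm(X[k] - X[i]) ** 2)
    return np.exp(-d)

# One stochastic step touches 32 entries instead of 100**3 = 1,000,000.
batch = rng.integers(0, len(X), size=(32, 3))
values = np.array([triple_affinity(i, j, k) for i, j, k in batch])

assert values.shape == (32,)
assert np.all((0 < values) & (values <= 1))
```

In a full method these sampled entries would feed the gradient of the TSC objective; the sketch only shows why per-batch evaluation removes the need to hold the whole tensor in memory.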

12.
Article in English | MEDLINE | ID: mdl-38289837

ABSTRACT

Partial multilabel learning (PML) addresses the issue of noisy supervision, in which each training instance is annotated with an overcomplete set of candidate labels, only a subset of which is valid. Using label enhancement techniques, researchers have computed the probability of a label being ground truth. However, enhancing labels in the noisy label space makes it impossible for the existing partial multilabel label enhancement methods to achieve satisfactory results. Besides, few methods simultaneously address the ambiguity problem, the redundancy of the feature space, and the model's efficiency in PML. To address these issues, this article presents a novel joint partial multilabel framework using broad learning systems (namely BLS-PML) with three innovative mechanisms: 1) a trustworthy label space is reconstructed through a novel label enhancement method to avoid the bias caused by noisy labels; 2) a low-dimensional feature space is obtained by a confidence-based dimensionality reduction method to reduce the effect of redundancy in the feature space; and 3) a noise-tolerant BLS is proposed by adding a dimensionality reduction layer and a trustworthy label layer to deal with the PML problem. We evaluated it on six real-world and seven synthetic datasets, using eight state-of-the-art partial multilabel algorithms as baselines and six evaluation metrics. Our method significantly outperforms the baselines in about 80% of the 144 experimental scenarios, demonstrating its robustness and effectiveness in handling partial multilabel tasks.

13.
IEEE Trans Pattern Anal Mach Intell; 46(5): 3637-3652, 2024 May.
Article in English | MEDLINE | ID: mdl-38145535

ABSTRACT

In multi-view environments, limitations of the observation process often yield missing observations. Most current representation learning methods struggle to explore the complete information: they are either not cross-generative, simply filling in the missing view data, or not solidative, merely inferring a consistent representation among the existing views. To address this problem, we propose a deep generative model to learn a complete generative latent representation, namely Complete Multi-view Variational Auto-Encoders (CMVAE), which models the generation of the multiple views from a complete latent variable represented by a mixture of Gaussian distributions. Thus, the missing view can be fully characterized by the latent variables and is resolved by estimating its posterior distribution. Accordingly, a novel variational lower bound is introduced to integrate view-invariant information into posterior inference to enhance the solidity of the learned latent representation. The intrinsic correlations between views are mined to seek cross-view generality, and information leading to missing views is fused by view weights to reach solidity. Benchmark experimental results in clustering, classification, and cross-view image generation tasks demonstrate the superiority of CMVAE, while time complexity and parameter sensitivity analyses illustrate its efficiency and robustness. Additionally, application to bioinformatics data exemplifies its practical significance.

14.
Article in English | MEDLINE | ID: mdl-37566497

ABSTRACT

Mounting evidence shows that Alzheimer's disease (AD) manifests as brain network dysfunction well before the onset of clinical symptoms, making early diagnosis possible. Current brain network analyses treat high-dimensional network data as a regular matrix or vector, which destroys the essential network topology and thereby seriously affects diagnosis accuracy. In this context, harmonic waves provide a solid theoretical background for exploring brain network topology. However, harmonic waves were originally intended to discover neurological disease propagation patterns in the brain, which makes it difficult to accommodate brain disease diagnosis with high heterogeneity. To address this challenge, this article proposes a network manifold harmonic discriminant analysis (MHDA) method for accurately detecting AD. Each brain network is regarded as an instance drawn on a Stiefel manifold. Every instance is represented by a set of orthonormal eigenvectors (i.e., harmonic waves) derived from its Laplacian matrix, which fully respects the topological structure of the brain network. An MHDA method within the Stiefel space is proposed to identify the group-dependent common harmonic waves, which can be used as group-specific references for downstream analyses. Extensive experiments are conducted to demonstrate the effectiveness of the proposed method in stratifying cognitively normal (CN) controls, mild cognitive impairment (MCI), and AD.
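The harmonic-wave construction described above (orthonormal eigenvectors of a network's graph Laplacian) can be sketched on a toy graph; the small adjacency matrix stands in for a real brain connectivity matrix:

```python
# Sketch: harmonic waves of a network as eigenvectors of its graph Laplacian.
import numpy as np

# Toy undirected network on 4 nodes (placeholder for brain connectivity).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(1)) - A  # combinatorial graph Laplacian

eigvals, harmonics = np.linalg.eigh(L)  # columns are the harmonic waves

# The eigenvector matrix is orthonormal, i.e., a point on a Stiefel manifold,
# and the smallest eigenvalue of a connected graph's Laplacian is 0.
assert np.allclose(harmonics.T @ harmonics, np.eye(4), atol=1e-8)
assert abs(eigvals[0]) < 1e-8
```

Each brain network thus maps to one such orthonormal frame, which is what makes a Stiefel-manifold treatment of the collection of networks natural.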

15.
Article in English | MEDLINE | ID: mdl-37079407

ABSTRACT

Quality prediction is beneficial to intelligent inspection, advanced process control, operation optimization, and product quality improvement in complex industrial processes. Most of the existing work obeys the assumption that training samples and testing samples follow similar data distributions. The assumption is, however, not true for practical multimode processes with dynamics. In practice, traditional approaches mostly establish a prediction model using the samples from the principal operating mode (POM), which has abundant samples; the model is inapplicable to other modes with few samples. In view of this, this article proposes a novel dynamic latent variable (DLV)-based transfer learning approach, called transfer DLV regression (TDLVR), for quality prediction of multimode processes with dynamics. The proposed TDLVR can not only derive the dynamics between process variables and quality variables in the POM but also extract the co-dynamic variations among process variables between the POM and the new mode. This can effectively overcome the data marginal distribution discrepancy and enrich the information of the new mode. To make full use of the available labeled samples from the new mode, an error compensation mechanism is incorporated into the established TDLVR, termed compensated TDLVR (CTDLVR), to adapt to the conditional distribution discrepancy. Empirical studies show the efficacy of the proposed TDLVR and CTDLVR methods in several case studies, including numerical simulation examples and two real-industrial process examples.

16.
Article in English | MEDLINE | ID: mdl-37021983

ABSTRACT

The scene classification of remote sensing (RS) images plays an essential role in the RS community, aiming to assign semantics to different RS scenes. With the increase in spatial resolution of RS images, high-resolution RS (HRRS) image scene classification becomes a challenging task because the contents within HRRS images are diverse in type, various in scale, and massive in volume. Recently, deep convolutional neural networks (DCNNs) have provided promising results for HRRS scene classification. Most of them regard HRRS scene classification as a single-label problem, so that the semantics represented by the manual annotation directly decide the final classification results. Although this is feasible, the various semantics hidden in HRRS images are ignored, resulting in inaccurate decisions. To overcome this limitation, we propose a semantic-aware graph network (SAGN) for HRRS images. SAGN consists of a dense feature pyramid network (DFPN), an adaptive semantic analysis module (ASAM), a dynamic graph feature update module, and a scene decision module (SDM), which respectively extract multi-scale information, mine the various semantics, exploit the unstructured relations between diverse semantics, and make the decision for HRRS scenes. Instead of transforming single-label problems into multi-label issues, our SAGN elaborates proper methods to make full use of the diverse semantics hidden in HRRS images to accomplish scene classification. Extensive experiments are conducted on three popular HRRS scene data sets, and the results show the effectiveness of the proposed SAGN.

17.
Front Nutr; 10: 1060226, 2023.
Article in English | MEDLINE | ID: mdl-37025617

ABSTRACT

Background: Cardiovascular diseases (CVDs) have been the major cause of mortality in type 2 diabetes. However, new approaches are still warranted, since current diabetic medications, which focus mainly on glycemic control, do not effectively lower the cardiovascular mortality rate in diabetic patients. Protocatechuic acid (PCA) is a phenolic acid widely distributed in garlic, onion, cauliflower and other plant-based foods. Given the anti-oxidative effects of PCA in vitro, we hypothesized that PCA would also have direct beneficial effects on endothelial function, in addition to the systemic effects on vascular health demonstrated by previous studies. Methods and results: Direct incubation of db/db mouse aortas with a physiological concentration of PCA significantly ameliorated the impairment of endothelium-dependent relaxation, as well as the overproduction of reactive oxygen species, mediated by diabetes. Since IL-1ß is the major pathological contributor to endothelial dysfunction in diabetes, the endothelium-specific anti-inflammatory effects of PCA were further verified using an IL-1ß-induced inflammation model. Beyond its well-studied anti-oxidative activity, PCA demonstrated strong anti-inflammatory effects, suppressing the pro-inflammatory mediators MCP1, VCAM1 and ICAM1 and increasing the phosphorylation of eNOS and Akt in the inflammatory endothelial cell model induced by IL-1ß, the key player in diabetic endothelial dysfunction. Upon blocking of Akt phosphorylation, p-eNOS/eNOS remained low and the inhibition of pro-inflammatory mediators by PCA ceased. Conclusion: PCA protects vascular endothelial function against inflammation through the Akt/eNOS pathway, suggesting that daily intake of PCA may be encouraged for diabetic patients.

18.
Cells; 12(4), 2023 Feb 19.
Article in English | MEDLINE | ID: mdl-36831329

ABSTRACT

Progress has been made in identifying stem cell aging as a pathological manifestation of a variety of diseases, including obesity. Adipose stem cells (ASCs) play a core role in adipocyte turnover, which maintains tissue homeostasis. Given aberrant lineage determination as a feature of stem cell aging, failure in adipogenesis is a culprit of adipose hypertrophy, resulting in adiposopathy and related complications. In this review, we elucidate how ASCs fail to enter the adipogenic lineage, with a specific focus on extracellular signaling pathways, epigenetic drift, metabolic reprogramming, and mechanical stretch. Nonetheless, such detrimental alterations can be reversed by guiding ASCs towards adipogenesis. Considering the pathological role of ASC aging in obesity, targeting adipogenesis as an anti-obesity treatment will be a key area of future research, and strategies to rejuvenate tissue stem cells may be capable of alleviating metabolic syndrome.


Subject(s)
Adipocytes; Adipose Tissue; Humans; Adipose Tissue/metabolism; Adipocytes/metabolism; Adipogenesis; Stem Cells/metabolism; Aging; Obesity/metabolism
19.
IEEE Trans Neural Netw Learn Syst; 34(2): 867-881, 2023 Feb.
Article in English | MEDLINE | ID: mdl-34403349

ABSTRACT

Single sample per person face recognition (SSPP FR) is one of the most challenging problems in FR due to the extreme lack of enrolment data. To date, the most popular SSPP FR methods are the generic learning methods, which recognize query face images based on the so-called prototype plus variation (i.e., P+V) model. However, the classic P+V model suffers from two major limitations: 1) it linearly combines the prototype and variation images in the observational pixel-spatial space and cannot generalize to multiple nonlinear variations, e.g., poses, which are common in face images and 2) it would be severely impaired once the enrolment face images are contaminated by nuisance variations. To address the two limitations, it is desirable to disentangle the prototype and variation in a latent feature space and to manipulate the images in a semantic manner. To this end, we propose a novel disentangled prototype plus variation model, dubbed DisP+V, which consists of an encoder-decoder generator and two discriminators. The generator and discriminators play two adversarial games such that the generator nonlinearly encodes the images into a latent semantic space, where the more discriminative prototype feature and the less discriminative variation feature are disentangled. Meanwhile, the prototype and variation features can guide the generator to generate an identity-preserved prototype and the corresponding variation, respectively. Experiments on various real-world face datasets demonstrate the superiority of our DisP+V model over the classic P+V model for SSPP FR. Furthermore, DisP+V demonstrates its unique characteristics in both prototype recovery and face editing/interpolation.


Subject(s)
Algorithms; Neural Networks, Computer; Humans; Face; Pattern Recognition, Automated/methods
20.
IEEE Trans Cybern; 53(6): 3688-3701, 2023 Jun.
Article in English | MEDLINE | ID: mdl-35427226

ABSTRACT

Reversible data hiding in ciphertext has potential applications for privacy protection and transmitting extra data in a cloud environment. For instance, an original plain-text image can be recovered from the encrypted image generated after data embedding, while the embedded data can be extracted before or after decryption. However, homomorphic processing can hardly be applied to an encrypted image with hidden data to generate the desired image. This is partly because the image content may be changed by preprocessing and/or data embedding. Even if the corresponding plain-text pixel values are kept unchanged by lossless data hiding, the hidden data will be destroyed by outer processing. To address this issue, a lossless data hiding method called random element substitution (RES) is proposed for the Paillier cryptosystem by substituting the to-be-hidden bits for the random element of a cipher value. Moreover, the RES method is combined with another preprocessing-free algorithm to generate two schemes for lossless data hiding in encrypted images. With either scheme, a processed image is obtained after the encrypted image undergoes processing in the homomorphic encrypted domain. Besides retrieving part of the hidden data without image decryption, the data hidden with the RES method can be extracted after decryption, even after some processing has been conducted on the encrypted images. The experimental results show the efficacy and superior performance of the proposed schemes.
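The RES principle, hiding data in the random element of a Paillier ciphertext without disturbing the plaintext, can be sketched with toy parameters; the tiny primes and the direct encoding of the payload as the random element r are illustrative assumptions, not the paper's full scheme:

```python
# Toy Paillier cryptosystem: c = g^m * r^n mod n^2, with g = n + 1.
# Replacing r leaves the decrypted plaintext m untouched, so r can carry
# hidden bits; the key holder can later recover r (and thus the payload).
from math import gcd, lcm

p, q = 11, 13                      # toy primes; real keys are 1024+ bits
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = lcm(p - 1, q - 1)
mu = pow(lam, -1, n)               # decryption constant, valid because g = n + 1

def encrypt(m, r):
    assert gcd(r, n) == 1
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n) * mu % n

m = 42
c = encrypt(m, r=7)                # ordinary ciphertext
hidden = 5                         # payload, encoded directly as the random element
c_hidden = encrypt(m, r=hidden)    # RES: same plaintext, new random element

assert decrypt(c) == decrypt(c_hidden) == m   # hiding is lossless for the plaintext

# Extraction with the private key: c mod n equals r^n mod n, and raising to
# n^{-1} mod lambda(n) inverts the n-th power, recovering r.
r_rec = pow(c_hidden % n, pow(n, -1, lam), n)
assert r_rec == hidden
```

The sketch shows only why substituting r is transparent to decryption and reversible for the key holder; combining this with homomorphic processing of the image, as the paper does, requires the additional machinery described in the abstract.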
