Results 1 - 20 of 25
1.
J Environ Manage ; 368: 122125, 2024 Aug 08.
Article in English | MEDLINE | ID: mdl-39121621

ABSTRACT

Digital industrialization, represented by big data, provides substantial support for the high-quality development of the digital economy, but its impact on urban energy conservation development requires further research. To this end, based on panel data for Chinese cities from 2010 to 2019 and taking the establishment of the national big data comprehensive pilot zone (NBDCPZ) as a quasi-natural experiment, this paper explores the impact, mechanisms, and spatial spillover effects of digital industrialization on urban energy conservation development using the Difference-in-Differences (DID) method. The results show that digital industrialization helps achieve urban energy conservation development, a finding that holds after a series of robustness tests. Mechanism analysis reveals that digital industrialization affects urban energy conservation development by driving industrial sector output growth, promoting industrial upgrading, stimulating green technology innovation, and alleviating resource misallocation. Heterogeneity analysis indicates that the energy conservation effect of digital industrialization is more significant in the central region, intra-regional demonstration comprehensive pilot zones, large cities, non-resource-based cities, and cities with high-level digital infrastructure. Additionally, digital industrialization can promote energy conservation development in neighboring areas through spatial spillover effects. This paper enriches the theoretical framework concerning the relationship between digital industrialization and energy conservation development, and the findings have significant implications for the coordinated development of digitalization and conservation.
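As an illustration of the identification strategy described above, here is a minimal two-way fixed-effects DID sketch in Python; the panel is synthetic, and the column names, designation year, and effect size are hypothetical stand-ins, not the paper's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for c in range(60):                       # 60 hypothetical cities
    treated = c < 20                      # first 20 are "pilot-zone" cities
    for y in range(2010, 2020):
        post = y >= 2016                  # hypothetical designation year
        effect = 0.5 if (treated and post) else 0.0
        rows.append(dict(city=c, year=y,
                         treated_post=int(treated and post),
                         energy=effect + 0.1 * c + 0.05 * (y - 2010)
                                + rng.normal(scale=0.3)))
panel = pd.DataFrame(rows)

# City and year fixed effects absorb time-invariant city traits and common
# shocks; the coefficient on treated_post is the DID estimate.
fit = smf.ols("energy ~ treated_post + C(city) + C(year)", data=panel).fit(
    cov_type="cluster", cov_kwds={"groups": panel["city"]})
print(fit.params["treated_post"])   # recovers roughly the planted 0.5
```

Clustering standard errors by city is the usual choice for city-level panel treatments of this kind.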

2.
Phys Chem Chem Phys ; 24(34): 20390-20399, 2022 Aug 31.
Article in English | MEDLINE | ID: mdl-35983852

ABSTRACT

We present a reverse design method for designing and analyzing metamaterial absorbers, and we demonstrate its power by designing both a narrowband and a wideband absorber. The method determines the absorber structure from an equivalent-circuit model. The narrowband metamaterial absorber designed this way has an absorption fraction greater than 90% over a 500 nm bandwidth centered at about 1450 nm. To extend the absorption bandwidth, the narrowband structure is adjusted based on the equivalent-circuit model, yielding a broadband metamaterial absorber. The numerical results show that the absorption bandwidth is substantially increased: the absorbance exceeds 90% over a band nearly reaching the limits of our experiment, from about 400 nm (near-ultraviolet) to about 2800 nm (deep infrared). Owing to its asymmetric structure, the absorption spectrum of the wideband absorber is more sensitive to the incident polarization angle, though the whole band remains essentially polarization-independent. For oblique incidence at a large angle of 60° (TM polarization), the average absorption of the broadband absorber reaches 81%. The physical mechanism of the wideband high absorption is analyzed; it is mainly caused by Fabry-Perot resonance, surface plasmon resonance, localized surface plasmon resonance, and the hybrid coupling among them. The proposed high-broadband-absorption design has significant potential for thermoelectric and thermal emitters, solar thermal energy harvesting, and invisibility device applications.
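To make the equivalent-circuit idea concrete, here is a toy Python sketch computing absorption from impedance matching, A = 1 - |Γ|² with Γ = (Z_in - Z_0)/(Z_in + Z_0), for a generic series-RLC input impedance. The circuit values are illustrative placeholders, not the paper's fitted parameters.

```python
import numpy as np

Z0 = 377.0                       # free-space impedance (ohms)
R, L, C = 300.0, 1e-15, 2e-20    # hypothetical series-RLC parameters

wavelengths = np.linspace(400e-9, 2800e-9, 500)
omega = 2 * np.pi * 3e8 / wavelengths           # angular frequency
Zin = R + 1j * omega * L + 1 / (1j * omega * C) # series-RLC input impedance

gamma = (Zin - Z0) / (Zin + Z0)        # reflection coefficient
absorption = 1 - np.abs(gamma) ** 2    # assumes a ground plane blocks transmission
```

Fitting R, L, and C against simulated spectra is what lets the reverse design run from circuit to structure rather than by trial and error.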

3.
J Phys Chem A ; 125(47): 10223-10234, 2021 Dec 02.
Article in English | MEDLINE | ID: mdl-34788032

ABSTRACT

Quantitative rate determination of elementary reactions is a major task in the study of chemical kinetics. To ensure fidelity, progressively tighter constraints need to be placed on their measurement, especially with the development of various notable experimental techniques. However, the evaluation of reaction rates and their uncertainties is frequently conducted with substantial subjectivity arising from data sources, thermodynamic conditions, sampling range, and sparsity. To reduce the extent of biased rate evaluation, we propose an approach of uncertainty-weighted statistical analysis, utilizing weighted averages and weighted least-squares regression in statistical inference. Based on the backbone H2/O2 chemistry, rate data for each elementary reaction are collected from time-history profiles in shock tube experiments and from high-level theoretical calculations, with weights inversely dependent on uncertainty, which overall avoids subjective assessments and provides more accurate rate evaluation. Aided by sensitivity analysis, the rates of a few key reactions are further constrained in the less investigated low- to intermediate-temperature conditions using high-fidelity flow reactor data. Good performance of the constructed mechanism is confirmed by validation against the high-fidelity flow reactor data. This study demonstrates a systematic approach for reaction rate evaluation and uncertainty quantification.
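The weighting scheme described above can be sketched in a few lines of Python: an uncertainty-weighted mean with weights 1/σ², and a weighted least-squares Arrhenius fit. All numbers below are made up for illustration.

```python
import numpy as np

# Uncertainty-weighted mean of rate measurements at one temperature.
k_obs = np.array([1.2e12, 1.5e12, 1.1e12])   # cm^3 mol^-1 s^-1 (toy values)
sigma = np.array([0.3e12, 0.6e12, 0.2e12])   # 1-sigma uncertainties
w = 1.0 / sigma**2
k_weighted = np.sum(w * k_obs) / np.sum(w)

# Weighted Arrhenius fit: ln k = ln A - Ea/(R T).
T = np.array([1000.0, 1200.0, 1500.0, 1800.0])       # K (toy values)
k = np.array([2.0e11, 8.5e11, 3.1e12, 7.9e12])
sig_lnk = np.array([0.30, 0.15, 0.10, 0.20])         # uncertainty of ln k
# np.polyfit's weights multiply the residuals, so pass w = 1/sigma.
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1, w=1.0 / sig_lnk)
Ea = -slope * 8.314    # activation energy, J/mol
A = np.exp(intercept)  # pre-exponential factor
```

Down-weighting noisy measurements this way is exactly what keeps a single uncertain data source from dominating the evaluated rate.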

4.
Nano Lett ; 14(9): 5238-43, 2014 Sep 10.
Article in English | MEDLINE | ID: mdl-25102376

ABSTRACT

We address here the need for a general strategy to control molecular assembly over multiple length scales. Efficient organic photovoltaics require an active layer composed of mesoscale interconnected networks of nanoscale semiconductor aggregates. We demonstrate a method, using principles of molecular self-assembly and geometric packing, for controlled assembly of semiconductors at the nanoscale and mesoscale. Nanoparticles of poly(3-hexylthiophene) (P3HT) or [6,6]-phenyl-C61-butyric acid methyl ester (PCBM) were fabricated with targeted sizes, as were nanoparticles containing a blend of both P3HT and PCBM. The active layer morphology was tuned by changing the particle composition, particle radii, and the ratio of P3HT to PCBM particles. Photovoltaic devices fabricated from these aqueous nanoparticle dispersions showed device performance comparable to typical bulk-heterojunction devices. Our strategy opens a pathway to study and tune the active layer morphology systematically while exercising control of the component assembly at multiple length scales.

5.
IEEE Trans Image Process ; 33: 4261-4273, 2024.
Article in English | MEDLINE | ID: mdl-38954580

ABSTRACT

Conventional image set methods typically learn from small to medium-sized image set datasets. When applied to large-scale image set applications such as classification and retrieval, however, they face two primary challenges: 1) effectively modeling complex image sets and 2) efficiently performing these tasks. To address these issues, we propose a novel Multiple Riemannian Kernel Hashing (MRKH) method that leverages Riemannian manifolds and hashing for effective and efficient image set representation. MRKH represents each image set on multiple heterogeneous Riemannian manifolds. It introduces a multiple kernel learning framework that effectively combines statistics from multiple manifolds and constructs kernels from a small set of anchor points, enabling efficient scaling to large applications. In addition, MRKH exploits inter- and intra-modal semantic structure to enhance discrimination. Instead of representing each image set with continuous features, MRKH learns a hash code for each image set, achieving efficient computation and storage. We present an iterative optimization algorithm with a theoretical convergence guarantee, whose computational complexity is linear in the dataset size. Extensive experiments on five image set benchmark datasets, including three large-scale ones, demonstrate that the proposed method outperforms state-of-the-art methods in accuracy and efficiency, particularly in large-scale image set classification and retrieval.
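A rough Python sketch of the anchor-based multiple-kernel idea named above: kernels to a small anchor set are computed on each manifold representation, combined with weights, and binarized into codes. The shapes, combination weights, and projection matrix are illustrative stand-ins for the learned quantities, not MRKH's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_to_anchors(X, anchors, gamma=0.1):
    # kernel between each set-level representation and a small anchor set
    d2 = ((X[:, None, :] - anchors[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

n, m, dims, bits = 200, 16, (32, 48), 24
views = [rng.normal(size=(n, d)) for d in dims]   # two manifold statistics per set
anchors = [rng.normal(size=(m, d)) for d in dims] # m anchors, not n, keeps it cheap
beta = np.array([0.6, 0.4])                       # kernel-combination weights

K = sum(b * rbf_to_anchors(X, A) for b, X, A in zip(beta, views, anchors))
W = rng.normal(size=(m, bits))    # stand-in for the learned projection
codes = np.sign(K @ W)            # one binary hash code per image set
```

Because kernels are built against m anchors rather than all n sets, both training and query cost stay linear in the dataset size.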

6.
Article in English | MEDLINE | ID: mdl-39028597

ABSTRACT

Cross-modal hashing encodes different modalities of multimodal data into a low-dimensional Hamming space for fast cross-modal retrieval. In multi-label cross-modal retrieval, multimodal data are often annotated with multiple labels, and some labels, e.g., "ocean" and "cloud", often co-occur. However, existing cross-modal hashing methods overlook label dependency, which is crucial for improving performance. To fill this gap, this article proposes graph convolutional multi-label hashing (GCMLH) for effective multi-label cross-modal retrieval. Specifically, GCMLH first generates a word embedding of each label and develops a label encoder that learns highly correlated label embeddings via a graph convolutional network (GCN). In addition, GCMLH develops a feature encoder for each modality, and a feature fusion module that generates highly semantic features via GCN. GCMLH uses a teacher-student learning scheme to transfer knowledge from the teacher modules, i.e., the label encoder and feature fusion module, to the student module, i.e., the feature encoder, so that the learned hash codes exploit both multi-label dependency and multimodal semantic structure. Extensive empirical results on several benchmarks demonstrate the superiority of the proposed method over existing state-of-the-art methods.
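For readers unfamiliar with the GCN building block the label encoder is described as using, here is a minimal sketch of one propagation step over a label co-occurrence graph. The adjacency matrix and dimensions are toy values, not GCMLH's.

```python
import numpy as np

L, d = 5, 8                                  # number of labels, embedding dim
A = np.array([[0, 1, 0, 0, 1],               # toy label co-occurrence adjacency
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 0, 0, 1, 0]], dtype=float)
A_hat = A + np.eye(L)                        # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt     # symmetric normalization

H = np.random.default_rng(0).normal(size=(L, d))  # label word embeddings
W = np.random.default_rng(1).normal(size=(d, d))  # layer weights
H_next = np.maximum(A_norm @ H @ W, 0.0)     # one GCN layer: ReLU(Â H W)
```

Each layer mixes a label's embedding with those of its co-occurring neighbors, which is how correlated labels such as "ocean" and "cloud" end up with correlated embeddings.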

7.
IEEE Trans Cybern ; 53(10): 6236-6247, 2023 Oct.
Article in English | MEDLINE | ID: mdl-35604988

ABSTRACT

Deep hashing reaps the benefits of deep learning and hashing technology and has become the mainstream approach for large-scale image retrieval. It generally encodes images into hash codes while preserving feature similarity, that is, geometric structure, and achieves promising retrieval results. In this article, we find that this geometric-structure-preservation approach inadequately ensures feature discrimination, yet the feature discrimination of hash codes essentially determines retrieval performance. This observation spurs us to propose a discriminative geometric-structure-based deep hashing method (DGDH), which investigates three novel loss terms based on class centers to induce a discriminative geometric structure. In detail, a margin-aware center loss assembles samples of the same class around their class centers for intraclass compactness, a linear classifier based on the class centers boosts interclass separability, and a radius loss places the class centers on a hypersphere to reduce quantization errors. An efficient alternating optimization algorithm with guaranteed convergence is proposed to optimize DGDH, and we theoretically analyze the robustness and generalization of the proposed method. Experiments on five popular benchmark datasets demonstrate superior image retrieval performance of DGDH over several state-of-the-art methods.
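As a hedged sketch of the first ingredient above, here is one plausible form of a margin-aware center loss in Python: features are pulled within a margin of their class center, and the centers themselves receive gradients so they can be learned jointly. The exact loss in DGDH may differ.

```python
import torch

def margin_center_loss(features, labels, centers, margin=0.5):
    # features: (n, d), labels: (n,), centers: (num_classes, d)
    diffs = features - centers[labels]       # offset from each sample's own center
    d2 = (diffs ** 2).sum(dim=1)             # squared distance to the center
    return torch.clamp(d2 - margin, min=0.0).mean()   # no penalty within margin

n, d, num_classes = 32, 16, 4
feats = torch.randn(n, d)
labs = torch.randint(0, num_classes, (n,))
centers = torch.randn(num_classes, d, requires_grad=True)

loss = margin_center_loss(feats, labs, centers)
loss.backward()   # centers get gradients and can be optimized with the network
```

The margin keeps the loss from collapsing every class to a single point, which would fight the quantization step that follows.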

8.
Article in English | MEDLINE | ID: mdl-37028051

ABSTRACT

With the development of video networks, image set classification (ISC) has received much attention and can be used in various practical applications, such as video-based recognition and action recognition. Although existing ISC methods have obtained promising performance, they often have extremely high complexity. Owing to its advantages in storage space and complexity cost, learning to hash is a powerful solution. However, existing hashing methods often ignore the complex structural information and hierarchical semantics of the original features. They usually adopt a single-layer hashing strategy that transforms high-dimensional data into short binary codes in one step; this sudden drop in dimension can lose advantageous discriminative information. In addition, they do not take full advantage of the intrinsic semantic knowledge in whole gallery sets. To tackle these problems, we propose a novel Hierarchical Hashing Learning (HHL) method for ISC. Specifically, a coarse-to-fine hierarchical hashing scheme uses a two-layer hash function to gradually refine the beneficial discriminative information in a layer-wise fashion. To alleviate the effects of redundant and corrupted features, we impose the ℓ2,1 norm on the layer-wise hash function. Moreover, we adopt a bidirectional semantic representation with an orthogonal constraint to adequately preserve the intrinsic semantic information of all samples in whole image sets. Comprehensive experiments demonstrate that HHL achieves significant improvements in accuracy and running time. We will release the demo code at https://github.com/sunyuan-cs.
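A small Python sketch of the two ingredients named above: a two-layer (coarse-to-fine) hash mapping instead of a one-step projection, and the ℓ2,1 norm used to regularize each layer. Dimensions and matrices are toy stand-ins, not HHL's learned values.

```python
import numpy as np

def l21_norm(W):
    # sum of row-wise l2 norms; drives whole rows (features) toward zero,
    # which suppresses redundant or corrupted input dimensions
    return np.sqrt((W ** 2).sum(axis=1)).sum()

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 512))   # high-dimensional set-level features
W1 = rng.normal(size=(512, 128))  # layer 1: gradual dimension reduction
W2 = rng.normal(size=(128, 32))   # layer 2: refinement to short codes

H1 = np.tanh(X @ W1)              # intermediate relaxed codes
B = np.sign(H1 @ W2)              # final 32-bit binary codes
reg = l21_norm(W1) + l21_norm(W2) # l2,1 penalty on both hash layers
```

Going through an intermediate 128-dimensional stage, rather than 512 to 32 in one step, is what the abstract means by avoiding the "sudden drop" in dimension.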

9.
IEEE Trans Image Process ; 32: 5992-6003, 2023.
Article in English | MEDLINE | ID: mdl-37903046

ABSTRACT

Video hashing learns compact representations by mapping videos into a low-dimensional Hamming space and has achieved promising performance in large-scale video retrieval. Effectively exploiting temporal and spatial structure in an unsupervised setting, however, remains challenging. To fill this gap, this paper proposes Contrastive Transformer Hashing (CTH) for effective video retrieval. Specifically, CTH develops a bidirectional transformer autoencoder on which a visual reconstruction loss is built; it captures bidirectional correlations among frames more powerfully than conventional unidirectional models. In addition, CTH devises a multi-modality contrastive loss to reveal intrinsic structure among videos: it constructs inter-modality and intra-modality triplet sets and exploits inter-modality and intra-modality similarities simultaneously. We perform video retrieval on four benchmark datasets, i.e., UCF101, HMDB51, SVW30, and FCVID, using the learned compact hash representations, and extensive empirical results demonstrate that CTH outperforms several state-of-the-art video hashing methods.
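To illustrate the triplet-style contrastive objective in the abstract, here is a toy Python version over two modalities; the real CTH loss differs in its details, and the tensors below are random stand-ins for frame and hash embeddings.

```python
import torch
import torch.nn.functional as F

def triplet_contrastive(anchor, positive, negative, margin=0.2):
    # cosine-distance triplet loss: pull the positive pair together,
    # push the negative pair at least `margin` further away
    pos = 1 - F.cosine_similarity(anchor, positive)
    neg = 1 - F.cosine_similarity(anchor, negative)
    return torch.clamp(pos - neg + margin, min=0.0).mean()

a = torch.randn(8, 64)   # anchors from one modality
p = torch.randn(8, 64)   # positives: same video, same or other modality
n = torch.randn(8, 64)   # negatives: different videos
loss = triplet_contrastive(a, p, n)
```

Building triplets both within a modality and across modalities is what lets a single loss exploit intra- and inter-modality similarity simultaneously.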

10.
Urol J ; 20(4): 208-214, 2023 Jul 26.
Article in English | MEDLINE | ID: mdl-36840447

ABSTRACT

PURPOSE: To describe the feasibility of a computed tomography (CT)-ultrasound image fusion technique for guiding percutaneous kidney access in vitro and in vivo. MATERIALS AND METHODS: We compared the CT-ultrasound image fusion technique with ultrasound alone for percutaneous kidney puncture guidance using an in vitro pig kidney model. The fusion method, fusion time, ultrasound screening time, and puncture success rate were compared between the groups. Next, patients with kidney stones at our hospital were randomized in a study of simulated puncture guidance. The general condition of patients, fusion method, fusion time, and ultrasound screening time were compared between the groups. RESULTS: A total of 45 pig models were established, including 23 in the CT-ultrasound group and 22 in the ultrasound group. The ultrasound screening time in the CT-ultrasound group was significantly shorter than in the ultrasound group (P < .001), and the puncture success rate was significantly higher (P = .015). Furthermore, in the simulated percutaneous nephrolithotomy (PCNL) puncture study, baseline data including age, BMI, and S.T.O.N.E score showed no statistically significant difference between the two groups. The ultrasound screening times of the two groups were (2.60 ± 0.33) min and (3.37 ± 0.51) min, respectively, a statistically significant difference (P < .001). CONCLUSION: Our research revealed that the CT-ultrasound image fusion technique is a feasible and safe method for guiding PCNL puncture. Compared with traditional ultrasound guidance, it can shorten the learning curve of PCNL puncture, improve the puncture success rate, and shorten the ultrasound screening time.


Subjects
Kidney Calculi, Percutaneous Nephrolithotomy, Percutaneous Nephrostomy, Animals, Kidney/diagnostic imaging, Kidney Calculi/diagnostic imaging, Kidney Calculi/surgery, Percutaneous Nephrolithotomy/methods, Percutaneous Nephrostomy/methods, Swine, X-Ray Computed Tomography, Ultrasonography, Humans
11.
IEEE Trans Image Process ; 31: 6471-6486, 2022.
Article in English | MEDLINE | ID: mdl-36223352

ABSTRACT

In the field of image set classification, most existing works focus on exploiting effective latent discriminative features, but handling the problem efficiently remains a research gap. In this paper, benefiting from hashing's low computational complexity and memory cost, we present a novel Discrete Metric Learning (DML) approach based on the Riemannian manifold for fast image set classification. The proposed DML jointly learns a metric in the induced space and a compact Hamming space, where efficient classification is carried out. Specifically, each image set is modeled as a point on a Riemannian manifold, after which DML minimizes the Hamming distance between similar Riemannian pairs and maximizes the Hamming distance between dissimilar ones by introducing a discriminative Mahalanobis-like matrix. To overcome DML's reliance on vectorizing the Riemannian representations, we further develop Bilinear Discrete Metric Learning (BDML), which directly manipulates the original Riemannian representations and exploits the natural matrix structure of high-dimensional data. Unlike conventional Riemannian metric learning methods, which require complicated Riemannian optimizations (e.g., Riemannian conjugate gradient), both DML and BDML can be efficiently optimized by computing the geodesic mean between the similarity matrix and the inverse of the dissimilarity matrix. Extensive experiments on different visual recognition tasks (face recognition, object recognition, and action recognition) demonstrate that the proposed methods achieve competitive performance in terms of accuracy and efficiency.
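The closed-form optimizer mentioned above reduces to the geodesic (midpoint) mean of two symmetric positive definite (SPD) matrices, A #_{1/2} B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}. A minimal Python sketch with random SPD stand-ins for the similarity and dissimilarity matrices:

```python
import numpy as np
from scipy.linalg import sqrtm, inv

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 6)); A = X @ X.T + 6 * np.eye(6)  # similarity matrix (SPD)
Y = rng.normal(size=(6, 6)); B = Y @ Y.T + 6 * np.eye(6)  # dissimilarity matrix (SPD)

A_half = np.real(sqrtm(A))          # sqrtm may carry tiny imaginary residue
A_half_inv = inv(A_half)
inner = np.real(sqrtm(A_half_inv @ inv(B) @ A_half_inv))
M = A_half @ inner @ A_half         # geodesic mean of A and inv(B)
```

Because this is a closed-form matrix computation, it sidesteps the iterative Riemannian conjugate-gradient machinery that conventional manifold metric learning requires.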

12.
IEEE Trans Pattern Anal Mach Intell ; 44(11): 7955-7974, 2022 11.
Article in English | MEDLINE | ID: mdl-34637378

ABSTRACT

Exabytes of data are generated daily by humans, creating a growing need for new efforts to deal with the grand challenges that big data brings to multi-label learning. For example, extreme multi-label classification is an active and rapidly growing research area that deals with classification tasks involving an extremely large number of classes or labels, and utilizing massive data with limited supervision to build multi-label classification models is becoming valuable for practical applications. Beyond these, tremendous effort has gone into harnessing the learning capability of deep learning to better capture label dependencies in multi-label learning, which is key for deep learning to address real-world classification tasks. However, there has been a lack of systematic studies that focus explicitly on analyzing the emerging trends and new challenges of multi-label learning in the era of big data. This article provides a comprehensive survey to fulfil that mission and delineate future research directions and new applications.


Subjects
Big Data, Machine Learning, Humans, Machine Learning/trends
13.
IEEE Trans Cybern ; 52(7): 5842-5854, 2022 Jul.
Article in English | MEDLINE | ID: mdl-33449897

ABSTRACT

Gaussian process classification (GPC) provides a flexible and powerful statistical framework describing joint distributions over function space. Conventional GPCs, however, suffer from: 1) poor scalability for big data due to the full kernel matrix and 2) intractable inference due to the non-Gaussian likelihoods. Hence, various scalable GPCs have been proposed through: 1) sparse approximation built upon a small inducing set to reduce the time complexity and 2) approximate inference to derive an analytical evidence lower bound (ELBO). However, these scalable GPCs equipped with analytical ELBOs are limited to specific likelihoods or require additional assumptions. In this work, we present a unifying framework that accommodates scalable GPCs using various likelihoods. Analogous to GP regression (GPR), we introduce additive noise to augment the probability space for: 1) GPCs with step, (multinomial) probit, and logit likelihoods via internal variables and 2) particularly, the GPC using the softmax likelihood via the noise variables themselves. This leads to unified scalable GPCs with analytical ELBOs obtained by variational inference. Empirically, our GPCs showcase superior performance on extensive binary/multiclass classification tasks with up to two million data points.


Subjects
Normal Distribution, Probability
14.
Int J Syst Evol Microbiol ; 61(Pt 5): 1133-1137, 2011 May.
Article in English | MEDLINE | ID: mdl-20543152

ABSTRACT

Two closely related, Gram-stain-negative, rod-shaped, spore-forming strains, B27(T) and F6-B70, were isolated from soil samples of Tianmu Mountain National Nature Reserve in Zhejiang, China. Phylogenetic analysis based on 16S rRNA gene and rpoB sequences indicated that the isolates are members of the genus Paenibacillus. Both isolates were closely related to Paenibacillus ehimensis IFO 15659(T), Paenibacillus elgii SD17(T) and Paenibacillus koreensis YC300(T) (≥ 95.2% 16S rRNA gene sequence similarity). DNA-DNA relatedness between strain B27(T) and P. ehimensis DSM 11029(T), P. elgii NBRC 100335(T) and P. koreensis KCTC 2393(T) was 21.2%, 28.6% and 16.8%, respectively. The major cellular fatty acids of strains B27(T) and F6-B70 were anteiso-C(15:0) and iso-C(15:0), and the cell wall contained meso-diaminopimelic acid. The two isolates differed from their closest neighbours in phenotypic characteristics and cellular fatty acid profiles (variable oxidase activity, a negative methyl red test, inability to produce acid from D-fructose and glycogen, and relatively higher amounts of iso-C(15:0) with lower amounts of C(16:0) and iso-C(16:0)). Strains B27(T) and F6-B70 therefore represent a novel species of the genus Paenibacillus, for which the name Paenibacillus tianmuensis sp. nov. is proposed. The type strain is B27(T) (= DSM 22342(T) = CGMCC 1.8946(T)).


Subjects
Paenibacillus/classification, Paenibacillus/isolation & purification, Soil Microbiology, China, Bacterial DNA/genetics, Fatty Acids/metabolism, Molecular Sequence Data, Paenibacillus/genetics, Paenibacillus/metabolism, Phylogeny, 16S Ribosomal RNA/genetics
15.
Stem Cell Res Ther ; 12(1): 543, 2021 10 18.
Article in English | MEDLINE | ID: mdl-34663464

ABSTRACT

BACKGROUND: Periodontal disease, an oral disease characterized by loss of alveolar bone and progressive tooth loss, is the sixth major complication of diabetes; it is prevalent worldwide and difficult to cure. The insulin-like growth factor 1 receptor (IGF-1R) plays an important role in regulating functional impairment associated with diabetes, but it is unclear whether IGF-1R expression in periodontal tissue is related to alveolar bone destruction in diabetic patients. SUMO modification has been reported in various diseases and is associated with an increasing number of biological processes, but previous studies have not focused on diabetic periodontitis. This study aimed to explore the role of IGF-1R in the osteogenic differentiation of periodontal ligament stem cells (PDLSCs) under high glucose and in controlling multiple downstream damage signaling factors. METHODS: PDLSCs were isolated and cultured after extraction of impacted teeth from healthy donors or subtractive orthodontic extraction in adolescents. PDLSCs were cultured in osteogenic medium with different glucose concentrations prepared from medical 5% sterile glucose solution. The effects of different glucose concentrations on the osteogenic differentiation ability of PDLSCs were studied at the genetic and cellular levels by staining assays, western blot, RT-PCR, Co-IP and cytofluorescence. RESULTS: We found that expression of SNAI2 and RUNX2, which are osteogenesis-related markers, decreased in PDLSCs cultured in high-glucose osteogenic medium compared with normal-glucose osteogenic medium. In addition, IGF-1R expression, IGF-1R sumoylation and osteogenic differentiation in PDLSCs cultured in high-glucose osteogenic medium differed from those in normal-glucose osteogenic medium. However, the osteogenic differentiation of PDLSCs was enhanced after adding IGF-1R inhibitors to the high-glucose osteogenic medium. CONCLUSION: Our data demonstrate that SUMO1 modification of IGF-1R inhibits the osteogenic differentiation of PDLSCs by binding to SNAI2 in a high-glucose environment, a key factor leading to alveolar bone loss in diabetic patients. Controlling the multiple downstream damage signaling factors of this axis may bring new hope for alveolar bone regeneration in diabetic patients.


Subjects
Osteogenesis, Periodontal Ligament, IGF Type 1 Receptor/genetics, SUMO-1 Protein/genetics, Snail Family Transcription Factors, Adolescent, Cell Differentiation, Cultured Cells, Glucose/pharmacology, Humans, Snail Family Transcription Factors/genetics, Stem Cells
16.
IEEE Trans Neural Netw Learn Syst ; 31(11): 4405-4423, 2020 Nov.
Article in English | MEDLINE | ID: mdl-31944966

ABSTRACT

The vast quantity of information brought by big data, together with evolving computer hardware, has driven success stories in the machine learning community. Meanwhile, it poses challenges for Gaussian process regression (GPR), a well-known nonparametric and interpretable Bayesian model, which suffers from cubic complexity in the data size. To improve scalability while retaining desirable prediction quality, a variety of scalable GPs have been presented, but they have not yet been comprehensively reviewed and analyzed so as to be well understood by both academia and industry. Given the explosion of data, a review of scalable GPs is timely and important. To this end, this article reviews state-of-the-art scalable GPs in two main categories: global approximations that distill the entire data and local approximations that divide the data for subspace learning. For global approximations, we mainly focus on sparse approximations, comprising prior approximations that modify the prior but perform exact inference, posterior approximations that retain the exact prior but perform approximate inference, and structured sparse approximations that exploit specific structures in the kernel matrix; for local approximations, we highlight the mixture/product of experts, which conducts model averaging over multiple local experts to boost predictions. To present a complete review, recent advances for improving the scalability and capability of scalable GPs are covered. Finally, extensions and open issues of scalable GPs in various scenarios are discussed to inspire novel ideas for future research avenues.
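As a concrete instance of the sparse-approximation family described above, here is a short Python sketch of the Nyström-style low-rank kernel approximation built on m inducing points, K ≈ K_nm K_mm^{-1} K_mn, which cuts the dominant cost from O(n³) toward O(n m²). The kernel and data are toy choices.

```python
import numpy as np

def rbf(X, Y, lengthscale=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

rng = np.random.default_rng(0)
n, m = 2000, 50                       # n data points, m << n inducing points
X = rng.uniform(-3, 3, size=(n, 1))
Z = np.linspace(-3, 3, m)[:, None]    # inducing inputs on a grid

Knm = rbf(X, Z)                       # cross-covariance, n x m
Kmm = rbf(Z, Z) + 1e-6 * np.eye(m)    # jitter for numerical stability
K_approx = Knm @ np.linalg.solve(Kmm, Knm.T)   # rank-m surrogate for K
```

All the downstream algebra then works with the m x m matrix rather than the full n x n kernel, which is the essence of the prior/posterior sparse approximations the review categorizes.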

17.
IEEE Trans Neural Netw Learn Syst ; 31(7): 2409-2429, 2020 07.
Article in English | MEDLINE | ID: mdl-31714241

ABSTRACT

The aim of multi-output learning is to simultaneously predict multiple outputs given an input. It is an important learning problem for decision-making, since real-world decisions often involve multiple complex factors and criteria. In recent times, an increasing number of research studies have focused on ways to predict multiple outputs at once. Such efforts have taken different forms according to the particular multi-output learning problem under study; classic cases include multi-label learning, multi-dimensional learning, and multi-target regression, among others. In our survey of the topic, we were struck by a lack of studies that generalize the different forms of multi-output learning into a common framework. This article fills that gap with a comprehensive review and analysis of the multi-output learning paradigm. In particular, taking inspiration from big data, we characterize the four Vs of multi-output learning, i.e., volume, velocity, variety, and veracity, and the ways in which the four Vs both benefit and challenge multi-output learning. We analyze the life cycle of output labeling, present the main mathematical definitions of multi-output learning, and examine the field's key challenges and corresponding solutions as found in the literature. Several model evaluation metrics and popular data repositories are also discussed. Last but not least, we highlight some emerging challenges of multi-output learning from the perspective of the four Vs as potential research directions worthy of further study.

18.
IEEE Trans Image Process ; 28(2): 577-588, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30222564

ABSTRACT

Hyperspectral imagery (HSI) has shown promising results in real-world applications. However, the technological evolution of optical sensors poses two main challenges in HSI classification: 1) the spectral bands are usually redundant and noisy and 2) HSI with millions of pixels has become increasingly common in real-world applications. Motivated by the recent success of hybrid huberized support vector machines (HHSVMs), which inherit the benefits of both lasso and ridge regression, this paper first investigates the advantages of HHSVMs for HSI applications. Unfortunately, existing HHSVM solvers suffer from prohibitive computational costs on large-scale datasets. To solve this problem, this paper proposes simple and effective stochastic HHSVM algorithms for HSI classification. In the stochastic setting, we show that, with probability at least 1 − δ, our algorithms find an ε-accurate solution within a bounded number of iterations. Since the convergence rate of our algorithms does not depend on the size of the training set, they are suitable for handling large-scale problems. We demonstrate the superiority of our algorithms by conducting experiments on large-scale binary and multiclass classification problems against state-of-the-art HHSVM solvers. Finally, we apply our algorithms to real HSI classification and achieve promising results.
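A hedged Python sketch of the ingredients named above: stochastic subgradient steps on a huberized hinge loss with an elastic-net (lasso + ridge) penalty. The step-size rule, huber threshold, and synthetic data are illustrative choices, not the paper's algorithm verbatim.

```python
import numpy as np

def huber_hinge_grad(w, x, y, delta=2.0):
    # huberized hinge: quadratic near the hinge corner, linear beyond it
    m = y * (x @ w)                      # classification margin
    if m >= 1:
        return np.zeros_like(w)          # correctly classified with margin
    if m >= 1 - delta:
        return -(1 - m) / delta * y * x  # smoothed (quadratic) region
    return -y * x                        # linear region

rng = np.random.default_rng(0)
n, d = 5000, 20
X = rng.normal(size=(n, d))
y = np.sign(X[:, 0] + 0.3 * rng.normal(size=n))   # toy linearly separable-ish labels

w, lam1, lam2 = np.zeros(d), 1e-4, 1e-3   # l1 (lasso) and l2 (ridge) weights
for t in range(1, 20001):
    i = rng.integers(n)                   # sample one example per step
    g = huber_hinge_grad(w, X[i], y[i]) + lam2 * w + lam1 * np.sign(w)
    w -= g / (lam2 * t)                   # Pegasos-style 1/(lambda*t) step size
```

Because each step touches a single example, the per-iteration cost is independent of n, which mirrors the abstract's claim that convergence does not depend on the training-set size.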

19.
IEEE Trans Neural Netw Learn Syst ; 29(9): 4324-4338, 2018 09.
Article in English | MEDLINE | ID: mdl-29990175

ABSTRACT

Embedding methods have shown promising performance in multilabel prediction, as they are able to discover label dependence. However, most methods ignore the correlations between the input and output, so their learned embeddings are not well aligned, which degrades prediction performance. This paper presents a formulation for multilabel learning from the perspective of cross-view learning that explores the correlations between the input and the output. The proposed method, called Co-Embedding (CoE), jointly learns a semantic common subspace and view-specific mappings within one framework. The semantic similarity structure among the embeddings is further preserved, ensuring that close embeddings share similar labels. Additionally, CoE conducts multilabel prediction through a cross-view k-nearest-neighbor (kNN) search among the learned embeddings, which significantly reduces computational costs compared with conventional decoding schemes. A hashing-based model, Co-Hashing (CoH), is further proposed: building on CoE, it imposes binary constraints on the continuous latent embeddings. CoH generates compact binary representations that improve prediction efficiency by exploiting the efficient kNN search over multiple labels in Hamming space. Extensive experiments on various real-world datasets demonstrate the superiority of the proposed methods over state-of-the-art methods in terms of both prediction accuracy and efficiency.
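To show why the Hamming-space kNN step is cheap, here is a small Python sketch: XOR plus popcount over packed binary codes gives Hamming distances without any floating-point work. The codes below are random stand-ins, not CoH's learned representations.

```python
import numpy as np

rng = np.random.default_rng(0)
bits, n = 64, 10000
db = rng.integers(0, 2, size=(n, bits), dtype=np.uint8)  # database codes
q = rng.integers(0, 2, size=bits, dtype=np.uint8)        # query code

packed_db = np.packbits(db, axis=1)    # 8 bytes per 64-bit code
packed_q = np.packbits(q)
xor = np.bitwise_xor(packed_db, packed_q)           # differing bits
hamming = np.unpackbits(xor, axis=1).sum(axis=1)    # popcount per item

k = 5
topk = np.argsort(hamming)[:k]   # indices of the k nearest binary codes
```

Distances over the whole database reduce to a handful of integer operations per item, which is the efficiency gain CoH trades the continuous embeddings for.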
