Results 1 - 7 of 7
1.
J Hazard Mater ; 472: 134456, 2024 Jul 05.
Article in English | MEDLINE | ID: mdl-38703678

ABSTRACT

Exposure to toxic chemicals threatens species and ecosystems. This study introduces a novel approach that uses Graph Neural Networks (GNNs) to integrate aquatic toxicity data, providing a complement to traditional in vivo ecotoxicity testing. It pioneers the application of GNNs in ecotoxicology by formulating the problem as a relation prediction task. GRAPE's key innovation lies in simultaneously modelling 444 aquatic species and 2826 chemicals within a single graph, leveraging relations from existing datasets and augmenting them with informative species and chemical features to make informed predictions. Extensive evaluations demonstrate the superiority of GRAPE over Logistic Regression (LR) and Multi-Layer Perceptron (MLP) models, with recall improvements of up to 30%. GRAPE consistently outperforms LR and MLP in predicting effects for novel chemicals and new species, with recall improvements of ≥ 100% for novel chemicals and up to 13% for new species; it correctly predicts the effects of 104 out of 126 novel chemicals and the effects on 7 out of 8 new species. Moreover, the study highlights the effectiveness of the proposed chemical features and the GNN-induced network topology for accurately predicting metallic (74 out of 86) and organic (612 out of 674) chemicals, showcasing the broad applicability and robustness of GRAPE in ecotoxicological investigations. The code/data are provided at https://github.com/csiro-robotics/GRAPE. (A minimal sketch of the relation-prediction formulation follows this record.)


Subject(s)
Ecotoxicology ; Neural Networks, Computer ; Animals ; Water Pollutants, Chemical/toxicity
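The relation-prediction formulation above lends itself to a short illustration: species and chemicals form the two node sets of a bipartite graph, known effect records form the edges, one round of message passing produces node embeddings, and an edge scorer predicts whether a species-chemical relation holds. The Python sketch below is only that illustration; the toy graph sizes, the single mean-aggregation layer, and the bilinear scorer are assumptions made for brevity and are not the GRAPE architecture from the linked repository.

```python
import torch
import torch.nn as nn

# Toy sizes; the paper models 444 species and 2826 chemicals.
N_SPECIES, N_CHEMICALS, D = 10, 20, 16

class ToyRelationPredictor(nn.Module):
    """One round of mean-aggregation message passing over a bipartite
    species-chemical graph, followed by a bilinear edge (relation) scorer."""
    def __init__(self, d):
        super().__init__()
        self.update_species = nn.Linear(2 * d, d)
        self.update_chemical = nn.Linear(2 * d, d)
        self.scorer = nn.Bilinear(d, d, 1)

    def forward(self, x_s, x_c, adj, pairs):
        # adj: (N_SPECIES, N_CHEMICALS) 0/1 matrix of known effect relations.
        deg_s = adj.sum(1, keepdim=True).clamp(min=1)
        deg_c = adj.sum(0, keepdim=True).clamp(min=1).T
        msg_s = adj @ x_c / deg_s        # each species aggregates its chemicals
        msg_c = adj.T @ x_s / deg_c      # each chemical aggregates its species
        h_s = torch.relu(self.update_species(torch.cat([x_s, msg_s], dim=1)))
        h_c = torch.relu(self.update_chemical(torch.cat([x_c, msg_c], dim=1)))
        s_idx, c_idx = pairs             # species-chemical pairs to score
        return self.scorer(h_s[s_idx], h_c[c_idx]).squeeze(-1)

# Random node features and a sparse random training graph stand in for real data.
x_s, x_c = torch.randn(N_SPECIES, D), torch.randn(N_CHEMICALS, D)
adj = (torch.rand(N_SPECIES, N_CHEMICALS) < 0.1).float()

model = ToyRelationPredictor(D)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
pairs = (torch.randint(N_SPECIES, (64,)), torch.randint(N_CHEMICALS, (64,)))
labels = adj[pairs[0], pairs[1]]         # 1 = an effect relation is present

for step in range(200):
    opt.zero_grad()
    loss = nn.functional.binary_cross_entropy_with_logits(
        model(x_s, x_c, adj, pairs), labels)
    loss.backward()
    opt.step()
```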
2.
IEEE Trans Pattern Anal Mach Intell ; 45(11): 13509-13522, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37486846

ABSTRACT

Traditional approaches for learning on categorical data underexploit the dependencies between columns (a.k.a. fields) in a dataset because they rely on embeddings of data points driven by the classification/regression loss alone. In contrast, we propose a novel method for learning on categorical data that explicitly exploits dependencies between fields. Instead of modelling feature statistics globally (i.e., via the covariance matrix of features), we learn a global field dependency matrix that captures dependencies between fields, and we then refine it at the instance level with per-field weights (so-called local dependency modelling) to improve the modelling of field dependencies. Our algorithm exploits the meta-learning paradigm: the dependency matrices are refined in the inner loop without the use of labels, whereas the outer loop intertwines updates of the embedding matrix (the projection matrix) and the global dependency matrix in a supervised fashion (with the use of labels). Our method is simple, yet it outperforms several state-of-the-art methods on six popular benchmark datasets. Detailed ablation studies provide additional insights into our method.
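As a rough illustration of the bi-level scheme described above (not the authors' algorithm), the sketch below keeps a global field-dependency matrix, refines it in a label-free inner loop against a stand-in reconstruction objective, and updates the embeddings, the classifier, and the global matrix in a supervised outer loop. The shapes, the single inner gradient step, and the reconstruction objective are assumptions made for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative shapes only: F_ fields, each with V categories, D-dim embeddings.
F_, V, D, N_CLASSES, BATCH = 6, 10, 8, 2, 32

embed = nn.Embedding(F_ * V, D)                 # one table shared across fields
global_dep = nn.Parameter(torch.eye(F_))        # global field-dependency matrix
classifier = nn.Linear(F_ * D, N_CLASSES)
opt = torch.optim.Adam([*embed.parameters(), global_dep,
                        *classifier.parameters()], lr=1e-2)

def field_embeddings(x):
    # x: (B, F_) integer categories; offset each field into its own vocab slice.
    offsets = torch.arange(F_) * V
    return embed(x + offsets)                   # (B, F_, D)

def inner_refine(e, dep, steps=1, lr=0.1):
    """Label-free inner loop: refine the dependency matrix so that mixing the
    field embeddings through it reconstructs them well (a stand-in objective)."""
    for _ in range(steps):
        mixed = torch.einsum('fg,bgd->bfd', dep, e)
        recon = F.mse_loss(mixed, e)
        (grad,) = torch.autograd.grad(recon, dep, create_graph=True)
        dep = dep - lr * grad
    return dep

x = torch.randint(V, (BATCH, F_))
y = torch.randint(N_CLASSES, (BATCH,))

for step in range(100):
    opt.zero_grad()
    e = field_embeddings(x)
    dep = inner_refine(e, global_dep)           # refined (local) dependency matrix
    mixed = torch.einsum('fg,bgd->bfd', dep, e) # re-weight each field by the others
    logits = classifier(mixed.reshape(BATCH, -1))
    loss = F.cross_entropy(logits, y)           # supervised outer-loop update
    loss.backward()
    opt.step()
```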

3.
IEEE Trans Pattern Anal Mach Intell ; 45(3): 2682-2697, 2023 Mar.
Article in English | MEDLINE | ID: mdl-35816536

ABSTRACT

We address the problem of ground-to-satellite image geo-localization, that is, estimating the camera latitude, longitude and orientation (azimuth angle) by matching a query image captured at ground level against a large-scale database of geotagged satellite images. Prior approaches treat this task as pure image retrieval, selecting the satellite reference image most similar to the ground-level query. However, such an approach often produces coarse location estimates because the geotag of the retrieved satellite image corresponds only to the image center, whereas the ground camera can be located at any point within the image. Building on our prior findings, we present a novel geometry-aware geo-localization method. Our new method achieves fine-grained localization of a query image, up to the pixel-level precision of the satellite image, once its coarse location and orientation have been determined. Moreover, we propose a new geometry-aware image retrieval pipeline to improve the coarse localization accuracy. In addition to the polar transform from our conference work, this new pipeline maps satellite image pixels to the ground-level plane in the ground view via a geometry-constrained projective transform, emphasizing informative regions such as road structures for cross-view geo-localization. Extensive quantitative and qualitative experiments demonstrate the effectiveness of the newly proposed framework. We also significantly improve coarse localization performance over the state of the art in terms of location recall.
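The polar transform mentioned above resamples the aerial image along rays from its centre so that azimuth becomes the horizontal axis, roughly aligning the satellite view with a ground-level panorama. The NumPy sketch below shows only that resampling idea; the output resolution, the nearest-neighbour sampling, and the centre-to-border row ordering are illustrative choices, and the paper's geometry-constrained projective transform is not reproduced here.

```python
import numpy as np

def polar_transform(sat_img, out_h=128, out_w=512):
    """Map an aerial/satellite image into a panorama-like strip by sampling it
    along rays from the image centre: columns <-> azimuth, rows <-> radius.
    Nearest-neighbour sampling keeps the sketch short; real pipelines interpolate."""
    H, W, _ = sat_img.shape
    cy, cx = (H - 1) / 2.0, (W - 1) / 2.0
    max_r = min(cy, cx)
    out = np.zeros((out_h, out_w, sat_img.shape[2]), dtype=sat_img.dtype)
    for i in range(out_h):                      # radius: image centre -> border
        r = max_r * (i + 1) / out_h
        for j in range(out_w):                  # azimuth: 0 -> 2*pi
            theta = 2.0 * np.pi * j / out_w
            y = int(round(cy - r * np.cos(theta)))
            x = int(round(cx + r * np.sin(theta)))
            out[i, j] = sat_img[y, x]
    return out

# Example: transform a dummy 256x256 RGB aerial patch.
panorama = polar_transform(np.random.rand(256, 256, 3))
print(panorama.shape)   # (128, 512, 3)
```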

4.
IEEE Trans Pattern Anal Mach Intell ; 44(2): 648-665, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34428136

ABSTRACT

Human actions in video sequences are characterized by the complex interplay between spatial features and their temporal dynamics. In this paper, we propose novel tensor representations that compactly capture such higher-order relationships between visual features for action recognition. We propose two tensor-based feature representations, viz. (i) the sequence compatibility kernel (SCK) and (ii) the dynamics compatibility kernel (DCK). SCK builds on the spatio-temporal correlations between features, whereas DCK explicitly models the action dynamics of a sequence. We also explore a generalization of SCK, coined SCK⊕, that operates on subsequences to capture the local-global interplay of correlations and can incorporate multi-modal inputs, e.g., skeleton 3D body joints and per-frame classifier scores obtained from deep learning models trained on videos. We introduce linearizations of these kernels that lead to compact and fast descriptors. We provide experiments on (i) 3D skeleton action sequences, (ii) fine-grained video sequences, and (iii) standard non-fine-grained videos. As our final representations are tensors that capture higher-order relationships between features, they relate to co-occurrences for robust fine-grained recognition (Lin, 2017), (Koniusz, 2018). We use higher-order tensors and so-called Eigenvalue Power Normalization (EPN), which has long been speculated to perform spectral detection of higher-order occurrences (Koniusz, 2013), (Koniusz, 2017), thus detecting fine-grained relationships between features rather than merely counting features in action sequences. We prove that a tensor of order r, built from Z* dimensional features and coupled with EPN, indeed detects whether at least one higher-order occurrence is 'projected' into one of its [Formula: see text] subspaces of dimension r represented by the tensor, thus forming a Tensor Power Normalization metric endowed with [Formula: see text] such 'detectors'. (A simplified second-order sketch of EPN follows this record.)


Subject(s)
Algorithms ; Human Activities ; Humans
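Eigenvalue Power Normalization, used above on order-r tensors, is easiest to see in the second-order case: build an autocorrelation matrix from per-frame features and raise its eigenvalues to a power below one, which suppresses bursty co-occurrences. The sketch below assumes this simplified r = 2 setting with arbitrary feature dimensions; it is not the SCK/DCK construction itself.

```python
import numpy as np

def epn_autocorrelation(feats, gamma=0.5, eps=1e-6):
    """Second-order illustration of Eigenvalue Power Normalization (EPN):
    build an autocorrelation matrix from per-frame features, then raise its
    eigenvalues to a power gamma < 1 to flatten the spectrum, which damps
    frequently co-occurring (bursty) feature pairs. The paper applies the
    same idea to order-r tensors via a higher-order SVD; this sketch keeps r = 2."""
    X = np.asarray(feats)                 # (num_frames, dim)
    M = X.T @ X / X.shape[0]              # autocorrelation matrix (dim, dim), PSD
    w, V = np.linalg.eigh(M)              # eigendecomposition
    w = np.clip(w, 0.0, None)             # numerical safety for tiny negatives
    return (V * (w + eps) ** gamma) @ V.T # spectrum-rebalanced descriptor

# Example: 30 frames of 64-dim features from some backbone.
desc = epn_autocorrelation(np.random.randn(30, 64))
print(desc.shape)  # (64, 64); e.g. vectorised and fed to a classifier
```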
5.
IEEE Trans Pattern Anal Mach Intell ; 44(2): 591-609, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34428137

ABSTRACT

Power Normalizations (PN) are useful non-linear operators that tackle feature imbalances in classification problems. We study PNs in the deep learning setting via a novel PN layer that pools feature maps. Our layer combines the feature vectors and their respective spatial locations in the feature maps produced by the last convolutional layer of a CNN into a positive definite matrix of second-order statistics to which PN operators are applied, forming so-called Second-order Pooling (SOP). As the main goal of this paper is to study Power Normalizations, we investigate the role and meaning of MaxExp and Gamma, two popular PN functions. To this end, we provide probabilistic interpretations of these element-wise operators and derive surrogates with well-behaved derivatives for end-to-end training. Furthermore, we examine the spectral applicability of MaxExp and Gamma by studying Spectral Power Normalizations (SPN). We show that SPN on the autocorrelation/covariance matrix and the Heat Diffusion Process (HDP) on a graph Laplacian matrix are closely related and thus share their properties. This finding leads to the culmination of our work, a fast spectral MaxExp, which is a variant of HDP for covariance/autocorrelation matrices. We evaluate our ideas on fine-grained recognition, scene recognition, and material classification, as well as on few-shot learning and graph classification.
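A minimal sketch of Second-order Pooling with element-wise Gamma and MaxExp normalisations is given below. The coordinate encoding, the rescaling of values before MaxExp, and the sign handling are illustrative assumptions rather than the layer proposed in the paper, which also derives differentiable surrogates and spectral variants.

```python
import torch

def second_order_pool(feature_map):
    """Pool a (C, H, W) feature map into a second-order (autocorrelation) matrix,
    appending normalised spatial coordinates to each feature vector, in the spirit
    of the abstract; the exact coordinate normalisation is an illustrative choice."""
    C, H, W = feature_map.shape
    ys = torch.linspace(0, 1, H).view(H, 1).expand(H, W)
    xs = torch.linspace(0, 1, W).view(1, W).expand(H, W)
    coords = torch.stack([ys, xs]).reshape(2, -1)            # (2, H*W)
    phi = torch.cat([feature_map.reshape(C, -1), coords])    # (C+2, H*W)
    return phi @ phi.T / (H * W)                             # (C+2, C+2), PSD

def gamma_pn(M, gamma=0.5, eps=1e-6):
    """Element-wise Gamma power normalisation."""
    return torch.sign(M) * (M.abs() + eps) ** gamma

def maxexp_pn(M, eta=20.0):
    """Element-wise MaxExp, 1 - (1 - p)^eta, applied to values rescaled to [0, 1];
    it behaves like the probability of 'at least one co-occurrence' in eta draws."""
    P = M.abs() / M.abs().max().clamp(min=1e-12)
    return torch.sign(M) * (1.0 - (1.0 - P) ** eta)

M = second_order_pool(torch.relu(torch.randn(64, 7, 7)))     # e.g. last-conv features
print(gamma_pn(M).shape, maxexp_pn(M).shape)                 # both (66, 66)
```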

6.
IEEE Trans Image Process ; 29: 15-28, 2020.
Article in English | MEDLINE | ID: mdl-31283506

ABSTRACT

Video-based human action recognition is currently one of the most active research areas in computer vision. Various studies indicate that action recognition performance depends heavily on the type of features being extracted and on how the actions are represented. Since the release of the Kinect camera, a large number of Kinect-based human action recognition techniques have been proposed in the literature. However, a thorough comparison of these Kinect-based techniques grouped by feature type, such as handcrafted versus deep learning features and depth-based versus skeleton-based features, is still missing. In this paper, we analyze and compare 10 recent Kinect-based algorithms for both cross-subject and cross-view action recognition using six benchmark datasets. In addition, we have implemented and improved some of these techniques and included their variants in the comparison. Our experiments show that the majority of methods perform better on cross-subject action recognition than on cross-view action recognition, that skeleton-based features are more robust for cross-view recognition than depth-based features, and that deep learning features are suitable for large datasets.

7.
IEEE Trans Pattern Anal Mach Intell ; 39(2): 313-326, 2017 Feb.
Article in English | MEDLINE | ID: mdl-27019477

ABSTRACT

In object recognition, the Bag-of-Words model comprises three steps: i) extracting local descriptors from images, ii) embedding the descriptors with a coder into a given visual vocabulary space, which yields mid-level features, and iii) extracting statistics from the mid-level features with a pooling operator that aggregates occurrences of visual words in images into signatures, which we refer to as First-order Occurrence Pooling. This paper investigates higher-order pooling that aggregates over co-occurrences of visual words. We derive Bag-of-Words with Higher-order Occurrence Pooling based on a linearisation of the Minor Polynomial Kernel, and extend this model to work with various pooling operators. This approach is then effectively used to fuse various descriptor types. Moreover, we introduce Higher-order Occurrence Pooling performed directly on local image descriptors, as well as a novel pooling operator that reduces the correlation in the image signatures. Finally, First-, Second-, and Third-order Occurrence Pooling are evaluated with various coders and pooling operators on several widely used benchmarks. The proposed methods are compared to other approaches such as Fisher Vector Encoding and demonstrate improved results.
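The difference between First-order and Second-order Occurrence Pooling described above can be sketched in a few lines: the former averages mid-level codes (occurrences of visual words), while the latter aggregates outer products of the codes (co-occurrences). The Dirichlet-distributed codes and the upper-triangle linearisation below are illustrative stand-ins, not the paper's coders or its Minor Polynomial Kernel derivation.

```python
import numpy as np

def first_order_pooling(codes):
    """Average (first-order) occurrence pooling of mid-level codes: how often
    each visual word occurs in the image."""
    return codes.mean(axis=0)                       # (K,)

def second_order_pooling(codes):
    """Second-order occurrence pooling: aggregate co-occurrences of visual words
    via outer products of the codes, then keep the upper triangle as a compact
    signature (one common linearisation choice)."""
    K = codes.shape[1]
    M = codes.T @ codes / codes.shape[0]            # (K, K) co-occurrence matrix
    iu = np.triu_indices(K)
    return M[iu]                                    # (K*(K+1)/2,) signature

# Example: 500 local descriptors soft-assigned to a K=64 word vocabulary.
codes = np.random.dirichlet(np.ones(64), size=500)  # rows sum to 1, like soft assignments
print(first_order_pooling(codes).shape, second_order_pooling(codes).shape)
```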
