1.
Front Chem ; 11: 1283895, 2023.
Article in English | MEDLINE | ID: mdl-38075498

ABSTRACT

A robust method was developed for LC-ESI-MS/MS-based identification and quantification of 103 fortified pesticides in a mango fruit drink. Variations in QuEChERS extraction (unbuffered, citrate-buffered, and/or acetate-buffered) coupled with dispersive clean-up combinations were evaluated. A 5 mL dilution and citrate-buffered QuEChERS extraction with anhydrous MgSO4 clean-up gave acceptable recoveries for 100 pesticides at 1 µg mL⁻¹ fortification. The method was validated as per the SANTE guidelines (SANTE/11813/2021). At fortification levels of 0.1, 0.05, and 0.01 µg mL⁻¹, 95, 91, and 77 pesticides, respectively, were satisfactorily recovered, with HorRat values ranging from 0.2 to 0.8 for the majority. The method showed matrix enhancement for 77 pesticides, with a global uncertainty of 4.72% to 23.89%. The reliability of the method was confirmed by analysis of real samples from different brands of mango drink available on the market. A greenness assessment by GAPI (Green Analytical Procedure Index) indicated that the method was substantially greener than other contemporary methods.
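The HorRat values cited above compare observed precision with the precision predicted by the Horwitz equation. A minimal sketch of that calculation, assuming a drink density of about 1 g mL⁻¹ when converting µg mL⁻¹ to a mass fraction; the 8% observed RSD below is a hypothetical illustration, not a value from the study:

```python
import math

def horwitz_prsd(c):
    """Horwitz-predicted relative standard deviation (%) at mass fraction c."""
    return 2 ** (1 - 0.5 * math.log10(c))

def horrat(observed_rsd, c):
    """HorRat = observed RSD (%) / Horwitz-predicted RSD (%)."""
    return observed_rsd / horwitz_prsd(c)

# 0.1 µg mL^-1 ≈ 0.1 mg kg^-1 = 1e-7 as a mass fraction
predicted = horwitz_prsd(1e-7)
ratio = horrat(8.0, 1e-7)   # hypothetical observed RSD of 8%
```

A HorRat near 1 means the observed precision matches the Horwitz prediction; values in the 0.2 to 0.8 range reported above indicate better-than-predicted repeatability.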

2.
IEEE Trans Image Process ; 24(10): 2941-54, 2015 Oct.
Article in English | MEDLINE | ID: mdl-25966476

ABSTRACT

Data-driven dictionaries have produced state-of-the-art results in various classification tasks. However, when the target data has a different distribution from the source data, the learned sparse representation may not be optimal. In this paper, we investigate whether it is possible to optimally represent both source and target data with a common dictionary. In particular, we describe a technique which jointly learns projections of the data in the two domains and a latent dictionary which can succinctly represent both domains in the projected low-dimensional space. The algorithm is then modified to learn a common discriminative dictionary, which can further improve classification performance. The algorithm is also effective for adaptation across multiple domains and is extensible to nonlinear feature spaces. The proposed approach requires no explicit correspondences between the source and target domains and yields good results even when only a few labels are available in the target domain. We also extend it to unsupervised adaptation in cases where the same features are extracted across all domains, and to heterogeneous domain adaptation, where different features are extracted for different domains. Various recognition experiments show that the proposed method performs on par with or better than competitive state-of-the-art methods.
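The abstract does not spell out the joint projection-and-dictionary learning algorithm, but the sparse coding step it builds on can be illustrated. The sketch below solves the standard lasso-style problem min_a ½‖x − Da‖² + λ‖a‖₁ with ISTA (proximal gradient descent); the dictionary, signal, and parameters are synthetic illustrations, not the paper's method:

```python
import numpy as np

def sparse_code_ista(D, x, lam=0.05, n_iter=200):
    """Solve min_a 0.5*||x - D@a||^2 + lam*||a||_1 by ISTA."""
    L = np.linalg.norm(D, 2) ** 2        # Lipschitz constant of the smooth part
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = a - D.T @ (D @ a - x) / L    # gradient step on the quadratic term
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return a

# synthetic example: a 40-atom dictionary and a 2-sparse signal
rng = np.random.default_rng(0)
D = rng.standard_normal((20, 40))
D /= np.linalg.norm(D, axis=0)           # unit-norm atoms
a_true = np.zeros(40)
a_true[[3, 17]] = [1.0, -0.5]
x = D @ a_true
a_hat = sparse_code_ista(D, x)           # recovers a sparse code for x
```

The ℓ₁ penalty drives most coefficients exactly to zero, so `x` is reconstructed from only a few dictionary atoms, which is the representation property the paper adapts across domains.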

3.
IEEE Trans Pattern Anal Mach Intell ; 36(1): 113-26, 2014 Jan.
Article in English | MEDLINE | ID: mdl-24231870

ABSTRACT

Traditional biometric recognition systems rely on a single biometric signature for authentication. While the advantage of using multiple sources of information for establishing identity has been widely recognized, computational models for multimodal biometric recognition have only recently received attention. We propose a multimodal sparse representation method which represents the test data as a sparse linear combination of training data, while constraining the observations from different modalities of the test subject to share their sparse representations. We thus simultaneously account for correlations as well as coupling information among the biometric modalities. A multimodal quality measure is also proposed to weight each modality as it is fused. Furthermore, we kernelize the algorithm to handle nonlinearity in the data. The optimization problem is solved using an efficient alternating direction method. Various experiments show that the proposed method compares favorably with competing fusion-based methods.
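The shared-representation constraint described above can be illustrated with a generic ℓ2,1-regularized (row-sparse) coding sketch: each modality has its own dictionary and observation, but the coefficient matrix is forced to share one support across modalities. All names, data, and parameters here are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def joint_sparse_code(Ds, xs, lam=0.05, n_iter=400):
    """Row-sparse coding across modalities via proximal gradient on an
    l2,1 penalty: column m of A codes modality m; rows share one support."""
    n_atoms, M = Ds[0].shape[1], len(Ds)
    A = np.zeros((n_atoms, M))
    L = max(np.linalg.norm(D, 2) ** 2 for D in Ds)   # gradient Lipschitz bound
    for _ in range(n_iter):
        G = np.column_stack([Ds[m].T @ (Ds[m] @ A[:, m] - xs[m])
                             for m in range(M)])     # per-modality gradients
        Z = A - G / L
        norms = np.linalg.norm(Z, axis=1, keepdims=True)
        A = np.maximum(1 - (lam / L) / np.maximum(norms, 1e-12), 0.0) * Z
    return A

# synthetic two-modality example with a shared two-atom support
rng = np.random.default_rng(1)
Ds = [rng.standard_normal((15, 30)) for _ in range(2)]
Ds = [D / np.linalg.norm(D, axis=0) for D in Ds]
A_true = np.zeros((30, 2))
A_true[[2, 9], :] = rng.standard_normal((2, 2))
xs = [Ds[m] @ A_true[:, m] for m in range(2)]
A_hat = joint_sparse_code(Ds, xs)
```

The group soft-threshold zeroes whole rows of the coefficient matrix at once, so both modalities are explained by the same small set of training samples, which mirrors the coupling constraint the abstract describes.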


Subject(s)
Biometric Identification/methods; Algorithms; Databases, Factual; Dermatoglyphics/classification; Face/anatomy & histology; Humans; Iris/anatomy & histology