Results 1 - 20 of 22
1.
Phys Med Biol ; 68(12), 2023 Jun 08.
Article in English | MEDLINE | ID: mdl-37201537

ABSTRACT

Objective. Many methods for compression and/or de-speckling of 3D optical coherence tomography (OCT) images operate on a slice-by-slice basis and, consequently, ignore spatial relations between the B-scans. We therefore develop compression ratio (CR)-constrained low tensor train (TT)- and low multilinear (ML)-rank approximations of 3D tensors for compression and de-speckling of 3D OCT images. Owing to the inherent denoising mechanism of low-rank approximation, the compressed image is often of even better quality than the raw image it is based on. Approach. We formulate the CR-constrained low-rank approximations of a 3D tensor as parallel non-convex, non-smooth optimization problems solved by the alternating direction method of multipliers applied to unfolded tensors. In contrast to patch- and sparsity-based OCT image compression methods, the proposed approach does not require clean images for dictionary learning, enables CRs as high as 60:1, and is fast. In contrast to deep-network-based OCT image compression, the proposed approach is training-free and does not require any supervised data pre-processing. Main results. The proposed methodology is evaluated on twenty-four retinal images acquired on a Topcon 3D OCT-1000 scanner and twenty retinal images acquired on a Big Vision BV1000 3D OCT scanner. For the first dataset, statistical significance analysis shows that for CR ≤ 35, all low ML-rank approximations and the Schatten-0 (S0) norm-constrained low TT-rank approximation can be useful for machine learning-based diagnostics using segmented retina layers. Also for CR ≤ 35, the S0-constrained ML-rank approximation and the S0-constrained low TT-rank approximation can be useful for visual inspection-based diagnostics. For the second dataset, statistical significance analysis shows that for CR ≤ 60, all low ML-rank approximations as well as the S0- and S1/2-constrained low TT-rank approximations can be useful for machine learning-based diagnostics using segmented retina layers. Also, for CR ≤ 60, low ML-rank approximations constrained with Sp, p ∈ {0, 1/2, 2/3}, and one surrogate of S0 can be useful for visual inspection-based diagnostics. The same holds for low TT-rank approximations constrained with Sp, p ∈ {0, 1/2, 2/3}, for CR ≤ 20. Significance. Studies conducted on datasets acquired by two different types of scanners confirm the capabilities of the proposed framework which, for a wide range of CRs, yields de-speckled 3D OCT images suitable for clinical data archiving and remote consultation, for visual inspection-based diagnosis, and for machine learning-based diagnosis using segmented retina layers.
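A minimal sketch of the low multilinear (Tucker) rank idea behind this record, written in plain numpy: compress a 3D OCT volume by a truncated higher-order SVD and report the resulting compression ratio. This is not the paper's CR-constrained ADMM algorithm; the volume, ranks and helper functions are illustrative assumptions.

import numpy as np

def unfold(T, mode):
    # Matricize the tensor along the given mode.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def truncated_hosvd(T, ranks):
    # Factor matrices from truncated SVDs of each unfolding, then the projected core.
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    core = T
    for m, U in enumerate(factors):
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, m, 0), axes=1), 0, m)
    return core, factors

def reconstruct(core, factors):
    T = core
    for m, U in enumerate(factors):
        T = np.moveaxis(np.tensordot(U, np.moveaxis(T, m, 0), axes=1), 0, m)
    return T

oct_volume = np.random.rand(32, 128, 128)        # stand-in for a B-scan stack (slices x rows x cols)
ranks = (8, 32, 32)                              # illustrative multilinear ranks
core, factors = truncated_hosvd(oct_volume, ranks)
despeckled = reconstruct(core, factors)          # low-rank approximation doubles as a denoised volume
stored = core.size + sum(U.size for U in factors)
print("compression ratio approx. %.1f:1" % (oct_volume.size / stored))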


Subjects
Data Compression , Algorithms , Optical Coherence Tomography/methods , Retina/diagnostic imaging
2.
Anal Chem ; 93(2): 745-751, 2021 Jan 19.
Article in English | MEDLINE | ID: mdl-33284005

ABSTRACT

Because of its quantitative character and capability for high-throughput screening, 1H nuclear magnetic resonance (NMR) spectroscopy is used extensively in the profiling of biofluids such as urine and blood plasma. However, the narrow frequency bandwidth of 1H NMR spectroscopy leads to severe overlap of the spectra of components present in complex mixtures such as biofluids. Therefore, 1H NMR-based metabolomics analysis is focused on targeted studies related to concentrations of a small number of metabolites. Here, we propose a library-based approach to quantify proportions of overlapping metabolites from 1H NMR mixture spectra. The method boils down to a linear non-negative least squares (NNLS) problem, in which the proportions of the pure components contained in the library are the unknowns. The method is validated on estimation of the proportions of (i) 78 pure spectra, presumably related to type 2 diabetes mellitus (T2DM), from their synthetic linear mixture; and (ii) metabolites present in 62 1H NMR spectra of urine of subjects with T2DM and 62 1H NMR spectra of urine of control subjects. In both cases, the in-house library of 210 pure-component 1H NMR spectra represented the design matrix in the related NNLS problem. The proposed method pinpoints 63 metabolites that discriminate, in a statistically significant way, the T2DM group from the control group, and 46 metabolites discriminating the control group from the T2DM group. For several T2DM-discriminative metabolites, we prove their presence by independent analytical determination or by pointing out the corresponding findings in the published literature.
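A minimal sketch of library-based quantification by non-negative least squares, assuming a design matrix whose columns are pure-component 1H NMR spectra and a mixture spectrum to be explained; the data below are synthetic stand-ins.

import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_points, n_components = 2048, 210                  # spectral points, library size (illustrative)
library = rng.random((n_points, n_components))      # stand-in for an in-house pure-component library
true_props = np.zeros(n_components)
true_props[[3, 17, 42]] = [0.5, 0.3, 0.2]           # a sparse synthetic mixture composition
mixture = library @ true_props + 0.01 * rng.standard_normal(n_points)

# Unknowns are the proportions of the pure components, constrained to be non-negative.
proportions, residual = nnls(library, mixture)
top = np.argsort(proportions)[::-1][:5]
print("top candidate components:", top, proportions[top].round(3))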


Subjects
Diabetes Mellitus, Type 2/urine , Magnetic Resonance Spectroscopy/methods , Metabolomics/methods , Urinalysis/methods , Case-Control Studies , Humans , Small Molecule Libraries
3.
IEEE Trans Cybern ; 50(4): 1711-1725, 2020 Apr.
Article in English | MEDLINE | ID: mdl-30561362

ABSTRACT

In many applications, high-dimensional data points can be well represented by low-dimensional subspaces. To identify the subspaces, it is important to capture the global and local structure of the data, which is achieved by imposing low-rank and sparseness constraints on the data representation matrix. In low-rank sparse subspace clustering (LRSSC), the nuclear and l1-norms are used to measure rank and sparsity. However, the use of nuclear and l1-norms leads to an overpenalized problem and only approximates the original problem. In this paper, we propose two l0 quasi-norm-based regularizations. First, we present a regularization based on the multivariate generalization of the minimax-concave penalty (GMC-LRSSC), which contains the global minimizers of an l0 quasi-norm-regularized objective. Afterward, we introduce the Schatten-0 (S0)- and l0-regularized objective and approximate the proximal map of the joint solution using a proximal average method (S0/l0-LRSSC). The resulting nonconvex optimization problems are solved using the alternating direction method of multipliers, with convergence conditions established for both algorithms. Results obtained on synthetic and four real-world datasets show the effectiveness of GMC-LRSSC and S0/l0-LRSSC when compared to state-of-the-art methods.
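A sketch of the two hard-thresholding proximal maps that underlie l0 and Schatten-0 (S0) regularization; the paper combines them inside an ADMM solver via a proximal average, which is omitted here, and the matrix and threshold below are illustrative.

import numpy as np

def prox_l0(C, lam):
    # Proximal map of lam*||C||_0: keep entries whose squared magnitude exceeds 2*lam.
    out = C.copy()
    out[C**2 <= 2.0 * lam] = 0.0
    return out

def prox_schatten0(C, lam):
    # Proximal map of lam*rank surrogate: hard-threshold the singular values.
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    s_thr = np.where(s**2 > 2.0 * lam, s, 0.0)
    return (U * s_thr) @ Vt

C = np.random.randn(8, 8)
print(np.count_nonzero(prox_l0(C, 0.5)),
      np.linalg.matrix_rank(prox_schatten0(C, 0.5)))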

4.
Anal Chim Acta ; 1080: 55-65, 2019 Nov 08.
Article in English | MEDLINE | ID: mdl-31409475

ABSTRACT

Due to its capability for high-throughput screening, 1H nuclear magnetic resonance (NMR) spectroscopy is commonly used for metabolite research. The key problem in 1H NMR spectroscopy of multicomponent mixtures is the overlap of component signals, which increases with the number of components and with their complexity and structural similarity. This makes metabolic profiling, which is carried out by matching acquired spectra against metabolites from a library, a hard problem. Here, we propose a method for nonlinear blind separation of highly correlated component spectra from a single 1H NMR mixture spectrum. The method transforms a single nonlinear mixture into multiple high-dimensional reproducing kernel Hilbert spaces (mRKHSs). Therein, highly correlated components are separated by sparseness-constrained nonnegative matrix factorization in each induced RKHS. Afterwards, metabolites are identified through comparison of the separated components with a library comprised of 160 pure components, a significant number of which are expected to be related to type 2 diabetes. A conceptually similar methodology for nonlinear blind separation of correlated components from two or more mixtures is presented in the Supplementary material. Single-mixture blind source separation is exemplified on: (i) annotation of five component spectra separated from one 1H NMR model mixture spectrum; (ii) annotation of fifty-five metabolites separated from one 1H NMR mixture spectrum of urine of subjects with and without type 2 diabetes. Arguably, this is the first time a method for blind separation of a large number of components from a single nonlinear mixture has been proposed. Moreover, the proposed method pinpoints urinary creatine, glutamic acid and 5-hydroxyindoleacetic acid as the most prominent metabolites in samples from subjects with type 2 diabetes, when compared to healthy controls.
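A heavily simplified sketch of the single-mixture idea: expand one mixture spectrum into several pseudo-channels through a Gaussian (kernel-like) map and separate components with sparseness-constrained NMF. The amplitude-domain centers, bandwidth, rank, l1 weight and multiplicative updates are assumptions for illustration, not the paper's exact mRKHS construction.

import numpy as np

def sparse_nmf(V, rank, l1=0.1, n_iter=300, eps=1e-9):
    # Plain multiplicative updates; the l1 term in the H update promotes sparse components.
    rng = np.random.default_rng(0)
    W = rng.random((V.shape[0], rank)); H = rng.random((rank, V.shape[1]))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + l1 + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

x = np.abs(np.random.randn(1024))                   # stand-in for a single 1H NMR mixture spectrum
centers = np.linspace(x.min(), x.max(), 16)         # assumed amplitude-domain Gaussian centers
sigma = 0.5 * (centers[1] - centers[0])
Phi = np.exp(-((x[None, :] - centers[:, None]) ** 2) / (2 * sigma**2))  # 16 pseudo-mixtures from one mixture
W, H = sparse_nmf(Phi, rank=5)                      # rows of H ~ candidate component spectra
print(H.shape)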


Assuntos
Metaboloma , Metabolômica/métodos , Urina/química , Adulto , Idoso , Idoso de 80 Anos ou mais , Algoritmos , Diabetes Mellitus/urina , Feminino , Humanos , Masculino , Pessoa de Meia-Idade , Projetos Piloto , Espectroscopia de Prótons por Ressonância Magnética/métodos , Bibliotecas de Moléculas Pequenas
5.
IEEE J Biomed Health Inform ; 21(6): 1656-1666, 2017 Nov.
Article in English | MEDLINE | ID: mdl-27834658

ABSTRACT

In this paper, we propose a novel method for single-channel blind separation of nonoverlapped sources and, to the best of our knowledge, apply it for the first time to automatic segmentation of lung tumors in positron emission tomography (PET) images. Our approach first converts a 3-D PET image into a pseudo-multichannel image. Afterward, regularization-free sparseness-constrained non-negative matrix factorization is used to separate the tumor from other tissues. Using a complexity-based criterion, we select the tumor component as the one with minimal complexity. We have compared the proposed method with thresholds based on 40% and 50% of the maximum standardized uptake value (SUV), graph cuts (GC), random walks (RW), and affinity propagation (AP) algorithms on 18 non-small cell lung cancer datasets with respect to ground truth (GT) provided by two radiologists. The Dice similarity coefficient averaged with respect to the two GTs is 0.78 ± 0.12 for the proposed algorithm, 0.78 ± 0.1 for GC, 0.77 ± 0.13 for AP, 0.77 ± 0.07 for RW, and 0.75 ± 0.13 for the 50% maximum SUV threshold. Since the proposed method achieved performance comparable with the interactive methods, considering the unique challenges of lung tumor segmentation from PET images, our findings support the possibility of using our fully automated method in routine clinical practice. The source code will be available at www.mipav.net/English/research/research.html.
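A minimal sketch, not the authors' exact pipeline: treat axial PET slices as channels of a pseudo-multichannel matrix, factorize it with NMF, and pick the tumor component as the one with minimal complexity, here approximated by normalized entropy of its spatial map. Data, component count and the entropy proxy are assumptions.

import numpy as np
from sklearn.decomposition import NMF

pet = np.random.rand(16, 64, 64)                 # stand-in 3D PET volume (slices x rows x cols)
V = pet.reshape(16, -1)                          # pseudo-multichannel matrix: channels x voxels
model = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(V)                       # per-channel mixing weights
H = model.components_                            # spatial distributions of candidate tissues

def normalized_entropy(h):
    p = h / (h.sum() + 1e-12)
    p = p[p > 0]
    return -(p * np.log(p)).sum() / np.log(len(h))

tumor_idx = int(np.argmin([normalized_entropy(h) for h in H]))   # least complex component
tumor_mask = H[tumor_idx].reshape(64, 64) > H[tumor_idx].mean()
print("tumor component:", tumor_idx, "mask voxels:", int(tumor_mask.sum()))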


Assuntos
Carcinoma Pulmonar de Células não Pequenas/diagnóstico por imagem , Interpretação de Imagem Assistida por Computador/métodos , Imageamento Tridimensional/métodos , Neoplasias Pulmonares/diagnóstico por imagem , Tomografia por Emissão de Pósitrons/métodos , Algoritmos , Humanos
6.
J Biomed Opt ; 21(7): 76008, 2016 Jul 01.
Article in English | MEDLINE | ID: mdl-27424605

ABSTRACT

Speckle artifacts can strongly hamper quantitative analysis of optical coherence tomography (OCT), which is necessary to provide assessment of ocular disorders associated with vision loss. Here, we introduce a method for speckle reduction that leverages low-rank + sparsity decomposition (LRpSD) of the logarithm of intensity OCT images. In particular, we combine nonconvex regularization-based low-rank approximation of the original OCT image with a sparsity term that incorporates the speckle. State-of-the-art methods for LRpSD require a priori knowledge of the rank and approximate it with the nuclear norm, which is not an accurate rank indicator. In contrast, the proposed method provides a more accurate approximation of the rank through the use of nonconvex regularization that induces a sparse approximation of the singular values. Furthermore, the rank value is not required to be known a priori. This, in turn, yields an automatic and computationally more efficient method for speckle reduction, producing OCT images with improved contrast-to-noise ratio, contrast and edge fidelity. The source code will be available at www.mipav.net/English/research/research.html.
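A hedged sketch of the decomposition idea: a plain nuclear-norm + l1 split of the log-intensity B-scan, obtained by naive alternating singular-value and elementwise soft thresholding. This stands in for the paper's nonconvex, automatically rank-selecting formulation; thresholds and iteration count are illustrative.

import numpy as np

def low_rank_plus_sparse(D, lam=None, mu=None, n_iter=100):
    m, n = D.shape
    lam = 1.0 / np.sqrt(max(m, n)) if lam is None else lam
    mu = 0.25 * np.abs(D).mean() if mu is None else mu
    L = np.zeros_like(D); S = np.zeros_like(D)
    for _ in range(n_iter):
        # Singular-value soft thresholding -> low-rank (despeckled) part.
        U, s, Vt = np.linalg.svd(D - S, full_matrices=False)
        L = (U * np.maximum(s - mu, 0.0)) @ Vt
        # Elementwise soft thresholding -> sparse speckle part.
        R = D - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam * mu, 0.0)
    return L, S

oct_bscan = np.random.rand(256, 256) + 1e-3
log_img = np.log(oct_bscan)                    # speckle is roughly additive in log-intensity
L, S = low_rank_plus_sparse(log_img)
despeckled = np.exp(L)
print("approximate rank of low-rank part:", np.linalg.matrix_rank(L, tol=1e-6))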


Assuntos
Tomografia de Coerência Óptica/métodos , Tomografia de Coerência Óptica/normas , Artefatos , Linguagens de Programação
7.
IEEE Trans Image Process ; 24(12): 5854-67, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26462198

ABSTRACT

Accurate lung tumor delineation plays an important role in radiotherapy treatment planning. Since the lung tumor has poor boundaries in positron emission tomography (PET) images and low contrast in computed tomography (CT) images, segmentation of the tumor in PET and CT images is a challenging task. In this paper, we effectively integrate the two modalities by making full use of the superior contrast of PET images and the superior spatial resolution of CT images. Random walk and graph cut methods are integrated to solve the segmentation problem, in which random walk is utilized as an initialization tool to provide object seeds for graph cut segmentation of the PET and CT images. The co-segmentation problem is formulated as an energy minimization problem solved by the max-flow/min-cut method. A graph, comprising two sub-graphs and a special link, is constructed, in which one sub-graph is for the PET and the other for the CT, and the special link encodes a context term that penalizes differences between the tumor segmentations on the two modalities. To fully utilize the characteristics of PET and CT images, a novel energy representation is devised. For the PET, a downhill cost and a 3D derivative cost are proposed. For the CT, a shape penalty cost is integrated into the energy function, which helps to constrain the tumor region during segmentation. We validate our algorithm on a data set consisting of 18 PET-CT images. The experimental results indicate that the proposed method is superior to the graph cut method using solely the PET or CT, and is more accurate than the random walk method, the random walk co-segmentation method, and the non-improved graph cut method.
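A toy illustration of the graph construction only, not the paper's full energy: each pixel gets one node per modality, terminal links carry made-up unary costs, a context link between the PET and CT nodes penalizes disagreement, and the labeling is read off a min-cut. A tiny 1D "image" and ad hoc weights are assumed.

import networkx as nx

pet = [0.9, 0.8, 0.7, 0.2, 0.1]        # toy tumor likelihoods per pixel (PET sub-graph)
ct  = [0.6, 0.7, 0.8, 0.3, 0.2]        # toy tumor likelihoods per pixel (CT sub-graph)
smooth, context = 0.3, 0.5             # smoothness and context-link weights (assumed)

G = nx.DiGraph()
def add_pixel(name, p):
    # t-links: cost of labeling background vs. tumor.
    G.add_edge("SRC", name, capacity=p)
    G.add_edge(name, "SNK", capacity=1.0 - p)

for i, (p, c) in enumerate(zip(pet, ct)):
    add_pixel(f"pet{i}", p); add_pixel(f"ct{i}", c)
    # Context term: penalize different labels for the same pixel on the two modalities.
    G.add_edge(f"pet{i}", f"ct{i}", capacity=context)
    G.add_edge(f"ct{i}", f"pet{i}", capacity=context)
    if i:
        for m in ("pet", "ct"):        # smoothness within each modality's sub-graph
            G.add_edge(f"{m}{i-1}", f"{m}{i}", capacity=smooth)
            G.add_edge(f"{m}{i}", f"{m}{i-1}", capacity=smooth)

value, (tumor_side, _) = nx.minimum_cut(G, "SRC", "SNK")
print("tumor-labeled nodes:", sorted(n for n in tumor_side if n != "SRC"))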


Assuntos
Processamento de Imagem Assistida por Computador/métodos , Neoplasias Pulmonares/diagnóstico por imagem , Tomografia por Emissão de Pósitrons/métodos , Tomografia Computadorizada por Raios X/métodos , Algoritmos , Carcinoma Pulmonar de Células não Pequenas/diagnóstico por imagem , Bases de Dados Factuais , Humanos
8.
J Biomed Opt ; 20(7): 76012, 2015 Jul.
Article in English | MEDLINE | ID: mdl-26220370

ABSTRACT

We propose an offset-sparsity decomposition method for the enhancement of a color microscopic image of a stained specimen. The method decomposes vectorized spectral images into offset terms and sparse terms, where the sparse term represents the enhanced image and the offset term represents a "shadow." The related optimization problem is solved by a computational improvement of the accelerated proximal gradient method originally used to solve the related rank-sparsity decomposition problem. Removal of an image-adapted color offset yields an enhanced image with improved colorimetric differences among the histological structures. This is verified by a no-reference colorfulness measure estimated from 35 specimens of human liver and 1 specimen of mouse liver stained with hematoxylin and eosin, 6 specimens of mouse liver stained with Sudan III, and 3 specimens of human liver stained with the anti-CD34 monoclonal antibody. The colorimetric difference improves on average by 43.86%, with a 99% confidence interval (CI) of [35.35%, 51.62%]. Furthermore, according to the mean opinion score, estimated on the basis of the evaluations of five pathologists, images enhanced by the proposed method exhibit an average quality improvement of 16.60%, with a 99% CI of [10.46%, 22.73%].
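A hedged sketch of an offset + sparse split of vectorized RGB pixels, X ≈ 1·oᵀ + S, with the offset solved in closed form and S by soft thresholding. This is a simplified stand-in for the accelerated proximal gradient solver used in the paper; the image, penalty weight and iteration count are assumptions.

import numpy as np

def offset_sparsity(X, lam=0.05, n_iter=50):
    n, c = X.shape                                   # n pixels, c color channels
    o = np.zeros(c); S = np.zeros_like(X)
    for _ in range(n_iter):
        o = (X - S).mean(axis=0)                     # best per-channel offset ("shadow") given S
        R = X - o
        S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)   # sparse (enhanced) term
    return o, S

rgb = np.random.rand(64 * 64, 3)                     # stand-in for a vectorized microscopy image
offset, enhanced = offset_sparsity(rgb)
print("estimated color offset:", offset.round(3))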


Assuntos
Corantes/química , Histocitoquímica/métodos , Processamento de Imagem Assistida por Computador/métodos , Microscopia/métodos , Algoritmos , Animais , Humanos , Fígado/química , Neoplasias Hepáticas/química , Masculino , Camundongos
9.
Sci Rep ; 5: 11576, 2015 Jun 23.
Article in English | MEDLINE | ID: mdl-26099963

ABSTRACT

Low-contrast images, such as color microscopic images of unstained histological specimens, are composed of objects with highly correlated spectral profiles and are therefore very hard to segment. Here, we present a method that nonlinearly maps a low-contrast color image into an image with an increased number of non-physical channels and a decreased correlation between spectral profiles. The method is a proof of concept validated on the unsupervised segmentation of color images of unstained specimens, in which case the tissue components appear colorless when viewed under the light microscope. Specimens of human hepatocellular carcinoma, human liver with metastasis from colon and gastric cancer, and mouse fatty liver were used for validation. The average correlation between the spectral profiles of the tissue components was greater than 0.9985, and the worst-case correlation was greater than 0.9997. The proposed method can potentially be applied to the segmentation of low-contrast multichannel images with high spatial resolution that arise in other imaging modalities.


Assuntos
Processamento de Imagem Assistida por Computador , Microscopia/métodos , Coloração e Rotulagem , Algoritmos , Animais , Carcinoma Hepatocelular/patologia , Neoplasias do Colo/secundário , Cor , Crioultramicrotomia , Humanos , Neoplasias Hepáticas/patologia , Camundongos , Razão Sinal-Ruído , Neoplasias Gástricas/secundário
10.
Am J Pathol ; 179(2): 547-54, 2011 Aug.
Article in English | MEDLINE | ID: mdl-21708116

ABSTRACT

A methodology is proposed for nonlinear contrast-enhanced unsupervised segmentation of multispectral (color) microscopy images of principally unstained specimens. The methodology exploits spectral diversity and spatial sparseness to find anatomical differences between materials (cells, nuclei, and background) present in the image. It consists of rth-order rational variety mapping (RVM) followed by matrix/tensor factorization. The sparseness constraint implies duality between the nonlinear unsupervised segmentation and multiclass pattern assignment problems. Classes not linearly separable in the original input space become separable with high probability in the higher-dimensional mapped space. Hence, RVM mapping has two advantages: it implicitly takes into account nonlinearities present in the image (ie, they are not required to be known), and it increases spectral diversity (ie, contrast) between materials, owing to the increased dimensionality of the mapped space. This is expected to improve the performance of systems for automated classification and analysis of microscopic histopathological images. The methodology was validated using RVM of the second and third orders on experimental multispectral microscopy images of unstained sciatic nerve fibers (nervus ischiadicus) and of unstained white pulp in spleen tissue, compared with a manually defined ground truth labeled by two trained pathophysiologists. The methodology can also be useful for additional contrast enhancement of images of stained specimens.
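A hedged sketch of the pipeline shape: a second-order rational variety mapping (all monomials of the spectral channels up to degree 2) followed by non-negative matrix factorization, with each pixel assigned to its dominant factor. The data, rank and the use of NMF as the factorization step are illustrative assumptions.

import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.decomposition import NMF

pixels = np.random.rand(64 * 64, 3)                      # vectorized multispectral (RGB) image
rvm = PolynomialFeatures(degree=2, include_bias=True)    # r = 2 rational variety mapping
mapped = rvm.fit_transform(pixels)                       # 3 physical channels -> 10 mapped channels

nmf = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
encodings = nmf.fit_transform(mapped)                    # per-pixel weights over 3 candidate classes
labels = encodings.argmax(axis=1)                        # unsupervised segmentation map
print(mapped.shape, np.bincount(labels))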


Assuntos
Processamento de Imagem Assistida por Computador/métodos , Microscopia/métodos , Algoritmos , Animais , Meios de Contraste/farmacologia , Dano ao DNA , Diagnóstico por Imagem/métodos , Reações Falso-Positivas , Camundongos , Camundongos Endogâmicos NOD , Microscopia de Fluorescência/métodos , Modelos Estatísticos , Fibras Nervosas/patologia , Parafina/química , Reconhecimento Automatizado de Padrão/métodos , Nervo Isquiático/patologia , Baço/patologia
11.
Int J Mol Sci ; 12(12): 8415-30, 2011.
Article in English | MEDLINE | ID: mdl-22272081

ABSTRACT

Predicting the antitumor activity of compounds using regression models trained on a small number of compounds with measured biological activity is an ill-posed inverse problem, yet it occurs very often within the academic community. To counteract, to some extent, the overfitting caused by small training data, we propose to use a consensus of six regression models for prediction of the biological activity of a virtual library of compounds. The QSAR descriptors of 22 compounds related to the opioid growth factor (OGF, Tyr-Gly-Gly-Phe-Met) with known antitumor activity were used to train the regression models: a feed-forward artificial neural network, k-nearest neighbors, sparseness-constrained linear regression, and linear and nonlinear (with polynomial and Gaussian kernels) support vector machines. The regression models were applied to a virtual library of 429 compounds, which resulted in six lists of candidate compounds ranked by predicted antitumor activity. The highly ranked candidate compounds were synthesized, characterized and tested for antiproliferative activity. Some of the prepared peptides showed more pronounced activity than the native OGF; however, they were less active than the highly ranked compounds selected previously by the radial basis function support vector machine (RBF SVM) regression model. The ill-posedness of the related inverse problem causes unstable behavior of the trained regression models on test data. These results point to the high complexity of prediction based on regression models trained on a small data sample.
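A sketch of the consensus idea with scikit-learn stand-ins for the six regression models (ANN, k-NN, sparse linear regression, and linear, polynomial and Gaussian-kernel SVR); the descriptors, activities and hyperparameters below are synthetic and illustrative, not the study's data or settings.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.linear_model import Lasso
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X_train, y_train = rng.random((22, 30)), rng.random(22)     # 22 compounds, 30 QSAR descriptors
X_virtual = rng.random((429, 30))                           # virtual library to be ranked

models = [MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
          KNeighborsRegressor(n_neighbors=3),
          Lasso(alpha=0.05),                                 # sparseness-constrained linear regression
          SVR(kernel="linear"), SVR(kernel="poly", degree=2), SVR(kernel="rbf")]

preds = np.column_stack([m.fit(X_train, y_train).predict(X_virtual) for m in models])
consensus = preds.mean(axis=1)                               # consensus = average predicted activity
ranking = np.argsort(consensus)[::-1]
print("top-5 candidate compounds:", ranking[:5])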


Assuntos
Antineoplásicos/química , Encefalina Metionina/química , Biblioteca de Peptídeos , Relação Quantitativa Estrutura-Atividade , Antineoplásicos/síntese química , Antineoplásicos/farmacologia , Proliferação de Células/efeitos dos fármacos , Encefalina Metionina/síntese química , Encefalina Metionina/farmacologia , Humanos , Células MCF-7 , Máquina de Vetores de Suporte
12.
BMC Bioinformatics ; 12: 496, 2011 Dec 30.
Article in English | MEDLINE | ID: mdl-22208882

ABSTRACT

BACKGROUND: Bioinformatics data analysis often uses a linear mixture model that represents a sample as an additive mixture of components. Properly constrained blind matrix factorization methods extract those components using mixture samples only. However, automatic selection of the extracted components to be retained for classification analysis remains an open issue. RESULTS: The method proposed here is applied to well-studied protein and genomic datasets of ovarian, prostate and colon cancers to extract components for disease prediction. It achieves average sensitivities of 96.2% (sd = 2.7%), 97.6% (sd = 2.8%) and 90.8% (sd = 5.5%) and average specificities of 93.6% (sd = 4.1%), 99% (sd = 2.2%) and 79.4% (sd = 9.8%) in 100 independent two-fold cross-validations. CONCLUSIONS: We propose an additive mixture model of a sample for feature extraction using, in principle, sparseness-constrained factorization on a sample-by-sample basis; existing methods, in contrast, factorize the complete dataset simultaneously. The sample model is composed of a reference sample, representing the control and/or case (disease) group, and a test sample. Each sample is decomposed into two or more components that are selected automatically (without using label information) as control-specific, case-specific or not differentially expressed (neutral). The number of components is determined by cross-validation. Automatic assignment of features (m/z ratios or genes) to a particular component is based on thresholds estimated from each sample directly. Due to the locality of the decomposition, the strength of the expression of each feature can vary across samples, yet the feature is still allocated to the related disease- and/or control-specific component. Since label information is not used in the selection process, the case- and control-specific components can be used for classification, which is not the case with standard factorization methods. Moreover, the component selected by the proposed method as disease-specific can be interpreted as a sub-mode and retained for further analysis to identify potential biomarkers. In contrast to standard matrix factorization methods, this can be achieved on a sample (experiment)-by-sample basis. Postulating one or more components with indifferent features enables their removal from the disease- and control-specific components on a sample-by-sample basis. This yields selected components with reduced complexity and generally increases prediction accuracy.


Assuntos
Neoplasias do Colo/genética , Perfilação da Expressão Gênica/métodos , Modelos Genéticos , Neoplasias Ovarianas/genética , Neoplasias da Próstata/genética , Feminino , Regulação Neoplásica da Expressão Gênica , Genômica , Humanos , Masculino , Análise de Sequência com Séries de Oligonucleotídeos , Sensibilidade e Especificidade
13.
Opt Express ; 18(17): 17819-33, 2010 Aug 16.
Article in English | MEDLINE | ID: mdl-20721169

ABSTRACT

The higher order orthogonal iteration (HOOI) is used for single-frame and multi-frame space-variant blind deconvolution (BD), performed by factorization of the tensor of a blurred multi-spectral image (MSI). This is achieved by conversion of BD into blind source separation (BSS), in which the sources represent the original image and its spatial derivatives. The HOOI-based factorization enables an essentially unique solution of the related BSS problem, with orthogonality constraints imposed on the factors and the core tensor of the Tucker3 model of the image tensor. In contrast, a matrix factorization-based unique solution of the same BSS problem demands that the sources be statistically independent or sparse, which is not true. The consequence of such an approach to BD is that it requires virtually no a priori information about the possibly space-variant point spread function (PSF): neither its model nor the size of its support. For the space-variant BD problem, the MSI is divided into blocks, and the PSF is assumed to be space-invariant within each block. The success of the proposed concept is demonstrated on experimentally degraded images: defocused single-frame gray-scale and red-green-blue (RGB) images, single-frame gray-scale and RGB images blurred by atmospheric turbulence, and a single-frame RGB image blurred by a grating (photon sieve). Comparable or better performance is demonstrated in relation to the blind Richardson-Lucy algorithm, which, however, requires a priori information about a parametric model of the blur.
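A compact HOOI for a Tucker3 model, written in plain numpy as an illustration of the factorization step only; the blind-deconvolution specifics (building the image tensor from spatial derivatives, block processing, orthogonality-based source recovery) are omitted, and the tensor and ranks are assumptions.

import numpy as np

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_mult(T, M, mode):
    # Multiply tensor T by matrix M along the given mode.
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def hooi(T, ranks, n_iter=20):
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]                 # HOSVD initialization
    for _ in range(n_iter):
        for m in range(T.ndim):
            G = T
            for k in range(T.ndim):
                if k != m:
                    G = mode_mult(G, factors[k].T, k)        # project all other modes
            factors[m] = np.linalg.svd(unfold(G, m), full_matrices=False)[0][:, :ranks[m]]
    core = T
    for m in range(T.ndim):
        core = mode_mult(core, factors[m].T, m)
    return core, factors

msi = np.random.rand(3, 64, 64)            # stand-in multispectral image tensor
core, factors = hooi(msi, ranks=(3, 20, 20))
print(core.shape, [F.shape for F in factors])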


Assuntos
Algoritmos , Processamento de Imagem Assistida por Computador/métodos , Modelos Teóricos , Artefatos , Atmosfera , Fótons
14.
J Photochem Photobiol B ; 100(1): 10-8, 2010 Jul 02.
Article in English | MEDLINE | ID: mdl-20409729

ABSTRACT

This study was designed to demonstrate the robust performance of a novel dependent component analysis (DCA)-based approach to demarcation of basal cell carcinoma (BCC) through unsupervised decomposition of an RGB fluorescent image of the BCC. Robustness to intensity fluctuation is due to the scale-invariance property of DCA algorithms, which exploit spectral and spatial diversities between the BCC and the surrounding tissue. The filtering-based DCA approach used here represents an extension of independent component analysis (ICA) and is necessary in order to account for the statistical dependence induced by the spectral similarity between the BCC and the surrounding tissue. This similarity generates weak edges, which represent a challenge for other segmentation methods as well. By comparative performance analysis with state-of-the-art image segmentation methods such as active contours (level set), K-means clustering, non-negative matrix factorization, ICA and ratio imaging, we experimentally demonstrate good performance of the DCA-based BCC demarcation in two demanding scenarios in which the intensity of the fluorescent image varies by almost two orders of magnitude.
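A sketch of one common filtering-based DCA recipe: learn the unmixing matrix by ICA on filtered channels (where statistical dependence is weaker) and then apply it to the original data. The first-difference filter, synthetic image and component count are assumptions, not the filters used in the study.

import numpy as np
from sklearn.decomposition import FastICA

rgb = np.random.rand(128 * 128, 3)                       # stand-in vectorized RGB fluorescent image
filtered = np.diff(rgb, axis=0, prepend=rgb[:1])         # crude high-pass filtering per channel

ica = FastICA(n_components=3, random_state=0)
ica.fit(filtered)                                        # unmixing learned on the filtered data...
components = ica.transform(rgb)                          # ...then applied to the raw data
print(components.shape)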


Assuntos
Carcinoma Basocelular/diagnóstico , Microscopia de Fluorescência , Neoplasias Cutâneas/diagnóstico , Algoritmos , Corantes Fluorescentes/química , Humanos , Aumento da Imagem , Interpretação de Imagem Assistida por Computador , Reconhecimento Automatizado de Padrão
15.
Anal Chem ; 82(5): 1911-20, 2010 Mar 01.
Article in English | MEDLINE | ID: mdl-20131872

ABSTRACT

Metabolic profiling of biological samples involves nuclear magnetic resonance (NMR) spectroscopy and mass spectrometry coupled with powerful statistical tools for complex data analysis. Here, we report a robust, sparseness-based method for the blind separation of analytes from mixtures recorded in spectroscopic and spectrometric measurements. The advantage of the proposed method over alternative blind decomposition schemes is that it is capable of estimating the number of analytes, their concentrations, and the analytes themselves from the available mixtures only. The number of analytes can be less than, equal to, or greater than the number of mixtures. The method is exemplified on blind extraction of four analytes from three mixtures in 2D NMR spectroscopy and five analytes from two mixtures in mass spectrometry. The proposed methodology is of widespread significance for natural products research and the field of metabolic studies, in which mixtures represent samples isolated from biological fluids or tissue extracts.


Assuntos
Espectroscopia de Ressonância Magnética/métodos , Espectrometria de Massas/métodos , Algoritmos , Análise por Conglomerados
16.
Anal Chim Acta ; 653(2): 143-53, 2009 Oct 27.
Article in English | MEDLINE | ID: mdl-19808106

ABSTRACT

Sparse component analysis (SCA) is demonstrated for blind extraction of three pure-component spectra from only two measured mixed spectra in 13C and 1H nuclear magnetic resonance (NMR) spectroscopy. This appears to be the first report of such results, which is the first novelty of the paper. The presented concept is general and directly applicable to experimental scenarios that may require the use of more than two mixtures; however, the number of required mixtures is always less than the number of components present in them. The second novelty is the formulation of blind NMR spectra decomposition exploiting the sparseness of the pure components in a wavelet basis defined by either the Morlet or the Mexican hat wavelet. This enabled accurate estimation of the concentration matrix and the number of pure components by means of a data clustering algorithm, and of the pure component spectra by means of linear programming with constraints, from both 1H and 13C NMR experimental data. The third novelty is the capability of the proposed method to estimate the number of pure components in the demanding underdetermined blind source separation (uBSS) scenario, in contrast to the majority of BSS algorithms, which assume this information to be known in advance. The presented results are important for NMR spectroscopy-associated data analysis in the pharmaceutical industry, medical diagnostics and natural products research.
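A bare-bones illustration of the SCA principle with synthetic data standing in for sparse (wavelet-domain) spectra: where only one pure component is active, the 2-by-1 mixture vector points along a column of the mixing matrix, so clustering the normalized mixture vectors estimates the concentration matrix; the components are then recovered point-wise by non-negative least squares. Two mixtures, three components; the data, clustering choice and NNLS recovery are assumptions.

import numpy as np
from scipy.optimize import nnls
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n = 3000
S = rng.random((3, n)) * (rng.random((3, n)) > 0.9)      # sparse pure-component "spectra"
A = np.abs(rng.random((2, 3))); A /= np.linalg.norm(A, axis=0)
X = A @ S                                                 # two observed mixtures

active = np.linalg.norm(X, axis=0) > 1e-6
directions = (X[:, active] / np.linalg.norm(X[:, active], axis=0)).T
A_est = KMeans(n_clusters=3, n_init=10, random_state=0).fit(directions).cluster_centers_.T
A_est /= np.linalg.norm(A_est, axis=0)                    # estimated concentration (mixing) matrix

S_est = np.column_stack([nnls(A_est, X[:, i])[0] for i in range(n)])
print(A_est.round(2))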


Assuntos
Misturas Complexas/química , Espectroscopia de Ressonância Magnética/métodos , Compostos Orgânicos/análise , Algoritmos , Isótopos de Carbono , Análise de Componente Principal , Prótons , Padrões de Referência , Processamento de Sinais Assistido por Computador , Software , Soluções
17.
Opt Lett ; 34(14): 2210-2, 2009 Jul 15.
Article in English | MEDLINE | ID: mdl-19823551

ABSTRACT

Alpha-divergence-based nonnegative tensor factorization (NTF) is applied to blind multispectral image (MSI) decomposition. The matrix of spectral profiles and the matrix of spatial distributions of the materials resident in the image are identified from the factors in the Tucker3 and PARAFAC models. NTF preserves local structure in the MSI that is lost, as a result of vectorization of the image, when nonnegative matrix factorization (NMF)- or independent component analysis (ICA)-based decompositions are used. Moreover, NTF based on the PARAFAC model is unique up to permutation and scale under mild conditions. To achieve this, NMF- and ICA-based factorizations, respectively, require enforcement of sparseness (orthogonality) and statistical independence constraints on the spatial distributions of the materials resident in the MSI, and these conditions do not hold. We demonstrate the efficiency of the NTF-based factorization in relation to NMF- and ICA-based factorizations on blind decomposition of an experimental MSI with known ground truth.


Assuntos
Algoritmos , Aumento da Imagem/métodos , Interpretação de Imagem Assistida por Computador/métodos , Imageamento Tridimensional/métodos , Reprodutibilidade dos Testes , Sensibilidade e Especificidade
18.
Opt Lett ; 34(18): 2835-7, 2009 Sep 15.
Article in English | MEDLINE | ID: mdl-19756121

ABSTRACT

By applying a bank of 2D Gabor filters to a blurred image, single-frame blind image deconvolution (SF BID) is formulated as a 3D tensor factorization (TF) problem, with the key contribution that neither the origin nor the size of the spatially invariant blurring kernel is required to be known or estimated. The mixing matrix, the original image, and its spatial derivatives are identified from the factors in the Tucker3 model of the multichannel version of the blurred image. Previous approaches to 2D Gabor-filter-bank-based SF BID relied on a 2D representation of the multichannel version of the blurred image and matrix factorization methods such as nonnegative matrix factorization (NMF) and independent component analysis (ICA). Unlike matrix factorization-based methods, 3D TF preserves local structure in the image. Moreover, 3D TF based on the PARAFAC model is unique up to permutation and scaling under very mild conditions. To achieve this, NMF and ICA respectively require enforcement of sparseness and statistical independence constraints on the original image and its spatial derivatives, and these constraints are generally not satisfied. The 3D TF-based SF BID method is demonstrated on an experimentally defocused red-green-blue image.

19.
J Mass Spectrom ; 44(9): 1378-88, 2009 Sep.
Article in English | MEDLINE | ID: mdl-19670286

ABSTRACT

The paper presents sparse component analysis (SCA)-based blind decomposition of mixtures of mass spectra into pure components, wherein the number of mixtures is less than the number of pure components. Standard solutions of the related blind source separation (BSS) problem published in the open literature require the number of mixtures to be greater than or equal to the unknown number of pure components. Specifically, we have demonstrated experimentally the capability of SCA to blindly extract five pure-component mass spectra from only two mixtures. Two approaches to SCA are tested: the first based on l1-norm minimization implemented through linear programming, and the second implemented through multilayer hierarchical alternating least squares nonnegative matrix factorization with sparseness constraints imposed on the pure-component spectra. In contrast to many existing blind decomposition methods, no a priori information about the number of pure components is required; it is estimated from the mixtures, together with the concentration matrix of the pure components, using a robust data clustering algorithm. The proposed methodology can be implemented as part of software packages used for the analysis of mass spectra and identification of chemical compounds.


Assuntos
Misturas Complexas/química , Enedi-Inos/química , Modelos Químicos , Espectrometria de Massas por Ionização por Electrospray , Algoritmos , Cromatografia Líquida de Alta Pressão , Análise por Conglomerados , Software
20.
J Opt Soc Am A Opt Image Sci Vis ; 24(4): 973-83, 2007 Apr.
Article in English | MEDLINE | ID: mdl-17361283

ABSTRACT

A single-frame multichannel blind image deconvolution technique has recently been formulated as a blind source separation problem solved by independent component analysis (ICA). The attractive feature of this approach is that neither the origin nor the size of the spatially invariant blurring kernel has to be known. To enhance the statistical independence among the hidden variables, we employ multiscale analysis implemented by wavelet packets and use mutual information to locate a subband with the least dependent components, where the basis matrix is learned by means of standard ICA. We show that the proposed algorithm is capable of performing blind deconvolution of nonstationary signals that are not independent and identically distributed processes; images possess these properties. The algorithm is tested on experimental data and compared with state-of-the-art single-frame blind image deconvolution algorithms. Our good experimental results demonstrate the viability of the proposed concept.
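A hedged sketch of the subband-selection idea on synthetic 1D signals: decompose each observed channel with a wavelet transform, pick the subband whose coefficients are least dependent (here a cheap proxy, lowest mean absolute pairwise correlation, instead of mutual information), learn the ICA unmixing matrix there, and apply it to the original channels. The data, wavelet, depth and dependence proxy are assumptions; the original method uses wavelet packets and mutual information.

import numpy as np
import pywt
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
sources = np.cumsum(rng.standard_normal((3, 4096)), axis=1)      # nonstationary stand-in signals
X = rng.random((3, 3)) @ sources                                  # observed mixtures (channels x samples)

subbands = list(zip(*[pywt.wavedec(x, "db4", level=4) for x in X]))   # group coefficients by level

def mean_abs_corr(B):
    C = np.corrcoef(np.vstack(B))
    return np.abs(C - np.eye(len(C))).mean()

best = min(subbands, key=mean_abs_corr)                           # least-dependent subband
ica = FastICA(n_components=3, random_state=0).fit(np.vstack(best).T)
recovered = ica.transform(X.T).T                                  # unmixing applied to the originals
print(recovered.shape)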


Assuntos
Algoritmos , Artefatos , Inteligência Artificial , Aumento da Imagem/métodos , Interpretação de Imagem Assistida por Computador/métodos , Imageamento Tridimensional/métodos , Reconhecimento Automatizado de Padrão/métodos , Análise por Conglomerados , Simulação por Computador , Modelos Estatísticos , Análise de Componente Principal , Análise de Regressão , Reprodutibilidade dos Testes , Sensibilidade e Especificidade