Results 1 - 11 of 11
1.
Math Program ; 176(1-2): 5-37, 2019 Jul.
Article in English | MEDLINE | ID: mdl-33833473

ABSTRACT

This paper considers the problem of solving systems of quadratic equations, namely, recovering an object of interest $x^\natural \in \mathbb{R}^n$ from $m$ quadratic equations/samples $y_i = (a_i^\top x^\natural)^2$, $1 \le i \le m$. This problem, also dubbed phase retrieval, spans multiple domains including physical sciences and machine learning. We investigate the efficacy of gradient descent (or Wirtinger flow) designed for the nonconvex least-squares problem. We prove that under Gaussian designs, gradient descent, when randomly initialized, yields an $\epsilon$-accurate solution in $O(\log n + \log(1/\epsilon))$ iterations given nearly minimal samples, thus achieving near-optimal computational and sample complexities at once. This provides the first global convergence guarantee concerning vanilla gradient descent for phase retrieval, without the need of (i) carefully designed initialization, (ii) sample splitting, or (iii) sophisticated saddle-point escaping schemes. All of this is achieved by exploiting the statistical models in analyzing optimization algorithms, via a leave-one-out approach that decouples certain statistical dependencies between the gradient descent iterates and the data.
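The algorithm analyzed here is plain gradient descent on the quadratic loss $f(x) = \frac{1}{4m}\sum_i ((a_i^\top x)^2 - y_i)^2$ from a random starting point. A minimal numpy sketch under a Gaussian design; the step-size rule and iteration count are illustrative choices, not the paper's constants:

```python
import numpy as np

def phase_retrieval_gd(A, y, lr=0.1, iters=3000, seed=0):
    """Randomly initialized gradient descent (Wirtinger flow) on
    f(x) = (1/4m) * sum_i ((a_i @ x)**2 - y_i)**2."""
    m, n = A.shape
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n) / np.sqrt(n)          # random initialization
    step = lr / np.mean(y)                           # mean(y) estimates ||x||^2
    for _ in range(iters):
        Ax = A @ x
        x -= step * (A.T @ ((Ax**2 - y) * Ax)) / m   # gradient of f
    return x

# Gaussian design demo; recovery is only possible up to a global sign
rng = np.random.default_rng(1)
n, m = 50, 500
x_star = rng.standard_normal(n)
A = rng.standard_normal((m, n))
y = (A @ x_star) ** 2
x_hat = phase_retrieval_gd(A, y)
print(min(np.linalg.norm(x_hat - x_star), np.linalg.norm(x_hat + x_star)))
```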

2.
Proc IEEE Inst Electr Electron Eng ; 106(8): 1293-1310, 2018 Aug.
Article in English | MEDLINE | ID: mdl-30828106

ABSTRACT

For many modern applications in science and engineering, data are collected in a streaming fashion carrying time-varying information, and practitioners need to process them in a timely manner, with a limited amount of memory and computational resources, for decision making. This is often compounded by the missing data problem, in which only a small fraction of data attributes are observed. These complications impose significant, and unconventional, constraints on the problem of streaming Principal Component Analysis (PCA) and subspace tracking, which is an essential building block for many inference tasks in signal processing and machine learning. This survey article reviews a variety of classical and recent algorithms for solving this problem with low computational and memory complexities, particularly those applicable in the big data regime with missing data. We illustrate that streaming PCA and subspace tracking algorithms can be understood through algebraic and geometric perspectives, and that they need to be adjusted carefully to handle missing data. Both asymptotic and non-asymptotic convergence guarantees are reviewed. Finally, we benchmark the performance of several competitive algorithms in the presence of missing data for both well-conditioned and ill-conditioned systems.
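One representative of the class of algorithms such a survey covers is GROUSE, which performs an incremental gradient step on the Grassmannian for each incoming, partially observed vector. A minimal numpy sketch, assuming a fixed step size `eta` (practical variants use diminishing or data-adaptive step sizes):

```python
import numpy as np

def grouse_step(U, idx, v_obs, eta=0.1):
    """One GROUSE update of an orthonormal basis U (n x k) from a vector
    observed only at rows idx (observed values v_obs)."""
    w, *_ = np.linalg.lstsq(U[idx], v_obs, rcond=None)  # fit on observed rows
    p = U @ w                                           # full-vector prediction
    r = np.zeros(U.shape[0])
    r[idx] = v_obs - p[idx]                             # residual, zero off-support
    pn, rn, wn = np.linalg.norm(p), np.linalg.norm(r), np.linalg.norm(w)
    if rn < 1e-12 or wn < 1e-12:
        return U                                        # nothing to correct
    t = eta * rn * pn                                   # rotation angle
    return U + np.outer((np.cos(t) - 1) * p / pn + np.sin(t) * r / rn, w / wn)

# tracking: for each incoming (idx, v_obs) pair, U = grouse_step(U, idx, v_obs)
```

Each step costs roughly O(|idx| k^2 + nk) and keeps U orthonormal up to numerical error, which is what makes this family attractive in the streaming, missing-data regime.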

3.
Opt Lett ; 40(13): 2989-92, 2015 Jul 01.
Article in English | MEDLINE | ID: mdl-26125349

ABSTRACT

Single-molecule localization microscopy achieves sub-diffraction-limit resolution by localizing a sparse subset of stochastically activated emitters in each frame. Its temporal resolution is limited by the maximal emitter density that the image reconstruction algorithms can handle. Multiple algorithms have been developed to accurately locate emitters even when they overlap significantly. Currently, the compressive-sensing-based algorithm CSSTORM achieves the highest emitter density. However, CSSTORM is extremely computationally expensive, which limits its practical application. Here, we develop a new algorithm (MempSTORM) based on two-dimensional spectrum analysis. With the same localization accuracy and recall rate, MempSTORM is 100 times faster than CSSTORM with $\ell_1$-homotopy. In addition, MempSTORM can be implemented on a GPU for parallelism, which can further increase its computational speed and make online super-resolution reconstruction of high-density emitters possible.
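The core primitive behind this kind of spectrum analysis is estimating a few spectral lines from uniform samples via the shift invariance of a Hankel matrix (the matrix-pencil idea). A 1D toy sketch of that primitive, not the full 2D MempSTORM pipeline:

```python
import numpy as np
from scipy.linalg import hankel

def line_spectrum(s, K):
    """Estimate K frequencies from samples s[k] = sum_j c_j exp(2j*pi*f_j*k)
    using the shift invariance of the Hankel matrix's signal subspace."""
    N = len(s); L = N // 2
    Y = hankel(s[:L], s[L - 1:])            # L x (N-L+1) Hankel matrix
    U, _, _ = np.linalg.svd(Y)
    Us = U[:, :K]                            # rank-K signal subspace
    Phi = np.linalg.pinv(Us[:-1]) @ Us[1:]   # shift-invariance relation
    z = np.linalg.eigvals(Phi)               # poles on the unit circle
    return np.sort(np.mod(np.angle(z) / (2 * np.pi), 1.0))

# two closely spaced "emitters" at frequencies 0.30 and 0.32
k = np.arange(64)
s = 1.0 * np.exp(2j * np.pi * 0.30 * k) + 0.8 * np.exp(2j * np.pi * 0.32 * k)
print(line_spectrum(s, K=2))   # ~ [0.30, 0.32]
```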


Subjects
Algorithms , Image Processing, Computer-Assisted/methods , Microscopy
4.
Opt Lett ; 38(19): 3957-60, 2013 Oct 01.
Article in English | MEDLINE | ID: mdl-24081098

ABSTRACT

We present a technique that recovers the structure and the modal weights of the spatial modes of lasers from a limited number of spatial coherence measurements. Our approach interpolates the unobserved spatial coherence measurements via low-rank matrix completion based on nuclear norm minimization, and then extracts the set of modes via a singular value decomposition. Numerical examples on a variety of lasers demonstrate the effectiveness of the method and show that it can further reduce the number of measurements by a factor of 2 for a moderate data size.
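A compact sketch of the completion-then-SVD pipeline using cvxpy's nuclear-norm atom; the toy "coherence" matrix, mask density, and rank below are invented for illustration:

```python
import numpy as np
import cvxpy as cp

def complete_lowrank(M_obs, mask):
    """Nuclear-norm minimization subject to agreement on observed entries."""
    X = cp.Variable(M_obs.shape)
    constraints = [cp.multiply(mask, X) == cp.multiply(mask, M_obs)]
    cp.Problem(cp.Minimize(cp.normNuc(X)), constraints).solve()
    return X.value

# toy rank-2 "coherence" matrix with 60% of entries observed
rng = np.random.default_rng(0)
F = rng.standard_normal((20, 2))
M = F @ F.T
mask = (rng.random(M.shape) < 0.6).astype(float)
X = complete_lowrank(M * mask, mask)
modes = np.linalg.svd(X)[0][:, :2]   # extract modes via SVD, as in the paper
```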

5.
Sci Rep ; 13(1): 21253, 2023 Dec 01.
Article in English | MEDLINE | ID: mdl-38040823

ABSTRACT

Three-dimensional electron back-scattered diffraction (EBSD) microscopy is a critical tool in many materials science applications, yet its data quality can fluctuate greatly during the arduous collection process, particularly under serial sectioning. Fortunately, 3D EBSD data is inherently sequential, opening up the opportunity to use transformers, state-of-the-art deep learning architectures that have made breakthroughs in a plethora of domains, for data processing and recovery. To be more robust to errors and to accelerate 3D EBSD data collection, we introduce a two-step method that recovers missing slices in a 3D EBSD volume, using an efficient transformer model and a projection algorithm to process the transformer's outputs. Overcoming the computational and practical hurdles of deep learning with scarce high-dimensional data, we train this model using only synthetic 3D EBSD data with self-supervision, and obtain superior recovery accuracy on real 3D EBSD data compared to existing methods.
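A heavily simplified PyTorch sketch of the sequence view: each slice is flattened into a token, missing slices are replaced by a learned mask token, and a transformer encoder regresses their content. The real model, its training recipe, and the projection post-processing are more involved; everything below is illustrative:

```python
import torch
import torch.nn as nn

class SliceRecoverer(nn.Module):
    """Toy transformer that infills missing slices in a (depth, H*W) stack."""
    def __init__(self, slice_dim, d_model=128, nhead=4, nlayers=2, max_depth=64):
        super().__init__()
        self.embed = nn.Linear(slice_dim, d_model)
        self.mask_token = nn.Parameter(torch.zeros(d_model))
        self.pos = nn.Parameter(0.02 * torch.randn(max_depth, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, nlayers)
        self.head = nn.Linear(d_model, slice_dim)

    def forward(self, slices, missing):
        """slices: (batch, depth, slice_dim); missing: (batch, depth) bool."""
        h = self.embed(slices)
        h[missing] = self.mask_token          # swap in the mask token at gaps
        h = h + self.pos[: slices.shape[1]]   # learned positional encoding
        return self.head(self.encoder(h))

# training would regress only the masked positions, e.g.
# loss = ((out - target)[missing] ** 2).mean()
```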

6.
Sci Data ; 9(1): 421, 2022 07 19.
Article in English | MEDLINE | ID: mdl-35853958

ABSTRACT

Despite being crucial to health and quality of life, sleep, especially pediatric sleep, is not yet well understood. This is exacerbated by the lack of access to sufficient pediatric sleep data with clinical annotation. In order to accelerate research on pediatric sleep and its connection to health, we create the Nationwide Children's Hospital (NCH) Sleep DataBank and publish it at PhysioNet and the National Sleep Research Resource (NSRR), a large sleep data commons with physiological data, clinical data, and tools for analysis. The NCH Sleep DataBank consists of 3,984 polysomnography studies and over 5.6 million clinical observations on 3,673 unique patients seen between 2017 and 2019 at NCH. The novelties of this dataset include: (1) a large-scale sleep dataset suitable for discovering new insights via data mining, (2) an explicit focus on pediatric patients, (3) collection in a real-world clinical setting, and (4) an accompanying rich set of clinical data. The NCH Sleep DataBank is a valuable resource for advancing automatic sleep scoring and real-time sleep disorder prediction, among many other potential scientific discoveries.


Subjects
Sleep Wake Disorders , Sleep , Child , Databases, Factual , Humans , Polysomnography , Quality of Life
7.
Article in English | MEDLINE | ID: mdl-33748408

ABSTRACT

This work studies the denoising of piecewise smooth graph signals that exhibit inhomogeneous levels of smoothness over a graph, where the value at each node can be vector-valued. We extend the graph trend filtering framework to denoising vector-valued graph signals with a family of non-convex regularizers, which exhibit superior recovery performance over existing convex regularizers. Using an oracle inequality, we establish the statistical error rates of first-order stationary points of the proposed non-convex method for generic graphs. Furthermore, we present an ADMM-based algorithm to solve the proposed method and establish its convergence. Numerical experiments are conducted on both synthetic and real-world data for denoising, support recovery, event detection, and semi-supervised classification.
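As a concrete reference point, here is a minimal ADMM sketch for the convex $\ell_1$ special case on a scalar chain-graph signal (graph total variation); the paper's method generalizes this kind of splitting to vector-valued signals and non-convex penalties:

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding, the proximal map of the l1 penalty."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def gtf_admm(y, D, lam, rho=1.0, iters=300):
    """ADMM for min_x 0.5*||y - x||^2 + lam*||D @ x||_1, with D the graph
    difference (incidence) operator. Splitting: z = D @ x."""
    n, m = len(y), D.shape[0]
    z, u = np.zeros(m), np.zeros(m)
    K = np.linalg.inv(np.eye(n) + rho * (D.T @ D))   # cache the x-update solve
    for _ in range(iters):
        x = K @ (y + rho * D.T @ (z - u))
        z = soft(D @ x + u, lam / rho)
        u += D @ x - z
    return x

# chain graph: recover a piecewise-constant signal from noisy samples
n = 90
D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]             # edge-difference operator
x0 = np.repeat(np.array([0.0, 2.0, 1.0]), 30)
y = x0 + 0.3 * np.random.default_rng(0).standard_normal(n)
x_hat = gtf_admm(y, D, lam=1.0)
```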

8.
SIAM J Optim ; 30(4): 3098-3121, 2020.
Article in English | MEDLINE | ID: mdl-34305368

ABSTRACT

This paper studies noisy low-rank matrix completion: given partial and noisy entries of a large low-rank matrix, the goal is to estimate the underlying matrix faithfully and efficiently. Arguably one of the most popular paradigms to tackle this problem is convex relaxation, which achieves remarkable efficacy in practice. However, the theoretical support of this approach is still far from optimal in the noisy setting, falling short of explaining its empirical success. We make progress towards demystifying the practical efficacy of convex relaxation vis-à-vis random noise. When the rank and the condition number of the unknown matrix are bounded by a constant, we demonstrate that the convex programming approach achieves near-optimal estimation errors - in terms of the Euclidean loss, the entrywise loss, and the spectral norm loss - for a wide range of noise levels. All of this is enabled by bridging convex relaxation with the nonconvex Burer-Monteiro approach, a seemingly distinct algorithmic paradigm that is provably robust against noise. More specifically, we show that an approximate critical point of the nonconvex formulation serves as an extremely tight approximation of the convex solution, thus allowing us to transfer the desired statistical guarantees of the nonconvex approach to its convex counterpart.
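The nonconvex side of this bridge is easy to state: factor the matrix as $X = LR^\top$ (Burer-Monteiro) and run gradient descent on the observed-entry squared loss. A numpy sketch with spectral initialization; the step size is an illustrative choice, and the regularization details of the paper's analysis are omitted:

```python
import numpy as np

def bm_complete(M_obs, mask, r, eta=0.01, iters=2000):
    """Gradient descent on f(L,R) = 0.5*||mask*(L @ R.T - M_obs)||_F^2."""
    p = mask.mean()                                  # observation probability
    U, s, Vt = np.linalg.svd(M_obs / p, full_matrices=False)
    L = U[:, :r] * np.sqrt(s[:r])                    # spectral initialization
    R = Vt[:r].T * np.sqrt(s[:r])
    for _ in range(iters):
        E = mask * (L @ R.T - M_obs)                 # residual on observed entries
        L, R = L - eta * (E @ R), R - eta * (E.T @ L)
    return L @ R.T

# toy: rank-2 matrix, 40% of entries observed with additive noise
rng = np.random.default_rng(0)
A, B = rng.standard_normal((60, 2)), rng.standard_normal((60, 2))
M = A @ B.T
mask = (rng.random(M.shape) < 0.4).astype(float)
M_obs = mask * (M + 0.1 * rng.standard_normal(M.shape))
print(np.linalg.norm(bm_complete(M_obs, mask, r=2) - M) / np.linalg.norm(M))
```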

9.
Biomed Opt Express ; 6(3): 902-17, 2015 Mar 01.
Article in English | MEDLINE | ID: mdl-25798314

ABSTRACT

Single-molecule-based superresolution techniques (STORM/PALM) achieve nanometer spatial resolution by integrating the temporal information of the switching dynamics of fluorophores (emitters). When the emitter density in each frame is low, individual emitters can be localized with nanometer precision. However, as the emitter density rises and overlaps become significant, it becomes increasingly difficult to accurately locate individual emitters. This is particularly apparent in three-dimensional (3D) localization because of the large effective volume of the 3D point spread function (PSF). The inability to precisely locate emitters at high density limits the temporal resolution of localization-based superresolution techniques and significantly restricts their application to 3D live-cell imaging. To address this problem, we developed a 3D high-density superresolution imaging platform that precisely locates the positions of emitters even when they overlap significantly in 3D space. Our platform combines a multi-focus system with astigmatic optics and an $\ell_1$-homotopy optimization procedure. To reduce the intrinsic bias introduced by the discrete formulation of compressed sensing, we introduced a debiasing step followed by a 3D weighted centroid procedure, which not only increases the localization accuracy but also increases the computation speed of image reconstruction. We implemented our algorithms on a graphics processing unit (GPU), which speeds up processing 10-fold compared with a central processing unit (CPU) implementation. We tested our method on both simulated data and experimental data of fluorescently labeled microtubules, and were able to reconstruct a 3D microtubule image from 1000 frames (512×512) acquired within 20 seconds.
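A 1D toy of the recovery-then-refinement idea: grid-based sparse recovery (sklearn's Lasso standing in for the $\ell_1$-homotopy solver), a debiasing least-squares fit on the recovered support, and a weighted centroid that turns on-grid weights into an off-grid position estimate. The PSF width, grid, and thresholds are invented for illustration:

```python
import numpy as np
from sklearn.linear_model import Lasso

pixels = np.linspace(0.0, 10.0, 64)                  # camera coordinates
grid = np.linspace(0.0, 10.0, 200)                   # candidate positions
psf = lambda c: np.exp(-0.5 * ((pixels - c) / 0.6) ** 2)
D = np.stack([psf(c) for c in grid], axis=1)         # 64 x 200 PSF dictionary

true_pos = 4.321                                     # off-grid emitter
y = psf(true_pos) + 0.01 * np.random.default_rng(0).standard_normal(64)

# 1) sparse recovery on the grid
lasso = Lasso(alpha=1e-3, positive=True, fit_intercept=False, max_iter=100000)
w = lasso.fit(D, y).coef_
S = np.flatnonzero(w > 1e-6)                         # active grid points
# 2) debias: unpenalized least squares restricted to the support
w_db = np.linalg.lstsq(D[:, S], y, rcond=None)[0]
# 3) weighted centroid of the active grid points
print(np.sum(grid[S] * w_db) / np.sum(w_db))         # ~ 4.321
```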

10.
IEEE Trans Pattern Anal Mach Intell ; 36(8): 1519-31, 2014 Aug.
Article in English | MEDLINE | ID: mdl-26353335

ABSTRACT

Recent advances have shown the great potential of collaborative representations, in which a test sample is represented over a dictionary composed of training samples from all classes, for multi-class recognition, with sparse representation as a prominent special case. In this paper, we present two multi-class classification algorithms that make use of multiple collaborative representations in their formulations, and demonstrate the performance gain from exploiting this extra degree of freedom. We first present the Collaborative Representation Optimized Classifier (CROC), which strikes a balance between the nearest-subspace classifier, which assigns a test sample to the class that minimizes the distance between the sample and its principal projection onto the selected class, and the Collaborative Representation based Classifier (CRC), which assigns a test sample to the class that minimizes the distance between the sample and its collaborative components. Several well-known classifiers become special cases of CROC under different regularization parameters, and we show that classification performance can be improved by optimally tuning the regularization parameter through cross-validation. We then propose the Collaborative Representation based Boosting (CRBoosting) algorithm, which generalizes CROC to incorporate multiple collaborative representations. Extensive numerical examples are provided, with performance comparisons of different choices of collaborative representations, in particular when the test sample is acquired via compressive measurements.
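To make the two endpoints concrete, the sketch below computes both per-class residuals, the nearest-subspace one and the collaborative (ridge-regression) one, and blends them with a parameter `lam` that would be tuned by cross-validation. The exact blending and regularization path of CROC follow the paper; this is only one natural reading:

```python
import numpy as np

def croc_scores(y, X_by_class, lam=0.5, ridge=1e-2):
    """Per-class scores mixing a nearest-subspace residual with a
    collaborative-representation (ridge) residual; smaller is better."""
    X = np.hstack(X_by_class)                           # dictionary of all classes
    alpha = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ y)
    scores, start = [], 0
    for Xc in X_by_class:
        k = Xc.shape[1]
        ac = alpha[start:start + k]; start += k
        r_crc = np.linalg.norm(y - Xc @ ac)             # collaborative residual
        r_ns = np.linalg.norm(y - Xc @ np.linalg.pinv(Xc) @ y)  # subspace residual
        scores.append((1 - lam) * r_ns + lam * r_crc)
    return np.array(scores)     # predict: class with the smallest score
```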

11.
Article in English | MEDLINE | ID: mdl-22254444

ABSTRACT

In remote monitoring of the electrocardiogram (ECG), it is very important to ensure that the diagnostic integrity of the signals is not compromised by sensing artifacts and channel errors. It is also important for the sensors to be extremely power efficient, to enable wearable form factors and long battery life. We present an application of Compressive Sensing (CS) as an error mitigation scheme at the application layer for wearable, wireless sensors in diagnostic-grade remote monitoring of ECG. In our previous work, we described an approach that mitigates errors due to packet losses by projecting the ECG data onto a random space and recovering a faithful representation using sparse reconstruction methods. Our contributions in this work are twofold. First, we present an efficient hardware implementation of the random projection at the sensor. Second, we validate the diagnostic integrity of the reconstructed ECG after packet loss mitigation. We validate our approach on the MIT and AHA databases, comprising more than 250,000 normal and abnormal beats, using the EC57 protocols adopted by the Food and Drug Administration (FDA). We show that the sensitivity and positive predictivity of a state-of-the-art ECG arrhythmia classifier are essentially invariant under CS-based packet loss mitigation, for both normal and abnormal beats, even at high packet loss rates. In contrast, performance degrades significantly in the absence of any error mitigation scheme, particularly for abnormal beats such as Ventricular Ectopic Beats (VEB).
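An end-to-end toy of the signal path: a ±1 random projection at the sensor (cheap in hardware), simulated packet loss dropping a subset of measurements, and sparse reconstruction at the receiver. The DCT sparsity model and the Lasso solver below are stand-ins chosen for illustration, not the paper's exact transform or reconstruction algorithm:

```python
import numpy as np
from scipy.fft import idct
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n = 256
coef = np.zeros(n)
coef[[3, 7, 20, 41]] = [2.0, -1.5, 1.0, 0.5]        # crude "ECG-like" sparsity
x = idct(coef, norm='ortho')                         # signal, sparse in DCT

Phi = rng.choice([-1.0, 1.0], size=(n, n)) / np.sqrt(n)  # sensor-side projection
y = Phi @ x                                          # transmitted measurements

keep = rng.random(n) > 0.3                           # ~30% lost in transit
Psi = idct(np.eye(n), norm='ortho', axis=0)          # DCT synthesis basis
A = Phi[keep] @ Psi                                  # effective sensing matrix

c_hat = Lasso(alpha=1e-4, fit_intercept=False,
              max_iter=200000).fit(A, y[keep]).coef_
x_hat = Psi @ c_hat
print(np.linalg.norm(x_hat - x) / np.linalg.norm(x))  # small despite losses
```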


Subjects
Arrhythmias, Cardiac/diagnosis , Artifacts , Electrocardiography, Ambulatory/instrumentation , Signal Processing, Computer-Assisted/instrumentation , Telemedicine/instrumentation , Telemetry/instrumentation , Equipment Design , Equipment Failure Analysis , Humans , Reproducibility of Results , Sensitivity and Specificity