Results 1 - 5 of 5
1.
Sci Rep ; 14(1): 20066, 2024 Aug 29.
Article in English | MEDLINE | ID: mdl-39209864

ABSTRACT

Effectively assessing the realism and naturalness of images in virtual reality (VR) and augmented reality (AR) applications requires Full Reference Image Quality Assessment (FR-IQA) metrics that closely align with human perception. Deep learning-based IQAs that are trained on human-labeled data have recently shown promise in generic computer vision tasks. However, their performance decreases in applications where perfect matches between the reference and the distorted images should not be expected, or whenever distortion patterns are restricted to specific domains. Tackling this issue necessitates training a task-specific neural network, yet generating human-labeled FR-IQAs is costly, and deep learning typically demands substantial labeled data. To address these challenges, we developed ConIQA, a deep learning-based IQA that leverages consistency training and a novel data augmentation method to learn from both labeled and unlabeled data. This makes ConIQA well-suited for contexts with scarce labeled data. To validate ConIQA, we considered the example application of Computer-Generated Holography (CGH), where specific artifacts such as ringing, speckle, and quantization errors routinely occur yet are not explicitly accounted for by existing IQAs. We developed a new dataset, HQA1k, that comprises 1000 natural images, each paired with an image rendered using one of several popular CGH algorithms and quality-rated by thirteen human participants. Our results show that ConIQA outperforms fifteen FR-IQA metrics by up to 5%, achieving Pearson (0.98), Spearman (0.965), and Kendall's tau (0.86) correlations and markedly better alignment with human perception on the HQA1k dataset.
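As a hypothetical illustration of how the three reported correlations are typically computed when benchmarking an FR-IQA metric against human ratings (the scores below are made-up placeholders, not HQA1k data):

```python
# Benchmarking an IQA model against human mean opinion scores using the
# three correlations reported in the abstract. All numbers are placeholders.
from scipy.stats import pearsonr, spearmanr, kendalltau

human_scores = [0.9, 0.7, 0.8, 0.4, 0.2, 0.6]      # human ratings (placeholder)
model_scores = [0.85, 0.65, 0.75, 0.5, 0.25, 0.6]  # IQA predictions (placeholder)

plcc, _ = pearsonr(human_scores, model_scores)     # linear correlation
srcc, _ = spearmanr(human_scores, model_scores)    # rank correlation
krcc, _ = kendalltau(human_scores, model_scores)   # pairwise rank agreement
print(round(plcc, 3), round(srcc, 3), round(krcc, 3))
```

Spearman and Kendall depend only on the rank ordering of the scores, which is why they are the standard choice when a model's raw outputs are on a different scale than human ratings.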

2.
Nat Methods ; 20(9): 1417-1425, 2023 09.
Article in English | MEDLINE | ID: mdl-37679524

ABSTRACT

Optical microscopy methods such as calcium and voltage imaging enable fast activity readout of large neuronal populations using light. However, the lack of corresponding advances in online algorithms has slowed progress in retrieving information about neural activity during or shortly after an experiment. This gap not only prevents the execution of real-time closed-loop experiments, but also hampers fast experiment-analysis-theory turnover for high-throughput imaging modalities. Reliable extraction of neural activity from fluorescence imaging frames at speeds compatible with indicator dynamics and imaging modalities poses a challenge. We therefore developed FIOLA, a framework for fluorescence imaging online analysis that extracts neuronal activity from calcium and voltage imaging movies at speeds one order of magnitude faster than state-of-the-art methods. FIOLA exploits algorithms optimized for parallel processing on GPUs and CPUs. We demonstrate reliable and scalable performance of FIOLA on both simulated and real calcium and voltage imaging datasets. Finally, we present an online experimental scenario to provide guidance in setting FIOLA parameters and to highlight the trade-offs of our approach.
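The core idea of per-frame online extraction can be sketched as follows. This is a minimal NumPy illustration under assumed names (spatial footprints `A`, synthetic traces), not FIOLA's actual GPU-optimized implementation:

```python
# Sketch of online trace extraction from streaming imaging frames, given
# precomputed spatial footprints. FIOLA solves a similar per-frame regression
# with GPU/CPU-parallel algorithms; here we use a plain least-squares solve
# on noiseless synthetic data. All names and sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_neurons, n_frames = 100, 3, 5
A = rng.random((n_pixels, n_neurons))       # spatial footprints (fixed upfront)
true_traces = rng.random((n_neurons, n_frames))

traces = []
for t in range(n_frames):                   # frames arrive one at a time
    frame = A @ true_traces[:, t]           # synthetic noiseless frame
    c, *_ = np.linalg.lstsq(A, frame, rcond=None)  # per-frame regression
    traces.append(c)
traces = np.array(traces).T                 # neurons x frames

print(np.allclose(traces, true_traces, atol=1e-8))
```

Because each frame is solved independently against fixed footprints, the work per frame is constant, which is what makes closed-loop, during-experiment analysis feasible.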


Subject(s)
Calcium, Optical Imaging, Algorithms, Microscopy
3.
Nat Neurosci ; 25(12): 1724-1734, 2022 12.
Article in English | MEDLINE | ID: mdl-36424431

ABSTRACT

In many areas of the brain, neural populations act as a coordinated network whose state is tied to behavior on a millisecond timescale. Two-photon (2p) calcium imaging is a powerful tool to probe such network-scale phenomena. However, estimating the network state and dynamics from 2p measurements has proven challenging because of noise, inherent nonlinearities and limitations on temporal resolution. Here we describe Recurrent Autoencoder for Discovering Imaged Calcium Latents (RADICaL), a deep learning method to overcome these limitations at the population level. RADICaL extends methods that exploit dynamics in spiking activity for application to deconvolved calcium signals, whose statistics and temporal dynamics are quite distinct from electrophysiologically recorded spikes. It incorporates a new network training strategy that capitalizes on the timing of 2p sampling to recover network dynamics with high temporal precision. In synthetic tests, RADICaL infers the network state more accurately than previous methods, particularly for high-frequency components. In 2p recordings from sensorimotor areas in mice performing a forelimb reach task, RADICaL infers network state with close correspondence to single-trial variations in behavior and maintains high-quality inference even when neuronal populations are substantially reduced.


Subject(s)
Calcium, Deep Learning, Animals, Mice, Brain, Diagnostic Imaging, Population Dynamics
4.
Cell ; 185(18): 3408-3425.e29, 2022 09 01.
Article in English | MEDLINE | ID: mdl-35985322

ABSTRACT

Genetically encoded voltage indicators are emerging tools for monitoring voltage dynamics with cell-type specificity. However, current indicators enable a narrow range of applications due to poor performance under two-photon microscopy, a method of choice for deep-tissue recording. To improve indicators, we developed a multiparameter high-throughput platform to optimize voltage indicators for two-photon microscopy. Using this system, we identified JEDI-2P, an indicator that is faster, brighter, and more sensitive and photostable than its predecessors. We demonstrate that JEDI-2P can report light-evoked responses in axonal termini of Drosophila interneurons and the dendrites and somata of amacrine cells of isolated mouse retina. JEDI-2P can also optically record the voltage dynamics of individual cortical neurons in awake behaving mice for more than 30 min using both resonant-scanning and ULoVE random-access microscopy. Finally, ULoVE recording of JEDI-2P can robustly detect spikes at depths exceeding 400 µm and report voltage correlations in pairs of neurons.


Subject(s)
Microscopy, Neurons, Animals, Interneurons, Mice, Microscopy/methods, Neurons/physiology, Photons, Wakefulness
5.
PLoS Comput Biol ; 17(4): e1008806, 2021 04.
Article in English | MEDLINE | ID: mdl-33852574

ABSTRACT

Voltage imaging enables monitoring neural activity at sub-millisecond and sub-cellular scale, unlocking the study of subthreshold activity, synchrony, and network dynamics with unprecedented spatio-temporal resolution. However, high data rates (>800 MB/s) and low signal-to-noise ratios create bottlenecks for analyzing such datasets. Here we present VolPy, an automated and scalable pipeline to pre-process voltage imaging datasets. VolPy features motion correction, memory mapping, automated segmentation, denoising and spike extraction, all built on a highly parallelizable, modular, and extensible framework optimized for memory and speed. To aid automated segmentation, we introduce a corpus of 24 manually annotated datasets from different preparations, brain areas and voltage indicators. We benchmark VolPy against ground truth segmentation, simulations and electrophysiology recordings, and we compare its performance with existing algorithms in detecting spikes. Our results indicate that VolPy's performance in spike extraction and scalability are state-of-the-art.
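The modular pipeline structure the abstract describes (motion correction, segmentation, trace extraction, spike detection) can be sketched as a chain of stage functions. The stage bodies below are trivial placeholders on synthetic data, not VolPy's actual algorithms:

```python
# Illustrative modular pre-processing pipeline in the style described in the
# abstract. Each stage is a swappable function; the implementations are
# deliberately simplistic placeholders.
import numpy as np

def motion_correct(movie):
    return movie - movie.mean(axis=0)            # placeholder: per-pixel detrend

def segment(movie):
    return movie.std(axis=0) > 0.5               # placeholder mask of active pixels

def extract_trace(movie, mask):
    return movie[:, mask].mean(axis=1)           # mean fluorescence in the mask

def detect_spikes(trace, k=3.0):
    return np.where(trace > k * trace.std())[0]  # simple threshold crossings

rng = np.random.default_rng(1)
movie = rng.normal(0, 0.1, size=(200, 64))       # frames x pixels (synthetic)
movie[50, :32] += 10.0                           # inject one "spike" frame

mc = motion_correct(movie)
mask = segment(mc)
trace = extract_trace(mc, mask)
spikes = detect_spikes(trace)
print(spikes)
```

Keeping each stage a pure function over arrays is one way to get the modularity and parallelizability the abstract emphasizes: stages can be replaced, benchmarked, or distributed independently.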


Subject(s)
Brain, Image Processing, Computer-Assisted/methods, Neuroimaging/methods, Neurons/physiology, Software, Algorithms, Automation, Datasets as Topic, Electrophysiological Phenomena, Humans