1.
BMC Bioinformatics ; 22(1): 433, 2021 Sep 10.
Article in English | MEDLINE | ID: mdl-34507520

ABSTRACT

BACKGROUND: Imaging data contains a substantial amount of information that can be difficult to evaluate by eye. With the expansion of high-throughput microscopy methodologies producing increasingly large datasets, automated and objective analysis of the resulting images is essential to effectively extract biological information from this data. CellProfiler is a free, open-source image analysis program that enables researchers to generate modular pipelines with which to process microscopy images into interpretable measurements. RESULTS: Herein we describe CellProfiler 4, a new version of this software with expanded functionality. Based on user feedback, we have made several user interface refinements to improve the usability of the software. We introduced new modules to expand the capabilities of the software. We also evaluated performance and made targeted optimizations to reduce the time and cost associated with running common large-scale analysis pipelines. CONCLUSIONS: CellProfiler 4 provides significantly improved performance in complex workflows compared with previous versions. This release will ensure that researchers have continued access to CellProfiler's powerful computational tools in the coming years.
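As a toy illustration of the modular-pipeline idea (this is not CellProfiler's actual API; the module names and the miniature image are invented), a pipeline can be expressed as a list of processing stages applied in order, e.g. threshold the image, label connected objects, then measure them:

```python
from collections import deque

# A toy 5x6 "image": two bright blobs on a dark background.
IMAGE = [
    [0, 9, 9, 0, 0, 0],
    [0, 9, 9, 0, 0, 0],
    [0, 0, 0, 0, 7, 7],
    [0, 0, 0, 0, 7, 7],
    [0, 0, 0, 0, 0, 7],
]

def threshold(img, t=5):
    """Binarize: pixels above t become foreground (1)."""
    return [[1 if v > t else 0 for v in row] for row in img]

def label(mask):
    """Assign a distinct integer label to each 4-connected blob (BFS flood fill)."""
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    current = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not labels[y][x]:
                current += 1
                labels[y][x] = current
                q = deque([(y, x)])
                while q:
                    cy, cx = q.popleft()
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = current
                            q.append((ny, nx))
    return labels

def measure(labels):
    """Report area (pixel count) per labeled object."""
    areas = {}
    for row in labels:
        for v in row:
            if v:
                areas[v] = areas.get(v, 0) + 1
    return areas

def run_pipeline(img, modules):
    """Feed the image through each module in order, pipeline-style."""
    out = img
    for m in modules:
        out = m(out)
    return out

result = run_pipeline(IMAGE, [threshold, label, measure])  # areas keyed by object label
```

The point of the pattern is that each stage is swappable and the pipeline is just data, which is what makes such tools configurable without programming.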


Subject(s)
Image Processing, Computer-Assisted , Software , Microscopy , Workflow
3.
bioRxiv ; 2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38895349

ABSTRACT

Deep learning has greatly accelerated research in biological image analysis, yet it often requires programming skills and specialized tool installation. Here we present Piximi, a modern, no-programming image analysis tool leveraging deep learning. Implemented as a web application at Piximi.app, Piximi requires no installation and can be accessed from any modern web browser. Its client-only architecture preserves the security of researcher data by running all computation locally. Piximi offers four core modules: a deep learning classifier, an image annotator, measurement modules, and pre-trained deep learning segmentation modules. Piximi is interoperable with existing tools and workflows by supporting import and export of common data and model formats. The intuitive researcher interface and easy access to Piximi allow biological researchers to obtain insights into images within just a few minutes. Piximi aims to bring deep learning-powered image analysis to a broader community by eliminating barriers to entry.

4.
Mol Biol Cell ; 32(9): 823-829, 2021 04 19.
Article in English | MEDLINE | ID: mdl-33872058

ABSTRACT

Microscopy images are rich in information about the dynamic relationships among biological structures. However, extracting this complex information can be challenging, especially when biological structures are closely packed, distinguished by texture rather than intensity, and/or low in intensity relative to the background. By learning from large amounts of annotated data, deep learning can accomplish several previously intractable bioimage analysis tasks. Until the past few years, however, most deep-learning workflows required significant computational expertise to apply. Here, we survey several new open-source software tools that aim to make deep-learning-based image segmentation accessible to biologists with limited computational experience. These tools take many different forms, such as web apps, plug-ins for existing image analysis software, and preconfigured interactive notebooks and pipelines. In addition to surveying these tools, we outline several challenges that remain in the field. We hope to expand awareness of the powerful deep-learning tools available to biologists for image analysis.


Subject(s)
Image Processing, Computer-Assisted/methods , Microscopy/methods , Computational Biology/methods , Deep Learning , Humans , Software
5.
IEEE Trans Image Process ; 28(7): 3312-3327, 2019 Jul.
Article in English | MEDLINE | ID: mdl-30714918

ABSTRACT

Video super-resolution (VSR) has become one of the most important problems in video processing. In the deep learning literature, recent works have shown the benefits of using adversarial and perceptual losses to improve performance on various image restoration tasks; however, these have yet to be applied to video super-resolution. In this paper, we propose a generative adversarial network (GAN)-based formulation for VSR. We introduce a new generator network optimized for the VSR problem, named VSRResNet, along with a new discriminator architecture to properly guide VSRResNet during GAN training. We further enhance our VSR GAN formulation with two regularizers, a distance loss in feature space and in pixel space, to obtain our final VSRResFeatGAN model. We show that pre-training our generator with the mean-squared-error loss alone already surpasses the current state-of-the-art VSR models quantitatively. We then employ the PercepDist metric to compare state-of-the-art VSR models, and show that this metric evaluates the perceptual quality of SR solutions obtained from neural networks more accurately than the commonly used PSNR/SSIM metrics. Finally, we show that our proposed VSRResFeatGAN model outperforms the current state-of-the-art SR models, both quantitatively and qualitatively.
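The loss combination described in the abstract (a pixel-space distance, a feature-space distance, and an adversarial term) can be sketched on toy data. The feature extractor, the loss weights, and the discriminator score below are illustrative stand-ins, not the paper's actual components:

```python
import math

def mse(a, b):
    """Mean squared error between two flat lists of pixel values."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def toy_features(frame):
    """Stand-in for a pretrained feature extractor (hypothetical):
    local differences, which crudely emphasize edges."""
    return [frame[i + 1] - frame[i] for i in range(len(frame) - 1)]

def generator_loss(sr, hr, d_sr, w_pix=1.0, w_feat=1.0, w_adv=0.01):
    """Combined generator objective: pixel-space distance + feature-space
    distance + non-saturating adversarial term -log D(G(x)).
    The weights are illustrative, not the paper's values."""
    pixel = mse(sr, hr)
    feat = mse(toy_features(sr), toy_features(hr))
    adv = -math.log(d_sr)  # d_sr: discriminator's score for the SR frame
    return w_pix * pixel + w_feat * feat + w_adv * adv

sr = [0.1, 0.5, 0.9, 0.4]   # toy super-resolved frame (flattened)
hr = [0.0, 0.6, 1.0, 0.4]   # toy ground-truth frame
loss = generator_loss(sr, hr, d_sr=0.7)
```

The design point is that the pixel term anchors fidelity, the feature term encourages perceptual similarity, and the adversarial term (kept small) pushes outputs toward the natural-image manifold.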

6.
Front Syst Neurosci ; 13: 13, 2019.
Article in English | MEDLINE | ID: mdl-30983978

ABSTRACT

Somatosensation is composed of two distinct modalities: touch, arising from sensors in the skin, and proprioception, resulting primarily from sensors in the muscles, combined with these same cutaneous sensors. In contrast to the wealth of information about touch, we know far less about the nature of the signals giving rise to proprioception at the cortical level. Likewise, while there is considerable interest in developing encoding models of touch-related neurons for application to brain machine interfaces, much less emphasis has been placed on an analogous proprioceptive interface. Here we investigate the use of Artificial Neural Networks (ANNs) to model the relationship between the firing rates of single neurons in area 2, a largely proprioceptive region of somatosensory cortex (S1), and several types of kinematic variables related to arm movement. To gain a better understanding of how these kinematic variables interact to create the proprioceptive responses recorded in our datasets, we train ANNs under different conditions, each involving a different set of input and output variables. We explore the kinematic variables that provide the best network performance, and find that adding information about joint angles and/or muscle lengths significantly improves the prediction of neural firing rates. Our results thus provide new insight into the complex representation of limb motion in S1: the firing rates of neurons in area 2 may be more closely related to the activity of peripheral sensors than to extrinsic hand position. In addition, we conduct numerical experiments to determine the sensitivity of ANN models to various choices of training design and hyperparameters. Our results provide a baseline and new tools for future research that utilizes machine learning to better describe and understand the activity of neurons in S1.
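The core modeling exercise (fitting a model to predict firing rates from kinematic inputs, then comparing different input sets) can be sketched with a minimal linear unit on synthetic data. The kinematic variables and their relation to the "rate" below are invented for illustration; the study's actual networks and recordings are not reproduced here:

```python
import random

random.seed(0)

# Synthetic trials: a hand-position coordinate x and a joint angle; the
# "firing rate" depends on both (an invented relationship, for illustration).
trials = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]
rates = [1.5 * x + 2.0 * ang for x, ang in trials]

def fit_linear(inputs, targets, lr=0.1, epochs=500):
    """One linear unit trained by batch gradient descent on MSE."""
    n_feat = len(inputs[0])
    w, b, n = [0.0] * n_feat, 0.0, len(inputs)
    for _ in range(epochs):
        grad_w, grad_b = [0.0] * n_feat, 0.0
        for x, t in zip(inputs, targets):
            err = sum(wi * xi for wi, xi in zip(w, x)) + b - t
            for i in range(n_feat):
                grad_w[i] += err * x[i]
            grad_b += err
        w = [wi - lr * g / n for wi, g in zip(w, grad_w)]
        b -= lr * grad_b / n
    return w, b

def mse_of(w, b, inputs, targets):
    """Mean squared prediction error of the fitted unit."""
    return sum((sum(wi * xi for wi, xi in zip(w, x)) + b - t) ** 2
               for x, t in zip(inputs, targets)) / len(inputs)

# Model 1: hand position only.  Model 2: hand position + joint angle.
pos_only = [(x,) for x, _ in trials]
pos_ang = [(x, a) for x, a in trials]
w1, b1 = fit_linear(pos_only, rates)
w2, b2 = fit_linear(pos_ang, rates)
mse_pos = mse_of(w1, b1, pos_only, rates)
mse_both = mse_of(w2, b2, pos_ang, rates)
```

With this toy generative rule, the model given the joint-angle input fits almost exactly, while the position-only model is left with the unexplained angle-driven variance, mirroring the comparison-of-input-sets logic in the study.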

7.
Article in English | MEDLINE | ID: mdl-30292729

ABSTRACT

BACKGROUND: Cannabis consumption is widespread across the world, and the co-occurrence of cannabis use and alcohol consumption is common. The study of background noise, that is, resting-state neural activity in the absence of stimulation, is an approach that could enable the neurotoxicity of these substances to be explored. Preliminary results have shown that delta-9-tetrahydrocannabinol (Δ9-THC) causes an increase in neural noise in the brain. Neurons in the brain and the retina share a neurotransmission system and have similar anatomical and functional properties. Retinal function, evaluated using an electroretinogram (ERG), may therefore reflect central neurochemistry. This study analyses retinal background noise in a population of regular co-occurrent cannabis and alcohol consumers. METHODS: We recorded the flash ERGs of 26 healthy controls and 45 regular cannabis consumers, separated into two groups based on their alcohol consumption: at most 4 glasses per week (CU ≤ 4) or strictly more than 4 glasses per week (CU > 4). To extract the background noise, we calculated the Fourier transform of the pseudo-periodic, sinusoidal signal of the 3.0 flicker-response sequence. This sequence represents the vertical transmission of the signal from cones to bipolar cells. The magnitude of the background noise is defined as the average of the magnitudes of the two neighbouring harmonics: harmonic -1 (low-frequency noise) and harmonic +1 (high-frequency noise). RESULTS: Among regular users of cannabis and alcohol, the magnitude of harmonic -1 was significantly higher in the CU > 4 group (6.78 (±1.24)) than in the CU ≤ 4 group (5.69 (±1.80)). The average magnitude of the two harmonics was also significantly higher in the CU > 4 group (5.12 (±0.92)) than in the CU ≤ 4 group (4.36 (±1.14)). No significant difference was observed in the magnitude of harmonic +1.
CONCLUSIONS: The increase in background noise may reflect the neurotoxicity of cannabis, potentiated by alcohol consumption, on retinal neuron dynamics. This neural disruption of the response generated by retinal stimulation may be attributable to altered neurotransmitter release.
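The noise-extraction step in METHODS (a Fourier transform of the flicker response, then averaging the magnitudes of the harmonics on either side of the stimulus harmonic) can be sketched on a synthetic signal. All frequencies and amplitudes below are illustrative, not recorded ERG data:

```python
import cmath
import math

def dft_magnitudes(signal):
    """Magnitude of each discrete Fourier harmonic, normalized by length."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) / n
            for k in range(n)]

# Synthetic "flicker ERG": a sinusoid at the stimulus frequency plus weak
# off-frequency components standing in for neural noise.
N, STIM_K = 64, 8  # 64 samples; the response sits at harmonic 8
signal = [math.sin(2 * math.pi * STIM_K * t / N)
          + 0.10 * math.sin(2 * math.pi * (STIM_K - 1) * t / N)
          + 0.05 * math.sin(2 * math.pi * (STIM_K + 1) * t / N)
          for t in range(N)]

mags = dft_magnitudes(signal)
noise_low = mags[STIM_K - 1]    # harmonic -1 (low-frequency noise)
noise_high = mags[STIM_K + 1]   # harmonic +1 (high-frequency noise)
background_noise = (noise_low + noise_high) / 2
```

The stimulus harmonic itself carries the flicker response; the study's noise measure deliberately reads the two adjacent harmonics, where a clean response would leave nothing.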


Subject(s)
Alcohol Drinking/physiopathology , Marijuana Use , Retina/physiopathology , Adult , Cannabis , Electroretinography , Female , Humans , Male , Photic Stimulation , Retina/drug effects , Vision, Ocular/drug effects , Vision, Ocular/physiology