Results 1 - 20 of 26
1.
Nucleic Acids Res ; 51(9): 4674-4690, 2023 05 22.
Article in English | MEDLINE | ID: mdl-37070176

ABSTRACT

In response to different stimuli, many transcription factors (TFs) display different activation dynamics that trigger the expression of specific sets of target genes, suggesting that promoters have a way to decode dynamics. Here, we use optogenetics to directly manipulate the nuclear localization of a synthetic TF in mammalian cells without affecting other processes. We generate pulsatile or sustained TF dynamics and employ live-cell microscopy and mathematical modelling to analyse the behaviour of a library of reporter constructs. We find that decoding of TF dynamics occurs only when the coupling between TF binding and transcription pre-initiation complex formation is inefficient, and that the ability of a promoter to decode TF dynamics is amplified by inefficient translation initiation. Using the knowledge acquired, we build a synthetic circuit that yields two distinct gene expression programs depending solely on TF dynamics. Finally, we show that some of the promoter features identified in our study can be used to distinguish natural promoters that have previously been experimentally characterized as responsive to either sustained or pulsatile p53 and NF-κB signals. These results help elucidate how gene expression is regulated in mammalian cells and open up the possibility of building complex synthetic circuits steered by TF dynamics.
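The promoter-decoding idea can be illustrated with a toy kinetic model. This is a hypothetical sketch with invented rate constants, not the model or parameters from the paper: a TF input drives a promoter activity variable, which in turn drives mRNA production; when TF binding is slow ("inefficient coupling"), short pulses fail to build up promoter activity, so pulsatile and sustained inputs yield different expression levels.

```python
# Toy two-state promoter model: a TF input drives promoter activity a,
# which drives mRNA production m. All rate constants are invented for
# illustration; this is not the model or parameters from the paper.

def simulate(tf_input, k_on=0.2, k_off=0.5, k_tx=1.0, d=0.1,
             dt=0.01, t_end=20.0):
    """Euler-integrate promoter activity a and mRNA level m under TF(t)."""
    a = m = t = 0.0
    while t < t_end:
        tf = tf_input(t)                                 # nuclear TF level
        a += dt * (k_on * tf * (1.0 - a) - k_off * a)    # slow binding step
        m += dt * (k_tx * a - d * m)                     # transcription, decay
        t += dt
    return m

sustained = simulate(lambda t: 1.0)                            # constant input
pulsatile = simulate(lambda t: 1.0 if t % 4.0 < 2.0 else 0.0)  # 2-on/2-off pulses
print(sustained > pulsatile)  # → True: slow coupling filters out pulses
```

Varying the binding rate against the pulse length in such a sketch reproduces the qualitative point that only promoters with inefficient TF-to-transcription coupling respond differently to the two input regimes.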


Subject(s)
Gene Expression Regulation, Promoter Regions, Genetic, Transcription Factors, Animals, Mammals, NF-kappa B/genetics, NF-kappa B/metabolism, Protein Binding, Transcription Factors/genetics, Transcription Factors/metabolism
2.
Nat Commun ; 13(1): 7420, 2022 12 02.
Article in English | MEDLINE | ID: mdl-36456557

ABSTRACT

Our knowledge about neuronal activity in the sensorimotor cortex relies primarily on stereotyped movements that are strictly controlled in experimental settings. It remains unclear how results can be carried over to less constrained behavior like that of freely moving subjects. Toward this goal, we developed a self-paced behavioral paradigm that encouraged rats to engage in different movement types. We employed bilateral electrophysiological recordings across the entire sensorimotor cortex and simultaneous paw tracking. These techniques revealed behavioral coupling of neurons with lateralization and an anterior-posterior gradient from the premotor to the primary sensory cortex. The structure of population activity patterns was conserved across animals despite the severe under-sampling of the total number of neurons and variations in electrode positions across individuals. We demonstrated cross-subject and cross-session generalization in a decoding task through alignments of low-dimensional neural manifolds, providing evidence of a conserved neuronal code.


Subject(s)
Sensorimotor Cortex, Rats, Animals, Neurons, Cardiac Electrophysiology, Electrodes, Generalization, Psychological
3.
Neuron ; 110(13): 2080-2093.e10, 2022 07 06.
Article in English | MEDLINE | ID: mdl-35609615

ABSTRACT

The impact of spontaneous movements on neuronal activity has created the need to quantify behavior. We present a versatile framework to directly capture the 3D motion of freely definable body points in a marker-free manner with high precision and reliability. Combining the tracking with neural recordings revealed multiplexing of information in the motor cortex neurons of freely moving rats. By integrating multiple behavioral variables into a model of the neural response, we derived a virtual head fixation for which the influence of specific body movements was removed. This strategy enabled us to analyze the behavior of interest (e.g., front paw movements). Thus, we unveiled an unexpectedly large fraction of neurons in the motor cortex with tuning to the paw movements, which was previously masked by body posture tuning. Once established, our framework can be efficiently applied to large datasets while minimizing the experimental workload caused by animal training and manual labeling.


Subject(s)
Motor Cortex, Movement, Animals, Motor Cortex/physiology, Motor Neurons/physiology, Movement/physiology, Posture/physiology, Rats, Reproducibility of Results
4.
Radiat Oncol ; 17(1): 65, 2022 Apr 02.
Article in English | MEDLINE | ID: mdl-35366918

ABSTRACT

Automatic prostate tumor segmentation often fails to identify the lesion even when multi-parametric MRI data are used as input, and the segmentation output is difficult to verify due to the lack of clinically established ground-truth images. In this work we use an explainable deep learning approach to interpret the predictions of a convolutional neural network (CNN) for prostate tumor segmentation. The CNN uses a U-Net architecture trained on multi-parametric MRI data from 122 patients to automatically segment the prostate gland and prostate tumor lesions. In addition, co-registered ground-truth data from whole-mount histopathology images were available for 15 patients, which were used as a test set during CNN testing. To interpret the segmentation results of the CNN, heat maps were generated using the Gradient-weighted Class Activation Mapping (Grad-CAM) method. The CNN achieved a mean Dice-Sørensen coefficient of 0.62 for the prostate gland and 0.31 for the tumor lesions against the radiologist-drawn ground truth, and 0.32 against the whole-mount histology ground truth for the tumor lesions. Dice-Sørensen coefficients between CNN predictions and manual segmentations from MRI and histology data were not significantly different. Within the prostate, the Grad-CAM heat maps could differentiate between tumor and healthy prostate tissue, indicating that the image information in the tumor region was essential for the CNN segmentation.
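The Dice-Sørensen coefficient used for this evaluation measures the overlap between two binary masks. A minimal sketch (the masks here are invented for illustration; real evaluations operate on 2D/3D voxel arrays rather than flat lists):

```python
# Dice-Sørensen coefficient between two binary segmentation masks:
# 2 * |A ∩ B| / (|A| + |B|).

def dice_coefficient(pred, truth):
    """pred, truth: equal-length sequences of 0/1 voxel labels."""
    assert len(pred) == len(truth)
    intersection = sum(1 for p, t in zip(pred, truth) if p and t)
    total = sum(pred) + sum(truth)
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

pred  = [0, 1, 1, 1, 0, 0]  # predicted tumor voxels (flattened)
truth = [0, 0, 1, 1, 1, 0]  # ground-truth tumor voxels
print(round(dice_coefficient(pred, truth), 3))  # → 0.667
```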


Subject(s)
Multiparametric Magnetic Resonance Imaging, Prostatic Neoplasms, Humans, Magnetic Resonance Imaging/methods, Male, Neural Networks, Computer, Prostatic Neoplasms/diagnostic imaging
5.
Nat Commun ; 13(1): 2056, 2022 04 19.
Article in English | MEDLINE | ID: mdl-35440631

ABSTRACT

Several tissues contain cells with multiple motile cilia that generate a fluid or particle flow to support development and organ functions; defective motility causes human disease. Developmental cues orient motile cilia, but how cilia are locked into their final position to maintain a directional flow is not understood. Here we find that the actin cytoskeleton is highly dynamic during early development of multiciliated cells (MCCs). While apical actin bundles become increasingly more static, subapical actin filaments are nucleated from the distal tip of ciliary rootlets. Anchorage of these subapical actin filaments requires the presence of microridge-like structures formed during MCC development, and the activity of Nonmuscle Myosin II. Optogenetic manipulation of Ezrin, a core component of the microridge actin-anchoring complex, or inhibition of Myosin Light Chain Kinase interfere with rootlet anchorage and orientation. These observations identify microridge-like structures as an essential component of basal body rootlet anchoring in MCCs.


Subject(s)
Actins, Cilia, Actin Cytoskeleton, Basal Bodies, Cilia/physiology, Cytoskeleton, Humans
6.
Development ; 148(21)2021 11 01.
Article in English | MEDLINE | ID: mdl-34739029

ABSTRACT

Genome editing simplifies the generation of new animal models for congenital disorders. However, the detailed and unbiased phenotypic assessment of altered embryonic development remains a challenge. Here, we explore how deep learning (U-Net) can automate segmentation tasks in various imaging modalities, and we quantify phenotypes of altered renal, neural and craniofacial development in Xenopus embryos in comparison with normal variability. We demonstrate the utility of this approach in embryos with polycystic kidneys (pkd1 and pkd2) and craniofacial dysmorphia (six1). We highlight how in toto light-sheet microscopy facilitates accurate reconstruction of brain and craniofacial structures within X. tropicalis embryos upon dyrk1a and six1 loss of function or treatment with retinoic acid inhibitors. These tools increase the sensitivity and throughput of evaluating developmental malformations caused by chemical or genetic disruption. Furthermore, we provide a library of pre-trained networks and detailed instructions for applying deep learning to the reader's own datasets. We demonstrate the versatility, precision and scalability of deep neural network phenotyping on embryonic disease models. By combining light-sheet microscopy and deep learning, we provide a framework for higher-throughput characterization of embryonic model organisms. This article has an associated 'The people behind the papers' interview.


Subject(s)
Deep Learning, Embryonic Development/genetics, Phenotype, Animals, Craniofacial Abnormalities/embryology, Craniofacial Abnormalities/genetics, Craniofacial Abnormalities/pathology, Disease Models, Animal, Image Processing, Computer-Assisted, Mice, Microscopy, Mutation, Neural Networks, Computer, Neurodevelopmental Disorders/genetics, Neurodevelopmental Disorders/pathology, Polycystic Kidney Diseases/embryology, Polycystic Kidney Diseases/genetics, Polycystic Kidney Diseases/pathology, Xenopus Proteins/genetics, Xenopus laevis
7.
IEEE Trans Pattern Anal Mach Intell ; 43(4): 1369-1379, 2021 Apr.
Article in English | MEDLINE | ID: mdl-31869780

ABSTRACT

The ability to understand visual information from limited labeled data is an important aspect of machine learning. While image-level classification has been extensively studied in a semi-supervised setting, dense pixel-level classification with limited data has only recently drawn attention. In this work, we propose an approach for semi-supervised semantic segmentation that learns from limited pixel-wise annotated samples while exploiting additional annotation-free images. The proposed approach relies on adversarial training with a feature matching loss to learn from unlabeled images. It uses two network branches that link semi-supervised classification with semi-supervised segmentation, including self-training. The dual-branch approach reduces both the low-level and the high-level artifacts typical of training with few labels. The approach attains significant improvement over existing methods, especially when trained with very few labeled samples. On several standard benchmarks (PASCAL VOC 2012, PASCAL-Context, and Cityscapes) the approach achieves a new state of the art in semi-supervised learning.

8.
Nat Methods ; 16(4): 351, 2019 Apr.
Article in English | MEDLINE | ID: mdl-30804552

ABSTRACT

In the version of this paper originally published, one of the affiliations for Dominic Mai was incorrect: "Center for Biological Systems Analysis (ZBSA), Albert-Ludwigs-University, Freiburg, Germany" should have been "Life Imaging Center, Center for Biological Systems Analysis, Albert-Ludwigs-University, Freiburg, Germany." This change required some renumbering of subsequent author affiliations. These corrections have been made in the PDF and HTML versions of the article, as well as in any cover sheets for associated Supplementary Information.

9.
Nat Methods ; 16(1): 67-70, 2019 01.
Article in English | MEDLINE | ID: mdl-30559429

ABSTRACT

U-Net is a generic deep-learning solution for frequently occurring quantification tasks such as cell detection and shape measurements in biomedical image data. We present an ImageJ plugin that enables non-machine-learning experts to analyze their data with U-Net on either a local computer or a remote server/cloud service. The plugin comes with pretrained models for single-cell segmentation and allows for U-Net to be adapted to new tasks on the basis of a few annotated samples.
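Downstream of a segmentation tool such as this plugin, a task like cell counting reduces to labeling connected components in the binary output mask. A minimal pure-Python sketch using 4-connectivity flood fill (the mask is invented for illustration; a real pipeline would typically use `scipy.ndimage.label` on the segmentation output):

```python
# Count cells in a binary segmentation mask (e.g. a thresholded U-Net
# output) via 4-connected component labeling with breadth-first flood fill.
from collections import deque

def count_components(mask):
    """mask: 2D list of 0/1; returns the number of 4-connected blobs."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                count += 1                      # new component found
                queue = deque([(r, c)])
                seen[r][c] = True
                while queue:                    # flood-fill the component
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
    return count

mask = [[1, 1, 0, 0],
        [0, 1, 0, 1],
        [0, 0, 0, 1],
        [1, 0, 0, 0]]
print(count_components(mask))  # → 3
```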


Subject(s)
Cell Count, Deep Learning, Cloud Computing, Neural Networks, Computer, Software Design
10.
Article in English | MEDLINE | ID: mdl-30334779

ABSTRACT

Models for computer vision are commonly defined either w.r.t. low-level concepts such as pixels that are to be grouped, or w.r.t. high-level concepts such as semantic objects that are to be detected and tracked. Combining bottom-up grouping with top-down detection and tracking, although highly desirable, is a challenging problem. We state this joint problem as a co-clustering problem that is principled and tractable by existing algorithms. We demonstrate the effectiveness of this approach by combining bottom-up motion segmentation by grouping of point trajectories with high-level multiple object tracking by clustering of bounding boxes. We show that solving the joint problem is beneficial at the low-level, in terms of the FBMS59 motion segmentation benchmark, and at the high-level, in terms of the Multiple Object Tracking benchmarks MOT15, MOT16 and the MOT17 challenge, and is state-of-the-art in some metrics.

11.
Nat Methods ; 14(12): 1141-1152, 2017 Dec.
Article in English | MEDLINE | ID: mdl-29083403

ABSTRACT

We present a combined report on the results of three editions of the Cell Tracking Challenge, an ongoing initiative aimed at promoting the development and objective evaluation of cell segmentation and tracking algorithms. With 21 participating algorithms and a data repository consisting of 13 data sets from various microscopy modalities, the challenge displays today's state-of-the-art methodology in the field. We analyzed the challenge results using performance measures for segmentation and tracking that rank all participating methods. We also analyzed the performance of all of the algorithms in terms of biological measures and practical usability. Although some methods scored high in all technical aspects, none obtained fully correct solutions. We found that methods that either take prior information into account using learning strategies or analyze cells in a global spatiotemporal video context performed better than other methods under the segmentation and tracking scenarios included in the challenge.


Subject(s)
Algorithms, Cell Tracking/methods, Image Interpretation, Computer-Assisted, Benchmarking, Cell Line, Humans
12.
IEEE Trans Pattern Anal Mach Intell ; 39(4): 692-705, 2017 04.
Article in English | MEDLINE | ID: mdl-27187944

ABSTRACT

We train generative 'up-convolutional' neural networks which are able to generate images of objects given object style, viewpoint, and color. We train the networks on rendered 3D models of chairs, tables, and cars. Our experiments show that the networks do not merely learn all images by heart, but rather find a meaningful representation of 3D models allowing them to assess the similarity of different models, interpolate between given views to generate the missing ones, extrapolate views, and invent new objects not present in the training set by recombining training instances, or even two different object classes. Moreover, we show that such generative networks can be used to find correspondences between different objects from the dataset, outperforming existing approaches on this task.

13.
IEEE Trans Med Imaging ; 35(5): 1344-1351, 2016 05.
Article in English | MEDLINE | ID: mdl-27071165

ABSTRACT

Numerous scientific fields rely on elaborate but partly suboptimal data processing pipelines. An example is diffusion magnetic resonance imaging (diffusion MRI), a non-invasive microstructure assessment method with a prominent application in neuroimaging. Advanced diffusion models providing accurate microstructural characterization have so far required long acquisition times and thus have been inapplicable for children and for adults who are uncooperative, uncomfortable, or unwell. We show that the long scan time requirements are mainly due to disadvantages of classical data processing. We demonstrate how deep learning, a group of algorithms based on recent advances in the field of artificial neural networks, can be applied to reduce diffusion MRI data processing to a single optimized step. This modification allows scalar measures from advanced models to be obtained at a twelve-fold reduced scan time, and abnormalities to be detected without using diffusion models. We set a new state of the art by estimating diffusion kurtosis measures from only 12 data points and neurite orientation dispersion and density measures from only 8 data points. This enables unprecedentedly fast and robust protocols that facilitate clinical routine, and demonstrates how classical data processing can be streamlined by means of deep learning.


Subject(s)
Image Interpretation, Computer-Assisted/methods, Machine Learning, Magnetic Resonance Imaging/methods, Neural Networks, Computer, Brain/diagnostic imaging, Humans, Time Factors
14.
IEEE Trans Med Imaging ; 35(7): 1636-46, 2016 07.
Article in English | MEDLINE | ID: mdl-26829786

ABSTRACT

Brain magnetic resonance imaging (MRI) of patients with multiple sclerosis (MS) shows regions of signal abnormalities, known as plaques or lesions. The spatial lesion distribution plays a major role in MS diagnosis. In this paper we present a 3D MS-lesion segmentation method based on an adaptive geometric brain model. We model the topological properties of the lesions and brain tissues in order to constrain the lesion segmentation to the white matter. As a result, the method is independent of an MRI atlas. We tested our method on the MICCAI MS grand challenge proposed in 2008 and achieved competitive results. In addition, we used an in-house dataset of 15 MS patients, for which we achieved the best results on most distance measures in comparison to atlas-based methods. Besides classical segmentation distances, we motivate and formulate a new distance measure to evaluate the quality of the lesion segmentation while remaining robust with respect to minor inconsistencies at the boundary of the ground truth annotation.


Subject(s)
White Matter, Brain, Humans, Magnetic Resonance Imaging, Multiple Sclerosis
15.
IEEE Trans Pattern Anal Mach Intell ; 38(9): 1734-47, 2016 09.
Article in English | MEDLINE | ID: mdl-26540673

ABSTRACT

Deep convolutional networks have proven to be very successful in learning task specific features that allow for unprecedented performance on various computer vision tasks. Training of such networks follows mostly the supervised learning paradigm, where sufficiently many input-output pairs are required for training. Acquisition of large training sets is one of the key challenges, when approaching a new task. In this paper, we aim for generic feature learning and present an approach for training a convolutional network using only unlabeled data. To this end, we train the network to discriminate between a set of surrogate classes. Each surrogate class is formed by applying a variety of transformations to a randomly sampled 'seed' image patch. In contrast to supervised network training, the resulting feature representation is not class specific. It rather provides robustness to the transformations that have been applied during training. This generic feature representation allows for classification results that outperform the state of the art for unsupervised learning on several popular datasets (STL-10, CIFAR-10, Caltech-101, Caltech-256). While features learned with our approach cannot compete with class specific features from supervised training on a classification task, we show that they are advantageous on geometric matching problems, where they also outperform the SIFT descriptor.

16.
IEEE Trans Pattern Anal Mach Intell ; 36(6): 1187-200, 2014 Jun.
Article in English | MEDLINE | ID: mdl-26353280

ABSTRACT

Motion is a strong cue for unsupervised object-level grouping. In this paper, we demonstrate that motion is exploited most effectively if it is considered over larger time windows. In contrast to classical two-frame optical flow, point trajectories that span hundreds of frames are less susceptible to the short-term variations that hinder separating different objects. As a positive side effect, the resulting groupings are temporally consistent over a whole video shot, a property that requires tedious post-processing in the vast majority of existing approaches. We suggest a paradigm that starts with semi-dense motion cues and then fills in textureless areas based on color. This paper also contributes the Freiburg-Berkeley motion segmentation (FBMS) dataset, a large, heterogeneous benchmark with 59 sequences and pixel-accurate ground truth annotation of moving objects.

17.
Nat Methods ; 9(7): 735-42, 2012 Jun 17.
Article in English | MEDLINE | ID: mdl-22706672

ABSTRACT

Precise three-dimensional (3D) mapping of a large number of gene expression patterns, neuronal types and connections to an anatomical reference helps us to understand the vertebrate brain and its development. We developed the Virtual Brain Explorer (ViBE-Z), a software that automatically maps gene expression data with cellular resolution to a 3D standard larval zebrafish (Danio rerio) brain. ViBE-Z enhances the data quality through fusion and attenuation correction of multiple confocal microscope stacks per specimen and uses a fluorescent stain of cell nuclei for image registration. It automatically detects 14 predefined anatomical landmarks for aligning new data with the reference brain. ViBE-Z performs colocalization analysis in expression databases for anatomical domains or subdomains defined by any specific pattern; here we demonstrate its utility for mapping neurons of the dopaminergic system. The ViBE-Z database, atlas and software are provided via a web interface.


Subject(s)
Brain, Databases, Genetic, Gene Expression, Imaging, Three-Dimensional/methods, Zebrafish, Animals, Brain/embryology, Brain/metabolism, Brain/ultrastructure, Embryonic Development/genetics, Larva, Neurons/metabolism, Neurons/ultrastructure, Software, Zebrafish/embryology, Zebrafish/genetics
18.
IEEE Trans Image Process ; 21(4): 1863-73, 2012 Apr.
Article in English | MEDLINE | ID: mdl-22203719

ABSTRACT

We propose an algorithm for 3-D multiview deblurring using spatially variant point spread functions (PSFs). The algorithm is applied to multiview reconstruction of volumetric microscopy images. It includes registration and estimation of the PSFs using irregularly placed point markers (beads). We formulate multiview deblurring as an energy minimization problem subject to L1-regularization. Optimization is based on the regularized Lucy-Richardson algorithm, which we extend to deal with our more general model. The model parameters are chosen in a profound way by optimizing them on a realistic training set. We quantitatively and qualitatively compare with existing methods and show that our method provides better signal-to-noise ratio and increases the resolution of the reconstructed images.
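The regularized Lucy-Richardson scheme extended here builds on the classical Richardson-Lucy iteration. A minimal 1D sketch of that baseline iteration, without the paper's L1 regularization or spatially variant PSFs (the signal and PSF are invented for illustration):

```python
# Classical 1D Richardson-Lucy deconvolution. Illustrative only: the
# paper's method adds L1 regularization and spatially variant PSFs.

def convolve_same(signal, kernel):
    """'Same'-size correlation with a centered odd-length kernel; for the
    symmetric PSF used below this equals convolution."""
    half = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, k in enumerate(kernel):
            idx = i + j - half
            if 0 <= idx < len(signal):
                acc += signal[idx] * k
        out.append(acc)
    return out

def richardson_lucy(observed, psf, iterations=100):
    psf_mirror = psf[::-1]
    estimate = [1.0] * len(observed)   # flat, non-negative initial guess
    for _ in range(iterations):
        blurred = convolve_same(estimate, psf)
        ratio = [o / max(b, 1e-12) for o, b in zip(observed, blurred)]
        correction = convolve_same(ratio, psf_mirror)
        estimate = [e * c for e, c in zip(estimate, correction)]
    return estimate

truth = [0.0] * 9
truth[4] = 1.0                          # a single point source ("bead")
psf = [0.25, 0.5, 0.25]                 # symmetric blur kernel
observed = convolve_same(truth, psf)    # simulated blurry measurement
restored = richardson_lucy(observed, psf)
print(max(range(len(restored)), key=lambda i: restored[i]))  # → 4
```

The multiplicative update keeps the estimate non-negative at every step, one reason the iteration is popular for photon-count data such as fluorescence microscopy.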


Subject(s)
Algorithms, Image Enhancement/methods, Image Interpretation, Computer-Assisted/methods, Imaging, Three-Dimensional/methods, Microscopy, Fluorescence/methods, Pattern Recognition, Automated/methods, Artificial Intelligence, Reproducibility of Results, Sensitivity and Specificity
19.
IEEE Trans Pattern Anal Mach Intell ; 34(8): 1563-75, 2012 Aug.
Article in English | MEDLINE | ID: mdl-22201055

ABSTRACT

We present a method for densely computing local rotation-invariant image descriptors in volumetric images. The descriptors are based on a transformation to the harmonic domain, which we compute very efficiently via differential operators. We show that this fast voxelwise computation is restricted to a family of basis functions that satisfy certain differential relationships. Building on this finding, we propose local descriptors based on the Gaussian Laguerre and spherical Gabor basis functions and show how the coefficients can be computed efficiently by recursive differentiation. As an example, we demonstrate the effectiveness of such dense descriptors in a detection and classification task on biological 3D images. In a direct comparison with existing volumetric features, among them 3D SIFT, our descriptors show superior performance.


Subject(s)
Algorithms, Artificial Intelligence, Imaging, Three-Dimensional/methods, Animals, Arabidopsis/cytology, Computer Simulation, Databases, Factual, Image Processing, Computer-Assisted/methods, Meristem/cytology, Models, Theoretical, Normal Distribution, Pattern Recognition, Automated, Plant Cells/diagnostic imaging, Ultrasonography
20.
IEEE Trans Pattern Anal Mach Intell ; 34(3): 493-505, 2012 Mar.
Article in English | MEDLINE | ID: mdl-21808082

ABSTRACT

We propose a probabilistic formulation of joint silhouette extraction and 3D reconstruction given a series of calibrated 2D images. Instead of segmenting each image separately in order to construct a 3D surface consistent with the estimated silhouettes, we compute the most probable 3D shape that gives rise to the observed color information. The probabilistic framework, based on Bayesian inference, enables robust 3D reconstruction by optimally taking into account the contribution of all views. We solve the arising maximum a posteriori shape inference in a globally optimal manner by convex relaxation techniques in a spatially continuous representation. For an interactively provided user input in the form of scribbles specifying foreground and background regions, we build corresponding color distributions as multivariate Gaussians and find a volume occupancy that best fits to this data in a variational sense. Compared to classical methods for silhouette-based multiview reconstruction, the proposed approach does not depend on initialization and enjoys significant resilience to violations of the model assumptions due to background clutter, specular reflections, and camera sensor perturbations. In experiments on several real-world data sets, we show that exploiting a silhouette coherency criterion in a multiview setting allows for dramatic improvements of silhouette quality over independent 2D segmentations without any significant increase of computational efforts. This results in more accurate visual hull estimation, needed by a multitude of image-based modeling approaches. We made use of recent advances in parallel computing with a GPU implementation of the proposed method generating reconstructions on volume grids of more than 20 million voxels in up to 4.41 seconds.
