Results 1 - 20 of 1,496
1.
PLoS One ; 15(3): e0229651, 2020.
Article in English | MEDLINE | ID: mdl-32126113

ABSTRACT

Though traditional thresholding methods are simple and efficient, they may produce poor segmentation results because only the image's brightness information is taken into account when selecting the threshold. Taking the contextual information between pixels into account can improve segmentation accuracy. To do this, a new thresholding method is proposed in this paper. The proposed method constructs a new two-dimensional histogram using the brightness of a pixel and the local relative entropy (LRE) of its neighboring pixels. The LRE measures the brightness difference between a pixel and its neighboring pixels. The two-dimensional histogram, consisting of gray level and LRE, reflects the contextual information between pixels to a certain extent. The optimal threshold vector is obtained by minimizing a cross-entropy criterion. Experimental results show that the proposed method achieves more accurate segmentation results than other thresholding methods.
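The threshold-selection step can be illustrated with the classic one-dimensional minimum cross-entropy (Li) criterion. This is a hedged sketch of that baseline only; the paper's contribution, the two-dimensional gray-level/LRE histogram, is not reproduced here.

```python
import numpy as np

def min_cross_entropy_threshold(image):
    """1-D minimum cross-entropy (Li) threshold over a 256-bin histogram.
    Illustrative baseline for the paper's 2-D gray-level/LRE extension."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    hist = hist.astype(float)
    levels = np.arange(256)
    best_t, best_cost = 1, np.inf
    for t in range(1, 256):
        p0, p1 = hist[:t], hist[t:]
        w0, w1 = p0.sum(), p1.sum()
        if w0 == 0 or w1 == 0:
            continue                      # degenerate split, skip
        m0 = (levels[:t] * p0).sum() / w0  # mean of the dark class
        m1 = (levels[t:] * p1).sum() / w1  # mean of the bright class
        if m0 <= 0 or m1 <= 0:
            continue                      # log undefined at zero mean
        # Li criterion: cross entropy between the image and its
        # two-level (m0/m1) reconstruction, up to a constant term
        cost = -(w0 * m0 * np.log(m0) + w1 * m1 * np.log(m1))
        if cost < best_cost:
            best_cost, best_t = cost, t
    return best_t
```

For a clearly bimodal image the returned threshold falls between the two modes.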


Subjects
Artificial Intelligence; Image Processing, Computer-Assisted/methods; Algorithms; Artificial Intelligence/statistics & numerical data; Color; Entropy; Humans; Image Processing, Computer-Assisted/statistics & numerical data; Mathematical Concepts
2.
PLoS One ; 15(3): e0229526, 2020.
Article in English | MEDLINE | ID: mdl-32150547

ABSTRACT

In diffusion MRI, the Ensemble Average Propagator (EAP) provides relevant microstructural information and meaningful descriptive maps of the white matter previously obscured by traditional techniques like Diffusion Tensor Imaging (DTI). The direct estimation of the EAP, however, requires a dense sampling of the Cartesian q-space involving a huge number of samples (diffusion gradients) for proper reconstruction. A collection of more efficient techniques has been proposed in the last decade based on parametric representations of the EAP, but they still imply acquiring a large number of diffusion gradients with different b-values (shells). Paradoxically, this has come together with an effort to find scalar measures that gather all the q-space microstructural information probed into one single index or set of indices. Among them, the return-to-origin (RTOP), return-to-plane (RTPP), and return-to-axis (RTAP) probabilities have rapidly gained popularity. In this work, we propose the so-called "Apparent Measures Using Reduced Acquisitions" (AMURA), aimed at computing scalar indices that can mimic the sensitivity of state-of-the-art EAP-based measures to microstructural changes. AMURA drastically reduces both the number of samples needed and the computational complexity of the estimation of diffusion properties by assuming that the diffusion anisotropy is roughly independent of the radial direction. This simplification allows us to compute closed-form expressions from single-shell information, so that AMURA remains compatible with standard acquisition protocols commonly used even in clinical practice. Additionally, the analytical form of AMURA-based measures, as opposed to the iterative, non-linear reconstruction ubiquitous in full EAP techniques, makes the newly introduced apparent RTOP, RTPP, and RTAP both robust and efficient to compute.


Subjects
Diffusion Magnetic Resonance Imaging/methods; Image Interpretation, Computer-Assisted/methods; Image Interpretation, Computer-Assisted/statistics & numerical data; Algorithms; Brain/diagnostic imaging; Diffusion Magnetic Resonance Imaging/statistics & numerical data; Diffusion Tensor Imaging/methods; Image Enhancement/methods; Image Processing, Computer-Assisted/methods; Image Processing, Computer-Assisted/statistics & numerical data; Magnetic Resonance Imaging/methods; White Matter/diagnostic imaging
3.
PLoS One ; 15(3): e0229560, 2020.
Article in English | MEDLINE | ID: mdl-32176698

ABSTRACT

PURPOSE: Image texture is increasingly used to discriminate tissues and lesions in PET/CT. For quantification or computer-aided diagnosis, textural feature analysis must produce robust and comparable values. Because statistical feature values depend on image count statistics, we investigated in depth the stability of Haralick feature values as functions of acquisition duration, and for common image resolutions and reconstructions. METHODS: A homogeneous cylindrical phantom containing 9.6 kBq/ml Ge-68 was repeatedly imaged on a Siemens Biograph mCT, with acquisition durations ranging from three seconds to three hours. Images with 1.5, 2, and 4 mm isotropic voxels were reconstructed with filtered back-projection (FBP), ordered-subset expectation maximization (OSEM), and the Siemens TrueX algorithm. We analysed Haralick features derived from differently quantized (3- to 8-bit) grey-level co-occurrence matrices (GLCMs) as functions of exposure E, which we defined as the product of the activity concentration in a volume of interest (VOI) and the acquisition duration. The VOI was a 50 mm wide cube at the centre of the phantom. Feature stability was defined as df/dE → 0. RESULTS: The most stable feature values occurred in low-resolution FBPs, whereas some feature values from 1.5 mm TrueX reconstructions ranged over two orders of magnitude. Within the same reconstructions, most feature value-exposure curves reached stable plateaus at similar exposures, regardless of GLCM quantization. With 8-bit GLCMs, median time to stability was 16 s and 22 s for FBPs, 18 s and 125 s for OSEM, and 23 s, 45 s, and 76 s for PSF reconstructions, with longer durations for higher resolutions. Stable exposures coincided in OSEM and TrueX reconstructions with image noise distributions converging to a Gaussian. In FBP, the occurrence of stable values coincided with the disappearance of negative image values in the VOI.
CONCLUSIONS: Haralick feature values depend strongly on exposure, but invariance exists within defined domains of exposure. Here, we present an easily replicable procedure to identify such stable exposure domains, where image noise does not substantially add to textural feature values. Only by imaging at predetermined feature-invariant exposure levels and by adjusting exposure to expected activity concentrations can textural features have a quantitative use in PET/CT. The necessary exposure levels are attainable by modern PET/CT systems in clinical routine.
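The GLCM-based features studied above can be sketched as follows. This is a minimal illustration with a single pixel offset and two of Haralick's features (contrast and homogeneity), not the study's scanner-specific pipeline or full feature set.

```python
import numpy as np

def glcm_features(q, levels):
    """Symmetric, normalised grey-level co-occurrence matrix for the
    horizontal offset (0, 1), plus two Haralick features.
    `q` must already be quantised to integers in [0, levels)."""
    a, b = q[:, :-1].ravel(), q[:, 1:].ravel()
    P = np.zeros((levels, levels))
    np.add.at(P, (a, b), 1.0)        # count co-occurring grey-level pairs
    P = P + P.T                      # make the matrix symmetric
    P /= P.sum()                     # normalise to a joint probability
    i, j = np.indices((levels, levels))
    contrast = float((P * (i - j) ** 2).sum())
    homogeneity = float((P / (1.0 + (i - j) ** 2)).sum())
    return contrast, homogeneity
```

A perfectly uniform patch yields zero contrast and unit homogeneity; a checkerboard yields maximal contrast, which is why these features respond to image noise.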


Subjects
Image Processing, Computer-Assisted/methods; Image Processing, Computer-Assisted/statistics & numerical data; Positron Emission Tomography Computed Tomography/methods; Algorithms; Animals; Fluorodeoxyglucose F18; Humans; Phantoms, Imaging/statistics & numerical data; Positron Emission Tomography Computed Tomography/standards; Positron Emission Tomography Computed Tomography/statistics & numerical data; Positron-Emission Tomography/methods; Radiopharmaceuticals
4.
Nat Commun ; 11(1): 872, 2020 02 13.
Article in English | MEDLINE | ID: mdl-32054847

ABSTRACT

Natural scenes sparsely activate neurons in the primary visual cortex (V1). However, how sparsely active neurons reliably represent complex natural images, and how this information is optimally decoded from their activity, has remained unclear. Using two-photon calcium imaging, we recorded visual responses to natural images from several hundred V1 neurons and reconstructed the images from neural activity in anesthetized and awake mice. A single natural image is linearly decodable from a surprisingly small number of highly responsive neurons, and the remaining neurons even degrade the decoding. Furthermore, these neurons reliably represent the image across trials, regardless of trial-to-trial response variability. Our results indicate that diverse, partially overlapping receptive fields ensure sparse and reliable representation. We suggest that information is reliably represented while the corresponding neuronal patterns change across trials, and that collecting only the activity of highly responsive neurons is an optimal decoding strategy for downstream neurons.
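The linear decoding described here can be sketched as a ridge-regularised least-squares decoder mapping neural responses to pixels. This is an illustrative stand-in under generic assumptions, not the authors' exact estimator.

```python
import numpy as np

def fit_linear_decoder(R, S, lam=1.0):
    """Ridge-regularised linear decoder: pixels ~ R @ W.
    R: (n_trials, n_neurons) responses; S: (n_trials, n_pixels) stimuli.
    Closed-form normal-equations solution with penalty lam."""
    n = R.shape[1]
    W = np.linalg.solve(R.T @ R + lam * np.eye(n), R.T @ S)
    return W
```

Restricting the columns of R to the most responsive neurons before fitting corresponds to the abstract's observation that a small subpopulation suffices.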


Subjects
Sensory Receptor Cells/physiology; Visual Cortex/cytology; Visual Cortex/physiology; Visual Perception/physiology; Animals; Female; Image Processing, Computer-Assisted/statistics & numerical data; Male; Mice; Mice, Inbred C57BL; Mice, Transgenic; Microscopy, Fluorescence, Multiphoton; Pattern Recognition, Visual/physiology; Photic Stimulation
5.
Comput Math Methods Med ; 2019: 5450373, 2019.
Article in English | MEDLINE | ID: mdl-31885682

ABSTRACT

In the field of cell and molecular biology, green fluorescent protein (GFP) images provide functional information embodying the molecular distribution of biological cells while phase-contrast images maintain structural information with high resolution. Fusion of GFP and phase-contrast images is of high significance to the study of subcellular localization, protein functional analysis, and genetic expression. This paper proposes a novel algorithm to fuse these two types of biological images via generative adversarial networks (GANs) by carefully taking their own characteristics into account. The fusion problem is modelled as an adversarial game between a generator and a discriminator. The generator aims to create a fused image that well extracts the functional information from the GFP image and the structural information from the phase-contrast image at the same time. The target of the discriminator is to further improve the overall similarity between the fused image and the phase-contrast image. Experimental results demonstrate that the proposed method can outperform several representative and state-of-the-art image fusion methods in terms of both visual quality and objective evaluation.


Subjects
Algorithms; Green Fluorescent Proteins/metabolism; Image Processing, Computer-Assisted/methods; Microscopy, Phase-Contrast/methods; Cell Biology; Computational Biology; Deep Learning; Image Processing, Computer-Assisted/statistics & numerical data; Microscopy, Phase-Contrast/statistics & numerical data; Models, Biological
6.
PLoS Comput Biol ; 15(12): e1006997, 2019 12.
Article in English | MEDLINE | ID: mdl-31856159

ABSTRACT

Magnetic resonance tomography typically applies the Fourier transform to k-space signals repeatedly acquired from a frequency-encoded spatial region of interest, and therefore requires a stationary object during scanning. Any movement of the object results in phase errors in the recorded signal, leading to deformed images, ghosting, and artifacts, since the encoded information does not originate from the intended region of the object. However, if the type and magnitude of movement is known instantaneously, the scanner or the reconstruction algorithm could be adjusted to compensate for the movement, directly allowing high-quality imaging with non-stationary objects. This would be an enormous boon to studies that tie cell metabolomics to spontaneous organism behaviour, eliminating the stress otherwise necessitated by restraining measures such as anesthesia or clamping. In the present theoretical study, we use a phantom of the animal model C. elegans to examine the feasibility of automatically predicting its movement and position, and to evaluate the impact of movement prediction, within a sufficiently long time horizon, on image reconstruction. For this purpose, we use automated image processing to annotate body parts in freely moving C. elegans and predict their path of movement. We further introduce an MRI simulation platform based on bright-field videos of the moving worm, combined with a stack of high-resolution transmission electron microscope (TEM) slice images as virtual high-resolution phantoms. A phantom provides an indication of the spatial distribution of signal-generating nuclei on a particular imaging slice. We show that adjusting the scanning to the predicted movements strongly reduces distortions in the resulting image, opening the door for implementation in a high-resolution NMR scanner.
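The phase errors caused by rigid motion follow directly from the Fourier shift theorem, which is also the simplest form of the compensation the abstract describes: multiply k-space by the conjugate phase ramp. A 1-D sketch, assuming the shift is known exactly.

```python
import numpy as np

def compensate_shift(kspace, shift):
    """Undo the linear phase ramp that a known circular shift of the
    object imprints on its k-space signal (Fourier shift theorem)."""
    n = len(kspace)
    k = np.fft.fftfreq(n)                        # normalised frequencies
    return kspace * np.exp(2j * np.pi * k * shift)
```

If the object moved by 3 samples during "acquisition", correcting the k-space data recovers the original signal.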


Subjects
Magnetic Resonance Imaging/methods; Algorithms; Animals; Caenorhabditis elegans/anatomy & histology; Caenorhabditis elegans/physiology; Computational Biology; Computer Simulation; Feasibility Studies; Humans; Image Processing, Computer-Assisted/methods; Image Processing, Computer-Assisted/statistics & numerical data; Magnetic Resonance Imaging/statistics & numerical data; Models, Biological; Motion; Movement; Phantoms, Imaging
7.
Elife ; 8, 2019 10 22.
Article in English | MEDLINE | ID: mdl-31637999

ABSTRACT

High-content phenotypic screening has become the approach of choice for drug discovery due to its ability to extract drug-specific multi-layered data. In the field of epigenetics, such screening methods have suffered from a lack of tools sensitive to selective epigenetic perturbations. Here we describe a novel approach, Microscopic Imaging of Epigenetic Landscapes (MIEL), which captures the nuclear staining patterns of epigenetic marks and employs machine learning to accurately distinguish between such patterns. We validated the MIEL platform across multiple cell lines and using dose-response curves to ensure the fidelity and robustness of this approach for high-content, high-throughput drug discovery. Focusing on noncytotoxic glioblastoma treatments, we demonstrated that MIEL can identify and classify epigenetically active drugs. Furthermore, we show that MIEL can accurately rank candidate drugs by their ability to produce desired epigenetic alterations consistent with increased sensitivity to chemotherapeutic agents or with induction of glioblastoma differentiation.


Subjects
Antineoplastic Agents/therapeutic use; Biomarkers, Tumor/genetics; Drug Discovery/methods; Epigenesis, Genetic/drug effects; High-Throughput Screening Assays; Histones/genetics; Neoplasm Proteins/genetics; Biomarkers, Tumor/metabolism; Brain Neoplasms/drug therapy; Brain Neoplasms/genetics; Brain Neoplasms/metabolism; Brain Neoplasms/pathology; Cell Line, Tumor; Cell Nucleus/drug effects; Cell Nucleus/genetics; Cell Nucleus/metabolism; Dose-Response Relationship, Drug; Glioblastoma/drug therapy; Glioblastoma/genetics; Glioblastoma/metabolism; Glioblastoma/pathology; Histones/metabolism; Humans; Image Processing, Computer-Assisted/statistics & numerical data; Machine Learning; Microscopy, Fluorescence; Neoplasm Proteins/metabolism
8.
PLoS One ; 14(9): e0221203, 2019.
Article in English | MEDLINE | ID: mdl-31568494

ABSTRACT

With the introduction of multi-camera systems in modern plant phenotyping, new opportunities for combined multimodal image analysis emerge. Visible light (VIS), fluorescence (FLU) and near-infrared images enable scientists to study different plant traits based on optical appearance, biochemical composition and nutrition status. A straightforward analysis of high-throughput image data is hampered by a number of natural and technical factors, including large variability of plant appearance, inhomogeneous illumination, and shadows and reflections in the background regions. Consequently, automated segmentation of plant images represents a big challenge and often requires extensive human-machine interaction. Combined analysis of different image modalities may enable automatisation of plant segmentation in "difficult" image modalities such as VIS images by utilising the results of segmenting image modalities that exhibit higher contrast between plant and background, i.e. FLU images. For efficient segmentation and detection of diverse plant structures (e.g. leaf tips, flowers), image registration techniques based on feature point (FP) matching are of particular interest. However, finding reliable feature points and point pairs for differently structured plant species in multimodal images can be challenging. To address this task in a general manner, different feature point detectors should be considered. Here, a comparison of seven different feature point detectors for automated registration of VIS and FLU plant images is performed. Our experimental results show that straightforward image registration using FP detectors is prone to errors due to the large structural differences between the FLU and VIS modalities. We show that structural image enhancement, such as background filtering and edge image transformation, significantly improves the performance of FP algorithms. To overcome the limitations of single FP detectors, we suggest combining different FP methods. We demonstrate the application of our enhanced FP approach for automated registration of a large number of FLU/VIS images of developing plants acquired in high-throughput phenotyping experiments.
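Feature-point matching of the kind compared here typically pairs descriptors by nearest neighbour with Lowe's ratio test. A generic sketch of that matching step; descriptor extraction is omitted, and the seven detectors compared in the paper are not reproduced.

```python
import numpy as np

def match_descriptors(d1, d2, ratio=0.75):
    """Nearest-neighbour matching with Lowe's ratio test.
    d1, d2: (n, dim) descriptor arrays for the two modalities.
    Returns index pairs (i, j) that pass the ratio test."""
    matches = []
    for i, d in enumerate(d1):
        dist = np.linalg.norm(d2 - d, axis=1)    # distance to every candidate
        j, k = np.argsort(dist)[:2]              # best and second-best match
        if dist[j] < ratio * dist[k]:            # accept only unambiguous pairs
            matches.append((i, int(j)))
    return matches
```

The accepted pairs would then feed a transform estimation (e.g. RANSAC-fitted affine) to register the VIS image onto the FLU image.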


Subjects
Image Processing, Computer-Assisted/methods; Plants/anatomy & histology; Algorithms; Chlorophyll/metabolism; Fluorescence; Humans; Image Processing, Computer-Assisted/statistics & numerical data; Lighting; Phenotype; Photography/methods; Plant Development; Plant Leaves/anatomy & histology; Plant Leaves/metabolism; Plants/metabolism
9.
Dev Psychol ; 55(9): 1908-1920, 2019 Sep.
Article in English | MEDLINE | ID: mdl-31464494

ABSTRACT

Empathic responding-the capacity to understand, resonate with, and respond sensitively to others' emotional experiences-is a complex human faculty that calls upon multiple social, emotional, and cognitive capacities and their underlying neural systems. Emerging evidence in adults has suggested that the hippocampus and its associated network may play an important role in empathic responding, possibly via processes such as memory of emotional events, but the contribution of this structure in early childhood is unknown. We examined concurrent associations between empathic responding and hippocampal volume in a sample of 78 children (ages 4-8 years). Larger bilateral hippocampal volume (adjusted for intracranial volume) predicted greater observed empathic responses toward an experimenter in distress, but only for boys. The association was not driven by a specific subregion of the hippocampus (head, body, tail), nor did it vary with age. Empathic responding was not significantly related to amygdala volume, suggesting specificity of relations with the hippocampus. Results support the proposal that hippocampal structure contributes to individual differences in children's empathic responding, consistent with research in adults. Findings shed light on an understudied structure in the complex neural systems supporting empathic responding and raise new questions regarding sex differences in the neurodevelopment of empathy in early childhood.


Subjects
Emotions/physiology; Empathy/physiology; Hippocampus; Child; Child, Preschool; Female; Humans; Image Processing, Computer-Assisted/statistics & numerical data; Magnetic Resonance Imaging; Male; Sex Factors; Social Behavior
10.
Comput Methods Programs Biomed ; 179: 104976, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31443856

ABSTRACT

BACKGROUND AND OBJECTIVE: There has been growing interest in using functional connectivity patterns determined from fMRI data to characterize groups of individuals exhibiting common traits. However, the present challenge lies in efficient and accurate identification of distinct patterns observed consistently across multiple subjects. Existing approaches either impose strong assumptions, require aligning images before processing, or require data-intensive machine learning algorithms with manually labeled training datasets. In this paper, we propose a more principled and flexible approach to address this. METHODS: Our approach redefines the problem of estimating the group-representative functional network as an image segmentation problem. After employing an improved clustering-based ICA scheme to pre-process the dataset of individual functional network images, we use a maximum a posteriori-Markov random field (MAP-MRF) framework to solve the image segmentation problem. In this framework, we propose a probabilistic model of the individual pixels of the fMRI data, with the model involving a latent group-representative functional network image. Given an observed dataset, we apply a novel and efficient variational Bayes algorithm to recover the associated latent group image. Our methodology seeks to overcome limitations of more traditional schemes by exploiting spatial relationships underlying the connectivity maps and accounting for uncertainty in the estimation process. RESULTS: We validate our approach using synthetic, simulated, and real data. First, we generate datasets from the proposed forward model with subject-specific binary masking and measurement noise, as well as from a variant of the model without measurement noise. We use both datasets to evaluate our model, along with two algorithms: a coordinate-ascent algorithm and a variational Bayes algorithm. We conclude that our proposed model with variational Bayes outperforms the competitors, even under model misspecification. Using variational Bayes offers a significant improvement in performance, with almost no additional computational overhead. We next test our approach on simulated fMRI data. We show our approach is robust to initialization and can recover a solution close to the ground truth. Finally, we apply our proposed methodology, along with baselines, to a real dataset of fMRI recordings of individuals from two groups, a control group and a group suffering from depression, with recordings made while individuals were subjected to musical stimuli. Our methodology is able to identify group differences that are less clear under competing methods. CONCLUSIONS: Our model-based approach demonstrates the advantage of probabilistic models and modern algorithms that account for uncertainty in accurate identification of group-representative connectivity maps. The variational Bayes methodology yields highly accurate results without increasing the computational load compared to traditional methods. In addition, it is robust to model misspecification and increases the ability to avoid local optima in the solution.
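The MAP-MRF idea can be illustrated with the classical iterated-conditional-modes (ICM) update for a binary label field: each pixel picks the label minimising a data term plus an Ising smoothness term. A much-simplified stand-in for the paper's variational Bayes scheme, shown only to make the energy-minimisation structure concrete.

```python
import numpy as np

def icm_binary(obs, beta=1.0, iters=5):
    """ICM for a binary MAP-MRF: data term (obs - label)^2 plus an
    Ising penalty beta per disagreeing 4-neighbour."""
    lab = (obs > 0.5).astype(int)         # initialise by thresholding
    h, w = obs.shape
    for _ in range(iters):
        for y in range(h):
            for x in range(w):
                costs = []
                for l in (0, 1):
                    data = (obs[y, x] - l) ** 2
                    smooth = 0.0
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w:
                            smooth += (lab[ny, nx] != l)
                    costs.append(data + beta * smooth)
                lab[y, x] = int(np.argmin(costs))  # greedy local update
    return lab
```

A single noisy pixel in an otherwise uniform field is flipped back by the smoothness prior, the same mechanism that lets the latent group image absorb subject-specific noise.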


Subjects
Connectome/statistics & numerical data; Functional Neuroimaging/statistics & numerical data; Magnetic Resonance Imaging/statistics & numerical data; Algorithms; Bayes Theorem; Cluster Analysis; Computational Biology; Computer Simulation; Depression/diagnostic imaging; Humans; Image Interpretation, Computer-Assisted/statistics & numerical data; Image Processing, Computer-Assisted/statistics & numerical data; Machine Learning; Markov Chains; Models, Statistical
11.
Scanning ; 2019: 4235865, 2019.
Article in English | MEDLINE | ID: mdl-31281562

ABSTRACT

This research presents an accurate and efficient contour-length estimation method for DNA digital curves acquired from Atomic Force Microscopy (AFM) images. The automated method is calibrated against different AFM resolutions and readily extends to other kinds of biopolymer samples across a range of sample stiffnesses. The methodology considers the local geometric relationships of the digital curve, as these digital shape segments and pixel connections represent the actual morphology of the biopolymer sample as it is imaged by AFM scanning. In order to incorporate the true local geometric relationships embedded in the continuous form of the original sample, one needs to find their counterpart in the digitized image. This counterpart is realized by taking the skeleton backbone of the sample contour and using the connection relationships of the digitized pixels to find its local shape representation. In this research, the 8-connected Freeman chain code (CC) is used to describe the directional connection between DNA image pixels, in order to account for the local shapes of four connected pixels. The result is a novel shape number (SN) system derived from the CC: a fully automated algorithm that can be applied to DNA samples of any length for accurate estimation at low computational cost. This shape-wise weighting corrects the local length with great precision, accounting for all the different morphologies of the biopolymer sample, and results in accurate length estimation, with an error below 0.07%, an order-of-magnitude improvement over previous findings.
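The baseline length estimate from an 8-connected Freeman chain code weights axial and diagonal steps differently (even codes are axial, odd codes diagonal). A sketch of that baseline only; the paper's shape-number correction on top of it is not reproduced here.

```python
import numpy as np

def chain_code_length(codes, even=1.0, odd=2 ** 0.5):
    """Contour length from an 8-connected Freeman chain code:
    even codes (0, 2, 4, 6) are unit axial steps, odd codes
    (1, 3, 5, 7) are sqrt(2) diagonal steps."""
    codes = np.asarray(codes)
    return float(np.sum(np.where(codes % 2 == 0, even, odd)))
```

Refined weightings (e.g. corner-count corrections, or the paper's local-shape weights) replace the two constants with values fitted to reduce digitisation bias.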


Subjects
DNA, Single-Stranded/ultrastructure; Image Processing, Computer-Assisted/statistics & numerical data; Microscopy, Atomic Force/statistics & numerical data; Algorithms; Solutions/chemistry
12.
Transplant Proc ; 51(7): 2387-2390, 2019 Sep.
Article in English | MEDLINE | ID: mdl-31324483

ABSTRACT

PURPOSE: Estimation of graft volume is critical in living donor liver transplantation (LDLT). In this study, we aimed to evaluate the accuracy of software-aided automated computed tomography (CT) volumetry in the preoperative assessment of graft size for LDLT and to compare this method with manual volumetry. MATERIALS AND METHODS: Forty-one donors (27 men; 14 women; mean age ± standard deviation, 28.4 ± 6.6 years) underwent contrast-enhanced CT prior to graft removal for LDLT. A liver transplant surgeon determined the weights of liver grafts using automated 3-dimensional volumetry software, and an abdominal radiologist specializing in liver imaging independently and blindly used commercial interactive volumetry-assisted software on a viewing workstation to determine the liver volume on CT images. Both results were then compared to the weights of the actual grafts obtained during surgery. Intraclass correlation coefficients were used to assess the consistency of numerical measurements, and Pearson correlation coefficients were calculated to detect a linear relationship between numerical variables. To compare correlation coefficients, z scores were used. RESULTS: For the right and left lobe graft volume estimations by the surgeon, there was a positive correlation between the results and actual graft weight (r = 0.834, P = .001, and r = 0.587, P = .001, respectively). Likewise, graft volume estimation by the radiologist for the right and left lobes was also positively correlated with the actual graft weight (r = 0.819, P = .001, and r = 0.626, P = .001, respectively). There was no significant difference between correlation coefficients (P = .836). CONCLUSION: Volumetric measurement of the donor graft using 3-dimensional software provides results comparable to manual CT calculation of liver volume.
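The Pearson coefficients reported above quantify the linear agreement between estimated volume and actual graft weight. For reference, a minimal implementation of the statistic (the study's ICC and z-score comparison are not reproduced here).

```python
import numpy as np

def pearson_r(x, y):
    """Pearson product-moment correlation between two samples."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()          # centre both samples
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))
```

Perfectly proportional estimates give r = 1; values like the reported r = 0.834 indicate strong but imperfect linear agreement.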


Subjects
Cone-Beam Computed Tomography/statistics & numerical data; Image Processing, Computer-Assisted/statistics & numerical data; Imaging, Three-Dimensional/statistics & numerical data; Liver/diagnostic imaging; Transplants/diagnostic imaging; Adult; Cone-Beam Computed Tomography/methods; Female; Humans; Image Processing, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Liver/pathology; Liver Transplantation; Living Donors; Male; Middle Aged; Organ Size; Software; Transplants/pathology
13.
PLoS One ; 14(7): e0219052, 2019.
Article in English | MEDLINE | ID: mdl-31356649

ABSTRACT

The stereo correspondence problem exists because false matches between the images from multiple sensors camouflage the true (veridical) matches. True matches are correspondences between image points that have the same generative source; false matches are correspondences between similar image points that have different sources. This problem of selecting true matches among false ones must be overcome by both biological and artificial stereo systems in order for them to be useful depth sensors. The proposed re-examination of this fundamental issue shows that false matches form a symmetrical pattern in the array of all possible matches, with true matches forming the axis of symmetry. The patterning of false matches can therefore be used to locate true matches and derive the depth profile of the surface that gave rise to them. This reverses the traditional strategy, which treats false matches as noise. The new approach is particularly well suited to extracting the 3-D locations and shapes of camouflaged surfaces and to working in scenes characterized by high degrees of clutter. We demonstrate that the symmetry of false-match signals can be exploited to identify surfaces in random-dot stereograms. This strategy permits novel depth computations for target detection, localization, and identification by machine-vision systems, accounts for physiological and psychophysical findings that are otherwise puzzling, and makes possible new ways of combining stereo and motion signals.
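For contrast with the proposed symmetry-based strategy, the conventional approach simply picks the minimum-cost match per pixel. A naive winner-take-all sketch over one scan line; the paper instead analyses the structure of the full match-cost array, which this baseline discards.

```python
import numpy as np

def scanline_disparity(left, right, max_d=4):
    """Winner-take-all disparity for one scan line: for each left pixel x,
    pick the shift d minimising the absolute intensity difference to the
    right pixel x - d. Treats all non-minimal matches as noise."""
    n = len(left)
    disp = np.zeros(n, dtype=int)
    for x in range(n):
        best, bd = np.inf, 0
        for d in range(min(max_d, x) + 1):       # candidate shifts
            c = abs(float(left[x]) - float(right[x - d]))
            if c < best:
                best, bd = c, d
        disp[x] = bd
    return disp
```

With unique intensities and a rigid shift between the views, the true disparity is recovered; in cluttered or camouflaged scenes this per-pixel minimum is exactly where false matches mislead.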


Subjects
Depth Perception/physiology; Imaging, Three-Dimensional/statistics & numerical data; Algorithms; Animals; Computer Simulation; Humans; Image Processing, Computer-Assisted/statistics & numerical data; Photic Stimulation; Psychophysics; Vision Disparity/physiology
14.
Proc Natl Acad Sci U S A ; 116(30): 15007-15012, 2019 07 23.
Article in English | MEDLINE | ID: mdl-31292253

ABSTRACT

High-resolution structural information is essential to understand protein function. Protein-structure determination needs a considerable amount of protein, which can be challenging to produce, often involving harsh and lengthy procedures. In contrast, the several thousand to a few million protein particles required for structure determination by cryogenic electron microscopy (cryo-EM) can be provided by miniaturized systems. Here, we present a microfluidic method for the rapid isolation of a target protein and its direct preparation for cryo-EM. Less than 1 µL of cell lysate is required as starting material to solve the atomic structure of the untagged, endogenous human 20S proteasome. Our work paves the way for high-throughput structure determination of proteins from minimal amounts of cell lysate and opens more opportunities for the isolation of sensitive, endogenous protein complexes.


Subjects
Cryoelectron Microscopy/methods; Image Processing, Computer-Assisted/statistics & numerical data; Proteasome Endopeptidase Complex/ultrastructure; Protein Subunits/chemistry; Biotinylation; Cryoelectron Microscopy/instrumentation; HeLa Cells; Humans; Imaging, Three-Dimensional; Immunoglobulin Fab Fragments/chemistry; Microfluidic Analytical Techniques/methods; Proteasome Endopeptidase Complex/chemistry; Proteasome Endopeptidase Complex/isolation & purification; Protein Conformation; Protein Subunits/isolation & purification; Vitrification
15.
PLoS One ; 14(6): e0218931, 2019.
Article in English | MEDLINE | ID: mdl-31246999

ABSTRACT

Endosomes are subcellular organelles which serve as important transport compartments in eukaryotic cells. Fluorescence microscopy is a widely applied technology to study endosomes at the subcellular level. In general, a microscopy image can contain a large number of organelles and endosomes in particular. Detecting and annotating endosomes in fluorescence microscopy images is a critical part in the study of subcellular trafficking processes. Such annotation is usually performed by human inspection, which is time-consuming and prone to inaccuracy if carried out by inexperienced analysts. This paper proposes a two-stage method for automated detection of ring-like endosomes. The method consists of a localization stage cascaded by an identification stage. Given a test microscopy image, the localization stage generates a voting-map by locally comparing the query endosome patches and the test image based on a bag-of-words model. Using the voting-map, a number of candidate patches of endosomes are determined. Subsequently, in the identification stage, a support vector machine (SVM) is trained using the endosome patches and the background pattern patches. Each of the candidate patches is classified by the SVM to rule out those patches of endosome-like background patterns. The performance of the proposed method is evaluated with real microscopy images of human myeloid endothelial cells. It is shown that the proposed method significantly outperforms several state-of-the-art competing methods using multiple performance metrics.
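The stage-1 voting map can be caricatured as template matching: score each image location by its similarity to a query endosome patch, then take peaks as candidates. A sketch using negative sum-of-squared-differences in place of the paper's bag-of-words model; the SVM identification stage is omitted.

```python
import numpy as np

def voting_map(image, patch):
    """Score every placement of `patch` inside `image` with negative SSD.
    Higher scores mark candidate endosome locations."""
    ph, pw = patch.shape
    h, w = image.shape
    out = np.zeros((h - ph + 1, w - pw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            window = image[y:y + ph, x:x + pw]
            out[y, x] = -np.sum((window - patch) ** 2)  # 0 is a perfect match
    return out
```

The argmax of the map recovers where a query patch was embedded; in the real pipeline, many local maxima become candidates that the SVM stage then filters.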


Subjects
Endosomes/ultrastructure , Image Processing, Computer-Assisted/methods , Microscopy, Fluorescence/methods , Algorithms , Endothelial Cells/ultrastructure , Humans , Image Processing, Computer-Assisted/statistics & numerical data , Microscopy, Fluorescence/statistics & numerical data , Support Vector Machine
16.
PLoS One ; 14(6): e0218086, 2019.
Article in English | MEDLINE | ID: mdl-31188894

ABSTRACT

The evaluation of large amounts of digital image data is of growing importance for biology, including the exploration and monitoring of marine habitats. However, only a tiny percentage of the image data collected is evaluated by marine biologists, who manually interpret and annotate the image contents, which can be slow and laborious. To overcome this annotation bottleneck, two strategies are increasingly proposed: "citizen science" and "machine learning". In this study, we investigated how combining citizen science, to detect objects, with machine learning, to classify megafauna, could automate the annotation of underwater images. For this purpose, multiple large sets of citizen science annotations, exhibiting different degrees of the common errors and inaccuracies observed in citizen science data, were simulated by modifying "gold standard" annotations produced by an experienced marine biologist. The simulation parameters were determined on the basis of two citizen science experiments. This allowed us to analyze the relationship between the outcome of a citizen science study and the quality of the classifications of a deep-learning megafauna classifier. The results show great potential for combining citizen science with machine learning, provided that participants are informed precisely about the annotation protocol. Inaccuracies in annotation position had the most substantial influence on classification accuracy, whereas marking size and false-positive detections had a smaller influence.
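The kind of error simulation described above can be illustrated with a small sketch: gold-standard annotations are degraded with position error, marking-size error, and false-positive detections. Every parameter value below is a hypothetical placeholder, not one of the values the study derived from its two citizen science experiments:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical gold-standard annotations: one (x, y, radius) row per animal.
gold = np.array([[120.0, 80.0, 12.0], [340.0, 210.0, 20.0]])

def simulate_citizen(gold, pos_sigma=5.0, size_jitter=0.2, fp_rate=0.1,
                     frame=(512, 512)):
    """Degrade gold annotations with position, size, and false-positive errors."""
    noisy = gold.copy()
    # Position inaccuracy: Gaussian jitter of the marker centre.
    noisy[:, :2] += rng.normal(0, pos_sigma, size=(len(gold), 2))
    # Marking-size inaccuracy: multiplicative jitter of the radius.
    noisy[:, 2] *= 1 + rng.uniform(-size_jitter, size_jitter, len(gold))
    # False positives: spurious detections placed uniformly in the frame.
    n_fp = rng.poisson(fp_rate * len(gold))
    fps = np.column_stack([rng.uniform(0, frame[0], n_fp),
                           rng.uniform(0, frame[1], n_fp),
                           rng.uniform(5, 25, n_fp)])
    return np.vstack([noisy, fps])

print(simulate_citizen(gold).shape)
```

Varying `pos_sigma`, `size_jitter`, and `fp_rate` independently is what lets one attribute the drop in classifier accuracy to each error type separately.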


Subjects
Citizen Science/methods , Deep Learning , Image Processing, Computer-Assisted/statistics & numerical data , Marine Biology/methods , Animals , Aquatic Organisms , Arthropods/anatomy & histology , Arthropods/classification , Cnidaria/anatomy & histology , Cnidaria/classification , Echinodermata/anatomy & histology , Echinodermata/classification , Humans , Imaging, Three-Dimensional , Marine Biology/instrumentation , Mollusca/anatomy & histology , Mollusca/classification , Porifera/anatomy & histology , Porifera/classification
17.
PLoS Comput Biol ; 15(5): e1007012, 2019 05.
Article in English | MEDLINE | ID: mdl-31083649

ABSTRACT

Neuronal synapses transmit electrochemical signals between cells through the coordinated action of presynaptic vesicles, ion channels, scaffolding and adapter proteins, and membrane receptors. In situ structural characterization of numerous synaptic proteins simultaneously through multiplexed imaging facilitates a bottom-up approach to synapse classification and phenotypic description. Objective automation of efficient and reliable synapse detection within these datasets is essential for the high-throughput investigation of synaptic features. Convolutional neural networks can solve this generalized problem of synapse detection; however, these architectures require large numbers of training examples to optimize their thousands of parameters. We propose DoGNet, a neural network architecture that closes the gap between classical computer-vision blob detectors, such as Difference-of-Gaussians (DoG) filters, and modern convolutional networks. DoGNet is optimized to analyze highly multiplexed microscopy data. Its small number of training parameters allows DoGNet to be trained with few examples, which facilitates its application to new datasets without overfitting. We evaluate the method on multiplexed fluorescence imaging data from both primary mouse neuronal cultures and mouse cortex tissue slices. We show that DoGNet outperforms convolutional networks given a low-to-moderate number of training examples, and that it transfers efficiently between datasets collected by separate research groups. DoGNet synapse localizations can then be used to guide the segmentation of individual synaptic protein locations and spatial extents, revealing their spatial organization and relative abundances within individual synapses. The source code is publicly available: https://github.com/kulikovv/dognet.
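A Difference-of-Gaussians blob detector, the classical filter that DoGNet generalizes, can be sketched in a few lines of NumPy/SciPy. This is a generic illustration with arbitrary parameter values, not the DoGNet code:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def dog_detect(image, sigma=2.0, k=1.6, threshold=0.1):
    """Detect bright blobs with a Difference-of-Gaussians band-pass filter."""
    # Band-pass response: narrow Gaussian blur minus wide Gaussian blur.
    dog = gaussian_filter(image, sigma) - gaussian_filter(image, k * sigma)
    # Keep local maxima of the response that exceed the threshold.
    peaks = (dog == maximum_filter(dog, size=5)) & (dog > threshold)
    return np.argwhere(peaks)  # (row, col) coordinates of detections

# Synthetic test image: a single Gaussian spot on a dark background.
img = np.zeros((64, 64))
yy, xx = np.mgrid[:64, :64]
img += np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / (2 * 3.0 ** 2))
print(dog_detect(img))  # detections land at the spot centre, near (32, 32)
```

Where this sketch fixes `sigma`, `k`, and `threshold` by hand, DoGNet learns the corresponding filter parameters from data, which is what allows it to train from few examples.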


Subjects
Models, Neurological , Neural Networks, Computer , Synapses/physiology , Synapses/ultrastructure , Animals , Cerebral Cortex/physiology , Cerebral Cortex/ultrastructure , Computational Biology , Computer Simulation , Databases, Factual , Image Processing, Computer-Assisted/statistics & numerical data , Mice , Microscopy, Fluorescence, Multiphoton/methods , Microscopy, Fluorescence, Multiphoton/statistics & numerical data , Nerve Tissue Proteins/metabolism , Neurons/physiology , Neurons/ultrastructure , Software , Synaptic Transmission/physiology
18.
Nat Methods ; 16(6): 471-477, 2019 06.
Article in English | MEDLINE | ID: mdl-31086343

ABSTRACT

The demand for high-throughput data collection in electron microscopy is increasing for applications in structural and cellular biology. Here we present a combination of software tools that enable automated acquisition guided by image analysis for a variety of transmission electron microscopy acquisition schemes. SerialEM controls microscopes and detectors and can trigger automated tasks at multiple positions with high flexibility. Py-EM interfaces with SerialEM to enact specimen-specific image-analysis pipelines that enable feedback microscopy. As example applications, we demonstrate dose reduction in cryo-electron microscopy experiments, fully automated acquisition of every cell in a plastic section and automated targeting on serial sections for 3D volume imaging across multiple grids.


Subjects
Image Processing, Computer-Assisted/methods , Image Processing, Computer-Assisted/statistics & numerical data , Microscopy, Electron, Transmission/methods , Software , Humans , Microscopy, Electron, Transmission/instrumentation
19.
J Neuropsychiatry Clin Neurosci ; 31(4): 368-377, 2019.
Article in English | MEDLINE | ID: mdl-31117908

ABSTRACT

OBJECTIVE: Working memory impairments represent a core cognitive deficit in schizophrenia, one that predicts patients' daily functioning and is unaffected by current treatments. To address this, working memory is included in the MATRICS Consensus Cognitive Battery (MCCB), a standardized cognitive battery designed to facilitate drug development targeting cognitive symptoms. However, the neurobiology underlying these deficits in MCCB working memory is currently unknown, mirroring the generally poor understanding of working memory deficits in schizophrenia. METHODS: Twenty-eight participants with schizophrenia were administered working memory tests from the MCCB and examined with resting-state functional MRI. Intrinsic connectivity networks were estimated with independent component analysis. Each voxel's time series was correlated with each network's time series, creating a feature vector for voxel-level connectivity analysis. This feature vector was associated with working memory using the distance covariance statistic. RESULTS: The neurobiology of the MCCB working memory tests largely followed the multicomponent model of working memory but revealed unexpected differences. The dorsolateral prefrontal cortex was not associated with working memory. The central executive system was instead associated with delocalized right and left executive control networks. The phonological loop within the multicomponent model, a subsystem involved in storing linguistic information, was associated with connectivity to the left temporoparietal junction and inferior frontal gyrus. However, connections to the language network did not predict working memory test performance. CONCLUSIONS: These results provide supporting evidence for the multicomponent model of working memory in terms of the biology underlying the MCCB findings.
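The distance covariance statistic used in the analysis can be computed directly from double-centred pairwise distance matrices. A minimal NumPy sketch of the sample statistic (an illustration, not the authors' implementation) is:

```python
import numpy as np

def dcov(x, y):
    """Sample distance covariance between paired samples x and y.

    Unlike Pearson correlation, the population distance covariance is zero
    only under independence, so it detects nonlinear dependence as well.
    """
    x = np.asarray(x, dtype=float).reshape(len(x), -1)
    y = np.asarray(y, dtype=float).reshape(len(y), -1)
    # Pairwise Euclidean distance matrices.
    a = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    b = np.linalg.norm(y[:, None, :] - y[None, :, :], axis=-1)
    # Double-centre each distance matrix.
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    return np.sqrt(np.mean(A * B))

rng = np.random.default_rng(0)
t = rng.normal(size=200)
# t and t**2 are uncorrelated but strongly dependent, which dcov picks up.
print(dcov(t, t**2) > dcov(t, rng.normal(size=200)))
```

In the study's setting, `x` would be a voxel's connectivity feature vector across participants and `y` the working memory scores; this sketch works for vectors of any dimension on each side.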


Subjects
Cognitive Dysfunction/physiopathology , Executive Function/physiology , Memory, Short-Term/physiology , Nerve Net/physiopathology , Schizophrenia/complications , Adult , Female , Humans , Image Processing, Computer-Assisted/statistics & numerical data , Magnetic Resonance Imaging , Male , Neuropsychological Tests , Schizophrenic Psychology
20.
PLoS One ; 14(4): e0214852, 2019.
Article in English | MEDLINE | ID: mdl-30973907

ABSTRACT

In this paper, we put forward a real-time, multi-GPU-accelerated virtual-reality interaction simulation framework in which objects reconstructed from camera images interact with virtual deformable objects. First, based on an extended voxel-based visual hull (VbVH) algorithm, we design an image-based 3D reconstruction platform for real objects. Then, an improved hybrid deformation model, which couples the geometry-constrained fast lattice shape matching (FLSM) method with the total Lagrangian explicit dynamics (TLED) algorithm, is proposed to achieve efficient and stable simulation of the virtual objects' elastic deformations. Finally, one-way virtual-reality interactions, including virtual cutting of soft tissue with bleeding effects, are successfully simulated. Moreover, to significantly improve the computational efficiency of each time step, we implement the entire framework on multiple GPUs using the compute unified device architecture (CUDA). The experimental results demonstrate that our multi-GPU-accelerated virtual-reality interaction framework achieves real-time performance at moderate problem scales, providing an effective new 3D interaction technique for virtual-reality applications.
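At the core of lattice shape matching methods such as FLSM is a per-region best-fit rigid transform of the rest shape onto the current, deformed positions; particles are then pulled toward these goal positions. A minimal single-region sketch using the Kabsch algorithm (an illustration of the principle, not the paper's GPU implementation) is:

```python
import numpy as np

def shape_match_goal(rest, current):
    """Goal positions for rigid shape matching: the best-fit
    rotation-plus-translation of the rest shape onto the current one."""
    c0, c = rest.mean(axis=0), current.mean(axis=0)
    P, Q = rest - c0, current - c
    # Optimal rotation via SVD of the cross-covariance (Kabsch algorithm).
    U, _, Vt = np.linalg.svd(P.T @ Q)
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:      # guard against reflections
        U[:, -1] *= -1
        R = (U @ Vt).T
    return (R @ P.T).T + c

rest = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
cur = rest + np.array([2.0, 3.0])           # pure translation of the rest shape
print(np.allclose(shape_match_goal(rest, cur), cur))  # True: rigid motion is recovered exactly
```

In a full FLSM-style solver this runs per overlapping lattice region each time step, and each particle is blended toward the goal positions of the regions containing it, which keeps the deformation stable even with large time steps.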


Subjects
Computer Graphics , Virtual Reality , Algorithms , Computer Graphics/statistics & numerical data , Computer Simulation , Computer Systems , Computer-Assisted Instruction , Humans , Image Processing, Computer-Assisted/methods , Image Processing, Computer-Assisted/statistics & numerical data , Imaging, Three-Dimensional/methods , Imaging, Three-Dimensional/statistics & numerical data , Models, Anatomic , Surgical Procedures, Operative/education , User-Computer Interface