Results 1 - 7 of 7

1.
J Neurosci ; 41(41): 8577-8588, 2021 Oct 13.
Article in English | MEDLINE | ID: mdl-34413204

ABSTRACT

Neuronal ensembles are groups of neurons with coordinated activity that could represent sensory, motor, or cognitive states. The study of how neuronal ensembles are built, recalled, and involved in guiding complex behaviors has been limited by the lack of experimental and analytical tools to reliably identify and manipulate neurons that have the ability to activate entire ensembles. Such pattern completion neurons have also been proposed as key elements of artificial and biological neural networks. Indeed, the relevance of pattern completion neurons is highlighted by growing evidence that targeting them can activate neuronal ensembles and trigger behavior. As a method to reliably detect pattern completion neurons, we use conditional random fields (CRFs), a type of probabilistic graphical model. We apply CRFs to identify pattern completion neurons in ensembles in experiments using in vivo two-photon calcium imaging from primary visual cortex of male mice and confirm the CRF predictions with two-photon optogenetics. To test the broader applicability of CRFs, we also analyze publicly available calcium imaging data (Allen Institute Brain Observatory dataset) and demonstrate that CRFs can reliably identify neurons that predict specific features of visual stimuli. Finally, to explore the scalability of CRFs, we apply them to in silico network simulations and show that CRF-identified pattern completion neurons have increased functional connectivity. These results demonstrate the potential of CRFs to characterize and selectively manipulate neural circuits.

SIGNIFICANCE STATEMENT: We describe a graph theory method to identify and optically manipulate neurons with pattern completion capability in mouse cortical circuits. Using calcium imaging and two-photon optogenetics in vivo, we confirm that key neurons identified by this method can recall entire neuronal ensembles. This method could be broadly applied to manipulate neuronal ensemble activity to trigger behavior or for therapeutic applications in brain prostheses.
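
A minimal sketch of the kind of analysis behind this result, assuming synthetic binarized calcium traces. The paper fits conditional random fields; this stand-in only builds a co-activation graph and ranks neurons by weighted degree as candidate pattern completion targets, so every name and threshold here is illustrative, not the authors' pipeline:

    # Rank candidate pattern completion neurons from binarized activity.
    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(0)
    spikes = (rng.random((50, 1000)) < 0.05).astype(int)  # neurons x frames

    coact = spikes @ spikes.T        # frames where both neurons fire
    np.fill_diagonal(coact, 0)

    G = nx.Graph()
    threshold = np.percentile(coact, 95)  # keep only the strongest pairs
    n = spikes.shape[0]
    for i in range(n):
        for j in range(i + 1, n):
            if coact[i, j] >= threshold:
                G.add_edge(i, j, weight=int(coact[i, j]))

    # High weighted degree marks candidates for targeted stimulation.
    strength = dict(G.degree(weight="weight"))
    print(sorted(strength, key=strength.get, reverse=True)[:5])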


Subjects
Models, Neurological , Neurons/physiology , Pattern Recognition, Visual/physiology , Probability , Visual Cortex/physiology , Animals , Male , Mice , Mice, Inbred C57BL , Microscopy, Fluorescence, Multiphoton/methods , Neurons/chemistry , Optogenetics/methods , Photic Stimulation/methods , Visual Cortex/chemistry , Visual Cortex/cytology
2.
PLoS One ; 14(6): e0218183, 2019.
Article in English | MEDLINE | ID: mdl-31194825

ABSTRACT

The blooms of Noctiluca in the Gulf of Oman and the Arabian Sea have been intensifying in recent years, now posing a threat to regional fisheries and the long-term health of an ecosystem supporting a coastal population of nearly 120 million people. We present the results of a local-scale data analysis to investigate the onset and patterns of the Noctiluca blooms, which form annually during the winter monsoon in the Gulf of Oman and in the Arabian Sea. Our approach combines methods in physical and biological oceanography with machine learning techniques. In particular, we present a robust algorithm, the variable-length Linear Dynamic Systems (vLDS) model, that extracts the causal factors and latent dynamics at the local scale along each individual drifter trajectory, and we demonstrate its effectiveness by using it to generate predictive plots for all variables and test macroscopic scientific hypotheses. The vLDS model is a new algorithm specifically designed to analyze the irregular dataset from surface velocity drifters, in which the multivariate time series trajectories have variable or unequal lengths. The test results provide local-scale statistical evidence to support and check the macroscopic physical and biological oceanography hypotheses on the Noctiluca blooms; they also help identify complementary local trajectory-scale dynamics that might not be visible or discoverable at the macroscopic scale. As a machine learning methodology, the vLDS model also generalizes to investigating important causal factors and hidden dynamics associated with ocean biogeochemical processes and phenomena at the population level and local trajectory scale.
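
A minimal sketch of the core idea, assuming toy 2-D trajectories: fit one shared dynamics matrix to drifter tracks of unequal length by pooling all consecutive-state pairs. The published vLDS model is richer (latent states, causal factors); this least-squares fit is only an illustration:

    # Fit shared linear dynamics x_{t+1} = A x_t across variable-length tracks.
    import numpy as np

    rng = np.random.default_rng(1)
    A_true = np.array([[0.9, 0.1], [-0.1, 0.9]])

    def simulate(T):
        x = np.zeros((T, 2))
        x[0] = rng.normal(size=2)
        for t in range(T - 1):
            x[t + 1] = A_true @ x[t] + 0.01 * rng.normal(size=2)
        return x

    tracks = [simulate(T) for T in (40, 75, 120)]  # unequal lengths

    X = np.vstack([tr[:-1] for tr in tracks])  # pool every transition
    Y = np.vstack([tr[1:] for tr in tracks])
    A_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)  # solves X @ M ≈ Y, M = A.T
    print(np.round(A_hat.T, 2))  # close to A_true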


Subjects
Algorithms , Dinoflagellida/growth & development , Oceans and Seas , Environmental Monitoring , Linear Models , Models, Biological , Seawater
3.
Bioinformatics ; 22(22): 2753-60, 2006 Nov 15.
Article in English | MEDLINE | ID: mdl-16966363

ABSTRACT

MOTIVATION: Drawing inferences from large, heterogeneous sets of biological data requires a theoretical framework that is capable of representing, e.g., DNA and protein sequences, protein structures, microarray expression data, and various types of interaction networks. Recently, a class of algorithms known as kernel methods has emerged as a powerful framework for combining diverse types of data. The support vector machine (SVM) algorithm is the most popular kernel method, due to its theoretical underpinnings and strong empirical performance on a wide variety of classification tasks. Furthermore, several recently described extensions allow the SVM to assign relative weights to various datasets, depending upon their utility for a given classification task. RESULTS: In this work, we empirically investigate the performance of the SVM on the task of inferring gene functional annotations from a combination of protein sequence and structure data. Our results suggest that the SVM is quite robust to noise in the input datasets. Consequently, in the presence of only two types of data, an SVM trained from an unweighted combination of datasets performs as well as or better than a more sophisticated algorithm that assigns weights to individual data types. Indeed, for this simple case, we can demonstrate empirically that no solution is significantly better than the naive, unweighted average of the two datasets. On the other hand, when multiple noisy datasets are included in the experiment, the naive approach fares worse than the weighted approach. Our results suggest that for many applications, a naive unweighted sum of kernels may be sufficient. AVAILABILITY: http://noble.gs.washington.edu/proj/seqstruct
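
A minimal sketch of the abstract's take-away, assuming synthetic features in place of real sequence and structure data: an SVM trained on a naive, unweighted sum of two kernels:

    # Unweighted kernel combination fed to an SVM via a precomputed kernel.
    import numpy as np
    from sklearn.metrics.pairwise import rbf_kernel
    from sklearn.svm import SVC

    rng = np.random.default_rng(2)
    X_seq = rng.normal(size=(100, 20))     # stand-in for sequence features
    X_struct = rng.normal(size=(100, 10))  # stand-in for structure features
    y = (X_seq[:, 0] + X_struct[:, 0] > 0).astype(int)

    K = rbf_kernel(X_seq) + rbf_kernel(X_struct)  # naive unweighted sum

    clf = SVC(kernel="precomputed").fit(K, y)
    print("training accuracy:", clf.score(K, y))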


Subjects
Computational Biology/methods , Proteins/chemistry , Proteomics/methods , Algorithms , Databases, Protein , Fungal Proteins/chemistry , Models, Statistical , Pattern Recognition, Automated , ROC Curve , Sequence Alignment , Sequence Analysis, Protein , Software
4.
IEEE Trans Pattern Anal Mach Intell ; 27(10): 1675-9, 2005 Oct.
Article in English | MEDLINE | ID: mdl-16238002

ABSTRACT

Principal Component Analysis (PCA) is extensively used in computer vision and image processing. Since it provides the optimal linear subspace in a least-squares sense, it has been used for dimensionality reduction and subspace analysis in various domains. However, its scalability is very limited because of its inherent computational complexity. We introduce a new framework for applying PCA to visual data which takes advantage of the spatio-temporal correlation and localized frequency variations that are typically found in such data. Instead of applying PCA to the whole volume of data (complete set of images), we partition the volume into a set of blocks and apply PCA to each block. Then, we group the subspaces corresponding to the blocks and merge them. As a result, we not only achieve greater efficiency in the resulting representation of the visual data, but also successfully scale PCA to handle large data sets. We present a thorough analysis of the computational complexity and storage benefits of our approach. We apply our algorithm to several types of videos. We show that, in addition to its storage and speed benefits, the algorithm results in a useful representation of the visual data.
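
A minimal sketch of the block-wise scheme, assuming a synthetic video volume; the block size and per-block rank are illustrative choices, and the paper's subspace-merging step is omitted:

    # Partition frames into blocks and fit a small PCA per block.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(3)
    frames = rng.random((200, 64, 64))  # 200 frames of 64x64 pixels
    B, rank = 16, 4                     # block edge length, components per block

    models, codes = [], []
    for r in range(0, 64, B):
        for c in range(0, 64, B):
            block = frames[:, r:r + B, c:c + B].reshape(200, -1)
            pca = PCA(n_components=rank).fit(block)
            models.append(pca)
            codes.append(pca.transform(block))

    # Compare raw storage with basis vectors plus per-frame codes.
    compressed = sum(c.size for c in codes) + sum(m.components_.size for m in models)
    print("raw:", frames.size, "compressed:", compressed)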


Subjects
Algorithms , Artificial Intelligence , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Information Storage and Retrieval/methods , Pattern Recognition, Automated/methods , Principal Component Analysis , Computer Simulation , Models, Statistical
5.
Stud Health Technol Inform ; 111: 414-7, 2005.
Article in English | MEDLINE | ID: mdl-15718770

ABSTRACT

BACKGROUND: Simulated environments present challenges to both clinical experts and novices in laparoscopic surgery. Experts and novices may have different expectations when confronted with a novel simulated environment. The LapSim is a computer-based virtual reality laparoscopic trainer. Our aim was to analyze the performance of experienced basic laparoscopists and novices during their first exposure to the LapSim Basic Skill set and Dissection module. METHODS: Experienced basic laparoscopists (n=16) were defined as attending surgeons and chief residents who had performed >30 laparoscopic cholecystectomies. Novices (n=13) were surgical residents with minimal laparoscopic experience. None of the subjects had previously used a computer-based laparoscopic simulator. Subjects were given one practice session on the LapSim tutorial and dissection module and were supervised throughout the testing. Instrument motion, completion time, and errors were recorded by the LapSim. A Performance Score (PS) was calculated as the sum of total errors and time to task completion. A Relative Efficiency Score (RES) was calculated from the sum of the path lengths and angular path lengths for each hand, expressed as a ratio of the subject's score to the worst score achieved among the subjects. Groups were compared using the Kruskal-Wallis and Mann-Whitney U tests. RESULTS: Novices achieved better PS and/or RES in Instrument Navigation, Suturing, and Dissection (p<0.05). There was no difference in PS or RES between experts and novices in the remaining skills. CONCLUSION: Novices tended to perform better than the experienced basic laparoscopists during their first exposure to the LapSim Basic Skill set and Dissection module.
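
A minimal sketch of the two scores as the abstract describes them; the full LapSim metric definitions are not given here, so both helper functions below are literal readings of the text (lower is better for both):

    def performance_score(total_errors, completion_time_s):
        # PS: sum of total errors and time to task completion.
        return total_errors + completion_time_s

    def relative_efficiency_scores(path_sums):
        # RES: each subject's summed path + angular path lengths,
        # as a ratio to the worst (largest) score in the cohort.
        worst = max(path_sums)
        return [s / worst for s in path_sums]

    print(performance_score(total_errors=4, completion_time_s=95))  # 99
    print(relative_efficiency_scores([120.0, 240.0, 180.0]))        # [0.5, 1.0, 0.75]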


Subjects
Computer Simulation , Laparoscopy , Task Performance and Analysis , User-Computer Interface , Clinical Competence , Humans , Inservice Training , Internship and Residency
6.
Stud Health Technol Inform ; 111: 418-21, 2005.
Article in English | MEDLINE | ID: mdl-15718771

ABSTRACT

BACKGROUND: Several training modules currently exist to improve performance during video-assisted surgery. The unique characteristics of robotic surgery make these platforms an inadequate environment for the development and assessment of robotic surgical performance. METHODS: Expert surgeons (n=4) (>50 clinical robotic procedures and >2 years of clinical robotic experience) were compared to novice surgeons (n=17) (<5 clinical cases and limited laboratory experience) using the da Vinci Surgical System. Seven drills were designed to simulate clinical robotic surgical tasks. Performance score was calculated as time to completion + 5 x (minor errors) + 10 x (major errors). The Robotic Learning Curve (RLC) was expressed as a trend line through the performance scores for each repeated drill. RESULTS: Performance scores for experts were better than those for novices in all 7 drills (p<0.05). The RLC for novices reflected an improvement in scores (p<0.05). In contrast, experts demonstrated a flat RLC for 6 drills and an improvement in one drill (p=0.027). CONCLUSION: This new drill set provides a framework for performance assessment during robotic surgery. The inclusion of particular drills and their role in training future robotic surgeons await larger validation studies.
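
A minimal sketch of the scoring and learning-curve fit described above; the error weights come from the abstract, while the sample attempts are invented:

    # Drill score plus a trend line across repeated attempts (the RLC).
    import numpy as np

    def drill_score(time_s, minor_errors, major_errors):
        return time_s + 5 * minor_errors + 10 * major_errors

    attempts = [(180, 4, 2), (150, 3, 1), (130, 2, 1), (110, 1, 0)]
    scores = [drill_score(*a) for a in attempts]      # [220, 175, 150, 115]
    slope, _ = np.polyfit(range(len(scores)), scores, 1)
    print(scores, round(slope, 1))  # negative slope = improving curve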


Subjects
Robotics , Surgery, Computer-Assisted/methods , Task Performance and Analysis , Clinical Competence , Humans , Internship and Residency