Results 1 - 20 of 38
1.
J Comput Neurosci ; 42(1): 1-10, 2017 Feb.
Article in English | MEDLINE | ID: mdl-27629590

ABSTRACT

Neuron modeling may be said to have originated with the Hodgkin and Huxley action potential model in 1952 and Rall's models of integrative activity of dendrites in 1964. Over the ensuing decades, these approaches have led to a massive development of increasingly accurate and complex data-based models of neurons and neuronal circuits. ModelDB was founded in 1996 to support this new field and enhance the scientific credibility and utility of computational neuroscience models by providing a convenient venue for sharing them. It has grown to include over 1100 published models covering more than 130 research topics. It is actively curated and developed to help researchers discover and understand models of interest. ModelDB also provides mechanisms to assist running models both locally and remotely, and has a graphical tool that enables users to explore the anatomical and biophysical properties that are represented in a model. Each of its capabilities is undergoing continued refinement and improvement in response to user experience. Large research groups (Allen Brain Institute, EU Human Brain Project, etc.) are emerging that collect data across multiple scales and integrate that data into many complex models, presenting new challenges of scale. We end by predicting a future for neuroscience increasingly fueled by new technology and high performance computation, and increasingly in need of comprehensive user-friendly databases such as ModelDB to provide the means to integrate the data for deeper insights into brain function in health and disease.


Subjects
Databases, Factual; Models, Neurological; Neurosciences; Brain; Humans; Neurons
2.
BMC Med Inform Decis Mak ; 17(1): 111, 2017 Jul 19.
Article in English | MEDLINE | ID: mdl-28724368

ABSTRACT

BACKGROUND: The US Veterans Administration (VA) has developed a robust and mature computational infrastructure in support of its electronic health record (EHR). Web technology offers a powerful set of tools for structuring clinical decision support (CDS) around clinical care. This paper describes informatics challenges and design issues that were confronted in the process of building three Web-based CDS systems in the context of the VA EHR. METHODS: Over the course of several years, we implemented three Web-based CDS systems that extract patient data from the VA EHR environment to provide patient-specific CDS. These were 1) the VACS (Veterans Aging Cohort Study) Index Calculator which estimates prognosis for HIV+ patients, 2) Neuropath/CDS which assists in the medical management of patients with neuropathic pain, and 3) TRIM (Tool to Reduce Inappropriate Medications) which identifies potentially inappropriate medications in older adults and provides recommendations for improving the medication regimen. RESULTS: The paper provides an overview of the VA EHR environment and discusses specific informatics issues/challenges that arose in the context of each of the three Web-based CDS systems. We discuss specific informatics methods and provide details of approaches that may be useful within this setting. CONCLUSIONS: Informatics issues and challenges relating to data access and data availability arose because of the particular architecture of the national VA infrastructure and the need to link to that infrastructure from local Web-based CDS systems. Idiosyncrasies of VA patient data, especially the medication data, also posed challenges. Other issues related to specific functional needs of individual CDS systems. The goal of this paper is to describe these issues so that our experience may serve as a useful foundation to assist others who wish to build such systems in the future.


Subjects
Decision Support Systems, Clinical; Electronic Health Records/statistics & numerical data; United States Department of Veterans Affairs; Decision Support Systems, Clinical/standards; Humans; United States
3.
Brief Bioinform ; 10(4): 345-53, 2009 Jul.
Article in English | MEDLINE | ID: mdl-19505888

ABSTRACT

As the number of neuroscience databases increases, the need for neuroscience data integration grows. This paper reviews and compares several approaches, including the Neuroscience Database Gateway (NDG), Neuroscience Information Framework (NIF) and Entrez Neuron, which enable neuroscience database annotation and integration. These approaches cover a range of activities, from registry and discovery to integration of a wide variety of neuroscience data sources. They also provide different user interfaces for browsing, querying and displaying query results. In Entrez Neuron, for example, four different facets or tree views (neuron, neuronal property, gene and drug) are used to hierarchically organize concepts that can be used for querying a collection of ontologies. The facets are also used to define the structure of the query results.


Subjects
Database Management Systems; Databases, Factual; Information Storage and Retrieval/methods; Neurosciences/methods; Information Storage and Retrieval/trends; Internet; Software; User-Computer Interface; Vocabulary, Controlled
4.
Neuroinformatics ; 5(2): 105-14, 2007.
Article in English | MEDLINE | ID: mdl-17873372

ABSTRACT

Brain odor maps are reconstructed flat images that describe the spatial activity patterns in the glomerular layer of the olfactory bulbs in animals exposed to different odor stimuli. We have developed a software application, OdorMapComparer, to carry out quantitative analyses and comparisons of fMRI odor maps. This application is an open-source Windows program that first loads two odor map images being compared. It allows image transformations including scaling, flipping, rotating, and warping so that the two images can be appropriately aligned to each other. It performs simple subtraction, addition, and averaging of signals in the two images. It also provides comparative statistics including the normalized correlation (NC) and spatial correlation coefficient. Experimental studies showed that the rodent fMRI odor maps for aliphatic aldehydes displayed spatial activity patterns that are similar in gross outlines but somewhat different in specific subregions. Analyses with OdorMapComparer indicate that the similarity between odor maps decreases with increasing difference in the length of carbon chains. For example, the map of butanal is more closely related to that of pentanal (NC = 0.617) than to that of octanal (NC = 0.082), which is consistent with animal behavioral studies. The study also indicates that fMRI odor maps are statistically odor-specific and repeatable across both intra- and intersubject trials. OdorMapComparer thus provides a tool for quantitative, statistical analyses and comparisons of fMRI odor maps in a fashion that is integrated with the overall odor mapping techniques.
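The abstract does not give the NC formula. As a sketch, a normalized correlation between two aligned maps can be computed as a cosine-style similarity over the flattened pixel vectors; the function and argument names here are illustrative, not OdorMapComparer's API:

```python
import numpy as np

def normalized_correlation(map_a, map_b):
    """Cosine-style normalized correlation between two aligned 2D maps.

    A hypothetical re-implementation of an NC-like statistic; the exact
    formula used by OdorMapComparer may differ.
    """
    a = np.asarray(map_a, dtype=float).ravel()
    b = np.asarray(map_b, dtype=float).ravel()
    return float(np.dot(a, b) / np.sqrt(np.dot(a, a) * np.dot(b, b)))

# Identical maps give NC = 1.0; non-overlapping activity gives NC = 0.0.
identical = normalized_correlation([[1, 2], [3, 4]], [[1, 2], [3, 4]])
disjoint = normalized_correlation([[1, 0]], [[0, 1]])
```

On this definition, maps with similar spatial activity patterns score near 1 and dissimilar maps near 0, matching the qualitative behavior reported for butanal/pentanal versus butanal/octanal.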


Subjects
Brain Mapping/methods; Brain/physiology; Magnetic Resonance Imaging; Odorants; Smell/physiology; Aldehydes; Animals; Brain/blood supply; Image Processing, Computer-Assisted; Male; Oxygen/blood; Rats; Reproducibility of Results; Software
5.
J Am Med Inform Assoc ; 14(3): 355-60, 2007.
Article in English | MEDLINE | ID: mdl-17329721

ABSTRACT

This paper describes NeuroExtract, a pilot system that facilitates the integrated retrieval of Internet-based information relevant to the neurosciences. The approach involved extracting descriptive metadata from the sources using domain-specific queries; retrieving, processing, and organizing the data into structured text files; searching the data files using text-based queries; and providing the results in a Web page along with descriptions of entries and URL links to the original sources. NeuroExtract has been implemented for three bioscience resources, SWISSPROT, GEO, and PDB, which provide neuroscience-related information as sub-topics. We discuss several issues that arose in the course of NeuroExtract's implementation. This project is a first step in exploring how this general approach might be used, in conjunction with other query mediation approaches, to facilitate the integration of many Internet-accessible resources relevant to the neurosciences.


Subjects
Databases as Topic; Information Storage and Retrieval/methods; Neurosciences; User-Computer Interface; Internet; Pilot Projects
6.
J Am Med Inform Assoc ; 14(1): 86-93, 2007.
Article in English | MEDLINE | ID: mdl-17068350

ABSTRACT

Data sparsity and schema evolution issues affecting clinical informatics and bioinformatics communities have led to the adoption of vertical or object-attribute-value-based database schemas to overcome limitations posed when using conventional relational database technology. This paper explores these issues and discusses why biomedical data are difficult to model using conventional relational techniques. The authors propose a solution to these obstacles based on a relational database engine using a sparse, column-store architecture. The authors provide benchmarks comparing the performance of queries and schema-modification operations using three different strategies: (1) the standard conventional relational design; (2) past approaches used by biomedical informatics researchers; and (3) their sparse, column-store architecture. The performance results show that their architecture is a promising technique for storing and processing many types of data that are not handled well by the other two semantic data models.
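A minimal sketch of the vertical (attribute-value) layout the paper discusses, using SQLite. The table and attribute names here are hypothetical, not the authors' schema; the point illustrated is that a new clinical attribute arrives as a row, with no schema modification:

```python
import sqlite3

# One narrow "vertical" table instead of one wide column per attribute.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE eav (entity_id INTEGER, attribute TEXT, value TEXT)")
conn.executemany("INSERT INTO eav VALUES (?, ?, ?)", [
    (1, "diagnosis", "neuropathic pain"),
    (1, "age", "72"),
    (2, "diagnosis", "hypertension"),
])

# Adding a previously unseen attribute needs no ALTER TABLE -- just a row.
conn.execute("INSERT INTO eav VALUES (?, ?, ?)", (2, "hba1c", "7.1"))

# Pivoting one sparse entity back into a record:
record = dict(conn.execute(
    "SELECT attribute, value FROM eav WHERE entity_id = 1"))
```

The trade-off the paper benchmarks is that this flexibility pushes work into query-time pivoting, which is where a sparse column-store engine can help.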


Subjects
Database Management Systems; Databases as Topic/organization & administration; Information Storage and Retrieval; Computational Biology; Humans; Medical Records Systems, Computerized; Neurology; Software
7.
J Biomed Inform ; 40(6): 750-60, 2007 Dec.
Article in English | MEDLINE | ID: mdl-17625973

ABSTRACT

Genome-wide association studies can help identify multi-gene contributions to disease. As the number of high-density genomic markers tested increases, however, so does the number of loci associated with disease by chance. Performing a brute-force test for the interaction of four or more high-density genomic loci is infeasible given current computational limitations. Heuristics must be employed to limit the number of statistical tests performed. In this paper we explore the use of biological domain knowledge to supplement statistical analysis and data mining methods to identify genes and pathways associated with disease. We describe Pathway/SNP, a software application designed to help evaluate the association between pathways and disease. Pathway/SNP integrates domain knowledge (SNP, gene, and pathway annotation from multiple sources) with statistical and data mining algorithms into a tool that can be used to explore the etiology of complex diseases.


Subjects
Chromosome Mapping/methods; Databases, Genetic; Genetic Predisposition to Disease/genetics; Information Storage and Retrieval/methods; Oligonucleotide Array Sequence Analysis/methods; Polymorphism, Single Nucleotide/genetics; Proteome/genetics; Artificial Intelligence; Biomarkers/analysis; Data Interpretation, Statistical; Humans; Systems Integration
8.
J Biomed Inform ; 40(1): 73-9, 2007 Feb.
Article in English | MEDLINE | ID: mdl-16650809

ABSTRACT

Computational biology and bioinformatics (CBB), the terms often used interchangeably, represent a rapidly evolving biological discipline. With the clear potential for discovery and innovation, and the need to deal with the deluge of biological data, many academic institutions are committing significant resources to develop CBB research and training programs. Yale formally established an interdepartmental Ph.D. program in CBB in May 2003. This paper describes Yale's program, discussing the scope of the field, the program's goals and curriculum, as well as a number of issues that arose in implementing the program. (Further updated information is available from the program's website, www.cbb.yale.edu.)


Subjects
Computational Biology/education; Computational Biology/organization & administration; Education, Graduate/organization & administration; Education, Professional/organization & administration; Universities/organization & administration; Connecticut; Curriculum; Research/organization & administration
9.
J Am Geriatr Soc ; 65(10): 2265-2271, 2017 Oct.
Article in English | MEDLINE | ID: mdl-28804870

ABSTRACT

OBJECTIVES: To examine the effect of the Tool to Reduce Inappropriate Medications (TRIM), a web tool linking an electronic health record (EHR) to a clinical decision support system, on medication communication and prescribing. DESIGN: Randomized clinical trial. SETTING: Primary care clinics at a Veterans Affairs Medical Center. PARTICIPANTS: Veterans aged 65 and older prescribed seven or more medications randomized to receipt of TRIM or usual care (N = 128). INTERVENTION: TRIM extracts information on medications and chronic conditions from the EHR and contains data entry screens for information obtained from brief chart review and telephonic patient assessment. These data serve as input for automated algorithms identifying medication reconciliation discrepancies, potentially inappropriate medications (PIMs), and potentially inappropriate regimens. Clinician feedback reports summarize discrepancies and provide recommendations for deprescribing. Patient feedback reports summarize discrepancies and self-reported medication problems. MEASUREMENTS: Primary: subscales of the Patient Assessment of Care for Chronic Conditions (PACIC) related to shared decision-making; clinician and patient communication. Secondary: changes in medications. RESULTS: 29.7% of TRIM participants and 15.6% of control participants provided the highest PACIC ratings; this difference was not significant. Adjusting for covariates and clustering of patients within clinicians, TRIM was associated with significantly more-active patient communication and facilitative clinician communication and with more medication-related communication among patients and clinicians. TRIM was significantly associated with correction of medication discrepancies but had no effect on number of medications or reduction in PIMs. CONCLUSION: TRIM improved communication about medications and accuracy of documentation. Although there was no association with prescribing, the small sample size provided limited power to examine medication-related outcomes.


Subjects
Chronic Disease/drug therapy; Decision Support Systems, Clinical; Deprescriptions; Medication Reconciliation/methods; Potentially Inappropriate Medication List; Software; Aged; Aged, 80 and over; Communication; Electronic Health Records; Female; Humans; Male; Polypharmacy; United States; United States Department of Veterans Affairs; Veterans
10.
J Am Med Inform Assoc ; 13(4): 432-7, 2006.
Article in English | MEDLINE | ID: mdl-16622167

ABSTRACT

This paper describes an open-source application, ResourceLog, that allows website administrators to record and analyze the usage of online resources. The application includes four components: logging, data mining, an administrative interface, and a back-end database. The logging component is embedded in the host website. It extracts and streamlines information about the Web visitors, the scripts, and dynamic parameters from each page request. The data mining component runs as a set of scheduled tasks that identify visitors of interest, such as those who have heavily used the resources. The identified visitors are automatically invited to complete a voluntary user survey. The usage of the website content can be monitored through the administrative interface and subjected to statistical analyses. As a pilot project, ResourceLog has been implemented in SenseLab, a Web-based neuroscience database system. ResourceLog provides a robust and useful tool to aid system evaluation of a resource-driven Web application, with a focus on determining the effectiveness of data sharing in the field and with the general public.


Subjects
Bibliometrics; Information Services/statistics & numerical data; Internet/statistics & numerical data; Software; Databases as Topic/statistics & numerical data; Pilot Projects
11.
Pharmacotherapy ; 36(6): 694-701, 2016 06.
Article in English | MEDLINE | ID: mdl-27041466

ABSTRACT

STUDY OBJECTIVE: To create a clinical decision support system (CDSS) for evaluating problems with medications among older outpatients based on a broad set of criteria. DESIGN: Web-based CDSS development. SETTING: Primary care clinics at a Veterans Affairs medical center. PARTICIPANTS: Forty veterans 65 years and older who were prescribed seven or more medications that included those for treatment of diabetes mellitus and hypertension. MEASUREMENTS AND MAIN RESULTS: The Tool to Reduce Inappropriate Medications (TRIM) uses a program to extract age, medications, and chronic conditions from the electronic health record to identify high-risk patients and as input for evaluating the medication regimen. Additional health variables obtained through chart review and direct patient assessment are entered into a Web-based program. Based on a series of algorithms, TRIM generates feedback reports for clinicians. TRIM identified medication reconciliation discrepancies in 98% (39/40) of veterans, potentially inappropriate medications in 58% (23/40), potential problems with feasibility (based on poor adherence and/or cognitive impairment) in 25% (10/40), potential overtreatment of hypertension in 50% (20/40), potential overtreatment of diabetes in 43% (17/40), inappropriate dosing of renally excreted medications in 5% (2/40), and patient-reported adverse reactions in 5% (2/40). CONCLUSION: This evaluation of TRIM demonstrated that data elements can be extracted from the electronic health record to identify older primary care patients at risk for potentially problematic medication regimens. Supplemented with chart review and direct patient assessment, these data can be processed through clinical algorithms that identify potential problems and generate patient-specific feedback reports. Additional work is necessary to assess the effects of TRIM on medication deprescribing.


Subjects
Decision Support Systems, Clinical/instrumentation; Inappropriate Prescribing/prevention & control; Medication Errors/prevention & control; Aged; Aged, 80 and over; Algorithms; Chronic Disease; Humans; Male; Polypharmacy
12.
J Am Med Inform Assoc ; 12(1): 90-8, 2005.
Article in English | MEDLINE | ID: mdl-15492032

ABSTRACT

The rapid advances in high-throughput biotechnologies such as DNA microarrays and mass spectrometry have generated vast amounts of data ranging from gene expression to proteomics data. The large size and complexity involved in analyzing such data demand a significant amount of computing power. High-performance computation (HPC) is an attractive and increasingly affordable approach to help meet this challenge. There is a spectrum of techniques that can be used to achieve computational speedup with varying degrees of impact in terms of how drastic a change is required to allow the software to run on an HPC platform. This paper describes a high-productivity/low-maintenance (HP/LM) approach to HPC that is based on establishing a collaborative relationship between the bioinformaticist and HPC expert that respects the former's codes and minimizes the latter's efforts. The goal of this approach is to make it easy for bioinformatics researchers to continue to make iterative refinements to their programs, while still being able to take advantage of HPC. The paper describes our experience applying these HP/LM techniques in four bioinformatics case studies: (1) genome-wide sequence comparison using Blast, (2) identification of biomarkers based on statistical analysis of large mass spectrometry data sets, (3) complex genetic analysis involving ordinal phenotypes, and (4) large-scale assessment of the effect of possible errors in analyzing microarray data. The case studies illustrate how the HP/LM approach can be applied to a range of representative bioinformatics applications and how the approach can lead to significant speedup of computationally intensive bioinformatics applications, while making only modest modifications to the programs themselves.
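The low-impact end of that spectrum of techniques can be sketched as replacing a sequential driver loop with a pool-based parallel map, leaving the per-item analysis code untouched. Here `score_sequence` is a trivial, invented stand-in for an unmodified analysis step (e.g. one sequence comparison), and a thread-backed pool stands in for the process pool one would use on a real HPC node:

```python
from multiprocessing.dummy import Pool  # thread-backed; same API as multiprocessing.Pool

def score_sequence(seq):
    # Hypothetical per-sequence task; in the HP/LM approach this body is
    # the researcher's existing code and is not changed at all.
    return len(seq)

sequences = ["ACGT", "ACGTAC", "AC"]

# The only modification: pool.map replaces the serial driver loop.
with Pool(2) as pool:
    scores = pool.map(score_sequence, sequences)
```

Because `pool.map` is a drop-in for the built-in `map`, the researcher keeps iterating on `score_sequence` while the HPC-facing change stays confined to the driver.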


Subjects
Computational Biology; Computing Methodologies; Amino Acid Sequence; Mass Spectrometry; Microarray Analysis; Phenotype; Sequence Analysis
13.
Genomics Proteomics Bioinformatics ; 13(1): 25-35, 2015 Feb.
Article in English | MEDLINE | ID: mdl-25712262

ABSTRACT

We report a significantly enhanced bioinformatics suite and database for proteomics research called the Yale Protein Expression Database (YPED) that is used by investigators at more than 300 institutions worldwide. YPED meets the data management, archival, and analysis needs of high-throughput mass spectrometry-based proteomics research, ranging from a single laboratory, to a group of laboratories within and beyond an institution, to the entire proteomics community. The current version is a significant improvement over the first version in that it contains new modules for liquid chromatography-tandem mass spectrometry (LC-MS/MS) database search results, label-based and label-free quantitative proteomic analysis, and several scoring outputs for phosphopeptide site localization. In addition, we have added both peptide and protein comparative analysis tools to enable pairwise analysis of distinct peptides/proteins in each sample and of overlapping peptides/proteins between all samples in multiple datasets. We have also implemented a targeted proteomics module for automated multiple reaction monitoring (MRM)/selected reaction monitoring (SRM) assay development. We have linked YPED's database search results and both label-based and label-free fold-change analysis to the Skyline Panorama repository for online spectra visualization. In addition, we have built enhanced functionality to curate peptide identifications into an MS/MS peptide spectral library for all of our protein database search identification results.


Subjects
Chromatography, Liquid/methods; Computational Biology/methods; Databases, Protein; Peptide Fragments/analysis; Proteome/analysis; Proteomics/methods; Tandem Mass Spectrometry/methods; Humans
14.
Neuroinformatics ; 1(2): 149-65, 2003.
Article in English | MEDLINE | ID: mdl-15046238

ABSTRACT

The requirements for neuroinformatics to make a significant impact on neuroscience are not simply technical (the hardware, software, and protocols for collaborative research); they also include the legal and policy frameworks within which projects operate. This is not least because the creation of large collaborative scientific databases amplifies the complicated interactions between proprietary, for-profit R&D and public "open science." In this paper, we draw on experiences from the field of genomics to examine some of the likely consequences of these interactions in neuroscience. Facilitating the widespread sharing of data and tools for neuroscientific research will accelerate the development of neuroinformatics. We propose approaches to overcome the cultural and legal barriers that have slowed these developments to date. We also draw on legal strategies employed by the Free Software community to suggest frameworks that neuroinformatics might adopt to reinforce the role of public-science databases, and we propose a mechanism for identifying and allowing "open science" uses for data whilst still permitting flexible licensing for secondary commercial research.


Subjects
Computational Biology/legislation & jurisprudence; Computational Biology/organization & administration; Databases, Factual/legislation & jurisprudence; Neurosciences/trends; Policy Making; Humans
15.
J Am Med Inform Assoc ; 11(4): 294-9, 2004.
Article in English | MEDLINE | ID: mdl-15064294

ABSTRACT

This report describes XDesc (eXperiment Description), a pilot project that serves as a case study exploring the degree to which an informatics capability developed in a clinical application can be ported for use in the biosciences. In particular, XDesc uses the Entity-Attribute-Value database implementation (including a great deal of metadata-based functionality) developed in TrialDB, a clinical research database, for use in describing the samples used in microarray experiments stored in the Yale Microarray Database (YMD). XDesc was linked successfully to both TrialDB and YMD, and was used to describe the data in three different microarray research projects involving Drosophila. In the process, a number of new desirable capabilities were identified in the bioscience domain. These were implemented on a pilot basis in XDesc, and subsequently "folded back" into TrialDB itself, enhancing its capabilities for dealing with clinical data. This case study provides a concrete example of how informatics research and development in clinical and bioscience domains has the potential for synergy and for cross-fertilization.


Subjects
Computational Biology/methods; Medical Informatics/methods; Oligonucleotide Array Sequence Analysis; Clinical Medicine; Databases, Factual; Information Systems; Pilot Projects; Systems Integration; User-Computer Interface; Vocabulary, Controlled
16.
J Am Med Inform Assoc ; 10(5): 444-53, 2003.
Article in English | MEDLINE | ID: mdl-12807806

ABSTRACT

The EAV/CR framework, designed for database support of rapidly evolving scientific domains, utilizes metadata to facilitate schema maintenance and automatic generation of Web-enabled browsing interfaces to the data. EAV/CR is used in SenseLab, a neuroscience database that is part of the national Human Brain Project. This report describes various enhancements to the framework. These include (1) the ability to create "portals" that present different subsets of the schema to users with a particular research focus, (2) a generic XML-based protocol to assist data extraction and population of the database by external agents, (3) a limited form of ad hoc data query, and (4) semantic descriptors for interclass relationships and links to controlled vocabularies such as the UMLS.


Subjects
Database Management Systems; Databases as Topic/organization & administration; Internet; Programming Languages; Semantics; Software; Vocabulary, Controlled
17.
J Am Med Inform Assoc ; 9(5): 491-9, 2002.
Article in English | MEDLINE | ID: mdl-12223501

ABSTRACT

This case study describes a project that explores issues of quality of service (QoS) relevant to the next-generation Internet (NGI), using the PathMaster application in a testbed environment. PathMaster is a prototype computer system that analyzes digitized cell images from cytology specimens and compares those images against an image database, returning a ranked set of "similar" cell images from the database. To perform NGI testbed evaluations, we used a cluster of nine parallel computation workstations configured as three subclusters using Cisco routers. This architecture provides a local "simulated Internet" in which we explored the following QoS strategies: (1) first-in-first-out queuing, (2) priority queuing, (3) weighted fair queuing, (4) weighted random early detection, and (5) traffic shaping. The study describes the results of using these strategies with a distributed version of the PathMaster system in the presence of different amounts of competing network traffic and discusses certain of the issues that arise. The goal of the study is to help introduce NGI QoS issues to the Medical Informatics community and to use the PathMaster NGI testbed to illustrate concretely certain of the QoS issues that arise.
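Of the five strategies, priority queuing is the simplest to sketch: queued packets are served strictly by priority class, with arrival order breaking ties within a class. This toy model (the payload names are invented) illustrates only the queuing discipline, not the Cisco router configuration used in the testbed:

```python
import heapq

def serve(packets):
    """Serve (priority, payload) packets in priority order.

    Lower priority numbers are served first; the arrival index breaks
    ties so same-priority packets stay first-in-first-out.
    """
    queue = []
    for order, (priority, payload) in enumerate(packets):
        heapq.heappush(queue, (priority, order, payload))
    return [heapq.heappop(queue)[2] for _ in range(len(queue))]

# Hypothetical traffic mix: the latency-sensitive query jumps the queue.
arrivals = [(2, "bulk-transfer"), (0, "pathmaster-query"), (1, "telemetry")]
served = serve(arrivals)
```

Under heavy competing traffic, a discipline like this is what lets the distributed PathMaster queries keep their response times while bulk flows absorb the delay.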


Subjects
Computer Communication Networks/standards; Image Cytometry/standards; Image Processing, Computer-Assisted/standards; Internet; Cell Biology; Computers; Expert Systems; Quality Control; Software
18.
J Am Med Inform Assoc ; 11(6): 523-34, 2004.
Article in English | MEDLINE | ID: mdl-15298995

ABSTRACT

The Query Integrator System (QIS) is a database mediator framework intended to address robust data integration from continuously changing heterogeneous data sources in the biosciences. Currently in the advanced prototype stage, it is being used on a production basis to integrate data from neuroscience databases developed for the SenseLab project at Yale University with external neuroscience and genomics databases. The QIS framework uses standard technologies and is intended to be deployable by administrators with a moderate level of technological expertise: it comes with various tools, such as interfaces for the design of distributed queries. The QIS architecture is based on a set of distributed network-based servers (data source servers, integration servers, and ontology servers) that exchange metadata as well as mappings of both metadata and data elements to elements in an ontology. Metadata version-difference determination, coupled with decomposition of stored queries, is used as the basis for partial query recovery when the schema of a data source changes.


Subjects
Database Management Systems; Databases as Topic; Systems Integration; Computer Communication Networks; Genomics; Information Storage and Retrieval; Medical Informatics Applications; Pilot Projects; Software; User-Computer Interface; Vocabulary, Controlled
19.
J Am Med Inform Assoc ; 11(3): 167-72, 2004.
Article in English | MEDLINE | ID: mdl-14764617

ABSTRACT

In 2002-2003, the American College of Medical Informatics (ACMI) undertook a study of the future of informatics training. This project capitalized on the rapidly expanding interest in the role of computation in basic biological research, well characterized in the National Institutes of Health (NIH) Biomedical Information Science and Technology Initiative (BISTI) report. The defining activity of the project was the three-day 2002 Annual Symposium of the College. A committee, comprised of the authors of this report, subsequently carried out activities, including interviews with a broader informatics and biological sciences constituency, collation and categorization of observations, and generation of recommendations. The committee viewed biomedical informatics as an interdisciplinary field, combining basic informational and computational sciences with application domains, including health care, biological research, and education. Consequently, effective training in informatics, viewed from a national perspective, should encompass four key elements: (1). curricula that integrate experiences in the computational sciences and application domains rather than just concatenating them; (2). diversity among trainees, with individualized, interdisciplinary cross-training allowing each trainee to develop key competencies that he or she does not initially possess; (3). direct immersion in research and development activities; and (4). exposure across the wide range of basic informational and computational sciences. Informatics training programs that implement these features, irrespective of their funding sources, will meet and exceed the challenges raised by the BISTI report, and optimally prepare their trainees for careers in a field that continues to evolve.


Subjects
Computational Biology/education; Medical Informatics/education; Curriculum; Societies, Medical; United States
20.
J Integr Neurosci ; 1(2): 117-28, 2002 Dec.
Article in English | MEDLINE | ID: mdl-15011281

ABSTRACT

There is significant interest amongst neuroscientists in sharing neuroscience data and analytical tools. The exchange of neuroscience data and tools between groups affords the opportunity to re-analyze previously collected data in new ways, encourages new neuroscience interpretations, fosters otherwise uninitiated collaborations, and provides a framework for the further development of theoretically based models of brain function. Data sharing will ultimately reduce experimental and analytical error. Many small Internet-accessible database initiatives have been developed, and specialized analytical software and modeling tools are distributed within different fields of neuroscience. In addition, however, large-scale international collaborations are required, involving new mechanisms of coordination and funding. Provided sufficient government support is given to such international initiatives, sharing of neuroscience data and tools can play a pivotal role in human brain research and lead to innovations in neuroscience, informatics, and the treatment of brain disorders. These innovations will enable the application of theoretical modeling techniques to enhance our understanding of the integrative aspects of neuroscience. This article, authored by a multinational working group on neuroinformatics established by the Organisation for Economic Co-operation and Development (OECD), articulates some of the challenges and lessons learned to date in efforts to achieve international collaborative neuroscience.


Subjects
Computational Biology; Computer Communication Networks; Databases, Factual; Neurosciences; Cooperative Behavior; Humans