Results 1 - 20 of 42
1.
Opt Lett; 45(20): 5684-5687, 2020 Oct 15.
Article in English | MEDLINE | ID: mdl-33057258

ABSTRACT

Standard microscopes offer a variety of settings to help improve the visibility of different specimens to the end microscope user. Increasingly, however, digital microscopes are used to capture images for automated interpretation by computer algorithms (e.g., for feature classification, detection, or segmentation), often without any human involvement. In this work, we investigate an approach to jointly optimize multiple microscope settings, together with a classification network, for improved performance with such automated tasks. We explore the interplay between optimization of programmable illumination and pupil transmission, using experimentally imaged blood smears for automated malaria parasite detection, to show that multi-element "learned sensing" outperforms its single-element counterpart. While not necessarily ideal for human interpretation, the network's resulting low-resolution microscope images (20X-comparable) offer a machine learning network sufficient contrast to match the classification performance of corresponding high-resolution imagery (100X-comparable), pointing a path toward accurate automation over large fields-of-view.
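As an illustration of the joint optimization described above, here is a minimal sketch (not the authors' code): the programmable illumination weights are made trainable parameters of the same computational graph as the classifier, so a single backward pass updates both. The LED count, image shapes and network layers are illustrative assumptions.

```python
# Hedged sketch: jointly learn illumination weights and a classifier.
# Shapes, layer sizes and the LED-stack input format are illustrative assumptions.
import torch
import torch.nn as nn

class LearnedSensingClassifier(nn.Module):
    def __init__(self, n_leds=32, n_classes=2):
        super().__init__()
        # Trainable weight per LED: the "physical layer" of the model.
        self.led_weights = nn.Parameter(torch.ones(n_leds) / n_leds)
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8), nn.Flatten(),
            nn.Linear(16 * 8 * 8, n_classes),
        )

    def forward(self, led_stack):
        # led_stack: (batch, n_leds, H, W) -- one image per LED.
        # The weighted sum simulates a single exposure under the learned pattern.
        composite = torch.einsum("blhw,l->bhw", led_stack, self.led_weights)
        return self.backbone(composite.unsqueeze(1))

model = LearnedSensingClassifier()
x = torch.rand(4, 32, 64, 64)          # toy LED-indexed image stack
loss = nn.CrossEntropyLoss()(model(x), torch.tensor([0, 1, 0, 1]))
loss.backward()                        # gradients flow into led_weights too
```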

2.
F1000Res; 6, 2017.
Article in English | MEDLINE | ID: mdl-29333230

ABSTRACT

Millions of life scientists across the world rely on bioinformatics data resources for their research projects. Data resources can be very expensive, especially those with high added value such as expert-curated knowledgebases. Despite the increasing need for such highly accurate and reliable sources of scientific information, most of them do not have secure funding for the near future and often depend on short-term grants that are much shorter than their planning horizon. Additionally, they are often evaluated as research projects rather than as research infrastructure components. In this work, twelve funding models for data resources are described and applied to the case study of the Universal Protein Resource (UniProt), a key resource of protein sequence and functional information. We show that most of the models present inconsistencies with open access or equity policies, and that while some models cannot cover the total costs, they could potentially serve as a complementary income source. We propose the Infrastructure Model as a sustainable and equitable model for all core data resources in the life sciences. Under this model, funding agencies would set aside a fixed percentage of their research grant volumes, which would subsequently be redistributed to core data resources according to well-defined selection criteria. This model, compatible with the principles of open science, is in agreement with several international initiatives such as the Human Frontiers Science Program Organisation (HFSPO) and the OECD Global Science Forum (GSF) project. Here, we estimate that less than 1% of the total amount dedicated to research grants in the life sciences would be sufficient to cover the costs of the core data resources worldwide, including both knowledgebases and deposition databases.
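A back-of-the-envelope illustration of the proposed Infrastructure Model set-aside; all figures below are placeholders, not estimates from the article.

```python
# Hypothetical illustration of the Infrastructure Model set-aside.
# All figures are placeholders, not values taken from the article.
total_grant_volume = 20_000_000_000   # assumed yearly life-science grant spending (USD)
set_aside_rate = 0.01                 # "less than 1%" levy proposed in the paper

infrastructure_pool = total_grant_volume * set_aside_rate
shares = {"knowledgebase_A": 0.5, "deposition_db_B": 0.3, "resource_C": 0.2}

# Redistribution to core data resources according to agreed selection criteria.
for name, share in shares.items():
    print(f"{name}: {infrastructure_pool * share:,.0f} USD")
```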

3.
Stud Health Technol Inform; 245: 1004-1008, 2017.
Article in English | MEDLINE | ID: mdl-29295252

ABSTRACT

Accessing online health content of high quality and reliability presents challenges. Laypersons cannot easily differentiate trustworthy content from misinformed or manipulated content. This article describes complementary approaches for members of the general public and health professionals to find trustworthy content with as little bias as possible. These include the Khresmoi health search engine (K4E), the Health On the Net Code of Conduct (HONcode) and health trust indicator Web browser extensions.


Subjects
Internet, Search Engine, Consumer Health Informatics, Humans, Reproducibility of Results
4.
Stud Health Technol Inform; 228: 700-4, 2016.
Article in English | MEDLINE | ID: mdl-27577475

ABSTRACT

The Health On the Net Foundation (HON) was founded in 1996, in the early days of the World Wide Web, from a collective decision by health specialists, led by the late Jean-Raoul Scherrer, who anticipated the need for trustworthy online health information. Because the Internet is a free space that everyone shares, a search for quality information is like a shot in the dark: it will not reliably hit its target. HON was therefore created to promote the deployment of useful and reliable online health information, and to enable its appropriate and efficient use. Two decades on, HON is the oldest and most valued quality marker for online health information. The organization has maintained its reputation through dynamic measures, innovative endeavors and dedication to upholding key values and goals. This paper provides an overview of the HON Foundation and its activities, challenges and achievements over the years.


Subjects
Consumer Health Information, Data Accuracy, Health Information Management, Information Storage and Retrieval, Internet, Foundations, Humans
5.
F1000Res; 5, 2016.
Article in English | MEDLINE | ID: mdl-27803796

ABSTRACT

The core mission of ELIXIR is to build a stable and sustainable infrastructure for biological information across Europe. At the heart of this are the data resources, tools and services that ELIXIR offers to the life-science community, providing stable and sustainable access to biological data. ELIXIR aims to ensure that these resources are available long-term and that their life-cycles are managed to support the scientific needs of the life sciences, including biological research. ELIXIR Core Data Resources are defined as a set of European data resources that are of fundamental importance to the wider life-science community and to the long-term preservation of biological data. They are complete collections of generic value to the life sciences, are considered an authority in their field with respect to one or more characteristics, and show high levels of scientific quality and service; they are thus of wide applicability and usage. This paper describes the structures, governance and processes that support the identification and evaluation of ELIXIR Core Data Resources. It identifies key indicators that reflect the essence of the definition of an ELIXIR Core Data Resource and support the promotion of excellence in resource development and operation. It describes the specific indicators in more detail and explains their application within ELIXIR's sustainability strategy and science policy actions, and in capacity building, life-cycle management and technical actions. The identification process is currently being implemented and tested for the first time; the findings and outcome will be evaluated by the ELIXIR Scientific Advisory Board in March 2017. Establishing the portfolio of ELIXIR Core Data Resources and ELIXIR Services is a key priority for ELIXIR and publicly marks the transition towards a cohesive infrastructure.

6.
Nucleic Acids Res; 42(Web Server issue): W436-41, 2014 Jul.
Article in English | MEDLINE | ID: mdl-24792157

ABSTRACT

The SIB Swiss Institute of Bioinformatics (www.isb-sib.ch) was created in 1998 as an institution to foster excellence in bioinformatics. It is renowned worldwide for its databases and software tools, such as UniProtKB/Swiss-Prot, PROSITE, SWISS-MODEL and STRING, which are all accessible on ExPASy.org, SIB's Bioinformatics Resource Portal. This article provides an overview of the scientific and training resources SIB has consistently been offering to the life science community for more than 15 years.


Subjects
Computational Biology, Chemical Databases, Software, Biological Evolution, Biostatistics, Drug Design, Genomics, Humans, Internet, Protein Conformation, Proteomics, Systems Biology
7.
IEEE Trans Pattern Anal Mach Intell; 36(8): 1532-45, 2014 Aug.
Article in English | MEDLINE | ID: mdl-26353336

ABSTRACT

Multi-resolution image features may be approximated via extrapolation from nearby scales, rather than being computed explicitly. This fundamental insight allows us to design object detection algorithms that are as accurate, and considerably faster, than the state-of-the-art. The computational bottleneck of many modern detectors is the computation of features at every scale of a finely-sampled image pyramid. Our key insight is that one may compute finely sampled feature pyramids at a fraction of the cost, without sacrificing performance: for a broad family of features we find that features computed at octave-spaced scale intervals are sufficient to approximate features on a finely-sampled pyramid. Extrapolation is inexpensive as compared to direct feature computation. As a result, our approximation yields considerable speedups with negligible loss in detection accuracy. We modify three diverse visual recognition systems to use fast feature pyramids and show results on both pedestrian detection (measured on the Caltech, INRIA, TUD-Brussels and ETH data sets) and general object detection (measured on the PASCAL VOC). The approach is general and is widely applicable to vision algorithms requiring fine-grained multi-scale analysis. Our approximation is valid for images with broad spectra (most natural images) and fails for images with narrow band-pass spectra (e.g., periodic textures).
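A minimal sketch of the approximation at the heart of this approach, under the paper's power-law assumption that a channel computed at scale s can be predicted from one computed at a nearby scale s' by the factor (s/s')^(-λ). The gradient-magnitude channel and the exponent value below are illustrative; λ is feature-dependent and must be estimated empirically.

```python
# Hedged sketch: approximate a feature channel at an intermediate scale
# by extrapolating from the nearest octave, instead of recomputing it.
# The exponent `lam` is feature-dependent and estimated empirically.
import numpy as np
from scipy.ndimage import zoom, sobel

def grad_magnitude(img):
    return np.hypot(sobel(img, 0), sobel(img, 1))

def approx_channel(img, scale, lam=0.11):
    # Compute the channel once at the nearest octave scale...
    octave = 2.0 ** np.round(np.log2(scale))
    chan = grad_magnitude(zoom(img, octave))
    # ...then resample and rescale by the power law  f(s) ~ f(s0) * (s/s0)**(-lam)
    chan = zoom(chan, scale / octave)
    return chan * (scale / octave) ** (-lam)

img = np.random.rand(128, 128)
approx = approx_channel(img, scale=0.7)          # cheap extrapolation
exact = grad_magnitude(zoom(img, 0.7))           # expensive direct computation
print(approx.mean(), exact.mean())
```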

8.
Proteomics; 9(10): 2648-55, 2009 May.
Article in English | MEDLINE | ID: mdl-19391179

ABSTRACT

The identification and characterization of peptides from MS/MS data represents a critical aspect of proteomics. It has been the subject of extensive research in bioinformatics, resulting in a fair number of identification software tools. Most often, only one program with a specific and unvarying set of parameters is selected for identifying proteins. Hence, a significant proportion of the experimental spectra do not match the peptide sequences in the screened database due to inappropriate parameters or scoring schemes. The Swiss protein identification toolbox (swissPIT) project provides the scientific community with an expandable multi-tool platform for automated, in-depth analysis of MS data that can also handle data from high-throughput experiments. swissPIT addresses several problems: (A) the lack of standards for input and output formats, (B) the creation of analysis workflows, (C) unified result visualization, and (D) the simplicity of the user interface. Currently, swissPIT supports four different programs implementing two different search strategies to identify MS/MS spectra. Conceived to handle the calculation-intensive needs of each of these programs, swissPIT uses the distributed resources of a Swiss-wide computer Grid (http://www.swing-grid.ch).
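To make the multi-tool idea concrete, here is a hedged sketch (illustrative only, not the swissPIT code or API) of one simple way to combine several search engines: keep peptide-spectrum matches that more than one tool agrees on. Engine names, score fields and the voting rule are assumptions.

```python
# Hedged sketch of a multi-engine identification strategy (illustrative only;
# not the swissPIT API). Each "engine" returns (spectrum_id, peptide, score).
from collections import defaultdict

def combine_search_results(runs, min_engines=2):
    """Keep peptide-spectrum matches reported by at least `min_engines` tools."""
    votes = defaultdict(set)
    for engine_name, psms in runs.items():
        for spectrum_id, peptide, score in psms:
            votes[(spectrum_id, peptide)].add(engine_name)
    return {match: engines for match, engines in votes.items()
            if len(engines) >= min_engines}

runs = {
    "engine_A": [("sp1", "LSSPATLNSR", 42.0), ("sp2", "VATVSLPR", 18.5)],
    "engine_B": [("sp1", "LSSPATLNSR", 0.001), ("sp3", "AEFVEVTK", 0.02)],
}
print(combine_search_results(runs))   # {('sp1', 'LSSPATLNSR'): {'engine_A', 'engine_B'}}
```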


Subjects
Proteins/analysis, Proteomics/methods, Software, Tandem Mass Spectrometry, Computer Communication Networks, Post-Translational Protein Processing, Protein Sequence Analysis
9.
Methods Mol Biol; 519: 515-31, 2009.
Article in English | MEDLINE | ID: mdl-19381607

ABSTRACT

Protein identification is a key aspect in the investigation of proteomes. Typically, in a 2-DE gel-based proteomics analysis, the spots are enzymatically digested and the resulting peptide masses are measured, producing mass spectra. Peptides can also be isolated and fragmented within the mass spectrometer, leading to tandem mass spectra. For protein and peptide identification, an algorithm matches the mass spectra and other empirical information against a protein database to determine whether a protein is already known or novel. A variety of programs for protein identification by database interrogation have been developed. This chapter focuses on the use of the software Aldente and Phenyx for MS and tandem MS identification, respectively.
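A minimal sketch of the database-matching principle behind peptide mass fingerprinting, assuming tryptic cleavage and monoisotopic masses; it illustrates the general idea only and is not the scoring used by Aldente or Phenyx.

```python
# Hedged sketch of peptide-mass-fingerprint matching (illustrative only).
# Monoisotopic residue masses; the mass of water is added per peptide.
AA = {"G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276, "V": 99.06841,
      "T": 101.04768, "L": 113.08406, "N": 114.04293, "D": 115.02694,
      "K": 128.09496, "R": 156.10111, "E": 129.04259, "F": 147.06841}
WATER = 18.01056

def tryptic_peptides(seq):
    """Cleave after K/R (no missed cleavages; the proline rule is ignored for brevity)."""
    pep, out = "", []
    for aa in seq:
        pep += aa
        if aa in "KR":
            out.append(pep)
            pep = ""
    if pep:
        out.append(pep)
    return out

def peptide_mass(pep):
    return sum(AA[a] for a in pep) + WATER

def match_count(measured_masses, protein_seq, tol=0.2):
    theo = [peptide_mass(p) for p in tryptic_peptides(protein_seq)]
    return sum(any(abs(m - t) <= tol for t in theo) for m in measured_masses)

# Toy example: rank two candidate database entries by number of matched masses.
spectrum = [403.24, 764.37]
for name, seq in {"candidate_1": "ASVKLDNFEK", "candidate_2": "GGGGGR"}.items():
    print(name, match_count(spectrum, seq))
```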


Subjects
Algorithms, Protein Databases, Proteins/analysis, Amino Acid Sequence, Animals, Computational Biology/methods, Two-Dimensional Gel Electrophoresis, Humans, Information Storage and Retrieval, Molecular Sequence Data, Proteomics/methods, Protein Sequence Analysis, Tandem Mass Spectrometry
10.
Methods Mol Biol; 519: 533-9, 2009.
Article in English | MEDLINE | ID: mdl-19381608

ABSTRACT

With the development of the Internet, a growing number of two-dimensional electrophoresis (2DE) databases have become available (60 in 2009, for a total of 425 image maps). By linking the two components constituting 2DE databases, gel images and protein information, active hypertext links provide a powerful tool for data integration, in addition to navigation from one database to another. This chapter shows how to prepare the necessary files to build a federated 2DE database, make it available over the Internet, and subsequently update it.


Subjects
Protein Databases, Two-Dimensional Gel Electrophoresis, Internet, Information Storage and Retrieval/methods, Proteins/analysis, Software, User-Computer Interface
11.
Proteomics; 8(23-24): 4907-9, 2008 Dec.
Article in English | MEDLINE | ID: mdl-19072735

ABSTRACT

Bioinformatics tools may assist scientists in all steps of a typical 2-DE gel analysis workflow: from describing the sample preparation protocols, through gel image analysis and protein identification, to publishing Internet-ready 2-DE gel databases. This short communication summarises, in a single view, this workflow and the current bioinformatics solutions developed by the Proteome Informatics Group at the Swiss Institute of Bioinformatics.


Subjects
Computational Biology/instrumentation, Two-Dimensional Gel Electrophoresis, Research Peer Review
12.
Anal Chem; 80(22): 8799-806, 2008 Nov 15.
Article in English | MEDLINE | ID: mdl-18947195

ABSTRACT

Protein-protein interactions are key to function and regulation of many biological pathways. To facilitate characterization of protein-protein interactions using mass spectrometry, a new data acquisition/analysis pipeline was designed. The goal for this pipeline was to provide a generic strategy for identifying cross-linked peptides from single LC/MS/MS data sets, without using specialized cross-linkers or custom-written software. To achieve this, each peptide in the pair of cross-linked peptides was considered to be "post-translationally" modified with an unknown mass at an unknown amino acid. This allowed use of an open-modification search engine, Popitam, to interpret the tandem mass spectra of cross-linked peptides. False positives were reduced and database selectivity increased by acquiring precursors and fragments at high mass accuracy. Additionally, a high-charge-state-driven data acquisition scheme was utilized to enrich data sets for cross-linked peptides. This open-modification search based pipeline was shown to be useful for characterizing both chemical as well as native cross-links in proteins. The pipeline was validated by characterizing the known interactions in the chemically cross-linked CYP2E1-b5 complex. Utility of this method in identifying native cross-links was demonstrated by mapping disulfide bridges in RcsF, an outer membrane lipoprotein involved in Rcs phosphorelay.
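A small numeric sketch of the "unknown modification" view described above: seen from one peptide of a cross-linked pair, the open modification's mass should equal the mass of the partner peptide plus the cross-linker spacer. The peptide masses are illustrative, and the DSS spacer mass (~138.068 Da) is stated as an assumption about the reagent rather than a value taken from the article.

```python
# Hedged numeric sketch of the open-modification view of a cross-link.
# Peptide masses are illustrative; the DSS spacer mass (~138.068 Da) is an
# assumption about the reagent, not a value taken from the article.
PROTON = 1.00728

def neutral_mass(mz, charge):
    """Neutral mass from an observed precursor m/z and charge state."""
    return mz * charge - charge * PROTON

pep_a = 1045.562      # peptide A, neutral monoisotopic mass (illustrative)
pep_b = 764.370       # peptide B, neutral monoisotopic mass (illustrative)
spacer = 138.068      # DSS cross-linker spacer arm (assumed reagent)

precursor = neutral_mass(mz=650.3406, charge=3)
# Seen from peptide A's side, the cross-link looks like an unknown PTM whose
# mass is "everything else": peptide B plus the spacer.
unknown_mod_on_a = precursor - pep_a
print(round(unknown_mod_on_a, 3), round(pep_b + spacer, 3))
```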


Subjects
Cross-Linking Reagents/pharmacology, Proteins/metabolism, Amino Acid Sequence, Bacterial Proteins/metabolism, Catalytic Domain, Cytochrome P-450 CYP2E1/chemistry, Cytochrome P-450 CYP2E1/metabolism, Cytochromes b5/chemistry, Cytochromes b5/metabolism, Disulfides/metabolism, Humans, Mass Spectrometry, Molecular Sequence Data, Peptides/chemistry, Peptides/metabolism, Protein Binding/drug effects, Reproducibility of Results
13.
J Proteomics; 71(2): 249-51, 2008 Jul 21.
Article in English | MEDLINE | ID: mdl-18590991

ABSTRACT

The HUPO Proteomics Standards Initiative (PSI) defines standards for data representation in proteomics to facilitate data exchange, comparison and quality assessment. A set of minimum reporting requirements, called MIAPE (Minimum Information About a Proteomics Experiment), is provided to ensure consistency of data set annotation. As with the MIAME reporting requirements for transcriptomics, it is anticipated that journal editors will soon require such annotation for published data sets, simplifying further mining of the data. Therefore, tools for data entry and public repositories for long-term storage are needed. MIAPEGelDB is a public repository and a web-based data entry tool for documents conforming to the MIAPE gel electrophoresis guidelines. It aims to guide authors through the publication of the minimal set of information for their proteomics experiments using a clear, sequential interface. After publication by their authors, documents in MIAPEGelDB can be viewed in HTML or plain text formats, and further used through stable URL links from remote resources. MIAPEGelDB is accessible at: http://miapegeldb.expasy.org/.


Subjects
Protein Databases, Two-Dimensional Gel Electrophoresis, Proteomics, Animals, Genetic Databases, Humans, Information Storage and Retrieval, Internet, Proteomics/methods, Proteomics/standards, Software
14.
J Proteomics; 71(2): 245-8, 2008 Jul 21.
Article in English | MEDLINE | ID: mdl-18617148

ABSTRACT

Since its launch in 1993, the ExPASy server has been, and remains, a reference in the proteomics world. ExPASy users access various databases, many dedicated tools, and lists of resources, among other services. A significant part of the available resources is devoted to two-dimensional electrophoresis data. Our latest contribution to the expansion of the pool of on-line proteomics data is the World-2DPAGE Constellation, accessible at http://world-2dpage.expasy.org/. It is composed of the established WORLD-2DPAGE List of 2-D PAGE database servers, the World-2DPAGE Portal that simultaneously queries proteomics databases worldwide, and the recently created World-2DPAGE Repository. The latter component is a public standards-compliant repository for gel-based proteomics data linked to protein identifications published in the literature. It has been set up using the Make2D-DB package, a software tool that helps build SWISS-2DPAGE-like databases on one's own Web site. The lack of the informatics infrastructure needed to build and run a dedicated website is therefore no longer an obstacle to making proteomics data publicly accessible on the Internet.


Subjects
Protein Databases, Proteomics, Animals, Two-Dimensional Gel Electrophoresis, Humans, Internet, Peptide Mapping
15.
J Am Soc Mass Spectrom; 19(6): 891-901, 2008 Jun.
Article in English | MEDLINE | ID: mdl-18417358

ABSTRACT

The advantages and disadvantages of acquiring tandem mass spectra by collision-induced dissociation (CID) of peptides in linear ion trap Fourier-transform hybrid instruments are described. These instruments offer the possibility to transfer fragment ions from the linear ion trap to the FT-based analyzer for analysis with both high resolution and high mass accuracy. In addition, performing CID during the transfer of ions from the linear ion trap (LTQ) to the FT analyzer is also possible in instruments containing an additional collision cell (i.e., the "C-trap" in the LTQ-Orbitrap), resulting in tandem mass spectra over the full m/z range and not limited by the ejection q value of the LTQ. Our results show that these scan modes have lower duty cycles than tandem mass spectra acquired in the LTQ with nominal mass resolution, and typically result in fewer peptide identifications during data-dependent analysis of complex samples. However, the higher measured mass accuracy and resolution provides more specificity and hence provides a lower false positive ratio for the same number of true positives during database search of peptide tandem mass spectra. In addition, the search for modified and unexpected peptides is greatly facilitated with this data acquisition mode. It is therefore concluded that acquisition of tandem mass spectral data with high measured mass accuracy and resolution is a competitive alternative to "classical" data acquisition strategies, especially in situations of complex searches from large databases, searches for modified peptides, or for peptides resulting from unspecific cleavages.
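A small illustration of why high measured mass accuracy increases database specificity: the number of candidate peptides inside the precursor tolerance window collapses when the window is a few ppm rather than a fraction of a Dalton. The "database" below is randomly generated stand-in data, not real peptide masses.

```python
# Hedged illustration: candidate-set size at low vs. high mass accuracy.
# The "database" is random stand-in data, not real peptide masses.
import random

random.seed(0)
database = sorted(random.uniform(600.0, 4000.0) for _ in range(200_000))
precursor = 1745.832

def candidates_within(tol_da):
    return sum(abs(m - precursor) <= tol_da for m in database)

low_accuracy = candidates_within(0.5)                   # ion-trap-like, +/- 0.5 Da
high_accuracy = candidates_within(precursor * 5e-6)     # FT-like, +/- 5 ppm
print(low_accuracy, high_accuracy)   # far fewer candidates at 5 ppm
```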


Subjects
Peptide Mapping/methods, Electrospray Ionization Mass Spectrometry/methods, Fourier Transform Infrared Spectroscopy/methods, Ions, Reproducibility of Results, Sensitivity and Specificity
16.
Bioinformatics; 24(11): 1416-7, 2008 Jun 01.
Article in English | MEDLINE | ID: mdl-18436540

ABSTRACT

The identification and characterization of peptides from tandem mass spectrometry (MS/MS) data represents a critical aspect of proteomics. Today, tandem MS analysis is often performed using only a single identification program, achieving identification rates of 10-50% (Elias and Gygi, 2007). Besides the development of new analysis tools, recent publications also describe the pipelining of different search programs to increase the identification rate (Hartler et al., 2007; Keller et al., 2005). The Swiss Protein Identification Toolbox (swissPIT) follows this approach, but goes a step further by providing the user with an expandable multi-tool platform capable of executing workflows to analyze tandem MS-based data. One of the major problems in proteomics is the absence of standardized workflows to analyze the produced data, covering both the pre-processing steps and the final identification of peptides and proteins. The main idea of swissPIT is not only the use of different identification tools in parallel, but also the meaningful concatenation of different identification strategies. swissPIT is open-source software, and we also provide a user-friendly web platform that demonstrates the capabilities of our software and is available at http://swisspit.cscs.ch upon request for an account.


Subjects
Algorithms, Mass Spectrometry/methods, Peptide Mapping/methods, Proteins/chemistry, Protein Sequence Analysis/methods, Software, Amino Acid Sequence, Molecular Sequence Data
17.
Appl Environ Microbiol; 73(17): 5653-6, 2007 Sep.
Article in English | MEDLINE | ID: mdl-17601805

ABSTRACT

Stable isotope labeling of amino acids in cell culture was used for Bifidobacterium longum. A comprehensive proteomic strategy was developed and validated by designing an appropriate semidefined medium that allows stable replacement of natural leucine by [(13)C6]leucine. Using this strategy, proteins having variations of at least 50% in their expression rates can be quantified with great confidence.
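A minimal sketch of the quantification step implied by this labelling strategy, assuming MS1 intensities for the light and [13C6]Leu-labelled forms have already been extracted. The ~6.0201 Da per-leucine shift is the standard 13C6 mass difference, and the intensities are made-up numbers.

```python
# Hedged sketch of SILAC-style ratio calculation for a [13C6]leucine pair.
# Peak intensities are made-up; the per-leucine mass shift (~6.0201 Da) is the
# standard 13C6 value, stated here as an assumption.
C13_SHIFT_PER_LEU = 6.0201

def heavy_mz(light_mz, charge, n_leucines):
    """Expected m/z of the labelled partner of a light peptide ion."""
    return light_mz + n_leucines * C13_SHIFT_PER_LEU / charge

def expression_change(light_intensity, heavy_intensity, threshold=1.5):
    """Flag pairs whose heavy/light ratio indicates a change of at least 50%."""
    ratio = heavy_intensity / light_intensity
    return ratio, (ratio >= threshold or ratio <= 1 / threshold)

print(heavy_mz(light_mz=652.347, charge=2, n_leucines=2))   # where to look for the pair
print(expression_change(light_intensity=1.2e6, heavy_intensity=2.1e6))
```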


Subjects
Bifidobacterium/growth & development, Bifidobacterium/metabolism, Isotope Labeling/methods, Leucine/metabolism, Proteomics/methods, Amino Acid Sequence, Bacterial Proteins/chemistry, Bacterial Proteins/metabolism, Bacteriological Techniques, Carbon Isotopes/chemistry, Carbon Isotopes/metabolism, Culture Media, Molecular Sequence Data, Peptides/chemistry, Peptides/metabolism, Proteome
18.
Stud Health Technol Inform; 126: 13-22, 2007.
Article in English | MEDLINE | ID: mdl-17476043

ABSTRACT

Biomarker detection is one of the greatest challenges in clinical proteomics. Today, great hopes are placed in tandem mass spectrometry (MS/MS) to discover potential biomarkers. MS/MS is a technique that allows large-scale data analysis, including the identification, characterization, and quantification of molecules. The identification process in particular, which involves comparing experimental spectra with theoretical amino acid sequences stored in specialized databases, has been the subject of extensive research in bioinformatics for many years. Dozens of identification programs have been developed addressing different aspects of the identification process, but in general clinicians use only a single tool for their data analysis, along with a single set of specific parameters. Hence, a significant proportion of the experimental spectra do not lead to a confident identification score due to inappropriate parameters or scoring schemes of the applied analysis software. The swissPIT (Swiss Protein Identification Toolbox) project was initiated to provide the scientific community with an expandable multi-tool platform for automated, in-depth analysis of mass spectrometry data. swissPIT uses multiple identification tools to automatically analyze mass spectra; the tools are concatenated into analysis workflows. In order to run these calculation-intensive workflows we use the Swiss Bio Grid infrastructure. A first version of the web-based front-end is available (http://www.swisspit.cscs.ch) and can be freely accessed after requesting an account. The source code of the project will also be made available in the near future.


Subjects
Medical Informatics, Proteomics/methods, Tandem Mass Spectrometry/methods, Biomarkers/analysis, Protein Databases, Humans, Protein Sequence Analysis, Software, Statistics as Topic/methods, Switzerland
20.
Proteomics Clin Appl; 1(8): 900-15, 2007 Aug.
Article in English | MEDLINE | ID: mdl-21136743

ABSTRACT

On-line databases targeted towards protein contents in biological fluids are scarce. Consequently, the investigation of proteins identified in a biological fluid depends primarily on cross-checking information gathered from less specific resources. This review summarises the key databases and tools for collecting information on tissue specificity or expression profiles. It also emphasises the high connectivity between databases, which can fruitfully be used to corroborate and piece information together. Finally, selected issues related to appropriate bioinformatics tools in the context of clinical applications are succinctly discussed.
