Results 1 - 20 of 22
1.
Biodivers Data J ; 9: e67671, 2021.
Article in English | MEDLINE | ID: mdl-34690512

ABSTRACT

BACKGROUND: OpenBiodiv is a biodiversity knowledge graph containing a synthetic linked open dataset, OpenBiodiv-LOD, which combines knowledge extracted from academic literature with the taxonomic backbone used by the Global Biodiversity Information Facility. The linked open data is modelled according to the OpenBiodiv-O ontology, which integrates semantic resource types from recognised biodiversity and publishing ontologies with new OpenBiodiv-O resource types introduced to capture the semantics of resources not modelled before. NEW INFORMATION: We introduce a new release of OpenBiodiv-LOD, attained through information extraction and modelling of additional biodiversity entities. It was achieved by further developments to OpenBiodiv-O, to the data storage infrastructure, and to the workflow and accompanying R software packages used to transform academic literature into Resource Description Framework (RDF). We discuss how to utilise the LOD in biodiversity informatics and give examples by providing solutions to several competency questions. We investigate performance issues that arise from the large number of inferred statements in the graph and conclude that OWL Full inference is impractical for the project and that unnecessary inference should be avoided.
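
To make the notion of a competency question concrete, the sketch below shows how such a question might be posed to an RDF store from Python using SPARQLWrapper. It is illustrative only, written under the assumption of a generic SPARQL endpoint; the endpoint URL and the query pattern are not taken from the paper.

    # Minimal sketch: ask a SPARQL competency question against a biodiversity
    # knowledge graph. The endpoint URL and query shape are assumptions made for
    # illustration; consult the OpenBiodiv documentation for the real graph layout.
    from SPARQLWrapper import SPARQLWrapper, JSON

    endpoint = SPARQLWrapper("https://example.org/sparql")  # hypothetical endpoint
    endpoint.setQuery("""
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        SELECT ?resource ?label WHERE {
            ?resource rdfs:label ?label .
            FILTER (CONTAINS(STR(?label), "Araneae"))
        } LIMIT 10
    """)
    endpoint.setReturnFormat(JSON)
    for row in endpoint.query().convert()["results"]["bindings"]:
        print(row["resource"]["value"], row["label"]["value"])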

2.
Gigascience ; 10(5)2021 05 13.
Article in English | MEDLINE | ID: mdl-33983435

ABSTRACT

BACKGROUND: Data papers have emerged as a powerful instrument for open data publishing, obtaining credit, and establishing priority for datasets generated in scientific experiments. Academic publishing improves data and metadata quality through peer review and increases the impact of datasets by enhancing their visibility, accessibility, and reusability. OBJECTIVE: We aimed to establish a new type of article structure and template for omics studies: the omics data paper. To improve data interoperability and further incentivize researchers to publish well-described datasets, we created a prototype workflow for streamlined import of genomics metadata from the European Nucleotide Archive directly into a data paper manuscript. METHODS: An omics data paper template was designed by defining key article sections that encourage the description of omics datasets and methodologies. A metadata import workflow, based on Representational State Transfer (REST) services and XPath, was prototyped to extract information from the European Nucleotide Archive, ArrayExpress, and BioSamples databases. FINDINGS: The template and workflow for automatic import of standard-compliant metadata into an omics data paper manuscript provide a mechanism for enhancing existing metadata through publishing. CONCLUSION: The omics data paper structure and workflow for importing genomics metadata will help to bring genomic and other omics datasets into the spotlight. Promoting enhanced metadata descriptions and enforcing manuscript peer review and data auditing of the underlying datasets bring additional quality to datasets. We hope that streamlined metadata reuse for scholarly publishing encourages authors to create enhanced metadata descriptions in the form of data papers to improve both the quality of their metadata and its findability and accessibility.
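
As a rough illustration of the REST-plus-XPath retrieval step described above, the following Python sketch fetches an XML record and extracts a few fields. The URL pattern, accession, and XPath expressions are assumptions used for demonstration, not the workflow's actual code.

    # Minimal sketch: retrieve an XML metadata record over REST and pull out fields
    # with XPath. The endpoint pattern, accession and element names are assumptions.
    import requests
    from lxml import etree

    accession = "SAMEA0000000"  # hypothetical sample accession
    url = f"https://www.ebi.ac.uk/ena/browser/api/xml/{accession}"  # assumed endpoint
    response = requests.get(url, timeout=30)
    response.raise_for_status()

    root = etree.fromstring(response.content)
    title = root.xpath("string(//TITLE)")
    attributes = {
        node.findtext("TAG"): node.findtext("VALUE")
        for node in root.xpath("//SAMPLE_ATTRIBUTE")
    }
    print(title, attributes)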


Subject(s)
Genomics, Metadata, Factual Databases, Peer Review, Workflow
3.
Biomicrofluidics ; 11(5): 054110, 2017 Sep.
Article in English | MEDLINE | ID: mdl-29034054

ABSTRACT

Circulating tumor cells (CTCs) have shown potential for cancer diagnosis and prognosis. Affinity-based CTC isolation methods have proved efficient for CTC detection in clinical blood samples. One popular choice for affinity-based CTC isolation is to immobilize capture agents onto an array of microposts in microchannels, providing high CTC capture efficiency due to enhanced interactions between tumor cells and capture agents on the microposts. However, how cells interact with microposts under different flow conditions and what kind of capture pattern results from these interactions have not been fully investigated; a full understanding of these interactions will help in designing devices and choosing experimental conditions for higher CTC capture efficiency. We report our study of cell-micropost interactions and cell distribution patterns around microposts under different flow conditions. Human acute lymphoblastic leukemia cells (CCRF-CEM) were used as target cancer cells in this study, while the Sgc8 aptamer, which binds specifically to CCRF-CEM cells, was employed as a capture agent. We investigated the effects of flow rates and micropost shapes on cell capture efficiency and capture patterns on microposts. While a higher flow rate decreased cell capture efficiency, we found that the capture pattern around microposts also changed, with many more cells captured on the front half of a micropost than on the back half. We also found that the ratio of cells captured on microposts to cells captured by both microposts and channel walls increased as a function of the flow rate. We compared circular microposts with elliptical ones and found that the geometry affected the capture distribution around microposts. In addition, we developed a theoretical model to simulate the interactions between tumor cells and micropost surfaces, and the simulation results are in agreement with our experimental observations.

4.
Microsyst Nanoeng ; 3: 17055, 2017.
Article in English | MEDLINE | ID: mdl-31057881

ABSTRACT

We report using an airbrush to pattern a number of reagents, including small molecules, proteins, DNA, and conductive microparticles, onto a variety of mechanical substrates such as paper and glass. Airbrushing is more economical and easier to perform than many other available patterning methods (for example, inkjet printing). In this work, we investigated the controllable parameters that affect patterned line width and studied their mechanisms of action, and we provide examples of possible patterns. This airbrushing approach allowed us to pattern lines and dot arrays with dimensions from hundreds of µm to tens of mm, length scales comparable to those of other patterning methods. Two applications, enzymatic assays and DNA hybridization, were chosen to demonstrate the compatibility of the method with biomolecules. This airbrushing method holds promise for making paper-based platforms less expensive and more accessible.

6.
Zookeys ; (550): 233-46, 2016.
Article in English | MEDLINE | ID: mdl-26877662

ABSTRACT

We describe a collaborative effort among four lead indexes of taxon names and nomenclatural acts (the International Plant Names Index (IPNI), Index Fungorum, MycoBank and ZooBank) and the journals PhytoKeys, MycoKeys and ZooKeys to create an automated, pre-publication registration workflow based on a server-to-server XML request/response model. The registration model for ZooBank uses the TaxPub schema, an extension to the Journal Article Tag Suite (JATS) of the National Library of Medicine (NLM). The indexing or registration model of IPNI and Index Fungorum will use the Taxonomic Concept Transfer Schema (TCS) as the basic standard for the workflow. Other journals and publishers who intend to implement automated, pre-publication registration of taxon names and nomenclatural acts can also use the open sample XML formats and the links to schemas and relevant information published in the paper.
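
The request/response exchange can be pictured with the following Python sketch, in which a publishing platform POSTs an XML payload describing a nomenclatural act and reads back an identifier. The endpoint, payload structure and response element are invented for illustration; the real workflow uses the TaxPub- and TCS-based formats referenced above.

    # Minimal sketch of a server-to-server registration call. Everything specific
    # here (URL, payload fields, response element) is a hypothetical stand-in.
    import requests
    from lxml import etree

    payload = """<registration>
      <taxonName>Aus bus sp. n.</taxonName>
      <journal>ZooKeys</journal>
    </registration>"""

    response = requests.post(
        "https://registry.example.org/register",  # hypothetical registry endpoint
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/xml"},
        timeout=30,
    )
    response.raise_for_status()
    identifier = etree.fromstring(response.content).findtext("identifier")
    print("Registered act:", identifier)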

7.
Biodivers Data J ; (3): e5063, 2015.
Article in English | MEDLINE | ID: mdl-26023286

ABSTRACT

Specimen data in taxonomic literature are among the highest quality primary biodiversity data. Innovative cybertaxonomic journals are using workflows that maintain data structure and disseminate electronic content to aggregators and other users; such structure is lost in traditional taxonomic publishing. Legacy taxonomic literature is a vast repository of knowledge about biodiversity. Currently, access to that resource is cumbersome, especially for non-specialist data consumers. Markup is a mechanism that makes this content more accessible, and it is especially suited to machine analysis. Fine-grained XML (Extensible Markup Language) markup was applied to all (37) open-access articles published in the journal Zootaxa containing treatments on spiders (Order: Araneae). The markup approach was optimized to extract primary specimen data from legacy publications. These data were combined with data from articles containing treatments on spiders published in Biodiversity Data Journal, where XML structure is part of the routine publication process. A series of charts was developed to visualize the content of specimen data in XML-tagged taxonomic treatments, either singly or in aggregate. The data can be filtered by several fields (including journal, taxon, institutional collection, collecting country, collector, author, article and treatment) to query particular aspects of the data. We demonstrate here that XML markup using GoldenGATE can address the challenge presented by unstructured legacy data and can extract structured primary biodiversity data that can be aggregated with, and jointly queried alongside, data from other Darwin Core-compatible sources, and we show how visualization of these data can communicate key information contained in biodiversity literature. We complement recent studies on aspects of biodiversity knowledge by using XML-structured data to explore 1) the time lag between species discovery and description, and 2) the prevalence of rarity in species descriptions.
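
To illustrate what querying XML-tagged treatments for specimen data can look like, the Python sketch below extracts a few specimen fields and aggregates them by country. The element and attribute names are simplified assumptions; real TaxPub/TaxonX documents use richer tag sets.

    # Minimal sketch: extract specimen (materials citation) fields from a marked-up
    # treatment and count records per collecting country. Tag and attribute names
    # are simplified assumptions, not a faithful rendering of the actual schemas.
    from collections import Counter
    from lxml import etree

    doc = etree.parse("treatment.xml")  # hypothetical marked-up article
    records = [
        {
            "collectingCountry": cit.get("collectingCountry"),
            "collectorName": cit.get("collectorName"),
            "specimenCount": cit.get("specimenCount"),
        }
        for cit in doc.xpath("//materialsCitation")
    ]
    by_country = Counter(r["collectingCountry"] for r in records)
    print(by_country.most_common())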

10.
Biodivers Data J ; (1): e1013, 2013.
Article in English | MEDLINE | ID: mdl-24723752

ABSTRACT

We demonstrate how a classical taxonomic description of a new species can be enhanced by applying new-generation molecular methods and novel computing and imaging technologies. A cave-dwelling centipede, Eupolybothrus cavernicolus Komericki & Stoev sp. n. (Chilopoda: Lithobiomorpha: Lithobiidae), found in a remote karst region in Knin, Croatia, is the first eukaryotic species for which, in addition to the traditional morphological description, we provide a fully sequenced transcriptome, a DNA barcode, detailed anatomical X-ray microtomography (micro-CT) scans, and a movie of the living specimen to document important traits of its ex-situ behaviour. By employing micro-CT scanning in a new species for the first time, we create a high-resolution morphological and anatomical dataset that allows virtual reconstructions of the specimen and subsequent interactive manipulation to test the recently introduced 'cybertype' notion. In addition, the transcriptome assembly comprises a total of 67,785 scaffolds, with an average length of 812 bp and an N50 of 1,448 bp (see GigaDB). Subsequent annotation of 22,866 scaffolds was conducted by tracing homologs against currently available databases, including Nr, SwissProt and COG. This pilot project illustrates a workflow for producing, storing, publishing and disseminating large datasets associated with the description of a new taxon. All data have been deposited in publicly accessible repositories, such as GigaScience GigaDB, NCBI, BOLD, Morphbank and Morphosource, and the respective open licenses used ensure their accessibility and re-usability.
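
For readers unfamiliar with the N50 statistic quoted above, the short Python sketch below shows how it is computed from a list of scaffold lengths; the lengths used are made-up example values, not the actual assembly.

    # Minimal sketch: compute the N50 of an assembly from scaffold lengths.
    # N50 is the length L such that scaffolds of length >= L together cover at
    # least half of the total assembly length. The values below are toy examples.
    def n50(lengths):
        total = sum(lengths)
        running = 0
        for length in sorted(lengths, reverse=True):
            running += length
            if running * 2 >= total:
                return length
        return 0

    print(n50([5000, 3000, 1500, 1000, 800, 400]))  # -> 3000 for these toy values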

11.
Biodivers Data J ; (1): e977, 2013.
Article in English | MEDLINE | ID: mdl-24723771

ABSTRACT

A total of 294 species from 31 families have been found in Galichitsa Mt. Of these, 85 species are new to the mountain, while 20 are also new to the fauna of the FYR of Macedonia. According to their current distribution, the established species can be assigned to 17 zoogeographical categories, grouped into 5 complexes (Cosmopolitan, Holarctic, European, Mediterranean, and Balkan endemics). Holarctic species are dominant (66.0%), followed by European (16.5%) and Mediterranean (9.3%) species. The endemics (6.2%) and Southeast European species (1.7%) emphasize the local character of this fauna, but their low percentage suggests an important process of colonization.

15.
PhytoKeys ; (9): 1-13, 2012.
Article in English | MEDLINE | ID: mdl-22371687

ABSTRACT

The paper describes a pilot project to convert a conventional floristic checklist, written in a standard word-processing program, into structured data in the Darwin Core Archive format. After peer review and editorial acceptance, the final revised version of the checklist was converted into a Darwin Core Archive by means of regular expressions and then published in both human-readable form, as a traditional botanical publication, and as Darwin Core Archive data files. The data were published and indexed through the Global Biodiversity Information Facility (GBIF) Integrated Publishing Toolkit (IPT), and significant portions of the text of the paper were used to describe the metadata on the IPT. After publication, the data will become available through the GBIF infrastructure and can be re-used on their own or collated with other data.
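
A rough illustration of the regular-expression step described above is given in the Python sketch below; the checklist line format, the pattern and the Darwin Core terms chosen are assumptions made for demonstration and do not reproduce the paper's actual expressions.

    # Minimal sketch: parse a simplified checklist line into Darwin Core terms with
    # a regular expression and write a one-row occurrence table of the kind bundled
    # in a Darwin Core Archive. The input format and pattern are invented examples.
    import csv
    import re

    line = "Poa annua L. -- meadows near the village, 450 m"
    pattern = re.compile(
        r"^(?P<scientificName>\S+ \S+) (?P<authorship>.+?) -- (?P<locality>.+)$"
    )

    match = pattern.match(line)
    if match:
        record = {
            "scientificName": match.group("scientificName"),
            "scientificNameAuthorship": match.group("authorship"),
            "locality": match.group("locality"),
        }
        with open("occurrence.txt", "w", newline="") as handle:
            writer = csv.DictWriter(handle, fieldnames=record.keys(), delimiter="\t")
            writer.writeheader()
            writer.writerow(record)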

18.
Zookeys ; (150): 89-116, 2011.
Article in English | MEDLINE | ID: mdl-22207808

ABSTRACT

We review the three most widely used XML schemas for marking up taxonomic texts: TaxonX, TaxPub and taXMLit. These are described from the viewpoint of their development history, current status, implementation, and use cases. The concept of the "taxon treatment" from the viewpoint of marking up taxonomy in XML is discussed. TaxonX and taXMLit are primarily designed for legacy literature, the former being more lightweight and focused on recovery of taxon treatments, the latter providing a much more detailed set of tags to facilitate data extraction and analysis. TaxPub is an extension of the National Library of Medicine Document Type Definition (NLM DTD) for taxonomy, focussed on layout and recovery, and, as such, is best suited for mark-up of new publications and their archiving in PubMed Central. All three schemas have their advantages and shortcomings and can be used for different purposes.

19.
Zookeys ; (90): 1-12, 2011 Apr 14.
Article in English | MEDLINE | ID: mdl-21594104

ABSTRACT

Scholarly publishing and citation practices have developed largely in the absence of versioned documents, and the digital age requires new practices that combine the old and the new. We describe how the original published source and a versioned wiki page based on it can be reconciled and combined into a single citation reference. We illustrate the citation mechanism by way of practical examples focusing on journal and wiki publishing of taxon treatments. Specifically, we discuss mechanisms for permanent cross-linking between the static original publication and the dynamic, versioned wiki, for automated export of journal content to the wiki to reduce the workload on authors, for combining the journal and wiki citations, and for integrating them with the attribution of wiki contributors.

20.
Zookeys ; (50): 17-28, 2010 Jun 30.
Article in English | MEDLINE | ID: mdl-21594114

ABSTRACT

We describe a method to publish nomenclatural acts described on taxonomic websites (Scratchpads) that are formally registered through publication in a printed journal (ZooKeys). This method is fully compliant with the zoological nomenclatural code. Our approach supports manuscript creation (via a Scratchpad), electronic act registration (via ZooBank), online and print publication (in the journal ZooKeys) and simultaneous dissemination (ZooKeys and Scratchpads) for nomenclatural acts, including new species descriptions. The workflow supports the generation of manuscripts directly from a database and is illustrated by two sample papers published in the present issue.
