1.
Sensors (Basel); 23(11), 2023 May 27.
Article in English | MEDLINE | ID: mdl-37299850

ABSTRACT

The Internet of Things (IoT) is growing rapidly, and IoT devices are being deployed massively. However, interoperability with information systems remains a major challenge for this accelerated device deployment. Furthermore, IoT information is most often presented as Time Series (TS), and while the majority of studies in the literature focus on the prediction, compression, or processing of TS, no standardized representation format has emerged. Moreover, apart from interoperability, IoT networks contain multiple constrained devices that are designed with limitations in, e.g., processing power, memory, or battery life. Therefore, in order to reduce the interoperability challenges and increase the lifetime of IoT devices, this article introduces a new format for TS based on CBOR. The format exploits the compactness of CBOR by leveraging delta values to represent measurements, employing tags to represent variables, and utilizing templates to convert the TS data representation into the appropriate format for the cloud-based application. Moreover, we introduce new, refined, and structured metadata to represent additional information for the measurements; we then provide Concise Data Definition Language (CDDL) code to validate the CBOR structures against our proposal, and finally, we present a detailed performance evaluation to validate the adaptability and extensibility of our approach. Our results show that the actual data sent by IoT devices can be reduced by between 88% and 94% compared to JavaScript Object Notation (JSON), between 82% and 91% compared to Concise Binary Object Representation (CBOR) and ASN.1, and between 60% and 88% compared to Protocol Buffers. At the same time, the format can reduce Time-on-Air by between 84% and 94% when a Low Power Wide Area Network (LPWAN) technology such as LoRaWAN is employed, leading to a 12-fold increase in battery life compared to the CBOR format, or between a 9-fold and a 16-fold increase compared to Protocol Buffers and ASN.1, respectively. In addition, the proposed metadata represent an additional 0.5% of the overall data transmitted when networks such as LPWAN or Wi-Fi are employed. Finally, the proposed template and data format provide a compact representation of TS that can significantly reduce the amount of data transmitted while conveying the same information, extending the battery life and overall lifetime of IoT devices. Moreover, the results show that the proposed approach is effective for different data types and can be integrated seamlessly into existing IoT systems.
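The delta-value idea at the heart of the proposed format can be sketched in a few lines. The sketch below is a generic Python illustration (not the paper's actual CBOR encoding), showing why consecutive sensor readings compress well as deltas:

```python
import json

def delta_encode(samples):
    """Represent a numeric time series as a first value plus deltas."""
    if not samples:
        return []
    encoded = [samples[0]]
    for prev, cur in zip(samples, samples[1:]):
        encoded.append(cur - prev)
    return encoded

def delta_decode(encoded):
    """Invert delta_encode by cumulative summation."""
    decoded = []
    total = 0
    for d in encoded:
        total += d
        decoded.append(total)
    return decoded

# Sensor timestamps at a near-constant period compress well as deltas.
timestamps = [1684137600, 1684137660, 1684137720, 1684137780]
deltas = delta_encode(timestamps)          # [1684137600, 60, 60, 60]
assert delta_decode(deltas) == timestamps

# Even as plain JSON text, the delta form takes fewer bytes.
assert len(json.dumps(deltas)) < len(json.dumps(timestamps))
```

Small integers are exactly what compact binary encodings such as CBOR exploit: a value like 60 encodes in one or two bytes, whereas a full epoch timestamp needs several.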


Subjects
Data Compression; Internet of Things; Time Factors; Electric Power Supplies; Language
2.
J Comput Chem; 43(12): 879-887, 2022 05 05.
Article in English | MEDLINE | ID: mdl-35322441

ABSTRACT

The ThermoML Archive is a subset of Thermodynamics Research Center (TRC) data holdings corresponding to cooperation between NIST TRC and five journals: Journal of Chemical & Engineering Data (ISSN: 1520-5134), The Journal of Chemical Thermodynamics (ISSN: 1096-3626), Fluid Phase Equilibria (ISSN: 0378-3812), Thermochimica Acta (ISSN: 0040-6031), and International Journal of Thermophysics (ISSN: 1572-9567). Data from the initial cooperation (around 2003) through the 2019 calendar year are included. The archive has undergone a major update with the goal of improving the FAIRness and user experience of the data provided by the service. The web application provides comprehensive property browsing and searching capabilities; searching relies on a RESTful API provided by the Cordra software for managing digital objects. JSON files with a schema derived from ThermoML are provided as an additional serialization to lower the barrier to programmatic consumption of the information, for stakeholders who prefer JSON over XML. The ThermoML and JSON files for all available entries can be downloaded from data.nist.gov (https://data.nist.gov/od/id/mds2-2422).


Subjects
Software
3.
J Comput Chem; 42(6): 458-464, 2021 03 05.
Article in English | MEDLINE | ID: mdl-33368350

ABSTRACT

IOData is a free and open-source Python library for parsing, storing, and converting various file formats commonly used by quantum chemistry, molecular dynamics, and plane-wave density-functional-theory software programs. In addition, IOData supports a flexible framework for generating input files for various software packages. While designed and released for stand-alone use, its original purpose was to facilitate the interoperability of various modules in the HORTON and ChemTools software packages with external (third-party) molecular quantum chemistry and solid-state density-functional-theory packages. IOData is designed to be easy to use, maintain, and extend; this is why we wrote IOData in Python and adopted many principles of modern software development, including comprehensive documentation, extensive testing, continuous integration/delivery protocols, and package management. This article is the official release note of the IOData library.

4.
J Digit Imaging; 32(5): 832-840, 2019 10.
Article in English | MEDLINE | ID: mdl-30511282

ABSTRACT

The US Department of Veterans Affairs has been acquiring store and forward digital diabetic retinopathy surveillance retinal fundus images for remote reading since 2007. There are 900+ retinal cameras at 756 acquisition sites. These images are manually read remotely at 134 sites. A total of 2.1 million studies have been performed in the teleretinal imaging program. The human workload for reading images is rapidly growing. It would be ideal to develop an automated computer algorithm that detects multiple eye diseases as this would help standardize interpretations and improve efficiency of the image readers. Deep learning algorithms for detection of diabetic retinopathy in retinal fundus photographs have been developed and there are needs for additional image data to validate this work. To further this research, the Atlanta VA Health Care System (VAHCS) has extracted 112,000 DICOM diabetic retinopathy surveillance images (13,000 studies) that can be subsequently used for the validation of automated algorithms. An extensive amount of associated clinical information was added to the DICOM header of each exported image to facilitate correlation of the image with the patient's medical condition. The clinical information was saved as a JSON object and stored in a single Unlimited Text (VR = UT) DICOM data element. This paper describes the methodology used for this project and the results of applying this methodology.
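A minimal sketch of the JSON-in-DICOM pattern described above, with invented field names and a hypothetical private tag. A real implementation would write the element with a DICOM toolkit such as pydicom; here the data element is modeled as a plain dict to keep the sketch self-contained:

```python
import json

# Hypothetical clinical context gathered for one retinal image
# (all field names and values are invented for illustration).
clinical_info = {
    "diabetes_type": 2,
    "hba1c_percent": 7.9,
    "years_since_diagnosis": 11,
    "insulin_dependent": False,
}

# Serialize once, then store the string in a single text-valued
# DICOM data element (VR = UT); the private tag below is made up.
payload = json.dumps(clinical_info, separators=(",", ":"))
data_element = {"tag": "(0077,0010)", "VR": "UT", "value": payload}

# A downstream reader recovers the structured record from the element.
recovered = json.loads(data_element["value"])
assert recovered["hba1c_percent"] == 7.9
```

Because the whole record lives in one Unlimited Text element, the image file stays a valid DICOM object for viewers that ignore the private tag, while validation tools can still parse the clinical context.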


Subjects
Diabetic Retinopathy/diagnostic imaging; Radiology Information Systems/statistics & numerical data; United States Department of Veterans Affairs; Fundus Oculi; Humans; Retina/diagnostic imaging; United States
5.
BMC Bioinformatics; 19(1): 30, 2018 02 01.
Article in English | MEDLINE | ID: mdl-29390967

ABSTRACT

BACKGROUND: Application Programming Interfaces (APIs) are now widely used to distribute biological data, and many popular biological APIs developed by different research teams have adopted JavaScript Object Notation (JSON) as their primary data format. While usage of a common data format offers significant advantages, that alone is not sufficient for rich integrative queries across APIs. RESULTS: Here, we have implemented JSON for Linking Data (JSON-LD) technology on the BioThings APIs that we have developed: MyGene.info, MyVariant.info, and MyChem.info. JSON-LD provides a standard way to add semantic context to an existing JSON data structure, for the purpose of enhancing the interoperability between APIs. We demonstrated several use cases that were facilitated by semantic annotations using JSON-LD, including simpler and more precise query capabilities as well as API cross-linking. CONCLUSIONS: We believe that this pattern offers a generalizable solution for the interoperability of APIs in the life sciences.
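The JSON-LD mechanism can be illustrated with a toy record. The field names and context URIs below are illustrative, not the actual BioThings API schema:

```python
import json

# A plain JSON record, as an annotation API might return it
# (field names here are invented, not the real API's schema).
gene = {"symbol": "CDK2", "taxid": 9606, "entrezgene": "1017"}

# JSON-LD adds semantics by mapping each field to a shared URI,
# so independent APIs can agree on what the fields mean.
context = {
    "symbol": "http://identifiers.org/hgnc.symbol/",
    "taxid": "http://identifiers.org/taxonomy/",
    "entrezgene": "http://identifiers.org/ncbigene/",
}
gene_ld = {"@context": context, **gene}

# The document is still ordinary JSON; the data fields are untouched.
assert json.loads(json.dumps(gene_ld))["symbol"] == "CDK2"
```

Two APIs that map different field names to the same URI become cross-linkable: a client can follow the shared identifier rather than guessing from field spellings.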


Subjects
Information Storage and Retrieval/methods; Software; Biological Science Disciplines; Databases, Factual; Humans; Internet
6.
BMC Bioinformatics; 18(1): 175, 2017 Mar 17.
Article in English | MEDLINE | ID: mdl-28302053

ABSTRACT

BACKGROUND: The Biological Magnetic Resonance Data Bank (BMRB) is a public repository of Nuclear Magnetic Resonance (NMR) spectroscopic data of biological macromolecules. It is an important resource for many researchers using NMR to study structural, biophysical, and biochemical properties of biological macromolecules. It is primarily maintained and accessed in a flat file ASCII format known as NMR-STAR. While the format is human readable, the size of most BMRB entries makes computer readability and explicit representation a practical requirement for almost any rigorous systematic analysis. RESULTS: To aid in the use of this public resource, we have developed a package called nmrstarlib in the popular open-source programming language Python. The nmrstarlib's implementation is very efficient, both in design and execution. The library has facilities for reading and writing both NMR-STAR version 2.1 and 3.1 formatted files, parsing them into usable Python dictionary- and list-based data structures, making access and manipulation of the experimental data very natural within Python programs (i.e. "saveframe" and "loop" records represented as individual Python dictionary data structures). Another major advantage of this design is that data stored in original NMR-STAR can be easily converted into its equivalent JavaScript Object Notation (JSON) format, a lightweight data interchange format, facilitating data access and manipulation using Python and any other programming language that implements a JSON parser/generator (i.e., all popular programming languages). We have also developed tools to visualize assigned chemical shift values and to convert between NMR-STAR and JSONized NMR-STAR formatted files. Full API Reference Documentation, User Guide and Tutorial with code examples are also available. We have tested this new library on all current BMRB entries: 100% of all entries are parsed without any errors for both NMR-STAR version 2.1 and version 3.1 formatted files. 
We also compared our software to three currently available Python libraries for parsing NMR-STAR formatted files: PyStarLib, NMRPyStar, and PyNMRSTAR. CONCLUSIONS: The nmrstarlib package is a simple, fast, and efficient library for accessing data from the BMRB. The library provides an intuitive dictionary-based interface with which Python programs can read, edit, and write NMR-STAR formatted files and their equivalent JSONized NMR-STAR files. The nmrstarlib package can be used as a library for accessing and manipulating data stored in NMR-STAR files and as a command-line tool to convert from NMR-STAR file format into its equivalent JSON file format and vice versa, and to visualize chemical shift values. Furthermore, the nmrstarlib implementation provides a guide for effectively JSONizing other older scientific formats, improving the FAIRness of data in these formats.
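A toy version of the dictionary-based representation and its JSONized form, with invented entry values (the real NMR-STAR key names and nmrstarlib structures differ in detail):

```python
import json

# A toy saveframe modeled as nested dictionaries, mirroring the
# dictionary- and list-based interface the abstract describes
# (all keys and values below are invented for illustration).
saveframe = {
    "entry_information": {
        "Entry.ID": "15000",
        "Entry.Title": "Example protein",
        "loop_0": [
            {"Atom_chem_shift.Atom_ID": "CA", "Atom_chem_shift.Val": "58.2"},
            {"Atom_chem_shift.Atom_ID": "CB", "Atom_chem_shift.Val": "32.1"},
        ],
    }
}

# "JSONizing" is then a plain serialization of the same structures...
text = json.dumps(saveframe, indent=2)

# ...and the round trip back to Python preserves every record,
# which is what makes the JSON form usable from any language.
assert json.loads(text) == saveframe
```

This is the design payoff the abstract points to: once the flat-file format is parsed into dictionaries and lists, conversion to JSON is essentially free, and any language with a JSON parser can consume the data.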


Subjects
Databases, Factual; Software; Magnetic Resonance Spectroscopy
7.
Mol Biol Evol; 33(8): 2167-9, 2016 08.
Article in English | MEDLINE | ID: mdl-27189542

ABSTRACT

Model-based phylogenetic reconstructions increasingly consider spatial or phenotypic traits in conjunction with sequence data to study evolutionary processes. Alongside parameter estimation, visualization of ancestral reconstructions represents an integral part of these analyses. Here, we present a complete overhaul of the spatial phylogenetic reconstruction of evolutionary dynamics software, now called SpreaD3 to emphasize the use of data-driven documents, as an analysis and visualization package that primarily complements Bayesian inference in BEAST (http://beast.bio.ed.ac.uk, last accessed 9 May 2016). The integration of JavaScript D3 libraries (www.d3.org, last accessed 9 May 2016) offers novel interactive web-based visualization capacities that are not restricted to spatial traits and extend to any discrete or continuously valued trait for any organism of interest.


Subjects
Biological Evolution; Computational Biology/methods; Bayes Theorem; Computer Graphics; Computer Simulation; Evolution, Molecular; Internet; Phenotype; Phylogeny; Software
8.
Drug Discov Today; 29(4): 103944, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38460570

ABSTRACT

The Allotrope Foundation (AF) started as a group of pharmaceutical companies and instrument and software vendors that set out to simplify the exchange of data in the laboratory. After a decade of work, it released products that have found adoption in various companies. Most recently, the Allotrope Simple Model (ASM) was developed to speed up and widen that adoption. As a result, the Foundation has recently added chemical companies and, importantly, is reworking its business model to lower the entry barrier for smaller companies. Here, we present the proceedings from the Allotrope Connect Fall 2023 conference and summarize the technical and organizational developments at the Foundation since 2020.


Subjects
Commerce; Small Business
9.
Biol Methods Protoc; 9(1): bpae017, 2024.
Article in English | MEDLINE | ID: mdl-38566774

ABSTRACT

Object-oriented programming (OOP) embodies a software development paradigm grounded in representing real-world entities as objects, facilitating a more efficient and structured modelling approach. In this article, we explore the synergy between OOP principles and the TypeScript (TS) programming language to create a JSON-formatted database designed for storing arrays of biological features. This fusion of technologies fosters a controlled and modular code script, streamlining the integration, manipulation, expansion, and analysis of biological data, all while enhancing syntax for improved human readability, such as through the use of dot notation. We advocate for biologists to embrace Git technology, akin to the practices of programmers and coders, for initiating versioned and collaborative projects. Leveraging the widely accessible and acclaimed IDE, Visual Studio Code, provides an additional advantage. Not only does it support running a Node.js environment, which is essential for running TS, but it also efficiently manages GitHub versioning. We provide a use case involving taxonomic data structure, focusing on angiosperm legume plants. This method is characterized by its simplicity, as the tools employed are both fully accessible and free of charge, and it is widely adopted by communities of professional programmers. Moreover, we are dedicated to facilitating practical implementation and comprehension through a comprehensive tutorial, a readily available pre-built database at GitHub, and a new package at npm.
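The article implements its database in TypeScript; an analogous object-style taxon record can be sketched in Python with dataclasses (all names below are invented, and the real project's structure will differ):

```python
import json
from dataclasses import dataclass, asdict, field

# An object-style taxon record: typed fields, defaults, and dot-notation
# access, in the OOP spirit the abstract describes.
@dataclass
class Taxon:
    genus: str
    species: str
    family: str = "Fabaceae"          # legume default, per the use case
    synonyms: list = field(default_factory=list)

records = [
    Taxon("Mimosa", "pudica", synonyms=["Mimosa unijuga"]),
    Taxon("Arachis", "hypogaea"),
]

# Dot notation gives readable access; asdict() yields the JSON shape
# that would be committed to the versioned database file.
assert records[0].genus == "Mimosa"
database = json.dumps([asdict(t) for t in records], indent=2)
assert json.loads(database)[1]["species"] == "hypogaea"
```

The JSON text produced at the end is what a Git repository versions well: line-oriented, diffable, and reviewable in a pull request.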

10.
Online J Public Health Inform; 16: e56237, 2024 08 01.
Article in English | MEDLINE | ID: mdl-39088253

ABSTRACT

BACKGROUND: Metadata describe and provide context for other data, playing a pivotal role in enabling findability, accessibility, interoperability, and reusability (FAIR) data principles. By providing comprehensive and machine-readable descriptions of digital resources, metadata empower both machines and human users to seamlessly discover, access, integrate, and reuse data or content across diverse platforms and applications. However, the limited accessibility and machine-interpretability of existing metadata for population health data hinder effective data discovery and reuse. OBJECTIVE: To address these challenges, we propose a comprehensive framework using standardized formats, vocabularies, and protocols to render population health data machine-readable, significantly enhancing their FAIRness and enabling seamless discovery, access, and integration across diverse platforms and research applications. METHODS: The framework implements a 3-stage approach. The first stage is Data Documentation Initiative (DDI) integration, which involves leveraging the DDI Codebook metadata and documentation of detailed information for data and associated assets, while ensuring transparency and comprehensiveness. The second stage is Observational Medical Outcomes Partnership (OMOP) Common Data Model (CDM) standardization. In this stage, the data are harmonized and standardized into the OMOP CDM, facilitating unified analysis across heterogeneous data sets. The third stage involves the integration of Schema.org and JavaScript Object Notation for Linked Data (JSON-LD), in which machine-readable metadata are generated using Schema.org entities and embedded within the data using JSON-LD, boosting discoverability and comprehension for both machines and human users. We demonstrated the implementation of these 3 stages using the Integrated Disease Surveillance and Response (IDSR) data from Malawi and Kenya. 
RESULTS: The implementation of our framework significantly enhanced the FAIRness of population health data, resulting in improved discoverability through seamless integration with platforms such as Google Dataset Search. The adoption of standardized formats and protocols streamlined data accessibility and integration across various research environments, fostering collaboration and knowledge sharing. Additionally, the use of machine-interpretable metadata empowered researchers to efficiently reuse data for targeted analyses and insights, thereby maximizing the overall value of population health resources. The JSON-LD codes are accessible via a GitHub repository and the HTML code integrated with JSON-LD is available on the Implementation Network for Sharing Population Information from Research Entities website. CONCLUSIONS: The adoption of machine-readable metadata standards is essential for ensuring the FAIRness of population health data. By embracing these standards, organizations can enhance diverse resource visibility, accessibility, and utility, leading to a broader impact, particularly in low- and middle-income countries. Machine-readable metadata can accelerate research, improve health care decision-making, and ultimately promote better health outcomes for populations worldwide.
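The third stage of the framework, embedding Schema.org metadata as JSON-LD, can be sketched as follows. The dataset description below is a placeholder, not one of the actual IDSR records:

```python
import json

# Machine-readable metadata for a dataset, using Schema.org terms
# (the name, description, and license are invented placeholders).
metadata = {
    "@context": "https://schema.org",
    "@type": "Dataset",
    "name": "Example disease surveillance dataset",
    "description": "Weekly case counts, aggregated by district.",
    "license": "https://creativecommons.org/licenses/by/4.0/",
}

# Embedding the JSON-LD in an HTML page is what makes the dataset
# harvestable by services such as Google Dataset Search.
snippet = (
    '<script type="application/ld+json">'
    + json.dumps(metadata)
    + "</script>"
)
assert snippet.startswith('<script type="application/ld+json">')
```

The `script` element is inert for human readers of the page; only crawlers and metadata-aware tools parse it, which is why this pattern adds discoverability without changing the visible site.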

11.
Micromachines (Basel); 15(1), 2023 Dec 31.
Article in English | MEDLINE | ID: mdl-38258214

ABSTRACT

Microscale gas chromatography (µGC) systems are miniaturized instruments that typically incorporate one or several microfabricated fluidic elements; such systems are generally well suited for the automated sampling and analysis of gas-phase chemicals. Advanced µGC systems may incorporate more than 15 elements and operate these elements in different coordinated sequences to execute complex operations. In particular, the control software must manage the sampling and analysis operations of the µGC system in a time-sensitive manner; while operating multiple control loops, it must also manage error conditions, data acquisition, and user interactions when necessary. To address these challenges, this work describes the investigation of multithreaded control software and its evaluation with a representative µGC system. The µGC system is based on a progressive cellular architecture that uses multiple µGC cells to efficiently broaden the range of chemical analytes, with each cell incorporating multiple detectors. Implemented in Python language version 3.7.3 and executed by an embedded single-board computer, the control software enables the concurrent control of heaters, pumps, and valves while also gathering data from thermistors, pressure sensors, capacitive detectors, and photoionization detectors. A graphical user interface (UI) that operates on a laptop provides visualization of control parameters in real time. In experimental evaluations, the control software provided successful operation and readout for all the components, including eight sets of thermistors and heaters that form temperature feedback loops, two sets of pressure sensors and tunable gas pumps that form pressure head feedback loops, six capacitive detectors, three photoionization detectors, six valves, and an additional fixed-flow gas pump. A typical run analyzing 18 chemicals is presented. 
Although the operating system does not guarantee real-time operation, the relative standard deviations of the control loop timings were <0.5%. The control software successfully supported >1000 µGC runs that analyzed various chemical mixtures.

12.
F1000Res; 11: 475, 2022.
Article in English | MEDLINE | ID: mdl-35707001

ABSTRACT

The web tool Adamant has been developed to systematically collect research metadata as early as the conception of an experiment. Adamant enables a continuous, consistent, and transparent research data management (RDM) process, a key element of good scientific practice that ensures the path to Findable, Accessible, Interoperable, Reusable (FAIR) research data. It simplifies the creation of on-demand metadata schemas and the collection of metadata according to established or new standards. The approach is based on JavaScript Object Notation (JSON) Schema, where any valid schema can be presented as an interactive web form. Furthermore, Adamant eases the integration of numerous available RDM methods and software tools into everyday research activities, especially for small independent laboratories. A programming interface allows programmatic integration with other software tools such as electronic lab books or repositories. The user interface (UI) of Adamant is designed to be as user-friendly as possible. Each UI element is self-explanatory and intuitive to use, which makes it accessible for users who have little to no experience with the JSON format or programming in general. Several examples of research data management workflows that can be implemented using Adamant are introduced. Adamant (client-only version) is available from: https://plasma-mds.github.io/adamant.
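As an illustration of the schema-to-form approach, a minimal JSON Schema of the kind such a tool could render as a web form might look as follows (all field names are invented, not taken from Adamant's examples):

```json
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "Plasma experiment metadata",
  "type": "object",
  "required": ["experiment_id", "operator"],
  "properties": {
    "experiment_id": { "type": "string" },
    "operator": { "type": "string" },
    "gas_pressure_pa": { "type": "number", "minimum": 0 },
    "notes": { "type": "string" }
  }
}
```

Each property maps naturally to one form field, with `required` and `minimum` becoming client-side validation, which is what makes a generic schema-driven form renderer possible.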


Subjects
Data Management; Metadata; Humans; Software; Workflow
13.
Drug Discov Today; 27(1): 207-214, 2022 01.
Article in English | MEDLINE | ID: mdl-34332096

ABSTRACT

Standardizing data is crucial for preserving and exchanging scientific information. In particular, recording the context in which data were created ensures that information remains findable, accessible, interoperable, and reusable. Here, we introduce the concept of self-reporting data assets (SRDAs), which preserve data and contextual information. SRDAs are an abstract concept, which requires a suitable data format for implementation. Four promising data formats or languages are popularly used to represent data in pharma: JCAMP-DX, JSON, AnIML, and, more recently, the Allotrope Data Format (ADF). Here, we evaluate these four options in common use cases within the pharmaceutical industry using multiple criteria. The evaluation shows that ADF is the most suitable format for the implementation of SRDAs.


Subjects
Data Accuracy; Data Curation; Drug Industry; Information Dissemination/methods; Research Design/standards; Data Curation/methods; Data Curation/standards; Diffusion of Innovation; Drug Industry/methods; Drug Industry/organization & administration; Humans; Proof of Concept Study; Reference Standards; Technology, Pharmaceutical/methods
14.
PeerJ; 10: e12618, 2022.
Article in English | MEDLINE | ID: mdl-35186448

ABSTRACT

To be computationally reproducible and efficient, integration of disparate data depends on shared entities whose matching meaning (semantics) can be computationally assessed. For biodiversity data one of the most prevalent shared entities for linking data records is the associated taxon concept. Unlike Linnaean taxon names, the traditional way in which taxon concepts are provided, phylogenetic definitions are native to phylogenetic trees and offer well-defined semantics that can be transformed into formal, computationally evaluable logic expressions. These attributes make them highly suitable for phylogeny-driven comparative biology by allowing computationally verifiable and reproducible integration of taxon-linked data against Tree of Life-scale phylogenies. To achieve this, the first step is transforming phylogenetic definitions from the natural language text in which they are published to a structured interoperable data format that maintains strong ties to semantics and lends itself well to sharing, reuse, and long-term archival. To this end, we developed the Phyloreference Exchange Format (Phyx), a JSON-LD-based text format encompassing rich metadata for all elements of a phylogenetic definition, and we created a supporting software library, phyx.js, to streamline computational management of such files. Together they form a foundation layer for digitizing and computing with phylogenetic definitions of clades.


Subjects
Semantics; Software; Phylogeny; Biology; Records
15.
Data Brief; 31: 105757, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32529012

ABSTRACT

The Molecular Entities in Linked Data (MEiLD) dataset comprises data on distinct atoms, molecules, ions, ion pairs, radicals, radical ions, and other species that can be identified as separately distinguishable chemical entities. The dataset is provided in JSON-LD format and was generated by SDFEater, a tool that parses atoms, bonds, and other molecule data. MEiLD contains 349,960 'small' chemical entities. Our dataset is based on SDF files and is enriched with additional ontologies and line-notation data. As a basis, the MEiLD dataset uses the Resource Description Framework (RDF) data model. Storing the data in such a model preserves the semantic relations between entities, such as hierarchical and associative relations. To describe chemical molecules, vocabularies such as the Chemical Vocabulary for Molecular Entities (CVME) and the Simple Knowledge Organization System (SKOS) are used. The dataset can be beneficial, among others, for people developing research tools for cheminformatics and bioinformatics. In this paper, we describe various methods of access to our dataset. In addition to the MEiLD dataset, we publish the Shapes Constraint Language (SHACL) schema of our dataset and the CVME ontology. The data is available in Mendeley Data.

16.
Nanomaterials (Basel); 10(10), 2020 Sep 24.
Article in English | MEDLINE | ID: mdl-32987901

ABSTRACT

The field of nanoinformatics is rapidly developing and provides data-driven solutions in the area of nanomaterials (NM) safety. Safe-by-Design approaches are encouraged and promoted through regulatory initiatives and multiple scientific projects. Experimental data is at the core of nanoinformatics processing workflows for risk assessment. Nanosafety data is predominantly recorded in Excel spreadsheet files. Although spreadsheets are quite convenient for experimentalists, they also pose great challenges for subsequent processing into databases due to the variability of the templates used, the specific details provided by each laboratory, and the need for proper metadata documentation and formatting. In this paper, we present a workflow to facilitate the conversion of spreadsheets into a FAIR (Findable, Accessible, Interoperable, and Reusable) database, with the pivotal aid of the NMDataParser tool, developed to streamline the mapping of the original file layout into the eNanoMapper semantic data model. The NMDataParser is an open-source Java library and application that uses a JSON configuration to define the mapping. We describe the JSON configuration syntax and the approaches applied for parsing different spreadsheet layouts used by the nanosafety community. Examples of using the NMDataParser tool in nanoinformatics workflows are given. Challenging cases are discussed and appropriate solutions are proposed.
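The actual NMDataParser configuration syntax is defined by the project itself; the fragment below only conveys the general idea of a JSON mapping from spreadsheet columns to data-model fields (all keys and values are invented):

```json
{
  "template_name": "cytotoxicity_example",
  "sheet_index": 0,
  "columns": [
    { "excel_column": "A", "field": "material_id" },
    { "excel_column": "B", "field": "concentration", "unit": "ug/mL" },
    { "excel_column": "C", "field": "cell_viability", "unit": "%" }
  ]
}
```

Keeping the layout description in data rather than code is what lets one parser handle the many laboratory-specific templates: supporting a new spreadsheet layout means writing a new configuration file, not new Java.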

17.
J Cheminform; 9(1): 55, 2017 Oct 30.
Article in English | MEDLINE | ID: mdl-29086154

ABSTRACT

An end-to-end platform for chemical science research has been developed that integrates data from computational and experimental approaches through a modern web-based interface. The platform offers an interactive visualization and analytics environment that functions well on mobile, laptop, and desktop devices. It offers pragmatic solutions to ensure that large and complex data sets are more accessible. Existing desktop applications/frameworks were extended to integrate with high-performance computing resources, and the platform offers command-line tools to automate interaction, connecting distributed teams to this software platform on their own terms. The platform was developed openly, with all source code hosted on the GitHub platform and automated deployment possible using Ansible coupled with standard Ubuntu-based machine images deployed to cloud machines. The platform is designed to enable teams to reap the benefits of the connected web, going beyond what conventional search and analytics platforms offer in this area. It also has the goal of offering federated instances that can be customized to the sites/research performed. Data is stored using JSON, extending upon previous approaches using XML, building structures that support computational chemistry calculations. These structures were developed to make it easy to process data across different languages and to send data to a JavaScript-based web client.

18.
J Cheminform; 8: 54, 2016.
Article in English | MEDLINE | ID: mdl-27795738

ABSTRACT

With the move toward global, Internet-enabled science there is an inherent need to capture, store, aggregate, and search scientific data across a large corpus of heterogeneous data silos. As a result, standards development is needed to create an infrastructure capable of representing the diverse nature of scientific data. This paper describes a fundamental data model for scientific data that can be applied to data currently stored in any format, and an associated ontology that affords semantic representation of the structure of scientific data (and its metadata), upon which discipline-specific semantics can be applied. Application of this data model to experimental and computational chemistry data is presented, implemented using JavaScript Object Notation for Linked Data. Full examples are available at the project website (Chalk in SciData: a scientific data model. http://stuchalk.github.io/scidata/, 2016).

19.
Zookeys; (150): 117-26, 2011.
Article in English | MEDLINE | ID: mdl-22207809

ABSTRACT

GeoCAT is an open-source, browser-based tool that performs rapid geospatial analysis to ease the process of Red Listing taxa. Developed to utilise spatially referenced primary occurrence data, the analysis focuses on two aspects of the geographic range of a taxon: the extent of occurrence (EOO) and the area of occupancy (AOO). These metrics form part of the IUCN Red List categories and criteria and have often proved challenging to obtain in an accurate, consistent, and repeatable way. Within a familiar Google Maps environment, GeoCAT users can quickly and easily combine data from multiple sources such as GBIF, Flickr, and Scratchpads, as well as user-generated occurrence data. Analysis is done with the click of a button and is visualised instantly, providing an indication of the Red List threat rating, subject to meeting the full requirements of the criteria. Outputs including the results, data, and parameters used for analysis are stored in a GeoCAT file that can be easily reloaded or shared with collaborators. GeoCAT is a first step toward automating the data-handling process of Red List assessment and provides a valuable hub from which further developments and enhancements can be spawned.
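The AOO metric lends itself to a compact sketch: under the common IUCN convention it is the count of occupied 2 x 2 km grid cells times the cell area. The coordinates below are invented and assumed to already be in a metric projection (real occurrence data would first be projected from latitude/longitude):

```python
from math import floor

# IUCN-style area of occupancy: count occupied 2 x 2 km grid cells.
CELL_KM = 2.0

def area_of_occupancy(points_km):
    """points_km: iterable of (x, y) positions in kilometres."""
    occupied = {
        (floor(x / CELL_KM), floor(y / CELL_KM)) for x, y in points_km
    }
    return len(occupied) * CELL_KM * CELL_KM   # km^2

occurrences = [(0.5, 0.5), (1.9, 0.1), (5.0, 5.0), (7.1, 4.2)]
# The first two points share one cell; the other two occupy one cell each.
assert area_of_occupancy(occurrences) == 12.0
```

The EOO would instead be the area of the convex hull of the same points; together the two numbers capture range extent versus actual occupation, which is why the Red List criteria use both.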
