Results 1 - 20 of 869
2.
Int J Evid Proof ; 28(4): 280-297, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39376548

ABSTRACT

This paper explores the role of Intercept Interpreters/Translators (IITs) in law enforcement communication surveillance efforts. It focuses on the production and reliability of Translated Intercept Records (TIR), which are comprehensive written records in the target language that may be produced for intelligence purposes or for use in court as Translated Intercept Evidence (TIE). The paper underscores the critical importance of reliable TIR for both evidentiary use and operational decision-making. The authors emphasise the need to establish minimal standards in the quest for reliability and dispel the misconception that literal translations fulfil evidentiary requirements. The standards proposed in this paper aim to minimise interpretation errors to enhance the overall effectiveness of investigations and safeguard the interests of justice. The paper concludes by highlighting the need to align judicial expectations with sound translation practices.

3.
Neural Netw ; 181: 106781, 2024 Oct 05.
Article in English | MEDLINE | ID: mdl-39388994

ABSTRACT

Graph Neural Networks (GNNs) play a key role in efficiently learning node representations of graph-structured data through message passing, but their predictions are often correlated with sensitive attributes and thus lead to potential discrimination against some groups. Given the increasingly widespread applications of GNNs, solutions are required urgently to prevent algorithmic discrimination associated with GNNs, to protect the rights of vulnerable groups and to build trustworthy artificial intelligence. To learn the fair node representations of graphs, we propose a novel framework, the Fair Disentangled Graph Neural Network (FDGNN). With the proposed FDGNN framework, we enhance data diversity by generating instances that have identical sensitivity values but different adjacency matrices through data augmentation. Additionally, we design a counterfactual augmentation strategy for constructing instances with varying sensitive values while preserving the same adjacency matrices, thereby balancing the distribution of sensitive values across different groups. Subsequently, we employ a disentangled contrastive learning strategy to acquire disentangled representations of non-sensitive attributes such that sensitive information does not affect the prediction of node information. Finally, the learned fair representations of non-sensitive attributes are employed for building a fair predictive model. Extensive experiments on three real-world datasets demonstrate that FDGNN yields the best fairness predictions compared to the baseline methods. Additionally, the results demonstrate the potential of disentanglement in learning fair representations.
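The counterfactual augmentation step described above can be sketched as follows. This is an illustrative reconstruction, not the authors' FDGNN code; the function name and data layout are assumptions.

```python
import numpy as np

def counterfactual_augment(features, adjacency, sensitive_col):
    """Build a counterfactual instance: same adjacency matrix,
    binary sensitive attribute flipped for every node."""
    cf = features.copy()
    cf[:, sensitive_col] = 1 - cf[:, sensitive_col]  # flip the sensitive value
    return cf, adjacency.copy()                      # adjacency is preserved

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(5, 4)).astype(float)    # 5 nodes, column 0 = sensitive
A = np.eye(5)                                        # toy adjacency matrix
X_cf, A_cf = counterfactual_augment(X, A, sensitive_col=0)
```

Pairing each original instance with its counterfactual balances the distribution of sensitive values while holding the graph structure fixed, which is what lets the contrastive objective disentangle sensitive from non-sensitive information.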

4.
Comput Toxicol ; 30, 2024 May 16.
Article in English | MEDLINE | ID: mdl-39381054

ABSTRACT

The National Nanotechnology Initiative organized a Nanoinformatics Conference in the 2023 Biden-Harris Administration's Year of Open Science, which included interested U.S. and EU stakeholders and preceded the U.S.-EU COR meeting on November 15th, 2023 in Washington, D.C. Progress in the development of a common nanoinformatics infrastructure in the European Union and United States was discussed. The development of contributing individual database projects, along with their strengths and weaknesses, was highlighted. Recommendations and next steps for a U.S. nanoEHS common infrastructure were discussed in light of the pending update of the National Nanotechnology Initiative (NNI)'s Environmental, Health and Safety Research Strategy, and U.S. efforts to curate and house nano Environmental Health and Safety (nanoEHS) data from U.S. federal stakeholder groups. Improved data standards for reporting and storage were identified as the areas where concerted efforts could initially be most beneficial. Areas that were not addressed at the conference, but that are critical to the progress of the U.S. federal consortium effort, include the evaluation of data formats according to use and sustainability measures; modeler and end-user perspectives, including those of risk assessors and regulators; the need for a community forum or shared data location that is not hosted by any individual U.S. federal agency and is accessible to the public; and emerging needs for integration with new data types, such as micro- and nanoplastics, and for interoperability with other data and metadata, such as adverse outcome pathway information. Future progress will depend on continued interaction of the U.S. and EU CORs, stakeholders and partners in the continued development of shared or interoperable infrastructure for nanoEHS.

5.
Sensors (Basel) ; 24(19)2024 Oct 04.
Article in English | MEDLINE | ID: mdl-39409473

ABSTRACT

From June to October, 2022, we recorded the weight, the internal temperature, and the hive entrance video traffic of ten managed honey bee (Apis mellifera) colonies at a research apiary of the Carl Hayden Bee Research Center in Tucson, AZ, USA. The weight and temperature were recorded every five minutes around the clock. The 30 s videos were recorded every five minutes daily from 7:00 to 20:55. We curated the collected data into a dataset of 758,703 records (280,760-weight; 322,570-temperature; 155,373-video). A principal objective of Part I of our investigation was to use the curated dataset to investigate the discrete univariate time series forecasting of hive weight, in-hive temperature, and hive entrance traffic with shallow artificial, convolutional, and long short-term memory networks and to compare their predictive performance with traditional autoregressive integrated moving average models. We trained and tested all models with a 70/30 train/test split. We varied the intake and the predicted horizon of each model from 6 to 24 hourly means. Each artificial, convolutional, and long short-term memory network was trained for 500 epochs. We evaluated 24,840 trained models on the test data with the mean squared error. The autoregressive integrated moving average models performed on par with their machine learning counterparts, and all model types were able to predict falling, rising, and unchanging trends over all predicted horizons. We made the curated dataset public for replication.
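The windowed forecasting setup used in the study (hourly means, a 70/30 train/test split, intake and horizon varied between 6 and 24 steps, evaluation by mean squared error) can be sketched with a simple linear autoregressive baseline. This is not the authors' code, and the synthetic series below stands in for the real hive data.

```python
import numpy as np

def make_windows(series, intake, horizon):
    """Slice a series into (intake -> horizon) supervised pairs."""
    X, y = [], []
    for i in range(len(series) - intake - horizon + 1):
        X.append(series[i:i + intake])
        y.append(series[i + intake:i + intake + horizon])
    return np.array(X), np.array(y)

rng = np.random.default_rng(1)
hourly = np.sin(np.linspace(0, 40, 400)) + 0.05 * rng.standard_normal(400)

intake, horizon = 24, 6
X, y = make_windows(hourly, intake, horizon)
split = int(0.7 * len(X))                      # 70/30 train/test split
Xtr, ytr, Xte, yte = X[:split], y[:split], X[split:], y[split:]

# Linear autoregressive baseline fit by least squares
W, *_ = np.linalg.lstsq(Xtr, ytr, rcond=None)
mse = np.mean((Xte @ W - yte) ** 2)            # test-set mean squared error
```

The ARIMA and neural models in the study replace the least-squares step; the windowing and evaluation scheme stays the same.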


Subject(s)
Temperature , Animals , Bees/physiology , Forecasting/methods
6.
F1000Res ; 13, 2024.
Article in English | MEDLINE | ID: mdl-39410979

ABSTRACT

Research data management (RDM) is central to the implementation of the FAIR (Findable, Accessible, Interoperable, Reusable) and Open Science principles. Recognising the importance of RDM, ELIXIR Platforms and Nodes have invested in RDM and launched various projects and initiatives to ensure good data management practices for scientific excellence. These projects have resulted in a rich set of tools and resources highly valuable for FAIR data management. However, these resources remain scattered across projects and ELIXIR structures, making their dissemination and application challenging. It has therefore become imperative to coordinate these efforts towards sustainable and harmonised RDM practices, with dedicated forums for RDM professionals to exchange knowledge and share resources. The proposed ELIXIR RDM Community will bring together RDM experts to develop ELIXIR's vision and coordinate its activities, taking advantage of the available assets. It aims to coordinate RDM best practices and illustrate how to use the existing ELIXIR RDM services. The Community will be built around three integral pillars: a network of RDM professionals, RDM knowledge management, and RDM training expertise and resources. It will also engage with external stakeholders to leverage benefits and provide a forum for RDM professionals for regular knowledge exchange, capacity building and the development of harmonised RDM practices, in keeping with the overall scope of the RDM Community. In the short term, the Community aims to build upon the existing resources and ensure that their content remains up to date and fit for purpose. In the long run, the Community will aim to strengthen the skills and knowledge of its RDM professionals to support the emerging needs of the scientific community. The Community will also devise an effective strategy to engage with other ELIXIR structures and international stakeholders to influence and align with developments and solutions in the RDM field.


Subject(s)
Data Management , Data Management/methods , Humans , Research
7.
Front Med (Lausanne) ; 11: 1473874, 2024.
Article in English | MEDLINE | ID: mdl-39416867

ABSTRACT

Introduction: Data-driven medicine is essential for enhancing the accessibility and quality of the healthcare system. The availability of data plays a crucial role in achieving this goal. Methods: We propose implementing a robust data infrastructure for the FAIRification and fusion of clinical, genomic, and imaging data. This will be embedded within the framework of a distributed analytics platform for healthcare data analysis, utilizing the Personal Health Train paradigm. Results: This infrastructure will ensure the findability, accessibility, interoperability, and reusability of data, metadata, and results among the multiple medical centers participating in the BETTER Horizon Europe project. The project focuses on studying rare diseases, such as intellectual disability and inherited retinal dystrophies. Conclusion: The anticipated impacts will benefit a wide range of healthcare practitioners and potentially influence health policymakers.

8.
Health Care Anal ; 2024 Oct 24.
Article in English | MEDLINE | ID: mdl-39446253

ABSTRACT

Priority setting of scarce resources in healthcare is high on the agenda of most healthcare systems, implying a need to develop robust foundations for making fair allocation decisions. One central factor for such decisions in needs-based systems, following both empirical studies and theoretical analyses, is severity. However, it has been noted that severity is an under-theorized concept. One such aspect is how severity should relate to temporality. There is a rich discussion on temporality and distributive justice; however, this discussion needs to be adapted to the practical and ethical requirements of mid-level healthcare priority setting principles. In this article, we analyze how temporal aspects should be taken into account when assessing severity as a modifier for cost-effectiveness. We argue that when assessing the severity of a condition, we have reason to look at complete conditions from a time-neutral perspective, meaning that we take the full affectable stretch of the condition into account without modifying severity as patients move through the temporal stretch and without discounting the future. We do not find support for taking the 'shape' of a condition into account per se, e.g. whether the severity has a declining or inclining curve, or whether severity is intermittent rather than continuous. In order to take severity seriously, we argue that we have reason to apply a quantified approach in which every difference in severity impacts priority setting. In conclusion, we find that this approach is practically useful in actual priority setting.

9.
Health Informatics J ; 30(4): 14604582241287010, 2024.
Article in English | MEDLINE | ID: mdl-39367798

ABSTRACT

Objective: A comprehensive understanding of professional and technical terms is essential to achieving practical results in multidisciplinary projects dealing with health informatics and digital health. The medical informatics multilingual ontology (MIMO) initiative has been created through international cooperation. MIMO is continuously updated and comprises over 3700 concepts in 37 languages on the Health Terminology/Ontology Portal (HeTOP). Methods: We conducted case studies to assess the feasibility and impact of integrating MIMO into real-world healthcare projects. In HosmartAI, MIMO is used to index technological tools in a dedicated marketplace and to improve communication among partners. In SaNuRN, MIMO supports the development of a "Catalog and Index of Digital Health Teaching Resources" (CIDHR), supporting digital health resource retrieval for health and allied health students. Results: In HosmartAI, MIMO facilitates the indexation of technological tools and smooths interactions among partners. In SaNuRN, within CIDHR, MIMO ensures that students and practitioners access up-to-date, multilingual, high-quality resources to enhance their learning. Conclusion: Integrating MIMO into smart hospital training projects enables healthcare students and experts worldwide, with different mother tongues and backgrounds, to tackle the challenges facing the health informatics and digital health landscape and to find innovative solutions that improve initial and continuing education.


Asunto(s)
Inteligencia Artificial , Informática Médica , Humanos , Inteligencia Artificial/tendencias , Informática Médica/educación , Informática Médica/métodos , Hospitales , Salud Digital
10.
Front Big Data ; 7: 1428568, 2024.
Article in English | MEDLINE | ID: mdl-39351001

ABSTRACT

In today's data-centric landscape, effective data stewardship is critical for facilitating scientific research and innovation. This article provides an overview of essential tools and frameworks for modern data stewardship practices. Over 300 tools were analyzed in this study, assessing their utility, relevance to data stewardship, and applicability within the life sciences domain.

11.
SLAS Discov ; : 100188, 2024 Oct 18.
Article in English | MEDLINE | ID: mdl-39427714

ABSTRACT

We present a standardized metadata template for assays used in pharmaceutical drug discovery research, according to the FAIR principles. We also describe the use of an automated tool for annotating assays from a variety of sources, including PubChem, commercial assay providers, and the peer-reviewed literature, to this metadata template. Adoption of a standardized metadata template will allow drug discovery scientists to better understand and compare the increasing amounts of assay data becoming available, and will facilitate the use of artificial intelligence tools and other computational methods for analysis and prediction. Since bioassays drive advances in biomedical research, improvements in assay metadata can improve productivity in discovery of new therapeutics, platform technologies, and assay methods.
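A minimal sketch of how such a standardized template can be enforced programmatically: a fixed set of required fields and a validator that reports what a record is missing. The field names here are hypothetical illustrations, not the published template's.

```python
# Hypothetical required fields; the published template defines its own.
REQUIRED = {"assay_title", "assay_format", "detection_technology",
            "target", "source", "identifier"}

def validate(record):
    """Return the set of required metadata fields missing from a record."""
    return REQUIRED - record.keys()

record = {
    "assay_title": "Kinase inhibition assay",
    "assay_format": "biochemical",
    "detection_technology": "fluorescence intensity",
    "target": "ABL1",
    "source": "PubChem AID 1234",   # illustrative identifier
}
missing = validate(record)          # the record lacks "identifier"
```

An automated annotation tool of the kind described would populate such records from PubChem, vendor catalogs, or the literature, then run exactly this kind of completeness check before accepting an assay into the collection.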

12.
JAMIA Open ; 7(4): ooae105, 2024 Dec.
Article in English | MEDLINE | ID: mdl-39430802

ABSTRACT

Background: Procedural and reporting guidelines are crucial in framing scientific practices and communications among researchers and the broader community. These guidelines aim to ensure transparency, reproducibility, and reliability in scientific research. Despite several methodological frameworks proposed by various initiatives to foster reproducibility, challenges such as data leakage and reproducibility remain prevalent. Recent studies have highlighted the transformative potential of incorporating the FAIR (Findable, Accessible, Interoperable, and Reusable) principles into workflows, particularly in contexts like software and machine learning model development, to promote open science. Objective: This study aims to introduce a comprehensive framework, designed to calibrate existing reporting guidelines against the FAIR principles. The goal is to enhance reproducibility and promote open science by integrating these principles into the scientific reporting process. Methods: We employed the "Best fit" framework synthesis approach which involves systematically reviewing and synthesizing existing frameworks and guidelines to identify best practices and gaps. We then proposed a series of defined workflows to align reporting guidelines with FAIR principles. A use case was developed to demonstrate the practical application of the framework. Results: The integration of FAIR principles with established reporting guidelines through the framework effectively bridges the gap between FAIR metrics and traditional reporting standards. The framework provides a structured approach to enhance the findability, accessibility, interoperability, and reusability of scientific data and outputs. The use case demonstrated the practical benefits of the framework, showing improved data management and reporting practices. Discussion: The framework addresses critical challenges in scientific research, such as data leakage and reproducibility issues. 
By embedding FAIR principles into reporting guidelines, the framework ensures that scientific outputs are more transparent, reliable, and reusable. This integration not only benefits researchers by improving data management practices but also enhances the overall scientific process by promoting open science and collaboration. Conclusion: The proposed framework successfully combines FAIR principles with reporting guidelines, offering a robust solution to enhance reproducibility and open science. This framework can be applied across various contexts, including software and machine learning model development stages, to foster a more transparent and collaborative scientific environment.

13.
Wellcome Open Res ; 9: 523, 2024.
Article in English | MEDLINE | ID: mdl-39360219

ABSTRACT

Background: Data reusability is the driving force of the research data life cycle. However, implementing strategies to generate reusable data from the data creation to the sharing stages is still a significant challenge. Even when datasets supporting a study are publicly shared, the outputs are often incomplete and/or not reusable. The FAIR (Findable, Accessible, Interoperable, Reusable) principles were published as a general guidance to promote data reusability in research, but the practical implementation of FAIR principles in research groups is still falling behind. In biology, the lack of standard practices for a large diversity of data types, data storage and preservation issues, and the lack of familiarity among researchers are some of the main impeding factors to achieve FAIR data. Past literature describes biological curation from the perspective of data resources that aggregate data, often from publications. Methods: Our team works alongside data-generating, experimental researchers so our perspective aligns with publication authors rather than aggregators. We detail the processes for organizing datasets for publication, showcasing practical examples from data curation to data sharing. We also recommend strategies, tools and web resources to maximize data reusability, while maintaining research productivity. Conclusion: We propose a simple approach to address research data management challenges for experimentalists, designed to promote FAIR data sharing. This strategy not only simplifies data management, but also enhances data visibility, recognition and impact, ultimately benefiting the entire scientific community.


Researchers should openly share data associated with their publications unless there is a valid reason not to. Additionally, datasets have to be described with enough detail to ensure that they are reproducible and reusable by others. Since most research institutions offer limited professional support in this area, the responsibility for data sharing largely falls to researchers themselves. However, many research groups still struggle to follow data reusability principles in practice. In this work, we describe our data curation (data organization and management) efforts working directly with the researchers who create the data. We show the steps we took to organize, standardize, and share several datasets in biological sciences, pointing out the main challenges we faced. Finally, we suggest simple and practical data management actions, as well as tools that experimentalists can integrate into their daily work, to make sharing data easier and more effective.

14.
Biodivers Data J ; 12: e134364, 2024.
Article in English | MEDLINE | ID: mdl-39474431

ABSTRACT

Background: The dung beetle genus Scarabaeus (Coleoptera, Scarabaeinae, Scarabaeini), predominantly found in the arid regions of the Old World, includes three endemic species inhabiting the dry ecosystems of western and southern Madagascar. These species are presumed to form a monophyletic clade nested within the African Scarabaeus. Semantic modelling of phenotypes using ontologies represents a transformative approach to species description in biology, making phenotypic data FAIR and computable. The recently developed Phenoscript language enables the creation of semantic, computable species descriptions using a syntax akin to human natural language (NL). However, Phenoscript has not yet been tested as a tool for describing new taxa. New information: In this study, we test the utility of Phenoscript by describing a new species, Scarabaeus (sensu lato) sakalava sp. nov., from Madagascar. The initial description is composed directly in Phenoscript, replacing the traditional natural language format. This Phenoscript description is then translated into a human-readable form using the Phenospy tool for publication purposes. Additionally, the Phenoscript description is converted into an RDF graph, making it understandable by computers using semantic technologies. Scarabaeus sakalava sp. nov. is found in western central Madagascar and is closely related to S. viettei (Paulian, 1953) from north-western Madagascar. We provide an updated identification key and distribution map for all Malagasy Scarabaeus and discuss their systematic placement.
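The RDF conversion step can be illustrated with a toy example. This is not Phenoscript or Phenospy syntax, just the subject-predicate-object triple structure that a semantic species description ultimately compiles down to; the namespace and all term names are hypothetical.

```python
# Illustrative only: the triple structure behind a semantic description.
EX = "http://example.org/"          # hypothetical namespace

def triple(subj, pred, obj):
    """Build one (subject, predicate, object) triple as full IRIs."""
    return (EX + subj, EX + pred, EX + obj)

# A fragment of a phenotype description expressed as an RDF-style graph:
description = [
    triple("Scarabaeus_sakalava", "has_part", "pronotum_1"),
    triple("pronotum_1", "instance_of", "pronotum"),
    triple("pronotum_1", "has_quality", "punctate"),
]
subjects = {s for s, _, _ in description}
```

Because each statement is a machine-readable triple rather than free prose, such descriptions can be queried and compared computationally, which is the "FAIR and computable" property the abstract refers to.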

15.
Genome Biol Evol ; 16(10)2024 Oct 09.
Article in English | MEDLINE | ID: mdl-39404012

ABSTRACT

The era of biodiversity genomics is characterized by large-scale genome sequencing efforts that aim to represent each living taxon with an assembled genome. Generating knowledge from this wealth of data has not kept up with this pace. We here discuss major challenges to integrating these novel genomes into a comprehensive functional and evolutionary network spanning the tree of life. In summary, the expanding datasets create a need for scalable gene annotation methods. To trace gene function across species, new methods must seek to increase the resolution of ortholog analyses, e.g. by extending analyses to the protein domain level and by accounting for alternative splicing. Additionally, the scope of orthology prediction should be pushed beyond well-investigated proteomes. This demands the development of specialized methods for the identification of orthologs to short proteins and noncoding RNAs and for the functional characterization of novel gene families. Furthermore, protein structures predicted by machine learning are now readily available, but this new information is yet to be integrated with orthology-based analyses. Finally, an increasing focus should be placed on making orthology assignments adhere to the findable, accessible, interoperable, and reusable (FAIR) principles. This fosters green bioinformatics by avoiding redundant computations and helps integrating diverse scientific communities sharing the need for comparative genetics and genomics information. It should also help with communicating orthology-related concepts in a format that is accessible to the public, to counteract existing misinformation about evolution.


Asunto(s)
Biodiversidad , Genómica , Genómica/métodos , Animales , Evolución Molecular , Anotación de Secuencia Molecular , Biología Computacional/métodos
16.
bioRxiv ; 2024 Oct 21.
Article in English | MEDLINE | ID: mdl-39463999

ABSTRACT

We created GNQA, a generative pre-trained transformer (GPT) knowledge base driven by a performant retrieval augmented generation (RAG) system with a focus on aging, dementia, Alzheimer's and diabetes. We uploaded a corpus of three thousand peer-reviewed publications on these topics into the RAG. To address concerns about inaccurate responses and GPT 'hallucinations', we implemented a context provenance tracking mechanism that enables researchers to validate responses against the original material and to get references to the original papers. To assess the effectiveness of contextual information, we collected evaluations and feedback from both domain expert users and 'citizen scientists' on the relevance of GPT responses. A key innovation of our study is automated evaluation by way of a RAG assessment system (RAGAS). RAGAS combines human expert assessment with AI-driven evaluation to measure the effectiveness of RAG systems. When evaluating the responses to their questions, human respondents give a "thumbs-up" 76% of the time. Meanwhile, RAGAS scores 90% on answer relevance for questions posed by experts, and 74% for questions generated by GPT. With RAGAS we created a benchmark that can be used to continuously assess the performance of our knowledge base. Full GNQA functionality is embedded in the free GeneNetwork.org web service, an open-source system containing over 25 years of experimental data on model organisms and humans. The code developed for this study is published under a free and open-source software license at https://git.genenetwork.org/gn-ai/tree/README.md.
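A minimal sketch of context provenance tracking: every answer carries the identifiers of the retrieved passages it was generated from, so a reader can trace the response back to source papers. The toy word-overlap retriever, corpus, and all names are illustrative assumptions, not GNQA's actual components.

```python
# Toy corpus keyed by document identifier (stand-ins for real papers).
corpus = {
    "PMID:111": "Gene X variants are associated with late-onset dementia.",
    "PMID:222": "Caloric restriction extends lifespan in model organisms.",
}

def retrieve(query, k=1):
    """Toy retriever: rank passages by word overlap with the query."""
    def overlap(text):
        return len(set(query.lower().split()) & set(text.lower().split()))
    return sorted(corpus, key=lambda pid: overlap(corpus[pid]), reverse=True)[:k]

def answer(query):
    """Return an answer together with its provenance (source identifiers)."""
    sources = retrieve(query)
    return {"answer": " ".join(corpus[s] for s in sources),  # stand-in for the LLM call
            "sources": sources}                              # provenance for validation

resp = answer("which gene variants relate to dementia")
```

In a real RAG system the retriever uses vector embeddings and the answer comes from a language model, but the provenance contract is the same: the response object never leaves the retrieved-source identifiers behind.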

17.
J Neurosurg ; : 1-8, 2024 Oct 25.
Article in English | MEDLINE | ID: mdl-39454216

ABSTRACT

OBJECTIVE: FAIR Health, a nonprofit, state-funded database, was created as an independent repository of healthcare claims paid data to address allegations of price fixing. Many insurers have forced physicians to negotiate payments based on Medicare rates rather than utilizing FAIR Health. The authors' objective was to provide an overview of regional differences in reimbursement rates for several sample neurosurgical Current Procedural Terminology (CPT) codes and to compare Medicare, Medicaid, and usual, customary, and reasonable rates via FAIR Health rate estimates. METHODS: The authors compared FAIR Health rates for three common neurosurgical CPT codes: 61510 (removal of bone from skull for removal of upper brain tumor), 22630 (fusion of lower spine bones with removal of disc, posterior approach, single interspace), and 62223 (creation of a brain fluid drainage shunt, ventriculoperitoneal, ventriculopleural, or other terminus), with Medicare and Medicaid reimbursement to evaluate differences across five US regions. RESULTS: Medicare and Medicaid reimbursement rates were consistently and significantly lower than FAIR Health in-network rates across all three CPT codes evaluated (p < 0.001 for all). Significant regional differences exist per census data in median age, median income, employment rates, and degree of health coverage (p < 0.001, p = 0.002, p = 0.002, and p = 0.001, respectively). Reimbursement estimates showed regional variation: Medicare/Medicaid rates were significantly lower than FAIR Health in-network rates for all codes across regions, with a region-based interaction for reimbursement for code 62223 (p = 0.020). Medicare and Medicaid rates did not vary significantly across regions. CONCLUSIONS: Inherent differences exist between cities and states, including median income, employment rates, and health coverage. Despite geographic practice cost indices for Medicare and state-specific administration of Medicaid, Medicaid/Medicare reimbursement rates did not vary across regions but were consistently and significantly lower than FAIR Health estimates throughout the US. Locale-specific variation in FAIR Health may indicate a better accounting of regional differences in the cost of practice.

18.
Stud Health Technol Inform ; 318: 156-160, 2024 Sep 24.
Article in English | MEDLINE | ID: mdl-39320198

ABSTRACT

Reoperation is the most significant complication following any surgical procedure. Developing machine learning methods that predict the need for reoperation will allow for improved shared surgical decision-making and patient-specific preoperative optimisation. Yet no precise machine learning models have been published that perform well in predicting the need for reoperation within 30 days following primary total shoulder arthroplasty (TSA). This study built, trained, and evaluated a fair (unbiased) and explainable ensemble machine learning method that predicts return to the operating room following primary TSA, achieving an accuracy of 0.852 and an AUC of 0.91.
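The reported metrics can be computed from model outputs as follows. This sketch shows the standard definitions of accuracy and AUC applied to toy predictions, not the study's ensemble model or data.

```python
def accuracy(labels, scores, threshold=0.5):
    """Fraction of correct predictions after thresholding scores."""
    preds = [1 if s >= threshold else 0 for s in scores]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def auc(labels, scores):
    """AUC as the probability a random positive outranks a random negative
    (ties count as half a win)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: 5 patients, 1 = returned to the operating room
y = [0, 0, 0, 1, 1]
s = [0.1, 0.4, 0.35, 0.8, 0.7]
acc, roc = accuracy(y, s), auc(y, s)   # both 1.0 on this toy data
```

Accuracy depends on the chosen threshold, whereas AUC summarizes ranking quality across all thresholds, which is why studies like this one report both.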


Asunto(s)
Artroplastía de Reemplazo de Hombro , Aprendizaje Automático , Reoperación , Humanos , Complicaciones Posoperatorias , Quirófanos , Masculino , Femenino
19.
Biodivers Data J ; 12: e133775, 2024.
Article in English | MEDLINE | ID: mdl-39346621

ABSTRACT

Biodiversity data, particularly species occurrence and abundance, are indispensable for testing empirical hypotheses in the natural sciences. However, datasets built for research programmes often do not meet FAIR (findable, accessible, interoperable and reusable) principles, which raises questions about data quality, accuracy and availability. The 21st century has markedly been a new era for data science and analytics, and every effort to aggregate, standardise, filter and share biodiversity data from multiple sources has become increasingly necessary. In this study, we propose a framework for refining and conforming secondary biodiversity data to FAIR standards to make them available for uses such as macroecological modelling and other studies. We relied on a Darwin Core base model to standardise, and further facilitate the curation and validation of, data including the occurrence and abundance of multiple taxa in a region that encompasses estuarine ecosystems in an ecotonal area bordering easternmost Amazonia. We further discuss the significance of feeding standardised public data repositories to advance scientific progress and highlight their role in biodiversity management and conservation.
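The Darwin Core standardisation at the heart of such a framework can be sketched as a column mapping. The Darwin Core terms below (occurrenceID, scientificName, eventDate, decimalLatitude, decimalLongitude, individualCount) are standard vocabulary; the raw column names and example values are hypothetical.

```python
# Map raw field-survey columns onto standard Darwin Core terms.
DWC_MAP = {
    "id": "occurrenceID",
    "species": "scientificName",
    "date": "eventDate",
    "lat": "decimalLatitude",
    "lon": "decimalLongitude",
    "count": "individualCount",
}

def to_darwin_core(row):
    """Rename mapped columns to Darwin Core terms; drop unmapped ones."""
    return {DWC_MAP[k]: v for k, v in row.items() if k in DWC_MAP}

raw = {"id": "occ-0001", "species": "Ucides cordatus",
       "date": "2022-07-14", "lat": -1.45, "lon": -48.50, "count": 3}
record = to_darwin_core(raw)
```

Once every source dataset is expressed in the same controlled vocabulary, records from different surveys can be aggregated, filtered, and fed to public repositories without per-dataset glue code.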

20.
JMIR Med Inform ; 12: e60293, 2024 Sep 30.
Article in English | MEDLINE | ID: mdl-39348178

ABSTRACT

BACKGROUND: Data element repositories facilitate high-quality medical data sharing by standardizing data and enhancing semantic interoperability. However, the application of repositories is confined to specific projects and institutions. OBJECTIVE: This study aims to explore potential issues and promote broader application of data element repositories within the medical field by evaluating and analyzing typical repositories. METHODS: Following the inclusion of 5 data element repositories through a literature review, a novel analysis framework consisting of 7 dimensions and 36 secondary indicators was constructed and used for evaluation and analysis. RESULTS: The study's results delineate the unique characteristics of different repositories and uncover specific issues in their construction. These issues include the absence of data reuse protocols and insufficient information regarding the application scenarios and efficacy of data elements. The repositories fully comply with only 45% (9/20) of the subprinciples for Findable and Reusable in the FAIR principles, while achieving a 95% (19/20 subprinciples) compliance rate for Accessible and 67% (10/15 subprinciples) for Interoperable. CONCLUSIONS: The recommendations proposed in this study address these issues to improve the construction and application of repositories, offering valuable insights to data managers, computer experts, and other pertinent stakeholders.


Asunto(s)
Semántica , Humanos , Almacenamiento y Recuperación de la Información/métodos , Almacenamiento y Recuperación de la Información/normas , Difusión de la Información/métodos