Results 1 - 20 of 100
1.
J Appl Microbiol; 2024 Aug 07.
Article in English | MEDLINE | ID: mdl-39113269

ABSTRACT

Public sector data associated with health is a highly valuable resource with multiple potential end-users, from health practitioners and researchers to public bodies, policy makers and industry. Data for infectious disease agents are used for epidemiological investigations, disease tracking and assessing emerging biological threats. Yet there are challenges in collating and re-using such data, which may be derived from multiple sources and generated and collected for different purposes. While public sector data should be open access, providers in public health, agriculture, food or environment settings must meet sensitivity criteria and ethical restrictions on how the data can be reused. At the same time, sharable datasets need to describe pathogens with sufficient contextual metadata for maximal utility, e.g. the associated disease or disease potential and the pathogen source. As the data comprise the physical resources of pathogen collections and potentially associated sequences, there is an added emerging technical issue of integrating 'omics big data. Thus, there is a need to identify suitable means to integrate and safely access diverse data for pathogens. Established genomics alliances and platforms interpret and meet these challenges in different ways depending on their own context. Nonetheless, their templates and frameworks provide a solution that can be adapted to pathogen datasets.

2.
Plant Methods; 20(1): 103, 2024 Jul 13.
Article in English | MEDLINE | ID: mdl-39003455

ABSTRACT

BACKGROUND: Genotyping of individuals plays a pivotal role in various biological analyses, with technology choice influenced by multiple factors including genomic constraints, the number of targeted loci and individuals, cost considerations, and the ease of sample preparation and data processing. Target enrichment capture of specific polymorphic regions has emerged as a flexible and cost-effective genomic reduction method for genotyping, especially well suited to very large genomes. However, this approach requires complex bioinformatic processing to extract genotyping data from raw reads. Existing workflows predominantly cater to phylogenetic inference, leaving a gap in user-friendly tools for genotyping analysis based on capture methods. In response to these challenges, we have developed GeCKO (Genotyping Complexity Knocked-Out). To assess the effectiveness of combining target enrichment capture with GeCKO, we conducted a case study on durum wheat domestication history, involving sequencing, processing, and analyzing variants in four relevant durum wheat groups. RESULTS: GeCKO encompasses four distinct workflows, each designed for specific steps of genomic data processing: (i) read demultiplexing and trimming for data cleaning, (ii) read mapping to align sequences to a reference genome, (iii) variant calling to identify genetic variants, and (iv) variant filtering. Each workflow in GeCKO can be easily configured and is executable across diverse computational environments. The workflows generate comprehensive HTML reports including key summary statistics and illustrative graphs, ensuring traceable, reproducible results and facilitating straightforward quality assessment. A specific innovation within GeCKO is its 'targeted remapping' feature, designed for efficient processing of target enrichment capture data. This process consists of extracting reads mapped to the targeted regions, constructing a smaller sub-reference genome, and remapping the reads to this sub-reference, thereby enhancing the efficiency of subsequent steps. CONCLUSIONS: The case study results showed the expected intra-group diversity and inter-group differentiation levels, confirming the method's effectiveness for genotyping and analyzing genetic diversity in species with complex genomes. GeCKO streamlined the data processing, significantly improving computational performance and efficiency. The targeted remapping enabled straightforward SNP calling in durum wheat, a task otherwise complicated by the species' large genome size. This illustrates its potential applications in various biological research contexts.
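
As an illustration of the targeted-remapping idea described above (not GeCKO's actual implementation), here is a minimal Python sketch that chains standard tools; the file names, the single-end read assumption, and the use of samtools/bwa rather than GeCKO's own workflows are all assumptions.

```python
# Sketch of 'targeted remapping': keep only reads that hit the targeted
# regions, build a smaller sub-reference from those regions, and remap.
import subprocess

def run(cmd, stdout=None):
    # Echo and execute an external command, raising on failure.
    print(" ".join(cmd))
    subprocess.run(cmd, stdout=stdout, check=True)

def targeted_remap(bam="sample.bam", bed="targets.bed", ref="reference.fa"):
    # 1. Keep only reads overlapping the targeted regions, then dump to FASTQ.
    run(["samtools", "view", "-b", "-L", bed, "-o", "on_target.bam", bam])
    with open("on_target.fq", "w") as fq:
        run(["samtools", "fastq", "on_target.bam"], stdout=fq)

    # 2. Build a smaller sub-reference containing only the targeted regions
    #    (BED is 0-based half-open; faidx regions are 1-based inclusive).
    with open(bed) as f:
        regions = [f"{c}:{int(s) + 1}-{e}"
                   for c, s, e in (line.split()[:3] for line in f)]
    with open("subref.fa", "w") as out:
        run(["samtools", "faidx", ref, *regions], stdout=out)

    # 3. Remap the extracted reads against the much smaller sub-reference.
    run(["bwa", "index", "subref.fa"])
    with open("remapped.sam", "w") as sam:
        run(["bwa", "mem", "subref.fa", "on_target.fq"], stdout=sam)

if __name__ == "__main__":
    targeted_remap()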

3.
J Biomed Inform; 157: 104700, 2024 Jul 28.
Article in English | MEDLINE | ID: mdl-39079607

ABSTRACT

BACKGROUND: The future European Health Research and Innovation Cloud (HRIC), as a fundamental part of the European Health Data Space (EHDS), will promote the secondary use of data and the capability to push the boundaries of health research within an ethical and legally compliant framework that reinforces the trust of patients and citizens. OBJECTIVE: This study aimed to analyse health data management mechanisms in Europe to determine their alignment with FAIR principles and data discovery, generating best practices for new data hubs joining the HRIC ecosystem. To this end, the compliance of health data hubs with FAIR principles and data discovery was assessed, and a set of best practices for health data hubs was derived. METHODS: A survey was conducted in January 2022, involving 99 representative health data hubs from multiple countries, and 42 responses had been obtained by June 2022. Stratification methods were employed to cover different levels of granularity. The survey data were analysed to assess compliance with FAIR and data discovery principles. The study started with a general analysis of survey responses, followed by the creation of specific profiles based on three categories: organization type, function, and level of data aggregation. RESULTS: The study produced specific best practices for data hubs regarding the adoption of FAIR principles and data discoverability. It also provided an overview of the survey study and specific profiles derived from the category analysis, considering different types of data hubs. CONCLUSIONS: The study concluded that a significant number of health data hubs in Europe did not fully comply with FAIR and data discovery principles. However, it identified specific best practices that can guide new data hubs in adhering to these principles. The study highlighted the importance of aligning health data management mechanisms with FAIR principles to enhance interoperability and reusability in the future HRIC.

4.
JMIR Public Health Surveill; 10: e54281, 2024 Jul 23.
Article in English | MEDLINE | ID: mdl-39042429

ABSTRACT

Infectious disease (ID) cohorts are key to advancing public health surveillance, public policies, and pandemic responses. Unfortunately, ID cohorts often lack funding to store and share clinical-epidemiological (CE) data and high-dimensional laboratory (HDL) data long term, which is evident when the link between these data elements is not kept up to date. This becomes particularly apparent when smaller cohorts fail to address their initial scientific objectives due to limited case numbers, which also limits the potential to pool these studies to monitor long-term cross-disease interactions within and across populations. To facilitate advancements in cross-population inference and reuse of cohort data, the Reconciliation of Cohort Data for Infectious Diseases (ReCoDID) Consortium created a meta-cohort containing CE and HDL data from 9 arbovirus (arthropod-borne virus) cohorts in Latin America, retrospectively harmonized using the Maelstrom Research methodology and standardized to Clinical Data Interchange Standards Consortium (CDISC) standards. Interested parties will be able to access data dictionaries that include information on variables across the data sets via BioStudies. After consultation with each cohort, linked harmonized and curated human cohort data (CE and HDL) will be made accessible through the European Genome-phenome Archive platform to data users after their requests are evaluated by the ReCoDID Data Access Committee. This meta-cohort can facilitate various joint research projects (eg, on immunological interactions between sequential flavivirus infections and on the evaluation of potential biomarkers for severe arboviral disease).


Subject(s)
Arbovirus Infections, Humans, Arbovirus Infections/epidemiology, Cohort Studies, Latin America/epidemiology, Male, Female, Child, Arbovirus, Retrospective Studies, Adolescent, Preschool Child, Adult
5.
Data Brief; 55: 110687, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39049974

ABSTRACT

This data article presents a set of primary, analyzed, and digitalized mechanical testing datasets for nine copper alloys. The mechanical testing methods, including Brinell and Vickers hardness, tensile, stress-relaxation, and low-cycle fatigue (LCF) testing, were performed according to DIN/ISO standards. The primary testing data (84 files) mainly contain the raw measured data along with metadata on the testing processes, materials, and testing machines. Five secondary datasets, one for each testing method, were also provided by collecting the main metadata and measurement data from the primary data and the outputs of the data analyses. These datasets give materials scientists useful data for comparative material selection analyses by clarifying the wide range of mechanical properties of copper alloys, including Brinell and Vickers hardness, yield and tensile strengths, elongation, reduction of area, relaxed and residual stresses, and LCF life. Furthermore, both the primary and secondary datasets were digitalized by the approach introduced in the research article entitled "Toward a digital materials mechanical testing lab" [1]. The resulting linked open data are machine-processable semantic descriptions of the data and their generation processes and can be easily queried by semantic searches to enable advanced data-driven materials research.

6.
ACS Synth Biol; 13(8): 2621-2624, 2024 Aug 16.
Article in English | MEDLINE | ID: mdl-39051984

ABSTRACT

The BioRECIPE (Biological system Representation for Evaluation, Curation, Interoperability, Preserving, and Execution) knowledge representation format was introduced to standardize and facilitate human-machine interaction while creating, verifying, evaluating, curating, and expanding executable models of intra- and intercellular signaling. This format allows a human user to easily preview and modify any model component, while it is at the same time readable by machines and can be processed by a suite of model development and analysis tools. The BioRECIPE format is compatible with multiple representation formats, natural language processing tools, modeling tools, and databases that are used by the systems and synthetic biology communities.


Subject(s)
Synthetic Biology, Humans, Synthetic Biology/methods, Natural Language Processing, Software, Biological Models, Factual Databases, Systems Biology/methods
7.
Brief Bioinform; 25(4), 2024 May 23.
Article in English | MEDLINE | ID: mdl-38836701

ABSTRACT

Biomedical data are generated and collected from various sources, including medical imaging, laboratory tests and genome sequencing. Sharing these data for research can help address unmet health needs, contribute to scientific breakthroughs, accelerate the development of more effective treatments and inform public health policy. Due to the potential sensitivity of such data, however, privacy concerns have led to policies that restrict data sharing. In addition, sharing sensitive data requires a secure and robust infrastructure with appropriate storage solutions. Here, we examine and compare the centralized and federated data sharing models through the prism of five large-scale and real-world use cases of strategic significance within the European data sharing landscape: the French Health Data Hub, the BBMRI-ERIC Colorectal Cancer Cohort, the federated European Genome-phenome Archive, the Observational Medical Outcomes Partnership/OHDSI network and the EBRAINS Medical Informatics Platform. Our analysis indicates that centralized models facilitate data linkage, harmonization and interoperability, while federated models facilitate scaling up and legal compliance, as the data typically reside on the data generator's premises, allowing for better control of how data are shared. This comparative study thus offers guidance on the selection of the most appropriate sharing strategy for sensitive datasets and provides key insights for informed decision-making in data sharing efforts.


Subject(s)
Biological Science Disciplines, Information Dissemination, Humans, Medical Informatics/methods
8.
Acta Crystallogr D Struct Biol; 80(Pt 6): 439-450, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38832828

ABSTRACT

The expansive scientific software ecosystem, characterized by millions of titles across various platforms and formats, poses significant challenges in maintaining reproducibility and provenance in scientific research. The diversity of independently developed applications, evolving versions and heterogeneous components highlights the need for rigorous methodologies to navigate these complexities. In response to these challenges, the SBGrid team builds, installs and configures over 530 specialized software applications for use in the on-premises and cloud-based computing environments of SBGrid Consortium members. To address the intricacies of supporting this diverse application collection, the team has developed the Capsule Software Execution Environment, generally referred to as Capsules. Capsules rely on a collection of programmatically generated bash scripts that work together to isolate the runtime environment of one application from all other applications, thereby providing a transparent cross-platform solution without requiring specialized tools or elevated account privileges for researchers. Capsules facilitate modular, secure software distribution while maintaining a centralized, conflict-free environment. The SBGrid platform, which combines Capsules with the SBGrid collection of structural biology applications, aligns with FAIR goals by enhancing the findability, accessibility, interoperability and reusability of scientific software, ensuring seamless functionality across diverse computing environments. Its adaptability enables applications beyond structural biology in other scientific fields.


Subject(s)
Software, Computational Biology/methods
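
The abstract describes Capsules as programmatically generated bash scripts that isolate one application's runtime environment. The sketch below is not SBGrid's actual generator; it is a minimal Python illustration of that idea, with the install layout, paths, and the `env -i` isolation strategy all assumptions.

```python
# Minimal sketch of the capsule idea: generate a wrapper script that
# launches one application with a clean, minimal environment so it cannot
# pick up binaries or libraries from other installed applications.
import os
import stat

WRAPPER_TEMPLATE = """#!/bin/bash
# Auto-generated capsule wrapper for {name} {version}
env -i \\
  HOME="$HOME" \\
  PATH="{prefix}/bin:/usr/bin:/bin" \\
  LD_LIBRARY_PATH="{prefix}/lib" \\
  "{prefix}/bin/{name}" "$@"
"""

def make_capsule(name, version, install_root="/opt/apps"):
    # Each application version lives under its own prefix (illustrative layout).
    prefix = f"{install_root}/{name}/{version}"
    os.makedirs("capsules", exist_ok=True)
    wrapper = os.path.join("capsules", name)
    with open(wrapper, "w") as f:
        f.write(WRAPPER_TEMPLATE.format(name=name, version=version, prefix=prefix))
    # Mark the generated wrapper as executable for all users.
    os.chmod(wrapper, os.stat(wrapper).st_mode
             | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)
    return wrapper

print(make_capsule("phenix", "1.21"))  # hypothetical application/version
```
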
9.
J Biomed Inform; 154: 104647, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38692465

ABSTRACT

OBJECTIVE: To use software, datasets, and data formats in the domain of infectious disease epidemiology as a test collection to evaluate a novel M1 use case, which we introduce in this paper. M1 is a machine that, upon receipt of a new digital object of research, exhaustively finds all valid compositions of it with existing objects. METHOD: We implemented a data-format-matching-only M1 using exhaustive search, which we refer to as M1DFM. We then ran M1DFM on the test collection and used error analysis to identify needed semantic constraints. RESULTS: Precision of M1DFM search was 61.7%. Error analysis identified needed semantic constraints and needed changes in the handling of data services. Most semantic constraints were simple, but one data format was sufficiently complex that representing semantic constraints over it was practically impossible, from which we conclude that software developers will have to meet the machines halfway by engineering software whose inputs are sufficiently simple that their semantic constraints can be represented, akin to the simple APIs of services. We summarize these insights as M1-FAIR guiding principles for composability and suggest a roadmap for progressively capable devices in the service of reuse and accelerated scientific discovery. CONCLUSION: Algorithmic search of digital repositories for valid workflow compositions has the potential to accelerate scientific discovery but requires a scalable solution to the problem of knowledge acquisition about semantic constraints on software inputs. Additionally, practical limitations on the logical complexity of semantic constraints must be respected, which has implications for the design of software.


Subject(s)
Software, Humans, Semantics, Machine Learning, Algorithms, Factual Databases
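
A data-format-matching-only search like M1DFM can be pictured with a toy example. The sketch below exhaustively enumerates valid pairwise compositions purely by matching declared output formats to input formats; the tool and format names are invented, and the real M1 adds semantic constraints on top of this format matching.

```python
# Toy data-format-matching composition search: list every ordered pair of
# tools where one tool's output format matches another's input format.
from itertools import product

tools = {
    "align_reads":   {"inputs": {"fastq", "fasta"}, "outputs": {"sam"}},
    "sam_to_bam":    {"inputs": {"sam"},            "outputs": {"bam"}},
    "call_variants": {"inputs": {"bam", "fasta"},   "outputs": {"vcf"}},
    "plot_vcf":      {"inputs": {"vcf"},            "outputs": {"png"}},
}

def valid_compositions(tools):
    """Yield (producer, consumer, format) triples for valid compositions."""
    for a, b in product(tools, repeat=2):
        if a == b:
            continue
        # A composition is valid when an output format of a matches an
        # input format of b (no semantic constraints in this toy version).
        for fmt in tools[a]["outputs"] & tools[b]["inputs"]:
            yield a, b, fmt

for a, b, fmt in valid_compositions(tools):
    print(f"{a} --[{fmt}]--> {b}")
```
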
10.
Int J Digit Humanit; 6(1): 23-43, 2024.
Article in English | MEDLINE | ID: mdl-38799025

ABSTRACT

Reproducibility has become a requirement in the hard sciences, and its adoption is gradually extending to the digital humanities. The FAIR criteria and the publication of data papers are both indicative of this trend. However, the question that arises is whether the strict prerequisites of digital reproducibility serve only to exclude digital humanities from broader humanities scholarship. Instead of adopting a binary approach, an alternative method acknowledges the unique features of the objects, inquiries, and techniques of the humanities, including digital humanities, as well as the social and historical contexts in which the concept of reproducibility has developed in the human sciences. In the first part of this paper, I propose to examine the historical and disciplinary context in which the concept of reproducibility has developed within the human sciences, and the disciplinary struggles involved in this process, especially for art history and literature studies. In the second part, I will explore the question of reproducibility through two art history research projects that utilize various computational methods. I argue that issues of corpus, method, and interpretation cannot be separated, rendering a procedural definition of reproducibility impractical. Consequently, I propose the adoption of 'post-computational reproducibility', which is based on FAIREST criteria as far as digital corpora are concerned (FAIR + Ethics and Expertise, Source mention + Time-Stamp), but extended to include further sources that confirm computational results with other non-computational methodologies.

11.
Front Cell Infect Microbiol; 14: 1384809, 2024.
Article in English | MEDLINE | ID: mdl-38774631

ABSTRACT

Introduction: Sharing microbiome data among researchers fosters new innovations and reduces the cost of research. Practically, this means that the (meta)data have to be standardized, transparent and readily available for researchers. The microbiome data and associated metadata are then described with regard to composition and origin, in order to maximize the possibilities for application in various contexts of research. Here, we propose a set of tools and protocols to develop a real-time FAIR (Findable, Accessible, Interoperable and Reusable) compliant database for the handling and storage of human microbiome and host-associated data. Methods: The conflicts arising from privacy laws with respect to metadata, possible human genome sequences in the metagenome shotgun data and FAIR implementations are discussed. Alternative pathways for achieving compliance in such conflicts are analyzed. Sample-traceable and sensitive microbiome data, such as DNA sequences or geolocalized metadata, are identified, and the role of the GDPR (General Data Protection Regulation) data regulations is considered. For the construction of the database, procedures were implemented to make data FAIR compliant while preserving the privacy of the participants providing the data. Results and discussion: An open-source development platform, Supabase, was used to implement the microbiome database. Researchers can deploy this real-time database to access, upload, download and interact with human microbiome data in a FAIR compliant manner. In addition, a large language model (LLM) powered by ChatGPT was developed and deployed to enable knowledge dissemination and non-expert usage of the database.


Subject(s)
Microbiota, Humans, Microbiota/genetics, Factual Databases, Metadata, Metagenome, Information Dissemination, Computational Biology/methods, Metagenomics/methods, Genetic Databases
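
For readers unfamiliar with Supabase, the sketch below shows what programmatic access to such a database could look like using the open-source supabase-py client; the project URL, key, table, and column names are placeholders, not the schema from the paper.

```python
# Hypothetical read access to a Supabase-hosted microbiome metadata table.
from supabase import create_client

SUPABASE_URL = "https://your-project.supabase.co"  # placeholder
SUPABASE_KEY = "your-anon-key"                     # placeholder

client = create_client(SUPABASE_URL, SUPABASE_KEY)

# Fetch metadata for gut samples. Row-level security policies configured
# server-side are one way privacy (e.g. GDPR) constraints can be enforced
# regardless of which client queries the data.
response = (
    client.table("samples")                          # hypothetical table
    .select("sample_id, host, body_site, country")   # hypothetical columns
    .eq("body_site", "gut")
    .limit(10)
    .execute()
)
for row in response.data:
    print(row)
```
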
12.
PeerJ Comput Sci; 10: e1951, 2024.
Article in English | MEDLINE | ID: mdl-38660149

ABSTRACT

Software plays a fundamental role in research as a tool, an output, or even an object of study. This special issue on software citation, indexing, and discoverability brings together five papers examining different aspects of how the use of software is recorded and made available to others. They describe new datasets that enable large-scale analysis of the evolution of software usage and citation, present evidence of increased citation rates when software artifacts are released, provide guidance for registries and repositories to support software citation and findability, and show that there are still barriers to improving and formalising software citation and publication practice. As the use of software increases further, driven by modern research methods, addressing the barriers to software citation and discoverability will encourage greater sharing and reuse of software, in turn enabling research progress.

13.
J Biomed Semantics; 15(1): 1, 2024 Mar 04.
Article in English | MEDLINE | ID: mdl-38438913

ABSTRACT

The increasing number of articles on adverse interactions that may occur when specific foods are consumed with certain drugs makes it difficult to keep up with the latest findings. Conflicting information is available in the scientific literature and specialized knowledge bases because interactions are described in an unstructured or semi-structured format. The FIDEO ontology aims to integrate and represent information about food-drug interactions in a structured way. This article reports on the new version of this ontology, in which more than 1700 interactions are integrated from two online resources: DrugBank and Hedrine. These food-drug interactions have been represented in FIDEO in the form of precompiled concepts, each of which specifies both the food and the drug involved. Additionally, the competency questions that the ontology can answer are reviewed, and avenues for further enrichment are discussed.


Subject(s)
Food-Drug Interactions, Knowledge Bases
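
A competency question such as "which foods interact with a given drug?" is typically answered with a SPARQL query over the ontology. The rdflib sketch below illustrates this pattern only; the file path, namespace, class, and property names are invented placeholders, not FIDEO's actual identifiers.

```python
# Querying a food-drug interaction ontology with rdflib (illustrative only).
from rdflib import Graph

g = Graph()
g.parse("fideo.owl")  # local copy of the ontology; path is illustrative

QUERY = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX ex:   <http://example.org/fdi#>

SELECT ?food ?drug WHERE {
    ?interaction a ex:FoodDrugInteraction ;   # hypothetical class
                 ex:involvesFood ?foodClass ; # hypothetical properties
                 ex:involvesDrug ?drugClass .
    ?foodClass rdfs:label ?food .
    ?drugClass rdfs:label ?drug .
}
"""

for food, drug in g.query(QUERY):
    print(f"{food} interacts with {drug}")
```
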
15.
Stud Health Technol Inform; 310: 154-158, 2024 Jan 25.
Article in English | MEDLINE | ID: mdl-38269784

ABSTRACT

Decision-making in healthcare is heavily reliant on data that are findable, accessible, interoperable and reusable (FAIR). Evolving advances in genomics also rely heavily on FAIR data to steer reliable research for the future. In practice, ensuring the FAIRness of a clinical data set can be challenging, but it can be aided by FAIR validators. This study describes a test of two open-access web tools, in their demo versions, to determine the FAIR levels of three submitted genomic data files in different formats (JSON, TXT, CSV). The F-UJI and FAIR-Checker tools provided similar FAIR scores for the three submitted files. However, the F-UJI tool assigned a total rating, whereas FAIR-Checker gave scores clustered by FAIR principle. Neither tool was suited to determining the FAIR level of a FHIR® JSON metadata file. Despite their early developmental status, FAIR validator tools have great potential to assist clinicians in the FAIRification of their research data.


Subject(s)
Genomics, Health Facilities, Metadata, Records
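
Beyond the demo web pages, F-UJI also exposes a REST API for self-hosted deployments. The sketch below follows the publicly documented endpoint and payload as a best-effort assumption; the credentials, payload fields, and response keys should all be verified against the F-UJI version you deploy.

```python
# Scoring a dataset with a self-hosted F-UJI service (assumed interface).
import requests

FUJI_API = "http://localhost:1071/fuji/api/v1/evaluate"  # local deployment

payload = {
    "object_identifier": "https://doi.org/10.5281/zenodo.3243963",  # any PID/URL
    "test_debug": True,
    "use_datacite": True,
}

resp = requests.post(
    FUJI_API,
    json=payload,
    auth=("username", "password"),  # placeholder credentials
    timeout=300,
)
resp.raise_for_status()
result = resp.json()

# The response includes per-metric results plus an aggregate summary;
# the exact key below is an assumption to check against your version.
print(result.get("summary", {}))
```
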
16.
J Med Internet Res; 25: e48702, 2023 Dec 28.
Article in English | MEDLINE | ID: mdl-38153779

ABSTRACT

To maximize the value of electronic health records (EHRs) for both health care and secondary use, the data must be interoperable and reusable without loss of their original meaning and context, in accordance with the findable, accessible, interoperable, and reusable (FAIR) principles. To achieve this, it is essential for health data platforms to incorporate standards that address needs such as the formal modeling of clinical knowledge (health domain concepts) as well as the harmonized persistence, query, and exchange of data across different information systems and organizations. However, the selection of these specifications has not been consistent across different health data initiatives, often applying standards to needs for which they were not originally designed. This issue is essential in the current scenario of implementing the European Health Data Space, which advocates harmonization, interoperability, and reuse of data without regulating the specific standards to be applied for this purpose. Therefore, this viewpoint aims to establish a coherent, agnostic, and homogeneous framework for the use of the most impactful EHR standards in new-generation health data spaces: openEHR, International Organization for Standardization (ISO) 13606, and Health Level 7 (HL7) Fast Healthcare Interoperability Resources (FHIR). To this end, a panel of EHR standards experts discussed several critical points and reached a consensus intended to serve decision-making teams in health data platform projects who may not be experts in these EHR standards. It was concluded that these specifications possess different capabilities related to modeling, flexibility, and implementation resources. Because of this, in the design of future data platforms, these standards must be applied according to the specific needs they were designed for, while remaining fully compatible with a combined functional and technical implementation.


Subject(s)
Electronic Health Records, HL7 Standard, Humans, Consensus, Knowledge, Reference Standards
17.
Neuron; 111(23): 3710-3715, 2023 Dec 06.
Article in English | MEDLINE | ID: mdl-37944519

ABSTRACT

Sharing human brain data can yield scientific benefits, but because of various disincentives, only a fraction of these data is currently shared. We profile three successful data-sharing experiences from the NIH BRAIN Initiative Research Opportunities in Humans (ROH) Consortium and demonstrate benefits to data producers and to users.


Subject(s)
Brain, Neurophysiology, Humans, Information Dissemination
18.
Anim Microbiome; 5(1): 48, 2023 Oct 05.
Article in English | MEDLINE | ID: mdl-37798675

ABSTRACT

BACKGROUND: Metagenomic data can shed light on animal-microbiome relationships and the functional potential of these communities. Over the past years, the generation of metagenomics data has increased exponentially, and so has the availability and reusability of data present in public repositories. However, identifying which datasets and associated metadata are available is not straightforward. We created the Animal-Associated Metagenome Metadata Database (AnimalAssociatedMetagenomeDB - AAMDB) to facilitate the identification and reuse of publicly available non-human, animal-associated metagenomic data, and metadata. Further, we used the AAMDB to (i) annotate common and scientific names of the species; (ii) determine the fraction of vertebrates and invertebrates; (iii) study their biogeography; and (iv) specify whether the animals were wild, pets, livestock or used for medical research. RESULTS: We manually selected metagenomes associated with non-human animals from SRA and MG-RAST. Next, we standardized and curated 51 metadata attributes (e.g., host, compartment, geographic coordinates, and country). The AAMDB version 1.0 contains 10,885 metagenomes associated with 165 different species from 65 different countries. From the collected metagenomes, 51.1% were recovered from animals associated with medical research or grown for human consumption (i.e., mice, rats, cattle, pigs, and poultry). Further, we observed an over-representation of animals collected in temperate regions (89.2%) and a lower representation of samples from the polar zones, with only 11 samples in total. The most common genus among invertebrate animals was Trichocerca (rotifers). CONCLUSION: Our work may guide host species selection in novel animal-associated metagenome research, especially in biodiversity and conservation studies. The data available in our database will allow scientists to perform meta-analyses and test new hypotheses (e.g., host-specificity, strain heterogeneity, and biogeography of animal-associated metagenomes), leveraging existing data. The AAMDB WebApp is a user-friendly interface that is publicly available at https://webapp.ufz.de/aamdb/.
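
As an example of the kind of reuse the authors envision, the sketch below filters an AAMDB-style metadata table with pandas; the file name and column headers are assumptions modeled on the attributes mentioned in the abstract (host category, geographic coordinates, country), not the database's actual export format.

```python
# Selecting candidate metagenomes from a hypothetical AAMDB metadata export.
import pandas as pd

meta = pd.read_csv("aamdb_metadata.csv")  # hypothetical export file

# Wild-animal samples from tropical latitudes (|latitude| < 23.4 degrees),
# a zone under-represented relative to temperate regions per the abstract.
subset = meta[
    (meta["host_category"] == "wild")
    & (meta["latitude"].abs() < 23.4)
]

# Count candidate metagenomes per country for a biogeography meta-analysis.
print(subset.groupby("country").size().sort_values(ascending=False).head(10))
```
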

19.
Proc Natl Acad Sci U S A; 120(43): e2206981120, 2023 Oct 24.
Article in English | MEDLINE | ID: mdl-37831745

ABSTRACT

In January 2023, a new NIH policy on data sharing went into effect. The policy applies to both quantitative and qualitative research (QR) data such as data from interviews or focus groups. QR data are often sensitive and difficult to deidentify, and thus have rarely been shared in the United States. Over the past 5 y, our research team has engaged stakeholders on QR data sharing, developed software to support data deidentification, produced guidance, and collaborated with the ICPSR data repository to pilot the deposit of 30 QR datasets. In this perspective article, we share important lessons learned by addressing eight clusters of questions on issues such as where, when, and what to share; how to deidentify data and support high-quality secondary use; budgeting for data sharing; and the permissions needed to share data. We also offer a brief assessment of the state of preparedness of data repositories, QR journals, and QR textbooks to support data sharing. While QR data sharing could yield important benefits to the research community, we quickly need to develop enforceable standards, expertise, and resources to support responsible QR data sharing. Absent these resources, we risk violating participant confidentiality and wasting a significant amount of time and funding on data that are not useful for either secondary use or data transparency and verification.

20.
Acta Crystallogr F Struct Biol Commun; 79(Pt 10): 267-273, 2023 Oct 01.
Article in English | MEDLINE | ID: mdl-37815476

ABSTRACT

A recent editorial in the IUCr macromolecular crystallography journals [Helliwell et al. (2019), Acta Cryst. D75, 455-457] called for the implementation of the FAIR data principles. This implies that the authors of a paper that describes research on a macromolecular structure should make their raw diffraction data available. Authors are already used to submitting the derived data (coordinates) and the processed data (structure factors, merged or unmerged) to the PDB, but may still be uncomfortable with making the raw diffraction images available. In this paper, some guidelines and instructions are given for depositing raw data in Zenodo.


Subject(s)
Crystallography, X-Ray Crystallography, Macromolecular Substances
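
For authors who prefer scripting to the web interface, a deposit can also be made through Zenodo's documented REST API. The sketch below follows that API; the token, file name, and metadata values are placeholders, and the final publish step is best rehearsed against sandbox.zenodo.org first.

```python
# Depositing a raw diffraction data archive via the Zenodo REST API.
import requests

ZENODO = "https://zenodo.org/api/deposit/depositions"
TOKEN = "your-zenodo-access-token"  # placeholder; create one in account settings
params = {"access_token": TOKEN}

# 1. Create an empty deposition.
r = requests.post(ZENODO, params=params, json={})
r.raise_for_status()
dep = r.json()

# 2. Upload the raw images archive into the deposition's file bucket.
with open("raw_images.tar", "rb") as fp:
    requests.put(f"{dep['links']['bucket']}/raw_images.tar",
                 data=fp, params=params).raise_for_status()

# 3. Attach minimal descriptive metadata (placeholder values).
metadata = {"metadata": {
    "title": "Raw diffraction images for structure XYZ",
    "upload_type": "dataset",
    "description": "Raw data underlying PDB entry XYZ.",
    "creators": [{"name": "Doe, Jane", "affiliation": "Example University"}],
}}
requests.put(f"{ZENODO}/{dep['id']}", params=params,
             json=metadata).raise_for_status()

# 4. Publish to mint a DOI (irreversible; test first on sandbox.zenodo.org).
requests.post(f"{ZENODO}/{dep['id']}/actions/publish",
              params=params).raise_for_status()
```
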