Results 1 - 20 of 20
1.
Microsc Microanal ; 2024 Jun 17.
Article in English | MEDLINE | ID: mdl-38885135

ABSTRACT

Atom probe tomography (APT) data analysis has traditionally relied on manual processing by researchers. As newer atom probes, together with focused ion beam-based specimen preparation, have opened APT to many more materials and yield much more complex mass spectra, building a systematic understanding of the pathway from raw data to final interpretation has become increasingly important. This demands a system in which the data and their treatment can be traced, ideally by any interested party. Such an approach of findable, accessible, interoperable, and reusable (FAIR) data and analysis policies is becoming increasingly important, and not just in APT. In this paper, we present a toolbox, written in MATLAB, that allows the user to store the raw and processed data in a standardized FAIR format (hierarchical data format 5, HDF5) and to process the data in a largely scriptable environment that minimizes manual user input. This allows experimental data to be exchanged without requiring explanations from the data owner and allows analyses to be reproduced. We have devised a metadata scheme that is extensible to other experiments in the materials science domain. With this toolbox, collective knowledge can be built up, and large numbers of data sets can be analyzed in a fully automated fashion.
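
A minimal sketch of how such a FAIR, self-described HDF5 file can be inspected outside the MATLAB toolbox, here using Python/h5py; the file name and dataset path are illustrative assumptions, not the toolbox's actual schema.

```python
import h5py

def dump_metadata(name, obj):
    """Print every attribute so raw data, processed data, and their
    provenance can be inspected without the data owner's help."""
    for key, value in obj.attrs.items():
        print(f"{name}: {key} = {value}")

with h5py.File("apt_experiment.h5", "r") as f:
    f.visititems(dump_metadata)                       # walk the whole file
    mass_spectrum = f["analysis/mass_spectrum"][...]  # assumed dataset path
```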

2.
BMC Bioinformatics ; 23(1): 16, 2022 Jan 06.
Article in English | MEDLINE | ID: mdl-34991457

ABSTRACT

BACKGROUND: Single-cell RNA sequencing is becoming a powerful tool to identify cell states, reconstruct developmental trajectories, and deconvolute spatial expression. The rapid development of computational methods promotes insight into heterogeneous single-cell data. An increasing number of tools are available to biological analysts, and two programming languages, R and Python, are widely used among researchers. R and Python are complementary, as many methods are implemented specifically in one or the other. However, the use of different platforms creates data sharing and transformation problems, especially among Scanpy, Seurat, and SingleCellExperiment. Currently, there is no efficient and user-friendly software to transform single-cell omics data between platforms, so users spend considerable time on data input and output (IO), significantly reducing the efficiency of data analysis. RESULTS: We developed scDIOR for single-cell data transformation between the R and Python platforms based on Hierarchical Data Format Version 5 (HDF5). We have created a data IO ecosystem between three R packages (Seurat, SingleCellExperiment, Monocle) and a Python package (Scanpy). Importantly, scDIOR accommodates a variety of data types across programming languages and platforms in an ultrafast way, including single-cell RNA-seq and spatially resolved transcriptomics data, using only a few lines of code in an IDE or command-line interface. For large-scale datasets, users can partially load the needed information, e.g., cell annotations without the gene expression matrices. scDIOR connects the analytical tasks of different platforms, which makes it easy to compare the performance of algorithms between them. CONCLUSIONS: scDIOR contains two modules, dior in R and diopy in Python. scDIOR is a versatile and user-friendly tool that implements single-cell data transformation between R and Python rapidly and stably. The software is freely accessible at https://github.com/JiekaiLab/scDIOR .
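
As a rough illustration of HDF5-based exchange between platforms (not scDIOR's actual dior/diopy schema), the Python/h5py sketch below writes a count matrix with cell annotations and then loads only the annotations, mirroring the partial-load use case; all group and dataset names are assumptions.

```python
import numpy as np
import h5py

counts = np.random.poisson(1.0, size=(100, 2000)).astype(np.int32)  # cells x genes
cell_types = np.array(["T", "B"] * 50, dtype="S8")                  # one label per cell

with h5py.File("sc_exchange.h5", "w") as f:
    f.create_dataset("X", data=counts, compression="gzip", chunks=True)
    obs = f.create_group("obs")
    obs.create_dataset("cell_type", data=cell_types)
    f.attrs["origin"] = "Scanpy"   # record the producing platform

# Partial load: read only the annotation, not the full expression matrix
with h5py.File("sc_exchange.h5", "r") as f:
    annotations = f["obs/cell_type"][...]
```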


Subject(s)
Ecosystem , Software , Algorithms , Programming Languages , RNA-Seq
3.
J Struct Biol ; 214(3): 107875, 2022 09.
Article in English | MEDLINE | ID: mdl-35724904

ABSTRACT

With larger, higher-speed detectors and improved automation, individual CryoEM instruments are capable of producing a prodigious amount of data each day, which must then be stored, processed and archived. While it has become routine to use lossless compression on raw counting-mode movies, the averages which result after correcting these movies no longer compress well. These averages could be considered sufficient for long-term archival, yet they are conventionally stored with 32 bits of precision, despite high noise levels. Derived images are similarly stored with excess precision, providing an opportunity to decrease project sizes and improve processing speed. We present a simple argument based on propagation of uncertainty for safe bit truncation of flat-fielded images combined with lossless compression. The same method can be used for most derived images throughout the processing pipeline. We test the proposed strategy on two standard, data-limited CryoEM data sets, demonstrating that these limits are safe for real-world use. We find that 5 bits of precision is sufficient for virtually any raw CryoEM data and that 8-12 bits is sufficient for intermediate averages or final 3-D structures. Additionally, we detail and recommend specific rules for discretization of data as well as a practical compressed data representation that is tuned to the specific needs of CryoEM.
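
A toy Python/NumPy sketch of the general idea of bit truncation followed by lossless compression; the quantization rule here is a simplification over the image's observed range, not the paper's propagation-of-uncertainty criterion.

```python
import numpy as np
import h5py

def truncate_bits(img, keep_bits=5):
    """Quantize a flat-fielded float image to 2**keep_bits levels over its
    observed range; the resulting small integers compress well losslessly."""
    lo, hi = float(img.min()), float(img.max())
    scale = (2**keep_bits - 1) / (hi - lo) if hi > lo else 1.0
    quantized = np.round((img - lo) * scale).astype(np.uint8)
    return quantized, lo, scale

rng = np.random.default_rng(0)
average = rng.normal(100.0, 10.0, size=(512, 512)).astype(np.float32)  # stand-in frame average

q, offset, scale = truncate_bits(average, keep_bits=5)
with h5py.File("average_truncated.h5", "w") as f:
    d = f.create_dataset("average", data=q, compression="gzip", shuffle=True, chunks=True)
    d.attrs["offset"] = offset      # stored so approximate values can be reconstructed
    d.attrs["scale"] = scale
    d.attrs["kept_bits"] = 5
```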


Subject(s)
Data Compression , Automation , Cryoelectron Microscopy/methods , Data Collection , Data Compression/methods
4.
J Proteome Res ; 20(1): 172-183, 2021 01 01.
Article in English | MEDLINE | ID: mdl-32864978

ABSTRACT

With ever-increasing amounts of data produced by mass spectrometry (MS) proteomics and metabolomics, and the sheer volume of samples now analyzed, the need for a common open format with both file-size efficiency and fast read/write speeds has become paramount to drive the next generation of data analysis pipelines. The Proteomics Standards Initiative (PSI) has established a clear and precise extensible markup language (XML) representation for data interchange, mzML, which has received substantial uptake; nevertheless, storage and file-access efficiency has not been its main focus. We propose an HDF5 file format, "mzMLb", that is optimized for both read/write speed and storage of the raw mass spectrometry data. We provide an extensive validation of write speed, random read speed, and storage size, demonstrating a flexible format that, with or without compression, is faster than all existing approaches in virtually all cases, and that with compression is comparable in size to proprietary vendor file formats. Since our approach uniquely preserves the XML encoding of the metadata, the format implicitly supports future versions of mzML and is straightforward to implement: mzMLb's design adheres to both the HDF5 and NetCDF4 standard implementations, which allows it to be easily utilized by third parties thanks to their widespread programming language support. A reference implementation within the established ProteoWizard toolkit is provided.
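
A conceptual Python/h5py sketch of the core mzMLb idea, keeping the mzML XML metadata verbatim alongside chunked, compressed numeric arrays; the dataset names and layout here are assumptions, not the published specification (the reference implementation lives in ProteoWizard).

```python
import numpy as np
import h5py

xml_metadata = b"<mzML>...full header and per-spectrum metadata...</mzML>"  # placeholder
mz = np.linspace(100.0, 2000.0, 50000)
intensity = np.random.random(50000).astype(np.float32)

with h5py.File("run_mzmlb_like.h5", "w") as f:
    # XML kept verbatim as a byte dataset, so existing mzML readers still apply
    f.create_dataset("mzML", data=np.frombuffer(xml_metadata, dtype=np.uint8))
    # Numeric arrays stored as chunked, compressed binary for fast random access
    f.create_dataset("binary_mz", data=mz, compression="gzip", chunks=True)
    f.create_dataset("binary_intensity", data=intensity, compression="gzip", chunks=True)
```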


Subject(s)
Programming Languages , Proteomics , Protein Databases , Mass Spectrometry , Metabolomics , Software
5.
Mass Spectrom Rev ; 36(5): 668-673, 2017 09.
Article in English | MEDLINE | ID: mdl-27741559

ABSTRACT

The evolution of data exchange in mass spectrometry spans decades and has ranged from human-readable text files representing individual scans or collections thereof (McDonald et al., 2004), through the official XML-based (Harold, Means, & Udemadu, 2005) data interchange standard (Deutsch, 2012), to increasingly compressed (Teleman et al., 2014) variants of this standard that sometimes require purely binary adjunct files (Römpp et al., 2011). While the desire to maintain even partial human readability is understandable, the inherent mismatch between XML's textual and irregular format and the numeric, highly regular nature of actual spectral data, along with the explosive growth in dataset scales and the resulting need for efficient (binary and indexed) access, has led to a phenomenon referred to as "technical drift" (Davis, 2013). While the drift is being continuously corrected using adjunct formats, compression schemes, and programs (Röst et al., 2015), we propose that the future of mass spectrometry exchange formats lies in continued reliance on, and development of, the PSI-MS (Mayer et al., 2014) controlled vocabulary, along with an expedited shift to an alternative, thriving, and well-supported ecosystem for scientific data exchange, storage, and access in binary form, namely that of HDF5 (Koranne, 2011). Indeed, pioneering efforts to leverage this universal, binary, and hierarchical data format have already been published (Wilhelm et al., 2012; Rübel et al., 2013), though they have under-utilized self-description, a key property shared by HDF5 and XML. We demonstrate that straightforward usage of plain ("vanilla") HDF5 yields immediate returns including, but not limited to, highly efficient data access, platform-independent data viewers, a variety of libraries (Collette, 2014) for data retrieval and manipulation in many programming languages, and remote data access through comprehensive RESTful data servers. © 2016 Wiley Periodicals, Inc. Mass Spec Rev 36:668-673, 2017.
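
A "vanilla" HDF5 sketch in Python/h5py of the kind advocated above: numeric arrays stored per scan with self-describing attributes and indexed random access; the layout is illustrative only, and PSI-MS CV annotations are omitted for brevity.

```python
import numpy as np
import h5py

with h5py.File("spectra.h5", "w") as f:
    for scan_id in range(3):
        g = f.create_group(f"scan_{scan_id:06d}")
        g.attrs["ms_level"] = 1                        # self-describing metadata
        g.attrs["retention_time_s"] = 12.5 * scan_id
        g.create_dataset("mz", data=np.sort(np.random.uniform(100, 1500, 1000)))
        g.create_dataset("intensity", data=np.random.exponential(1.0, 1000))

# Indexed, random access to a single scan without parsing the whole file
with h5py.File("spectra.h5", "r") as f:
    rt = f["scan_000002"].attrs["retention_time_s"]
    first_peaks = f["scan_000002/mz"][:10]
```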

6.
Acta Neurochir Suppl ; 126: 121-125, 2018.
Article in English | MEDLINE | ID: mdl-29492546

ABSTRACT

OBJECTIVES: Modern neuro-critical care units generate high volumes of data. These data originate from a multitude of devices, in various formats and at various levels of granularity. We present a new data format intended to store these data in an ordered and homogeneous way. MATERIAL AND METHODS: The adopted data format is based on the hierarchical model HDF5, which is capable of dealing with a mixture of small and very large datasets with equal ease. Individual data elements can be accessed and manipulated directly within a single file, and the format is extensible and versatile. RESULTS: The agreed file structure divides the patient data into four groups: 'Annotations' for clinical events and sporadic observations, 'Numerics' for all the low-frequency data, 'Waves' for all the high-frequency data, and 'Summaries' for trend data and calculated parameters. The addition of attributes to every group and dataset makes the file self-described. More than 200 files have been successfully collected and stored using this format. CONCLUSION: The new file format was implemented in ICM+ software and validated as part of a collaboration with participating centres across Europe.
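
A minimal Python/h5py sketch of the four-group layout described above; the dataset names, attributes, and sampling rate are assumptions for illustration, not the ICM+ file specification.

```python
import numpy as np
import h5py

with h5py.File("patient_001.h5", "w") as f:
    f.attrs["patient_id"] = "anonymised-001"
    annotations = f.create_group("Annotations")   # clinical events, sporadic observations
    numerics = f.create_group("Numerics")         # low-frequency trend data
    waves = f.create_group("Waves")               # high-frequency signals
    summaries = f.create_group("Summaries")       # derived/calculated parameters

    abp = waves.create_dataset("ABP", data=np.zeros(125 * 3600, dtype=np.float32),
                               compression="gzip", chunks=True)
    abp.attrs["sampling_rate_hz"] = 125.0         # attributes make the dataset self-described
    abp.attrs["unit"] = "mmHg"
```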


Subject(s)
Traumatic Brain Injuries/therapy , Health Information Management/methods , Physiologic Monitoring , Datasets as Topic , Disease Management , Europe , Humans , Reproducibility of Results , Software
7.
Sci Technol Adv Mater ; 17(1): 410-430, 2016.
Article in English | MEDLINE | ID: mdl-27877892

ABSTRACT

The properties of any material are essentially determined by its microstructure. Numerical models are increasingly the focus of modern engineering as helpful tools for tailoring and optimizing custom-designed microstructures by suitable processing and alloy design. A huge variety of software tools is available to predict various microstructural aspects for different materials. In the general frame of an integrated computational materials engineering (ICME) approach, these microstructure models provide the link between models operating at the atomistic or electronic scales and models operating on the macroscopic scale of the component and its processing. In view of improved interoperability of all these different tools, it is highly desirable to establish a standardized nomenclature and methodology for the exchange of microstructure data. The scope of this article is to provide a comprehensive system of metadata descriptors for the description of a 3D microstructure. The presented descriptors are limited to a purely geometric description of a static microstructure and have to be complemented by further descriptors, e.g. for properties, numerical representations, and kinetic data, in the future. Further attributes of each descriptor, e.g. on data origin, data uncertainty, and data validity range, are being defined in ongoing work. The proposed descriptors are intended to be independent of any specific numerical representation. The descriptors defined in this article may serve as a first basis for standardization; they will simplify the data exchange between different numerical models and promote the integration of experimental data into numerical models of microstructures. An HDF5 template data file for a simple, three-phase Al-Cu microstructure based on the defined descriptors complements this article.
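
A minimal Python/h5py sketch of how geometric descriptors might be laid out in an HDF5 template for the three-phase Al-Cu example; the descriptor names and values are assumptions for illustration, not the proposed standard nomenclature.

```python
import h5py

with h5py.File("al_cu_microstructure.h5", "w") as f:
    f.attrs["description"] = "three-phase Al-Cu microstructure (template)"
    for name, fraction in [("FCC_Al", 0.80), ("Theta_Al2Cu", 0.15), ("Liquid", 0.05)]:
        g = f.create_group(f"phases/{name}")
        g.attrs["volume_fraction"] = fraction   # geometric descriptor
        g.attrs["number_of_grains"] = 0         # to be filled by the generating model
```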

8.
Biochim Biophys Acta ; 1844(1 Pt A): 98-107, 2014 Jan.
Article in English | MEDLINE | ID: mdl-23429179

ABSTRACT

This paper focuses on the use of controlled vocabularies (CVs) and ontologies, especially in the area of proteomics, primarily in relation to the work of the Proteomics Standards Initiative (PSI). It describes the relevant proteomics standard formats and the ontologies used within them. Software and tools for working with these ontology files are also discussed. The article also examines the "mapping files" used to ensure that correct controlled vocabulary terms are placed within PSI standards and that the MIAPE (Minimum Information about a Proteomics Experiment) requirements are fulfilled. This article is part of a Special Issue entitled: Computational Proteomics in the Post-Identification Era. Guest Editors: Martin Eisenacher and Christian Stephan.


Subject(s)
Proteomics , Controlled Vocabulary , Programming Languages , Software
9.
J Synchrotron Radiat ; 21(Pt 6): 1224-30, 2014 Nov.
Article in English | MEDLINE | ID: mdl-25343788

ABSTRACT

Data Exchange is a simple data model designed to interface, or 'exchange', data among different instruments, and to enable sharing of data analysis tools. Data Exchange focuses on technique rather than instrument descriptions, and on provenance tracking of analysis steps and results. In this paper the successful application of the Data Exchange model to a variety of X-ray techniques, including tomography, fluorescence spectroscopy, fluorescence tomography and photon correlation spectroscopy, is described.
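
A small Python/h5py sketch of a Data Exchange-style file holding raw projections plus provenance for one analysis step; the group names, attributes, and example tool name are assumptions rather than the exact Data Exchange definitions.

```python
import numpy as np
import h5py

with h5py.File("tomo_scan.h5", "w") as f:
    exchange = f.create_group("exchange")
    exchange.create_dataset("data", data=np.zeros((180, 256, 256), dtype=np.uint16),
                            compression="gzip", chunks=True)   # projection images
    provenance = f.create_group("provenance")
    step = provenance.create_group("ring_removal")             # one recorded analysis step
    step.attrs["software"] = "example-reconstruction-tool"
    step.attrs["parameters"] = "level=5"
```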

10.
Article in English | MEDLINE | ID: mdl-35992769

ABSTRACT

Photon-HDF5 is an open-source and open file format for storing photon-counting data from single-molecule microscopy experiments, introduced to simplify data exchange and increase the reproducibility of data analysis. Part of the Photon-HDF5 ecosystem is phconvert, an extensible Python library for converting proprietary formats into Photon-HDF5 files. However, its use requires some proficiency with command-line instructions, the Python programming language, and the YAML markup format. This creates a significant barrier for potential users who lack that expertise but want to benefit from the advantages of releasing their files in an open format. In this work, we present a GUI that lowers this barrier, thus simplifying the use of Photon-HDF5. This tool uses the phconvert Python library to convert data files originally saved in proprietary data formats to Photon-HDF5 files, without users having to write a single line of code. Because reproducible analyses depend on essential experimental information, such as laser power or sample description, the GUI also includes (currently limited) functionality to associate valid metadata with the converted file, without having to write any YAML. Finally, the GUI includes several productivity-enhancing features, such as whole-directory batch conversion and the ability to re-run a failed batch, converting only the files that could not be converted in the previous run.

11.
MethodsX ; 8: 101456, 2021.
Article in English | MEDLINE | ID: mdl-34430337

ABSTRACT

The analysis techniques and the corresponding software suite GRITI (General Resource for Ionospheric Transient Investigations) are described. GRITI was used to develop the Dinsmore et al. [2] results, which found a novel classification of traveling ionospheric disturbances (TIDs) called semi-coherent ionospheric pulsing structures (SCIPS). The any-geographic-range (local-to-global), any-azimuth-angle keogram algorithm used to analyze SCIPS in that work is detailed. The keogram algorithm in GRITI is applied to detrended vTEC (vertical Total Electron Content) data, called delta-vTEC herein, in Dinsmore et al. [2] and the follow-on paper Dinsmore et al. [3], but is also applicable to any other two-dimensional dataset that evolves through time. GRITI's delta-vTEC processing algorithm, which is used to provide the delta-vTEC data for Dinsmore et al. [3], is also described in detail.
• We detail a keogram algorithm for analysis of delta-vTEC data in Dinsmore et al. [2] and the follow-on paper Dinsmore et al. [3].
• We detail a delta-vTEC processing algorithm that converts vTEC data to delta-vTEC through detrending and is used to provide the delta-vTEC data used in Dinsmore et al. [3].
• GRITI is an open-source Python 3 analysis codebase that encompasses the delta-vTEC processing and keogram algorithms. GRITI has additional support for other data sources and is designed for flexibility in adding new data sources and analysis methods. GRITI is available for download at: https://github.com/dinsmoro/GRITI.
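
A simplified Python/NumPy stand-in for the keogram step (not GRITI code): sample each two-dimensional delta-vTEC map along a fixed, arbitrarily oriented line of pixels and stack the samples over time.

```python
import numpy as np

def keogram(frames, row_indices, col_indices):
    """Build a keogram: for each time step, sample the 2-D field along a fixed
    line of pixels (any azimuth) and stack the samples over time.
    `frames` has shape (time, lat, lon); returns shape (time, n_points)."""
    return np.stack([frame[row_indices, col_indices] for frame in frames])

# Example: a slanted cut through a synthetic delta-vTEC map sequence
frames = np.random.normal(0.0, 0.1, size=(60, 180, 360))   # 60 epochs of a global grid
line = np.arange(100)                                       # 100-point line of pixels
keo = keogram(frames, row_indices=line, col_indices=2 * line)
print(keo.shape)   # (60, 100): time along one axis, distance along the cut on the other
```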

12.
Data Brief ; 28: 104971, 2020 Feb.
Article in English | MEDLINE | ID: mdl-31890809

ABSTRACT

Magnetic particle imaging (MPI) is a tomographic imaging technique capable of measuring the local concentration of magnetic nanoparticles, which can be used as tracers in biomedical applications. Since MPI is still at a very early stage of development, there are only a few MPI systems worldwide, primarily operated by technical research groups that develop the systems themselves. It is therefore difficult for researchers without direct access to an MPI system to obtain experimental MPI data. The purpose of the OpenMPIData initiative is to make experimental MPI data freely accessible via a web platform. Measurements are performed with multiple phantoms and different imaging sequences from 1D to 3D. The datasets are stored in the magnetic particle imaging data format (MDF), an open document standard for storing MPI data. The open data are mainly intended for mathematicians and algorithm developers working on new reconstruction algorithms. Each dataset is designed to pose a specific challenge to image reconstruction. In addition to the measurement data, computer-aided design (CAD) drawings of the phantoms are also provided so that the exact dimensions of the particle concentrations are known. Thus, the phantoms can be reproduced by other research groups using additive manufacturing, and these reproduced phantoms can be used to compare different MPI systems.
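
A hedged Python/h5py sketch for exploring one of the downloaded MDF files; since the exact MDF group layout is not reproduced here, the code first lists what the file actually contains, and the hinted measurement path is only an assumption.

```python
import h5py

def show(name, obj):
    """List every dataset with its shape and dtype, so the real MDF layout
    can be discovered before hard-coding any paths."""
    if isinstance(obj, h5py.Dataset):
        print(name, obj.shape, obj.dtype)

with h5py.File("openmpi_phantom.mdf", "r") as f:   # MDF files are plain HDF5
    f.visititems(show)
    # measured = f["measurement/data"][...]        # assumed path to the raw signal
```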

13.
Front Neuroinform ; 14: 27, 2020.
Article in English | MEDLINE | ID: mdl-33041776

ABSTRACT

The Neurodata Without Borders (NWB) format is a current technology for storing neurophysiology data along with the associated metadata. Data stored in the format are organized into separate HDF5 files, each file usually storing the data associated with a single recording session. While the NWB format provides a structured method for storing data, so far there have been no tools that enable searching a collection of NWB files in order to find data of interest for a particular purpose. We describe here three tools to enable searching NWB files. The tools have different features, making each of them most useful for a particular task. The first tool, called the NWB Query Engine, is written in Java. It allows searching the complete content of NWB files. It was designed for the first version of NWB (NWB 1) and supports most (but not all) features of the most recent version (NWB 2). For some searches, it is the fastest tool. The second tool, called "search_nwb", is written in Python and also allows searching the complete contents of NWB files. It works with both NWB 1 and NWB 2, as does the third tool. The third tool, called "nwbindexer", enables searching a collection of NWB files using a two-step process. In the first step, a utility is run which creates an SQLite database containing the metadata in a collection of NWB files. This database is then searched in the second step, using another utility. Once the index is built, this two-step process allows faster searches than the other tools, but does not support searches as complete as theirs. All three tools use a simple query language which was developed for this project. Software integrating the three tools into a web interface is provided, which enables searching NWB files by submitting a web form.
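
A toy Python sketch of the two-step index-then-query idea, using SQLite over HDF5 attributes; this is not nwbindexer's schema or its query language, just an illustration of why a prebuilt index makes repeated metadata searches fast.

```python
import sqlite3
import h5py

def build_index(h5_paths, db_path="index.db"):
    """Step 1: harvest every attribute from a collection of HDF5/NWB files
    into a single SQLite table."""
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS attrs (file TEXT, node TEXT, key TEXT, value TEXT)")
    for path in h5_paths:
        with h5py.File(path, "r") as f:
            def collect(name, obj):
                for k, v in obj.attrs.items():
                    con.execute("INSERT INTO attrs VALUES (?, ?, ?, ?)",
                                (path, name, k, str(v)))
            f.visititems(collect)
    con.commit()
    return con

# Step 2: fast metadata search without reopening every NWB file
con = build_index(["session1.nwb", "session2.nwb"])           # hypothetical files
rows = con.execute("SELECT file, node FROM attrs WHERE key = 'species' AND value = 'mouse'").fetchall()
```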

14.
IUCrJ ; 7(Pt 5): 784-792, 2020 Sep 01.
Article in English | MEDLINE | ID: mdl-32939270

ABSTRACT

Macromolecular crystallography (MX) is the dominant means of determining the three-dimensional structures of biological macromolecules. Over the last few decades, most MX data have been collected at synchrotron beamlines using a large number of different detectors produced by various manufacturers and taking advantage of various protocols and goniometries. These data came in their own formats: sometimes proprietary, sometimes open. The associated metadata rarely reached the degree of completeness required for data management according to Findability, Accessibility, Interoperability and Reusability (FAIR) principles. Efforts to reuse old data by other investigators or even by the original investigators some time later were often frustrated. In the culmination of an effort dating back more than two decades, a large portion of the research community concerned with high data-rate macromolecular crystallography (HDRMX) has now agreed to an updated specification of data and metadata for diffraction images produced at synchrotron light sources and X-ray free-electron lasers (XFELs). This 'Gold Standard' will facilitate the processing of data sets independent of the facility at which they were collected and enable data archiving according to FAIR principles, with a particular focus on interoperability and reusability. This agreed standard builds on the NeXus/HDF5 NXmx application definition and the International Union of Crystallography (IUCr) imgCIF/CBF dictionary, and it is compatible with major data-processing programs and pipelines. Just as with the IUCr CBF/imgCIF standard from which it arose and to which it is tied, the NeXus/HDF5 NXmx Gold Standard application definition is intended to be applicable to all detectors used for crystallography, and all hardware and software developers in the field are encouraged to adopt and contribute to the standard.

15.
F1000Res ; 8: 21, 2019.
Article in English | MEDLINE | ID: mdl-30828438

ABSTRACT

Bioconductor's SummarizedExperiment class unites numerical assay quantifications with sample- and experiment-level metadata. SummarizedExperiment is the standard Bioconductor class for assays that produce matrix-like data, used by over 200 packages. We describe the restfulSE package, a deployment of this data model that supports remote storage. We illustrate use of SummarizedExperiment with remote HDF5 and Google BigQuery back ends, with two applications in cancer genomics. Our intent is to allow the use of familiar and semantically meaningful programmatic idioms to query genomic data, while abstracting the remote interface from end users and developers.


Subject(s)
Genomics , Software , Genome
16.
Proc IEEE Int Conf Big Data ; 2019: 165-179, 2019 Dec.
Article in English | MEDLINE | ID: mdl-34632466

ABSTRACT

A ubiquitous problem in aggregating data across different experimental and observational data sources is a lack of software infrastructure that enables flexible and extensible standardization of data and metadata. To address this challenge, we developed HDMF, a hierarchical data modeling framework for modern science data standards. With HDMF, we separate the process of data standardization into three main components: (1) data modeling and specification, (2) data I/O and storage, and (3) data interaction and data APIs. To enable standards to support the complex requirements and varying use cases throughout the data life cycle, HDMF provides object mapping infrastructure to insulate and integrate these various components. This approach supports the flexible development of data standards and extensions, optimized storage backends, and data APIs, while allowing the other components of the data standards ecosystem to remain stable. To meet the demands of modern, large-scale science data, HDMF provides advanced data I/O functionality for iterative data write, lazy data load, and parallel I/O. It also supports optimization of data storage via support for chunking, compression, linking, and modular data storage. We demonstrate the application of HDMF in practice to design NWB 2.0 [13], a modern data standard for collaborative science across the neurophysiology community.
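
An illustration, using plain Python/h5py rather than HDMF's own API, of the storage features listed above: chunked and compressed layout, iterative (appending) writes, and lazy slicing on read.

```python
import numpy as np
import h5py

with h5py.File("acquisition.h5", "w") as f:
    d = f.create_dataset("ephys", shape=(0, 64), maxshape=(None, 64),
                         chunks=(10000, 64), compression="gzip", dtype="int16")
    for _ in range(5):                            # iterative write: append block by block
        block = np.random.randint(-1000, 1000, size=(10000, 64), dtype=np.int16)
        d.resize(d.shape[0] + block.shape[0], axis=0)
        d[-block.shape[0]:, :] = block

with h5py.File("acquisition.h5", "r") as f:
    window = f["ephys"][20000:20100, :8]          # lazy load: only this slice is read
```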

17.
Neuroinformatics ; 15(1): 87-99, 2017 01.
Article in English | MEDLINE | ID: mdl-27837401

ABSTRACT

A major challenge in experimental data analysis is the validation of analytical methods in a fully controlled scenario where the justification of the interpretation can be made directly and not just by plausibility. In some sciences, this could be a mathematical proof, yet biological systems usually do not satisfy the assumptions of mathematical theorems. One solution is to use simulations of realistic models to generate ground truth data. In neuroscience, creating such data requires plausible models of neural activity, access to high-performance computers, and the expertise and time to prepare and run the simulations and to process the output. To facilitate such validation tests of analytical methods we provide rich data sets including intracellular voltage traces, transmembrane currents, morphologies, and spike times. Moreover, these data can be used to study the effects of different tissue models on the measurement. The data were generated using the largest publicly available multicompartmental model of the thalamocortical network (Traub et al., Journal of Neurophysiology, 93(4), 2194-2232, 2005), with activity evoked by different thalamic stimuli.


Subject(s)
Cerebral Cortex/physiology , Computer Simulation , Neurological Models , Computer Neural Networks , Neurons/physiology , Thalamus/physiology , Animals , Datasets as Topic , Humans , Information Dissemination , Membrane Potentials , Neural Pathways/physiology , Software
18.
Front Neuroinform ; 10: 35, 2016.
Article in English | MEDLINE | ID: mdl-27563289

ABSTRACT

It is often useful for an imaging data format to afford rich metadata, be flexible, scale to very large file sizes, support multi-modal data, and have strong inbuilt mechanisms for data provenance. Beginning in 1992, MINC was developed as a system for flexible, self-documenting representation of neuroscientific imaging data with arbitrary orientation and dimensionality. The MINC system incorporates three broad components: a file format specification, a programming library, and a growing set of tools. In the early 2000s, the MINC developers created MINC 2.0, which added support for 64-bit file sizes, internal compression, and a number of other modern features. Because of its extensible design, it has been easy to incorporate details of provenance in the header metadata, including an explicit processing history, unique identifiers, and vendor-specific scanner settings. This makes MINC ideal for use in large-scale imaging studies and databases. It also makes it easy to adapt to new scanning sequences and modalities.

19.
Proc SPIE Int Soc Opt Eng ; 9714, 2016 Feb 13.
Article in English | MEDLINE | ID: mdl-28649160

ABSTRACT

Archival of experimental data in public databases has increasingly become a requirement of most funding agencies and journals. These data-sharing policies have the potential to maximize data reuse and to enable confirmatory as well as novel studies. However, the lack of standard data formats can severely hinder data reuse. In photon-counting-based single-molecule fluorescence experiments, data are stored in a variety of vendor-specific or even setup-specific (custom) file formats, making data interchange prohibitively laborious unless the same hardware-software combination is used. Moreover, the number of available techniques and setup configurations makes it difficult to find a common standard. To address this problem, we developed Photon-HDF5 (www.photon-hdf5.org), an open data format for timestamp-based single-molecule fluorescence experiments. Building on the solid foundation of HDF5, Photon-HDF5 provides a platform- and language-independent, easy-to-use file format that is self-describing and supports rich metadata. Photon-HDF5 supports different types of measurements by separating raw data (e.g., photon timestamps, detectors) from measurement metadata. This approach allows several measurement types and setup configurations to be represented within the same core structure and makes it possible to extend the format in a backward-compatible way. Complementing the format specifications, we provide open-source software to create and convert Photon-HDF5 files, together with code examples in multiple languages showing how to read Photon-HDF5 files. Photon-HDF5 allows sharing data in a format suitable for long-term archival, avoiding the effort of documenting custom binary formats and increasing interoperability with different analysis software. We encourage participation of the single-molecule community to extend interoperability and to help define future versions of Photon-HDF5.
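
A short Python/h5py sketch of reading the core photon data; the field names follow the Photon-HDF5 layout as described above but should be treated as assumptions and verified against the format documentation at www.photon-hdf5.org.

```python
import h5py

with h5py.File("measurement.hdf5", "r") as f:
    timestamps = f["photon_data/timestamps"][...]   # photon arrival times in clock ticks
    detectors = f["photon_data/detectors"][...]     # which detector recorded each photon
    unit = f["photon_data/timestamps_specs/timestamps_unit"][()]  # seconds per clock tick
    print("acquisition length (s):", timestamps[-1] * unit)
```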

20.
J Appl Crystallogr ; 48(Pt 1): 301-305, 2015 Feb 01.
Article in English | MEDLINE | ID: mdl-26089752

ABSTRACT

NeXus is an effort by an international group of scientists to define a common data exchange and archival format for neutron, X-ray and muon experiments. NeXus is built on top of the scientific data format HDF5 and adds domain-specific rules for organizing data within HDF5 files, in addition to a dictionary of well defined domain-specific field names. The NeXus data format has two purposes. First, it defines a format that can serve as a container for all relevant data associated with a beamline. This is a very important use case. Second, it defines standards in the form of application definitions for the exchange of data between applications. NeXus provides structures for raw experimental data as well as for processed data.
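
A minimal Python/h5py sketch of the NeXus convention of tagging plain HDF5 groups with NX_class attributes and naming a default plottable signal; it is a sketch of the convention only, not a complete or validated application definition.

```python
import numpy as np
import h5py

with h5py.File("scan.nxs", "w") as f:
    entry = f.create_group("entry")
    entry.attrs["NX_class"] = "NXentry"             # domain-specific group classification
    data = entry.create_group("data")
    data.attrs["NX_class"] = "NXdata"
    data.attrs["signal"] = "counts"                 # names the default plottable dataset
    data.create_dataset("counts", data=np.random.poisson(50, size=1000))
    data.create_dataset("two_theta", data=np.linspace(5.0, 90.0, 1000))
```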
